Monday, December 26, 2011

RSA SecurID 130 Appliance Basic Setup

I set up one of these years ago and wrote a short article describing how to authenticate against the appliance from a Cisco ASA. I set another one up recently and figured I'd cover the basic points to get it running.

Assuming your Active Directory domain is called testad.local and you've named the appliance "rsaappliance.testad.local" in DNS...

1. You'll need to do the base configuration and set up the initial license file. It's important that the following is configured correctly:



  • IP Address (you'll probably need to create a PTR record in DNS for the appliance; at the very least, you should have the IP address in DNS with a matching hostname, i.e., rsaappliance.testad.local)

  • NTP/time sources - you should use the same NTP servers that you're using for your Active Directory domain controllers. If you are running VMware ESX/ESXi and are running the domain controllers as VMs, you can use the ESX/ESXi host(s) as NTP time sources.

    2. The next steps require the operations console:

    https://rsaappliance.testad.local:7072/operations-console

     Under the admin console -> deployment configuration -> Identity sources -> add new ->

    (you'll probably be required to provide administrative credentials to get in)

    provide the:
  • Identity Source Name: descriptive name





  • Type (Microsoft Active Directory)





  • Directory URL: ldap://dc1.testad.local  (or ldaps://dc1.testad.local, if available)
  • Directory failover URL: ldap://dc2.testad.local
  • Directory user ID: you can create a user for this purpose. Assuming you created a user called "rsaauth" in the default Users container in Active Directory, you'll construct the entry like so:
       cn=rsaauth,cn=Users,dc=testad,dc=local
     
       Of course, you might have OUs set up for these sorts of things. If you had an OU in your domain called "utilityusers," the entry would be:
       cn=rsaauth,ou=utilityusers,dc=testad,dc=local
     
       (for those of you unfamiliar with LDAP, cn should be the full name of the user.)
       2b. click on the "map" tab and set your User Base DN and User Group base DN. If you're not using any OUs, you'll default to the standard cn=users,dc=testad,dc=local... otherwise, put in the appropriate OU. You can fine tune the LDAP search filters and mappings below, but all you need to get started is the User Base DN and User Group DN.
     
      By the way, if you check "Directory is an Active Directory Global Catalog," you'll likely get an error in a later step:
     
      "Cannot link the runtime identity source because no administrative identity sources reference this runtime source"
     
      The easiest way to fix this is to uncheck "Directory is an Active Directory Global Catalog" - or do additional configuration.

    3. You'll want to enable the Radius Server, if you're going to authenticate against this appliance from, say, a Cisco ASA:

     Deployment Configuration -> RADIUS -> Configure Server -> go ahead and create your RADIUS server... the defaults should be fine.


     4. You'll need to link the newly created Identity Source to a realm (newly created or the default SystemDomain realm.)

     Go to the security console:

     https://rsaappliance.testad.local:7004/console-am

     5. From there, go to Administration -> REALMS -> Manage Existing (you can create a new realm, if you have the appropriate licensing)
        select the "SystemDomain" realm (or the realm you created if you chose to create your own.)
        Under Link Identity Source, select the Active Directory identity source you created in step 2 and click the right-pointing arrow to move it to the linked field. Now, save your entry.
       
        If you get the dreaded "Cannot link the runtime identity source because no administrative identity sources reference this runtime source," this probably means you marked the Active Directory identity source in step 2 as a Global Catalog.
       
     You should be ready to add tokens. To do so under the security console:

     Authentication ->  Manage existing -> New Import SecurID Tokens Job ->

     
  • name: just a descriptive name
     
  • Security domain: SystemDomain (unless you've created your own security domain)
     
  • Import file: import the XML file from the CD RSA included with the tokens
     
  • File password: this password was likely on a scratch-off slip of paper in a separate folder

     Provided the tokens import correctly, you should be able to start assigning them to users.

    6. Finally, you'll probably need to add a Radius client if you enabled the Radius server in step 3. From the Security console ( https://rsaappliance.testad.local:7004/console-am): RADIUS -> RADIUS clients -> add new:
  • Client Name: use a descriptive name
  • IP Address: the IP address of the client
  • Make / Model: Select the appropriate model (i.e., Cisco PIX for Cisco ASA)
  • Shared Secret: choose a long password
  • Go ahead and save without RSA agent.
    Wednesday, November 30, 2011

    Disk usage not accounted for in Linux

    I had a filesystem nearly at full capacity (99%, in this case.) This server had a single root filesystem, and du -sh /* listed only about 20% of the used capacity.

    I suspected an open file handle that hadn't been released, even though the actual file had been deleted. Normally, if a process dies, it will release the handles, though a child process might still be around. Anyway, I turned to lsof and looked for deleted:

    lsof | grep deleted

    syslogd   30139      root    7w      REG              253,0 6794843241    1307979 /var/log/ldap.log (deleted)


    Syslog was the culprit. After a restart of syslogd, the handle was released and all was well.
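If restarting the daemon isn't an option, the space can also be reclaimed by truncating the deleted file through /proc. A Linux-specific sketch, using the current shell in place of syslogd:

```shell
# Reproduce the scenario: a process holds an open descriptor to a deleted
# file, then we truncate it through /proc to reclaim the blocks.
exec 9> /tmp/held.log        # open /tmp/held.log on fd 9
echo "lots of log data" >&9
rm /tmp/held.log             # deleted, but fd 9 still pins the blocks
: > "/proc/$$/fd/9"          # truncate via /proc, freeing the space
wc -c < "/proc/$$/fd/9"      # now reports 0 bytes
exec 9>&-                    # close the descriptor
```

For a real daemon, you'd use its PID and the FD column from the lsof output (e.g., /proc/30139/fd/7 for the entry above) instead of $$ and 9.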

    Tuesday, October 11, 2011

    Left Join Example - SQL Query

    Today, I had to write a MySQL query that found entries in one table that had no corresponding entries in a second table. The schema was written in such a way that table A's id column corresponded to an id column in table B (call the column tableA_id.) It's possible that some entries in table A might not have corresponding entries in table B (and vice versa, but I didn't actually care about the unique table B entries.) Table A was constructed with an "id" column and a "name" column. Table B was created with "id", "attribute", and "tableA_id" columns.

    The query (a rather simple one, at that...)

    select A.id, A.name from A left join B  on A.id = B.tableA_id where B.tableA_id is NULL
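The pattern is easy to reproduce; here's the same anti-join sketched with sqlite3 and hypothetical data (any SQL engine behaves the same way):

```shell
# Build two tiny tables and find rows in A with no match in B
# (the LEFT JOIN ... IS NULL anti-join from above)
rm -f /tmp/join_demo.db
sqlite3 /tmp/join_demo.db <<'SQL'
CREATE TABLE A (id INTEGER, name TEXT);
CREATE TABLE B (id INTEGER, attribute TEXT, tableA_id INTEGER);
INSERT INTO A VALUES (1, 'has-match'), (2, 'orphan');
INSERT INTO B VALUES (10, 'attr', 1);
SELECT A.id, A.name FROM A LEFT JOIN B ON A.id = B.tableA_id
WHERE B.tableA_id IS NULL;
SQL
# prints: 2|orphan
```

Row 2 comes back because the LEFT JOIN fills B's columns with NULLs wherever there's no match, and the WHERE clause keeps only those rows.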

    Monday, September 26, 2011

    Oneliner for renaming files

    Say you have a series of files you want to rename like so:

     a37.zip to a37-archive.zip
    b38.zip to b38-archive.zip
    c39.zip to c39-archive.zip

    and so on...

    Here's a useful oneliner for bash:

    for x in *.zip; do echo "$x" | sed -e 's/\(.*\)\.zip/\1-archive.zip/' | xargs -t mv "$x" ; done

    That should do it. Of course, there are other ways of doing it, but this way is fairly simple.
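For the record, the same rename can be done with bash parameter expansion alone, which avoids the sed/xargs pipeline entirely and handles names with spaces (the scratch directory below is just for the demo):

```shell
# Demo in a scratch directory: rename *.zip to *-archive.zip using
# bash parameter expansion instead of sed/xargs
rm -rf /tmp/rename_demo && mkdir -p /tmp/rename_demo && cd /tmp/rename_demo
touch a37.zip b38.zip c39.zip
for f in *.zip; do
  mv -- "$f" "${f%.zip}-archive.zip"   # strip .zip suffix, append -archive.zip
done
ls   # a37-archive.zip b38-archive.zip c39-archive.zip
```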

    Sunday, August 28, 2011

    Nagios Group Service Checks and Exclusions

    Using hostgroup_name in service checks is very, very useful. However, I find that I often want to include the Nagios server in one of the hostgroups, but don't necessarily want to configure nrpe for the nagios host. An easy way around this is to use Nagios's exclude feature. You can exclude a group (or host) by prefacing it with "!"

    For instance:

    1. create a group that contains only your nagios host

    define hostgroup {
            hostgroup_name localhost-monitor-hosts
            members         mynagiosserver
    }


    2. create the service:

    define service {
            hostgroup_name          unix-hosts,!localhost-monitor-hosts
            use                             critical-service
            service_description             check_disk
            check_command                   check_nrpe!check_disk!80!85
            check_freshness                 1
            register                        1
            }

    This will check all hosts in the unix-hosts group, excluding any in localhost-monitor-hosts. You can obviously create multiple groups, each with a different purpose.

    Note, you can also define a service with more than one host, and exclude hosts that way:

    define service {
             host_name www1, www2, !www3
             use generic-service
             service_description check_ping
             check_command check_ping!100.0,20%!500.0,60%
    }

    but I find that more taxing, as you have to maintain the host_name list... which is kind of ignoring the whole point of groups.




    Wednesday, July 20, 2011

    Dell OMSA on Linux: disable SNMP

    In spite of the fact that the Dell OMSA tools on Linux have a config file called omreg.cfg that supposedly allows SNMP to be shut off, it doesn't seem to work.

    The proper way to disable snmp for the Dell tools is:


    cd /opt/dell/srvadmin/var/lib/srvadmin-deng
    touch dcfwsnmp.off  dcsnmp.off

    You can restart the Dell services again:

    /opt/dell/srvadmin/sbin/srvadmin-services.sh restart

    Wednesday, June 15, 2011

    Replacing Self Signed Remote Desktop Services Certificate on Windows 2008R2

    I recently had an issue where users were no longer able to connect to a remote desktop services host because the certificate had expired. The error was:


    “Remote Desktop Disconnected: Remote Desktop cannot connect to the remote computer because the authentication certificate received from the remote computer is expired or invalid.  In some cases, this might also be caused by a large time discrepancy between the client and the server computers.”

    I knew that the times were correct, and after looking at the certificate, I realized it had expired.

    I didn't see the need to buy a proper CA signed certificate for a server that was only accessible internally, so I decided to get rid of the old certificate and make the host create a new, self-signed certificate.

    To do this:

     1. open mmc.exe (Microsoft Management Console)
     2. add the add-in - certificates (for the computer account) (and select local computer)
     3. navigate to the remote desktop folder -> certificates
     4. delete the certificate for the name of the server and close the mmc instance
     5. Go to: administrative tools -> remote desktop services -> remote desktop session host configuration
     6. Select the instance in the main window - RDP-Tcp -> right click and select properties
     7. on the window that pops up, click Default under the certificate selection (this makes the host generate a new self-signed certificate)


    Thursday, May 26, 2011

    CentOS and Active Directory Users

    You can normally do this through the gui tool, but I figured it's still fairly easy to do it with the command line tool. In this case, I want to join my machine to the domain and use Active Directory for authentication and user information. You'll likely want to back up your authentication config files (just backup /etc to be safe, unless you know exactly which files you need) before you start. Do this at your own risk.

    system-config-authentication  --smbrealm=my_ads_fqdn \
     --winbindjoin=domain_admin_username --smbsecurity=ads \
     --enablewinbindusedefaultdomain --enablekrb5 --enablewinbind \
     --update

    If you want the users to have a default shell, run system-config-authentication again:

    system-config-authentication --winbindtemplateshell=/bin/bash --update

    Conversely, you could have added the "--winbindtemplateshell=/bin/bash" to the first command above.

    Friday, May 20, 2011

    Qakbot Outbreak and Removal

    I recently had to deal with a large Qakbot worm outbreak. Qakbot's an information-gathering nuisance. It appeared that there were three different variants of the worm. The problem was that it spread through the desktops and the servers via domain admin accounts. This is likely because of domain admins logging into infected workstations.
      Unfortunately, the centralized AV was not fully deployed - and was sometimes out of date. The problem was magnified by the fact that the company is multi-site and fairly wide open, subnet wise.
      Apparently, Qakbot spreads through CIFS shares. It likes to push itself through the administrative shares and place executables to be run into the All Users directory. It then creates registry entries to start itself upon login. It does annoying things like changing permissions on the AV program directories such that updates won't work. It then uses the credentials of the user to attack the administrative shares and (likely) remote registry service of other machines. This can be bad if you happen to be a domain admin. Anyway, the virus doesn't seem to work correctly on Windows 7 or 2008 for some reason.

      The biggest problem removing it was that we were not able to isolate segments of the network, and by logging on, you'd simply activate it, if the executables had already been placed.
      I figured out a simple way to clean up a machine remotely (do this at your own risk, of course. You may damage your system if you are not careful.):

  • 1. First, check out the administrative share of the machine:
        \\servername\c$\documents and settings\all users\application data\microsoft

        The Qakbot worm creates a randomly named directory. Let's pretend our folder name was "gvcajscwxoq." I can't tell you exactly what it'll be called, but the name will be rather random and should have only one exe, two dlls, and maybe some .tmp or .dmp files. So, we'd have a folder called:
      \\servername\c$\documents and settings\all users\application data\microsoft\gvcajscwxoq
    There will be an executable and two DLLs with the same name as the folder "gvcajscwxoq." There may be a few other tmp files. What you need to do is delete or rename the files to a different extension - like .vir.


  • 2. rename gvcajscwxoq.exe to gvcajscwxoq.vir.

  • 3. rename all the DLLs in that folder called "gvcajscwxoq" to another name.


    That works well enough if the virus isn't active on the machine. If it is, you'll need to kill the process remotely before logging in. I found the free edition of a commercial tool called Desktop Central useful:

    Desktop Central

    (note, I'm in no way associated with that company. I just found the tool to be useful.)


  • 4. Anyway, after installing the free edition on your local workstation, launch the tool and open "remote task manager"

    Provided it can connect to the infected machine, you should find the rogue process running as the same random  name of the folder you attempted to rename above. End that process (make sure it's the same random name!) Wait about 20-30 seconds and then try to rename the remaining files in the application data\microsoft folder. You should be able to. You probably can't rename the bad folder, yet.


  • 5. Now, you can log into the box. Run msconfig and look under the startup tab very carefully. The worm likes to append itself to a legitimate application's startup. For instance, if you had a network management suite gui that started up, like the Broadcom control suite, it might modify that entry to launch itself and then the Broadcom suite. You should find one entry with the gvcajscwxoq.exe file listed. I'd recommend removing that portion or just disabling the service. Be careful, you'll be disabling a program that normally starts for all users.



  • 6. Now that you've removed the threat, install/update your anti-virus. You may need to modify permissions for your AV software as Qakbot may have changed things. Check out your anti-virus vendor's help section; they may have a tool to change the permissions.
     

     I'd also recommend that all domain admins check out the administrative shares of boxes before logging into them. Remember from above:

    For Windows 2000-2003, XP, look in
    \\servername\c$\documents and settings\all users\application data\microsoft

    For Windows 2008, Vista, and Windows 7, try looking in c:\programdata\microsoft

    If you see a randomly named folder with an exe and two dlls in it, you'll need to clean up the server before logging in.
    More on Racktables...

    I figured that I should say some more about Racktables. It's very useful. You can use it to manage:

  • IPv4 and IPv6 inventory - ditch that spreadsheet!

  • Server network interface IP configuration documentation, including what port maps to what switch port


  • Virtual host to hypervisor relationships

  • VLAN inventory - again, ditch that spreadsheet.

  • Server maintenance contracts - store a PDF copy as part of a server object

  • And quite a bit more.


    In just a few weeks, I've found it very useful. It's definitely worth the minimal set up time.


    Wednesday, May 11, 2011

    Racktables on CentOS 5.6

    I find Racktables to be a very useful tool. Because it requires newer versions of PHP, it's a bit of a hassle to get it running on CentOS 5.x. Here are the steps:


    1. Get the CentOS testing repo (for new versions of PHP.)

     cd /etc/yum.repos.d
     # now we enable the new repo - at least for long enough to satisfy the dependencies
     sed -i CentOS-Testing.repo -e 's/enabled=0/enabled=1/g'

    2. update PHP to 5.3.x
     yum -y install php53
     (you may need to remove php-common, if it's installed)

    2b. optional: lock down MySQL:

     sudo /usr/bin/mysql_secure_installation

    You'll want to assign a root password as well as delete anonymous users and disallow remote root logins.

    3. Create the database and database user:
     mysql -u root -p

      create database racktables;
    grant all on racktables.* to racktablesuser;
    grant all on racktables.* to racktablesuser@localhost;
    set password for racktablesuser@localhost=password('your_super_password');
    flush privileges;

    4. untar the racktables source and do a make && make install.

    5. create entries for this host in DNS - A records and/or CNAMEs. For example, create an A record called racktables.yourdomain.local. 

    5b. Create a configuration entry in apache. Edit /etc/httpd/conf.d/racktables.conf:


     ServerName racktables.your_domain.local
     DocumentRoot /usr/local/share/RackTables/wwwroot/


    Obviously, this is not running over SSL. It's not a bad idea to run Racktables over HTTPS.
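If you do go the HTTPS route, a minimal SSL virtual host might look like this sketch (it assumes mod_ssl is installed; the certificate paths are hypothetical):

```
<VirtualHost *:443>
    ServerName racktables.your_domain.local
    DocumentRoot /usr/local/share/RackTables/wwwroot/
    SSLEngine on
    # hypothetical certificate/key locations - adjust to your setup
    SSLCertificateFile /etc/pki/tls/certs/racktables.crt
    SSLCertificateKeyFile /etc/pki/tls/private/racktables.key
</VirtualHost>
```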

    5c. Restart apache (sudo /sbin/service httpd restart)

    6. Connect to your host. You'll probably get this error message:

     Database connection parameters are read from /usr/local/share/RackTables/wwwroot/inc/secret.php file,  which cannot be found. You probably need to complete the installation procedure by following this link.

    7. Step through the links till you get to the dependency page (step 2 of 6.)

    8. You'll see that you are missing numerous dependencies. Go ahead and fix the dependencies:

    sudo yum -y install php-mbstring
    sudo yum -y install php-gd
    sudo yum -y install php-snmp
    sudo yum -y install php-pdo
    sudo yum -y install php-mysql

    9. In order to get Unicode support in PCRE for PHP:

    sudo yum update pcre

    10. We'll leave pcntl as is for now. In the next step, you'll see a warning about the secret.php file. To fix it:

    sudo touch '/usr/local/share/RackTables/wwwroot/inc/secret.php'; sudo chmod 666 '/usr/local/share/RackTables/wwwroot/inc/secret.php'
    (you'll need to either configure SELinux to allow access to that file or disable SELinux altogether.)
    11. Enter the db name and db username and password from earlier. 
    12. Set an administrator password in the next step.
    You should be all set.



    Saturday, May 7, 2011

    iSCSI Targets on Solaris 10 U9

    Assuming we have a pool named pool1 and we wish to create a target for a client host named server1. Our ZFS target server has two interfaces; the interface we want to use for iSCSI has  the IP address 192.168.78.20.

    First, create a new volume for the iSCSI target:

     zfs create -V 100g pool1/vols/iscsi1

    We can share it out (this could have been done during the creation phase, of course)

     zfs set shareiscsi=on pool1/vols/iscsi1

    Let's bind this volume so that it's only available on the iSCSI interface:

     iscsitadm create tpgt 1
     iscsitadm modify tpgt -i 192.168.78.20 1
     iscsitadm modify target -p 1 pool1/vols/iscsi1

    We should set up some sort of authentication. At a minimum, a target CHAP entry.

    Go ahead and create the client/initiator entry:
     iscsitadm create initiator --iqn iqn.1991-05.com.microsoft:server1.mydomain.com server1

    assign the CHAP username for server1 (we're calling it server1user):
     iscsitadm modify initiator --chap-name server1user server1

    set a CHAP password for that client:
     iscsitadm modify initiator --chap-secret server1
     (it must be at least 12 characters long and no more than 16 characters long.)

    Bind the ACL to the target:
      iscsitadm modify target --acl server1 pool1/vols/iscsi1

    Now all you need to do is to connect to it from the iSCSI initiator on server1. Set the server interface IP as the portal (i.e., 192.168.78.20), look at the targets, and authenticate via CHAP using server1user as the username and whatever password you specified above as the password.

    Tuesday, April 5, 2011

    Printers and DHCP

    Most administrators handle printers in very different ways. My personal favorite (works with MS DHCP and ISC DHCP) is to:

    1. Assign a static DHCP based IP address for the printer in DHCP, and call it lpt-name_of_printer. (i.e., lpt-marketingcolor01 is the reservation 00:11:22:33:44:55 has for 192.168.78.51)

    2. Set up a DNS entry for it (lpt-marketingcolor01.mydomain.local  as an A record for 192.168.78.51)

    3. Create the printer on the print server, using the fully qualified hostname as the address (or in MS parlance: create a new port -> standard TCP/IP -> use the fully qualified hostname).

    The advantage of this is that if you need to renumber printers, you simply need to modify the reservation and the DNS entry, not the actual printer. Some printers do support supplying a DHCP hostname, but with this method, you have more control.
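For ISC DHCP, the reservation from step 1 would be a dhcpd.conf fragment along these lines (a sketch, using the example MAC and address from above):

```
host lpt-marketingcolor01 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.78.51;
}
```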

    Sunday, March 27, 2011

    Bricked Linksys WRT54GS v1.1

    I recently managed to pseudo-brick an old Linksys WRT54GS (the specifications of the various WRT54 series wireless routers are here: Wikipedia article).

    Anyway, I accidentally broke it with an OpenWRT update that went badly. I did not want to have to create a Jtag for it, if I could avoid it. I tried a  30/30/30 reset. That failed. The router would power on and the DMZ light would turn on for five seconds and then turn off. This continued indefinitely.

    After looking it up, it appeared that the boot loader was working, but that the firmware image was missing or corrupt. I followed the advice I found in several locations:

    Attempt to push the stock Linksys firmware to the device using TFTP. It looked like the simplest way to do that would be to use the DrayTek Vigor router tools. Specifically, I used their "firmware upgrade utility." It worked on the first try. It's a simple process:


  • I set the ethernet interface on my laptop to be on the stock 192.168.1.0/24 network. I could ping the host with no problems.

  • I pointed the firmware upgrade utility to the latest official Linksys/Cisco firmware for the WRT54GS hardware version 1.1 router.

  • After the router rebooted, I had the stock firmware running on the default address (192.168.1.1) with the default password (no username, admin as the password.)

    I later installed DD-WRT using the built-in Linksys firmware update page. I could have used Tomato or OpenWRT again, but I wanted to try out DD-WRT. I'll likely write a post or two about my experience with it.

    Thursday, February 10, 2011

    Upgrading Cisco Catalyst Switch Firmware Using Archive Command

    I had to upgrade some old Cisco Catalyst switches to the latest firmware. The problem was that the flash contained the old firmware... and the delete command did not seem to honor the /r/f flags (at least in this old version from the year 2000.)

    To clear out the html dir on the Cisco switch:

    delete /force /recursive flash:html/*


    Assuming my tftp server's IP is 172.18.27.40 and I have a cisco firmware tar file called "c35xx-new-version.tar" in the root of the tftp server, I upgrade like so:

     archive tar /extract tftp://172.18.27.40/c35xx-new-version.tar flash:
     (later versions of IOS use the flag /xtract instead of /extract.)

    As always, try this at your own risk.

    Thursday, January 27, 2011

    Checkpoint UTM Firewall Clusters Part 4 - NoNAT Rules

       Cisco ASA administrators will be well familiar with noNAT rules... those NAT ACLs listed under NAT 0. It's a similar configuration for the Checkpoint. Using the network groups I created in part 2 of this series,
    Checkpoint UTM Firewall Clusters Part 2: Anti-Spoofing

    One can create individual NoNAT rules like so:

    To prevent NATing between the corp_net (192.168.6.0/23) and the DMZ, you can create a pair of rules (make sure they are above your implied rules!):








    Of course, you might want to avoid any NATing between internal VLANs/subnets. Using our previously created simple group, inside_networks (it contains corpnet, eng_net, qa_net, and router net):






    That should do it.

    Checkpoint UTM Firewall Clusters Part 3 - Overloading NAT and PAT, Proxy Arp

       In this instance, we're going to cover a 1 to 1 NAT (a bi-NAT) and an overload of a single port for the same address. Refer to the first part in this series to get a better idea of the topology:
    Checkpoint UTM Firewall Clusters Part 1


    In this case, we have a web host (172.31.22.80) and an SSH server (172.31.22.22) in the DMZ. We want to create a 1 to 1 NAT (outside address 10.10.10.80) for the web host, but we also want port 3322 on the outside address to NAT to port 22 on the SSH server. Here's a diagram:



    You will note that I left out some of the infrastructure in this drawing - simply for clarity.
        Anyway, we should create a host node for the web server, set up the NAT, and then create the NAT rule to override port 3322 on the same external address.


    1. Create the node:



    2. Now, set the NAT on external address 10.10.10.80:


    3. Now, create an override rule for the SSH server (we just created a node for the external address, the internal ssh address, as well as a new TCP object - port 3322):
     Here's the override:



    4. We'll follow up by adding a rule to allow traffic in on the firewall. This requires 1 rule:






    That's basically it. If you do not have a static entry, but have a bunch of PATs, you'll notice that the firewall will not automatically proxy arp for the external address. This can be fixed by using the method above for a single 1 to 1  NAT or by editing local.arp on each firewall. This file is in $FWDIR.
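For reference, local.arp entries are one per line: the external IP to proxy-ARP for, followed by the MAC address of the firewall's external interface. A hypothetical entry for our 1 to 1 NAT address (the MAC below is made up):

```
10.10.10.80 00:11:22:aa:bb:cc
```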

    Checkpoint UTM Firewall Clusters Part 2 - Anti-Spoofing

    The first problem I ran into with the Checkpoints is the built in anti-spoofing technology. Refer to my last post to get a sense of the topology: Checkpoint UTM Firewall Clusters Part 1

    Here's the diagram again:


    Anyway, the problem is internal routes. In my example, I have a layer 3 switch handling internal routing. The steps are:

    1. Log into each Checkpoint cluster member and add static routes. You can use either ssh with the sysconfig utility, or use a web browser and go to each firewall (typically port 4434.) In this example case, you'll add:

    subnet          netmask          gateway
    172.17.16.0     255.255.252.0    192.168.5.200
    192.168.6.0     255.255.254.0    192.168.5.254
    192.168.8.0     255.255.254.0    192.168.5.254
    192.168.10.0    255.255.254.0    192.168.5.254

    Note that 192.168.5.254 is the layer 3 switch.




    2. Create subnet objects for each of the internal networks/VLANs.

    Ignore CP_default_Office, it's part of the demo network config.





    3. If you look at the cluster interface topology, you'll see:



    And if we drill down further:




    And further into the internal interface (where our corp, eng, QA, and Co-Lo networks reside):


    And now to the "Topology tab"

    Topology anti-spoofing config


    This configuration will block the eng, qa, and corp subnets. Depending on the configuration, the Co-Lo net may never need to talk to anything that the firewall manages (DMZ1, etc.) But, better safe than sorry.
    4. Create a simple group and include all four subnets:


    5. Now, go back to the topology anti-spoofing config in step 3 and modify it to use the group you created.


    There, anti-spoofing should work correctly. Make sure NAT is configured properly!

    Checkpoint UTM Firewall Clusters Part 1

        I recently spent some time setting up a Checkpoint Firewall cluster using UTM firewall appliances. I'm going to post several configuration tips I learned the hard way. I did not find the documentation to be all that useful, though I was in a bit of a rush, so I might have missed something.
       Anyway, I'm laying out the topology in this post. Here are our nets:


    Interface Name   Subnet            Comments
    ext              10.10.10.0/24     external network
    int              192.168.5.0/24    router net
    LAN1             172.31.24.0/28    sync network
    LAN2             172.31.23.0/24    network management subnet
    LAN3             172.31.22.0/24    DMZ1
    N/A              192.168.6.0/23    Corporate LAN (behind L3 switch)
    N/A              192.168.8.0/23    Engineering LAN (behind L3 switch)
    N/A              192.168.10.0/23   QA LAN (behind L3 switch)
    N/A              172.17.16.0/22    subnet from CO-LO - from VPN tunnel




    Note that there is a layer 3 switch behind the inside interface on the Checkpoint cluster, and that at least three VLAN/subnets are behind that switch. Note that there is an IPSec tunnel to the co-lo facility, and that tunnel terminates on the L3 switch in the router network (the endpoint is 192.168.5.200.)
    Here is a simple diagram of the configuration: