It's Not Paranoia, They Are Out To Get You!
A Firewall "Why-To"
Shannon Dealy, DeaTech Research Inc.
firstname.lastname@example.org -- www.deatech.com
Just what we need, another firewall tutorial
Browsing the net looking for information on securing your Linux system or network, it seems like there are more than enough choices without adding one more. Unfortunately, I have found that most of them will tell you what to do and how to do it without telling you why, so rather than a "how-to", this is more of a "why-to". The problem with the "how-to" approach is that if you don't understand what you are doing and why, you are more likely to make a mistake that could leave your systems vulnerable, and you will find it more difficult to adapt a particular firewall scheme to your own specialized needs. Many firewall tutorials also emphasize just one aspect of protecting your system and leave out any information on what the real or potential deficiencies of their approach are. Over time, as this document evolves, hopefully it will become a useful source for this missing part of the firewall puzzle, and a pointer to the relevant "how-to" information.
Total Firewall Security
Installing a firewall computer does not make your network secure, it just makes it less insecure. This may seem like a silly distinction, but it is important to recognize that the only 100% foolproof way to protect your data from theft is to completely destroy all copies of it before it is stolen! All other approaches are compromises in which we balance the value of being able to access the data against the probability and consequences of it being stolen (you did want to access it, didn't you?). Even if you eliminate all outside connections to your computer, someone may break in and steal the computer itself.
So what makes me an expert?
I'm not, and don't claim to be. Unfortunately, neither are many, if not most, of those who do make this claim. I do, however, have a great deal of relevant knowledge and experience that gives me a good understanding of the mechanisms behind security breaches, and I have been managing the security of a full-time internet connection for about seven years, during which time I have seen pretty much every type of attack described in the firewalling literature. My specialty (if it could truly be said that I have one anymore) is actually real-time embedded systems and device drivers, though I have worked on virtually every aspect of computers from hardware design to artificial intelligence.
Frankly, I would love to be able to just buy a firewall and forget it, but so far every one I've looked at either doesn't have the flexibility I need for configuring my internet connections, or has been breached in the six months prior to my review, and usually both. I don't bother to look at anything that hasn't been on the market for at least nine months, since it takes time for the hackers and crackers of the world to look these things over and demonstrate how insecure they really are. It could be argued that since pretty much any firewall system is vulnerable, it would be better to leave the job in the hands of the experts rather than doing it yourself, but how do you determine which firewall vendor is truly "expert"? Many breaches of commercial firewall products that I've looked over were not due to some new form of attack and did not use any novel holes in the system; the vendor simply forgot (or worse, wasn't aware of) a well-documented existing security hole. Some of the other cases were downright stupid, in that the vendor's wonderful point-and-click user interface or some other piece of their non-security-related custom software was the source of the security hole. Other breaches, while not technically the vendor's fault, occurred because the user was allowed to configure the product in a manner which was inherently insecure, which means that anyone using the product must completely understand firewalls in order to use it properly -- in which case they might as well roll their own!
Until very recently, if you just used an occasional dial-up connection to the internet, it was not unreasonable to take a few minor precautions and not worry about security, though of course for those of us with full-time connections, at least a primitive firewall was generally considered a minimum requirement. Most people could get away with this because there were very few people on the net (relatively speaking) who had both the knowledge and the inclination to break into other people's computers, and when they did go after someone, it was usually a large and/or high-profile company; the rest of us were safe by virtue of being small and not worth their time. Today this has all changed. The few who have the skills now put their knowledge into automated programs and shell scripts which can scan for security holes and attack many computers simultaneously. What's worse, once a system is breached, they can put that computer to work scanning for openings and attacking more computers. The situation was bad enough at this point, but they went one step further and started using the internet to distribute the attack programs they had created, so that anyone who wanted to could attack other computers. This created the "script kiddie" phenomenon: thousands of bored children (or childish adults) who lack the expertise to be any kind of threat on their own, using software downloaded from the internet to implement broad-scale attacks against anyone and everyone on the net. Because of this new approach, no one connected to the net is safe at any time, since the entire internet is continuously being scanned for insecure systems from many different sources. It is probably a rare IP address that does not get probed for security holes at least once a week; I usually see probes one or more times each day if I bother to check the log files.
Internet Communication Protocols
There are many communication protocols used for passing information over the internet. Some are for local communication, such as ethernet and the various modem protocols used to relay information between directly connected computers. Layered on top of these low-level communication protocols are higher-level ones which relay information across the many different interconnected computers and devices that make up the internet. The most important protocols are:
- IP - Internet Protocol
- This is the core protocol used for transporting virtually all information across the internet, most other protocols (including the ones which follow) use this as their underlying communication layer.
- ICMP - Internet Control Message Protocol
- This protocol is used for passing connection and control information across the net. It's what is used when you ping another computer or use traceroute to see where problems are occurring on the net.
- UDP - User Datagram Protocol
- Provides unreliable, connectionless packet transmission. Basically, a data packet is sent, but whether or not it is received is not reported, and no retransmission is attempted by this protocol. This may seem silly, but on a reliable network where the overwhelming majority of packets get through, it can greatly increase throughput because of the lower overhead involved in sending a packet this way. Even though UDP is not as popular as TCP (see below), common services on many computers are available using either TCP or UDP.
- TCP - Transmission Control Protocol
- Provides a reliable, bidirectional connection between two computers: it will either deliver the data or let you know that delivery failed. This is the primary workhorse of high-level internet information transfer, though it is much less efficient than UDP.
While these higher-level protocols are the primary means by which useful information (such as email and web pages) is passed across the internet, there are many other protocols available as well. Since they are usually rather specialized and rarely used (relatively speaking), there is little point in dealing with them further here.
A Few Useful Terms
- Packet Filtering
A technique where network data packets are either stopped at or allowed to pass through a network connection based on the contents of the packet. Typical criteria used for filtering include:
- Protocol Type -- ICMP, TCP, UDP, etc.
- Source Address -- Which computer supposedly originated the packet (could be falsified).
- Destination Address -- Which computer the packet is addressed to.
- Protocol sub-type -- Some protocol specific sub-type such as TCP's "SYN" packet used to initiate a connection.
Filtering techniques could also include analysis of the rest of the packet's contents, multiple related packets, or any combination of the above.
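Under Linux, these criteria map directly onto iptables match options. A minimal sketch (the address and the interface name eth0 are placeholders, not recommendations):

```shell
# drop anything claiming to come from one specific (hypothetical) bad address
iptables -A INPUT -s 203.0.113.45 -j DROP
# drop all ICMP arriving on the external interface (assumed here to be eth0)
iptables -A INPUT -i eth0 -p icmp -j DROP
# drop TCP connection attempts (SYN packets) aimed at the telnet port
iptables -A INPUT -p tcp --dport 23 --syn -j DROP
```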
- NAT (Network Address Translation)
A technique where a network connection to a port is re-routed to another network port, possibly on a different computer system. This effectively hides the actual destination from the connecting computer. The technique can be used to allow several computers to share a single IP address on the internet, so long as they do not require the use of the same port on that IP address.
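With iptables, this kind of translation is done in the "nat" table. A sketch, assuming an internet-facing interface eth0 and a hypothetical internal web server at 192.168.1.10:

```shell
# let internal machines share the external address (IP masquerading)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# re-route incoming web connections to a machine the outside never sees
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.10:80
```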
- Proxy Server
A program which accepts connections from clients, makes the actual requests on their behalf, and relays the results back, so that the client and the real server never communicate directly. Proxies are frequently used for web access, both to cache pages and to keep internal machines from connecting straight to the internet.
Types of attack
Before deciding what to do, it is important to understand what types of attacks may occur and how they will affect you and your computer system. The following list (which is by no means complete) gives the general classes of attack along with some common or well-known examples and specific solutions to these problems:
- External Attacks
- Attacks originating from outside your home or office computer/network.
- Denial of Service (DOS)
- the purpose of this type of attack is not to gain control over your computer, but rather to prevent anyone from making use of one or more of the services that the attacked computer provides. Some examples include:
SYN Attack -- A "SYN" packet is used to initiate a connection between computers using the TCP protocol; it is part of a three-way handshake used by TCP to set up a connection. In this attack, repeated "SYN" packets are sent to the computer under attack. The attacked computer sends its response handshake packet and waits for the final handshake packet from the attacking computer (which never sends it). Each of these incomplete connection attempts ties up one network port on the computer until it times out; if enough are sent before the timeout occurs, the system runs out of ports and/or other resources, at which point no one else can connect.
Linux Solution: there is a kernel feature called "SYN cookies" which, while it won't stop the attack, will quite effectively prevent it from causing any real problems. This feature must be configured into your Linux kernel, and (for some strange reason) it must be specifically enabled each time the system is rebooted (it is off by default). This is a no-brainer: if your computer is connected to the internet, you want this feature. I did initially have a problem with SYN cookies being ineffective until I increased the number of local network ports available to the system; the default is rather small (only about four thousand ports), and apparently didn't allow enough margin for timeouts. I now run with it configured for about 31000 ports.
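Enabling this at boot is a couple of lines in a startup script; on 2.2/2.4-era and later kernels the relevant files live under /proc/sys/net/ipv4 (the port range shown is an example, not a recommendation):

```shell
# turn on SYN cookies (off by default; must be re-enabled after every reboot)
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
# widen the pool of local ports so half-open connections don't exhaust it
echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range
```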
Process Table Overflow -- Most computers have some kind of limit on the total number of processes that can be active at one time, and in many cases, if this limit is ever reached, the system will crash or at least become virtually unusable. One way to exploit this is to simply establish as many connections as possible to as many different system services as possible. Many standard services will create a new process for each connection, quickly using up all space in the process table.
Linux Solution: the best solution to this problem is to configure each service on your computer to limit the number of processes it can spawn, so that the maximum total number of processes used by these services is less than the size of the process table. As a supplemental form of protection, it is a good idea to install a program called watchdog, which monitors your system for a variety of problems that could stop it from responding -- including a full process table, or running out of memory. Whenever a condition occurs which could or does lock up your computer, watchdog forces a reboot of the system. It is highly recommended for any computer that needs to be up 24x7, and not a bad idea for other systems as well. It does require that your kernel be built to support the /dev/watchdog device.
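The exact mechanism depends on the service, but two common places to set such limits are PAM's limits file and, for services started on demand, the inetd replacement xinetd. A sketch; the numbers are placeholders, not recommendations:

```
# /etc/security/limits.conf -- cap the processes any one account may spawn
*        hard    nproc    200

# /etc/xinetd.conf -- cap simultaneous instances of each managed service
defaults
{
        instances = 30
}
```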
Network/Server Overload -- No matter how fast your connection to the internet is, someone else has a faster one, and if they make requests faster than your server or internet connection can handle them, your site will become virtually unusable to everyone else. Even if the person attacking you doesn't have a faster link, they can use other computers that they have compromised to launch multiple attacks which, when combined, exceed the capabilities of your server.
Linux Solution: there is no complete solution for this problem, Linux or otherwise. You can use iptables or ipchains to block all connections from computers involved in the attack, but for a truly widespread attack launched from hundreds or thousands of systems this may not be practical, since it is necessary to determine which are legitimate connection attempts and which are part of the attack. In any case, this requires active monitoring of the system and manual intervention, which is usually not practical except for large server systems and e-commerce sites. Ultimately, the only long-term solution is to analyze your log files to determine the source of the attack(s), use the "whois" command to find contact information for the ISP that manages the IP addresses the attacks are coming from, and report the problem to them.
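Picking the busiest source addresses out of a log file is easy to automate with a standard pipeline. A sketch, assuming a log whose first field is the client address (adjust the awk field number to match your actual log format; the addresses are documentation examples):

```shell
# build a tiny sample log: two probes from one address, one from another
printf '203.0.113.9 SYN\n203.0.113.9 SYN\n198.51.100.7 SYN\n' > sample.log
# count requests per source address, busiest sources first
awk '{print $1}' sample.log | sort | uniq -c | sort -rn
```

The addresses at the top of the list are the ones worth feeding to "whois".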
Ping of Death -- This one should be fixed in any computer operating system which has been updated in the last couple of years, but it is a classic example of how easy it can be to knock a system off-line. In this attack, a person simply sent a "ping" packet of more than 64KB to the target system. This would overflow the receive buffer and crash the network link, if not the entire computer.
On the bright side, with these kinds of attacks your data is not in any danger of being stolen or corrupted, and in some cases, if you are not running an e-commerce or other high-availability site, the simplest course may be to just ignore the problem until you get enough attacks to be irritating. I ignored a system bug for a couple of years which aided in a process table overflow attack. The system was already running the watchdog program mentioned above, and since this type of attack on the computer was relatively rare and the down-time was always less than two minutes, it was only a minor irritation that would hardly have been worth the time to perform a kernel upgrade to "cure" the problem.
- Break-ins
- the reasons for this type of attack are virtually unlimited; it can be anything from just proving they can break into your system, to revenge.
Standard accounts and password scans -- This type of attack simply attempts to log in using any available login service (telnet, ssh, rsh, etc.), using common account names (root, games, mail, etc.) or the names of users discovered by looking at internet discussion groups, company web sites and other sources. Armed with a potential list of account names, the attacker will use a list of common passwords, or simply words from the dictionary, in an automated attempt to log in to the system. A more serious attacker, dedicated to breaking into your computer specifically, will research people with accounts on the system and apply birth dates, names of children and other personal information in order to find a working account and password. This is also a common internal attack, and is even more likely to succeed from the inside, since personal information is more readily available there.
What to do about it -- Because these approaches will often succeed in any concerted attack, it is best to disable all login access from the internet. If you must allow it, require strong computer-generated passwords on any account with access from the outside, allow access only through an encrypted connection such as that provided by a secure sockets version of telnet (telnet-ssl) or a secure shell login such as OpenSSH, and if possible, only allow logins from specific known external computer systems. These same approaches can be applied to internal computer systems to protect against internal attacks.
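With OpenSSH, most of these restrictions can be expressed in a few sshd_config directives. A sketch; the account name is a hypothetical placeholder:

```
# /etc/ssh/sshd_config
PermitRootLogin no          # never allow direct root logins over the net
PasswordAuthentication no   # require keys rather than guessable passwords
AllowUsers admin            # only this (hypothetical) account may log in
```

Restricting logins to specific source addresses can then be layered on top with your packet filter.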
Known bugs, common bugs and security holes -- In this type of attack, the attacker looks for bugs or system security holes in your computer which can be used to gain access. Once they have one of these bugs or holes, it is used to break into the computer.
Solution -- regardless of operating system, there is only one viable solution to this problem: keep the computer's operating system up-to-date with the latest security patches, and subscribe to a security notification list for your particular operating system. I believe most Linux distributions maintain a security email list for this purpose. It is important to note here that the fewer programs and services you have installed on a computer, the better your chances that any given bug or security hole won't apply to your system and can be safely ignored.
Computer viruses -- Many people today think of computer viruses and worms only as an irritation which may delete files on their hard disk or display silly messages. Unfortunately, many of them are very discreet, and instead gather information to send out to their originator so that they can better attack your network, or even just install a program to give their originator direct access to your network.
What to do -- an obvious first step is to install anti-virus programs on all computer systems, but since new viruses occur on a regular basis and may not be caught by your "up-to-date" anti-virus program, it is important to block the unintended entry of any executable program into your network, particularly those which can be "accidentally" run by someone using an email program. In order to do this, the email server should have filters to catch all the common executable attachment file types, and all web browsers should go through a proxy server with similar filtering installed.
- Launch platform
These attacks and/or abuses of the internet are intended to simply gain enough access to allow your computer to be used to hide the real identity of a person engaged in some obnoxious activity. While this is a common approach for attacking computer systems, it is even more commonly used for sending spam. Because of the limited use they want to make of your computer, this may not even disrupt normal operations for you (other than using up some of your connection bandwidth). Of course, since they are using your computers against someone else on the internet, the person or company on the receiving end may come after you as the apparent "source" of the problem. Often, this approach does not even require that your system be breached; they may simply make use of services which you provide for them without realizing it.
Forged return address -- one way to implement a DOS attack is to flood a system with garbage sent by other computers. To do this, an attacker can simply send a stream of requests to other computer systems using a forged return address, so that the responses (which could be much larger than the requests) are sent to the computer under attack. This kind of attack can't work using TCP connections, because of the three-way handshake required to set up a connection, so to reduce the risk of your system being used for this purpose, you only need to shut down any unnecessary ICMP- and UDP-related services. Technically, some other services might be usable for this as well, but since only TCP, ICMP and UDP are used by most systems, if your system is running any other protocols, then presumably you need them for some reason.
SMTP relay -- probably one of the single most annoying problems for everyone on the internet. A spammer, after finding out that your mail server allows relaying, will route all of their spam messages through your email server out to the rest of the internet. If your computer is configured to provide email services, you must make sure that the mail server program you are running has been configured to refuse to relay messages that come in from the internet back out to the internet. Be careful when doing this; the most common mistake made by ISPs tightening up the security of their mail systems is to block the relaying of messages to other mail servers within their own domain, or to other domains for which they are supposed to be providing mail service.
- Internal Attacks
If you have a large network shared by many people, an internal attack should be a major concern, since most networks are least protected against this. Small or single-user networks generally do not give this any consideration at all, but that can be a big mistake: once your firewall is breached by an outside attack, the next stage of the attack is in fact an internal attack! There are far too many different kinds of internal attacks to list them all here, but some of the more common general approaches include:
- Password Cracking -- guessing or systematically trying passwords, or running a cracking program against a stolen password file, much as described under external attacks above.
- Temp file attacks -- exploiting programs which create predictably named files in shared temporary directories, allowing an attacker to plant or substitute a file and trick a more privileged program into reading or overwriting something it shouldn't.
- Buffer Overflow -- feeding a program more input than it allocated space for, so that carefully crafted data overwrites adjacent memory and can redirect the program into executing the attacker's code.
Security Philosophy
The basic philosophy of any good network firewall/security system will be "security by design", which is simply a restatement of the obvious approach: a carefully crafted design intended to ensure the security of the system. While security by design is unquestionably the best approach, it does not guarantee success. There are at least two other approaches which can be used to supplement the design of your security system. Generally, these are only used by the truly paranoid, because they are usually too much work for the limited value they provide:
"Security by redundancy" -- this approach daisy-chains different implementations of the same functional program, so that the data must pass through each one in turn. The theory is that each program will have different bugs, so an exploit which will breach one will not make it through the other. An example of this might be to route email through both sendmail and smail.
"Security by obscurity" -- this approach uses custom software and/or programs that are rarely used by others. The theory here is that if the program is rarely used then no one will know what the exploits are, and very few will be inclined to try and find them. The downside of this is that no one knows what the exploits are, so no one has bothered to fix them. This is generally not a good approach, though it may have some applications, particularly if used in conjunction with "security by redundancy".
Design of the system
When designing your firewall system, in order to determine the approach that is best suited to your needs, you need to decide several things:
What (if any) services do you wish to provide to people connecting from the internet? You may only intend for these services to be used by specific people, but if the services are reachable from the internet, others may be able to make use of them as well.
What data has to be protected and to what degree. It may be that some information must be protected from disclosure, while for other information, it is sufficient to simply protect it from being modified. In my office, proprietary customer source code must receive the maximum protection possible, while other things such as mirrors of data from the internet could be read by anyone without concern.
Do you trust the people using the internal network? This does not necessarily mean they are dishonest (though it could); it may simply mean that, through lack of knowledge or carelessness, they could accidentally breach the security of your network.
How paranoid are you? It probably seems silly, but some people take a more casual attitude than others toward these things, and so may be comfortable using a lower level of security and taking their chances. Basically, the more paranoid you are, the more work it's going to be. Personally, I am completely paranoid, so I spend entirely too much time on my firewall system (why else would I be writing this?) :-)
A typical approach to securing networks generally goes something like this:
Remove, disable, and/or block access to everything on the network, then install/enable only those items which you need, but only to the most minimal level necessary to perform the required function.
Because it is usually rather inconvenient to maintain this level of security on computers used for day-to-day operations, usually networks are split into a high security external network which provides services to the internet (sometimes referred to as the DMZ) and an internal network with relatively weak security that accesses the internet using connections which go through the more tightly secured external network. Implementing this philosophy usually goes something like this:
Create separate networks for internal and external use with a single point of connection (usually a computer configured specifically for the task) used to bridge between the two.
Usually the external network will use IP addresses assigned by your internet service provider and the internal network will use IP addresses selected from one of the reserved blocks of IP addresses which are not allowed to be routed through the internet:
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
It is also reasonable to use some of these reserved addresses in your external network, but they should be taken from a different subnet than the ones selected for your internal network.
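When writing filter rules, it helps to be able to check quickly whether an address falls in one of these reserved blocks. A small shell sketch covering just the three ranges listed above (not every special-purpose range):

```shell
# classify an IPv4 address as RFC 1918 private or public
is_private() {
  case "$1" in
    10.*)                                   echo private ;;
    192.168.*)                              echo private ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*)  echo private ;;
    *)                                      echo public ;;
  esac
}
is_private 192.168.1.5   # -> private
is_private 198.51.100.7  # -> public
```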
Everything you wish to keep private should reside on the internal network.
Place on the external network all computers which provide any services you wish to have accessible from the internet -- web, mail, ftp, etc..
Filter Both Outbound and Inbound Data -- On your external network, at each point where it connects to the internet, set up packet filtering software to block all packets passing between the external network and the internet, except for those which you explicitly need. If you are using a Linux box, then ipchains or iptables are probably the best choices for this. A typical configuration might enable full access for TCP packets to/from port 80 on your web server computer, and bi-directional TCP communication with the general-use ports defined by the kernel's "ip_local_port_range" setting,
though incoming TCP connections (SYN packets) addressed to these ports should be blocked, since outside computers should only be initiating connections to the web server port (80). The reason for allowing the ports in the "ip_local_port_range" block is that for most services, each time a TCP connection is made, a free port is allocated from this range to be used for the actual communication with the requesting computer; in this way the main port for the service is kept free so that it can handle the next request.
Other services should be handled in the same manner, enabling just the type of access required, and only to/from the computers or networks which are required. For ICMP and UDP protocols, there is no need to allow inbound access to the local port range, since these are one shot protocols and no incoming connections are required other than to the master service port, though the computer may require the use of these ports for outbound ICMP and UDP connections.
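Put together, the web server policy just described might look something like this with iptables (eth0 as the internet-facing interface and 32768-61000 as the local port range are assumptions; adjust both to your system):

```shell
# default policy: drop everything in both directions
iptables -P INPUT DROP
iptables -P OUTPUT DROP
# allow connections to the web server port, and its replies
iptables -A INPUT  -i eth0 -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -j ACCEPT
# allow outbound connections from the general-use local ports, but refuse
# incoming connection attempts (SYN packets) aimed at those same ports
iptables -A OUTPUT -o eth0 -p tcp --sport 32768:61000 -j ACCEPT
iptables -A INPUT  -i eth0 -p tcp --dport 32768:61000 ! --syn -j ACCEPT
```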
Set up a bridge computer between the internal and external network -- Use IP masquerading (a form of NAT) to give access from your internal computers to the outside world, but wherever possible, block direct connections from the internal network (using the same filtering techniques as in the preceding item) and instead set up proxy servers for all services required by people on the internal network. A proxy server can act as a cache to reduce the load on your internet connection, while at the same time blocking an attack that attempts to reach your internal network. Whenever a connection is made to any computer on the internet without using a proxy, regardless of the address translations that may occur as it passes through your firewalls, it is in essence a direct pipe from your internal network out to the internet. If the system you are connecting to, or any of the routers in between, has been compromised, you might be attacked through your web browser, ftp, or other program once you initiate a connection. By using a proxy server, you move the attack back out onto your more secure external network, where a successful attacker will have more trouble doing any damage.
- For each computer on the external network, the following should be done:
Install secure versions of the programs you wish to run on each system where possible. Many of the programs commonly used to run the internet, such as sendmail and ftp, were designed over ten years ago when internet security really wasn't much of a concern, and as such are not very secure. Some of them have undergone numerous major revisions to make them more secure, but the odds are that a more recent program designed from the start with security in mind will provide better protection. Currently I use proftpd for FTP services, exim for mail services and apache for web services, and have no complaints about any of them.
Remove all unnecessary software from the computer. Every extra program you leave on the system presents one more tool which might be used by an attacker to consolidate their position and further their attack once they've broken in. In particular, remove compilers and other development tools. If you were building a prison, would you leave a pick and shovel in the cells?
Remove or disable logins on all unnecessary accounts. Every extra account you leave on the system could provide another path of attack for an intruder.
Disable all services (any program which is available on demand by accessing a particular network port) which haven't already been removed and which are not absolutely necessary. Every running service is one more path of attack that might be used to break into the system. Basically these programs fall into two major categories under most Linux installations:
- Services started automatically by the system on boot-up through scripts residing in /etc/init.d (which are linked to from /etc/rcX.d directories), or some other "standard" location (there is still some variation between Linux distributions on this).
- Services which are handled by inetd (or a similar program) which is a "service" that launches other services on demand.
Use the ps command to list all remaining active processes on your system, identify the purpose of each one by reading the documentation, and try to determine which can be removed. Also use the lsof and/or netstat commands to take a look at what network ports are in use and by what programs, and again review if they are needed.
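The audit itself boils down to a few commands; a sketch (the option letters shown are the Linux net-tools versions and may vary on other systems):

```shell
ps aux          # every running process, with the account that owns it
netstat -tlnp   # listening TCP sockets and the programs holding them open
lsof -i         # all open network connections, listed by process
```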
Down-grade the privileges of the programs you are running. On UNIX/Linux-based systems, the first 1024 network ports require super user privileges to access, but many of the programs that by convention use these ports do not need super user privileges to operate; they only need them to access the standard port they are used on. Each of these programs, if breached, can potentially allow the attacker unrestricted access to the computer (since they are running with super user privileges), so reconfigure these programs to use unprivileged accounts, and redirect connections targeted at the "standard" port to a non-privileged one. A common instance of this is the web server; it's so common that there is even a semi-standard non-privileged port (8080) used for the web server after reconfiguration. Other programs such as domain name services (BIND) may also be good candidates. When making these evaluations, it's important to remember that some programs do require super user privileges for part of their operations. For other programs, it may be desirable to run them only on the privileged ports, in order to keep a non-privileged user from running their own program on an unprivileged "server port" and mounting a trojan horse attack: setting up a program that pretends to be your server in order to collect passwords or other confidential information from your users.
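The redirection itself can be done in the firewall, so the server process never needs root at all. A sketch for a web server moved to port 8080 (eth0 as the external interface is an assumption):

```shell
# external requests to port 80 are redirected to the unprivileged port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-ports 8080
```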
Block all network connections both in and out bound -- Using external and/or internal network filtering software, all access to each computer on the external network should be blocked, both to and from the internet and the internal network. When doing this, make sure you have console access, since if it is done properly, you will lose all network access to the system once the initial block is created. Once everything has been blocked, enable support for just the types of connections you need to be able to provide (such as web services), and wherever possible, restrict the access to as few computers as possible. Typically for a web server there will be no restrictions, since you want people to be able to view your web site, but for something like a Virtual Private Network (VPN) connection, there may be only one system at a fixed address that ever connects to your server -- why leave it open for others to attempt to break in? To do this under Linux there are a variety of choices; the one I prefer is done at the kernel level using ipchains, or on the newer 2.4.x kernels, iptables. If you are doing this for the first time, I highly recommend upgrading to a 2.4.x kernel and using iptables; it is much easier to understand and simpler to use than ipchains.
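A minimal default-deny sketch using iptables might look like the following; the services and the peer address are assumptions for illustration, and the commands must be run as root, from the console:

```shell
# Default policy: drop everything in, out, and forwarded
iptables -P INPUT   DROP
iptables -P OUTPUT  DROP
iptables -P FORWARD DROP

# The loopback interface must stay open for local services
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Open only what this machine actually provides: a public web server
iptables -A INPUT  -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 80 ! --syn -j ACCEPT

# A VPN service contacted by exactly one fixed peer
# (203.0.113.5 is a hypothetical address)
iptables -A INPUT -p udp -s 203.0.113.5 --dport 500 -j ACCEPT
```

Everything not explicitly opened is silently dropped, which is the whole point: you enumerate what is allowed instead of trying to enumerate what is forbidden.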
Block all login access to computers on the external/secure network except from the console -- or if that is too much trouble, install only support for secure telnet and/or secure shell (OpenSSH) for logging in, force your users to use really strong passwords, and only log in using clients which support encryption. If possible, it's also a good idea to restrict this type of connection to "known" computers (possibly only those on your internal network), which can be identified by their IP address and/or an identifier string created and managed by your secure shell program and related services.
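A hedged sketch of what such a restriction might look like in /etc/ssh/sshd_config (the account name and network below are assumptions for your own setup):

```
# /etc/ssh/sshd_config (excerpt)
Protocol 2                    # refuse the weaker version 1 protocol
PermitRootLogin no            # require a second password to reach root
PasswordAuthentication yes    # stronger still: keys only, set this to no
AllowUsers admin@192.168.1.*  # one account, internal addresses only
```

With AllowUsers in place, login attempts from any other account or address are rejected before a password is even checked.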
Install/Enable shadow passwords. One of the first things an attacker will do after breaking in is take a copy of the system password file so that an attempt can be made to crack the passwords in it and use them to gain further system access. Using shadow passwords makes this much harder by moving all the encrypted passwords out of the standard, globally visible /etc/passwd file and placing them into a separate, better protected file which is only visible to the super user. Any decent Linux distribution will include support for shadow passwords, but you may have to install it separately and/or manually enable it before you get this added protection.
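On most distributions the conversion is a single root command from the shadow password suite; `pwconv` is the usual name, though it is worth checking your distribution's documentation first:

```shell
# Convert /etc/passwd entries to shadow format (must be run as root):
# encrypted passwords move to /etc/shadow, readable only by root,
# and /etc/passwd keeps an "x" placeholder in the password field.
[ "$(id -u)" -eq 0 ] && pwconv

# Verify: any account printed here still has its password hash
# exposed in the world-readable /etc/passwd
awk -F: '$2 != "x" && $2 != "*" {print $1 " is not shadowed"}' /etc/passwd
```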
Install an intrusion detection program. Programs such as tripwire and others can detect and report some of the common signs of an intruder such as modifications to program files which shouldn't change.
Log Anything Unusual -- Many programs allow a wide range of configuration options to control what information they report; review these options and enable the ones which might indicate an attack or other security problem. ipchains and iptables allow you to log information about the packets they process, and can provide good information indicating that an attack is in progress.
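For instance, a hedged iptables sketch that records (rate-limited, so an attacker cannot fill your disk) whatever a default-deny policy is about to discard:

```shell
# Log anything about to be dropped, then drop it.  These should be
# the LAST rules in the chain, after all the "accept" rules.
iptables -A INPUT -m limit --limit 5/minute \
    -j LOG --log-prefix "DROPPED-IN: "
iptables -A INPUT -j DROP
```

The prefixed entries land in the kernel log, where a tool like logcheck (listed below) can watch for them.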
For the less paranoid, the entire external "network" above may be configured onto a single computer system if it can handle the load (usually not a problem for any site with bandwidth less than a T1 connection).
For the completely paranoid, further actions can be taken to make things more secure on the external network:
Hide the identity of each program. Many programs used to provide external services (web servers, ftp servers, mail servers, etc.) will provide detailed information about themselves when requested; some won't even wait for a request and will simply send this type of information as part of a greeting message whenever another program connects to them. This information may include the name of the program, its revision, and even some configuration options, all of which could potentially aid an attacker in determining how to proceed. Many programs of this type provide options to override the information provided, and where available, this information should at least be replaced with an empty string. For even greater security, many will opt to take a lesson from the military and use disinformation, replacing the identity information with that of a completely different program -- for example, making your Apache web server report that it is a Netscape server.
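With Apache, for example, the amount of identity information sent can be trimmed with two directives in httpd.conf (full disinformation generally requires patching or recompiling the server; these only reduce what is reported):

```
# httpd.conf (excerpt) -- reduce what Apache says about itself
ServerTokens Prod     # report only "Apache", no version or module list
ServerSignature Off   # no version banner on server-generated pages
```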
Set up separate computers for every service to be provided (where feasible). By doing this, you minimize the disruption that occurs when one system is compromised and make it more difficult for an attacker to gain a significant level of control over your network. Though (depending on how you set things up) this does add administrative overhead to the initial setup, the maintenance overhead can be virtually eliminated with a few simple shell scripts.
Use external logging -- Set up each computer to send a copy of its kernel log information in real-time to a separate computer system which can track and analyze the logs for possible attacks. One of the first things a person will do to cover their tracks after breaking into a system is delete information from the log files which may reveal their presence. If that information resides on another computer, then they must break into the other computer before they can delete the information. A common practice among the completely paranoid a number of years ago was to send the log data directly to a printer, since printouts cannot be purged electronically.
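With the stock syslogd this takes one line on each sending machine and one flag on the receiver (the host name "loghost" is an assumption; use your own logging machine's name):

```shell
# On each monitored machine, add to /etc/syslog.conf:
#     *.*    @loghost
# which forwards every log message to the machine named "loghost",
# then restart syslogd so it rereads its configuration.

# On the logging machine itself, syslogd must be started with -r
# so that it accepts log messages from the network:
syslogd -r
```

Remember that the logging machine is now a target too, so it should accept log traffic only from your own systems.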
An Improved Approach
As with all things there are trade-offs, and for the small office, the totally paranoid approach mentioned earlier, where each service is placed on a separate computer, is simply not practical. Fortunately, there is a new option available. Using VMware's virtual machine technology, I have taken a new approach: with a single hardware computer system, it's now possible to set up a series of virtual computers, one to handle each service, with a number of additional advantages over other approaches:
Using VMware with multiple virtual machines (each node is a separate virtual machine):

        Internet      Internet      Internet
           ||            ||            ||
    +------||------------||------------||--------------------+
    |      ||            ||            ||                    |
    |   external      external      external    . . .        |
    |   link VM       link VM       link VM                  |
    |        \            |            /                     |
    |         --------------------------                     |
    |                     |                 /- Logging       |
    |      Firewall filter/NAT router -----|-- Proxy         |
    |                     |                 \- Bridge ======== Internal Network
    |                     |                                  |
    |             External Servers                           |
    |           /    |     |     |    \                      |
    |         web   ftp   vpn   DNS  SMTP   . . .            |
    +--------------------------------------------------------+
- VMware makes it possible to make the virtual hard disk for a virtual machine truly "read-only" by setting it to "nonpersistent": if you power down the virtual machine and restart it, all changes to the hard disk are lost, including any modifications made by someone who broke in.
- It is also possible to get network log-in access to the virtual machine through the host computer while having all network logins disabled on the virtual computer. This means you can create a complete firewall and external network system on a single host computer, but still safely retain the ability to log-in to each virtual computer through your internal network. To do this, configure the Linux Kernel used for the external network to include support for a serial console, and configure each virtual machine to use one of the virtual /dev/ptyqX devices as a console port. Once this has been done, it's possible to access the virtual machine from the local host computer using any standard terminal program by connecting to the corresponding /dev/ttyqX device.
- The multiple virtual computer approach turns services into plug-in modules: each time you add a new internet connection, network server, proxy, or other feature, it's simply a matter of adding another virtual machine, and to remove the feature, you just shut down the virtual machine providing the service -- no need to remove it or its configuration.
Now that you (hopefully) have a better understanding of why things are done, you can move on to the specifics of how they are done. This section lists some resources that will give you the "how-to" information for configuring your firewall system.
- Securing your computer system quick reference guide www.linuxsecurity.com/docs/QuickRefCard.pdf
- IPTables how-to, should be included with any distribution that supports iptables. If iptables is installed, it will probably be in the /usr/share/doc/iptables directory. This is much more comprehensible than its predecessor, the ipchains how-to, probably mostly because iptables is much easier to understand and use.
- /proc filesystem documentation (proc.txt) in the "Documentation" sub-directory of the source code for your kernel. In particular, look at the documentation for the "/proc/sys/net/ipv4" configuration subdirectories. Many features related to the security and operation of your network connections can be set here.
- inetd - system service launcher, can be used to enable/disable various system services. Usually used in conjunction with TCP wrappers which through the hosts.allow/hosts.deny files can provide controlled access to some specified services.
- iptables/ipchains - provides packet filtering and NAT capabilities. The heart and soul of any good Linux firewall system. If possible use one of the newer 2.4.x kernels and go with iptables.
- syslogd - Logging daemon, can be used to redirect log information to other computers, or configured to receive log data from other computers.
- logcheck - Scans log files for unusual occurrences
- rinetd - a simple-to-use program for redirecting TCP connections to different ports and/or computers
- proftpd - an FTP server program designed for greater security.
- watchdog - automatically reboots a locked-up or unresponsive system
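The TCP wrappers mentioned under the inetd entry above are driven by two files; a hedged example of the usual "deny everything, then allow the few" arrangement (the addresses and services are illustrative):

```
# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- then permit only known clients
sshd: 192.168.1.        # secure shell from the internal network only
in.ftpd: 192.168.1.10   # ftp from a single administrative machine
```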
Programs to Test the Security of Your Firewall
Presentation given to the Mid-Willamette Valley Linux User Group, May 5th, 2001
This was supposed to be the outline/overview of the talk; I got a little carried away :-) It will undoubtedly be expanded at a later date to fill in some areas I short-changed in this revision.
If you should find an error in this document, please let me know (relevant references, online or otherwise, much appreciated); the only thing I hate worse than being wrong (and the embarrassment that goes with it) is the dissemination of incorrect information to others.