From the earliest of days, Man has learnt well to defend his assets. Moats used to be built around castles and castles had high walls with heavily armed soldiers standing guard atop them. Sites chosen for such castles were not very easy to get to either - if they weren't at the top of a rocky cliff, they would be on a coast where no ship would berth. And if any enemy should be brave enough to attempt to climb the walls, they would pour boiling hot coal tar and rain arrows and rocks on them.
Okay, how is this history lesson going to help you secure your servers and ultimately protect your valuable data? Well, we are going to make them as impregnable as the castles of old. And while history's castles often fell to determined enemies, ours, with care, need not.
Taking a lesson from history's pages, though, it seems wiser and more efficient to prevent a loss than to attempt to recover from an attack: prevention is better than cure. Enterprises are quickly realizing that it's better to deploy something that can detect as well as prevent intrusions, rather than simply detect one in progress and try to alert the responsible personnel.
The avenues of attack today are far wider and more numerous than those available even two years ago, and this has in turn led to an explosion in the variety of attack vectors that deepen an intrusion. So what is an IPS, and why should you have one at all? What questions do we need to ask ourselves before we go ahead and purchase and deploy one on our networks and systems? Let us explore this in the first article of this story.
Why is prevention better than detection?
Enterprises are rapidly turning into mobile and metamorphic workplaces, with a rapidly increasing number of employees acquiring laptops to work from. As these executives travel between departments, offices and campuses, suitable connectivity must be provided for them to simply do their job. Plugging into the nearest Ethernet port and Wi-Fi are the most often used options. However, these are also the most dangerous, since without proper and strict policies in place, undefended or unclean systems could easily plug in and infect the entire infrastructure in no time. How exactly do you force a visiting consultant to install or use your particular favorite antivirus? Rather than grapple with such issues, it is usual practice to leave systems open. And that action alone endangers more than one system.
Consider for example, the rather innocuous cellphone Trojan called "Skulls". This vector simply turns the application icons on your Symbian cellphone to those of the skull and cross-bones. It uses the notorious Cabir worm to spread itself through Bluetooth. Increasingly laptops, smart devices, printers and even some brands of PCs are equipped with Bluetooth. Consider this nightmarish scenario, where someone rewrites a portion of the Skulls code to let the worm replicate to non-Symbian systems. The moment an infected cellphone enters your Bluetooth zone, the worm would transfer itself to the nearest open Bluetooth device (say a laptop) and as that device connects to other systems over Wi-Fi or even Ethernet, the worm could spread to other systems. Of course, the ultimate aim of the worm would be to reach another Symbian cellphone and it can now reach one that it previously had no means to reach - say all the way across the globe, through your LAN and then through your ISP. What means do you have deployed to detect such a spread, let alone fight it?
An IPS (Intrusion Prevention System) is software that strikes a synergistic balance between an active firewall, a software-update center, a malware-definitions server and a policy enforcer. An IPS has policies and rules against which it compares network traffic. If any traffic violates them, the IPS can be configured to respond by fighting that threat rather than simply alerting you to its existence.
Typical responses might be to block all traffic from an offending IP address, or to block incoming traffic on a particular port, to proactively protect a single computer or the entire network. How effective the IPS is depends on which of the two methods below it employs, and in what combination.
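On a Linux-based sensor, for instance, these responses often boil down to inserting firewall rules. A minimal sketch, assuming iptables; the address 10.0.0.66 and port 135 are placeholders, not values from any real attack:

```shell
# Hypothetical IPS response actions (iptables assumed; the address and
# port below are illustrative placeholders)

# Block all further traffic from the offending IP address
iptables -I INPUT -s 10.0.0.66 -j DROP

# Or block only the targeted port for everyone, network-wide
iptables -I INPUT -p tcp --dport 135 -j DROP
```

A commercial IPS wraps the same idea in policies and a management console; the point is that the response is automatic, not merely a page to an administrator.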
An IPS responds either to changes in traffic flow and patterns, or to certain predefined signatures and the responses associated with them. Let us see what each of these means.
Traffic flow pattern method
Malware and agents spreading on a network cause rapid fluctuations in network flow. This is easily noticeable and can be flagged for action. Typically, a worm trying to get in will initiate remote scans and then probe for particular vulnerabilities. Monitoring active network loads and comparing them against what the load should look like at that time of day will tell you whether something is suspicious. Once you have decided it is, you can set about identifying the particular machines participating in the scenario and isolate the infector. Further action can take the form of isolating that machine from the network and running scans on it to locate the agent and finally remove it.
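The core of the idea can be sketched in a few lines of shell: count connection attempts per source address and flag anything above a baseline. The log format, addresses and the threshold of 3 here are illustrative assumptions, not output from a real sensor:

```shell
# Toy sketch of threshold-based anomaly flagging. Each line of the
# simulated log is "source-ip destination:port"; a real IPS would read
# live traffic counters instead.
THRESHOLD=3
suspects=$(printf '%s\n' \
  "10.0.0.5 192.168.1.10:445" \
  "10.0.0.5 192.168.1.11:445" \
  "10.0.0.5 192.168.1.12:445" \
  "10.0.0.5 192.168.1.13:445" \
  "10.0.0.9 192.168.1.10:80" |
  awk -v t="$THRESHOLD" '{n[$1]++}
       END {for (ip in n) if (n[ip] > t) print "suspect: " ip}')
echo "$suspects"
```

A real IPS does the same bookkeeping continuously and per time of day, but the principle is identical: a source touching far more hosts or ports than the baseline allows gets flagged.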
Signature and response method
When a mal-agent tries to infiltrate your infrastructure, it leaves behind a trail of what it has done and where it has been. This is its "attack signature". The probes it sends out, and the responses it tries to elicit from other systems, are usually quite sufficient to tell you what you're dealing with and from where. An IPS looking for such activity patterns should have an underlying, integrated IDS that helps it fine-tune its findings over time and eliminate false positives.
How effective is your IPS?
An important factor that governs what an IPS solution can do is where it sits on your network. For example, an IPS that sits at the gateway level is more like a firewall. It can do little to prevent agents infiltrating through floppies, CDs, USB drives and so forth. However, some IPS systems do have agents that you can install on machines throughout your network and these agents can proactively cooperate with a central IPS server to detect and fight intrusions through these means also.
Maintenance of attack signatures and removal techniques for various agents is important too. The IPS should further give you the ability to drill down and define what kind of policies to apply and what actions to take on positive identification of a mal-agent. If the IPS solution can maintain a database of attacks over a period of time and use it for further research, it is an excellent choice. We have compiled a list of ten questions, in the box, to help you select the right IPS solution for your enterprise.
Commonly available enterprise-class security solutions (from vendors like Trend Micro, McAfee, Symantec and eTrust) usually combine an IDS, corporate firewalls and antiviral software into effective IPS systems. Consult our November 2004 shootout of the same for more details on what these solutions could specifically do.
Put together, an IPS provides an active line of defense, and such systems are aptly called 'Self Defending Systems'. There are a few other things to keep in mind. An IPS should not try to replace existing technologies, but should add to them. Implementing an enterprise-wide IPS is not easy, because configuring it is a rigorous and continuous process: you literally need to teach the system to differentiate between normal traffic and something suspicious.
ROI on IPS systems
From a purely economic standpoint, ROI is not self-evident for an IPS, because there is no directly measurable profit to be derived - only the assurance that the system keeps working securely. Hence, you need to consider how much the company could lose if the product or technology were not in place: the money spent on rebuilding servers and recovering data, and the time and resources of technical personnel dedicated to cleaning up after an attack.
Security starts with the operating system. Unless the OS itself is configured for maximum security, while still allowing required functionality, any number of deployed intrusion detection or prevention systems will simply be ineffective. These measures come over and above deploying firewalls and keeping them in up-to-date trim by applying patches and updates - it is a given that you will do those.
When we started working on this story, we decided to find out for ourselves just how secure Windows and Linux really are, media frenzy about them notwithstanding. Our discussion below draws on our findings. We first did a full installation of both OSs, without installing any applications beyond what came on the installation media. We then ran Nessus (an open-source vulnerability scanner) and InternetPeriscope (from LokBox Software) to attempt to discover what we could about them.
Securing Windows Server 2003
Out of the box, Windows Server 2003 is a "locked down" OS. That means anything its publisher (Microsoft) does not deem absolutely necessary to run on it is turned off or not even installed by default. For example, on servers, the Web server application is considered its biggest weakness simply because of its larger visibility to the public world. For this reason, IIS is not even installed by default. Even when it does get installed, things like CGI, WebDAV, Internet Data Connector components remain 'Prohibited'.
Remarkably, the ICF (Internet Connection Firewall, since renamed 'Windows Firewall') turned out to be a pretty secure firewall. With it running, basic networking remains unavailable from outside, and our reports indicated that even when the machine is turned on and put on the network, it is in "stealth" mode. Nessus in fact refused to scan the server, telling us "Scan returned an empty report". If you check, you will also find that you cannot browse to the machine from your Network Places. InternetPeriscope did manage to report a few open UDP ports, but these correspond to various non-critical networking features (like ICMP time stamp).
Spread your risks. The greater the number of assets you have around, the more distributed an attack would be. Of course, this results in a larger number of potentially successful attacks, but in a much more diluted form. For example, if you tend to run single-server boxes that host all infrastructural services, chances are that a crash in one could seriously injure your entire network (a crash in the DNS would also render Active Directory useless, bringing down user authentication, file replication and, if you have it deployed, your Exchange mail server). It is much better to distribute services among as many physical machines as possible, so that you can easily troubleshoot and bring back a victimized server without affecting other systems. If you run a load-balanced network, you probably already have that built in!
Policy based control mechanisms
Group policies are an effective way to set up rules and enforce them at the OS and server level. To use them, fire up "gpedit.msc" from the Run box. Under Computer Configuration>Windows Settings>Security Settings>Password Policy, first set a 'maximum password age' between 10 and 30 days. This forces your users to change their passwords frequently and minimizes unauthorized use. Also set 'passwords must meet complexity requirements' to 'enabled'. This ensures that user passwords contain a healthy mix of letters, numbers and symbols, making them that much harder to guess. Finally, make sure the option to store passwords using reversible encryption is disabled; this ensures that someone trying to decrypt the stored passwords gets only junk.
Account lockouts are a healthy way to keep undesirable people out. Would-be hackers attempt multiple logins to guess your password, so setting accounts to lock out automatically after a particular number of invalid attempts is effective. You can set them to unlock after an interval or never - in which case you would have to manually re-enable the account from your user-manager console. The threshold should be a reasonable figure that allows for legitimate users mistyping their passwords. Set it up from the Account Lockout Policy options. First set the threshold to (say) '3'; this enables the other two options. Now set the duration to (say) '30' minutes and the reset timeout to (say) '30' minutes. The lockout and timeout durations must be longer than a typical hacker would wait around between lockouts.
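For reference, the same password-age and lockout settings can also be applied from a command prompt with the built-in `net accounts` tool (the values mirror the examples above; run as an administrator):

```shell
REM Maximum password age of 30 days
net accounts /maxpwage:30

REM Lock out after 3 bad attempts, for 30 minutes, with a 30-minute
REM window before the bad-attempt counter resets
net accounts /lockoutthreshold:3 /lockoutduration:30 /lockoutwindow:30
```

Running `net accounts` with no arguments shows the current settings, which is a quick way to audit a machine.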
A paper trail is the best way to trace a problem, and auditing is meant for this. Turn it on for all relevant events. It would be wise to enable it for the 'Failure' events of all types, and for 'Success' of critical events like 'policy change' and 'privilege use', which would be the first targets of would-be hackers. These events are logged to the Security portion of the Windows Event Log.
Hackers frequently use default accounts present in the system to gain access. To prevent this, go to the Security Options portion and use the 'Accounts' set of options to disable or rename the Administrator and Guest accounts. It is best to disable the Guest account and rename the Administrator account to something else (like 'AcmeAdmin'). If you exclusively use Windows 2000/XP clients and 2000/2003 servers, you can turn on digital signatures to sign all network communications and use only NTLMv2 messages between servers and clients. This ensures that only 'known' and trusted systems are allowed to participate in network activities; all other attempts are rejected. Of course, you would need to correspondingly change settings on the client machines as well. The options to sign are under the 'Domain member', 'Microsoft network client' and 'Microsoft network server' groups.
Set up the server to require 'Domain controller authentication to unlock a workstation' to force a client to re-authenticate before unlocking itself. On the network front, disable all of 'anonymous SID/Name translation', 'anonymous enumeration of SAM accounts and shares', 'let everyone permissions apply to anonymous users' ('Everyone' includes both authenticated and unauthenticated users, while 'anonymous' includes only unauthenticated users). If you have the recovery console installed, disallow 'automatic administrative logon' to it.
By default, Windows Server 2003 allows applications unlimited access both to install themselves and to various parts of the Windows Registry. This is a very bad idea and a serious security hole. Access the 'Software Restriction Policies' folder, right-click and select 'New Software Restriction Policies'. Two subfolders will appear, along with three keys. Click the 'Enforcement' key and change the option from 'All users' to 'All users except local administrators'. If you have a custom application type of your own running, go to the 'Designated File Types' option and add the new file extension. Now open the 'Trusted Publishers' option and select that only 'Local computer administrators' can choose whom to trust. If you're on an Active Directory domain, you can select 'Enterprise administrators' instead. Now go into the Security Levels folder, right-click 'Disallowed' and set it as the default policy. In the 'Additional Rules' folder you will be greeted by a set of sensitive Registry keys; if you do not want one of them to be accessible to an application, double-click it and set its 'Security Level' to 'Disallowed'.
Sometimes, you want to disallow the use of a particular file, regardless of what it is called or where it is kept. To do this, right-click in a blank area of the 'Additional Rules' window and select 'New Hash Rule'. Select any copy of the file you want to block using the Browse button; the file-information box is populated with its attributes. Set it to 'Disallowed' and click OK to disable it. Similarly, you can create rules to block a zone of Websites (New Internet Zone Rule) or an entire directory path (New Path Rule).
Finally, visit the folders inside the Administrative Templates folder and set the following. Under Windows Components>Terminal Services>Client/Server Data Redirection, set all options whose names begin with 'Allow' to 'Disabled' and those that start with 'Do not allow' to 'Enabled'; this turns off all forms of redirection. Under Windows Components>Terminal Services>Encryption and Security, enable 'Always prompt client for password on connection'. Under System, enable 'Display Shutdown Event Tracker'; this causes remote attempts to shut down the server to fail unless a reason is also specified. Under System>Logon, enable the 'Disable legacy run list' option - most worms today use the legacy run list to launch themselves.
Shares
It's a bad idea to leave unwanted shared folders and drives around; only share out what you need. To find out what's visible on the network, fire up the File Server Management console (Administrative Tools>Manage Your Server>File Server) and look at the list under the 'Shares' folder. Shares with a "$" at the end of their names are created automatically by Windows for various purposes and are called 'administrative shares' - you can't do much about these except turn them off, which can cause its own problems. The only one you can usefully configure is the "wwwroot$" share, which exists if you have IIS installed. Make sure security is very strong on it and that only the necessary users have "full" or "write" access.
If you have "Microsoft Services for Network File System" installed (provided on this month's CD, see box for deployment instructions), the task of managing your networked file system becomes even easier. Using this kit, you can enable or disable TCP and NFS transports for file-serving, map Windows user names and groups to UNIX groups and setup locking preferences. Once this is installed, you will see an additional tab called 'NFS Sharing' on the properties box for drives and folders. You can now share these resources with a different character-encoding (currently only ANSI for English and different Japanese systems are supported). You can also setup the UID and GID (similar to UNIX systems) for anonymous users and setup the type of access for each folder (read-only, read-write or no access). One of the first things you would notice here, is that by default, all folders will be shared with 'Root access' disabled. This means that the 'root' or 'administrator' user cannot sign on to this folder and this is a good security feature. Permissions here are set per machine.
Securing Win XP
Most of the elements of securing your Win XP can be done using Group policies. In an enterprise, these would be done at the domain level and hence we are not separately covering them here - see the above Server 2003 discussion for insight on how to do this.
Win XP lets you create multiple accounts, but from a security standpoint these accounts are insecure from the word go. Reason? Their passwords are blank and all of them are 'administrators' by class. This is the last thing you want in your enterprise. So the first thing to do is assign passwords to all users, especially those with administrative privileges. The more administrator-class users on a system, the more information a hacker can dig up.
Consider moving desktop users into the local 'Power Users' group rather than making them Administrators. Another trick to complicate hacking into your system is to create a local account with absolutely no privileges, rename it to 'Administrator' and give it a strong password. Also eliminate unnecessary and redundant user accounts - test accounts, shared accounts, accounts of ex-employees.
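These steps can be scripted with the built-in `net` command; the account names below are examples, not accounts that exist on your system:

```shell
REM Move a desktop user out of Administrators and into Power Users
net localgroup "Power Users" alice /add
net localgroup Administrators alice /delete

REM Disable a stale account instead of leaving it lying around
net user testaccount /active:no
```

Scripting this makes it repeatable across a room full of XP desktops, rather than clicking through the User Accounts applet on each one.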
It is unlikely that a hacker would walk up, put a gun to your head and take control of your system. Unauthorized use of systems happens only when the user is away. For this reason, never leave your system unlocked. Always setup a screensaver, and protect it with a password to prevent such usage.
Win XP uses 'Simple File Sharing' to share your files. While this may be sufficient for a home network, it is a poor choice for an enterprise and should be disabled. Disabling it ensures that your files are not available to everyone; you will now need to right-click on a folder and specifically grant access before it is shared.
Win XP can encrypt your files and folders using its EFS (Encrypting File System). Once files are encrypted, it is useless to try to use them somewhere else, since decryption depends on digital certificates maintained by the OS. Also encrypt the 'temp' folders to further secure data left around by your applications.
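EFS can also be driven from the command line with the built-in `cipher` tool; the folder path here is just an example:

```shell
REM Encrypt a folder and everything created in it from now on
cipher /E /S:C:\work

REM Listing the folder again shows E (encrypted) or U (unencrypted) flags
cipher C:\work
```

This is handy for encrypting the temp folders mentioned above in one sweep.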
Either disable the 'Offline Files' feature or encrypt its database. To encrypt it, open the 'Offline Files' tab for the folder's properties and check on the 'Encrypt files to secure data' option.
Disabling the AutoRun feature for the CD-ROM is a good move, considering that one could easily install malicious code using this feature. Do this from Group Policy (Run > GPEDIT.MSC > Computer Configuration > Administrative Templates > System > 'Turn off Autoplay').
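The same effect can be had through the Registry via the well-known NoDriveTypeAutoRun policy value; setting it to 0xFF disables AutoRun on all drive types (test on one machine before rolling out):

```shell
REM Disable AutoRun for every drive type (0xFF = all drives)
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDriveTypeAutoRun /t REG_DWORD /d 255 /f
```

A Registry tweak like this is easy to push out with a login script when Group Policy is not available.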
You can prevent users from indiscriminately connecting devices over USB by disabling (as the Administrator) the USB controllers from Device Manager. If you have already removed the user from the Administrators group, they cannot re-enable them.
Also protect the Bluetooth and wireless interfaces. Wireless connections should use encrypted communications (WEP or, better, WPA), and Bluetooth devices should be set to non-discoverable mode.
Securing PCQLinux 2005
Natively, Linux is a secure OS. Today there are distributions that are certified out of the box under the Common Criteria at Evaluation Assurance Level 4 (EAL4); SuSE Linux is one example. Higher assurance levels exist (the scale goes up to EAL7), but these are generally reserved for specialized military-grade systems.
A common benefit of using Linux on workstations instead of other OSs is that far less malware targets it, which implies fewer attacks. So even if you don't have an antivirus installed on your machine (which is not at all advisable), you still have a better chance of survival than on other OSs.
But the biggest question here is: if Linux is natively secure and has very few viruses and other malware, should a normal user take the effort to secure his workstation in a corporate environment? The answer is 'yes'. Linux is comparatively secure, but not 100 percent secure. If proper measures are not taken, there are plenty of ways a Linux machine can be compromised, and a single compromised machine can create havoc for the whole network.
So in this article we will look at the common software and human flaws that can lead to a compromised workstation, and then see how you can stop them.
The Linux Workstation
1. Always use a boot-loader password, and prefer GRUB to LILO. This is important because it is very easy to bypass the normal Linux boot process, boot the machine into single-user mode (which doesn't require a password) and then change the root password.
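With GRUB, the password goes into /etc/grub.conf. A sketch of the two steps (the hash shown is a placeholder; generate your own):

```shell
# Step 1: generate an MD5 hash of your chosen password
grub-md5-crypt

# Step 2: add the hash to /etc/grub.conf, just below the timeout line:
#   password --md5 $1$AbCdE$placeholderhashvalue
# Without the password, users can no longer edit boot entries or
# append 'single' to drop into single-user mode.
```

LILO has an equivalent `password=` directive, but GRUB's hashed form means the password never sits in the config file as plain text.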
2. Never do a 'full install' of any Linux distribution on a production machine. While installing, select the Custom option (available in most Linux distributions) and pick only those applications you really need. The idea is to minimize the number of installed applications, since the number of applications is directly proportional to the number of vulnerabilities. On a normal Linux workstation or desktop installation, make sure that unwanted server services like DHCP, DNS, TFTP, Apache, telnet, FTP, Sendmail and SMB are not installed - and if installed, not running. You can stop these services by running ntsysv on any Fedora or PCQLinux machine.
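If you prefer the command line to ntsysv, chkconfig does the same job. The service names below are typical Fedora-family names and may differ on your system:

```shell
# Turn off unwanted services at boot and stop any running instances
for svc in httpd telnet vsftpd sendmail smb dhcpd named; do
    chkconfig "$svc" off 2>/dev/null
    service "$svc" stop 2>/dev/null
done

# Review what is still enabled at boot
chkconfig --list | grep ':on'
```

Run the review step after every install or update; services have a way of creeping back in.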
3. Another important thing is to get rid of .rhosts files, as they are a favorite of hackers. A .rhosts file contains the names of systems trusted to log in to your account. When you log in using an r-service like rlogin or rsh, the system checks its .rhosts file and, if your machine name is found, gives you access without asking for a password. In most Linux distros this file lives in your home directory. You can remove it by running the following command
#rm -rf ~/.rhosts
You can even append this command to your .bash_profile so that the file is automatically deleted every time you log in.
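On a multi-user machine you can sweep every home directory in one go. The sketch below demonstrates on a scratch directory so it is safe to run anywhere; on a real system you would point BASE at /home instead:

```shell
# Sweep for leftover .rhosts files, demonstrated on a scratch directory
BASE=$(mktemp -d)
mkdir -p "$BASE/alice" "$BASE/bob"
touch "$BASE/alice/.rhosts"                 # simulate a stray trust file

# Find and remove every .rhosts sitting directly under a home directory
find "$BASE" -maxdepth 2 -name .rhosts -print -delete

remaining=$(find "$BASE" -maxdepth 2 -name .rhosts | wc -l)
echo "remaining: $remaining"
rm -rf "$BASE"
```

Dropping the find command into a nightly cron job keeps the trust files from quietly reappearing.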
4. There are many antivirus packages available for Linux, both free (open source) and commercial. If you are using PCQLinux 2005, you get ClamAV out of the box, one of the best open-source antivirus packages available. Do install an antivirus on your Linux systems.
5. Enable the firewall (iptables) at installation time. In a simple vulnerability assessment, we found that the number of threats dropped by 99 percent just by enabling the built-in firewall on a full installation of PCQLinux 2005. The remaining 1 percent was there only because ICMP time stamping was enabled on the machine. You can close even that by denying ping requests in your firewall: run firestarter in PCQLinux, follow the wizard and, when prompted for 'Network Services Setup', select the first option, 'Disable public access to all network services', and the flaw is patched.
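If you manage iptables by hand instead of through firestarter, the ICMP timestamp exposure can be closed with two rules:

```shell
# Drop incoming timestamp requests and any outgoing replies, so remote
# scanners can no longer read the machine's clock
iptables -A INPUT  -p icmp --icmp-type timestamp-request -j DROP
iptables -A OUTPUT -p icmp --icmp-type timestamp-reply   -j DROP
```

Remember to save the rules (for example with `service iptables save` on Fedora-family systems) so they survive a reboot.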
The Linux Server
Naturally securing a server is much more difficult and important than securing a normal workstation. But to begin with, keep in mind that the security measures discussed earlier for the workstation are inherited here as well. And in this article we will go further and see what is available to make your server as secure as possible.
SELinux
One of the biggest security threats on a Linux server is the 'root' user. No, we are not joking. Root is a standard, default user in every version and distro of Linux, just as 'Administrator' exists on Windows. Because of this, the first thing any hacker or Trojan will attempt is to guess this user's password. If that password ever becomes known, the complete security of your machine is gone.
And that's where SELinux comes in. With SELinux you can create a layer of user-level access control in which you define rules, and under these rules even the root user can be restricted from doing certain tasks. For example, you could create a rule that drops anyone logging in as root over Telnet down to the authority of a normal user, while a local root login still gets the usual full rights.
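On PCQLinux and other Fedora-family systems, SELinux's global state lives in /etc/selinux/config; a minimal sketch of the settings to check:

```shell
# /etc/selinux/config
SELINUX=enforcing        # enforcing | permissive | disabled
SELINUXTYPE=targeted     # default policy; confines the key daemons

# Check the running state at any time with:
#   getenforce
```

'permissive' mode logs violations without blocking them, which is useful while you are still tuning the policy; switch to 'enforcing' once the logs come up clean.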
Installing SELinux is not at all difficult. In PCQLinux 2005 SELinux is enabled by default. For a detailed article on configuring and using SELinux, read our article Enhancing Security in Linux (August 2004, page 102).
HoneyPots
Honeypots are another very interesting way to protect your servers from hackers and worms. For example, you can use a honeypot called LaBrea, which creates hundreds of fake IPs on your network and diverts DoS attacks among those fake IPs, saving your main server (Prevent DoS attacks, April 2004, page: TODO). There are others, like Honeyd, which can create a decoy chamber on your server: when a hacker tries to break in, he gets diverted into the decoy, believes he has successfully hacked the system and spends his time hunting for important data there (Fool hackers with Honeyd, May 2004, page: TODO). All these honeypots also silently log what the hacker is trying to do, which you can use not only to trace him, but also to further tighten your server security.
64-bit protection
Our visual on the first page of this article shows one form of attack targeting your video memory. What is this about? When something is 'displayed' by your computer, the information about it is compiled by the graphics engine and then sent to the graphics hardware. This data is organized into 'pages', and only one such page is displayed at a time. The CPU picks the page that should currently be displayed and marks it; this page is then automatically sent to the video device, while the previously displayed one goes back into the buffer. To protect what's on screen from getting garbled, the currently active page is protected by the CPU.
Apparently, the pages that are not active are considered not worth protecting, and viruses exist (like the Gold Bug) that wait for such pages to arrive and then sift through them for potentially useful information. There is no antivirus or other protection against this.
The new 64-bit CPUs from Intel (the Intel 6xx family, to be precise) take care of this by including something called the 'Execute Disable' (XD) bit. OSes now have the option to set this bit on video memory pages to indicate that they should be protected as well. Note, however, that this option can be turned off to support any legacy application that might require unprotected pages.
In conclusion
Security starts within. But to understand the last level of security - physical security - let's suppose Tom Cruise of Mission Impossible 2 drops into your server room suspended from the roof. He opens up your machine's cabinet, shorts the motherboard battery to reset the BIOS password to its default, sets the boot device priority to CD-ROM, boots the machine with a standard Knoppix CD, mounts your partitions, copies all the important data onto a USB pen drive and flies away in his chopper.
So what do you do now? All the effort you took securing your machine over the network has come to nothing.
This is why it is also very important to keep a tight watch on the physical security of your servers. The subject is largely beyond the scope of this article, but at minimum you should have security guards and locks at the door of your server room - and no opening in the roof for Tom Cruise to climb down through and hack into your server.
Having a sound IT policy for your enterprise goes a long way to minimizing if not eliminating the risks. Grounding these policies with a good implementation firms up the confidence that your infrastructure will be safe and your data secure for a reasonably long time.
After all, it doesn't take a virus attack to lose all your data... but that is subject enough for a different story. In truth, you need a little bit of everything - some prevention, some cleaners, some disaster management, a little protective storage - in a management recipe that strikes a synergy between technology and requirements.
The total cost of survival does outweigh the cost of ownership or operation. That's the way the cookie crumbles!
Anindya Roy, Binesh Kutty, Sujay V Sarma
IPS aptitude
Can the IPS identify machines on your network that need IPS protection?
At least, it should have agents that you can install on these machines that send back information about attacks.
Does the IPS offer a mode where it can learn over a period of time? How effective is this? Can a human control the process?
Learning about attacks, and about what you did against a particular threat, can be a big plus where human intervention is difficult. Effectiveness will start low, but improve over time. Human control - providing updated databases, teaching the system what it did wrong or supplying alternate actions it can take - can only benefit your enterprise.
What kinds of intrusions (DoS, protocol attacks, vulnerability exploits, application attacks) can it handle?
An ideal IPS should handle all of them.
What kind of actions can it take after identification? What is its alerting process like? Can it escalate alerts?
Alert escalation is important. If a designated person does not respond within some time, the software should escalate the call to the next person in the hierarchy till a suitable response is registered.
Can the IPS communicate with other IDS/cleaning software (like firewalls, antivirus products, etc)?
Most IPS vendors also bundle IDS plug-ins, and such IPS software will more often than not communicate with these other programs as well.
How are the centralized management and reporting features?
The IPS should offer reporting and management through standard Web browsers. Check which browser versions are supported.
Does it support either SENS or SNMP, or otherwise use the MMC?
This is good to have although not a 'must', since it can help you centralize your monitoring and management efforts.
Are there any available tools with it to analyze its logs and learn further from it?
Some report in 'well known' formats and third party tools can be downloaded to analyze these logs. This can also help you track down particular errors or entries that may seem to indicate an 'intrusion'.
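As a toy example of such log analysis, assuming a made-up log line format of '<timestamp> <ip> <event>', a short script could flag repeated failed logins from a single address:

```python
import re
from collections import Counter

# Hypothetical log format: "<timestamp> <ip> <event>". Adjust the
# pattern to whatever 'well known' format your IPS actually emits.
FAILED = re.compile(r"^\S+ (\d+\.\d+\.\d+\.\d+) FAILED_LOGIN$")

def suspicious_ips(lines, threshold=5):
    """Return IPs with at least `threshold` failed logins -- a crude
    indicator of a brute-force entry worth tracking down."""
    counts = Counter()
    for line in lines:
        m = FAILED.match(line.strip())
        if m:
            counts[m.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]
```

Third-party tools do the same job at scale, with correlation across multiple log sources.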
A typical enterprise network would have multiple platforms. How many of these does it support?
Most IPSes will readily run on Windows server platforms. But if you have NetWare, Lotus Domino, UNIX or Linux deployed, will your IPS work with them as well?
Can it handle outgoing as well as internal attacks?
Most attacks actually originate from, or are helped by, something inside your network. The IPS should be able to guard against both.
The antidotes aren't working
Douglas Brockett, VP Worldwide Marketing, SonicWALL
Early this quarter, the UK government released research showing that 68 percent of large companies were infected by viruses in 2003, despite the fact that 99 percent of them were using antivirus products. The findings underline the fact that antivirus software on its own does not do enough to protect businesses, and should be a wake-up call to all those involved in selling it.
Because antivirus capability isn't keeping up with the need for speed in deploying updates, many vendors are citing gateway antivirus as the way forward. The trouble is that this still isn't comprehensive protection. While these solutions may let you distribute network updates from a single point, they give you no control over the laptops of your travelling workforce; they still rely on client-based antivirus software. Gateway antivirus is not enough by itself, which is why some form of enforced client capability needs to be an essential part of any security strategy.
The most effective solutions are proactive, continuously updated, managed services that stop known and unknown threats at the Internet level, before they ever reach corporate networks and end users. The antivirus solution needs built-in auto-enforcement at both the client and the gateway levels. You also want to protect network vulnerabilities by using intrusion prevention to stop worms, Trojans and other attacks before they can get into networks. The most effective IPSes work at the application layer (Layer 7) using Deep Packet Inspection. This is important because some offerings purporting to be intrusion prevention systems only protect Layer 3 and 4 data. They make a big deal of having 1,500 signatures for intrusion detection while keeping quiet about having just 30 for intrusion prevention.
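The difference between Layer 3/4 filtering and Layer 7 Deep Packet Inspection can be shown with a toy sketch. The blocked ports and payload signatures below are made up for illustration; a real IPS ships thousands of vetted signatures:

```python
# Layer 3/4 filtering looks only at packet headers; Layer 7 DPI also
# scans the application payload. Rules below are illustrative only.
L4_BLOCKED_PORTS = {135, 445}                  # header-level rules
L7_SIGNATURES = [b"cmd.exe", b"/etc/passwd"]   # payload-level rules

def l4_filter(dst_port):
    """Layer 3/4: decide from headers alone -- cheap but shallow."""
    return "drop" if dst_port in L4_BLOCKED_PORTS else "pass"

def l7_inspect(dst_port, payload):
    """Layer 7 DPI: additionally scan the application data for known
    attack signatures, even on ports the L4 rules would pass."""
    if l4_filter(dst_port) == "drop":
        return "drop"
    if any(sig in payload for sig in L7_SIGNATURES):
        return "drop"
    return "pass"
```

A header-only device would pass an HTTP request on port 80 that carries a known exploit string; the Layer 7 check catches it.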
The health of the corporate body is under constant attack. The choice is clear. You can either continue to fight a losing battle by prescribing fresh antidotes every time there's a new infection, or focus your efforts on helping networks develop their own immune system. If you were the patient what would you prefer?
OS installation tips
Always install your OS with the system disconnected from the network, and put it online only once all the basic software and security systems are in place on the machine.
On Windows, use NTFS for your partitions, since it adds security to your files and data.
Keep your system files and user data on different partitions. Also keep any software copies you may want to retain on the same system on different drives or partitions. This allows you to control their visibility and access better.
Always set a complex, unique password for the administrator (root on Linux) account. It should be a non-dictionary word, at least six characters long, with a mix of alphanumeric characters, and it should not resemble common information such as your address, birthday or nickname. Guard this password with your life.
Set your first boot-device priority to your hard disk, change the other options to 'none' and, finally, set a strong system password on your BIOS as well. This will make sure that no one can boot your machine from some other media, say, a bootable CD or floppy.
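The password rules in the tips above are easy to check programmatically. A minimal sketch, using the six-character and mixed-character thresholds stated above (the non-dictionary-word check is left out for brevity):

```python
import re

def password_ok(password, personal_info=()):
    """Check a candidate administrator/root password: at least six
    characters, a mix of letters and digits, and not containing
    obvious personal details such as a nickname or birthday."""
    if len(password) < 6:
        return False
    if not re.search(r"[A-Za-z]", password) or not re.search(r"\d", password):
        return False
    lowered = password.lower()
    if any(info and info.lower() in lowered for info in personal_info):
        return False
    return True
```

Treat the pass/fail here as a floor, not a ceiling: longer passwords with symbols are stronger still.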
Should you patch?
In light of what our OS-security tests revealed, we wondered about the numerous patches that MS keeps posting at regular intervals. At the time of writing, there were 35 updates that MS had deemed 'security updates'. But are all of these updates and patches necessary? The basic flaw in all three methods of automatically patching your system (the local Automatic Updates service, a centralized Software Update Services server, or visiting the Windows Update website) is that they all require you to first let the update server scan your server for required updates from the inside. They all require a client in some form (either an application or an ActiveX control on a webpage) to do this. This scan is meaningful only when the attacker is already inside your system's defenses.
Also, the update systems do not seem to check whether the patch is required at all on your system. For example, one of the updates we found installed on our Windows server was "Security Update for Windows Server 2003 (KB873333)". We looked up this KB and found the following description: "An attacker must have valid logon credentials and be able to log on locally to exploit this vulnerability. The vulnerability could not be exploited remotely or by anonymous users." And: "An attacker who successfully exploited this vulnerability could take complete control of an affected system. However, user interaction is required to exploit this vulnerability on Windows 2000, Windows XP, and Windows Server 2003." Now, given that, one, the attacker must have physical and valid access to the server and, two, user interaction is definitely required, we are left in little doubt that in a typical enterprise deployment the Administrator of that server would be its sole possible "attacker". And since the server's ICF was effectively shielding the machine anyway, we see no way this attack could ever have taken place on the server in question!
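The reasoning applied to KB873333 above generalizes into a crude triage rule: a flaw that needs a valid local logon and user interaction on a firewalled server can wait, while a remotely exploitable one cannot. A hypothetical sketch (the scoring and the 'urgent' threshold are invented for illustration):

```python
def patch_urgency(remote_exploitable, needs_logon, needs_interaction, firewalled):
    """Crude, illustrative triage of a security bulletin, scoring it
    on the same factors weighed in the text."""
    score = 0
    if remote_exploitable:
        score += 2      # reachable from outside: the dominant risk
    if not needs_logon:
        score += 1      # no valid credentials required
    if not needs_interaction:
        score += 1      # exploitable without tricking a user
    if not firewalled:
        score += 1      # no firewall shielding the machine
    return "urgent" if score >= 3 else "routine"
```

By this rule, the KB873333 scenario (local logon needed, interaction needed, ICF in place) lands in "routine", while a remotely exploitable worm vector on an unshielded host is "urgent".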
These arguments notwithstanding, the updates are there to prevent attacks if someone does manage to gain access and install something on your systems to exploit these vulnerabilities. And that is why you should still apply these patches.