Is Free Software Insecure?

PCQ Bureau

The debate is endless: proponents of free (open source) software and proprietary (closed source) software are fighting it out on mailing lists, Usenet, personal flames, advertisements, and websites. The question: which gives better security as a generic model of software development, open source or closed source?

The immediate reaction (may I call it knee-jerk?) to this question is, "If it’s open for scrutiny, then it can’t be secure, since anyone can go through the source code and find out potential exploits." Yes, it does look that way. At least until we try to separate the facts from the myths.

Myth 1

Since anyone can examine free software source code, anyone can find security holes and exploit them.

Absolutely true. Anyone can examine the source code, look for (and perhaps find) security holes, and exploit them.

By the same token, anyone could also examine the source code, find potential holes, and report them to the author. Experience says that the number of "good guys" examining any given piece of source code is far greater than the number of black hats. It follows that the chances of a hole being found and fixed far outstrip the chances of a hole being found and exploited.

For example, a number of buffer overflows were found in various Linux programs, arising from the use of an insecure function (strcpy). These were quickly fixed using a secure alternative (strncpy), before any significant exploits of these holes could be made on the Internet. In addition, hundreds of other software packages (which hadn’t been demonstrated to be insecure) were examined and all uses of strcpy replaced with the secure version, well before anyone thought of trying to exploit them.

Myth 2

Since free software explicitly comes with no warranty, there’s no incentive for the authors to fix security holes quickly or effectively.

At the very least, a statement like this shows a serious lack of understanding of the free software development model. Many papers have discussed what drives a person to write free software, and while we don’t have the bandwidth to discuss this in detail, two motivations stand out starkly: creativity and peer recognition. Programmers write free software because they’re innately creative people, and for many, their day jobs are unable to channel that creativity. The other driving factor is peer recognition: the recognition that comes to you as the author of good software is worth its weight in gold.

Given these motivations, it follows that free software authors are extremely interested in keeping their products as up-to-date and bug-free as possible, as quickly as possible.

This is borne out by numerous examples. To take just one, when the infamous "teardrop attack" was first launched in 1997, Linux and FreeBSD fixes were available within a few hours of the attack becoming widespread. Contrast this with proprietary operating system vendors, who took from two weeks to forever to come out with a fix. To quote one network appliance vendor: "There’s no fix scheduled for this. The device is more secure when used on a secure network protected by a firewall."

Which brings us to...

Myth 3

The free software author may not have the resources, time, or inclination to provide security fixes for her products.

That’s quite possible. However, exactly the same is true of proprietary software authors. There have been many cases of companies that write proprietary software refusing to acknowledge a security issue as such (the "it’s-not-a-bug-it’s-a-feature" syndrome), or refusing to act on security issues in older versions, since they’d prefer users to upgrade to the latest versions (presumably at a hefty cost). And what do you do when the author flatly refuses to fix the problem?

As a free software user, you don’t have to depend on the author’s mood to get fixes for your software. You can fix it yourself, hire a professional to do it for you, convince (or blackmail) a friend into doing it, or send out a request on one of the many help channels available on the Net. Unlike proprietary software, where only the author has the source code needed to fix security problems, free software has an endless number of fallback resources.

Myth 4

Keeping a security or encryption algorithm proprietary is the only way to ensure that it isn’t cracked.

I can’t do better than to quote Bruce Schneier, a respected cryptography expert, here: "...Security has nothing to do with functionality. You can have two algorithms, one secure and the other insecure, and both can work perfectly. They can encrypt and decrypt, they can be efficient and have a pretty user interface, they can never crash. The only way to tell good cryptography from bad cryptography is to have it examined."

What does this mean? It means that just because an algorithm hasn’t been broken yet doesn’t imply that it’s secure. The only way to ensure that an algorithm is secure is by exposing it to review by a large number of experts from various backgrounds.

Obviously this isn’t possible with proprietary software and algorithms, since experts will want to discuss their research, publish papers, and present their findings, which proprietary software authors don’t permit. Open algorithms and security products, on the other hand, are designed to be secure even when the algorithm is known to everyone. They go through massive amounts of peer review and public exposure, and are only accepted by end users when all probes against them turn out negative. PGP, SSH, IPSec, and SSL are excellent examples of open algorithms and products that have withstood the test of time, against the combined might of crackers as well as benevolent reviewers.

Closed (proprietary) algorithms, however, suffer from the "house-key-under-the-front-doormat" syndrome: you’re absolutely safe as long as your algorithm remains unknown. Once the algorithm is revealed, however, each and every product that relies on it is instantly vulnerable.

Only a very lucky fluke can give you a secure algorithm without public review, and examples abound in the world of security. US digital cellular companies relied on proprietary crypto algorithms; on the day the algorithms were revealed, they were cracked, and today the same companies are considering public algorithms to replace their broken ones. Similarly, Microsoft’s PPTP (a VPN protocol) relied heavily on proprietary encryption techniques. Although it used a known encryption algorithm, it surrounded that algorithm with proprietary infrastructure, which completely negated the algorithm’s strength, and so it was cracked. The problem went through multiple rounds of escalation, with Microsoft fixing problems and cracker organizations finding further problems in the fixes, all of which could have been avoided if Microsoft had chosen a known algorithm and infrastructure in the first place.

In conclusion, then, we see that there’s really no reasonable way of implementing security except by peer review and public scrutiny. There have been many instances where the free software and open development model has scored over the proprietary, closed development model where security issues are concerned, and more turn up each day.

In the short as well as the long run, open software and algorithms will score over their closed cousins in providing trusted and tested secure systems, despite many proprietary software authors’ claims to the contrary.

Raj Mathur is manager, technical marketing at SGI India.
