
Maximum Security: A Hacker's Guide to Protecting Your Internet Site and Network

by Anonymous



Table of Contents:

Introduction

I Setting the Stage

Chapter 1 - Why Did I Write This Book?

Chapter 2 - How This Book Will Help You

Chapter 3 - Hackers and Crackers

Chapter 4 - Just Who Can Be Hacked, Anyway?

II Understanding the Terrain

Chapter 5 - Is Security a Futile Endeavor?

Chapter 6 - A Brief Primer on TCP/IP

Chapter 7 - Birth of a Network: The Internet

Chapter 8 - Internet Warfare

III Tools

Chapter 9 - Scanners

Chapter 10 - Password Crackers

Chapter 11 - Trojans

Chapter 12 - Sniffers

Chapter 13 - Techniques to Hide One's Identity

Chapter 14 - Destructive Devices

IV Platforms and Security

Chapter 15 - The Hole

Chapter 16 - Microsoft

Chapter 17 - UNIX: The Big Kahuna

Chapter 18 - Novell

Chapter 19 - VAX/VMS

Chapter 20 - Macintosh

Chapter 21 - Plan 9 from Bell Labs

V Beginning at Ground Zero

Chapter 22 - Who or What Is Root?

Chapter 23 - An Introduction to Breaching a Server Internally

Chapter 24 - Security Concepts

VI The Remote Attack

Chapter 25 - The Remote Attack

Chapter 26 - Levels of Attack

Chapter 27 - Firewalls

Chapter 28 - Spoofing Attacks

Chapter 29 - Telnet-Based Attacks

Chapter 30 - Language, Extensions, and Security

VII The Law

Chapter 31 - Reality Bytes: Computer Security and the Law

VIII Appendixes

Appendix A - How to Get More Information

Appendix B - Security Consultants

Appendix C - A Hidden Message About the Internet

Appendix D - What's on the CD-ROM

Acknowledgments

My acknowledgments are brief. First, I would like to acknowledge the folks at Sams, particularly Randi Roger, Scott Meyers, Mark Taber, Blake Hall, Eric Murray, Bob Correll, and Kate Shoup. Without them, my work would resemble a tangled, horrible mess. They are an awesome editing team and their expertise is truly extraordinary.

Next, I extend my deepest gratitude to Michael Michaleczko, and Ron and Stacie Latreille. These individuals offered critical support, without which this book could not have been written.

Also, I would like to recognize the significant contribution made by John David Sale, a network security specialist located in Van Nuys, California. His input was invaluable. A similar thanks is also extended to Peter Benson, an Internet and EDI Consultant in Santa Monica, California (who, incidentally, is the current chairman of ASC X12E). Peter's patience was (and is) difficult to fathom. Moreover, I forward a special acknowledgment to David Pennells and his merry band of programmers. Those cats run the most robust and reliable wire in the southwestern United States.

About the Author

The author describes himself as a "UNIX propeller head" and is a dedicated advocate of the Perl programming language, Linux, and FreeBSD.

After spending four years as a system administrator for two California health-care firms, the author started his own security-consulting business. Currently, he specializes in testing the security of various networking platforms (breaking into computer networks and subsequently revealing what holes lead to the unauthorized entry) including but not limited to Novell NetWare, Microsoft Windows NT, SunOS, Solaris, Linux, and Microsoft Windows 95. His most recent assignment was to secure a wide area network that spans from Los Angeles to Montreal.

The author now lives quietly in southern California with a Sun SPARCStation, an IBM RS/6000, two Pentiums, a Macintosh, various remnants of a MicroVAX, and his wife.

In the late 1980s, the author was convicted of a series of financial crimes after developing a technique to circumvent bank security in Automatic Teller Machine systems. He therefore prefers to remain anonymous.

Why Did I Write This Book?

Hacking and cracking are activities that generate intense public interest. Stories of hacked servers and downed Internet providers appear regularly in national news. Consequently, publishers are in a race to deliver books on these subjects. To its credit, the publishing community has not failed in this resolve. Security books appear on shelves in ever-increasing numbers. However, the public remains wary. Consumers recognize driving commercialism when they see it, and are understandably suspicious of books such as this one. They need only browse the shelves of their local bookstore to accurately assess the situation.

Books about Internet security are common (firewall technology seems to dominate the subject list). In such books, the information is often sparse, confined to a narrow range of products. Authors typically include full-text reproductions of stale, dated documents that are readily available on the Net. This poses a problem, mainly because such texts are impractical. Experienced readers are already aware of these reference sources, and inexperienced ones are poorly served by them. Hence, consumers know that they might get little bang for their buck. Because of this trend, Internet security books have sold poorly at America's neighborhood bookstores.

Another reason that such books sell poorly is this: The public erroneously believes that to hack or crack, you must first be a genius or a UNIX guru. Neither is true, though admittedly, certain exploits require advanced knowledge of the target's operating system. However, these exploits can now be simplified through utilities that are available for a wide range of platforms. Despite the availability of such programs, however, the public remains mystified by hacking and cracking, and therefore reluctant to spend forty dollars on a hacking book.

So, at the outset, Sams.net embarked on a rather unusual journey in publishing this book. The Sams.net imprint occupies a place of authority within the field. Better than two thirds of all information professionals I know have purchased at least one Sams.net product. For that reason, this book represented to them a special situation.

Hacking, cracking, and Internet security are all explosive subjects. There is a sharp difference between publishing a primer about C++ and publishing a hacking guide. A book such as this one harbors certain dangers, including

The possibility that readers will use the information maliciously

The possibility of angering the often-secretive Internet-security community

The possibility of angering vendors that have yet to close security holes within their software

If any of these dangers materialize, Sams.net will be subject to scrutiny or perhaps even censure. So, again, if all of this is true, why would Sams.net publish this book?

Sams.net published this book (and I agreed to write it) because there is a real need. I'd like to explain that need for a moment, because it is a matter of some dispute within the Internet community. Many people feel that this need is a manufactured one, a device dreamt up by software vendors specializing in security products. This charge--as the reader will soon learn--is unfounded.

Today, thousands of institutions, businesses, and individuals are going online. This phenomenon--which has been given a dozen different names--is most commonly referred to as the Internet explosion. That explosion has drastically altered the composition of the Internet. By composition of the Internet, I refer to the cyberography of the Net, or the demography of cyberspace. This quality is used to express the now diverse mixture of users (who have varying degrees of online expertise) and their operating systems.

A decade ago, most servers were maintained by personnel with at least basic knowledge of network security. That fact didn't prevent break-ins, of course, but they occurred rarely in proportion to the number of potential targets. Today, the Internet's population is dominated by those without strong security knowledge, many of whom establish direct links to the backbone. The number of viable targets is staggering.

Similarly, individual users are unaware that their personal computers are at risk of penetration. Folks across the country surf the Net using networked operating systems, oblivious to dangers common to their platform. To be blunt, much of America is going online unarmed and unprepared.

You might wonder even more why Sams would publish a book such as this. After all, isn't the dissemination of such information likely to cause (rather than prevent) computer break-ins?

In the short run, yes. Some readers will use this book for dark and unintended purposes. However, this activity will not weaken network security; it will strengthen it. To demonstrate why, I'd like to briefly examine the two most common reasons for security breaches:

Misconfiguration of the victim host

System flaws or deficiency of vendor response

Misconfiguration of the Victim Host

The primary reason for security breaches is misconfiguration of the victim host. Plainly stated, most operating systems ship in an insecure state. There are two manifestations of this phenomenon, which I classify as active and passive states of insecurity in shipped software.

The Active State

The active state of insecurity in shipped software primarily involves network utilities. Certain network utilities, when enabled, create serious security risks. Many software products ship with these options enabled. The resulting risks remain until the system administrator deactivates or properly configures the utility in question.

A good example would be network printing options (the capability of printing over an Ethernet or the Internet). These options might be enabled in a fresh install, leaving the system insecure. It is up to the system administrator (or user) to disable these utilities. However, to disable them, the administrator (or user) must first know of their existence.

You might wonder how a user could be unaware of such utilities. The answer is simple: Think of your favorite word processor. Just how much do you know about it? If you routinely write macros in a word-processing environment, you are an advanced user, one member of a limited class. In contrast, the majority of people use only the basic functions of word processors: text, tables, spell check, and so forth. There is certainly nothing wrong with this approach. Nevertheless, most word processors have more advanced features, which are often missed by casual users.

For example, how many readers who used DOS-based WordPerfect knew that it included a command-line screen-capture utility? It was called Grab. It grabbed the screen in any DOS-based program. At the time, that functionality was unheard of in word processors. The Grab program was extremely powerful when coupled with a sister utility called Convert, which was used to transform other graphic file formats into *.wpg files, a format suitable for importation into a WordPerfect document. Both utilities were called from a command line in the C:\WP directory. Neither were directly accessible from within the WordPerfect environment. So, despite the power of these two utilities, they were not well known.

Similarly, users might know little about the inner workings of their favorite operating system. For most, the cost of acquiring such knowledge far exceeds the value. Oh, they pick up tidbits over the years. Perhaps they read computer periodicals that feature occasional tips and tricks. Or perhaps they learn because they are required to, at a job or other official position where extensive training is offered. No matter how they acquire the knowledge, nearly everyone knows something cool about their operating system. (Example: the Microsoft programming team easter egg in Windows 95.)

The Microsoft programming team easter egg: The Microsoft programming team easter egg is a program hidden in the heart of Windows 95. When you enter the correct keystrokes and undertake the correct actions, this program displays the names of each programmer responsible for Windows 95. To view that easter egg, perform the following steps:

1. Right-click the Desktop and choose New|Folder.
2. Name that folder "and now the moment you've all been waiting for".
3. Right-click that folder and choose Rename.
4. Rename the folder "we proudly present for your viewing pleasure".
5. Right-click the folder and choose Rename.
6. Rename the folder "The Microsoft Windows 95 Product Team!".
7. Open that folder by double-clicking it.

The preceding steps will lead to the appearance of a multimedia presentation about the folks who coded Windows 95. (A word of caution: The presentation is quite long.)

Unfortunately, keeping up with the times is difficult. The software industry is a dynamic environment, and users are generally two years behind development. This lag in the assimilation of new technology only contributes to the security problem. When an operating-system development team materially alters its product, a large class of users is suddenly left knowing less. Microsoft Windows 95 is a good example of this phenomenon. New support has been added for many different protocols: protocols with which the average Windows user might not be familiar. So, it is possible (and probable) that users might be unaware of obscure network utilities at work with their operating systems.

This is especially so with UNIX-based operating systems, but for a slightly different reason. UNIX is a large and inherently complex system. Comparing it to other operating systems can be instructive. DOS contains perhaps 30 commonly used commands. In contrast, a stock distribution of UNIX (without considering windowed systems) supports several hundred commands. Further, each command has one or more command-line options, increasing the complexity of each utility or program.

In any case, in the active state of insecurity in shipped software, utilities are enabled and this fact is unknown to the user. These utilities, while enabled, can foster security holes of varying magnitude. When a machine configured in this manner is connected to the Net, it is a hack waiting to happen.
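Before an offending utility can be turned off, the administrator has to notice that it is running. Purely as an illustration (this is a minimal sketch in modern Python, not a tool shipped with this book, and the port list is only a small, illustrative sample), the following checks which common TCP services answer on the local machine:

```python
#!/usr/bin/env python3
"""Minimal sketch: find out which common TCP services are listening
on the local machine. The port list is illustrative, not exhaustive."""
import socket

COMMON_PORTS = {
    21: "ftp",
    23: "telnet",
    25: "smtp",
    79: "finger",
    111: "portmapper",
    139: "netbios-ssn",
    515: "lpd (network printing)",
    6000: "X11",
}

def is_listening(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port, name in sorted(COMMON_PORTS.items()):
        if is_listening("127.0.0.1", port):
            print(f"port {port:5d} ({name}) is enabled -- is it needed?")
```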

Active state problems are easily remedied. The solution is to turn off (or properly configure) the offending utility or service. Typical examples of active state problems include

Network printing utilities

File-sharing utilities

Default passwords

Sample networking programs

Of the examples listed, default passwords are the most common. Most multiuser operating systems on the market have at least one default password (or an account requiring no password at all).
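To make the default-password problem concrete, here is a minimal sketch, again in modern Python rather than anything from this book. The host name is a placeholder and the credential list is illustrative; run something like this only against machines you administer:

```python
#!/usr/bin/env python3
"""Minimal sketch: check an FTP service you administer for a few
well-known default or anonymous accounts."""
from ftplib import FTP, all_errors

DEFAULT_CREDS = [
    ("anonymous", "guest@example.com"),   # classic anonymous FTP
    ("ftp", "ftp"),
    ("guest", "guest"),
]

def check_defaults(host):
    """Report any default credential pair that the FTP server accepts."""
    for user, password in DEFAULT_CREDS:
        try:
            with FTP(host, timeout=5) as ftp:
                ftp.login(user, password)
                print(f"WARNING: {user}/{password} accepted on {host}")
        except all_errors:
            pass  # rejected login or failed connection -- nothing to report

if __name__ == "__main__":
    check_defaults("ftp.example.com")   # placeholder host
```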

The Passive State

The passive state involves operating systems with built-in security utilities. These utilities can be quite effective when enabled, but remain worthless until the system administrator activates them. In the passive state, these utilities are never activated, usually because the user is unaware that they exist. Again, the source of the problem is the same: The user or system administrator lacks adequate knowledge of the system.

To understand the passive state, consider logging utilities. Many networked operating systems provide good logging utilities. These comprise the cornerstone of any investigation. Often, these utilities are not set to active in a fresh installation. (Vendors might leave this choice to the system administrator for a variety of reasons. For example, certain logging utilities consume space on local drives by generating large text or database files. Machines with limited storage are poor candidates for conducting heavy logging.) Because vendors cannot guess the hardware configuration of the consumer's machine, logging choices are almost always left to the end-user.
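As a rough illustration of how an administrator might notice that logging was never switched on, consider the following sketch. It is modern Python, not part of this book; the file paths are typical UNIX locations and will vary by platform:

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether common UNIX log files exist and have
been written to recently. Paths vary by platform; this list is illustrative."""
import os
import time

LOG_FILES = [
    "/var/log/messages",
    "/var/log/syslog",
    "/var/log/auth.log",
    "/var/adm/messages",
]
STALE_AFTER = 24 * 60 * 60   # a full day without writes is treated as suspicious here

for path in LOG_FILES:
    if not os.path.exists(path):
        print(f"{path}: not present")
        continue
    age = time.time() - os.path.getmtime(path)
    status = "active" if age < STALE_AFTER else "stale -- is logging enabled?"
    print(f"{path}: last written {age / 3600:.1f} hours ago ({status})")
```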

Other situations that result in passive-state insecurity can arise: Situations where user knowledge (or lack thereof) is not the problem. For instance, certain security utilities are simply impractical. Consider security programs that administer file-access privileges (such as those that restrict user access depending on security level, time of day, and so forth). Perhaps your small network cannot operate with fluidity and efficiency if advanced access restrictions are enabled. If so, you must take that chance, perhaps implementing other security procedures to compensate. In essence, these issues are the basis of security theory: You must balance the risks against practical security measures, based on the sensitivity of your network data.

You will notice that both active and passive states of insecurity in software result from the consumer's lack of knowledge (not from any vendor's act or omission). This is an education issue, and education is a theme that will recur throughout this book.

NOTE: Education issues are matters entirely within your control. That is, you can eliminate these problems by providing yourself or your associates with adequate education. (Put another way, crackers can gain most effectively by attacking networks where such knowledge is lacking.) That settled, I want to examine matters that might not be within the end-user's control.

System Flaws or Deficiency of Vendor Response

System flaws or deficiency of vendor response are matters beyond the end-user's control. Although vendors might argue this point furiously, here's a fact: These factors are the second most common source of security problems. Anyone who subscribes to a bug mailing list knows this. Each day, bugs or programming weaknesses are found in network software. Each day, these are posted to the Internet in advisories or warnings. Unfortunately, not all users read such advisories.

System flaws needn't be classified into many subcategories here. It's sufficient to say that a system flaw is any element of a program that causes the program to

Work improperly (under either normal or extreme conditions)

Allow crackers to exploit that weakness (or improper operation) to damage or gain control of a system

I am concerned with two types of system flaws. The first, which I call a pure flaw, is a security flaw nested within the security structure itself. It is a flaw inherent within a security-related program. By exploiting it, a cracker obtains one-step, unauthorized access to the system or its data.

The Netscape secure sockets layer flaw: In January, 1996, two students in the Computer Science department at the University of California, Berkeley highlighted a serious flaw in the Netscape Navigator encryption scheme. Their findings were published in Dr. Dobb's Journal. The article was titled Randomness and the Netscape Browser by Ian Goldberg and David Wagner. In it, Goldberg and Wagner explain that Netscape's implementation of a cryptographic protocol called Secure Sockets Layer (SSL) was inherently flawed. This flaw would allow secure communications intercepted on the WWW to be cracked. This is an excellent example of a pure flaw. (It should be noted here that the flaw in Netscape's SSL implementation was originally discovered by an individual in France. However, Goldberg and Wagner were the first individuals in the United States to provide a detailed analysis of it.)
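To illustrate the class of flaw Goldberg and Wagner described (without reproducing their work or Netscape's actual code), consider the following sketch. A "session key" derived from a generator seeded with guessable values, such as the time of day and a process ID, can be recovered by searching the small space of plausible seeds. The sketch is modern Python and purely illustrative:

```python
#!/usr/bin/env python3
"""Illustration only: a key derived from a PRNG seeded with predictable
values (clock, process ID) can be recovered by brute-forcing the seed."""
import os
import random
import time

def make_key(seed):
    """Derive a 128-bit 'session key' from a PRNG seeded with `seed`."""
    return random.Random(seed).getrandbits(128)

def recover_seed(key, now):
    """Search recent timestamps and classic-range PIDs for the victim's seed."""
    for t in range(now - 5, now + 1):
        for pid in range(1, 32768):
            if make_key(t ^ pid) == key:
                return t, pid
    return None

# Victim: seeds the generator from the clock and the process ID, both of
# which an observer can estimate. (PID reduced to the classic 15-bit range
# to keep the demonstration search small.)
victim_seed = int(time.time()) ^ (os.getpid() % 32768)
session_key = make_key(victim_seed)

# Attacker: knows roughly when the key was created and tries every nearby
# combination until the derived key matches.
print("recovered seed components:", recover_seed(session_key, int(time.time())))
```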

Conversely, there are secondary flaws. A secondary flaw is any flaw arising in a program that, while totally unrelated to security, opens a security hole elsewhere on the system. In other words, the programmers were charged with making the program functional, not secure. No one (at the time the program was designed) imagined cause for concern, nor did they imagine that such a flaw could arise.

Secondary flaws are far more common than pure flaws, particularly on platforms that have not traditionally been security oriented. An example of a secondary security flaw is any flaw within a program that requires special access privileges in order to complete its tasks (in other words, a program that must run with root or superuser privileges). If that program can be attacked, the cracker can work through that program to gain special, privileged access to files. Historically, printer utilities have been problems in this area. (For example, in late 1996, SGI determined that root privileges could be obtained through the Netprint utility in its IRIX operating system.)
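As a concrete illustration of why such privileged programs deserve scrutiny, the following sketch (modern Python, not from this book) lists set-user-ID-root programs under a directory tree, roughly equivalent to find /usr -perm -4000 -user root:

```python
#!/usr/bin/env python3
"""Minimal sketch: list setuid-root programs under a directory tree.
Each one runs with root privileges and is therefore a potential
avenue for a secondary flaw."""
import os
import stat

def find_setuid_root(top):
    """Yield regular files under `top` that are setuid and owned by root."""
    for dirpath, _dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            if (stat.S_ISREG(st.st_mode)
                    and st.st_mode & stat.S_ISUID
                    and st.st_uid == 0):
                yield path

if __name__ == "__main__":
    for program in find_setuid_root("/usr"):
        print(program)
```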

Whether pure or secondary, system flaws are especially dangerous to the Internet community because they often emerge in programs that are used on a daily basis, such as FTP or Telnet. These mission-critical applications form the very heart of the Internet and cannot be suddenly taken away, even if a security flaw exists within them.

To understand this concept, imagine if Microsoft Word were discovered to be totally insecure. Would people stop using it? Of course not. Millions of offices throughout the world rely on Word. However, there is a vast difference between a serious security flaw in Microsoft Word and a serious security flaw in NCSA HTTPD, which is a popular Web-server package. The serious flaw in HTTPD would place hundreds of thousands of servers (and therefore, millions of accounts) at risk. Because of the Internet's size and the services it now offers, flaws inherent within its security structure are of international concern.

So, whenever a flaw is discovered within sendmail, FTP, Gopher, HTTP, or other indispensable elements of the Internet, programmers develop patches (small programs or source code) to temporarily solve the problem. These patches are distributed to the world at large, along with detailed advisories. This brings us to vendor response.

Vendor Response

Vendor response has traditionally been good, but this shouldn't give you a false sense of security. Vendors are in the business of selling software. To them, there is nothing fascinating about someone discovering a hole in the system. At best, a security hole represents a loss of revenue or prestige. Accordingly, vendors quickly issue assurances to allay users' fears, but actual corrective action can sometimes be long in coming.

The reasons for this can be complex, and often the vendor is not to blame. Sometimes, immediate corrective action just isn't feasible, as in the following cases:

When the affected application is comprehensively tied to the operating-system source

When the application is very widely in use or is a standard

When the application is third-party software and that third party has poor support, has gone out of business, or is otherwise unavailable

In these instances, a patch (or other solution) can provide temporary relief. However, for this system to work effectively, all users must know that the patch is available. Notifying the public would seem to be the vendor's responsibility and, to be fair, vendors post such patches to security groups and mailing lists. However, vendors might not always take the extra step of informing the general public. In many cases, it just isn't cost effective.

Once again, this issue breaks down to knowledge. Users who have good knowledge of their network utilities, of holes, and of patches are well prepared. Users without such knowledge tend to be victims. That, more than any other reason, is why I wrote this book. In a nutshell, security education is the best policy.

Why Education in Security Is Important

Traditionally, security folks have attempted to obscure security information from the average user. As such, security specialists occupy positions of prestige in the computing world. They are regarded as high priests of arcane and recondite knowledge that is unavailable to normal folks. There was a time when this approach had merit. After all, users should be afforded such information only on a need-to-know basis. However, the average American has now achieved need-to-know status.

So, I pose the question again: Who needs to be educated about Internet security? The answer is: We all do. I hope that this book, which is both a cracker's manual and an Internet security reference, will force into the foreground issues that need to be discussed. Moreover, I wrote this book to increase awareness of security among the general public. As such, this book starts with basic information and progresses with increasing complexity. For the absolute novice, this book is best read cover to cover. Equally, those readers familiar with security will want to quickly venture into later chapters.

The answer to the question regarding the importance of education and Internet security depends on your station in life. If you are a merchant or business person, the answer is straightforward: In order to conduct commerce on the Net, you must be assured of some reasonable level of data security. This reason is also shared by consumers. If crackers are capable of capturing Net traffic containing sensitive financial data, why buy over the Internet? And of course, between the consumer and the merchant stands yet another class of individual concerned with data security: the software vendor who supplies the tools to facilitate that commerce. These parties (and their reasons for security) are obvious. However, there are some not so obvious reasons.

Privacy is one such concern. The Internet represents the first real evidence that an Orwellian society can be established. Every user should be aware that nonencrypted communication across the Internet is totally insecure. Likewise, each user should be aware that government agencies--not crackers--pose the greatest threat. Although the Internet is a wonderful resource for research or recreation, it is not your friend (at least, not if you have anything to hide).

There are other more concrete reasons to promote security education. I will focus on these for a moment. The Internet is becoming more popular. Each day, development firms introduce new and innovative ways to use the Network. It is likely that within five years, the Internet will become an important and functional part of our lives.

The Corporate Sector

For the moment, set aside dramatic scenarios such as corporate espionage. These subjects are exciting for purposes of discussion, but their actual incidence is rare. Instead, I'd like to concentrate on a very real problem: cost.

The average corporate database is designed using proprietary software. Licensing fees for these big database packages can amount to tens of thousands of dollars. Fixed costs of these databases include programming, maintenance, and upgrade fees. In short, development and sustained use of a large, corporate database is costly and labor intensive.

When a firm maintains such a database onsite but without connecting it to the Internet, security is a limited concern. To be fair, an administrator must grasp the basics of network security to prevent aspiring hackers in this or that department from gaining unauthorized access to data. Nevertheless, the number of potential perpetrators is limited and access is usually restricted to a few, well-known protocols.

Now, take that same database and connect it to the Net. Suddenly, the picture is drastically different. First, the number of potential perpetrators is unknown and unlimited. An attack could originate from anywhere, here or overseas. Furthermore, access is no longer limited to one or two protocols.

The very simple operation of connecting that database to the Internet opens many avenues of entry. For example, database access architecture might require the use of one or more foreign languages to get the data from the database to the HTML page. I have seen scenarios that were incredibly complex. In one, I observed a six-part process. From the moment the user clicked a Submit button, a series of operations was undertaken:

1. The variable search terms submitted by the user were extracted and parsed by a Perl script.

2. The Perl script fed these variables to an intermediate program designed to interface with a proprietary database package.

3. The proprietary database package returned the result, passing it back to a Perl script that formatted the data into HTML.

Anyone legitimately employed in Internet security can see that this scenario was a disaster waiting to happen. Each stage of the operation boasted a potential security hole. For exactly this reason, the development of database security techniques is now a hot subject in many circles.
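To show the kind of hole each stage can harbor, here is a minimal sketch of the first stage, written in modern Python rather than the Perl of the scenario above. The dblookup helper is hypothetical; the point is the difference between pasting user input into a shell command and passing it as a plain argument:

```python
#!/usr/bin/env python3
"""Minimal sketch: parse submitted search terms and hand them to an
external lookup program. 'dblookup' is a hypothetical helper."""
import subprocess
from urllib.parse import parse_qs

# Search terms as they might arrive from a Submit button; %3B, %40 and %3C
# decode to ';', '@' and '<'.
query_string = "terms=widgets+%3B+mail+attacker%40example.com+%3C+/etc/passwd"
terms = parse_qs(query_string).get("terms", [""])[0]

# UNSAFE: with shell=True the user's text is pasted into a shell command,
# so metacharacters (';', '|', '<', backticks) become commands of their own.
# subprocess.run("dblookup " + terms, shell=True)

# Safer: pass the text as a single argument; no shell ever interprets it.
try:
    result = subprocess.run(["dblookup", terms], capture_output=True, text=True)
    print(result.stdout)
except FileNotFoundError:
    print("dblookup is hypothetical; the shape of the call is the point")
```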

Administrative personnel are sometimes quick to deny (or restrict) funding for security within their corporation. They see this cost as unnecessary, largely because they do not understand the dire nature of the alternative. The reality is this: One or more talented crackers could--in minutes or hours--destroy several years of data entry.

Before business on the Internet can be reliably conducted, some acceptable level of security must be reached. For companies, education is an economical way to achieve at least minimal security. What they spend now may save many times that amount later.

Government

Folklore and common sense both suggest that government agencies know something more, something special about computer security. Unfortunately, this simply isn't true (with the notable exception of the National Security Agency). As you will learn, government agencies routinely fail in their quest for security.

In the following chapters, I will examine various reports (including one very recent one) that demonstrate the poor security now maintained by U.S. government servers. The sensitivity of data accessed by hackers is amazing.

These arms of government (and their attending institutions) hold some of the most personal data on Americans. More importantly, these folks hold sensitive data related to national security. At the minimum, this information needs to be protected.

Operating Systems

There is substantial rivalry on the Internet between users of different operating systems. Let me make one thing clear: It does not matter which operating system you use. Unless it is a secure operating system (that is, one where the main purpose of its design is network security), there will always be security holes, apparent or otherwise. True, studies have shown that to date, fewer holes have been found in Mac and PC-based operating systems (as opposed to UNIX, for example), at least in the context of the Internet. However, such studies are probably premature and unreliable.

Open Systems

UNIX is an open system. As such, its source is available to the public for examination. In fact, many common UNIX programs come only in source form. Others include binary distributions, but still include the source. (An illustrative example would be the Gopher package from the University of Minnesota.) Because of this, much is known about the UNIX operating system and its security flaws. Hackers can inexpensively establish Linux boxes in their homes and hack until their faces turn blue.

Closed and Proprietary Systems

Conversely, the source of proprietary and closed operating systems is unavailable. The manufacturers of such software furiously protect their source, claiming it to be a trade secret. As these proprietary operating systems gravitate to the Net, their security flaws will become more readily apparent. To be frank, this process depends largely on the cracking community. As crackers put these operating systems (and their newly implemented TCP/IP) to the test, interesting results will undoubtedly emerge. But, to my point.

We no longer live in a world governed exclusively by a single operating system. As the Internet grows in scope and size, all operating systems known to humankind will become integral parts of the network. Therefore, operating-system rivalry must be replaced by a more sensible approach. Network security now depends on having good, general security knowledge. (Or, from another angle, successful hacking and cracking depends on knowing all platforms, not just one.) So, I ask my readers to temporarily put aside their bias. In terms of the Internet at least, the security of each one of us depends on us all and that is no trivial statement.

How Will This Book Affect the Internet Community?

This section begins with a short bedtime story. It is called The Loneliness of the Long-Distance Net Surfer.

The Information Superhighway is a dangerous place. Oh, the main highway isn't so bad. Prodigy, America Online, Microsoft Network...these are fairly clean thoroughfares. They are beautifully paved, with colorful signs and helpful hints on where to go and what to do. But pick a wrong exit, and you travel down a different highway: one littered with burned-out vehicles, overturned dumpsters, and graffiti on the walls. You see smoke rising from fires set on each side of the road. If you listen, you can hear echoes of a distant subway mixed with strange, exotic music.

You pull to a stop and roll down the window. An insane man stumbles from an alley, his tattered clothes blowing in the wind. He careens toward your vehicle, his weathered shoes scraping against broken glass and concrete. He is mumbling as he approaches your window. He leans in and you can smell his acrid breath. He smiles--missing two front teeth--and says "Hey, buddy...got a light?" You reach for the lighter, he reaches for a knife. As he slits your throat, his accomplices emerge from the shadows. They descend on your car as you fade into unconsciousness. Another Net Surfer bites the dust. Others decry your fate. He should have stayed on the main road! Didn't the people at the pub tell him so? Unlucky fellow.

This snippet is an exaggeration; a parody of horror stories often posted to the Net. Most commonly, they are posted by commercial entities seeking to capitalize on your fears and limited understanding of the Internet. These stories are invariably followed by endorsements for this or that product. Protect your business! Shield yourself now! This is an example of a phenomenon I refer to as Internet voodoo. To practitioners of this secret art, the average user appears as a rather gullible chap. A sucker.

If this book accomplishes nothing else, I hope it plays a small part in eradicating Internet voodoo. It provides enough education to shield the user (or new system administrator) from unscrupulous forces on the Net. Such forces give the Internet-security field a bad name.

I am uncertain as to what other effects this book might have on the Internet community. I suspect that these effects will be subtle or even imperceptible. Some of these effects might admittedly be negative and for this, I apologize. I am aware that Chapter 9, "Scanners," where I make most of the known scanners accessible to and easily understood by anyone, will probably result in a slew of network attacks (probably initiated by youngsters just beginning their education in hacking or cracking). Nevertheless, I am hoping that new network administrators will also employ these tools against their own networks. In essence, I have tried to provide a gateway through which any user can become security literate. I believe that the widespread dissemination of security material will result in an increased number of hackers (and perhaps, crackers).

Summary

I hope this chapter clearly articulates the reasons I wrote this book:

To provide inexperienced users with a comprehensive source about security

To provide system administrators with a reference book

To generally heighten public awareness of the need for adequate security

There is also another reason, one that is less general: I wanted to narrow the gap between the radical and conservative information now available about Internet security. It is significant that many valuable contributions to Internet security have come from the fringe (a sector seldom recognized for its work). To provide the Internet community with a book of value, these fringe elements had to be included.

The trouble is, if you examine security documents from the fringe, they are very grass roots and revolutionary. This style--which is uniquely American if nothing else--is often a bit much for square security folks. Likewise, serious security documents can be stuffy, academic, and, to be frank, boring. I wanted to deliver a book of equal value to readers aiming for either camp. I think that I have.

How This Book Will Help You

Prior to writing this book, I had extensive discussions with the Sams.net editorial staff. In those discussions, one thing became immediately clear: Sams.net wanted a book that was valuable to all users, not just to a special class of them. An examination of earlier books on the subject proved instructive. The majority were well written and tastefully presented, but appealed primarily to UNIX or NT system administrators. I recognized that while this class of individuals is an important one, there are millions of average users yearning for basic knowledge of security. To accommodate that need, I aimed at creating an all-purpose Internet security book.

To do so, I had to break some conventions. Accordingly, this book probably differs from other Sams.net books in both content and form. Nevertheless, the book contains copious knowledge, and there are different ways to access it. This chapter briefly outlines how the reader can most effectively access and implement that knowledge.

Is This Book of Practical Use?

Is this book of practical use? Absolutely. It can serve both as a reference book and a general primer. The key for each reader is to determine what information is most important to him or her. The book loosely follows two conventional designs common to books by Sams.net:

Evolutionary ordering (where each chapter arises, in some measure, from information in an earlier one)

Developmental ordering (where you travel from the very simple to the complex)

This book is a hybrid of both techniques. For example, the book examines services in the TCP/IP suite, then quickly progresses to how those services are integrated in modern browsers, how such services are compromised, and ultimately, how to secure against such compromises. In this respect, there is an evolutionary pattern to the book.

At the same time, the book begins with a general examination of the structure of the Internet and TCP/IP (which will seem light in comparison to later analyses of sniffing, where you examine the actual construct of an information packet). As you progress, the information becomes more and more advanced. In this respect, there is a developmental pattern to the book.

Using This Book Effectively: Who Are You?

Different people will derive different benefits from this book, depending on their circumstances. I urge each reader to closely examine the following categories. The information will be most valuable to you whether you are

A system administrator

A hacker

A cracker

A business person

A journalist

A casual user

A security specialist

I want to cover these categories and how this book can be valuable to each. If you do not fit cleanly into one of these categories, try the category that best describes you.

System Administrator

A system administrator is any person charged with managing a network or any portion of a network. Sometimes, people might not realize that they are a system administrator. In small companies, for example, programming duties and system administration are sometimes assigned to a single person. Thus, this person is a general, all-purpose technician. They keep the system running, add new accounts, and basically perform any task required on a day-to-day basis. This, for your purposes, is a system administrator.

What This Book Offers the System Administrator

This book presumes only basic knowledge of security from its system administrators, and I believe that this is reasonable. Many capable system administrators are not well versed in security, not because they are lazy or incompetent but because security was for them (until now) not an issue. For example, consider the sysad who lords over an internal LAN. One day, the powers that be decree that the LAN must establish a connection to the Net. Suddenly, that sysad is thrown into an entirely different (and hostile) environment. He or she might be exceptionally skilled at internal security but have little practical experience with the Internet. Today, numerous system administrators are faced with this dilemma. For many, additional funding to hire on-site security specialists is not available and thus, these people must go it alone. Not anymore. This book will serve such system administrators well as an introduction to Internet security.

Likewise, more experienced system administrators can effectively use this book to learn--or perhaps refresh their knowledge about--various aspects of Internet security that have been sparsely covered in books mass-produced for the general public.

For either class of sysad, this book will serve a fundamental purpose: It will assist them in protecting their network. Most importantly, this book shows the attack from both sides of the fence. It shows both how to attack and how to defend in a real-life, combat situation.

Hacker

The term hacker refers to programmers and not to those who unlawfully breach the security of systems. A hacker is any person who investigates the integrity and security of an operating system. Most commonly, these individuals are programmers. They usually have advanced knowledge of both hardware and software and are capable of rigging (or hacking) systems in innovative ways. Often, hackers determine new ways to utilize or implement a network, ways that software manufacturers had not expressly intended.

What This Book Offers the Hacker

This book presumes only basic knowledge of Internet security from its hackers and programmers. For them, this book will provide insight into the Net's most common security weaknesses. It will show how programmers must be aware of these weaknesses. There is an ever-increasing market for those who can code client/server applications, particularly for use on the Net. This book will help programmers make informed decisions about how to develop code safely and cleanly. As an added benefit, analysis of existing network utilities (and their deficiencies) may assist programmers in developing newer and perhaps more effective applications for the Internet.

Cracker

A cracker is any individual who uses advanced knowledge of the Internet (or networks) to compromise network security. Historically, this activity involved cracking encrypted password files, but today, crackers employ a wide range of techniques. Hackers also sometimes test the security of networks, often with the identical tools and techniques used by crackers. To differentiate between these two groups on a trivial level, simply remember this: Crackers engage in such activities without authorization. As such, most cracking activity is unlawful, illegal, and therefore punishable by a term of imprisonment.

What This Book Offers the Cracker

For the budding cracker, this book provides an incisive shortcut to knowledge of cracking that is difficult to acquire. All crackers start somewhere, many on the famous Usenet group alt.2600. As more new users flood the Internet, quality information about cracking (and security) becomes more difficult to find. The range of information is not well represented. Often, texts go from the incredibly fundamental to the excruciatingly technical. There is little material that is in between. This book will save the new cracker hundreds of hours of reading by digesting both the fundamental and the technical into a single (and I hope) well-crafted presentation.

Business Person

For your purposes, business person refers to any individual who has established (or will establish) a commercial enterprise that uses the Internet as a medium. Hence, a business person--within the meaning employed in this book--is anyone who conducts commerce over the Internet by offering goods or services.

NOTE: It does not matter whether these goods or services are offered free as a promotional service. I still classify this as business.

What This Book Offers the Business Person

Businesses establish permanent connections each day. If yours is one of them, this book will help you in many ways, such as helping you make informed decisions about security. It will prepare you for unscrupulous security specialists, who may charge you thousands of dollars to perform basic, system-administration tasks. This book will also offer a basic framework for your internal security policies. You have probably read dozens of dramatic accounts about hackers and crackers, but these materials are largely sensationalized. (Commercial vendors often capitalize on your fear by spreading such stories.) The techniques that will be employed against your system are simple and methodical. Know them, and you will know at least the basics about how to protect your data.

Journalist

A journalist is any party who is charged with reporting on the Internet. This can be someone who works for a wire news service or a college student writing for his or her university newspaper. The classification has nothing to do with how much money is paid for the reporting, nor where the reporting is published.

What This Book Offers the Journalist

If you are a journalist, you know that security personnel rarely talk to the media. That is, they rarely provide an inside look at Internet security (and when they do, this usually comes in the form of assurances that might or might not have value). This book will assist journalists in finding good sources and solid answers to questions they might have. Moreover, this book will give the journalist who is new to security an overall view of the terrain. Technology writing is difficult and takes considerable research. My intent is to narrow that field of research for journalists who want to cover the Internet. In coming years, this type of reporting (whether by print or broadcast media) will become more prevalent.

Casual User

A casual user is any individual who uses the Internet purely as a source of entertainment. Such users rarely spend more than 10 hours a week on the Net. They surf subjects that are of personal interest.

What This Book Offers the Casual User

For the casual user, this book will provide an understanding of the Internet's innermost workings. It will prepare the reader for personal attacks of various kinds, not only from other, hostile users, but from the prying eyes of government. Essentially, this book will inform the reader that the Internet is not a toy, that one's identity can be traced and bad things can happen while using the Net. For the casual user, this book might well be retitled How to Avoid Getting Hijacked on the Information Superhighway.

Security Specialist

A security specialist is anyone charged with securing one or more networks from attack. It is not necessary that they get paid for their services in order to qualify in this category. Some people do this as a hobby. If they do it, they are a specialist.

What This Book Offers the Security Specialist

If your job is security, this book can serve as one of two things:

A reference book

An in-depth look at various tools now being employed in the void

NOTE: In this book, the void refers to that portion of the Internet that exists beyond your router or modem. It is the dark, swirling mass of machines, services, and users beyond your computer or network. These are quantities that are unknown to you. This term is commonly used in security circles to refer to such quantities.

Much of the information covered here will be painfully familiar to the security specialist. Some of the material, however, might not be so familiar. (Most notably, some cross-platform materials for those maintaining networks with multiple operating systems.) Additionally, this book imparts a comprehensive view of security, encapsulated into a single text. (And naturally, the materials on the CD-ROM will provide convenience and utility.)

The Good, the Bad, and the Ugly

How you use this book is up to you. If you purchased or otherwise procured this book as a tool to facilitate illegal activities, so be it. You will not be disappointed, for the information contained within is well suited to such undertakings. However, note that this author does not suggest (nor does he condone) such activities. Those who unlawfully penetrate networks seldom do so for fun and often pursue destructive objectives. Considering how long it takes to establish a network, write software, configure hardware, and maintain databases, it is abhorrent to the hacking community that the cracking community should be destructive. Still, that is a choice and one choice--even a bad one--is better than no choice at all. Crackers serve a purpose within the scheme of security, too. They assist the good guys in discovering faults inherent within the network.

Whether you are good, bad, or ugly, here are some tips on how to effectively use this book:

If you are charged with understanding in detail a certain aspect of security, follow the notes closely. Full citations appear in these notes, often showing multiple locations for a security document, RFC, FYI, or IDraft. Digested versions of such documents can never replace having the original, unabridged text.

The end of each chapter contains a small rehash of the information covered. For extremely handy reference, especially for those already familiar with the utilities and concepts discussed, this "Summary" portion of the chapter is quite valuable.

Certain examples contained within this book are available on the CD-ROM. Whenever you see the CD-ROM icon on the outside margin of a page, the resource is available on the CD. This might be source code, technical documents, an HTML presentation, system logs, or other valuable information.

The Book's Parts

The next sections describe the book's various parts. Contained within each description is a list of subjects covered within that part.

Part I: Setting the Stage

Part I of this book will be of the greatest value to users who have just joined the Internet community. Topics include

Why I wrote this book
Why you need security
Definitions of hacking and cracking
Who is vulnerable to attack

Essentially, Part I sets the stage for the remaining parts of this book. It will assist readers in understanding the current climate on the Net.

Part II: Understanding the Terrain

Part II of this book is probably the most critical. It illustrates the basic design of the Internet. Each reader must understand this design before he or she can effectively grasp concepts in security. Topics include

Who created the Internet and why
How the Internet is designed and how it works
Poor security on the Internet and the reasons for it
Internet warfare as it relates to individuals and networks

In short, you will examine why and how the Internet was established, what services are available, the emergence of the WWW, why security might be difficult to achieve, and various techniques for living in a hostile computing environment.

Part III: Tools

Part III of this book examines the average toolbox of the hacker or cracker. It familiarizes the reader with Internet munitions, or weapons. It covers the proliferation of such weapons, who creates them, who uses them, how they work, and how the reader can use them. Some of the munitions covered are

Password crackers
Trojans
Sniffers
Tools to aid in obscuring one's identity
Scanners
Destructive devices, such as e-mail bombs and viruses

The coverage necessarily includes real-life examples. This part of the book will be most useful to readers engaging in or about to engage in Internet security warfare.

Part IV: Platforms and Security

Part IV of this book ventures into more complex territory, treating vulnerabilities inherent in certain operating systems or applications. At this point, the book forks, concentrating on issues relevant to particular classes of users. (For example, if you are a Novell user, you will naturally gravitate to the Novell chapter.)

Part IV begins with basic discussion of security weaknesses, how they develop, and sources of information in identifying them. Part IV then progresses to platforms, including

Microsoft
UNIX
Novell
VAX/VMS
Macintosh
Plan 9 from Bell Labs

Part V: Beginning at Ground Zero

Part V of this book examines who has the power on a given network. I will discuss the relationship between these authoritarian figures and their users, as well as abstract and philosophical views on Internet security. At this point, the material is most suited for those who will be living with security issues each day. Topics include

Root, supervisor, and administrator accounts
Techniques of breaching security internally
Security concepts and philosophy

Part VI: The Remote Attack

Part VI of this book concerns attacks: actual techniques to facilitate the compromise of a remote computer system. In it, I will discuss levels of attack, what these mean, and how one can prepare for them. You will examine various techniques in depth: so in depth that the average user can grasp--and perhaps implement--attacks of this nature. Part VI also examines complex subjects regarding the coding of safe CGI programs, weaknesses of various computer languages, and the relative strengths of certain authentication procedures. Topics discussed in this part include

Definition of a remote attack
Various levels of attack and their dangers
Sniffing techniques
Spoofing techniques
Attacks on Web servers
Attacks based on weaknesses within various programming languages

Part VII: The Law

Part VII confronts the legal, ethical, and social ramifications of Internet security and the lack, compromise, and maintenance thereof.

This Book's Limitations

The scope of this book is wide, but there are limitations on the usefulness of the information. Before examining these individually, I want to make something clear: Internet security is a complex subject. If you are charged with securing a network, relying solely upon this book is a mistake. No book has yet been written that can replace the experience, gut feeling, and basic savvy of a good system administrator. It is likely that no such book will ever be written. That settled, some points on this book's limitations include the following:

Timeliness
Utility

Timeliness

I commenced this project in January, 1997. Undoubtedly, hundreds of holes have emerged or been plugged since then. Thus, the first limitation of this book relates to timeliness.

Timeliness might or might not be a huge factor in the value of this book. I say might or might not for one reason only: Many people do not use the latest and the greatest in software or hardware. Economic and administrative realities often preclude this. Thus, there are LANs now operating on Windows for Workgroups that are permanently connected to the Net. Similarly, some individuals are using SPARCstation 1s running SunOS 4.1.3 for access. Because older software and hardware exist in the void, much of the material here will remain current. (Good examples are machines with fresh installs of an older operating system that has now been proven to contain numerous security bugs.)

Equally, I advise the reader to read carefully. Certain bugs examined in this book are common to a single version of software only (for example, Windows NT Server 3.51). The reader must pay particular attention to version information. One version of a given software product might harbor a bug, whereas a later version does not. The security of the Internet is not a static thing. New holes are discovered at the rate of one per day. (Unfortunately, such holes often take much longer to fix.)
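Determining what version a remote host actually runs is often as simple as reading the banner a service prints on connect; many daemons (FTP, SMTP, SSH) announce their software and version the moment you connect. The following is a minimal modern-Python sketch, not a tool from this book, and the host is a placeholder:

```python
#!/usr/bin/env python3
"""Minimal sketch: read the greeting banner from a service to learn what
software and version a host admits to running."""
import socket

def grab_banner(host, port, timeout=5):
    """Return the first data a service sends on connect, if any."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(512).decode(errors="replace").strip()
    except OSError as exc:
        return f"(no banner: {exc})"

if __name__ == "__main__":
    # An FTP server typically answers "220 <software name and version> ..."
    print(grab_banner("ftp.example.com", 21))   # placeholder host
```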

Be assured, however, that at the time of this writing, the information contained within this book was current. If you are unsure whether the information you need has changed, contact your vendor.

Utility

Although this book contains many practical examples, it is not a how-to for cracking Internet servers. True, I provide many examples of how cracking is done and even utilities with which to accomplish that task, but this book will not make the reader a master hacker or cracker. There is no substitute for experience, and this book cannot provide that.

What this book can provide is a strong background in Internet security, hacking, and cracking. A reader with little knowledge of these subjects will come away with enough information to crack the average server (by average, I mean a server maintained by individuals who have a working but somewhat imperfect knowledge of security).

Also, journalists will find this book bereft of the pulp style of sensationalist literature commonly associated with the subject. For this, I apologize. However, sagas of tiger teams and samurais are of limited value in the actual application of security. Security is a serious subject, and should be reported as responsibly as possible. Within a few years, many Americans will do their banking online. Upon the first instance of a private citizen losing his life savings to a cracker, the general public's fascination with pulp hacking stories will vanish and the fun will be over.

Lastly, bona fide security specialists might find that for them, only the last quarter of the book has significant value. As noted, I developed this book for all audiences. However, these gurus should keep their eyes open as they thumb through this book. They might be pleasantly surprised (or even downright outraged) at some of the information revealed in the last quarter of the text. Like a sleight-of-hand artist who breaks the magician's code, I have dropped some fairly decent baubles in the street.

Summary

In short, depending on your position in life, this book will help you

Protect your network
Learn about security
Crack an Internet server
Educate your staff
Write an informed article about security
Institute a security policy
Design a secure program
Engage in Net warfare
Have some fun

It is of value to hackers, crackers, system administrators, business people, journalists, security specialists, and casual users. There is a high volume of information, the chapters move quickly, and (I hope) the book imparts the information in a clear and concise manner.

Equally, this book cannot make the reader a master hacker or cracker, nor can it suffice as your only source for security information. That said, let's move forward, beginning with a small primer on hackers and crackers.

Just Who Can Be Hacked, Anyway?

The Internet was born in 1969. Almost immediately after the network was established, researchers were confronted with a disturbing fact: The Internet was not secure and could easily be cracked. Today, writers try to minimize this fact, reminding you that the security technologies of the time were primitive. This has little bearing. Today, security technology is quite complex and the Internet is still easily cracked.

I would like to return to those early days of the Internet. Not only will this give you a flavor of the time, it will demonstrate an important point: The Internet is no more secure today than it was twenty years ago.

My evidence begins with a document: a Request for Comments, or RFC. Before you review the document, let me explain what the RFC system is about. This is important because I refer to many RFC documents throughout this book.

The Request For Comments (RFC) System

Requests for Comments (RFC) documents are special. They are written (and posted to the Net) by individuals engaged in the development or maintenance of the Internet. RFC documents serve the important purpose of requesting Internet-wide comments on new or developing technology. Most often, RFC documents contain proposed standards.

The RFC system is one of evolution. The author of an RFC posts the document to the Internet, proposing a standard that he or she would like to see adopted network-wide. The author then waits for feedback from other sources. The document (after more comments/changes have been made) goes to draft or directly to Internet standard status. Comments and changes are made by working groups of the Internet Engineering Task Force (IETF).

Cross Reference: The Internet Engineering Task Force (IETF) is "... a large, open, international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet." To learn more about the IETF, go to its home page at http://www.ietf.cnri.reston.va.us/.

RFC documents are numbered sequentially (the higher the number, the more recent the document) and are distributed at various servers on the Internet.

Cross Reference: One central server from which to retrieve RFC documents is at http://ds0.internic.net/ds/dspg0intdoc.html. This address (URL) is located at InterNIC, or the Network Information Center.

InterNIC

InterNIC provides comprehensive databases on networking information. These databases contain the larger portion of collected knowledge on the design and scope of the Internet. Some of those databases include

The WHOIS Database--This database contains all the names and network numbers of hosts (or machines) permanently connected to the Internet in the United States (except *.mil addresses, which must be obtained at nic.ddn.mil).

The Directory of Directories--This is a massive listing of nearly all resources on the Internet, broken into categories.

The RFC Index--This is a collection of all RFC documents.

Cross Reference: All these documents are centrally available at http://rs.internic.net.

A Holiday Message

As I mentioned earlier, I refer here to an early RFC. The document in question is RFC 602: The Stockings Were Hung by the Chimney with Care. RFC 602 was posted by Bob Metcalfe in December, 1973. The subject matter concerned weak passwords. In it, Metcalfe writes: The ARPA Computer Network is susceptible to security violations for at least the three following reasons:

1. Individual sites, used to physical limitations on machine access, have not yet taken sufficient precautions toward securing their systems against unauthorized remote use. For example, many people still use passwords which are easy to guess: their fist [sic] names, their initials, their host name spelled backwards, a string of characters which are easy to type in sequence (such as ZXCVBNM).

2. The TIP allows access to the ARPANET to a much wider audience than is thought or intended. TIP phone numbers are posted, like those scribbled hastily on the walls of phone booths and men's rooms. The TIP required no user identification before giving service. Thus, many people, including those who used to spend their time ripping off Ma Bell, get access to our stockings in a most anonymous way.

3. There is lingering affection for the challenge of breaking someone's system. This affection lingers despite the fact that everyone knows that it's easy to break systems, even easier to crash them.

All of this would be quite humorous and cause for raucous eye winking and elbow nudging, if it weren't for the fact that in recent weeks at least two major serving hosts were crashed under suspicious circumstances by people who knew what they were risking; on yet a third system, the system wheel password was compromised--by two high school students in Los Angeles no less. We suspect that the number of dangerous security violations is larger than any of us know and is growing. You are advised not to sit "in hope that Saint Nicholas would soon be there."

That document was posted well over 20 years ago. Naturally, this password problem is no longer an issue. Or is it? Examine this excerpt from a Defense Data Network Security Bulletin, written in 1993:

Host Administrators must assure that passwords are kept secret by their users. Host Administrators must also assure that passwords are robust enough to thwart exhaustive attack by password cracking mechanisms, changed periodically and that password files are adequately protected. Passwords should be changed at least annually.
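What counts as "robust enough" is left to the administrator. As a purely illustrative sketch (the length and character-class rules here are my own assumptions, not any DoD or DDN requirement), a minimal check written in C might reject any password that is short or drawn from too few character classes:

/* Illustrative only: a minimal password-robustness check. The rules
   below are assumptions for this sketch, not an official standard. */
#include <ctype.h>
#include <string.h>

int password_is_robust(const char *pw)
{
    size_t i, len = strlen(pw);
    int lower = 0, upper = 0, digit = 0, other = 0;

    if (len < 8)                  /* too short to resist exhaustive attack */
        return 0;

    for (i = 0; i < len; i++) {
        if (islower((unsigned char)pw[i]))      lower = 1;
        else if (isupper((unsigned char)pw[i])) upper = 1;
        else if (isdigit((unsigned char)pw[i])) digit = 1;
        else                                    other = 1;
    }

    /* Require at least three of the four character classes. */
    return (lower + upper + digit + other) >= 3;
}

Real checking programs go considerably further, comparing each candidate against dictionaries, login names, and host names--precisely the weak choices Metcalfe complained about in 1973.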

Take notice. In the more than 25 years of the Internet's existence, it has never been secure. That's a fact. Later in this book, I will try to explain why. For now, however, I confine our inquiry to a narrow question: Just who can be cracked?

The short answer is this: As long as a person maintains a connection to the Internet (permanent or otherwise), he or she can be cracked. Before treating this subject in depth, however, I want to define cracked.

What Is Meant by the Term Cracked?

For our purposes, cracked refers to that condition in which the victim network has suffered an unauthorized intrusion. There are various degrees of this condition, each of which is discussed at length within this book. Here, I offer a few examples of this cracked condition:

The intruder gains access and nothing more (access being defined as simple entry; entry that is unauthorized on a network that requires--at a minimum--a login and password).

The intruder gains access and destroys, corrupts, or otherwise alters data.

The intruder gains access and seizes control of a compartmentalized portion of the system or the whole system, perhaps denying access even to privileged users.

The intruder does NOT gain access, but instead implements malicious procedures that cause that network to fail, reboot, hang, or otherwise manifest an inoperable condition, either permanently or temporarily.

To be fair, modern security techniques have made cracking more difficult. However, the gorge between the word difficult and the word impossible is wide indeed. Today, crackers have access to (and often study religiously) a wealth of security information, much of which is freely available on the Internet. The balance of knowledge between these individuals and bona-fide security specialists is not greatly disproportionate. In fact, that gap is closing each day.

The purpose of this chapter is to show you that cracking is a common activity: so common that assurances from anyone that the Internet is secure should be viewed with extreme suspicion. To drive that point home, I will begin with governmental entities. After all, defense and intelligence agencies form the basis of our national security infrastructure. They, more than any other group, must be secure.

Government

Throughout the Internet's history, government sites have been popular targets among crackers. This is due primarily to press coverage that follows such an event. Crackers enjoy any media attention they can get. Hence, their philosophy is generally this: If you're going to crack a site, crack one that matters.

Are crackers making headway in compromising our nation's most secure networks? Absolutely. To find evidence that government systems are susceptible to attack, one needn't look far. A recent report filed by the General Accounting Office (GAO) concerning the security of the nation's defense networks concluded that:

Defense may have been attacked as many as 250,000 times last year...In addition, in testing its systems, DISA attacks and successfully penetrates Defense systems 65 percent of the time. According to Defense officials, attackers have obtained and corrupted sensitive information--they have stolen, modified, and destroyed both data and software. They have installed unwanted files and "back doors" which circumvent normal system protection and allow attackers unauthorized access in the future. They have shut down and crashed entire systems and networks, denying service to users who depend on automated systems to help meet critical missions. Numerous Defense functions have been adversely affected, including weapons and supercomputer research, logistics, finance, procurement, personnel management, military health, and payroll.1

1 Information Security: Computer Attacks at Department of Defense Pose Increasing Risks (Chapter Report, 05/22/96, GAO/AIMD-96-84); Chapter 0:3.2, Paragraph 1.

Cross Reference: Information Security: Computer Attacks at Department of Defense Pose Increasing Risks is available online at http://www.securitymanagement.com/library/000215.html.

That same report revealed that although more than one quarter of a million attacks occur annually, only 1 in 500 attacks is actually detected and reported. (Note that these sites are defense oriented and therefore implement more stringent security policies than many commercial sites. Many government sites employ secure operating systems that also feature advanced, proprietary security utilities.)

Government agencies, mindful of the public confidence, understandably try to minimize these issues. But some of the incidents are difficult to obscure. For example, in 1994, crackers gained carte-blanche access to a weapons-research laboratory in Rome, New York. Over a two-day period, the crackers downloaded vital national security information, including wartime communication protocols.

Such information is extremely sensitive and, if used improperly, could jeopardize the lives of American service personnel. If crackers with relatively modest equipment can access such information, hostile foreign governments (with ample computing power) could access even more.

SATAN and Other Tools

Today, government sites are cracked with increasing frequency. The authors of the GAO report attribute this largely to the rise of user-friendly security programs (such as SATAN). SATAN is a powerful scanner program that automatically detects security weaknesses in remote hosts. It was released freely on the Net in April, 1995. Its authors, Dan Farmer and Wietse Venema, are legends in Internet security. (You will learn more about these two gentlemen in Chapter 9, "Scanners.")

Because SATAN is conveniently operated through an HTML browser (such as Netscape Navigator or NCSA Mosaic), a cracker requires less practical knowledge of systems. Instead, he or she simply points, clicks, and waits for an alert that SATAN has found a vulnerable system (at least this is what the GAO report suggests). Is it true?

No. Rather, the government is making excuses for its own shoddy security. Here is why: First, SATAN runs only on UNIX platforms. Traditionally, such platforms required expensive workstation hardware. Workstation hardware of this class is extremely specialized and isn't sold at the neighborhood Circuit City store. However, those quick to defend the government make the point that free versions of UNIX now exist for the IBM-compatible platform. One such distribution is a popular operating system named Linux.

Linux is a true 32-bit, multi-user, multi-tasking, UNIX-like operating system. It is a powerful computing environment and, when installed on the average PC, grants the user an enormous amount of authority, particularly in the context of the Internet. For example, Linux distributions now come stocked with every manner of server ever created for TCP/IP transport over the Net.

Cross Reference: Linux runs on a wide range of platforms, not just IBM compatibles. Some of those platforms include the Motorola 68k, the Digital Alpha, the Motorola PowerPC, and even the Sun Microsystems SPARC architecture. If you want to learn more about Linux, go to the ultimate Linux page at http://www.linux.org/.

Distributions of Linux are freely available for download from the Net, or can be obtained at any local bookstore. CD-ROM distributions are usually bundled with books that instruct users on using Linux. In this way, vendors can make money on an operating system that is otherwise ostensibly free. The average Linux book containing a Linux installation CD-ROM sells for forty dollars.

Furthermore, most Linux distributions come with extensive development tools. These include a multitude of language compilers and interpreters:

A C language compiler
A C++ language compiler
A Smalltalk interpreter
A BASIC interpreter
A Perl interpreter
Tools for FORTRAN
Tools for Pascal
A Common LISP interpreter

Yet, even given these facts, the average kid with little knowledge of UNIX cannot implement a tool such as SATAN on a Linux platform. Such tools rarely come prebuilt in binary form. The majority are distributed as source code, which may then be compiled with options specific to the current platform. Thus, if you are working in AIX (IBM's proprietary version of UNIX), the program must be compiled for AIX. If working in Ultrix (DEC), it must be compiled for Ultrix, and so on.

NOTE: A port was available for Linux not long after SATAN was released. However, the bugs were not completely eliminated and the process of installing and running SATAN would still remain an elusive and frustrating experience for many Linux users. The process of developing an easily implemented port was slow in coming.

Most PC users (without UNIX experience) are hopelessly lost even during the Linux installation. UNIX conventions are drastically different from those in DOS. Thus, before a new Linux user becomes even moderately proficient, a year of use will likely pass. This year will be spent learning how to use MIT's X Window System, how to configure TCP/IP settings, how to get properly connected to the Internet, and how to unpack software packages that come in basic source-code form.

Even after the year has passed, the user may still not be able to use SATAN. The SATAN distribution doesn't compile well on the Linux platform. For it to work, the user must have installed the very latest version of Perl. Only very recent Linux distributions (those released within one year of the publishing of this book) are likely to have such a version installed. Thus, the user must also know how to find, retrieve, unpack, and properly install Perl.

In short, the distance between a non-UNIX-literate PC user and one who can use SATAN effectively is great indeed. Furthermore, during that journey from the former to the latter, the user must have ample time (and a brutal resolve) to learn. This is not the type of journey made by someone who wants to point and click his or her way to super-cracker status. It is a journey undertaken by someone deeply fascinated by operating systems, security, and the Internet in general.

So the government's assertion that SATAN, an excellent tool designed expressly to improve Internet security, has contributed to point-and-click cracking is unfounded. True, SATAN will perform automated scans for a user. Nonetheless, that user must have strong knowledge of Internet security, UNIX, and several programming languages.

There are also collateral issues regarding the machine and connection type. For example, even if the user is seasoned, he or she must still have adequate hardware power to use SATAN effectively.

Cross Reference: You will examine SATAN (and programs like it) in greater detail in Chapter 9. In that chapter, you will be familiarized with many scanners, how they work, how they are designed, and the type of information they can provide for users.

SATAN is not the problem with government sites. Indeed, SATAN is not the only diagnostic tool that can automatically identify security holes in a system. There are dozens of such tools available:

Internet Security Scanner (ISS)
Strobe
Network Security Scanner (NSS)
identTCPscan
Jakal

Chapter 9 examines these automated tools and their methods of operation. For now, I will simply say this: These tools operate by attacking the TCP/IP services and ports that are open and running on remote systems.

Whether available to a limited class of users or worldwide, these tools share one common attribute: They check for known holes. That is, they check for security vulnerabilities that are commonly recognized within the security community. The chief value of such tools is their capability to automate the process of checking one or more machines (hundreds of machines, if the user so wishes). These tools accomplish nothing more than a knowledgeable cracker might by hand. They simply automate the process.
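To make that concrete, here is a bare-bones sketch in C of the test at the heart of any connect-based scanner: attempt a TCP connection to each port and note which ones answer. Everything here is illustrative--the target address is a documentation placeholder, and real scanners such as those named above layer timeouts, service identification, and vulnerability checks on top of this simple probe.

/* Minimal connect() probe: the building block that scanners automate.
   The address 192.0.2.10 is a placeholder, not a real target. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int port_is_open(const char *ip, int port)
{
    struct sockaddr_in addr;
    int ok, s = socket(AF_INET, SOCK_STREAM, 0);

    if (s < 0)
        return 0;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(port);
    addr.sin_addr.s_addr = inet_addr(ip);

    /* If connect() succeeds, a service completed the TCP handshake. */
    ok = (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    close(s);
    return ok;
}

int main(void)
{
    int port;

    for (port = 1; port <= 1024; port++)
        if (port_is_open("192.0.2.10", port))
            printf("port %d is open\n", port);
    return 0;
}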

Education and Awareness About Security

The problem is not that such tools exist, but that education about security is poor. Moreover, the defense information networks are operating with archaic internal security policies. These policies prevent (rather than promote) security. To demonstrate why, I want to refer to the GAO report I mentioned previously. In it, the government concedes:

...The military services and Defense agencies have issued a number of information security policies, but they are dated, inconsistent and incomplete...

The report points to a series of Defense Directives as examples. It cites (as the most significant DoD policy document) Defense Directive 5200.28. This document, Security Requirements for Automated Information Systems, is dated March 21, 1988.

In order to demonstrate the real problem here, let's examine a portion of that Defense Directive. Paragraph 5 of Section D of that document is written as follows:

Computer security features of commercially produced products and Government-developed or -derived products shall be evaluated (as requested) for designation as trusted computer products for inclusion on the Evaluated Products List (EPL). Evaluated products shall be designated as meeting security criteria maintained by the National Computer Security Center (NCSC) at NSA defined by the security division, class, and feature (e.g., B, B1, access control) described in DoD 5200.28-STD (reference (K)).

Cross Reference: Security Requirements for Automated Information Systems is available on the Internet at http://140.229.1.16:9000/htdocs/teinfo/directives/soft/5200.28.html

It is within the provisions of that paragraph that the government's main problem lies. The Evaluated Products List (EPL) is a list of products that have been evaluated for security ratings, based on DoD guidelines. (The National Security Agency actually oversees the evaluation.) Products on the list can have various levels of security certification. For example, Windows NT version 3.51 has obtained a certification of C2. This is a very limited security certification.

Cross Reference: Before you continue, you should probably briefly view the EPL for yourself. Check it out at http://www.radium.ncsc.mil/tpep/epl/index.html.

The first thing you will notice about this list is that most of the products are old. For example, examine the EPL listing for Trusted Information Systems' Trusted XENIX, a UNIX-based operating system.

Cross Reference: The listing for Trusted XENIX can be found at http://www.radium.ncsc.mil/tpep/epl/entries/CSC-EPL-92-001-A.html

If you examine the listing closely, you will be astonished. TIS Trusted XENIX is indeed on the EPL. It is therefore endorsed and cleared as a safe system, one that meets the government's guidelines (as of September 1993). However, examine even more closely the platforms on which this product has been cleared. Here are a few:

AST 386/25 and Premium 386/33
HP Vectra 386
NCR PC386sx
Zenith Z-386/33

These architectures are ancient. They are so old that no one would actually use them, except perhaps as a garage hacking project on a nice Sunday afternoon (or perhaps if they were legacy systems that housed software or other data that was irreplaceable). In other words, by the time products reach the EPL, they are often pathetically obsolete. (The evaluation process is lengthy and expensive not only for the vendor, but for the American people, who are footing the bill for all this.) Therefore, you can conclude that much of the DoD's equipment, software, and security procedures are likewise obsolete.

Now, add the question of internal education. Are Defense personnel trained in (and implementing) the latest security techniques? No. Again, quoting the GAO report:

Defense officials generally agreed that user awareness training was needed, but stated that installation commanders do not always understand computer security risk and thus, do not always devote sufficient resources to the problem.

High-Profile Cases

Lack of awareness is pervasive, extending far beyond the confines of a few isolated Defense sites. It is a problem that affects many federal agencies throughout the country. Evidence of it routinely appears on the front pages of our nation's most popular newspapers. Indeed, some very high-profile government sites were cracked in 1996, including the Central Intelligence Agency (CIA) and the Department of Justice (DoJ).

In the CIA case, a cracker seized control on September 18, 1996, replacing the welcome banner with one that read The Central Stupidity Agency. Accompanying this were links to a hacker group in Scandinavia.

Cross Reference: To see the CIA site in its hacked state, visit http://www.skeeve.net/cia/.

NOTE: skeeve.net was one of many sites that preserved the hacked CIA page, primarily for historical purposes. It is reported that after skeeve.net put the hacked CIA page out for display, its server received hundreds of hits from government sites, including the CIA. Some of these hits involved finger queries and other snooping utilities.

In the DoJ incident (Saturday, August 17, 1996), a photograph of Adolf Hitler was offered as the Attorney General of the United States.

Cross Reference: The DoJ site, in its hacked state, can be viewed at http://river-city.clever.net/hacked/doj/.

As of this writing, neither case has been solved; most likely, neither will ever be. Both are reportedly being investigated by the FBI.

Typically, government officials characterize such incidents as rare. Just how rare are they? Not very. In the last year, many such incidents have transpired:

During a period spanning from July, 1995 to March 1996, a student in Argentina compromised key sites in the United States, including those maintained by the Armed Forces and NASA.

In August, 1996, a soldier at Fort Bragg reportedly compromised an "impenetrable" military computer system and widely distributed passwords he obtained.

In December, 1996, hackers seized control of a United States Air Force site, replacing the site's defense statistics with pornography. The Pentagon's networked site, DefenseLINK, was shut down for more than 24 hours as a result.

The phenomenon was not limited to federal agencies. In October, 1996, the home page of the Florida State Supreme Court was cracked. Prior to its cracking, the page's intended use was to distribute information about the court, including text reproductions of recent court decisions. The crackers removed this information and replaced it with pornography. Ironically, the Court subsequently reported an unusually high rate of hits.

In 1996 alone, at least six high-profile government sites were cracked. Two of these (the CIA and FBI) were organizations responsible for maintaining departments for information warfare or computer crime. Both are charged with one or more facets of national security. What does all this mean? Is our national security going down the tubes? It depends on how you look at it.

In the CIA and FBI cases, the cracking activity was insignificant. Neither server held valuable information, and the only real damage was to the reputation of their owners. However, the Rome, New York case was far more serious (as was the case at Fort Bragg). Such cases demonstrate the potential for disaster.

There is a more frightening aspect to this: The sites mentioned previously were WWW sites, which are highly visible to the public. Therefore, government agencies cannot hide when their home pages have been cracked. But what about when the crack involves some other portion of the targeted system (a portion generally unseen by the public)? It's likely that when such a crack occurs, the press is not involved. As such, there are probably many more government cracks that you will never hear about.

To be fair, the U.S. government is trying to keep up with the times. In January 1997, a reporter for Computerworld magazine broke a major story concerning Pentagon efforts to increase security. Apparently, the Department of Defense is going to establish its own tiger team (a group of individuals whose sole purpose will be to attack DoD computers). Such attacks will reveal key flaws in DoD security.

Other stories indicate that defense agencies have undertaken new and improved technologies to protect computers holding data vital to national security. However, as reported by Philip Shenon, a prominent technology writer for the New York Times:

While the Pentagon is developing encryption devices that show promise in defeating computer hackers, the accounting office, which is the investigative arm of Congress, warned that none of the proposed technical solutions was foolproof, and that the military's current security program was "dated, inconsistent and incomplete."

The Pentagon's activity to develop devices that "show promise in defeating computer hackers" appears reassuring. From this, one could reasonably infer that something is being done about the problem. However, the reality and seriousness of the situation are being heavily underplayed.

If Defense and other vital networks cannot defend against domestic attacks from crackers, there is little likelihood that they can defend from hostile foreign powers. I made this point earlier in the chapter, but now I want to expand on it.

Can the United States Protect the National Information Infrastructure?

The United States cannot be matched by any nation for military power. We have sufficient destructive power at our disposal to eliminate the entire human race. So from a military standpoint, there is no comparison between the United States and even a handful of third-world nations. The same is not true, however, in respect to information warfare.

The introduction of advanced minicomputers has forever changed the balance of power in information warfare. The average Pentium processor now selling at retail computer chains throughout the country is more powerful than many mainframes were five years ago (it is certainly many times faster). Add the porting of high-performance UNIX-based operating systems to the IBM platform, and you have an entirely new environment.

A third-world nation could pose a significant threat to our national information infrastructure. Using the tools described previously (and some high-speed connections), a third-world nation could effectively wage a successful information warfare campaign against the United States at costs well within their means. In fact, it is likely that within the next few years, we'll experience incidents of bona-fide cyberterrorism.

To prepare for the future, more must be done than simply allocating funds. The federal government must work closely with security organizations and corporate entities to establish new and improved standards. If the new standards do not provide for quicker and more efficient means of implementing security, we will be faced with very dire circumstances.

Who Holds the Cards?

This (not legitimate security tools such as SATAN) is the problem: Thirty years ago, the U.S. government held all the cards with respect to technology. The average U.S. citizen held next to nothing. Today, the average American has access to very advanced technology. In some instances, that technology is so advanced that it equals technology currently possessed by the government. Encryption technology is a good example.

Many Americans use encryption programs to protect their data from others. Some of these encryption programs (such as the very famous utility PGP, created by Phil Zimmermann) produce military-grade encryption. This level of encryption is sufficiently strong that U.S. intelligence agencies cannot crack it (at least not within a reasonable amount of time, and often, time is of the essence).

For example, suppose one individual sends a message to another person regarding the date on which they will jointly blow up the United Nations building. Clearly, time is of the essence. If U.S. intelligence officials cannot decipher this message before the date of the event, they might as well have not cracked the message at all.

This principle applies directly to Internet security. Security technology has trickled down to the masses at an astonishing rate. Crackers (and other talented programmers) have taken this technology and rapidly improved it. Meanwhile, the government moves along more slowly, tied down by restrictive and archaic policies. This has allowed the private sector to catch up (and even surpass) the government in some fields of research.

This is a matter of national concern. Many grass-roots radical cracker organizations are enthralled with these circumstances. They often heckle the government, taking pleasure in the advanced knowledge that they possess. These are irresponsible forces in the programming community, forces that carelessly perpetuate the weakening of the national information infrastructure. Such forces should work to assist and enlighten government agencies, but they often do not, and their reasons are sometimes understandable.

The government has, for many years, treated crackers and even hackers as criminals of high order. As such, the government is unwilling to accept whatever valuable information these folks have to offer. Communication between these opposing forces is almost always negative. Bitter legal disputes have developed over the years. Indeed, some very legitimate security specialists have lost time, money, and dignity at the hands of the U.S. government. On more than one occasion, the government was entirely mistaken and ruined (or otherwise seriously disrupted) the lives of law-abiding citizens. In the next chapter, I will discuss a few such cases. Most arise out of the government's poor understanding of the technology.

New paths of communication should be opened between the government and those in possession of advanced knowledge. The Internet marginally assists in this process, usually through devices such as mailing lists and Usenet. However, there is currently no concerted effort to bring these opposing forces together on an official basis. This is unfortunate because it fosters a situation where good minds in America remain pitted against one another. Before we can effectively defend our national information infrastructure, we must come to terms with this problem. For the moment, we are at war with ourselves.

The Public Sector

I realize that a category such as the public sector might be easily misunderstood. To prevent that, I want to identify the range of this category. Here, the public sector refers to any entity that is not a government, an institution, or an individual. Thus, I will be examining companies (public and private), Internet service providers, organizations, or any other entity of commercial or semi-commercial character.

Before forging ahead, one point should be made: Commercial and other public entities do not share the experience enjoyed by government sites. In other words, they have not yet been cracked to pieces. Only in the past five years have commercial entities flocked to the Internet. Therefore, some allowances must be made. It is unreasonable to expect these folks to make their sites impenetrable. Many are smaller companies and for a moment, I want to address these folks directly: You, more than any other group, need to acquire sound security advice.

Small companies operate differently from large ones. For the little guy, cost is almost always a strong consideration. When such firms establish an Internet presence, they usually do so either by using in-house technical personnel or by recruiting an Internet guru. In either case, they are probably buying quality programming talent. However, what they are buying in terms of security may vary.

Large companies specializing in security charge a lot of money for their services. Also, most of these specialize in UNIX security. So, small companies seeking to establish an Internet presence may avoid established security firms. First, the cost is a significant deterrent. Moreover, many small companies do not use UNIX. Instead, they may use Novell NetWare, LANtastic, Windows NT, Windows 95, and so forth.

This leaves small businesses in a difficult position. They must either pay high costs or take their programmers' word that the network will be secure. Because such small businesses usually do not have personnel who are well educated in security, they are at the mercy of the individual charged with developing the site. That can be a very serious matter.

The problem is that many "consultants" spuriously claim to know all about security. They make these claims when, in fact, they may know little or nothing about the subject. Typically, they have purchased a Web-development package, can generate attractive Web pages, and know how to set up a server. Perhaps they have a limited background in security, having scratched the surface. They take money from their clients, rationalizing that there is only a very slim chance that their clients' Web servers will get hacked. For most, this works out well. But even if their clients' servers are never hacked, the servers may remain indefinitely in a state of insecurity.

Commercial sites are also more likely to purchase one or two security products and call it a day. They may pay several thousand dollars for an ostensibly secure system and leave it at that, trusting everything to that single product.

For these reasons, commercial sites are routinely cracked, and this trend will probably continue. Part of the problem is this: There is no real national standard on security in the private sector. Hence, one most often qualifies as a security specialist through hard experience and not by virtue of any formal education. It is true that there are many courses available and even talks given by individuals such as Farmer and Venema. These resources legitimately qualify an individual to do security work. However, there is no single piece of paper that a company can demand that will ensure the quality of the security they are getting.

Because these smaller businesses lack security knowledge, they become victims of unscrupulous "security specialists." I hope that this trend will change, but I predict that for now, it will only become more prevalent. I say this for one reason: Despite the fact that many thousands of American businesses are now online, this represents a mere fraction of commercial America. There are millions of businesses that have yet to get connected. These millions are all new fish, and security charlatans are lined up waiting to catch them.

The Public Sector Getting Cracked

In the last year, a series of commercial sites have come under attack. These attacks have varied widely in technique. Earlier in this chapter, I defined some of those techniques and the attendant damage or interruption of service they cause. Here, I want to look at cases that more definitively illustrate these techniques. Let's start with the recent attack on Panix.com.

Panix.com

Panix.com (Public Access Networks Corporation) is a large Internet service provider (ISP) that provides Internet access to several hundred thousand New York residents. On September 6, 1996, Panix came under heavy attack from the void.

The Panix case was very significant because it demonstrates a technique known as the Denial of Service (DoS) attack. This type of attack does not involve an intruder gaining access. Instead, the cracker undertakes remote procedures that render a portion (or sometimes all) of a target inoperable.

The techniques employed in such an attack are simple. As you will learn in Chapter 6, "A Brief Primer on TCP/IP," connections over the Internet are initiated via a procedure called the three-way handshake. In this process, the requesting machine sends a packet requesting connection. The target machine responds with an acknowledgment. The requesting machine then returns its own acknowledgment and a connection is established.

In a syn_flooder attack, the requesting (cracker's) machine sends a series of connection requests but fails to acknowledge the target's response. Because the target never receives that acknowledgment, it waits. If this process is repeated many times, it renders the target's ports useless because the target is still waiting for the response. These connection requests are dealt with sequentially; eventually, the target will abandon waiting for each such acknowledgment. Nevertheless, if it receives tens or even hundreds of these requests, the port will remain engaged until it has processed--and discarded--each request.
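For readers who want to see where those half-open connections actually live, the following fragment sketches the server side using the standard BSD sockets interface. (The port number and the greeting are arbitrary examples; error handling is deliberately minimal.) The second argument to listen() is the backlog--the queue of connections that have begun, but not yet completed, the handshake. A flood of connection requests that are never acknowledged is aimed squarely at that queue.

/* Skeleton TCP server illustrating the listen() backlog.
   Port 8080 and the reply text are arbitrary examples. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;
    int conn, srv = socket(AF_INET, SOCK_STREAM, 0);

    if (srv < 0) {
        perror("socket");
        exit(1);
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }

    /* Backlog of 5: only five pending connections are queued. Requests
       that never complete the handshake sit here until they time out,
       and this finite queue is the resource a flooding attack consumes. */
    if (listen(srv, 5) < 0) {
        perror("listen");
        exit(1);
    }

    for (;;) {
        /* accept() returns only after the three-way handshake completes. */
        conn = accept(srv, NULL, NULL);
        if (conn < 0)
            continue;
        write(conn, "hello\r\n", 7);
        close(conn);
    }
}

Exactly how the queue is managed (and whether it distinguishes half-open from fully established but unaccepted connections) varies by operating system, but the principle is the same: the queue is finite, and exhausting it denies service to legitimate clients.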

NOTE: The term syn_flooder is derived from the activity undertaken by such tools. The TCP/IP three-way handshake is initiated when one machine sends another a SYN packet. In a typical flooding attack, a series of these packets are forwarded to a target, purporting to be from an address that is nonexistent. The target machine therefore cannot resolve the host. In any event, by sending a flurry of these SYN packets, one is flooding the target with requests that cannot be fulfilled.

Syn_flooder attacks are common, but do no real damage. They simply deny other users access to the targeted ports temporarily. In the Panix case, though, temporarily was a period lasting more than a week.

Syn_flooders are classified in this book as destructive devices. They are covered extensively in Chapter 14, "Destructive Devices." These are typically small programs consisting of two hundred lines of code or fewer. The majority are written in the C programming language, but I know of at least one written in BASIC.

Crack dot Com

ISPs are popular targets for a variety of reasons. One reason is that crackers use such targets as operating environments or a home base from which to launch attacks on other targets. This technique assists in obscuring the identity of the attacker, an issue we will discuss later. However, DoS attacks are nothing special. They are the modern equivalent of ringing someone's telephone repeatedly to keep the line perpetually engaged. There are far more serious types of cracks out there. Just ask Crack dot Com, the development house that held the source code for the now famous computer game Quake.

In January, 1997, crackers raided the Crack dot Com site. Reportedly, they cracked the Web server and proceeded to chip away at the firewall from that location. After breaking through the firewall, the crackers gained carte-blanche access to the internal file server. From that location, they took the source code for both Quake and a new project called Golgotha. They posted this source code on the Net.

NOTE: For those of you who are not programmers, source code is the programming code of an application in its raw state. This is most often in human-readable text form. After all testing of the software is complete (and there are no bugs within it), this source code is sent a final time through a compiler. Compilers interpret the source code and from it fashion a binary file that can be executed on one or more platforms. In short, source code can be thought of as the very building blocks of a program. In commercial circles, source code is jealously guarded and aggressively proclaimed as proprietary material. For someone to take that data from a server and post it indiscriminately to the Internet is probably a programmer's worst nightmare.

For Crack dot Com, the event could have far-reaching consequences. For example, it's possible that during the brief period that the code was posted on the Net, its competitors may have obtained copies of (at least some of) the programming routines. In fact, the crackers could have approached those competitors in an effort to profit from their activities. This, however, is highly unlikely. The crackers' pattern of activity suggests that they were kids. For example, after completing the crack, they paraded their spoils on Internet Relay Chat. They also reportedly left behind a log (a recording of someone's activity while connected to a given machine). The Crack dot Com case highlights the seriousness of the problem, however.

Kriegsman Furs

Another interesting case is that of Kriegsman Furs of Greensboro, North Carolina. This furrier's Web site was cracked by an animal-rights activist. The cracker left behind a very strong message, which I have reproduced in part:

Today's consumer is completely oblivious to what goes on in order for their product to arrive at the mall for them to buy. It is time that the consumer be aware of what goes on in many of today's big industries. Most importantly, the food industries. For instance, dairy cows are injected with a chemical called BGH that is very harmful to both humans and the cows. This chemical gives the cows bladder infections. This makes the cows bleed and guess what? It goes straight in to your bowl of cereal. Little does the consumer know, nor care. The same kind of thing goes on behind the back of fur wearers. The chemicals that are used to process and produce the fur are extremely bad for our earth. Not only that, but millions of animals are slaughtered for fur and leather coats. I did this in order to wake up the blind consumers of today. Know the facts.

Following this message were a series of links to animal-rights organizations and resources.

Kevin Mitnick

Perhaps the most well-known case of the public sector being hacked, however, is the 1994/1995 escapades of famed computer cracker Kevin Mitnick. Mitnick has been gaining notoriety since his teens, when he reportedly cracked the North American Aerospace Defense Command (NORAD). The timeline of his life is truly amazing, spanning some 15 years of cracking telephone companies, defense sites, ISPs, and corporations. Briefly, some of Mitnick's previous targets include

Pacific Bell, a California telephone company

The California Department of Motor Vehicles

A Pentagon system

The Santa Cruz Operation, a software vendor

Digital Equipment Corporation

TRW

On December 25, 1994, Mitnick reportedly cracked the computer network of Tsutomu Shimomura, a security specialist at the San Diego Supercomputer Center. What followed was a press fiasco that lasted for months. The case might not have been so significant were it not for three factors:

The target was a security specialist who had written special security tools not available to the general public.

The method employed in the break-in was extremely sophisticated and caused a stir in security circles.

The suspicion was, from the earliest phase of the case, that Mitnick (then a wanted man) was involved in the break-in.

First, Shimomura, though never before particularly famous, was known in security circles. He, more than anyone, should have been secure. The types of tools he was reportedly developing would have been of extreme value to any cracker. Moreover, Shimomura has an excellent grasp of Internet security. When he got caught with his pants down (as it were), it was a shock to many individuals in security. Naturally, it was also a delight to the cracker community. For some time afterward, the cracking community was enthralled by the achievement, particularly because Shimomura had reportedly assisted various federal agencies on security issues. Here, one of the government's best security advisors had been cracked to pieces by a grass-roots outlaw (at least, that was the hype surrounding the case).

Second, the technique used, now referred to as IP spoofing, was complex and not often implemented. IP spoofing is significant because it relies on an exchange that occurs between two machines at the system level. Normally, when a user attempts to log in to a machine, he or she is issued a login prompt. When the user provides a login ID, a password prompt is given. The user issues his or her password and logs in (or, he or she gives a bad or incorrect password and does not log in). Thus, Internet security breaches have traditionally revolved around getting a valid password, usually by obtaining and cracking the main password file.

IP spoofing differs from this radically. Instead of attempting to interface with the remote machine via the standard procedure of the login/password variety, the IP-spoofing cracker employs a much more sophisticated method that relies in part on trust. Trust is defined and referred to in this book (unless otherwise expressly stated) as the "trust" that occurs between two machines that identify themselves to one another via IP addresses.

In IP spoofing, a series of things must be performed before a successful break-in can be accomplished:

One must determine the trust relationships between machines on the target network.

One must determine which of those trust relationships can be exploited (that is, which of those machines is running an operating system susceptible to spoofing).

One must exploit the hole.

(Be mindful that this brief description is bare bones. I treat this subject extensively in its own chapter, Chapter 28, "Spoofing Attacks.")

In the attack, the target machine trusted the other. Whenever a login occurred between these two machines, it was authenticated through an exchange of numbers. This number exchange followed a forward/challenge scenario. In other words, one machine would generate a number to which the other must answer (also with a number). The key to the attack was to forge the address of the trusted machine and provide the correct responses to the other machine's challenges. And, reportedly, that is exactly what Mitnick did.

In this manner, privileged access is gained without ever passing a single password or login ID over the network. All exchanges happen deep at the system level, a place where humans almost never interact with the operating system.

Curiously, although this technique has been lauded as new and innovative, it is actually quite antiquated (or at least, the concept is quite antiquated). It stems from a security paper written by Robert T. Morris in 1985 titled A Weakness in the 4.2BSD UNIX TCP/IP Software. In this paper, Morris (then working for AT&T Bell Laboratories) concisely details the ingredients to make such an attack successful. Morris opens the paper with this statement:

The 4.2 Berkeley Software Distribution of the UNIX operating system (4.2BSD for short) features an extensive body of software based on the "TCP/IP" family of protocols. In particular, each 4.2BSD system "trusts" some set of other systems, allowing users logged into trusted systems to execute commands via a TCP/IP network without supplying a password. These notes describe how the design of TCP/IP and the 4.2BSD implementation allow users on untrusted and possibly very distant hosts to masquerade as users on trusted hosts. Bell Labs has a growing TCP/IP network connecting machines with varying security needs; perhaps steps should be taken to reduce their vulnerability to each other.

Morris then proceeds to describe such an attack in detail, some ten years before the first widely reported instance of such an attack had occurred. One wonders whether Mitnick had seen this paper (or even had it sitting on his desk whilst the deed was being done).

In any event, the break-in caused a stir. The following month, the New York Times published an article about the attack. An investigation resulted, and Shimomura was closely involved. Twenty days later, Shimomura and the FBI tracked Mitnick to an apartment in North Carolina, the apparent source of the attack. The case made national news for weeks as the authorities sorted out the evidence they found at Mitnick's abode. Again, America's most celebrated computer outlaw was behind bars.

In my view, the case demonstrates an important point, the very same point we started with at the beginning of this chapter: As long as they are connected to the Net, anyone can be cracked. Shimomura is a hacker and a good one. He is rumored to own 12 machines running a variety of operating systems. Moreover, Shimomura is a talented telephone phreak (someone skilled in manipulating the technology of the telephone system and cellular devices). In essence, he is a specialist in security. If he fell victim to an attack of this nature, with all the tools at his disposal, the average business Web site is wide open to assault over the Internet.

In defense of Shimomura: Many individuals in the security community defend Shimomura, earnestly arguing that he had his site configured to bait crackers. In Chapter 26, "Levels of Attack," you will learn that Shimomura was at least marginally involved in implementing this kind of system in conjunction with some folks at Bell Labs. However, this argument in Shimomura's defense is questionable. For example, did he also intend to allow these purportedly inept crackers to seize custom tools he had been developing? If not, the defensive argument fails. Sensitive files were indeed seized from Shimomura's network. Evidence of these files on the Internet is now sparse. No doubt, Shimomura has taken efforts to hunt them down. Nevertheless, I have personally seen files that Mitnick reportedly seized from many networks, including Netcom. Charles Platt, in his scathing review of Shimomura's book Takedown, offers a little slice of reality:

Kevin Mitnick...at least he shows some irreverence, taunting Shimomura and trying to puncture his pomposity. At one point, Mitnick bundles up all the data he copied from Shimomura's computer and saves it onto the system at Netcom where he knows that Shimomura will find it....Does Shimomura have any trouble maintaining his dignity in the face of these pranks? No trouble at all. He writes: "This was getting personal. ... none of us could believe how childish and inane it all sounded."

It is difficult to understand why Shimomura would allow crackers (coming randomly from the void) to steal his hard work and excellent source code. My opinion (which may be erroneous) is that Shimomura did indeed have his boxes configured to bait crackers; he simply did not count on anyone cutting a hole through that baited box to his internal network. In other words, I believe that Shimomura (who I readily admit is a brilliant individual) got a little too confident. There should have been no relationship of trust between the baited box and any other workstation.

Cross Reference: Charles Platt's critique of Takedown, titled A Circumlocuitous review of Takedown by Tsutomu Shimomura and John Markoff, can be found at http://rom.oit.gatech.edu/~willday/mitnick/takedown.review.html.

Summary

These cases are all food for thought. In the past 20 or so years, there have been several thousand such cases (of which we are aware). The military claims that it is attacked over 250,000 times a year. Estimates suggest it is penetrated better than half of the time. It is likely that no site is entirely immune. (If such a site exists, it is likely AT&T Bell Laboratories; it probably knows more about network security than any other single organization on the Internet.)

All this having been established, I'd like to get you started. Before you can understand how to hack (or crack), however, you must first know a bit about the network. Part II of this book, "Understanding the Terrain," deals primarily with the Internet's development and design.

Is Security a Futile Endeavor?

Since Paul Baran first put pen to paper, Internet security has been a concern. Over the years, security by obscurity has become the prevailing attitude of the computing community.

Speak not and all will be well.

Hide and perhaps they will not find you.

The technology is complex. You are safe.

These principles have not only been proven faulty, but they also go against the original concepts of how security could evolve through discussion and open education. Even at the very birth of the Internet, open discussion on standards and methodology was strongly suggested. It was felt that this open discussion could foster important advances in the technology. Baran was well aware of this and articulated the principle concisely when, in The Paradox of the Secrecy About Secrecy: The Assumption of A Clear Dichotomy Between Classified and Unclassified Subject Matter, he wrote:

Without the freedom to expose the system proposal to widespread scrutiny by clever minds of diverse interests, is to increase the risk that significant points of potential weakness have been overlooked. A frank and open discussion here is to our advantage.

Security Through Obscurity

Security through obscurity has been defined and described in many different ways. One rather whimsical description, authored by a student named Jeff Breidenbach in his lively and engaging paper, Network Security Throughout the Ages, appears here:

The Net had a brilliant strategy called "Security through Obscurity." Don't let anyone fool you into thinking that this was done on purpose. The software has grown into such a tangled mess that nobody really knows how to use it. Befuddled engineers fervently hoped potential meddlers would be just as intimidated by the technical details as they were themselves.

Mr. Breidenbach might well be correct about this. Nevertheless, the standardized definition and description of security through obscurity can be obtained from any archive of the Jargon File, available at thousands of locations on the Internet. That definition is this:

alt. 'security by obscurity' n. A term applied by hackers to most OS vendors' favorite way of coping with security holes--namely, ignoring them, documenting neither any known holes nor the underlying security algorithms, trusting that nobody will find out about them and that people who do find out about them won't exploit them.

Regardless of which security philosophy you believe, three questions remain constant:

Why is the Internet insecure?
Does it need to be secure?
Can it be secure?

Why Is the Internet Insecure?

The Internet is insecure for a variety of reasons, each of which I will discuss here in detail. Those factors include

Lack of education
The Internet's design
Proprietarism (yes, another ism)
The trickling down of technology
Human nature

Each of these factors contributes in some degree to the Internet's current lack of security.

Lack of Education

Do you believe that what you don't know can't hurt you? If you are charged with the responsibility of running an Internet server, you had better not believe it. Education is the single most important aspect of security, and it is one aspect that has been sorely wanting.

I am not suggesting that a lack of education exists within higher institutions of learning or those organizations that perform security-related tasks. Rather, I am suggesting that security education rarely extends beyond those great bastions of computer-security science.

The Computer Emergency Response Team (CERT) is probably the Internet's best-known security organization. CERT generates security advisories and distributes them throughout the Internet community. These advisories address the latest known security vulnerabilities in a wide range of operating systems. CERT thus performs an extremely valuable service to the Internet. The CERT Coordination Center, established by ARPA in 1988, provides a centralized point for the reporting of and proactive response to all major security incidents. Since 1988, CERT has grown dramatically, and CERT centers have been established at various points across the globe.

Cross Reference: You can contact CERT at its WWW page (http://www.cert.org). There resides a database of vulnerabilities, various research papers (including extensive documentation on disaster survivability), and links to other important security resources.

CERT's 1995 annual report shows some very enlightening statistics. During 1995, CERT was informed of some 12,000 sites that had experienced some form of network-security violation. Of these, there were at least 732 known break-ins and an equal number of probes or other instances of suspicious activity.

Cross Reference: You can access CERT's 1995 annual report at http://www.cert.org/cert.report.95.html.

Some 12,000 affected sites, and only 732 reported break-ins. This is so even though the GAO report examined earlier suggested that Defense computers alone are attacked as many as 250,000 times each year, and Dan Farmer's security survey reported that over 60 percent of all critical sites surveyed were vulnerable to some form of network security breach. How can this be? Why aren't more incidents reported to CERT?

Cross Reference: Check out Dan Farmer's security survey at http://www.trouble.org/survey.

It might be because the better portion of the Internet's servers are now maintained by individuals with less-than-adequate security education. Many system administrators have never even heard of CERT. True, there are many security resources available on the Internet (many of which point to CERT, in fact), but these may initially appear intimidating and overwhelming to those new to security. Moreover, many of these resources provide links to dated information.

An example is RFC 1244, the Site Security Handbook. At the time 1244 was written, it comprised a collection of state-of-the-art information on security. As expressed in that document's editor's note:

This FYI RFC is a first attempt at providing Internet users guidance on how to deal with security issues in the Internet. As such, this document is necessarily incomplete. There are some clear shortfalls; for example, this document focuses mostly on resources available in the United States. In the spirit of the Internet's `Request for Comments' series of notes, we encourage feedback from users of this handbook. In particular, those who utilize this document to craft their own policies and procedures.

This handbook is meant to be a starting place for further research and should be viewed as a useful resource, but not the final authority. Different organizations and jurisdictions will have different resources and rules. Talk to your local organizations, consult an informed lawyer, or consult with local and national law enforcement. These groups can help fill in the gaps that this document cannot hope to cover.

From 1991 until now, the Site Security Handbook has been an excellent place to start. Nevertheless, as Internet technology grows in leaps and bounds, such texts become rapidly outdated. Therefore, the new system administrator must keep up with the security technology that follows each such evolution. To do so is a difficult task.

Cross Reference: RFC 1244 is still a good study paper for a user new to security. It is available at many places on the Internet. One reliable server is at http://www.net.ohio-state.edu/hypertext/rfc1244/toc.html.

The Genesis of an Advisory

Advisories comprise the better part of time-based security information. When these come out, they are immediately very useful because they usually relate to an operating system or popular application now widely in use. As time goes on, however, such advisories become less important because people move on to new products. In this process, vendors are constantly updating their systems, eliminating holes along the way. Thus, an advisory is valuable for a set period of time (although, to be fair, this information may stay valuable for extended periods because some people insist on using older software and hardware, often for financial reasons).

An advisory begins with discovery. Someone, whether hacker, cracker, administrator, or user, discovers a hole. That hole is verified, and the resulting data is forwarded to security organizations, vendors, or other parties deemed suitable. This is the usual genesis of an advisory (a process explained in Chapter 2, "How This Book Will Help You"). Nevertheless, there is another way that holes are discovered.

Often, academic researchers discover a hole. An example, which you will review later, is the series of holes found within the Java programming language. These holes were primarily revealed--at least at first--by those at Princeton University's computer science labs. When such a hole is discovered, it is documented in excruciating detail. That is, researchers often author multipage documents detailing the hole, the reasons for it, and possible remedies.

Cross Reference: Java is a compiled language used to create interactive applications for the World Wide Web. The language was created at Sun Microsystems and vaguely resembles C++. For more information about Java, visit the Java home page at http://java.sun.com/.

This information gets digested by other sources into an advisory, which is often no more than 100 lines. By the time the average, semi-security literate user lays his or her hands on this information, it is limited and watered-down.

Thus, redundancy of data on the Internet has its limitations. People continually rehash these security documents into different renditions, often highlighting different aspects of the same paper. Such digested revisions are available all over the Net. This helps distribute the information, true, but leaves serious researchers hungry. They must hunt, and that hunt can be a struggle. For example, there is no centralized place to acquire all such papers.

Equally, as I have explained, end-user documentation can be varied. Although there should be, there is no 12-volume set (with papers by Farmer, Venema, Bellovin, Spafford, Morris, Ranum, Klaus, Muffet, and so on) about Internet security that you can acquire at a local library or bookstore. More often, the average bookstore contains brief treatments of the subject (like this book, I suppose).

Couple these factors with the mind-set of the average system administrator. A human being has only so much time. Therefore, these individuals absorb what they can on the fly, applying methods learned through whatever sources they encounter.

The Dissemination of Information

For so many reasons, education in security is wanting. In the future, specialists need to address this need in a more practical fashion. There must be some suitable means of networking this information. To be fair, some organizations have attempted to do so, but many are forced to charge high prices for their hard-earned databases. The National Computer Security Association (NCSA) is one such organization. Its RECON division gathers some 70MB per day of hot and heavy security information. Its database is searchable and is available for a price, but that price is substantial.

Cross Reference: To learn more about NCSA RECON, examine its FAQ. NCSA's database offers advanced searching capabilities, and the information held there is definitely up-to-date. In short, it is a magnificent service. The FAQ is at http://www.isrecon.ncsa.com/public/faq/isrfaq.htm. You can also get a general description of what the service is by visiting http://www.isrecon.ncsa.com/docz/Brochure_Pages/effect.htm.

Many organizations do offer superb training in security and firewall technology. The price for such training varies, depending on the nature of the course, the individuals giving it, and so on. One good source for training is Lucent Technologies, which offers many courses on security.

Cross Reference: Lucent Technologies' WWW site can be found at http://www.attsa.com/.

NOTE: Appendix A, "How to Get More Information," contains a massive listing of security training resources as well as general information about where to acquire good security information.

Despite the availability of such training, today's average company is without a clue. In a captivating report (Why Safeguard Information?) from Abo Akademi University in Finland, researcher Thomas Finne estimated that only 15 percent of all Finnish companies had an individual employed expressly for the purpose of information security. The researcher wrote:

The result of our investigation showed that the situation had got even worse; this is very alarming. Pesonen investigated the security in Finnish companies by sending out questionnaires to 453 companies with over 70 employees. The investigation showed that those made responsible for information security in the companies spent 14.5 percent of their working time on information security. In an investigation performed in the UK over 80 percent of the respondents claimed to have a department or individual responsible for information technology (IT) security. The Brits made some extraordinary claims! "Of course we have an information security department. Doesn't everyone?" In reality, the percentage of companies that do is likely far less. One survey conducted by the Computer Security Institute found that better than 50 percent of all survey participants didn't even have written security policies and procedures.

The Problems with PC-Based Operating Systems

It should be noted that in America, the increase in servers maintained by those new to the Internet poses an additional education problem. Many of these individuals have used PC-based systems for the whole of their careers. PC-based operating systems and hardware were never designed for secure operation (although that is all about to change). Traditionally, PC users have had less-than-close contact with their vendors, except on issues relating to hardware and software configuration problems. This is not their fault. The PC community is market based and market driven. Vendors never sold the concept of security; they sold the concept of user friendliness, convenience, and standardization of applications. In these matters, vendors have excelled. The functionality of some PC-based applications is extraordinary.

Nonetheless, programmers are often brilliant in their coding and design of end-user applications but have poor security knowledge. Or, they may have some security knowledge but be unable to apply it because they cannot anticipate certain variables. The unknown variable (call it foo) represents the innumerable differences and subtleties involved with other applications that run on the same machine. These will undoubtedly be designed by different individuals and vendors, unknown to the programmer. It is not unusual for the combination of two third-party products to result in the partial compromise of a system's security. Similarly, applications intended to provide security can, when run on PC platforms, deteriorate or otherwise be rendered less secure. The typical example is the famous encryption utility Pretty Good Privacy (PGP) when used in the Microsoft Windows environment.

PGP

PGP operates by applying complex algorithms. These operations result in very high-level encryption. In some cases, if the user so specifies, PGP can provide military-grade encryption to a home user. The system utilizes the public key/private key pair scheme. In this scheme, each message is encrypted only after the user provides a passphrase, or secret code. The length of this passphrase may vary. Some people use the entire first line of a poem or literary text. Others use lines from a song or other phrases that they will not easily forget. In any event, this passphrase must be kept completely secret. If it is exposed, the encrypted data can be decrypted, altered, or otherwise accessed by unauthorized individuals.
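To illustrate why the passphrase is so critical, here is a minimal, purely conceptual sketch in Python. It is not how PGP itself stores or protects keys (PGP uses its own string-to-key methods); it only shows the general idea that a key is derived from the passphrase, so whoever learns the passphrase can unlock whatever that key protects.

import hashlib, os

# Conceptual sketch only: derive an encryption key from a passphrase.
# PGP's actual key storage differs; the point is simply that the
# passphrase is the secret on which everything else hangs.
passphrase = "the first line of a poem I will not forget"
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
print(key.hex())    # this derived key would protect the stored private key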

In its native state, compiled for MS-DOS, PGP operates from a command-line interface or a DOS prompt. This in itself presents no security issue. The problem is that many people find this inconvenient and therefore use a front-end, a Microsoft Windows-based application through which they access the PGP routines. When the user makes use of such a front-end, the passphrase gets written into the Windows swap file. If that swap file is permanent, the passphrase can later be retrieved from it, though doing so takes a fairly powerful machine. I've tried this on several occasions with differently configured machines. With a 20MB swap file on an IBM-compatible DX66 sporting 8-16MB of RAM, it is a formidable task that will likely freeze the machine. This also depends on the utility you are using to do the search. Not surprisingly, the most effective utility for performing such a search is GREP.

NOTE: GREP is a utility that comes with many C language packages. It also comes stock on any UNIX distribution. GREP works in a way quite similar to the FIND.EXE command in DOS. Its purpose is to search specified files for a particular string of text. For example, to find the word SEARCH in all files with a *.C extension, you would issue the following command:

GREP SEARCH *.C

There are free versions of GREP available on the Internet for a variety of operating systems, including but not limited to UNIX, DOS, OS/2, and 32-bit Microsoft Windows environments.

In any event, the difficulty drops drastically when you use a machine with a processor faster than 100MHz and 32MB or more of RAM.
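For readers who want to see what such a search looks like in practice, here is a minimal sketch in Python rather than GREP. The swap-file path and the passphrase are hypothetical; the script simply scans a large binary file in chunks for a known plaintext string, which is essentially what GREP does.

# Minimal sketch: look for a known plaintext string inside a large binary
# file (such as a permanent swap file) without loading the whole file into
# memory. The path and phrase below are illustrative only.
def find_string(path, phrase, chunk_size=1 << 20):
    needle = phrase.encode("latin-1")
    overlap = len(needle) - 1
    offset = 0
    tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return None                        # not found
            data = tail + chunk
            index = data.find(needle)
            if index != -1:
                return offset - len(tail) + index  # byte position in the file
            tail = data[-overlap:] if overlap else b""
            offset += len(chunk)

# Hypothetical usage:
# position = find_string(r"C:\WINDOWS\WIN386.SWP", "my secret passphrase")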

My point is this: It is through no fault of PGP's programmers that the passphrase gets caught in the swap file. PGP is not flawed, nor are the platforms that use swapped memory. Nevertheless, platforms that swap memory to disk are not secure and probably never will be.

Cross Reference: For more information about PGP, visit http://web.mit.edu/network/pgp.html. This is the MIT PGP distribution site for U.S. residents. PGP renders sufficiently powerful encryption that certain versions are not available for export. Exporting such versions is a crime. The referenced site has much valuable information about PGP, including a FAQ, a discussion of file formats, pointers to books, and of course, the free distribution of the PGP software.

Thus, even when designing security products, programmers are often faced with unforeseen problems over which they can exert no control.

TIP: Techniques of secure programming (methods of programming that enhance security on a given platform) are becoming more popular. These assist the programmer in developing applications that at least won't weaken network security. Chapter 30, "Language, Extensions, and Security," addresses some secure programming techniques as well as problems generally associated with programming and security.

The Internet's Design

When engineers were put to the task of creating an open, fluid, and accessible Internet, their enthusiasm and craft were, alas, too potent. The Internet is the most remarkable creation ever erected by humankind in this respect. There are dozens of ways to get a job done on the Internet; there are dozens of protocols with which to do it.

Are you having trouble retrieving a file via FTP? Can you retrieve it by electronic mail? What about over HTTP with a browser? Or maybe a Telnet-based BBS? How about Gopher? NFS? SMB? The list goes on.

Heterogeneous networking was once a dream. It is now a confusing, tangled mesh of internets around the globe. Each of the protocols mentioned forms one aspect of the modern Internet. Each also represents a little network of its own. Any machine running modern implementations of TCP/IP can utilize all of them and more. Security experts have for years been running back and forth before a dam of information and protocols, plugging the holes with their fingers. Crackers, meanwhile, come armed with icepicks, testing the dam here, there, and everywhere.

Part of the problem is in the Internet's basic design. Traditionally, most services on the Internet rely on the client/server model. The task before a cracker, therefore, is a limited one: Go to the heart of the service and crack that server.

I do not see that situation changing in the near future. Today, client/server programming is the most sought-after skill. The client/server model works effectively, and there is no viable replacement at this point.

There are other problems associated with the Internet's design, specifically related to the UNIX platform. One is access control and privileges. This is covered in detail in Chapter 17, "UNIX: The Big Kahuna," but I want to mention it here.

In UNIX, every process more or less has some level of privilege on the system. That is, these processes must have, at minimum, privilege to access the files they are to work on and the directories into which those files are deposited. In most cases, common processes and programs are already so configured by default at the time of the software's shipment. Beyond this, however, a system administrator may determine specific privilege schemes, depending on the needs of the situation. The system administrator is offered a wide variety of options in this regard. In short, system administrators are capable of restricting access to one, five, or 100 people. In addition, those people (or groups of people) can also be limited to certain types of access, such as read, write, execute, and so forth.

In addition to this system being complex (and therefore requiring experience on the part of the administrator), it also presents certain inherent security risks. One is that the privileges granted to a process or a user may allow more access than was originally intended. For example, a utility that requires any form of root access (the highest level of privilege) should be viewed with caution. If someone finds a flaw within that program and can effectively exploit it, that person will gain a high level of access. Note that strong access-control features have also been integrated into the Windows NT operating system, so this phenomenon is not exclusive to UNIX. Novell NetWare also offers some very strong access-control features.
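To make the read/write/execute scheme concrete, here is a small Python sketch that decodes the classic UNIX permission bits of a file for owner, group, and other, and flags the setuid bit that makes root-owned utilities so sensitive. The file name in the usage comment is only an example.

import os
import stat

# Decode the classic UNIX permission bits into read/write/execute flags
# for owner, group, and other.
def describe_permissions(path):
    mode = os.stat(path).st_mode
    classes = (("owner", stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR),
               ("group", stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP),
               ("other", stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH))
    for who, r, w, x in classes:
        flags = ("r" if mode & r else "-") + \
                ("w" if mode & w else "-") + \
                ("x" if mode & x else "-")
        print(who, flags)
    if mode & stat.S_ISUID:
        print("setuid: the program runs with its owner's privileges")

# Example (a classic setuid-root utility on many UNIX systems):
# describe_permissions("/usr/bin/passwd")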

All these factors seriously influence the state of security on the Internet. There are clearly hundreds of little things to know about it. This extends into heterogeneous networking as well. A good system administrator should ideally have knowledge of at least three platforms. This brings us to another consideration: Because the Internet's design is so complex, the people who address its security charge substantial prices for their services. Thus, the complexity of the Internet also influences more concrete considerations.

There are other aspects of Internet design and composition that authors often cite as sources of insecurity. For example, the Net allows a certain amount of anonymity; this issue has good and bad aspects. The good aspects are that individuals who need to communicate anonymously can do so if need be.

Anonymity on the Net

There are plenty of legitimate reasons for anonymous communication. One is that people living in totalitarian states can smuggle out news about human rights violations. (At least, this reason is regularly tossed around by media people. It is en vogue to say such things, even though the percentage of people using the Internet for this noble activity is incredibly small.) Nevertheless, there is no need to provide excuses for why anonymity should exist on the Internet. We do not need to justify it. After all, there is no reason why Americans should be forbidden from doing something on a public network that they can lawfully do at any other place. If human beings want to communicate anonymously, that is their right.

Most people use remailers to communicate anonymously. These are servers configured to accept and forward mail messages. During that process, the header and originating address are stripped from the message, thereby concealing its author and his or her location. In their place, the address of the anonymous remailer is inserted.
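As a rough illustration of the header-stripping step (and only that step), here is a short Python sketch. It is not the implementation of any real remailer; production remailers add delays, reordering, and encryption. The remailer address used here is a placeholder.

from email import message_from_string

# Headers that could identify the author or the originating machine.
IDENTIFYING = ("From", "Sender", "Reply-To", "Return-Path",
               "Received", "Message-ID", "X-Mailer")

def anonymize(raw_message, remailer_address="nobody@remailer.example"):
    msg = message_from_string(raw_message)
    for header in IDENTIFYING:
        del msg[header]                # removes every instance of that header
    msg["From"] = remailer_address     # the remailer now appears as the sender
    return msg.as_string()             # ready to be forwarded to the recipient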

Cross Reference: To learn more about anonymous remailers, check out the FAQ at http://www.well.com/user/abacard/remail.html. This FAQ provides many useful links to other sites dealing with anonymous remailers.

Anonymous remailers (hereafter anon remailers) have been the subject of controversy in the past. Many people, particularly members of the establishment, feel that anon remailers undermine the security of the Internet. Some portray the situation as being darker than it really is:

By far the greatest threat to the commercial, economic and political viability of the Global Information Infrastructure will come from information terrorists... The introduction of Anonymous Re-mailers into the Internet has altered the capacity to balance attack and counter-attack, or crime and punishment.1

1 Paul A. Strassmann, U.S. Military Academy, West Point; Senior Advisor, SAIC and William Marlow, Senior Vice President, Science Applications International Corporation (SAIC). January 28-30, 1996. Symposium on the Global Information Infrastructure: Information, Policy & International Infrastructure.

I should explain that the preceding document was delivered by individuals associated with the intelligence community. Intelligence community officials would naturally be opposed to anonymity, for it represents a threat to effective domestic intelligence-gathering procedures. That is a given. Nevertheless, one occasionally sees even journalists making similar statements, such as this one by Walter S. Mossberg:

In many parts of the digital domain, you don't have to use your real name. It's often impossible to figure out the identity of a person making political claims...When these forums operate under the cloak of anonymity, it's no different from printing a newspaper in which the bylines are admittedly fake, and the letters to the editor are untraceable.

This is an interesting statement. For many years, the U.S. Supreme Court has been unwilling to require that political statements be accompanied by the identity of the author. This refusal is to ensure that free speech is not silenced. In early American history, pamphlets were distributed in this manner. Naturally, if everyone had to sign their name to such documents, potential protesters would be driven into the shadows. This is inconsistent with the concepts on which the country was founded.

To date, there has been no convincing argument for why anon remailers should not exist. Nevertheless, the subject remains engaging. One amusing exchange occurred during a hearing in Pennsylvania on the constitutionality of the Communications Decency Act, an act brought by forces in Congress that were vehemently opposed to pornographic images being placed on the Internet. The hearing occurred on March 22, 1996, before the Honorable Dolores K. Sloviter, Chief Judge, United States Court of Appeals for the Third Circuit. The case was American Civil Liberties Union, et al (plaintiffs) v. Janet Reno, the Attorney General of the United States. The discussion went as follows:

Q: Could you explain for the Court what Anonymous Remailers are?

A: Yes, Anonymous Remailers and their -- and a related service called Pseudonymity Servers are computer services that privatize your identity in cyberspace. They allow individuals to, for example, post content for example to a Usenet News group or to send an E-mail without knowing the individual's true identity.

The difference between an anonymous remailer and a pseudonymity server is very important because an anonymous remailer provides what we might consider to be true anonymity to the individual because there would be no way to know on separate instances who the person was who was making the post or sending the e-mail.

But with a pseudonymity server, an individual can have what we consider to be a persistent presence in cyberspace, so you can have a pseudonym attached to your postings or your e-mails, but your true identity is not revealed. And these mechanisms allow people to communicate in cyberspace without revealing their true identities.

Q: I just have one question, Professor Hoffman, on this topic. You have not done any study or survey to sample the quantity or the amount of anonymous remailing on the Internet, correct?

A: That's correct. I think by definition it's a very difficult problem to study because these are people who wish to remain anonymous and the people who provide these services wish to remain anonymous.

Indeed, the court was clearly faced with a catch-22. In any case, whatever one's position might be on anonymous remailers, they appear to be a permanent feature of the Internet. Programmers have developed remailer applications to run on almost any operating system, allowing the little guy to start a remailer with his PC.

Cross Reference: If you have more interest in anon remailers, visit http://www.cs.berkeley.edu/~raph/remailer-list.html. This site contains extensive information on these programs, as well as links to personal anon remailing packages and other software tools for use in implementing an anonymous remailer.

In the end, e-mail anonymity on the Internet has a negligible effect on real issues of Internet security. The days when one could exploit a hole by sending a simple e-mail message are long gone. Those making protracted arguments against anonymous e-mail are either nosy or outraged that someone can implement a procedure that they cannot. If e-mail anonymity is an issue at all, it is for those in national security. I readily admit that spies could benefit from anonymous remailers. In most other cases, however, the argument expends good energy that could be better spent elsewhere.

Proprietarism

Yes, another ism. Before I start ranting, I want to define this term as it applies here. Proprietarism is a practice undertaken by commercial vendors in which they attempt to inject into the Internet various forms of proprietary design. By doing so, they hope to create profits in an environment that has previously been free of commercial control. It is the computer-age equivalent of colonialism joined to capitalism, transplanted onto the Internet. It interferes with the Internet's security structure and defeats the Internet's capability to serve all individuals equally and effectively.

ActiveX

A good example of proprietarism in action is Microsoft Corporation's ActiveX technology.

Cross Reference: Those users unfamiliar with ActiveX technology should visit http://www.microsoft.com/activex/. Users who already have some experience with ActiveX should go directly to the Microsoft page that addresses the security features: http://www.microsoft.com/security/.

To understand the impact of ActiveX, a brief look at HTML would be instructive. HTML was an incredible breakthrough in Internet technology. Imagine the excitement of the researchers when they first tested it! It was (and still is) a markup language by which any user, on any machine, anywhere in the world could view a document, and that document would look pretty much the same to any other user, however situated. What an extraordinary breakthrough. It would release us forever from proprietary designs. Whether you used a Mac, an Alpha, an Amiga, a SPARC, an IBM compatible, or a tire hub (TRS-80, maybe?), you were in. You could see all the wonderful information available on the Net, just like the next guy. Not any more.

ActiveX technology is a new method of presenting Web pages. It is designed to interface with Microsoft's Internet Explorer. If you don't have it, forget it. Most WWW pages designed with it will be nonfunctional for you either in whole or in part.

That situation may change, because Microsoft is pushing for ActiveX extensions to be included within the HTML standardization process. Nevertheless, such extensions (including scripting languages or even compiled languages) do alter the state of Internet security in a wide and encompassing way.

First, they introduce new and untried technologies that are proprietary in nature. Because they are proprietary, the technologies cannot be closely examined by the security community. Moreover, they are not cross-platform and therefore impose limitations on the Net, as opposed to heterogeneous solutions. To examine the problem firsthand, you may want to visit a page established by Kathleen A. Jackson, Team Leader, Division Security Office, Computing, Information, and Communications Division at the Los Alamos National Laboratory. Jackson points to key problems in ActiveX. On her WWW page, she writes:

...The second big problem with ActiveX is security. A program that downloads can do anything the programmer wants. It can reformat your hard drive or shut down your computer...

This issue is covered more extensively in a paper delivered by Simson Garfinkel at HotWired. When Microsoft was alerted to the problem, the solution was to recruit a company to create digital signatures for ActiveX controls. Each control is supposed to be signed by its programmer or creator. The company responsible for this digital signature scheme has every software publisher sign a software publisher's pledge, an agreement not to sign any software that contains malicious code. If a user surfs to a page that contains an unsigned control, Microsoft's Internet Explorer puts up a warning message box that asks whether you want to accept the unsigned control.

Cross Reference: Find the paper delivered by Simson Garfinkel at HotWired at http://www.packet.com/packet/garfinkel/.

You cannot imagine how absurd this seems to security professionals. What is to prevent a software publisher from submitting malicious code, signed or unsigned, on any given Web site? If it is signed, does that guarantee that the control is safe? The Internet at large is therefore resigned to take the software author or publisher at his or her word. This is impractical and unrealistic. And, although Microsoft and the company responsible for the signing initiative will readily offer assurances, what evidence is there that such signatures cannot be forged? More importantly, how many small-time programmers will bother to sign their controls? And lastly, how many users will refuse to accept an unsigned control? Most users confronted with the warning box have no idea what it means. All it represents to them is an obstruction that is preventing them from getting to a cool Web page.

There are now all manner of proprietary programs inhabiting the Internet. Few have been truly tested for security. I expect this will become only more prevalent, and, to Microsoft's credit, ActiveX technology creates some of the most stunning WWW pages available on the Net. These pages have increased functionality, including drop-down boxes, menus, and other features that make surfing the Web a pleasure. Nevertheless, serious security studies need to be made before these technologies open an entirely new frontier for those peddling malicious code, viruses, and code designed to circumvent security.

Cross Reference: To learn more about the HTML standardization process, visit the site of the World Wide Web Consortium (http://www.w3.org). If you already know a bit about the subject but want specifics about what types of HTML tags and extensions are supported, you should read W3C's activity statement on this issue (http://www.w3.org/pub/WWW/MarkUp/Activity). One interesting area of development is W3C's work on support for the disabled.

Proprietarism is a dangerous force on the Internet, and it's gaining ground quickly. To compound this problem, some of the proprietary products are excellent. It is therefore perfectly natural for users to gravitate toward these applications. Users are most concerned with functionality, not security. Therefore, the onus is on vendors, and this is a problem. If vendors ignore security hazards, there is nothing anyone can do. One cannot, for example, forbid insecure products from being sold on the market. That would be an unreasonable restraint of interstate commerce and ground for an antitrust claim. Vendors certainly have every right to release whatever software they like, secure or not. At present, therefore, there is no solution to this problem.

Extensions, languages, or tags that probably warrant examination include

JavaScript
VBScript
ActiveX

JavaScript is owned by Netscape, and VBScript and ActiveX are owned by Microsoft. These languages are the weapons of the war between these two giants. I doubt that either company objectively realizes that there's a need for both technologies. For example, Netscape cannot shake Microsoft's hold on the desktop market. Equally, Microsoft cannot supply the UNIX world with products. The Internet would probably benefit greatly if these two titans buried the hatchet in something besides each other.

The Trickling Down of Technology

As discussed earlier, there is the problem of high-level technology trickling down from military, scientific, and security sources. Today, the average cracker has tools at his or her disposal that most security organizations use in their work. Moreover, the machines on which crackers use these tools are extremely powerful, therefore allowing faster and more efficient cracking.

Government agencies often supply links to advanced security tools. At these sites, the tools are often free. They number in the hundreds and encompass nearly every aspect of security. In addition to these tools, government and university sites also provide very technical information regarding security. For crackers who know how to mine such information, these resources are invaluable. Some key sites are listed in Table 5.1.

Table 5.1. Some major security sites for information and tools.

Purdue University http://www.cs.purdue.edu//coast/archive/
Raptor Systems http://www.raptor.com/library/library.html
The Risks Forum http://catless.ncl.ac.uk/Risks
FIRST http://www.first.org/
DEFCON http://www.defcon.org/

The level of technical information at such sites is high. This is in contrast to many fringe sites that provide information of little practical value to the cracker. But not all fringe sites are so benign. Crackers have become organized, and they maintain a wide variety of servers on the Internet. These are typically established using free operating systems such as Linux or FreeBSD. Many such sites end up establishing a permanent wire to the Net. Others are more unreliable and may appear at different times via dynamic IP addresses. I should make it clear that not all fringe sites are cracking sites. Many are legitimate hacking stops that provide information freely to the Internet community as a service of sorts. In either case, both hackers and crackers have been known to create excellent Web sites with voluminous security information.

The majority of cracking and hacking sites are geared toward UNIX and IBM-compatible platforms. There is a noticeable absence of quality information for Macintosh users. In any event, in-depth security information is available on the Internet for any interested party to view.

So, the information is trafficked. There is no solution to this problem, and there shouldn't be. It would be unfair to halt the education of many earnest, responsible individuals for the malicious acts of a few. So advanced security information and tools will remain available.

Human Nature

We have arrived at the final (and probably most influential) force at work in weakening Internet security: human nature. Humans are, by nature, a lazy breed. To most users, the subject of Internet security is boring and tedious. They assume that the security of the Internet will be taken care of by experts.

To some degree, there is truth to this. If the average user's machine or network is compromised, who should care? They are the only ones who can suffer (as long as they are not connected to a network other than their own). The problem is, most will be connected to some other network. The Internet is one enterprise that truly relies on the strength of its weakest link. I have seen crackers work feverishly on a single machine when that machine was not their ultimate objective. Perhaps the machine had some trust relationship with another machine that was their ultimate objective. To crack a given region of cyberspace, crackers may often have to take alternate or unusual routes. If one workstation on the network is vulnerable, they are all potentially vulnerable as long as a relationship of trust exists.

Also, you must think in terms of the smaller businesses because these will be the great majority. These businesses may not be able to withstand disaster in the same way that larger firms can. If you run a small business, when was the last time you performed a complete backup of all information on all your drives? Do you have a disaster-recovery plan? Many companies do not. This is an important point. I often get calls from companies that are about to establish permanent connectivity. Most of them are unprepared for emergencies.

There are two final aspects of human nature that influence the evolution of security on the Internet. Fear is one. Most companies are afraid to communicate with outsiders regarding security. For example, the majority of companies will not tell anyone if their security has been breached. When a Web site is cracked, it is front-page news; this cannot be avoided. When a system is cracked in some other way (through a different point of entry), press coverage (or any exposure) can usually be avoided. So, a company may simply move on, deny any incident, and secure its network as best it can. This deprives the security community of much-needed statistics and data.

The last human factor here is curiosity. Curiosity is a powerful facet of human nature that even the youngest child can understand. One of the most satisfying human experiences is discovery. Investigation and discovery are the things that life is really made of. We learn from the moment we are born until the moment that we die, and along that road, every shred of information is useful. Crackers are not so hard to understand. It comes down to basics: Why is this door locked? Can I open it? As long as this aspect of human experience remains, the Internet may never be entirely secure. Oh, it will ultimately be secure enough for credit-card transactions and the like, but someone will always be there to crack it.

Does the Internet Really Need to Be Secure?

Yes. The Internet does need to be secure and not simply for reasons of national security. Today, it is a matter of personal security. As more financial institutions gravitate to the Internet, America's financial future will depend on security. Many users may not be aware of the number of financial institutions that offer online banking. One year ago, this was a relatively uncommon phenomenon. Nevertheless, by mid-1996, financial institutions across the country were offering such services to their customers. Here are a few:

Wells Fargo Bank
Sanwa Bank
Bank of America
City National Bank of Florida
Wilber National Bank of Oneonta, New York
The Mechanics Bank of Richmond, California
COMSTAR Federal Credit Union of Gaithersburg, Maryland

The threat from lax security is more than just a financial one. Banking records are extremely personal and contain revealing information. Until the Internet is secure, this information is available to anyone with the technical prowess to crack a bank's online service. It hasn't happened yet (I assume), but it will.

Also, the Internet needs to be secure so that it does not degenerate into one avenue of domestic spying. Some law-enforcement organizations are already using Usenet spiders to narrow down the identities of militia members, militants, and other political undesirables. The statements made by such people on Usenet are archived away, you can be sure. This type of logging activity is not unlawful. There is no constitutional protection against it, any more than there is a constitutional right for someone to demand privacy when they scribble on a bathroom wall.

Private e-mail is a different matter, though. Law enforcement agents need a warrant to tap someone's Internet connection. Still, in case such monitoring becomes widespread, all users should at least be aware of the encryption products available, both free and commercial (I will discuss this and related issues in Part VII of this book, "The Law").

For all these reasons, the Internet must become secure.

Can the Internet Be Secure?

Yes. The Internet can be secure. But in order for that to happen, some serious changes must be made, including the heightening of public awareness of the problem. Most users still regard the Internet as a toy, an entertainment device that is good for a couple of hours on a rainy Sunday afternoon. That needs to change in the coming years.

The Internet is likely the single most important advance of the century. Within a few years, it will be a powerful force in the lives of most Americans. For that force to be overwhelmingly positive, Americans need to be properly informed.

Members of the media have certainly helped the situation, even though media coverage of the Internet isn't always entirely accurate. I have seen the rise of technology columns in newspapers throughout the country. Good technology writers are out there, trying to bring the important information home to their readers. I suspect that in the future, more newspapers will develop their own sections for Internet news, similar to the sections allocated for sports, local news, and human interest.

Equally, many users are security-aware, and that number is growing each day. As public education increases, vendors will meet the demand of their clientele.

Summary

In this chapter, I have established the following:

The Internet is not secure.

Education about security is lacking.

Proprietary designs are weakening Internet security.

The availability of high-grade technological information both strengthens and weakens Net security.

There is a real need for Internet security.

Internet security relies as much on public as on private education.

Those things having been established, I want to quickly examine the consequences of poor Internet security. Thus, in the next chapter, I will discuss Internet warfare. After covering that subject, I will venture into entirely new territory as we begin to explore the tools and techniques that are actually applied in Internet security.

A Brief Primer on TCP/IP

This chapter examines the Transmission Control Protocol (TCP) and the Internet Protocol (IP). These two protocols (or networked methods of data transport) are generally referred to together as TCP/IP.

You can read this chapter thoroughly to gain an in-depth understanding of how information is routed across the Internet or you can use this chapter as an extended glossary, referring to it only when encountering unfamiliar terms later in this book.

The chapter begins with fundamental concepts and closes with a comprehensive look at TCP/IP. The chapter is broken into three parts. The first part answers some basic questions you might have, including

What is TCP/IP?

What is the history of TCP/IP?

What platforms support TCP/IP?

The second portion of the chapter addresses how TCP/IP actually works. In that portion, I will focus on the most popular services within the TCP/IP suite. These services (or modes of transport) comprise the greater portion of the Internet as we know it today.

The final portion of this chapter explores key TCP/IP utilities with which each user must become familiar. These utilities are of value in maintenance and monitoring of any TCP/IP network.

Note that this chapter is not an exhaustive treatment of TCP/IP. It provides only the minimum knowledge needed to continue reading this book. Throughout this chapter, however, I supply links to documents and other resources from which the reader can gain an in-depth knowledge of TCP/IP.

TCP/IP: The Basics

This section is a quick overview of TCP/IP. It is designed to prepare you for various terms and concepts that arise within this chapter. It assumes no previous knowledge of IP protocols.

What Is TCP/IP?

TCP/IP refers to two network protocols (or methods of data transport) used on the Internet. They are Transmission Control Protocol and Internet Protocol, respectively. These network protocols belong to a larger collection of protocols, or a protocol suite. These are collectively referred to as the TCP/IP suite.

Protocols within the TCP/IP suite work together to provide data transport on the Internet. In other words, these protocols provide nearly all services available to today's Net surfer. Some of those services include

Transmission of electronic mail

File transfers

Usenet news delivery

Access to the World Wide Web

There are two classes of protocol within the TCP/IP suite, and I will address both in the following pages. Those two classes are

The network-level protocol

The application-level protocol

Network-Level Protocols

Network-level protocols manage the discrete mechanics of data transfer. These protocols are typically invisible to the user and operate deep beneath the surface of the system. For example, the IP protocol provides packet delivery of the information sent between the user and remote machines. It does this based on a variety of information, most notably the IP address of the two machines. Based on this and other information, IP routes the information to its intended destination. Throughout this process, IP interacts with other network-level protocols engaged in data transport. Short of using network utilities (perhaps a sniffer or another device that reads IP datagrams), the user will never see IP's work on the system.

Application-Level Protocols

Conversely, application-level protocols are visible to the user in some measure. For example, File Transfer Protocol (FTP) is visible to the user. The user requests a connection to another machine to transfer a file, the connection is established, and the transfer begins. During the transfer, a portion of the exchange between the user's machine and the remote machine is visible (primarily error messages and status reports on the transfer itself, for example, how many bytes of the file have been transferred at any given moment).
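For a sense of what that user-visible exchange looks like, here is a brief sketch using Python's standard ftplib module. The server name and file are hypothetical; the status replies (numeric codes such as 230 and 226) are the visible portion of the FTP conversation.

from ftplib import FTP

# Illustrative only: the user-visible side of an application-level protocol.
ftp = FTP("ftp.example.com")          # connect to the (hypothetical) remote machine
print(ftp.login())                    # status reply, e.g. "230 Login successful."
ftp.retrlines("LIST")                 # directory listing, printed line by line
with open("readme.txt", "wb") as out:
    ftp.retrbinary("RETR readme.txt", out.write)   # transfer a file
ftp.quit()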

For the moment, this explanation will suffice: TCP/IP refers to a collection of protocols that facilitate communication between machines over the Internet (or other networks running TCP/IP).

The History of TCP/IP

In 1969, the Defense Advanced Research Projects Agency (DARPA) commissioned development of a network over which its research centers might communicate. Its chief concern was this network's capability to withstand a nuclear attack. In short, if the Soviet Union launched a nuclear attack, it was imperative that the network remain intact to facilitate communication. The design of this network had several other requisites, the most important of which was this: It had to operate independently of any centralized control. Thus, if one machine were destroyed (or 10, or 100), the network would remain operational.

The prototype for this system emerged quickly, based in part on research done in 1962 and 1963. That prototype was called ARPANET. ARPANET reportedly worked well, but was subject to periodic system crashes. Furthermore, long-term expansion of that network proved costly. A search was initiated for a more reliable set of protocols; that search ended in the mid-1970s with the development of TCP/IP.

TCP/IP had significant advantages over other protocols. For example, TCP/IP was lightweight (it required meager network resources). Moreover, TCP/IP could be implemented at much lower cost than the other choices then available. Because of these advantages, TCP/IP became exceedingly popular. In 1983, TCP/IP was integrated into release 4.2 of Berkeley Software Distribution (BSD) UNIX. Its integration into commercial forms of UNIX soon followed, and TCP/IP was established as the Internet standard. It has remained so (as of this writing).

As more users flock to the Internet, however, TCP/IP is being reexamined. More users translates to greater network load. To ease that network load and offer greater speeds of data transport, some researchers have suggested implementing TCP/IP via satellite transmission. Unfortunately, such research has thus far produced dismal results. TCP/IP is apparently unsuitable for this implementation.

Today, TCP/IP is used for many purposes, not just the Internet. For example, intranets are often built using TCP/IP. In such environments, TCP/IP can offer significant advantages over other networking protocols. One such advantage is that TCP/IP works on a wide variety of hardware and operating systems. Thus, one can quickly and easily create a heterogeneous network using TCP/IP. Such a network might have Macs, IBM compatibles, Sun Sparcstations, MIPS machines, and so on. Each of these can communicate with its peers using a common protocol suite. For this reason, since it was first introduced in the 1970s, TCP/IP has remained extremely popular. In the next section, I will discuss implementation of TCP/IP on various platforms.

What Platforms Support TCP/IP?

Most platforms support TCP/IP. However, the quality of that support can vary. Today, most mainstream operating systems have native TCP/IP support (that is, TCP/IP support that is built into the standard operating system distribution). However, older operating systems on some platforms lack such native support. Table 6.1 describes TCP/IP support for various platforms. If a platform has native TCP/IP support, it is labeled as such. If not, the name of a TCP/IP application is provided.

Table 6.1. Platforms and their support for TCP/IP.

Platform        TCP/IP Support
UNIX            Native
DOS             Piper/IP by Ipswitch
Windows         TCPMAN by Trumpet Software
Windows 95      Native
Windows NT      Native
Macintosh       MacTCP or OpenTransport (Sys 7.5+)
OS/2            Native
AS/400 OS/400   Native

Platforms that do not natively support TCP/IP can still implement it through the use of proprietary or third-party TCP/IP programs. In these instances, third-party products can offer varied functionality. Some offer very good support and others offer marginal support.

For example, some third-party products provide the user with only basic TCP/IP. For most users, this is sufficient. (They simply want to connect to the Net, get their mail, and enjoy easy networking.) In contrast, certain third-party TCP/IP implementations are comprehensive. These may allow manipulation of compression, methods of transport, and other features common to the typical UNIX TCP/IP implementation.

Widespread third-party support for TCP/IP has been around for only a few years. Several years ago, for example, TCP/IP support for DOS boxes was very slim.

TIP: There is actually a wonderful product called Minuet that can be used in conjunction with a packet driver on LANs. Minuet derived its name from the term Minnesota Internet Users Essential Tool. Minuet offers quick and efficient access to the Net through a DOS-based environment. This product is still available free of charge at many locations, including ftp://minuet.micro.umn.edu/pub/minuet/.

One interesting point about non-native, third-party TCP/IP implementations is this: Most of them do not provide servers within their distributions. Thus, although a user can connect to remote machines to transfer a file, the user's machine cannot accept such a request. For example, a Windows 3.11 user using TCPMAN cannot--without installing additional software--accept a file-transfer request from a remote machine. Later in this chapter, you'll find the names of a few such additional packages for those who are interested in providing services via TCP/IP.

How Does TCP/IP Work?

TCP/IP operates through the use of a protocol stack. This stack is the sum total of all protocols necessary to complete a single transfer of data between two machines. (It is also the path that data takes to get out of one machine and into another.) The stack is broken into layers, five of which are of concern here. To grasp this layer concept, examine Figure 6.1.

Figure 6.1. The TCP/IP stack.

Application layer: When the user initiates a data transfer, this layer passes the request to the Transport layer.

Transport layer: The Transport layer attaches a header and passes the data to the Network layer.

Network layer: At the Network layer, source and destination IP addresses are added for routing purposes.

Datalink layer: The Datalink layer executes error checking over the flow of data between the above protocols and the Physical layer.

Physical layer: The Physical layer moves the data in and out through the transmission medium. (The medium might be Ethernet via coax, PPP via a modem, and so on.)

After data has passed through the process illustrated in Figure 6.1, it travels to its destination on another machine or network. There, the process is executed in reverse (the data first meets the physical layer and subsequently travels its way up the stack). Throughout this process, a complex system of error checking is employed both on the originating and destination machine.
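To make the notion of encapsulation more tangible, here is a toy Python sketch. It is purely conceptual, not a real protocol implementation: each function stands in for a layer and simply prepends a made-up header to whatever it received from the layer above, and the receiving machine would peel those headers off in reverse order.

# Conceptual sketch of encapsulation only; the headers are placeholders.
def application_layer(user_data):
    return b"APP|" + user_data

def transport_layer(segment):          # e.g. TCP adds ports and sequence numbers
    return b"TCP|ports,seq|" + segment

def network_layer(packet):             # e.g. IP adds source/destination addresses
    return b"IP|src_addr,dst_addr|" + packet

def datalink_layer(frame):             # e.g. Ethernet adds hardware addresses and a checksum
    return b"ETH|src_mac,dst_mac|" + frame + b"|CRC"

wire_data = datalink_layer(network_layer(transport_layer(application_layer(b"Hello"))))
print(wire_data)
# b'ETH|src_mac,dst_mac|IP|src_addr,dst_addr|TCP|ports,seq|APP|Hello|CRC'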

Each layer of the stack can send data to and receive data from its adjoining layer. Each layer is also associated with multiple protocols. At each tier of the stack, these protocols are hard at work, providing the user with various services. The next section of this chapter examines these services and the manner in which they are associated with layers in the stack. You will also examine their functions, the services they provide, and their relationship to security.

The Individual Protocols

You have examined how data is transmitted via TCP/IP using the protocol stack. Now I want to zoom in to identify the key protocols that operate within that stack. I will begin with network-level protocols.

Network-Level Protocols

Network protocols are those protocols that engage in (or facilitate) the transport process transparently. These are invisible to the user unless that user employs utilities to monitor system processes.

TIP: Sniffers are devices that can monitor such processes. A sniffer is a device--either hardware or software--that can read every packet sent across a network. Sniffers are commonly used to isolate network problems that, while invisible to the user, are degrading network performance. As such, sniffers can read all activity occurring between network-level protocols. Moreover, as you might guess, sniffers can pose a tremendous security threat. You will examine sniffers in Chapter 12, "Sniffers."

Important network-level protocols include

The Address Resolution Protocol (ARP)
The Internet Control Message Protocol (ICMP)
The Internet Protocol (IP)
The Transmission Control Protocol (TCP)

I will briefly examine each, offering only an overview. (Strictly speaking, TCP operates at the transport layer rather than the network layer, but it is covered here alongside the other core protocols.)

Cross Reference: For more comprehensive information about protocols (or the stack in general), I highly recommend Teach Yourself TCP/IP in 14 Days by Timothy Parker, Ph.D. (Sams Publishing).

The Address Resolution Protocol

The Address Resolution Protocol (ARP) serves the critical purpose of mapping Internet addresses into physical addresses. This is vital in routing information across the Internet. Before a message (or other data) is sent, it is packaged into IP packets, or blocks of information suitably formatted for Internet transport. These contain the numeric Internet (IP) address of both the originating and destination machines. Before this package can leave the originating computer, however, the hardware address of the recipient (destination) must be discovered. (Hardware addresses differ from Internet addresses.) This is where ARP makes its debut.

An ARP request message is broadcast on the subnet. The request is answered by the machine that holds the address in question (or by a router responding on its behalf), which replies with the requested hardware address. This reply is caught by the originating machine, and the transfer process can begin.

ARP's design includes a cache. To understand the ARP cache concept, consider this: Most modern HTML browsers (such as Netscape Navigator or Microsoft's Internet Explorer) utilize a cache. This cache is a portion of the disk (or memory) in which elements from often-visited Web pages are stored (such as buttons, headers, and common graphics). This is logical because when you return to those pages, these tidbits don't have to be reloaded from the remote machine. They will load much more quickly if they are in your local cache.

Similarly, ARP implementations include a cache. In this manner, hardware addresses of remote machines or networks are remembered, and this memory obviates the need to conduct subsequent ARP queries on them. This saves time and network resources.
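To make the cache idea concrete, here is a minimal sketch (in Python) of the bookkeeping an ARP cache performs: remember a mapping from an IP address to a hardware address, and expire stale entries so that old or forged mappings do not linger. The timeout value and the addresses shown are purely illustrative; real implementations choose their own.

import time

ARP_CACHE_TIMEOUT = 120  # seconds; illustrative only

cache = {}  # IP address -> (hardware address, time the entry was stored)

def remember(ip, hw):
    cache[ip] = (hw, time.time())

def lookup(ip):
    entry = cache.get(ip)
    if entry is None:
        return None                    # not cached: an ARP request must be broadcast
    hw, stored = entry
    if time.time() - stored > ARP_CACHE_TIMEOUT:
        del cache[ip]                  # stale entry: force a fresh ARP query
        return None
    return hw

remember("192.168.1.10", "00:40:05:a1:44:79")
print(lookup("192.168.1.10"))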

Can you guess what type of security risks might be involved in maintaining such an ARP cache? At this stage, it is not particularly important. However, address caching (not only in ARP but in all instances) does indeed pose a unique security risk. If such address-location entries are stored, it makes it easier for a cracker to forge a connection from a remote machine, claiming to hail from one of the cached addresses.

Cross Reference: Readers seeking in-depth information on ARP should see RFC 826 (http://www.freesoft.org/Connected/RFC/826).

Cross Reference: Another good reference for information on ARP is Margaret K. Johnson's piece about details of TCP/IP (excerpts from Microsoft LAN Manager TCP/IP Protocol)(http://www.alexia.net.au/~www/yendor/internetinfo/index.html).

The Internet Control Message Protocol

The Internet Control Message Protocol handles error and control messages that are passed between two (or more) computers or hosts during the transfer process. It allows those hosts to share that information. In this respect, ICMP is critical for diagnosis of network problems. Examples of diagnostic information gathered through ICMP include

When a host is down
When a gateway is congested or inoperable
Other failures on a network

TIP: Perhaps the most widely known ICMP implementation involves a network utility called ping. Ping is often used to determine whether a remote machine is alive. Ping's method of operation is simple: When the user pings a remote machine, packets are forwarded from the user's machine to the remote host. These packets are then echoed back to the user's machine. If no echoed packets are received at the user's end, the ping program usually generates an error message indicating that the remote host is down.
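Below is a minimal sketch of what ping does from the user's point of view: send a probe and report whether an echo came back. Rather than crafting raw ICMP packets (which typically requires root privileges), it simply drives the system's own ping command; the "-c 1" flag (one echo request) is the UNIX form, and the host shown is just an example.

import subprocess

def host_is_alive(host):
    # Ask the operating system's ping utility to send a single echo request.
    result = subprocess.run(["ping", "-c", "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    # ping exits with status 0 when an echo reply was received.
    return result.returncode == 0

print(host_is_alive("127.0.0.1"))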

Cross Reference: I urge those readers seeking in-depth information about ICMP to examine RFC 792 (http://sunsite.auc.dk/RFC/rfc/rfc792.html).

The Internet Protocol

IP belongs to the network layer. The Internet Protocol provides packet delivery for all protocols within the TCP/IP suite. Thus, IP is the heart of the incredible process by which data traverses the Internet. To explore this process, I have drafted a small model of an IP datagram (see Figure 6.2).

Figure 6.2. The IP datagram.

Misc. Header Info/Originating IP/Destination IP/Data

The first three areas represent the "header" of an IP datagram: the information required to identify and relate the origin and destination points on the network.

As illustrated, an IP datagram is composed of several parts. The first part, the header, is composed of miscellaneous information, including originating and destination IP address. Together, these elements form a complete header. The remaining portion of a datagram contains whatever data is then being sent.

The amazing thing about IP is this: If IP datagrams encounter networks that require smaller packages, the datagrams bust apart to accommodate the recipient network. Thus, these datagrams can fragment during a journey and later be reassembled properly (even if they do not arrive in the same sequence in which they were sent) at their destination.

Even further information is contained within an IP datagram. Some of that information may include identification of the protocol being used, a header checksum, and a time-to-live specification. This specification is a numeric value. While the datagram is traveling the void, this numeric value is constantly being decremented. When that value finally reaches a zero state, the datagram dies. Many types of packets have time-to-live limitations. Some network utilities (such as Traceroute) utilize the time-to-live field as a marker in diagnostic routines.
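To make these header fields more tangible, here is a minimal sketch that unpacks the fixed 20-byte IPv4 header, picking out the time-to-live, protocol, checksum, and the source and destination addresses. The sample bytes are fabricated solely for illustration.

import socket
import struct

def parse_ipv4_header(raw):
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_length": (ver_ihl & 0x0F) * 4,  # in bytes
        "total_length": total_len,
        "ttl": ttl,                             # decremented at each hop
        "protocol": proto,                      # 1 = ICMP, 6 = TCP, 17 = UDP
        "checksum": checksum,
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }

sample = bytes.fromhex("45000054abcd4000400159b0c0a80101c0a80102")
print(parse_ipv4_header(sample))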

In closing, IP's function can be reduced to this: providing packet delivery over the Internet. As you can see, that packet delivery is complex in its implementation.

Cross Reference: I refer readers seeking in-depth information on Internet protocol to RFC 760 (http://sunsite.auc.dk/RFC/rfc/rfc760.html).

The Transmission Control Protocol

The Transmission Control Protocol is the chief protocol employed on the Internet. It facilitates such mission-critical tasks as file transfers and remote sessions. TCP accomplishes these tasks through a method called reliable data transfer. In this respect, TCP differs from other protocols within the suite. In unreliable delivery, you have no guarantee that the data will arrive in a perfect state. In contrast, TCP provides what is sometimes referred to as reliable stream delivery. This reliable stream delivery ensures that the data arrives in the same sequence and state in which it was sent.

The TCP system relies on a virtual circuit that is established between the requesting machine and its target. This circuit is opened via a three-step process, commonly referred to as the three-way handshake. The process typically follows the pattern illustrated in Figure 6.3.

1. The requesting (or client) machine sends a connection request, specifying a port to connect to on the remote (server) machine.

2. The server machine responds with an acknowledgment of its own and queues the connection.

3. The client machine returns an acknowledgement and the circuit is opened.

Figure 6.3. The TCP/IP three-way handshake.

After the circuit is open, data can simultaneously travel in both directions. This results in what is sometimes called a full-duplex transmission path. Full-duplex transmission allows data to travel to both machines at the same time. In this way, while a file transfer (or other remote session) is underway, any errors that arise can be forwarded to the requesting machine.
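For the curious, here is a minimal sketch of the client side of Figure 6.3 using an ordinary socket. The call to connect() is what triggers the underlying three-way handshake; once it returns, the full-duplex circuit described above is open. The host and port are only examples.

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # a TCP stream socket
client.settimeout(10)
client.connect(("www.example.com", 80))  # the three-way handshake happens here
print("circuit open to", client.getpeername())
client.close()                           # tears the circuit down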

TCP also provides extensive error-checking capabilities. For each block of data sent, a numeric value (a sequence number) is generated. The two machines use this value to identify each transferred block. For each block successfully transferred, the receiving host sends the sender a message confirming that the transfer was clean. Conversely, if the transfer is unsuccessful, two things may occur:

The requesting machine receives error information
The requesting machine receives nothing

When an error is received, the data is retransmitted unless the error is fatal, in which case the transmission is usually halted. A typical example of a fatal error is a dropped connection: no further packets can be delivered, so the transfer is halted.

Similarly, if no confirmation is received within a specified time period, the information is also retransmitted. This process is repeated as many times as necessary to complete the transfer or remote session.

You have examined how the data is transported when a connect request is made. It is now time to examine what happens when that request reaches its destination. Each time one machine requests a connection to another, it specifies a particular destination. In the general sense, this destination is expressed as the Internet (IP) address and the hardware address of the target machine. More specifically, though, the requesting machine also specifies the application it is trying to reach at the destination. This involves two elements:

A program called inetd
A system based on ports

inetd: The Mother of All Daemons

Before you explore the inetd program, I want to briefly define daemons. This will help you more easily understand the inetd program.

Daemons are programs that run continuously, listening for a particular event (in this case, the event listened for is a connection request). Daemons loosely resemble terminate-and-stay-resident (TSR) programs on the Microsoft platform. These programs remain alive at all times, constantly listening for a particular event. When that event finally occurs, the TSR undertakes some action.

inetd is a very special daemon. It has been called many things, including the super-server or granddaddy of all processes. This is because inetd is the main daemon running on a UNIX machine. It is also an ingenious tool.

Common sense tells you that running a dozen or more daemon processes could eat up machine resources. So rather than do that, why not create one daemon that could listen for all the others? That is what inetd does. It listens for connection requests from the void. When it receives such a request, it evaluates it. This evaluation seeks to determine one thing only: What service does the requesting machine want? For example, does it want FTP? If so, inetd starts the FTP server process. The FTP server can then process the request from the void. At that point, a file transfer can begin. This all happens within the space of a second or so.

TIP: inetd isn't just for UNIX anymore. For example, Hummingbird Communications has developed (as part of its Exceed 5 product line) a version of inetd for use on any platform that runs Microsoft Windows or OS/2. There are also noncommercial versions of inetd, written by students and other software enthusiasts. One such distribution is available from TFS software and can be found at http://www.trumpton.demon.co.uk/software/inetd.html.

In general, inetd is started at boot time and remains resident (in a listening state) until the machine is turned off or until the root operator expressly terminates that process.

The behavior of inetd is generally controlled from a file called inetd.conf, located in the /etc directory on most UNIX platforms. The inetd.conf file is used to specify what services will be called by inetd. Such services might include FTP, Telnet, SMTP, TFTP, Finger, Systat, Netstat, or any other processes that you specify.
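The format of inetd.conf is simple: each line names a service, its socket type and protocol, whether inetd should wait for the server to finish, the user to run the server as, and the server program with its arguments. The entries below are typical illustrations only; the exact server paths vary from platform to platform.

ftp      stream  tcp  nowait  root    /usr/sbin/in.ftpd     in.ftpd -l
telnet   stream  tcp  nowait  root    /usr/sbin/in.telnetd  in.telnetd
finger   stream  tcp  nowait  nobody  /usr/sbin/in.fingerd  in.fingerd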

The Ports

Many TCP/IP programs can be initiated over the Internet. Most of these are client/server oriented. As each connection request is received, inetd starts a server program, which then communicates with the requesting client machine.

To facilitate this process, each application (FTP or Telnet, for example) is assigned a unique address. This address is called a port. The application in question is bound to that particular port and, when any connection request is made to that port, the corresponding application is launched (inetd is the program that launches it).

There are 65,535 TCP ports (and as many UDP ports) on the average Internet server. For purposes of convenience and efficiency, a standard framework has been developed for port assignment. (In other words, although a system administrator can bind services to the ports of his or her choice, services are generally bound to recognized ports. These are commonly referred to as well-known ports.)

Please peruse Table 6.2 for some commonly recognized ports and the applications typically bound to them.

Table 6.2. Common ports and their corresponding services or applications.

Service or Application Port 
File Transfer Protocol (FTP)              21 
Telnet                                    23 
Simple Mail Transfer Protocol (SMTP)      25 
Gopher                                    70 
Finger                                    79 
Hypertext Transfer Protocol (HTTP)        80 
Network News Transfer Protocol (NNTP)     119 
I will examine each of the applications described in Table 6.2. All are application-level protocols or services (that is, they are visible to the user, and the user can interact with them at the console).

Cross Reference: For a comprehensive list of all port assignments, visit ftp://ftp.isi.edu/in-notes/iana/assignments/port-numbers. This document is extremely informative and exhaustive in its treatment of commonly assigned port numbers.
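If you want to see the well-known assignments from a program, the sketch below looks them up through the local services database (/etc/services on most UNIX systems), which mirrors the standard assignments shown in Table 6.2. A service missing from the local database will raise an error.

import socket

for service in ("ftp", "telnet", "smtp", "gopher", "finger", "http", "nntp"):
    print(service, socket.getservbyname(service, "tcp"))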

Telnet

Telnet is best described in RFC 854, the Telnet protocol specification:

The purpose of the Telnet protocol is to provide a fairly general, bi-directional, eight-bit byte-oriented communications facility. Its primary goal is to allow a standard method of interfacing terminal devices and terminal-oriented processes to each other.

Telnet not only allows the user to log in to a remote host, it allows that user to execute commands on that host. Thus, an individual in Los Angeles can Telnet to a machine in New York and begin running programs on the New York machine just as though the user were actually in New York.

For those of you who are unfamiliar with Telnet, it operates much like the interface of a bulletin board system (BBS). Telnet is an excellent application for providing a terminal-based front end to databases. For example, better than 80 percent of all university library catalogs can be accessed via Telnet. Figure 6.4 shows an example of a Telnet library catalog screen.

Figure 6.4. A sample Telnet session.

Even though GUI applications have taken the world by storm, Telnet--which is essentially a text-based application--is still incredibly popular. There are many reasons for this. First, Telnet allows you to perform a variety of functions (retrieving mail, for example) at a minimal cost in network resources. Second, implementing secure Telnet is a pretty simple task. There are several programs to implement this, the most popular of which is Secure Shell (which I will explore later in this book).

To use Telnet, the user issues whatever command is necessary to start his or her Telnet client, followed by the name (or numeric IP address) of the target host. In UNIX, this is done as follows:

#telnet internic.net

This command launches a Telnet session, contacts internic.net, and requests a connection. That connection will either be honored or denied, depending on the configuration at the target host. In UNIX, the Telnet command has long been a native one. That is, Telnet has been included with basic UNIX distributions for well over a decade. However, not all operating systems have a native Telnet client. Table 6.3 shows Telnet clients for various operating systems.

Table 6.3. Telnet clients for various operating systems.

Operating System Client

UNIX                         Native 
Microsoft Windows 95         Native, ZOC, NetTerm, Zmud, WinTel32, Yawtelnet 
Microsoft Windows NT         Native, CRT, and all listed for 95 
Microsoft Windows 3.x        Trumptel Telnet, Wintel, Ewan 
Macintosh                    NCSA Telnet, NiftyTelnet, Comet 
VAX                          Native 

File Transfer Protocol

File Transfer Protocol is the standard method of transferring files from one system to another. Its purpose is set forth in RFC 0765 as follows:

The objectives of FTP are 1) to promote sharing of files (computer programs and/or data), 2) to encourage indirect or implicit (via programs) use of remote computers, 3) to shield a user from variations in file storage systems among Hosts, and 4) to transfer data reliably and efficiently. FTP, though usable directly by a user at a terminal, is designed mainly for use by programs.

For over two decades, researchers have investigated a wide variety of file-transfer methods. The development of FTP has undergone many changes in that time. Its first definition occurred in April 1971, and the full specification can be read in RFC 114.

Cross Reference: RFC 114 contains the first definition of FTP, but a more practical document might be RFC 959 (http://www.freesoft.org/Connected/RFC/959/index.html).

Mechanical Operation of FTP

File transfers using FTP can be accomplished using any suitable FTP client. Table 6.4 defines some common clients used, by operating system.

Table 6.4. FTP clients for various operating systems.

Operating System Client

UNIX                         Native, LLNLXDIR2.0, FTPtool
Microsoft Windows 95         Native, WS_FTP, Netload, Cute-FTP, Leap FTP, SDFTP, FTP Explorer
Microsoft Windows NT         See listings for Windows 95
Microsoft Windows 3.x        Win_FTP, WS_FTP, CU-FTP, WSArchie
Macintosh                    Anarchie, Fetch, Freetp
OS/2                         Gibbon FTP, FTP-IT, Lynn's Workplace FTP
VAX                          Native

How Does FTP Work?

FTP file transfers occur in a client/server environment. The requesting machine starts one of the clients named in Table 6.4. This generates a request that is forwarded to the targeted file server (usually a host on another network). Typically, the request is sent by inetd to port 21. For a connection to be established, the targeted file server must be running an FTP server or FTP daemon.

FTPD

FTPD is the standard FTP server daemon. Its function is simple: to reply to connect requests received by inetd and to satisfy those requests for file transfers. This daemon comes standard on most distributions of UNIX (for other operating systems, see Table 6.5).

Table 6.5. FTP servers for various operating systems.

Operating System Server

UNIX                         Native (FTPD)
Microsoft Windows 95         WFTPD, Microsoft FrontPage, WAR FTP Daemon, Vermilion
Microsoft Windows NT         Serv-U, OmniFSPD, Microsoft Internet Information Server
Microsoft Windows 3.x        WinQVT, Serv-U, Beames & Whitside BW Connect, WFTPD FTP Server, WinHTTPD
Macintosh                    Netpresenz, FTPD
OS/2                         Penguin

FTPD waits for a connection request. When such a request is received, FTPD requests the user login. The user must either provide his or her valid user login and password or may log in anonymously.

Once logged in, the user may download files. In certain instances and if security on the server allows, the user may also upload files.
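Here is a minimal sketch of such a session using Python's standard ftplib client: it connects to port 21, logs in anonymously, lists the remote directory, and retrieves a file. The host and file names are placeholders.

from ftplib import FTP

ftp = FTP("ftp.example.com")              # connects to port 21
ftp.login()                               # anonymous login by default
ftp.retrlines("LIST")                     # print the remote directory listing
with open("README", "wb") as fh:
    ftp.retrbinary("RETR README", fh.write)   # download a file
ftp.quit()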

Simple Mail Transfer Protocol

The objective of Simple Mail Transfer Protocol is stated concisely in RFC 821:

The objective of Simple Mail Transfer protocol (SMTP) is to transfer mail reliably and efficiently.

SMTP is an extremely lightweight and efficient protocol. The user (utilizing any SMTP-compliant client) sends a request to an SMTP server. A two-way connection is subsequently established. The client forwards a MAIL instruction, indicating that it wants to send mail to a recipient somewhere on the Internet. If the SMTP server allows this operation, an affirmative acknowledgment is sent back to the client machine. At that point, the session begins. The client may then forward the recipient's identity, his or her IP address, and the message (in text) to be sent.
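As a minimal sketch of this dialogue, the example below uses Python's standard smtplib client; behind the scenes it issues the HELO, MAIL FROM, RCPT TO, and DATA commands that make up an SMTP session. The server name and addresses are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Test"
msg.set_content("A short test message.")

with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)   # MAIL FROM / RCPT TO / DATA happen here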

Despite the simple character of SMTP, mail service has been the source of countless security holes. (This may be due in part to the number of options involved. Misconfiguration is a common reason for holes.) I will discuss these security issues later in this book.

SMTP servers are native in UNIX. Most other networked operating systems now have some form of SMTP, so I'll refrain from listing them here.

Cross Reference: Further information on this protocol is available in RFC 821 (http://sunsite.auc.dk/RFC/rfc/rfc821.html).

Gopher

The Gopher service is a distributed document-retrieval system. It was originally implemented as the Campus Wide Information System at the University of Minnesota. It is defined in a March 1993 FYI from the University of Minnesota as follows:

The Internet Gopher protocol is designed primarily to act as a distributed document-delivery system. While documents (and services) reside on many servers, Gopher client software presents users with a hierarchy of items and directories much like a file system. In fact, the Gopher interface is designed to resemble a file system since a file system is a good model for locating documents and services.

Cross Reference: The complete documentation on the Gopher protocol can be obtained in RFC 1436 (http://sunsite.auc.dk/RFC/rfc/rfc1436.html).

The Gopher service is very powerful. It can serve text documents, sounds, and other media. It also operates largely in text mode and is therefore much faster than HTTP through a browser. Undoubtedly, the most popular Gopher client is for UNIX. (Gopher2_3 is especially popular, followed by Xgopher.) However, many operating systems have Gopher clients. See Table 6.6 for a few.

Table 6.6. Gopher clients for various operating systems.

Operating System Client

Microsoft Windows (all)      Hgopher, Ws_Gopher
Macintosh                    Mac Turbo Gopher
AS/400                       The AS/400 Gopher Client
OS/2                         Os2Gofer

Typically, the user launches a Gopher client and contacts a given Gopher server. In turn, the Gopher server forwards a menu of choices. These may include search menus, pre-set destinations, or file directories. Figure 6.5 shows a client connection to the University of Illinois.

Figure 6.5. A sample gopher session.

Note that the Gopher model is completely client/server based. The user never logs on per se. Rather, the client sends a message to the Gopher server, requesting all documents (or objects) currently available. The Gopher server responds with this information and does nothing else until the user requests an object.
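The protocol itself is simple enough to speak by hand. The sketch below opens a connection to port 70, sends an empty selector (which asks for the top-level menu) terminated by a carriage return and line feed, and reads back whatever the server returns. The host name is a placeholder.

import socket

sock = socket.create_connection(("gopher.example.com", 70), timeout=10)
sock.sendall(b"\r\n")                     # empty selector = root menu
menu = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    menu += chunk
sock.close()
print(menu.decode(errors="replace"))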

Hypertext Transfer Protocol

Hypertext Transfer Protocol is perhaps the most renowned protocol of all because it is this protocol that allows users to surf the Net. Stated briefly in RFC 1945, HTTP is

...an application-level protocol with the lightness and speed necessary for distributed, collaborative, hypermedia information systems. It is a generic, stateless, object-oriented protocol which can be used for many tasks, such as name servers and distributed object management systems, through extension of its request methods (commands). A feature of HTTP is the typing of data representation, allowing systems to be built independently of the data being transferred.

NOTE: RFC 1945 has been superseded by RFC 2068, which is a more recent specification of HTTP and is available at ftp://ds.internic.net/rfc/rfc2068.txt.

HTTP has forever changed the nature of the Internet, primarily by bringing the Internet to the masses. In some ways, its operation is much like Gopher. For example, it too works via a request/response scenario. And this is an important point. Whereas applications such as Telnet require that a user remain logged on (and while they are logged on, they consume system resources), protocols such as Gopher and HTTP eliminate this phenomenon. Thus, the user is pushed back a few paces. The user (client) only consumes system resources for the instant that he or she is either requesting or receiving data.

Using a common browser like Netscape Navigator or Microsoft Internet Explorer, you can monitor this process as it occurs. For each data element (text, graphic, sound) on a WWW page, your browser will contact the server one time. Thus, it will first grab text, then a graphic, then a sound file, and so on. In the lower-left corner of your browser's screen is a status bar. Watch it for a few moments when it is loading a page. You will see this request/response activity occur, often at a very high speed.
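You can reproduce one of those request/response exchanges directly. The sketch below opens a TCP connection to port 80, sends a single GET request, reads the response, and closes the connection, which is exactly what the browser repeats for each element on the page. The host name is a placeholder.

import socket

sock = socket.create_connection(("www.example.com", 80), timeout=10)
sock.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()
print(response.split(b"\r\n\r\n", 1)[0].decode())   # show just the response headers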

HTTP doesn't particularly care what type of data is requested. Various forms of multimedia can be either embedded within or served remotely via HTML-based WWW pages. In short, HTTP is an extremely lightweight and effective protocol. Clients for this protocol are enumerated in Table 6.7.

Table 6.7. HTTP clients for various operating systems.

Operating System HTTP Client

Microsoft Windows (all)      Netscape Navigator, WinWeb, Mosaic, Microsoft Internet Explorer, WebSurfer, NetCruiser, AOL, Prodigy
Macintosh                    Netscape Navigator, MacMosaic, MacWeb, Samba, Microsoft Internet Explorer
UNIX                         Xmosaic, Netscape Navigator, Grail, Lynx, TkWWW, Arena
OS/2                         Web Explorer, Netscape Navigator

Until recently, UNIX alone supported an HTTP server. (The standard was NCSA HTTPD. Apache has now entered the race, giving HTTPD strong competition in the market.) The application is extremely small and compact. Like most of its counterparts, it runs as a daemon. Its typically assigned port is 80. Today, there are HTTP servers for nearly every operating system. Table 6.8 lists those servers.

Table 6.8. HTTP servers for various operating systems.

Operating System HTTP Server

Microsoft Windows 3.x        Website, WinHTTPD
Microsoft Windows 95         OmniHTTPD, Server 7, Nutwebcam, Microsoft Personal Web Server, Fnord, ZB Server, Website, Folkweb
Microsoft Windows NT         HTTPS, Internet Information Server, Alibaba, Espanade, Expresso, Fnord, Folkweb, Netpublisher, Weber, OmniHTTPD, WebQuest, Website, Wildcat
Macintosh                    MacHTTP, Webstar, Phantom, Domino, Netpresenz
UNIX                         HTTPD, Apache
OS/2                         GoServe, OS2HTTPD, OS2WWW, IBM Internet Connection Server, Bearsoft, Squid & Planetwood

Network News Transfer Protocol

The Network News Transfer Protocol is one of the most widely used protocols. It provides modern access to the news service commonly known as USENET news. Its purpose is defined in RFC 977:

NNTP specifies a protocol for the distribution, inquiry, retrieval, and posting of news articles using a reliable stream-based transmission of news among the ARPA-Internet community. NNTP is designed so that news articles are stored in a central database allowing a subscriber to select only those items he wishes to read. Indexing, cross-referencing, and expiration of aged messages are also provided.

NNTP shares characteristics with both Simple Mail Transfer Protocol and TCP. Like SMTP, it accepts plain-English commands from a prompt; like TCP-based services generally, it relies on stream-based transport and delivery. NNTP typically runs on port 119 on any UNIX system.
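Because NNTP accepts plain-English commands, you can watch it work with nothing more than a socket. The sketch below connects to port 119, prints the server greeting, asks for HELP, and quits; the host name is a placeholder, and it assumes a news server willing to accept the connection.

import socket

sock = socket.create_connection(("news.example.com", 119), timeout=10)
print(sock.recv(1024).decode(errors="replace"))   # server greeting, e.g. "200 ..."
sock.sendall(b"HELP\r\n")
print(sock.recv(4096).decode(errors="replace"))   # list of supported commands
sock.sendall(b"QUIT\r\n")
sock.close()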

Cross Reference: I refer readers seeking in-depth information on NNTP to RFC 977 (http://andrew2.andrew.cmu.edu/rfc/rfc977.html). You may also wish to obtain RFC 850 for examination of earlier implementations of the standard (http://sunsite.auc.dk/RFC/rfc/rfc850.html).

Concepts

You have examined TCP/IP services and protocols individually, in their static states. You have also examined the application-level protocols. This was necessary to describe each protocol and what it accomplishes. Now it is time to examine the larger picture.

TCP/IP Is the Internet

By now, it should be apparent that TCP/IP basically comprises the Internet itself. It is a complex collection of protocols, many of which remain invisible to the user. On most Internet servers, at a minimum, these protocols exist:

Transmission Control Protocol
Internet Protocol
Internet Control Message Protocol
Address Resolution Protocol
File Transfer Protocol
The Telnet protocol
The Gopher protocol
Network News Transfer Protocol
Simple Mail Transfer Protocol
Hypertext Transfer Protocol

Now, prepare yourself for a shock. These are only a handful of protocols run on the Internet. There are actually hundreds of them. Better than half of the primary protocols have had one or more security holes.

In essence, the point I would like to make is this: The Internet was designed as a system with multiple avenues of communication. Each protocol is one such avenue. As such, there are hundreds of ways to move data across the Net.

Until recently, utilizing these protocols called for accessing them one at a time. That is, to arrest a Gopher session and start a Telnet session, the user had to physically terminate the Gopher connection.

The HTTP browser changed all that and granted the average user much greater power and functionality. Indeed, FTP, Telnet, NNTP, and HTTP are all available at the click of a button.

Summary

In this chapter, you learned about TCP/IP. Relevant points about TCP/IP include

The TCP/IP protocol suite contains all protocols necessary to facilitate data transfer over the Internet

The TCP/IP protocol suite provides quick, reliable networking without consuming heavy network resources

TCP/IP is implemented on almost all computing platforms

Now that you know the fundamentals of TCP/IP, you can progress to the next chapter. In it, you will explore some of the reasons why the Internet is not secure. As you can probably guess, there will be references to TCP/IP throughout that chapter.

 