
CHAPTER 11. PRINCIPLES OF SECURITY

connections that were initiated within the network itself. This provides simple router-level firewall protection, and is useful for stopping IP spoofing attempts. UDP has no SYN or ACK flags, and so it is more difficult to filter.

11.6.11 Example telnet session

An example telnet packet trace is provided in Appendix C.

11.7 Preventing and minimizing failure modes

Prevention of loss is usually cheaper than recovery after the fact. Any reasonable preventative measures we can take are worth the investment.

11.7.1 Loss of data: backup

The data collected and produced by an organization are usually the primary reason for its owning a computer installation. The loss of those data, for whatever reason, would be a catastrophe second to none.

Data can be lost by accident, by fire or natural catastrophe, by disk failure, or even vandalism. If you live in a war-zone or police state, you might also have to protect data from bombs or brutal incursions onto your premises. Once destroyed, data cannot be recovered. The laws of thermodynamics dictate this. So, to avoid complete data-loss, you need to employ a policy of redundancy, i.e. you need to make several copies of data, and make sure that they do not suffer the same fate. Of course, no matter how many copies of data you make and how well you protect them, it is possible that they might all be destroyed simultaneously, but we are aiming to minimize the likelihood of that occurrence.

Principle 60 (Data invulnerability). The purpose of a backup copy is to provide an image of data which is unlikely to be destroyed by the same act that destroys the original, i.e. the backup and the original should not have any common dependencies that can be attacked.

There is an obvious corollary,

Corollary to principle (Data invulnerability). Backup copies should be stored at a different physical location than the originals.

The economics of backup have changed in recent times for several reasons. First of all, storage media are far more reliable than they once were. Component failures tend to follow exponential distributions: if a disk shows no signs of a problem within its first few months, it stands a good chance of never failing of its own accord before the whole machine is replaced on other grounds. Disks often tolerate continuous usage for perhaps five years, after which time you will almost certainly want to replace them for other reasons, e.g. performance. The other important change is almost universal access to networks, which can be used to transport data simply and cheaply from one physical location to another.


Traditionally backups have been made to tape, since tape is relatively cheap and mobile. This is still the case at many sites, particularly larger ones; but tapes usually need to be dealt with manually, by a human or by an expensive robot. This adds a price tag to tape-backup which smaller institutions can find difficult to manage. By way of contrast, the price of disks and networking has fallen dramatically. For an organization with few resources, a cheap solution to the backup problem is to mirror disks across a network [244], using well-known tools like rdump, rdist or cfengine. This solves the problems of redundancy and location; and, for what it costs to employ a human or tape robot, one can purchase quite a lot of disk space.

Another change is the development of fast, reliable media like CD-ROM. In earlier times, it was normal to back up the operating system partitions of hosts to tape. Today that practice is largely unnecessary: the operating system is readily available on some straightforward medium (e.g. CD-ROM or DVD) which is at least as fast as a tape streamer and consumes a fraction of the space. It is only necessary to make backups of whatever special configuration files have been modified locally. Sites which use cfengine can simply allow cfengine to reconstruct local modifications after an OS installation. In any event, if we have followed the principle of separating the operating system from local modifications, this is no problem at all.

Similar remarks can be made about other software. Commercial software is now sold on CD-ROM and is trivial to reinstall (remember to keep a backup of license keys). For freely available software, there are already many copies and mirrors at remote locations by virtue of the Internet. For convenience, a local source repository can also be kept, to speed up recovery in the case of an accident. In the unlikely event of every host being destroyed simultaneously, downloading the software again from the network is the least of your worries!

Reconstructing a system from source rather than from backup has never been easier than now. Moreover, a policy of not backing up software which is easily accessible in source form can make a considerable saving in the volume of backup space required, at the price of more work in the event of an accident. In the end this is a matter of policy.

It should be clear that user-data must have maximum priority for backup. This is where local creativity manifests itself; these are the data which form your assets.

11.7.2 Loss of service

Loss of service might be less permanent than the loss of data, but it can be just as debilitating. Downtime costs money for businesses and wastes valuable time in academia.

The basic source of all computing power is electricity. Loss of electrical power can be protected against, to a limited extent, with an uninterruptible power supply (UPS). This is not infallible protection, but it helps to avoid problems due to short breaks in the power. A UPS uses a battery backup to keep the power going for a few hours when the mains supply has failed. When the battery begins to run down, it can signal the host so as to take it down in a controlled fashion, thus minimizing


damage to disks and data. Investing in a UPS for an important server could be the best thing one ever does. Electrical spike protectors are another important accessory for anyone living in a region where lightning strikes are frequent, or where the power supply is of variable quality. No fuse will protect a computer from a surge of electricity: microelectronics burn out far more quickly than any fuse can blow.

Service can also be interrupted by a breach of the network infrastructure: a failed router or broken cable, or even a blown fuse. It can be interrupted by cleaning staff, or carelessness. A backup or stand-by replacement is the only option for hardware failure. It helps to have the telephone number of those responsible for network hardware when physical breaches occur.

Software can be abused in a denial of service attack. Denial of service attacks are usually initiated by sending information to a host which confuses it into inactivity. There are as many variations on this theme as there are vandals on the network. Some attacks exploit bugs, while others are simply spamming episodes, repeatedly sending a deluge of service requests to the host, so that it spends all of its resources on handling the attack.

11.7.3 Protocols

What is the solution to uncertainty? An amount of uncertainty is inevitable in any complex system. Where humans are concerned, uncertainty is always significant. A strict mode of behavior is the usual way of counteracting this uncertainty. Protocols are ways of eliminating unnecessary uncertainty by reducing the freedom of the participants.

Principle 61 (Protocols offer predictability). A well-designed protocol, either for human behavior or machine behavior, standardizes behavior and offers predictability.

11.7.4 Authentication

In order to provide basic security for individuals, we need to keep track of the identity of users who make requests of the system. Authentication means determining whether the claim of identity is authentic. Usually we mean verifying somebody’s identity. There are two reasons for authenticating users:

• User-based access control of files and programs requires users to be distinguished by an identity.

• Accountability: attaching actions to users for recording in logs.

All authentication is based on the idea of comparing unique attributes of individuals with some database. Often ownership of a shared secret is used for this purpose, such as a password or encryption key, known only to the individual and the authenticator.
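As a sketch of this idea, the following Python fragment compares a claimed password against a stored record of a shared secret. The username, salt and hashing scheme are invented for illustration; a real system would use a deliberately slow, salted hash such as bcrypt rather than a single round of SHA-256.

```python
import hashlib
import hmac

def hash_password(password, salt):
    # Illustrative only: one round of SHA-256 over salt + password.
    return hashlib.sha256(salt + password.encode()).hexdigest()

# Hypothetical user database mapping usernames to (salt, hash) records.
USERS = {"alice": ("salt123", hash_password("s3cret", b"salt123"))}

def authenticate(username, password):
    entry = USERS.get(username)
    if entry is None:
        return False
    salt, stored = entry
    candidate = hash_password(password, salt.encode())
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored)

print(authenticate("alice", "s3cret"))   # True
print(authenticate("alice", "wrong"))    # False
```

The authenticator never needs to store the secret itself, only a value derived from it, which limits the damage if the database is stolen.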

There is much confusion surrounding authentication. Much of this stems from the many claims made by cryptographic methods to provide secure methods for authenticating user identities. While this is not incorrect, it misses a crucial point.


Principle 62 (Identification requires trust). Establishing identity is ‘impossible’. Identification requires an initial introduction, based on trust.

Corollary to principle (Authentication is re-identification). Authentication is the confirmation of a previously trusted identity.

The first time we meet a person or contact a host on a network, we know nothing about them. When a previously unknown person or host claims their identity we must accept this information on trust. No matter how many detailed measurements we make (DNA test, processor serial number, secure exchange of keys etc.), there is no basis for matching those identifying marks to the identity claimed – since we cannot mind-read, we simply have to trust it. Once an initial identity has been accepted as true, one can then use unique properties to identify the individual again in the future, in a variety of ways, some more secure than others. The special markers or unique properties can only confirm that a person or host is the same person or host as we met previously. If the original introduction was faked, the accuracy of recognition cannot detect this.

Password login

The provision of a username claims our identity and a password verifies that claim. If this authentication succeeds, we are granted access to the system, and all of our activities then occur within the scope of an identifier which represents that user. On Unix-like systems, the username is converted into a global unique user-id number (UID). On Windows systems, the username is converted into a security-id (SID) which is only unique on a local host.

There are obvious problems with password authentication: passwords can be guessed and they can be leaked. Users with only weak passwords are vulnerable to dictionary and other brute-force attacks.
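The following sketch shows why weak passwords fall to dictionary attacks: the attacker simply hashes candidate words and compares against a stolen hash. The "leaked" hash and the word list are invented for illustration.

```python
import hashlib

# Hypothetical leaked, unsalted MD5 hash of a weak password:
leaked = hashlib.md5(b"password").hexdigest()

wordlist = ["letmein", "qwerty", "password", "dragon"]

def dictionary_attack(target_hash, words):
    # Hash each candidate word and compare with the target.
    for word in words:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(dictionary_attack(leaked, wordlist))  # password
```

Salting and slow hash functions make such attacks more expensive but cannot rescue a password that is itself in the attacker's dictionary.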

This type of login is called unilateral authentication, that is, it identifies the user to the computer. It does not verify the identity of the computer to the user. Thus a malicious party could fake a login dialogue on a computer, using this to collect passwords and account information.

Unix does not attempt to solve this problem, but NT and its successors provide a ‘secure attention sequence’. If the user types CTRL+ALT+DEL, they are guaranteed to be directed to the operating system, rather than any user programs which might be trying to look like the OS.

Authentication types

The OSI security architecture (ISO 7498-2) makes a distinction between different kinds of authentication:

Entity authentication: checking the identity of an individual or entity.

Origin authentication: checking the location of an individual or entity.

Unilateral authentication: verifying the entity to the authenticator.

Mutual authentication: verifying both parties to one another.


Authentication is usually performed at the start of a session between client and system. Once one stops checking, an attacker could subsequently sneak in and change places with an authenticated user. Thus, to ensure security in an ongoing conversation, we have to verify identity and then use some kind of secret key, known only to the authenticated parties (such as a secret that has been exchanged), to ensure that the identity cannot be changed, e.g. by encrypting the conversation.

Challenge response protocols

Consider two parties A and B, who need to open a dialogue and verify a previously trusted identity.

          M1
    A ----------------> B
          M2
    A <---------------- B
          M3
    A ----------------> B
          M4
    A <---------------- B

A starts the protocol by sending a message to B, M1. B replies with M2, etc. We assume that message N + 1 is not sent until message N has been received and understood.

During or after the exchange of the messages we need to be sure of the following:

• That the messages were received (unaltered) from the hosts which were supposed to send them.

• That the messages are fresh, i.e. not replays of old messages.

• That message N + 1 is a correct reply to message N, not a misleading reply to a different question.

The first of these assurances can be made by using a cryptographic checksum (a message digest such as MD5 or SHA-1) together with a cryptographic key, i.e. a Message Authentication Code (MAC), which verifies both the identity of the sender and the integrity of the message.
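A minimal illustration using Python's standard hmac and hashlib modules: a plain digest can be recomputed by anyone, whereas a MAC requires the shared key. The key and message here are invented placeholders.

```python
import hashlib
import hmac

message = b"M1: hello from A"

# A plain digest protects integrity only: anyone can recompute it.
digest = hashlib.sha1(message).hexdigest()

# A MAC mixes in a shared secret key, so only the key holders can
# produce or verify it -- authenticating the sender as well.
key = b"shared-secret"  # assumed to have been exchanged previously
mac = hmac.new(key, message, hashlib.sha1).hexdigest()

# B verifies by recomputing the MAC with the same key.
expected = hmac.new(key, message, hashlib.sha1).hexdigest()
print(hmac.compare_digest(mac, expected))  # True
```

An attacker who alters the message in transit cannot produce a matching MAC without the key.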

The second could be assured by the use of a time-stamp, though this would be vulnerable to errors of clock synchronization. A better approach is to use a random challenge or nonce (from the Middle English for 'once only').

A nonce is usually a long random number that is encrypted with a key that can only be decrypted by the receiver. The receiver then replies to the sender of the nonce by decrypting it and sending it back. Only the keeper of the secret could do this, and thus this confirms the identity of the receiver as well as the freshness of the reply. To achieve a mutual authentication, both parties send challenges to one another.
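The exchange can be sketched as follows. This simplified variant answers the challenge with a keyed hash of the nonce rather than by decrypting it, which demonstrates the same point: only a holder of the previously exchanged secret can produce a fresh, correct reply. The key is an invented placeholder.

```python
import hashlib
import hmac
import os

SHARED_KEY = b"previously-exchanged-secret"  # trusted introduction assumed

def challenge():
    # A fresh random nonce guarantees the reply cannot be a replay.
    return os.urandom(16)

def respond(key, nonce):
    # Only a holder of the shared key can compute this response.
    return hmac.new(key, nonce, hashlib.sha256).digest()

# A challenges B:
nonce = challenge()
reply = respond(SHARED_KEY, nonce)

# A verifies B's reply by computing the expected value itself.
print(hmac.compare_digest(reply, respond(SHARED_KEY, nonce)))  # True
```

For mutual authentication, B would issue a challenge of its own in the opposite direction.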


11.7.5 Integrity

Trust is the pernicious problem of security. How are we able to trust files and data which others send? Programs that we download could contain viruses or Trojan horses. Assuming that we trust the person who wrote the program, how can we be sure that no one else has tampered with it in between?

There are some things we can do to increase our confidence in data we receive from a foreign source. One is to compare message digests.

Message digests or hashes are cryptographic checksums which quickly summarize the contents of a file. The idea is to create an algorithm which digests the contents of a file and produces a single value which uniquely summarizes its contents. If we change one bit of a file, then the value of the message digest also changes. Popular algorithms include:

• MD4

• MD5 (stronger than MD4)

• SHA-1

host$ md5 .cshrc

MD5 (.cshrc) = 519ab7d30dba4a2d16b86328e025ec72

MD5 checksums are often quoted at security-conscious software repositories so that it is possible to verify the authenticity of downloaded software (assuming the published checksum is itself authentic!).
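The digest can equally be computed programmatically, e.g. with Python's hashlib, and compared against the published value. The filename and digest below simply reuse the .cshrc example above.

```python
import hashlib

def md5_of_file(path):
    # Read the file in chunks so that large files fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published at the download site:
published = "519ab7d30dba4a2d16b86328e025ec72"
# print(md5_of_file(".cshrc") == published)
```

If a single bit of the file has been tampered with, the computed digest will differ from the published one.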

11.8 Some well-known attacks

There are many ways to attack a networked computer in order to gain access to it, or simply disable it. Some well-known examples are listed below. The actual attack mechanisms used by attackers are often intricate and ingenious, but the common theme in all of them is to exploit naive limitations in the way network services are implemented. Time and again one sees crackers make use of software systems which were written in good faith, by forcing them into unnatural situations where the software fails through inadequate checking.

11.8.1 Ping attacks

RFC 791 specifies that Internet datagrams shall not exceed 64 kB. Some implementations of the protocol can send packets which are larger than this, but not all implementations can receive them:

ping -s 65510 targethost

Some older network interfaces can be made to crash certain operating systems by sending them a ‘ping’ request like this with a very large packet size. Most modern operating systems are now immune to this problem (e.g. NT 3.51 is vulnerable, but NT 4 is not). If not, it can be combatted with a packet filtering router. See http://www.sophist.demon.co.uk/ping/.


11.8.2 Denial of service (DoS) attacks

Another type of attack is to overload a system with so many service requests that it grinds to a halt. One example is mail spamming,2 in which an attacker sends large numbers of repetitive E-mail messages, filling up the server’s disk and causing the sendmail daemon to spawn rapidly and slow the system to a standstill.

Denial of service attacks are almost impossible to protect against. It is the responsibility of local administrators to prevent their users from initiating such attacks wherever possible.

11.8.3 TCP/IP spoofing

Most network resources are protected on the basis of the host IP addresses of those resources. Access is granted by a server to a client if the IP address is contained in an access control list (ACL). Since the operating system kernel itself declares its own identity when packets are sent, it has not been common to verify whether packets actually do arrive from the hosts which they claim to arrive from. Ordinary users have not traditionally had access to privileges which allow them to alter network protocols. Today everyone can run a PC with privileged access to the networking hardware.

Normally an IP datagram passing from host A to host B has a destination address 'host B' and source address 'host A' (see figure 11.4). IP spoofing is the act of forging IP datagrams in such a way that they appear to come from a third party host, i.e. an attacker at host A creates a packet with destination address 'host B' and source address 'host C'. The reasons for this are various. Sometimes an attacker wants to appear to be host C in order to gain access to a special resource to which host C has privileged access. Another reason might be to attack host C, as part of a more elaborate attack.

Usually, however, it is not quite this simple, since the forgery is quickly detected. The TCP handshake is such that host A sends a packet to host B, and host B then replies to the source address with a sequence number which has to match the next number of an agreed sequence. If another packet with the agreed sequence number is not received, the connection is reset and abandoned. Indeed, if host C received the confirmation reply for a message which it never sent, it would immediately send back a reset signal, saying effectively 'I know nothing about this'. To prevent this from happening, it is common to take out host C first by attacking it with some kind of denial of service method, or simply to choose an address which is not used by any host, so that it cannot send a reset message. The advantage of choosing a real host C is that the blame for the attack is placed on host C.
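The sequence-number check that defeats naive spoofing can be sketched as follows: the server's initial sequence number travels to the claimed source address, so a spoofer who forged that address never sees it and can only guess the acknowledgment. The functions below are a toy model, not real socket code.

```python
import random

def server_syn_ack():
    # B picks an initial sequence number (ISN) and sends it in its
    # SYN-ACK to the *claimed* source address of the connection.
    return random.getrandbits(32)

def client_ack(isn_from_b):
    # A legitimate client, which received the SYN-ACK, acknowledges
    # ISN + 1 (modulo the 32-bit sequence space).
    return (isn_from_b + 1) % 2**32

isn = server_syn_ack()
print(client_ack(isn) == (isn + 1) % 2**32)  # True: real client saw the ISN

# A spoofer never receives isn, so it must guess blindly;
# the handshake is reset unless its guess equals isn + 1,
# a 1-in-2**32 chance when the ISN is truly random.
guess = random.getrandbits(32)
```

This is also why predictable initial sequence numbers (section 11.8.5) are dangerous: they turn the blind guess into an easy calculation.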

11.8.4 SYN flooding

IP spoofing can also be used as a denial of service attack. By choosing an address for host C which is not in use, so that it cannot reply with a reset, host A can send SYN packets (new connections) on the same and other ports repeatedly. The RECV queue quickly fills up and cannot be emptied, since the connections can never be completed. Because the queues are filled, the services are effectively cut off.

2From the Monty Python song 'Spam spam spam spam...'.

Figure 11.4: IP spoofing. A third party host C assumes the role of host A.

These attacks can be prevented if routers are configured to disallow outgoing packets whose source addresses are forged, i.e. do not belong to the originating network.
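A toy model of the listen queue illustrates the mechanism: half-open connections from an unreachable, spoofed source occupy backlog slots that are never freed, locking out legitimate clients. The backlog size and addresses are invented for illustration.

```python
from collections import deque

BACKLOG = 5            # size of the pending-connection (half-open) queue
half_open = deque()

def incoming_syn(src_addr):
    # Each SYN reserves a slot until the handshake completes or times out.
    if len(half_open) >= BACKLOG:
        return "dropped"        # queue full: new clients are locked out
    half_open.append(src_addr)
    return "syn-ack sent"

# The attacker floods SYNs from a spoofed, unreachable address, so no
# ACK (completing the handshake) and no RST (freeing the slot) ever comes.
for _ in range(BACKLOG):
    incoming_syn("10.0.0.99")

print(incoming_syn("legitimate-client"))  # dropped
```

Real kernels mitigate this with timeouts and techniques such as SYN cookies, but the underlying resource exhaustion is exactly this queue filling up.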

11.8.5 TCP sequence guessing

This attack allows an attacker to make a TCP connection to a host by guessing the initial TCP sequence number used by the other end of the connection. This is a form of IP spoofing by a man in the middle. The attack was made famous by the break-in to Tsutomu Shimomura's computers which led to the arrest of Kevin Mitnick. This attack is used to impersonate other hosts for trusted access [29, 220]. This approach can now be combatted by using random initial sequence numbers (using the strategy expounded in section 7.7.5), though many operating systems require special configuration to enable such measures.

11.8.6 IP/UDP fragmentation (Teardrop)

A Teardrop attack was responsible for the now famous twelve-hour attack which 'blue-screened' thousands of NT machines all over the world. This attack uses the idea of datagram fragmentation. Fragmentation happens when a datagram passes through a router from one network to another network whose Maximum Transmission Unit (MTU) is lower: large packets are split into smaller ones so that they can traverse the network. In a Teardrop attack, the attacker forges two UDP datagrams which appear to be fragments of a larger packet, but with data offsets which overlap.

When fragmentation occurs, it is always the end host which reassembles the packets. In order to allocate memory for the data, the kernel calculates the difference between the end of the datagram fragment and the offset at which it started. In a normal situation this looks like figure 11.5. In a Teardrop attack the packets are forged so that they overlap, as shown in figure 11.6. The assumption that the next fragment follows on from the previous one then leads to a negative number for the size of the fragment. As the kernel tries to allocate memory for this, it calls malloc(size) where the size is now a negative number. The kernel panics and the system crashes on implementations which did not properly check the bounds.

Figure 11.5: Normal UDP fragmentation. Fragment #1 covers offsets 0–100 (size = 100 − 0); fragment #2 covers offsets 100–200 (size = 200 − 100).

Figure 11.6: Spoofed UDP fragmentation, generating a negative size. Fragment #1 covers offsets 0–120 (size = 120 − 0); fragment #2 claims to end at offset 90 while starting at 120 (size = 90 − 120 = −30).
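The arithmetic can be reproduced directly, using the offsets from figures 11.5 and 11.6:

```python
# The kernel computes each fragment's data length as (end - offset).
def fragment_length(offset, end):
    return end - offset

print(fragment_length(0, 120))   # 120: normal-looking first fragment
print(fragment_length(120, 90))  # -30: the forged, overlapping fragment

# Passing -30 to an allocator that interprets sizes as unsigned requests
# an enormous block, or -- without bounds checking -- corrupts memory,
# which is what crashed vulnerable kernels.
```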

11.8.7 ICMP flooding (Smurf)

ICMP flooding is another denial of service attack. The ICMP protocol is the part of TCP/IP which is used to transmit error messages and control information between hosts. Well-known services like ping and echo use ICMP. Normally all hosts respond to ping and echo requests without question, since they are useful for debugging. In an ICMP flooding attack, the attacker sends a spoofed ICMP packet to the broadcast address of a large network. The source address of the packet is forged so that it appears to come from the host which the attacker wishes to attack. Every host on the large network receives the request and replies to that host simultaneously, flooding it with echo replies that consume all of its resources.

11.8.8 DNS cache poisoning

This attack is an example of the exploitation of a trusted service in order to gain access to a foreign host. Again it uses a common theme, that of forging a network service request. This time, however, the idea is to ask a server to cache some information which is incorrect so that future look-ups will result in incorrect information being given instead of the correct information [29].

DNS is a hierarchical service which attempts to answer queries about IP names and addresses locally. If a local server does not have the information requested it


asks an authoritative server for that information. Having received the information from the authoritative server it caches it locally to avoid having to contact the other server again; after all, since the information was required once, it is likely that the same information will be required again soon. The information is thus placed in the cache for a period of time called the TTL (Time To Live). After that time has expired it has to be obtained again from the authoritative server.
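A minimal sketch of such a TTL cache, with invented names and addresses, also shows why a forged entry persists for its full TTL:

```python
import time

cache = {}  # name -> (address, expiry_time)

def lookup(name, resolve, ttl=30.0):
    entry = cache.get(name)
    if entry and entry[1] > time.time():
        return entry[0]                  # answered from the cache
    address = resolve(name)              # ask the authoritative server
    cache[name] = (address, time.time() + ttl)
    return address

# Hypothetical authoritative resolver for illustration:
def resolve(name):
    return {"trusted.example.org": "192.0.2.1"}.get(name)

print(lookup("trusted.example.org", resolve))  # 192.0.2.1, now cached

# A poisoned entry is served for its entire TTL without ever
# consulting the authoritative server again:
cache["trusted.example.org"] = ("203.0.113.66", time.time() + 3600)
print(lookup("trusted.example.org", resolve))  # 203.0.113.66 (forged)
```

The cache itself never distinguishes a legitimately resolved answer from an injected one; that is exactly what the attack below exploits.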

In a cache poisoning attack, the aim is to insert incorrect information into the cache of a server. Once it is there it will be there for the TTL period. In order to arrange this an attacker does the following.

1. The attacker launches his/her attack from the authoritative nameserver for his/her network. This gives him/her the chance to send information to another nameserver which will be trusted.

2. The attacker sends a query for the IP address of the victim host to the victim's default DNS server in order to obtain a DNS query ID. This provides a point of reference for guessing, i.e. forging, the next few query IDs from that server.

3. The attacker then sends a query asking for the address of a host which the victim machine trusts, i.e. the host which the attacker would like to impersonate.

4. The attacker hopes that the victim host will soon need to look up the IP address of the host it trusts; he/she sends a fake 'reply' to such a DNS lookup request, forged with the query ID to look as though it comes from a lookup of the trusted host's address. The answer for the IP address of the trusted host is altered so that it is the IP address of the attacker's host.

5. Later, when the victim host actually sends such a DNS request, it finds that it has already received a UDP reply to that request (this is the nature of UDP) and it ignores the real reply because it arrives later. Now the victim's DNS cache has been poisoned.

6. The attacker now attempts to connect directly to the victim host, posing as the trusted host. The victim host tries to verify the IP address of the host by looking up the address in its DNS server. This now responds from its cache with the forged address.

7. The attacker's system is accepted.

This kind of attack requires the notion of external login based on trust, e.g. with Unix .rhosts files. It does not apply to NT in the same way, because NT does not have trusted hosts in this sense. On the other hand, NT is much easier to gain access to through NULL sessions.

Exercises

Self-test objectives

1. Describe the nature of possible threats to the security of a human–computer system.


2. What is meant by 'security is a property of systems'?

3. What are the four main themes in computer security?

4. What role does trust play in setting the ground rules for security?

5. Explain how security relates to risk assessment.

6. What are the main threats to human–computer security?

7. Who presents the main threats to human–computer security?

8. What is ISO 17799?

9. What is RFC 2196?

10. What is meant by social engineering?

11. List some ways of countering social engineering.

12. What is meant by a honey pot?

13. What is meant by a sacrificial lamb?

14. What are the pros and cons of system homogeneity in security?

15. Explain how laptops and mobile devices can compromise security.

16. What are the problems with the security of the Internet Protocol?

17. State the ways of minimizing the likelihood of a serious security breach.

18. How does economy play a role in security?

19. What is the point of strict protocols in human–computer systems?

20. Explain why it is not possible to ever really identify someone – only to re-identify someone whose identity we have already trusted.

21. What is mutual authentication?

22. What is a challenge–response system?

23. What is meant by a nonce?

24. What is a cryptographic hash or checksum?

25. What is a message authentication code?

26. What is meant by a Denial of Service (DoS) attack?

27. What is meant by cache poisoning?


Problems

1. What are the basic requirements for computer security? Look around your network. Which hosts satisfy these basic requirements?

2. Devise a checklist for securing a PC attached to a network in your organization. How would you secure a PC in a bank? Are there any differences in security requirement between your organization and a bank? If so, what are they and how do you justify them?

3. Determine what password format is used on your own system. Are shadow password files used? Does your site use NIS (i.e. can you see the password database by typing ypcat passwd)?

4. Assume that passwords may consist of only the 26 letters of the alphabet. How many different passwords can be constructed if the number of characters in the password is 1, 2, 3, 4, 5, 6, 7 or 8 characters?

5. Suppose a password has four characters, and it takes approximately a millisecond (10^-3 s) to check a password. How long would a brute-force attack take to determine the password?

6. Discuss how you can really determine the identity of another person. Is it enough to see the person? Is a DNA test sufficient? How do you know that a person's body has not been taken over by aliens, or they have not been brainwashed by a mad scientist? This problem is meant to make you think carefully about the problem of authentication.

7. Password authentication works by knowing a shared secret. What other methods of authentication are used?

8. The secure shell uses a Virtual Private Network (VPN) or encrypted channel between hosts to transfer data. Does this offer complete security? What does encryption not protect against?

9. Explain the significance of redundancy in a secure environment.

10. When the current TCP/IP technology was devised, ordinary users did not have personal computers or access to network listening devices. Explain how encryption of TCP/IP links can help to restore the security of the TCP/IP protocol.

11. Explain the purpose of a sacrificial lamb.

12. Discuss the point of making a honey pot. Would this attract anyone other than bears of little brain?

13. Answer true or false to the following (you might have to read ahead to answer some of these):

(a) Current DNS implementations have no strong authentication.


(b) DNSSec can use digital signatures to solve the problem of authenticity for zone transfers between redundant servers.

(c) DNSSec can use symmetric shared secrets to solve the authenticity problem for zone transfers.

(d) Current implementations of DNS have no way of restricting access and are thus completely vulnerable to integrity attacks.

(e) Current DNS implementations use unreliable connections.

(f) SSL/TLS uses Kerberos to authenticate secure sockets.

(g) SSL/TLS uses trust management based on a signing authority, like a trusted third party.

(h) IPSec was designed for and only works with IPv6, so it will not be available for some years.

(i) IPSec has solved the problem of contradictory policy rules.

(j) IPSec permits packet filtering based on Mandatory Access Control.

(k) IPSec's use of encrypted tunnels allows it to function like a VPN, provided that end devices themselves support IPSec.

(l) Wireless IP security does not support end-to-end encryption, only encryption between wireless device and receiving station.

14. Explain why encryption can be used as a form of authentication.

15. What is meant by masquerading or spoofing?

16. Describe the issues to consider in finding a backup scheme for a large and a small organization. Your answer should address tactical, economic and ethical issues.