

240 log_2 10 times the time needed to do one operation. (Since log_2 10 is about 3.33, it will take at most 800 times the amount of time for one operation to compute our power of a.)

You may not be used to thinking about how large the numbers get when you are doing computation. Computers hold fairly large numbers (4-byte integers in the range of roughly −2^31 to 2^31 are typical), and this suffices for most purposes. Because of the way computer hardware works, as long as numbers fit into one 4-byte integer, the time to do simple arithmetic operations doesn’t depend on the value of the numbers involved. (A standard way to say this is to say that the time to do a simple arithmetic operation is constant.) However, when we talk about numbers that are much larger than 2^31, we have to take special care to implement our arithmetic operations correctly, and we also have to be aware that the operations are slower.

It is accurate to assume that when multiplying large numbers, the time it takes is roughly proportional to the product of the number of digits in each. The constant of proportionality would depend on the implementation; with straightforward implementations that don’t make use of special hardware features the constant could be around 1, and it would be unlikely to be as small as 0.01. If we computed our 100-digit number to the 10^120th power, we would be computing a number with more than 10^120 digits. We clearly do not want to be doing computation on such numbers, as our computer cannot even store such a number!

Fortunately, since the number we are computing will ultimately be taken modulo some 200-digit number, we can make all our computations modulo that number (see Lemma 2.1.2). By doing so, we ensure that the two numbers we are multiplying together have at most 200 digits, and so the time needed for the problem proposed in Exercise 2.4-1 would be a proportionality constant times 40,000 times log_2(10^120) times the time needed for a basic operation, plus the time needed to figure out which powers of a are multiplied together, which would be quite small in comparison.
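
The repeated-squaring idea behind Exercise 2.4-1, combined with the reduction modulo n just described, can be sketched in a few lines of Python. This is only an illustrative sketch (the name power_mod is ours); Python's built-in pow(a, b, n) does the same job.

def power_mod(a, b, n):
    # Compute a^b mod n by repeated squaring, reducing mod n after every
    # multiplication so intermediate results never exceed the size of n^2.
    result = 1
    square = a % n                  # a^(2^0) mod n
    while b > 0:
        if b % 2 == 1:              # this power of two appears in the binary form of b
            result = (result * square) % n
        square = (square * square) % n   # the next repeated square of a, mod n
        b //= 2
    return result

# For a 200-digit modulus m, power_mod(a, 10**120, m) performs roughly
# 2 * 400 = 800 multiplications of numbers with at most 200 digits each,
# matching the estimate in the text.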

Note that this algorithm, on 200-digit numbers, is as much as 40,000 times slower than on simple integers. This is a noticeable effect, and if you use or write an encryption program, you can see this effect when you run it. However, we can still typically do this calculation in less than a second, a small price to pay for secure communication.

How long does it take to use the RSA Algorithm?

There is a lot of arithmetic involved in encoding and decoding messages according to the RSA algorithm. How long will all this arithmetic take? Let’s assume for now that Alice has already chosen p, q, e, and d, and so she knows n as well. When Bob wants to send Alice the message x, he sends x^e mod n. By our analyses in Exercise 2.4-2 and Exercise 2.4-3 we see that this amount of time is more or less proportional to log_2 e, which is itself proportional to the number of digits of e, though the first constant of proportionality depends on the method our computer uses to multiply numbers. Since e has no more than 200 digits, this should not be too time consuming for Bob if he has a reasonable computer. (On the other hand, if he wants to send a message consisting of many segments of 200 digits each, he might want to use the RSA system to send a key for another simpler (secret key) system, and then use that simpler system for the message.)

It takes Alice a similar amount of time to decode, as she has to take the message to the dth power, mod n.
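
To make the encode/decode round trip concrete, here is a toy Python sketch. The numbers are our own (p = 61 and q = 53 are far too small to be secure), but the arithmetic is exactly what the text describes.

p, q, e = 61, 53, 17
n = p * q                           # 3233, the public modulus
d = pow(e, -1, (p - 1) * (q - 1))   # inverse of e mod (p-1)(q-1), via extended GCD (Python 3.8+)

x = 100                             # Bob's message
encoded = pow(x, e, n)              # Bob sends x^e mod n
decoded = pow(encoded, d, n)        # Alice computes (x^e)^d mod n
assert decoded == x                 # Alice recovers the original message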

We commented already that nobody knows a fast way to find x from x^e mod n. In fact, nobody knows that there isn’t a fast way either, which means that it is possible that the RSA cryptosystem could be broken some time in the future. We also don’t know whether extracting eth roots mod n is in the class of NP-complete problems, an important family of problems with the property that a reasonably fast solution to any one of them would lead to a reasonably fast solution to all of them. We do know that extracting eth roots is no harder than these problems, but it may be easier.

However, someone trying to read Alice’s mail is not restricted to extracting roots to discover x. Someone who knows n and knows that Alice is using the RSA system could presumably factor n, discover p and q, use the extended GCD algorithm to compute d, and then decode all of Alice’s messages. However, nobody knows how to factor integers quickly either. Again, we don’t know whether factoring is NP-complete, but we do know that it is no harder than the NP-complete problems. Thus here is a second possible way around the RSA system. However, enough people have worked on the factoring problem that most people are confident it is in fact difficult, in which case the RSA system is safe, as long as we use keys that are long enough.

How hard is factoring?

2.4-4 Factor 224,551. (The idea is to do this without resorting to computers, but if you give up on doing it by hand and calculator, using a computer is fine.)

With current technology, keys with roughly 100 digits are not that hard to crack. In other words, people can factor numbers that are roughly 100 digits long, using methods that are a little more sophisticated than the obvious approach of trying all possible divisors. However, when the numbers get long, say over 120 digits, they become very hard to factor. The record as of the year 2000 for factoring is a roughly 130-digit number. To factor this number, thousands of computers around the world were used, and it took several months. So given the current technology, RSA with a 200 digit key seems to be very secure.
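
To see why the obvious approach is hopeless at this size, here is a sketch (ours) of factoring by trial division; the more sophisticated methods alluded to above do much better, but even they stall well before 200 digits.

def trial_division(m):
    # Factor m by trying every divisor up to the square root of m.
    # For a 200-digit n = p*q with two 100-digit prime factors, this loop
    # would need on the order of 10**100 iterations.
    factors = []
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return factors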

Finding large primes

There is one more issue to consider in implementing the RSA system for Alice. We said that Alice chooses two primes of about a hundred digits each. But how does she choose them? It follows from some celebrated work on the density of prime numbers that if we were to choose a number m at random and check about log_e(m) numbers around m for primality, we would expect that one of these numbers was prime. Thus if we have a reasonably quick way to check whether a number is prime, we shouldn’t have to guess too many numbers, even with a hundred or so digits, before we find one we can show is prime.

However, we have just observed that nobody knows a quick way to find any or all factors of a number. The standard way of proving a number is prime is by showing that it and 1 are its only factors. For the same reasons that factoring is hard, the simple approach to primality testing (testing all possible divisors) is much too slow. If we did not have a faster way to check whether a number is prime, the RSA system would be useless.

Miller and Rabin had a stroke of insight that has effectively solved this problem. We know, by Fermat’s Little Theorem, that in Z_p with p prime, x^(p−1) = 1 for every x between 1 and p − 1. What about x^(m−1), in Z_m, when m is not prime?


Lemma 2.4.1 Let m be a non-prime, and let x be a number in Z_m which has no multiplicative inverse. Then x^(m−1) ≢ 1 (mod m).

Proof: Assume, for the purpose of contradiction, that

x^(m−1) ≡ 1 (mod m).

Then

x · x^(m−2) ≡ 1 (mod m).

But then x^(m−2) is the inverse of x, which contradicts the fact that x has no multiplicative inverse.

This distinction between primes and non-primes gives the idea for an algorithm. Suppose we have some number m, and are not sure whether it is prime or not. We can run the following algorithm:

PrimeTest(m)
   choose a random number x with 2 ≤ x ≤ m − 1
   compute y = x^(m−1) mod m
   if (y = 1)
      output “m might be prime”
   else
      output “m is definitely not prime”

Note the asymmetry here. If y ≠ 1, then m is definitely not prime, and we are done. On the other hand, if y = 1, then m might be prime, and we probably want to do some other calculations. In fact, we can repeat the algorithm PrimeTest(m) many times, with a different random number x each time. If on any of the t runs the algorithm outputs “m is definitely not prime”, then the number m is definitely not prime, as we have “proof” in the form of an x for which x^(m−1) mod m ≠ 1. On the other hand, if on all t runs the algorithm PrimeTest(m) outputs “m might be prime”, then, with reasonable certainty, we can say that the number m is prime. This is actually an example of a randomized algorithm, and we will be studying these in greater detail later in the course. For now, let’s informally see how likely it is that we make a mistake.
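
A minimal runnable version of the test as just described, in Python (the function names are ours), together with the repetition discussed above:

import random

def prime_test(m):
    # One round of the test from the text: True means "m might be prime".
    x = random.randint(2, m - 1)
    return pow(x, m - 1, m) == 1        # y = x^(m-1) mod m

def probably_prime(m, t=20):
    # Repeat the test t times with independent random choices of x;
    # a single failure proves that m is not prime.
    return all(prime_test(m) for _ in range(t))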

We can see that the chance of making a mistake depends, for a particular non-prime m, on exactly how many numbers a have the property that a^(m−1) mod m = 1. If the answer is that very few do, then our algorithm is very likely to give the correct answer. On the other hand, if the answer is that most of them do, then we are more likely to give an incorrect answer.

In Exercise 10 at the end of the section, you will show that the number of elements in Z_m without inverses is at least √m. In fact, even many numbers that do have inverses will fail the test x^(m−1) = 1. For example, in Z_12 only 1 passes the test, while in Z_15 only 1, 4, 11, and 14 pass the test. (Z_12 really is not typical; can you explain why?)
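
These small cases are easy to check directly; a brief Python check (ours) confirms them:

def passers(m):
    # All a in Z_m with a^(m-1) mod m = 1.
    return [a for a in range(1, m) if pow(a, m - 1, m) == 1]

print(passers(12))   # [1]
print(passers(15))   # [1, 4, 11, 14]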

In fact, what Miller and Rabin showed was how to modify the test slightly (in a way that we won’t describe here) so that for any non-prime m, at least half of the possible x will fail the modified test and hence will show that m is composite. As we will see when we learn about probability, this implies that if we repeat the modified test t times, the probability of believing that a non-prime number is prime is at most 2^(−t). So, if we repeat the test 10 times, we have only about a 1 in a thousand chance of making a mistake, and if we repeat it 100 times, we have only a 1 in 2^100 (a little less than one in a nonillion) chance of making a mistake!

Numbers we have chosen by this algorithm are sometimes called pseudoprimes. They are called this because they are very likely to be prime. In practice, pseudoprimes are used instead of primes in implementations of the RSA cryptosystem. The worst that can happen when a pseudoprime is not prime is that a message may be garbled; in this case we know that our pseudoprime is not really prime, and we choose new pseudoprimes and ask our sender to send the message again. (Note that we do not change p and q with each use of the system; unless we were to receive a garbled message, we would have no reason to change them.)
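
Putting the pieces together, Alice’s prime-finding step might look like the following sketch (ours); it draws random odd numbers of the desired length and keeps the first one that survives t rounds of the test, reusing probably_prime from the earlier sketch. By the density-of-primes remark above, only a few hundred candidates should be needed for 100-digit numbers.

import random

def find_pseudoprime(digits=100, t=100):
    while True:
        # A random odd number with the given number of digits.
        m = random.randrange(10 ** (digits - 1) + 1, 10 ** digits, 2)
        if probably_prime(m, t):
            return m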

Problems

1. What is 3^1024 in Z_7? (This is an easy problem to do by hand.)

2. Suppose we have computed a^2, a^4, a^8, a^16, and a^32. What is the most efficient way for us to compute a^53?

3. Find all numbers a different from 1 and −1 (which is the same as 8) such that a^8 mod 9 = 1.

4. Use a spreadsheet, programmable calculator, or computer to find all numbers a different from 1 and −1 with a^32 mod 33 = 1. (This problem is relatively easy to do with a spreadsheet that can compute mods and will let you “fill in” rows and columns with formulas. However, you do have to know how to use the spreadsheet in this way to make it easy!)

5. If a is a 100-digit number, is the number of digits of a^(10^120) closer to 10^120 or 10^240? Is it a lot closer?

6. Explain what our outline of the solution to Exercise 2.4-1 has to do with the binary representation of 10^120.

7. Give careful pseudocode to compute a^x mod n. Make your algorithm as efficient as possible.

8. Suppose we want to compute a^(e_1 e_2 ··· e_m) mod n. Discuss whether it makes sense to reduce the exponents mod n as we compute their product. In particular, what rule of exponents would allow us to do this, and do you think this rule of exponents makes sense?

9. Number theorists use ϕ(n) to stand for the number of elements of Z_n that have inverses. Suppose we want to compute a^(e_1 e_2 ··· e_m) mod n. Would it make sense for us to reduce our exponents mod ϕ(n) as we compute their product? Why?

10. Show that if m is not prime, then at least √m elements of Z_m do not have multiplicative inverses.

11. Show that in Z_(p+1), where p is prime, only one element passes the primality test x^(m−1) = 1. (In this case, m = p + 1.)

12. Suppose for RSA, p = 11, q = 19, and e = 7. What is the value of d? Show how to encrypt the message 100, and then show how to decrypt the resulting message.

13. Suppose for applying RSA, p = 11, q = 23, and e = 13. What is the value of d? Show how to encrypt the message 100 and then how to decrypt the resulting message.


14. A digital signature is a way to securely sign a document. That is, it is a way to put your “signature” on a document so that anyone reading it knows that it is you who have signed it, but no one else can “forge” your signature. Digital signatures are, in a way, the opposite of encryption: if Alice wants to sign a message, she first applies her signature to it (think of this as encryption), and then the rest of the world can easily read it (think of this as decryption). Explain, in detail, how to achieve digital signatures using ideas similar to those used for RSA. In particular, anyone who has the document and has your signature (and knows your public key) should be able to determine that you signed it.

Chapter 3

Reflections on Logic and Proof

3.1 Equivalence and Implication

Equivalence of statements

3.1-1 A group of students are working on a project that involves writing a merge sort program. Joe and Mary have each tried their hand at writing an algorithm for a function that takes two lists, List1 and List2 of lengths p and q and merges them into a third list. Part of Joe’s algorithm is the following:

if ((i + j ≤ p + q) && (i ≤ p) && ((j ≥ q) || (List1(i) ≤ List2(j))))
   List3(k) = List1(i)
   i = i + 1
else
   List3(k) = List2(j)
   j = j + 1
k = k + 1
Return List3

The corresponding part of Mary’s algorithm is

if (((i + j ≤ p + q) && (i ≤ p) && (j ≥ q)) || ((i + j ≤ p + q) && (i ≤ p) && (List1(i) ≤ List2(j))))
   List3(k) = List1(i)
   i = i + 1
else
   List3(k) = List2(j)
   j = j + 1
k = k + 1
Return List3


Do Joe and Mary’s algorithms do the same thing?

Notice that Joe and Mary’s algorithms are exactly the same except for the “If statement.” (How convenient; they even used the same local variables!) In Joe’s algorithm we put entry i of List1 into position k of List3 if

i + j ≤ p + q and i ≤ p and (j ≥ q or List1(i) ≤ List2(j))

while in Mary’s algorithm we put entry i of List1 into position k of List3 if

(i + j ≤ p + q and i ≤ p and j ≥ q) or (i + j ≤ p + q and i ≤ p and List1(i) ≤ List2(j))

Joe and Mary’s statements are both built up from the same constituent parts (namely comparison statements), so let’s name these constituent parts and look at the statements in this form. Let’s use

s to stand for i + j ≤ p + q,
t to stand for i ≤ p,
u to stand for j ≥ q, and
v to stand for List1(i) ≤ List2(j).

Then the condition in Joe’s “If statement” becomes

s and t and (u or v)

while Mary’s becomes

(s and t and u) or (s and t and v).

One immediate benefit from recasting the statements in this symbolic form is that we see that s and t always appear together as “s and t.” This means we can simplify the form we need to analyze even more by substituting w for “s and t,” making Joe’s condition have the form

w and (u or v)

and Mary’s have the form

(w and u) or (w and v).

While we can show, based on our knowledge of the structure of the English language, that these two statements are saying the same thing, it is by no means easy to see. It is true that these two statements are saying the same thing, and one way to see it is to observe that in English, the word “and” distributes over the word “or,” just as set intersection distributes over set union and multiplication distributes over addition. To make this analogy easier to see, there is a standard notation that logicians have adopted for writing symbolic versions of compound statements. We shall use the symbol ∧ to stand for “and” and the symbol ∨ to stand for “or.” In this notation, Joe’s condition becomes

w ∧ (u ∨ v)

and Mary’s becomes

(w ∧ u) ∨ (w ∧ v).

It is one thing to see the analogy of notation, and another to understand why two statements with this symbolic form must mean the same thing. To deal with this, we must give a precise definition of “meaning the same thing” and develop a tool for analyzing when two statements satisfy this definition. We are going to consider symbolic compound statements that may be built up with the following kinds of symbols: symbols (s, t, etc.) standing for statements, the symbol ∧, standing for “and,” the symbol ∨, standing for “or,” the symbol ⊕, standing for “exclusive or,” and the symbol ¬, standing for “not.” We will develop a theory for deciding when a compound statement is true based on the truth or falsity of its component statements. This theory will enable us to determine, on the basis of the truth or falsity of, say, s, t, and u, whether a particular compound statement, say (s ∨ t) ∧ (¬u ∨ (s ∧ t)) ∧ ¬(s ⊕ (t ∨ u)), is true or false. Our technique is going to be based on truth tables, which you have almost certainly seen before. What may not have been clear before is what they are good for; we will now show you one thing they are good for.

In our sample compound statement (s ∨ t) ∧ (¬u ∨ (s ∧ t)) ∧ ¬(s ⊕ (t ∨ u)) we used parentheses to make it clear which operation to do first, with one exception, namely our use of the ¬ symbol. Notice that we wrote ¬u ∨ (s ∧ t). In principle we could have meant (¬u) ∨ (s ∧ t), which we did, or ¬(u ∨ (s ∧ t)), which we did not mean. The principle we use here is simple: the symbol ¬ applies to the smallest number of following symbols that makes sense. This is the same principle we use with minus signs in algebraic expressions; if you did not wonder what we meant by ¬u ∨ (s ∧ t), this may be the reason why. With this one exception, we will always use parentheses to make the order in which we are to perform operations clear; you should do the same.

The truth table for a logical connective states, in terms of the possible truth or falsity of the component parts, when the compound statement made by connecting those parts is true and when it is false. Here are the truth tables for the connectives we have mentioned so far.

s  t  s ∧ t        s  t  s ∨ t        s  t  s ⊕ t        s  ¬s
T  T    T          T  T    T          T  T    F          T   F
T  F    F          T  F    T          T  F    T          F   T
F  T    F          F  T    T          F  T    T
F  F    F          F  F    F          F  F    F

In effect these truth tables define what we mean by the words “and,” “or,” “exclusive or,” and “not” in the context of symbolic compound statements. Since we want to apply them to English statements, let’s see what a typical truth table says. The truth table for ∨, “or,” tells us that when s and t are both true, then so is “s or t.” It tells us that when s is true and t is false, or s is false and t is true, then “s or t” is true. Finally, it tells us that when s and t are both false, then so is “s or t.” Is this how we use the word “or” in English? The answer is sometimes! The word “or” is used ambiguously in English. When a teacher says “Each question on the test will be short answer or multiple choice,” the teacher is presumably not intending that a question could be both. Thus the word “or” is being used here in the sense of “exclusive or,” the ⊕ in the truth tables above. When someone says “Let’s see, this afternoon I could take a walk or I could shop for some new gloves,” she probably does not mean to preclude the possibility of doing both (perhaps even taking a walk downtown and then shopping for new gloves before walking back). Thus in English, we determine the way in which someone uses the word “or” from context. In mathematics and computer science we don’t always have context, and so we agree that we will say “exclusive or,” or “xor” for short, when that is what we mean; otherwise we will mean the “or” whose truth table is given by ∨. In the case of “and” and “not,” the truth tables are exactly what we would expect.

When we have a more complex compound statement, as we did in Joe and Mary’s programs, we will still want to be able to describe exactly the situations in which it is true and the situations in which it is false. We will do this by working out a truth table for the compound statement from the truth tables of its symbolic statements and its connectives. We will have one row across the top of the table that says what the original symbolic statements are and how we may build up the compound statement. It also has one row for each possible combination of truth and falsity of the original symbolic statements (in this example, as is often the case, the original symbolic statements are just single variables). Thus if we have two statements, we have, as above, four rows. If we have just one statement, then we have, as above, just two rows. If we have three statements, then we will have 2^3 = 8 rows, and so on. To show you how we work out truth tables, here is the truth table for Joe’s symbolic statement above. The columns to the left of the double line are the input variables; the columns to the right are the various sub-expressions that we need to compute. We put as many columns as we need in order to correctly compute the final result; as a general rule, each column should be easily computed from one or two previous columns.

w  u  v || u ∨ v   w ∧ (u ∨ v)
T  T  T ||   T          T
T  T  F ||   T          T
T  F  T ||   T          T
T  F  F ||   F          F
F  T  T ||   T          F
F  T  F ||   T          F
F  F  T ||   T          F
F  F  F ||   F          F

Here is the truth table for Mary’s symbolic statement.

w  u  v || w ∧ u   w ∧ v   (w ∧ u) ∨ (w ∧ v)
T  T  T ||   T       T             T
T  T  F ||   T       F             T
T  F  T ||   F       T             T
T  F  F ||   F       F             F
F  T  T ||   F       F             F
F  T  F ||   F       F             F
F  F  T ||   F       F             F
F  F  F ||   F       F             F

We see that Joe’s symbolic statement and Mary’s symbolic statement are true in exactly the same cases, so that they must say the same thing. Therefore Mary’s and Joe’s program segments return exactly the same values. We say that two symbolic compound statements are equivalent if they are true in exactly the same cases. Thus they are equivalent if their truth tables have the same final column (assuming both tables assign truth values to the original symbolic statements in the same pattern).
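
This definition is easy to mechanize: two statements are equivalent exactly when they agree on every assignment of truth values to their variables. The following Python sketch (ours) checks Joe’s and Mary’s forms this way by brute force.

from itertools import product

def equivalent(f, g, n):
    # True if the n-variable Boolean expressions f and g agree on all 2^n rows.
    return all(f(*row) == g(*row) for row in product([True, False], repeat=n))

joe  = lambda w, u, v: w and (u or v)
mary = lambda w, u, v: (w and u) or (w and v)
print(equivalent(joe, mary, 3))   # True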

3.1-2 DeMorgan’s law says that ¬(p ∨ q) is equivalent to ¬p ∧ ¬q. Use a truth table to demonstrate the truth of DeMorgan’s law.

3.1-3 Show that p ⊕ q, the exclusive or of p and q, is equivalent to (p ∨ q) ∧ ¬(p ∧ q). Apply DeMorgan’s law to ¬¬(p ∨ q) ∧ ¬(p ∧ q) to find another symbolic statement equivalent to the exclusive or.

To verify DeMorgan’s law, we create the following pair of truth tables that we have condensed into one “double truth table.”

p  q || p ∨ q   ¬(p ∨ q) || ¬p   ¬q   ¬p ∧ ¬q
T  T ||   T         F     ||  F    F      F
T  F ||   T         F     ||  F    T      F
F  T ||   T         F     ||  T    F      F
F  F ||   F         T     ||  T    T      T

To show that p ⊕ q is equivalent to (p ∨ q) ∧ ¬(p ∧ q), we use another “double truth table.”

p  q || p ⊕ q || p ∨ q   p ∧ q   ¬(p ∧ q)   (p ∨ q) ∧ ¬(p ∧ q)
T  T ||   F   ||   T       T         F                F
T  F ||   T   ||   T       F         T                T
F  T ||   T   ||   T       F         T                T
F  F ||   F   ||   F       F         T                F

By DeMorgan’s law, p ⊕ q is also equivalent to ¬(¬(p ∨ q) ∨ (p ∧ q)). It was certainly easier to use DeMorgan’s law to show this equivalence than to use another double truth table.
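
The same brute-force check as before (reusing the equivalent function sketched earlier, with Python’s boolean operators) confirms DeMorgan’s law and both expressions for the exclusive or:

demorgan_ok = equivalent(lambda p, q: not (p or q),
                         lambda p, q: (not p) and (not q), 2)
xor_ok  = equivalent(lambda p, q: p != q,                          # p xor q
                     lambda p, q: (p or q) and not (p and q), 2)
also_ok = equivalent(lambda p, q: p != q,
                     lambda p, q: not ((not (p or q)) or (p and q)), 2)
print(demorgan_ok, xor_ok, also_ok)   # True True True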

Implication

There is another kind of compound statement that occurs frequently in mathematics and computer science. Think about the statement of Fermat’s Little Theorem:

If p is a prime, then a^(p−1) mod p = 1 for every non-zero a ∈ Z_p.

This is a combination of two constituent statements,

p is a prime

and