Mon, 29 May 2006

A puzzle for high school math students

Factor x^{4} + 1.
Solution

As I mentioned in yesterday's very long article about GF(2^{n}), there is only one interesting polynomial over the reals that is irreducible, namely x^{2} + 1. Once you make up a zero for this irreducible polynomial, name it i, and insert it into the reals, you have the complex numbers, and you are done, because the fundamental theorem of algebra says that every polynomial has a zero in the complex numbers, and therefore no polynomial of degree greater than 1 is irreducible. Once you've inserted i, there is nothing else you need to insert.

This implies that there are no irreducible real polynomials of degree higher than 2: every real polynomial of degree 3 or higher factors into simpler factors. For odd-degree polynomials, this is not surprising, since every such polynomial has a real zero. But I remember the day, surprisingly late in my life, when I realized that it also implies that x^{4} + 1 must be factorable. I was so used to x^{2} + 1 being irreducible that I had imagined x^{4} + 1 must be irreducible also. x^{2} + bx + 1 factors if and only if |b| ≥ 2, and I had imagined that something similar would hold for x^{4} + 1. But no, I was completely wrong.

x^{4} + 1 has four zeroes, which are the fourth roots of -1, or, if you prefer, the two square roots of each of i and -i. When I was in elementary school I used to puzzle about the square root of i; I hadn't figured out yet that it was ±(1 + i)/√2. Let's call these two numbers j and -j. The square roots of -i are the conjugates of j and -j: ±(1 - i)/√2, which we can call k and -k because my typewriter doesn't have an overbar.

So x^{4} + 1 factors as (x + j)(x - j)(x + k)(x - k). But this doesn't count, because j and k are complex. We get the real factorization by remembering that j and k are conjugate, so that (x - j)(x - k) is completely real; it equals x^{2} - √2x + 1. And similarly (x + j)(x + k) = x^{2} + √2x + 1. So x^{4} + 1 factors into (x^{2} - √2x + 1)(x^{2} + √2x + 1). Isn't that nice?
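The factorization is easy to check numerically. Here is a quick Python sketch of my own (not from the article): it multiplies the two quadratic factors coefficient by coefficient and confirms that the product is x^{4} + 1, up to floating-point error.

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

s = math.sqrt(2)
f = [1.0, -s, 1.0]   # x^2 - sqrt(2) x + 1, coefficients low-to-high
g = [1.0,  s, 1.0]   # x^2 + sqrt(2) x + 1
result = [round(c, 12) + 0.0 for c in poly_mul(f, g)]
print(result)  # [1.0, 0.0, 0.0, 0.0, 1.0], i.e. x^4 + 1
```

The cross terms ±√2x cancel, and the middle coefficient 1 - 2 + 1 vanishes, leaving just the constant and quartic terms.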
[Other articles in category /math] permanent link

Sat, 27 May 2006

Introduction

According to legend, the imaginary unit i = √-1 was invented by mathematicians because the equation x^{2} + 1 = 0 had no solution. So they introduced a new number i, defined so that i^{2} + 1 = 0, and investigated the consequences. Like most legends, this one has very little to do with the real history of the thing it purports to describe. But let's pretend that it's true for a while.
Descartes' theorem

If P(x) is some polynomial, and z is some number such that P(z) = 0, then we say that z is a zero of P or a root of P. Descartes' theorem (yes, that Descartes) says that if z is a zero of P, then P can be factored as P = (x-z)P', where P' is a polynomial of lower degree that has all the same roots as P, except without z.

For example, consider P = x^{3} - 6x^{2} + 11x - 6. One of the zeroes of P is 1, so Descartes' theorem says that we can write P in the form (x-1)P' for some second-degree polynomial P', and if there are any other zeroes of P, they will also be zeroes of P'. And in fact we can: x^{3} - 6x^{2} + 11x - 6 = (x-1)(x^{2} - 5x + 6). The other zeroes of the original polynomial P are 2 and 3, and both of these (but not 1) are also zeroes of x^{2} - 5x + 6.

We can repeat this process. If a_{1}, a_{2}, ... a_{n} are the zeroes of some nth-degree polynomial P, we can factor P as (x-a_{1})(x-a_{2})...(x-a_{n}). The only possibly tricky thing here is that some of the a_{i} might be the same. x^{2} - 2x + 1 has a zero of 1, but twice, so it factors as (x-1)(x-1). x^{3} - 4x^{2} + 5x - 2 also has a zero of 1, twice, so it factors as (x-1)(x^{2} - 3x + 2), where (x^{2} - 3x + 2) again has a zero of 1, but this time only once.

Descartes' theorem has a number of important corollaries: an nth-degree polynomial has no more than n zeroes, and a polynomial that has a zero can be factored. When coupled with the so-called fundamental theorem of algebra, which says that every polynomial has a zero over the complex numbers, it implies that every nth-degree polynomial can be factored into a product of n factors of the form (x-z) where z is complex, or into a product of first- and second-degree polynomials with real coefficients.
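Dividing out a known zero is a routine computation (synthetic division). The following Python sketch is my own illustration, not part of the article, run on the example polynomial above.

```python
def deflate(coeffs, z):
    """Divide a polynomial by (x - z) using synthetic division.
    coeffs are listed highest degree first; returns (quotient, remainder).
    If z is a zero, the remainder is 0 and the quotient is P'."""
    q = []
    carry = 0
    for c in coeffs:
        carry = carry * z + c
        q.append(carry)
    return q[:-1], q[-1]

# P = x^3 - 6x^2 + 11x - 6, with a zero at x = 1
quotient, remainder = deflate([1, -6, 11, -6], 1)
print(quotient, remainder)  # [1, -5, 6] 0, i.e. x^2 - 5x + 6
```

Deflating the quotient again at its zero 2 leaves x - 3, recovering all three roots one at a time.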
The square roots of -1

Suppose we want to know what the square roots of 1 are. We want numbers x such that x^{2} = 1, or, equivalently, such that x^{2} - 1 = 0. x^{2} - 1 factors into (x-1)(x+1), so the zeroes are ±1, and these are the square roots of 1. On the other hand, if we want the square roots of 0, we need the solutions of x^{2} = 0, and the corresponding calculation calls for us to factor x^{2}, which is just (x-0)(x-0). So 0 has the same square root twice; it is the only number without two distinct square roots.

If we want to find the square roots of -1, we need to factor x^{2} + 1. No real numbers do this, so we call the polynomial irreducible. Instead, we make up some square roots for it, which we will call i and j. We want (x - i)(x - j) = x^{2} + 1. Multiplying through gives x^{2} - (i + j)x + ij = x^{2} + 1. So we have some additional relationships between i and j: ij = 1, and i + j = 0.

Because i + j = 0, we always dispense with j and just call it -i. But it's essential to realize that neither one is more important than the other. The two square roots are identical twins, different individuals that are completely indistinguishable from each other. Our choice about which one was primary was completely arbitrary. That doesn't even do justice to the arbitrariness of the choice: i and j are so indistinguishable that we still don't know which one got the primary notation. Every property enjoyed by i is also enjoyed by j. i^{3} = j, but j^{3} = i. e^{iπ} = -1, but so too e^{jπ} = -1. And so on. In fact, I once saw the branch of mathematics known as Galois theory described as being essentially the observation that i and -i are indistinguishable, plus the observation that there are no other indistinguishable pairs of complex numbers.

Anyway, all this is to set up the main point of the article, which is to describe my favorite math problem of all time. I call it the "train problem" because I work on it every time I'm stuck on a long train trip.
We are going to investigate irreducible polynomials; that is, polynomials that do not factor into products of simpler polynomials. Descartes' theorem implies that irreducible polynomials are zeroless, so we are going to start by looking for polynomials without zeroes. But we are not going to do it in the usual world of the real numbers. The answer in that world is simple: x^{2} + 1 is irreducible, and once you make up a zero for this polynomial and insert it into the universe, you are done; the fundamental theorem of algebra guarantees that there are no more irreducible polynomials. So instead of the reals, we're going to do this in a system known as Z_{2}.
Z_{2}

Z_{2} is much simpler than the real numbers. Instead of lots and lots of numbers, Z_{2} has only two. (Hence the name: the 2 means 2, and the Z means "numbers"; Z is short for Zahlen, the German word for numbers.) Z_{2} has only 0 and 1. Addition and multiplication are as you expect, except that instead of 1 + 1 = 2, we have 1 + 1 = 0.
The 1+1=0 oddity is the one and only difference between Z_{2} and ordinary algebra. All the other laws remain the same. To be explicit, I will list them below:

a + b = b + a            ab = ba
(a + b) + c = a + (b + c)            (ab)c = a(bc)
a(b + c) = ab + ac
a + 0 = a            a·1 = a            a·0 = 0
Algebra in Z_{2}

But the 1+1=0 oddity does have some consequences. For example, a + a = a·(1 + 1) = a·0 = 0 for all a. This will continue to be true even if we extend Z_{2} to include more elements than just 0 and 1, just as x + x = 2x is true not only in the real numbers but also in their extension, the complex numbers.

Because a + a = 0, we have a = -a for all a, and this means that in Z_{2}, addition and subtraction are the same thing. If you don't like subtraction (and really, who does?), then Z_{2} is the place for you. As a consequence, there are some convenient algebraic manipulations. For example, suppose we have a + b = c and we want to solve for a. Normally, we would subtract b from both sides. But in Z_{2}, we can add b to both sides, getting a + b + b = c + b. Then the b + b on the left cancels (because b + b = 0), leaving a = c + b. From now on, I'll call this the addition law.

If you suffered in high school algebra because you wanted (a + b)^{2} to be equal to a^{2} + b^{2}, then suffer no more, because in Z_{2} they are equal. Here's the proof:
(a + b)^{2} = a^{2} + ab + ab + b^{2} = a^{2} + b^{2},

because the ab + ab cancels in the last step. This happy fact, which I'll call the square law, also holds true for sums of more than two terms: (a + b + ... + z)^{2} = a^{2} + b^{2} + ... + z^{2}. But it does not hold true for cubes; (a + b)^{3} is a^{3} + a^{2}b + ab^{2} + b^{3}, not a^{3} + b^{3}.
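Since Z_{2} has only two elements, both laws can be checked exhaustively. Here is a small Python sketch of my own, using XOR for addition and AND for multiplication:

```python
from itertools import product

# In Z2, addition is XOR and multiplication is AND.
def add(a, b): return a ^ b
def mul(a, b): return a & b

# Square law: (a + b)^2 = a^2 + b^2 for every a, b in Z2.
for a, b in product((0, 1), repeat=2):
    lhs = mul(add(a, b), add(a, b))
    rhs = add(mul(a, a), mul(b, b))
    assert lhs == rhs

# Addition law: a + b = c implies a = c + b, because b + b = 0.
for a, b in product((0, 1), repeat=2):
    c = add(a, b)
    assert add(c, b) == a

print("square law and addition law hold in Z2")
```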
Irreducible polynomials in Z_{2}

Now let's try to find a polynomial that is irreducible in Z_{2}. The first-degree polynomials are x and x+1, and both of these have zeroes: 0 and 1, respectively.

There are four second-degree polynomials:

x^{2}
x^{2} + 1
x^{2} + x
x^{2} + x + 1
Note in particular that, unlike in the real numbers, x^{2} + 1 is not irreducible in Z_{2}. 1 is a zero of this polynomial, which, because of the square law, factors as (x+1)(x+1). But we have found an irreducible polynomial, x^{2} + x + 1. The other three second-degree polynomials can all be factored, but x^{2} + x + 1 cannot.

Like the fictitious mathematicians who invented i, we will rectify this problem by introducing into Z_{2} a new number, called b, which has the property that b^{2} + b + 1 = 0. What are the consequences of this? The first thing to notice is that we haven't added only one new number to Z_{2}. When we add i to the real numbers, we also get a lot of other stuff like i+1 and 3i-17 coming along for the ride. Here we can't add just b. We must also add b+1.

The next thing we need to do is figure out how to add and multiply with b. The addition and multiplication tables are extended as follows:

  +  |  0    1    b    b+1
-----+---------------------
  0  |  0    1    b    b+1
  1  |  1    0    b+1  b
  b  |  b    b+1  0    1
 b+1 |  b+1  b    1    0

  ×  |  0    1    b    b+1
-----+---------------------
  0  |  0    0    0    0
  1  |  0    1    b    b+1
  b  |  0    b    b+1  1
 b+1 |  0    b+1  1    b
The only items here that might require explanation are the entries in the lower-right-hand corner of the multiplication table. Why is b × b = b+1? Well, b^{2} + b + 1 = 0 by definition, so b^{2} = b + 1 by the addition law. Why is b × (b+1) = 1? Well, b × (b+1) = b^{2} + b = (b + 1) + b = 1, because the two b's cancel. Perhaps you can work out (b+1) × (b+1) for yourself.

From here there are a number of different things we could investigate. But what I think is most crucial is to discover the value of b^{3}. Let's calculate that. b^{3} = b·b^{2}. b^{2}, we know, is b+1. So b^{3} = b(b+1) = 1. Thus b is a cube root of 1.

People with some experience with this sort of study will not be surprised by this. The cube roots of 1 are precisely the zeroes of the polynomial x^{3} + 1. 1 is obviously a cube root of 1, so Descartes' theorem says that x^{3} + 1 must factor into (x+1)P, where P is some second-degree polynomial whose zeroes are the other two cube roots. P can't be divisible by x, because 0 is not a cube root of 1, so it must be either (x+1)(x+1) or something else. If it were the former, then we would have x^{3} + 1 = (x+1)^{3}, but as I pointed out before, we don't. So it must be something else. The only candidate is x^{2} + x + 1. Thus the other two cube roots of 1 must be zeroes of x^{2} + x + 1.

The next question we must answer without delay is: b is one of the zeroes of x^{2} + x + 1; what is the other? There is really only one possibility: it must be b+1, because that is the only other number we have. But another way to see this is to observe that if p is a cube root of 1, then p^{2} must also be a cube root of 1:

(p^{2})^{3} = (p^{3})^{2} = 1^{2} = 1.
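These tables are easy to mechanize. The following Python sketch is my own (the encoding is invented): it represents 0, 1, b, b+1 as the two-bit numbers 0 through 3, with the high bit the coefficient of b, and reduces products by the defining relation b^{2} = b + 1.

```python
# Encode x*b + y as the two-bit number (x << 1) | y:
# 0 -> 0, 1 -> 1, b -> 2, b+1 -> 3.  Addition is XOR.
def mul4(x, y):
    """Multiply two elements of Z2 extended by b, where b^2 + b + 1 = 0."""
    r = 0
    for i in range(2):          # carry-less (polynomial) multiplication
        if (y >> i) & 1:
            r ^= x << i
    if r & 0b100:               # replace b^2 by b + 1 (i.e. reduce mod 0b111)
        r ^= 0b111
    return r

b = 2
print(mul4(b, b))               # b * b = b + 1, encoded as 3
print(mul4(b, b + 1))           # b * (b+1) = 1
print(mul4(b, mul4(b, b)))      # b^3 = 1: b is a cube root of 1
```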
So we now know the three zeroes of x^{3} + 1: they are 1, b, and b^{2}. Since x^{3} + 1 factors into (x+1)(x^{2}+x+1), the two zeroes of x^{2}+x+1 are b and b^{2}. As we noted before, b^{2} = b+1. This means that x^{2} + x + 1 should factor into (x+b)(x+b+1). Multiplying through, we get x^{2} + bx + (b+1)x + b(b+1) = x^{2} + x + 1, as hoped.

Actually, the fact that both b and b^{2} are zeroes of x^{2}+x+1 is not a coincidence. Z_{2} has the delightful property that if z is a zero of any polynomial whose coefficients are all 0's and 1's, then z^{2} is also a zero of that polynomial. I'll call this Theorem 1. Theorem 1 is not hard to prove; it follows from the square law. Feel free to skip the demonstration which follows in the rest of the paragraph.

Suppose the polynomial is P(x) = a_{n}x^{n} + a_{n-1}x^{n-1} + ... + a_{1}x + a_{0}, and z is a zero of P, so that P(z) = 0. Then (P(z))^{2} = 0 also, so (a_{n}z^{n} + a_{n-1}z^{n-1} + ... + a_{1}z + a_{0})^{2} = 0. By the square law, we have a_{n}^{2}(z^{n})^{2} + a_{n-1}^{2}(z^{n-1})^{2} + ... + a_{1}^{2}z^{2} + a_{0}^{2} = 0. Since all the a_{i} are either 0 or 1, we have a_{i}^{2} = a_{i}, so a_{n}(z^{n})^{2} + a_{n-1}(z^{n-1})^{2} + ... + a_{1}z^{2} + a_{0} = 0. And finally, (z^{k})^{2} = (z^{2})^{k}, so we get a_{n}(z^{2})^{n} + a_{n-1}(z^{2})^{n-1} + ... + a_{1}z^{2} + a_{0} = P(z^{2}) = 0, which is what we wanted to prove.

Now you might be worried about something: the fact that b is a zero of x^{2} + x + 1 implies that b^{2} is also a zero; doesn't the fact that b^{2} is a zero imply that b^{4} is also a zero? And then also b^{8} and b^{16} and so forth? Then we would be in trouble, because x^{2} + x + 1 is a second-degree polynomial, and, by Descartes' theorem, cannot have more than two zeroes. But no, there is no problem, because b^{3} = 1, so b^{4} = b·b^{3} = b·1 = b, and similarly b^{8} = b^{2}, so we are getting the same two zeroes over and over.
Symmetry again

Just as i and -i are completely indistinguishable from the point of view of the real numbers, so too are b and b+1 indistinguishable from the point of view of Z_{2}. One of them has been given a simpler notation than the other, but there is no way to tell which one we gave the simpler notation to! The one and only defining property of b is that b^{2} + b + 1 = 0; since b+1 also shares this property, it also shares every other property of b. b and b+1 are both cube roots of 1. Each, added to the other, gives 1; each, added to 1, gives the other. Each, multiplied by itself, gives the other; each, multiplied by the other, gives 1.
Third-degree polynomials

We have pretty much exhausted the interesting properties of b and its identical twin b^{2}, so let's move on to irreducible third-degree polynomials. If a third-degree polynomial is reducible, then it is either a product of three linear factors, or of a linear factor and an irreducible second-degree polynomial. If the former, it must be one of the four products x^{3}, x^{2}(x+1), x(x+1)^{2}, or (x+1)^{3}. If the latter, it must be one of x(x^{2}+x+1) or (x+1)(x^{2}+x+1). This makes 6 reducible polynomials. Since there are 8 third-degree polynomials in all, the other two must be irreducible:

x^{3} + x + 1
x^{3} + x^{2} + 1
There are two irreducible third-degree polynomials. Let's focus on x^{3} + x + 1, since it seems a little simpler. As before, we will introduce a new number, c, which has the property c^{3} + c + 1 = 0. By the addition law, this implies that c^{3} = c + 1. Using this, we can make a table of powers of c, which will be useful later:

c^{1} = c
c^{2} = c^{2}
c^{3} = c + 1
c^{4} = c^{2} + c
c^{5} = c^{2} + c + 1
c^{6} = c^{2} + 1
c^{7} = 1
To calculate the entries in this table, just multiply each line by c to get the line below. For example, once we know that c^{5} = c^{2} + c + 1, we multiply by c to get c^{6} = c^{3} + c^{2} + c. But c^{3} = c + 1 by definition, so c^{6} = (c + 1) + c^{2} + c = c^{2} + 1.

Once we get as far as c^{7}, we find that c is a 7th root of 1. Analogous to the way b and b^{2} were both cube roots of 1, the other 7th roots of 1 are c^{2}, c^{3}, c^{4}, c^{5}, c^{6}, and 1. For example, c^{5} is a 7th root of 1 because (c^{5})^{7} = c^{35} = (c^{7})^{5} = 1^{5} = 1.

We've decided that c is a zero of x^{3} + x + 1. What are the other two zeroes? By Theorem 1, they must be c^{2} and (c^{2})^{2} = c^{4}. Wait, what about c^{8}? Isn't that a zero also, by the same theorem? Oh, but c^{8} = c·c^{7} = c, so it's really the same zero again.

What about c^{3}, c^{5}, and c^{6}? Those are the zeroes of the other irreducible third-degree polynomial, x^{3} + x^{2} + 1. For example, (c^{3})^{3} + (c^{3})^{2} + 1 = c^{9} + c^{6} + 1 = c^{2} + (c^{2}+1) + 1 = 0.

So when we take Z_{2} and adjoin a new number that is a solution of the previously unsolvable equation x^{3} + x + 1 = 0, we're actually forced to add six new numbers; the resulting system has 8 elements, and includes three solutions to each of the irreducible third-degree polynomials, and a full complement of 7th roots of 1. Since the zeroes of x^{3} + x + 1 and x^{3} + x^{2} + 1 are all 7th roots of 1, this tells us that x^{7} + 1 factors as (x+1)(x^{3} + x + 1)(x^{3} + x^{2} + 1), which might not have been obvious.
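The same bit-twiddling trick used for b extends to this eight-element system. The sketch below is my own illustration: elements are three-bit polynomials in c, products are reduced by c^{3} = c + 1, and the table of powers comes out exactly as above, including c^{7} = 1.

```python
def mul8(x, y):
    """Multiply two elements of Z2 extended by c, where c^3 + c + 1 = 0.
    Elements are 3-bit polynomials in c (bit i is the coefficient of c^i)."""
    r = 0
    for i in range(3):                    # carry-less multiplication
        if (y >> i) & 1:
            r ^= x << i
    for shift in (1, 0):                  # reduce modulo c^3 + c + 1 (0b1011)
        if r & (0b1000 << shift):
            r ^= 0b1011 << shift
    return r

c = 0b010
powers = [1]
for _ in range(7):
    powers.append(mul8(powers[-1], c))
print(powers)                             # c^3 is 0b011 = c + 1; c^7 is 1

# c, c^2, and c^4 are the three zeroes of x^3 + x + 1:
for z in (powers[1], powers[2], powers[4]):
    assert mul8(mul8(z, z), z) ^ z ^ 1 == 0
```

The seven distinct powers of c are exactly the 7th roots of 1, which is what makes the factorization of x^{7} + 1 work.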
Onward

I said this was my favorite math problem, but I didn't say what the question was. But by this time we have dozens of questions. For example:
And then, when I get tired of Z_{2}, I can start all over with Z_{3}.
[Other articles in category /math] permanent link Mon, 15 May 2006
Creeping featurism and the ratchet effect
But the concept of "creeping featurism" has wider applicability than just to program features. We can recognize it in other contexts. For example, someone is reading the Perl manual. They read the section on the unpack function and find it confusing. So they propose a documentation patch to add a couple of sentences, explicating the confusing point in more detail. It seems like a good idea at the time. But if you do it over and over (and we have) you end up with a 2,000-page manual (and we did).

The real problem is that it's easy to see the benefit of any proposed addition, but much harder to see the cost: the manual is now 0.002% larger. The benefit has a poster child, an obvious beneficiary. You can imagine a confused person in your head, someone who happens to be confused in exactly the right way, and who is miraculously helped out by the presence of the right two sentences in the exact right place.

The cost has no poster child. Or rather, the poster child is much harder to imagine. This is the person who is looking for something unrelated to the two-sentence addition. They are going to spend a certain amount of time looking for it. If the two-sentence addition hadn't been in there, they would have found what they were looking for. But the addition slowed them down just enough that they gave up without finding what they needed. Although you can grant that such a person might exist, they really aren't as compelling as the confused person who is magically assisted by timely advice. Even harder to imagine is the person who's kinda confused, and for whom the extra two sentences, clarifying some obscure point about some feature he wasn't planning to use in the first place, are just more confusion. It's really hard to understand the cost of that. But the benefit, such as it is, comes in one big lump, whereas the cost is distributed in tiny increments over a very large population. The benefit is clear, and the cost is obscure.
It's easy to make a specific argument in favor of any particular addition ("people might be confused by X, so I'm going to explain it in more detail") and hard to make such an argument against the addition. And conversely: it's easy to argue that any particular bit of text should stay in, hard to argue that it should be removed. As a result, there's what I call a "ratchet effect": you can make the manual bigger, one tiny notch at a time, and people do. But having done so, you can't make it smaller again; someone will object to almost any proposed deletion. The manual gets bigger and bigger, worse and worse organized, more and more unusable, until finally it collapses under its own weight and all you can do is start over again.

You see the same thing happen in software, of course. I maintain the Text::Template Perl module, and I frequently get messages from people saying that it should have some feature or other. And these people sometimes get quite angry when I tell them I'm not going to put in the feature they want. They're angry because it's easy to see the benefit of adding another feature, but hard to see the cost. "If other people don't like it," goes the argument, "they don't have to use it." True, but even if they don't use it, they still pay the costs of slightly longer download times, slightly longer compile times, a slightly longer and more confusing manual, slightly less frequent maintenance updates, slightly less prompt bug fix deliveries, and so on. It is so hard to make this argument, because the cost to any one person is so very small! But we all know where the software will end up if I don't make this argument every step of the way: on the slag heap.

This has been on my mind on and off for years. But I just ran into it in a new context. Lately I've been working on a book about code style and refactoring in Perl. One thing you see a lot in Perl programs written by beginners is superfluous parentheses. For example:
    next if ($file =~ /^\./);
    next if !($file =~ (/[0-9]/));
    next if !($file =~ (/txt/));

Or:
    die $usage if ($#ARGV < 0);

There are a number of points I want to make about this. First, I'd like to express my sympathy for Perl programmers, because Perl has something like 95 different operators at something like 17 different levels of precedence, and nobody knows what all the precedences are and whether parentheses are required in all circumstances. Does the ** operator have higher or lower precedence than the <<= operator? I really have no idea.

So the situation is impossible, at least in principle, and yet people have to deal with it somehow. The advice you often hear is "if you're not sure of the precedence, just put in the parentheses." I think that's really bad advice. I think better advice would be "if you're not sure of the precedence, look it up."

Because Perl's Byzantine operator table is not responsible for all the problems. Notice in the examples above, which are real examples taken from real code written by other people, that many of the parentheses are entirely superfluous and are not disambiguating the precedence of any operators. In particular, notice the inner parentheses in:

    next if !($file =~ (/txt/));

Inside the inner parentheses, there are no operators! So they cannot be disambiguating any precedence, and they are completely unnecessary:
    next if !($file =~ /txt/);

People sometimes say "well, I like to put them in anyway, just to be sure." This is pure superstition, and we should not tolerate it in people who purport to be engineers. Engineers should be capable of making informed choices, based on technical realities, not on some creepy feeling in their guts that perhaps a failure to sprinkle enough parentheses over their program will invite the wrath of the Moon God.

By saying "if you're not sure, just avoid the problem" we are encouraging this kind of fearful, superstitious approach to the issue. That approach would be appropriate if it were the only way to deal with the issue, but fortunately it is not. There is a more rational approach: you can look it up, or even try an experiment, and then you will know whether the parentheses are required in a particular case. Then you can make an informed decision about whether to put them in.

But when I teach classes on this topic, people sometimes want to take the argument even further: they want to argue that even if you know the precedence, and even if you know that the parentheses are not required, you should put them in anyway, because the next person to see the code might not know that. And there we see the creeping featurism argument again. It's easy to see the potential benefit of the superfluous parentheses: some hapless novice maintenance programmer might misunderstand the expression if I don't put them in. It's much harder to see the cost: the code is fractionally harder for everyone to read and understand, novice or not. And again, the cost of the extra parentheses to any particular person is so small, so very small, that it is really hard to make the argument against it convincingly. But I think the argument must be made, or else the code will end up on the slag heap much faster than it would have otherwise. Programming cannot be run on the convoy system, with the program code written to address the most ignorant, uneducated programmer.
I think you have to assume that the next maintenance programmer will be competent, and that if they do not know what the expression means, they will look up the operator precedence in the manual. That assumption may be false, of course; the world is full of incompetent programmers. But no amount of parentheses is really going to help this person anyway. And even if it were, you do not have to give in, you do not have to cater to incompetence. If an incompetent programmer has trouble understanding your code, that is not your fault; it is their fault for being incompetent. You do not have to take special steps to make your code understandable even by incompetents, and you certainly should not do so at the expense of making it harder for competent programmers to read and understand, no, not to the tiniest degree.

The advice that one should always put in the parentheses seems to me to be going in the wrong direction. We should be struggling for higher standards, both for ourselves and for our associates. The conventional advice, it seems to me, is to give up.
[Other articles in category /prog] permanent link Thu, 11 May 2006
Egyptian Fractions
A simple algorithm for calculating this so-called "Egyptian fraction representation" is the greedy algorithm: to represent n/d, find the largest unit fraction 1/a that is less than n/d, calculate a representation for n/d - 1/a, and append 1/a.

This always works, but it doesn't always work well. For example, let's use the greedy algorithm to find a representation for 2/9. The largest unit fraction less than 2/9 is 1/5, and 2/9 - 1/5 = 1/45, so we get 2/9 = 1/5 + 1/45 = [5, 45]. But it also happens that 2/9 = [6, 18], which is much more convenient to calculate with because the numbers are smaller.

Similarly, for 19/20 the greedy algorithm produces 19/20 = [2] + 9/20 = [2, 3] + 7/60 = [2, 3, 9, 180]. But even 7/60 can be more simply written than as [9, 180]; it's also [10, 60], [12, 30], and, best of all, [15, 20]. Likewise, for 3/7 the greedy method gives us 3/7 = 1/3 + 2/21, and that 2/21 can be expanded by the greedy method as [11, 231], so 3/7 = [3, 11, 231]. But 2/21 has better expansions: it's also [12, 84], [14, 42], and, best of all, [15, 35], so 3/7 = [3, 15, 35]. Better than all of these, though, is 3/7 = [4, 7, 28], which is optimal.

Anyway, while I was tinkering with all this, I got an answer to a question I had been wondering about for years: why did Ahmes come up with a table of representations of fractions of the form 2/n, rather than representations of all possible quotients? Was there a table somewhere else, now lost, of representations of fractions of the form 3/n? The answer, I think, is "probably not"; here's why.

Suppose you want 3/7. But 3/7 = 2/7 + 1/7. You look up 2/7 in the table and find that 2/7 = [4, 28]. So 3/7 = [4, 7, 28]. Done.

OK, suppose you want 4/7. You look up 2/7 in the table and find that 2/7 = [4, 28]. So 4/7 = [4, 4, 28, 28] = [2, 14]. Done. Similarly, 5/7 = [2, 7, 14]. Done.

To calculate 6/7, you first calculate 3/7, which is [4, 7, 28].
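The greedy algorithm is only a few lines of code. Here is a Python sketch of my own (the function name is invented), using exact rational arithmetic so the subtractions never lose precision:

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(n, d):
    """Greedy algorithm: repeatedly peel off the largest unit fraction
    that fits, returning the list of denominators."""
    x = Fraction(n, d)
    denoms = []
    while x > 0:
        a = ceil(1 / x)          # smallest a with 1/a <= x
        denoms.append(a)
        x -= Fraction(1, a)
    return denoms

print(greedy_egyptian(2, 9))     # [5, 45]
print(greedy_egyptian(19, 20))   # [2, 3, 9, 180]
print(greedy_egyptian(3, 7))     # [3, 11, 231]
```

These reproduce the greedy expansions above; as the article notes, they are correct but often far from the shortest or smallest-denominator representations.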
Then you double 3/7 and get 6/7 = 1/2 + 2/7 + 1/14. Now you look up 2/7 in the table and get 2/7 = [4, 28], so 6/7 = [2, 4, 14, 28]. Whether this is optimal or not is open to argument. It's longer than [2, 3, 42], but on the other hand the denominators are smaller.

Anyway, the table of 2/n is all you need to produce Egyptian representations of arbitrary rational numbers. The algorithm in general is: build up n/d by repeated doubling, starting from 1/d or 2/d. To double a representation, replace each unit fraction 1/k with 1/(k/2) if k is even, or with the table's expansion of 2/k if k is odd; then combine any duplicated unit fractions the same way.
An alternative algorithm is to expand the numerator as a sum of powers of 2, which the Egyptians certainly knew how to do. For 19/20 this gives us 19/20 = 16/20 + 2/20 + 1/20 = 4/5 + [10, 20]. Now we need to figure out 4/5, which we do as above, getting 4/5 = [2, 6, 12, 20], or 4/5 = 2/3 + [12, 20] if we are Egyptian, or 4/5 = [2, 4, 20] if we are clever. Supposing we are neither, we have 19/20 = [2, 6, 12, 20, 10, 20] = [2, 6, 12, 10, 10] = [2, 6, 12, 5]. (It is not clear to me, by the way, that either of these algorithms is guaranteed to terminate. I need to think about it some more.)

Getting the table of good-quality representations of 2/n is not trivial, and requires searching, number theory, and some trial and error. It's not at all clear that 2/105 = [90, 126]. Once you have the table of 2/n, however, you can grind out the answer to any division problem. This might be time-consuming, but it's nevertheless trivial. So Ahmes needed a table of 2/n, but once he had it, he didn't need any other tables.
[Other articles in category /math] permanent link Tue, 02 May 2006
Addenda to recent articles 200604
[Other articles in category /addenda] permanent link Mon, 01 May 2006
Google query roundup
Probably the one I found most interesting was:
1 if n + 1 are put inside n boxes, then at least one box will contain more than one ball. prove this principle by induction.

But I found this so interesting that I wrote a 1,000-word article about it, which is not finished. Briefly: I believe that nearly all questions of the form "solve the problem using/without using technique X" are pedagogically bogus and represent a failure of instructor or curriculum. Well, it will have to wait for another time.

Another mathematical question that came up was:

1 a collection of 2 billion points is completely enclosed by a circle. does there exist a straight line having exactly 1 billion of these points on each side

This one is rather interesting. The basic idea is that you take a line, any line, and put it way off to one side of the points, so that all the points are on one side of the line. Then you move the line smoothly across the points to the other side. As you do this, the number of points on one side decreases and the number of points on the other side increases, until all the points are on the other side. Unless something funny happens in the middle, somewhere along the way half the points will be on one side and half on the other.

For concreteness, let's say that the line is moving from left to right, and that the points start out to the right of the line. What might happen in the middle is that you might have one billion minus n points on the left, and then suddenly the line intersects more than n points at once, so that the number of points on the left jumps up by a whole bunch, skipping right past one billion, instead of ticking up by one at a time. So what we really need is to ensure that this never happens. But that's no trouble. Taking the points two at a time, we can find the slope of the line through each pair of points. There are at most 1,999,999,999,000,000,000 such slopes.
If we pick a line whose slope is different from all of these, then no matter where we put it, it cannot possibly intersect more than one of the points. Then as we slide the line from one side to the other, as above, the count of the number of points on the left never goes up by more than 1 at a time, and we win.

Another math query, one that did not come from Google, is:
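The sweeping argument translates directly into a procedure. Here is a Python sketch of my own construction (all names invented) on a small point set: pick a slope strictly larger than every pairwise slope, sort the points by the intercept of the line of that slope through each of them, and cut between the two middle intercepts.

```python
from fractions import Fraction
from itertools import combinations
import random

def halving_line(points):
    """Return (m, b) so that y = m*x + b has exactly half the points
    strictly on each side. Assumes an even number of distinct points
    with distinct x coordinates."""
    # A slope larger than every pairwise slope meets at most one
    # point at a time as the line sweeps across.
    slopes = {Fraction(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)}
    m = max(slopes) + 1
    # Sweeping the line corresponds to increasing its intercept;
    # all intercepts are distinct because m matches no pairwise slope.
    intercepts = sorted(Fraction(y) - m * x for x, y in points)
    k = len(points) // 2
    b = (intercepts[k - 1] + intercepts[k]) / 2   # strictly between the middle two
    return m, b

random.seed(42)
pts = [(i, random.randrange(1000)) for i in range(20)]
m, b = halving_line(pts)
left = sum(1 for x, y in pts if Fraction(y) - m * x < b)
print(left, len(pts) - left)  # 10 10
```

Exact rational arithmetic matters here: with floats, two nearly equal intercepts could compare the wrong way and spoil the strict split.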
Why can't there be a Heron's formula for an arbitrary quadrilateral

Heron's formula, you will recall, gives the area of a triangle in terms of the lengths of its sides. The following example shows that there can be no such formula for the area of a quadrilateral: a square with four unit sides has area 1, but it can be flexed at its corners into a rhombus with the same four unit sides and as small an area as you like. Since the side lengths do not determine the area, no formula can compute the area from the side lengths alone.
The following query is a little puzzling:
1 undecidable problems not in np

It's puzzling because no undecidable problem is in NP. NP is the class of problems for which proposed solutions can be checked in polynomial time. Problems in NP can therefore be solved by the simple algorithm of generating everything that could possibly be a solution and checking each one to see if it is a solution. Undecidable problems, on the other hand, cannot be solved at all, by any method. So if you want an example of an undecidable problem that is not in NP, you start by choosing an undecidable problem, and then you are done.

It might be that the querent was looking for decidable problems that are not in NP. Here the answer is more interesting. There are many possibilities, but surprisingly few known examples. The problem of determining whether there is any string that is matched by a given regex is known to require exponential time, if regular expressions are extended with a {2} notation so that a{2} is synonymous with aa. Normally, if someone asks you whether any string matches a regex, you can answer just by presenting such a string, and then the querent can check the answer by running the regex engine and verifying (in polynomial time) that the string you have provided does indeed match. But for regexes with the {2} notation, the string you would provide might have to be gigantic, so gigantic that it could not be checked efficiently. This is because one can build a relatively short regex that matches only enormous strings: a{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2}{2} is only 61 characters long, and it does indeed match exactly one string, but the string it matches is 1,048,576 characters long.

Many problems involving finding the good moves in certain games are known to be decidable and believed not to be in NP, though it isn't known for sure. Many problems that involve counting things are likewise known to be decidable but believed not to be in NP.
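The doubling behavior of the {2} notation is easy to demonstrate. Python's re module rejects a bare quantifier applied directly to another quantifier, so this sketch of mine adds explicit non-capturing groups, which doesn't change the point: the pattern grows linearly while the one string it matches grows exponentially.

```python
import re

def nested_doubling(n):
    """Build the analogue of a{2}{2}...{2} with n copies of {2},
    wrapping each level in a non-capturing group for Python's re."""
    pat = "a"
    for _ in range(n):
        pat = "(?:%s){2}" % pat
    return pat

pat = nested_doubling(10)
print(len(pat), 2 ** 10)                        # 71-character pattern, 1024-character match
assert re.fullmatch(pat, "a" * 2 ** 10)         # matches exactly a^1024
assert not re.fullmatch(pat, "a" * (2 ** 10 - 1))
```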
For example, consider the NP-complete problem discussed here, called X3C. X3C is in NP. If Sesame Workshop presents you with a list of episodes and a list of approved topics for dividing the episodes into groups of 3, and you come up with a purported solution, Sesame Workshop can quickly check whether you did it right: whether each segment of Elmo's World is on exactly one video, and whether all the videos are on the approved list.

But consider the related problem in which Sesame Workshop comes to you with the same episodes and the same list of approved combinations, and asks, not for a distribution of episodes onto videotapes, but for a count of the number of possible distributions. You might come back with an answer, say 23,487, but Sesame Workshop has no way to check that this is in fact the right number. Nobody has been able to think of a way that they might do this, anyhow. Such a problem is clearly decidable: enumerate all possible distributions of episodes onto videos, check each one to see if it satisfies Sesame Workshop's criteria, and increment a counter if so. It is clearly at least as hard as the NP-complete problem of determining whether there is any legal distribution, because if you can count the legal distributions, then there is a legal distribution if and only if the count is not 0. But it may well be outside of NP, because it seems quite plausible that there is no quick way to verify a purported count of solutions, short of generating, checking, and recounting all possible distributions.

This is on my mind anyway, because this month I got email from two separate people asking me for examples of problems that were outside of NP but still decidable, one on March 18:

I was hoping to get some information on a problem that is considered np-hard but not in NP (other than halting problem).

And then one on March 31:
I was visiting your website in search of problems that are NP-Hard but not NP-Complete which are decision problems, but not the halting problem.

It turned out that they were both in the same class at Brock University in Canada.

Here's one that was surprising:

1 wife site:plover.com

My first thought was that this might have been posted by my wife, looking to see if I was talking about her on my blog. But that is really not her style. She would be much more likely just to ask me rather than to do something sneaky. That is why I married her, instead of a sneaky person. And indeed, the query came from Australia. I still wonder what it was about, though.
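Back to the X3C counting problem: the enumerate-and-count procedure described earlier fits in a few lines. This is a toy sketch with made-up episode numbers and approved triples, and the function name is my own; real instances are of course far too large for this to finish before the heat death of the universe, which is rather the point:

```python
from itertools import combinations

def count_exact_covers(universe, triples):
    """Count the ways to partition `universe` into disjoint triples
    drawn from the approved list -- brute force, exponential time."""
    need = len(universe) // 3
    count = 0
    for combo in combinations(triples, need):
        covered = [x for t in combo for x in t]
        # A legal distribution covers every episode exactly once.
        if set(covered) == set(universe) and len(set(covered)) == len(covered):
            count += 1
    return count

# A toy instance: six "episodes" and five approved triples.
episodes = {1, 2, 3, 4, 5, 6}
approved = [(1, 2, 3), (4, 5, 6), (1, 4, 5), (2, 3, 6), (1, 2, 4)]
print(count_exact_covers(episodes, approved))  # 2
```

Here the two legal distributions are {(1,2,3), (4,5,6)} and {(1,4,5), (2,3,6)}; every other pair of approved triples repeats an episode.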
1 mathematical solution for eliminating debt

Something like this has come up month after month:
[18/Jan/2006:09:10:46 -0500] eliminate debt using linear math
[08/Feb/2006:13:48:13 -0500] linear math system eliminate debt
[08/Feb/2006:13:53:28 -0500] linear math system eliminate debt
[25/Feb/2006:15:12:01 -0500] how to get out of debt using linear math
[23/Apr/2006:10:32:18 -0400] Mathematical solution for eliminating debt
[23/Apr/2006:10:33:43 -0400] Mathematical solution for eliminating debt

At first I assumed that it was the same person. But analysis of the logs suggests it's not. I tried the query myself and found that many community colleges and continuing education programs offer courses on using linear math to eliminate debt. I don't know what it's about. I don't even know what "linear" means in this context. I am unlikely to shell out $90 to learn the big secret, so a secret it will have to remain.
1 which 2 fraction did archimedes add together to write 3/4

I don't know what this was about, but it reminded me of Egyptian fractions. Apparently, the Egyptians had no general notation for fractions. They did, however, have notations for numbers of the form 1/n, and they could write sums of these. They also had a special notation for 2/3. So they could in fact write all fractions, although it wasn't always easy.

There are several algorithms for writing any fraction as a sum of fractions of the form 1/n. The greedy algorithm suffices. Say you want to write 2/5. This is bigger than 1/3 and smaller than 1/2, so write 2/5 = 1/3 + x. x is now 1/15 and we are done. Had x had a numerator larger than 1, we would have repeated the process.

The greedy algorithm produces an Egyptian fraction representation of any number, but does not always do so conveniently. For example, consider 19/20. The greedy algorithm finds 19/20 = 1/2 + 1/3 + 1/9 + 1/180. But 19/20 = 1/2 + 1/4 + 1/5, which is much more convenient. So the problem of finding good representations for various numbers was of some interest, and the Ahmes papyrus, one of the very oldest mathematical manuscripts known, devotes a large amount of space to the representations of various numbers of the form 2/n.
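The greedy algorithm is only a few lines of code. Here is a sketch using Python's exact Fraction type (the function name is my own); at each step it takes the largest unit fraction that fits and recurses on the remainder:

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(num, den):
    """Return denominators n1, n2, ... with num/den = 1/n1 + 1/n2 + ..."""
    x = Fraction(num, den)
    terms = []
    while x > 0:
        n = ceil(1 / x)        # smallest n with 1/n <= x
        terms.append(n)
        x -= Fraction(1, n)    # exact arithmetic, so this terminates cleanly
    return terms

print(greedy_egyptian(2, 5))    # [3, 15]
print(greedy_egyptian(19, 20))  # [2, 3, 9, 180]
```

Note that for 19/20 it dutifully produces the inconvenient 1/2 + 1/3 + 1/9 + 1/180 rather than the tidy 1/2 + 1/4 + 1/5.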
1 why does pi appear in both circle area and circumference?

This is not a coincidence. I have a mostly-written article about this; I will post it here when I finish it.
1 the only two things in our universe starting with m & e

I hope the high school science teacher who asked this idiotic question burns in hell.
"Never heard of it," said the machine.

I have a minor reading disorder: I hardly ever think books are funny, except when they are read out loud. When I tell people this, they always start incredulously enumerating funny books: "You didn't think that the Hitchhiker's Guide books were funny?" No, I didn't. I thought Fear and Loathing in Las Vegas was the dumbest thing I'd ever read, until I heard James Woodyatt reading it aloud in the back of the car on the way to a party in Las Vegas (New Mexico, not Nevada), and then I laughed my head off. I did not think Evelyn Waugh's humorous novel Scoop was funny. I did not think Daniel Pinkwater's books were funny. I do not think Christopher Moore's books are funny. Louise Erdrich's books are sometimes funny, but I only know this because Lorrie and I read them aloud to each other. Had I read them myself, I would have thought them unrelievedly depressing.

I think The Cyberiad is uproarious. It is easily the funniest book I have ever read. I laugh out loud every time I read it. The most amazing thing about it is that it was originally written in Polish. The translator, Michael Kandel, is a genius. I would not have dared to translate a Polish story about a machine that can make anything that begins with n. It is an obvious death trap. Eventually, the story is going to call for the machine to manufacture something like napad rabunkowy ("stickup") that begins with n in Polish but not in English. Perhaps the translator will get lucky and find a synonym that begins with n in English, but sooner or later his luck will run out. I once met M. Kandel, and I asked him if it had been n in the original, or if it had been some other letter that was easy in Polish but impossible in English, like z, or even ł. No, he said it had indeed been n. He said that the only real difficulty he had was that Trurl asks the machine to make nauka ("science") and he had to settle for "nature".
(Incidentally, I put the word napad rabunkowy into the dictionary lookup at poltran.com and it told me angrily "PLEASE TYPE POLISH WORDS USING CAPITAL LETTERS ONLY!" Yeah, that's a good advertisement for your software.)
By the way, narghiles are like hookahs, and nankeens are trousers made of a certain kind of cotton cloth, also called nankeen.

The most intriguing query I got was:

1 consciousness torus photon core

Wow, isn't that something? I have no idea what this person was looking for. So I did the search myself, to see what would come up. I found a lot of amazingly nutty theories about physics. As I threatened a while back, I may do an article about crackpotism; if so, I will certainly make this query again, to gather material. My favorite result from this query is unfortunately offline, available only from the Google cache:
The photon is actually composed of two tetrahedrons that are joined together, as we see in figure 4.6, and they then pass together through a cube that is only big enough to measure one of them at a time.

Wow! The photon is actually composed of two tetrahedrons! Who knew? But before you get too excited about this, I should point out that the sentence preceding this one asserted that the volume of a regular tetrahedron is one-third the volume of the smallest sphere containing it. (Exercise: without knowing very much about the volumes of tetrahedra and their circumscribed spheres, how can you quickly guess that this is false?)

Also, I got 26 queries from people looking for the Indiana Pacers' cheerleaders, and 31 queries from people looking for examples of NP-complete problems.

This is article #100 on my blog.
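If you would rather compute the true ratio than do the exercise, the standard formulas settle it in a few lines (a quick numerical check; the volume and circumradius formulas for a regular tetrahedron are standard):

```python
from math import pi, sqrt

# Regular tetrahedron with edge a: volume = a^3 / (6*sqrt(2)),
# circumradius (radius of smallest containing sphere) R = a*sqrt(6)/4.
a = 1.0
tet_volume = a**3 / (6 * sqrt(2))
R = a * sqrt(6) / 4
sphere_volume = (4 / 3) * pi * R**3
print(tet_volume / sphere_volume)  # about 0.1225 -- nowhere near 1/3
```

The tetrahedron fills only about an eighth of its circumscribed sphere, not a third.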
[Other articles in category /google-roundup] permanent link