The Universe of Discourse


Thu, 22 Aug 2024

XKCD game theory question

Six-panel cartoon from XKCD. Each panel gives a one-question mathematics ‘final exam’ from a different level of education, from ‘kindergarten’ to ‘postgraduate math’. This article concerns the fifth, which says: “Game Theory Final Exam: Q. Write down 10 more than the average of the class’s answers. A. (blank).”

(Source: XKCD “Exam numbers”.)

This post is about the bottom center panel, “Game Theory final exam”.

I don't know much about game theory and I haven't seen any other discussion of this question. But I have a strategy I think is plausible and I'm somewhat pleased with.

(I assume that answers to the exam question must be real numbers — not !!\infty!! — and that “average” here is short for 'arithmetic mean'.)

First, I believe the other players and I must find a way to agree on what the average will be, or else we are all doomed. We can't communicate, so we should choose a Schelling point and hope that everyone else chooses the same one. Fortunately, there is only one distinguished choice: zero. So I will try to make the average zero and I will hope that others are trying to do the same.

If we succeed in doing this, any winning entry will therefore be !!10!!. Not all !!n!! players can win, because the average must be !!0!!. But !!n-1!! can win, if the one other player writes !!-10(n-1)!!. So my job is to decide whether I will be the loser. I should select a random integer between !!0!! and !!n-1!!. If it is zero, I have drawn a short straw, and will write !!-10(n-1)!!. Otherwise I write !!10!!.

(The straw-drawing analogy is perhaps misleading. Normally, exactly one straw is short. Here, any or all of the straws might be short.)

If everyone follows this strategy, then I will win if exactly one person draws a short straw and if that one person isn't me. The former has a probability that rapidly approaches !!\frac1e\approx 36.8\%!! as !!n!! increases, and the latter is !!\frac{n-1}n!!. In an !!n!!-person class, the probability of my winning is $$\left(\frac{n-1}n\right)^n$$ which is already better than !!\frac13!! when !!n= 6!!, and it increases slowly toward !!36.8\%!! after that.
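Here is a quick Monte Carlo check of that number, a minimal sketch in Python (the class size and trial count are arbitrary choices of mine):

    import random

    def player_zero_wins(n):
        # Everyone picks a random integer from 0 to n-1; drawing 0 is the short straw.
        answers = [-10 * (n - 1) if random.randrange(n) == 0 else 10
                   for _ in range(n)]
        # You win by writing exactly 10 more than the average.
        return answers[0] == sum(answers) / n + 10

    n, trials = 6, 100_000
    rate = sum(player_zero_wins(n) for _ in range(trials)) / trials
    print(rate, ((n - 1) / n) ** n)   # both around 0.335 for n = 6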

Some miscellaneous thoughts:

  1. The whole thing depends on my idea that everyone will agree on !!0!! as a Schelling point. Is that even how Schelling points work? Maybe I don't understand Schelling points.

  2. I like that the probability !!\frac1e!! appears. It's surprising how often this comes up, often when multiple agents try to coordinate without communicating. For example, in ALOHAnet a number of ground stations independently try to send packets to a single satellite transceiver, but if more than one tries to send a packet at a particular time, the packets are garbled and must be retransmitted. At most !!\frac1e!! of the available bandwidth can be used, the rest being lost to packet collisions.

  3. The first strategy I thought of was plausible but worse: flip a coin, and write down !!10!! if it is heads and !!-10!! if it is tails. With this strategy I win if exactly !!\frac n2!! of the class flips heads and if I do too. The probability of this happening is only $$\frac{n\choose n/2}{2^n}\cdot \frac12 \approx \frac1{\sqrt{2\pi n}}.$$ Unlike the other strategy, this decreases to zero as !!n!! increases, and in no case is it better than the first strategy. It also fails badly if the class contains an odd number of people.

    Thanks to Brian Lee for figuring out the asymptotic value of !!4^{-n}\binom{2n}{n}!! so I didn't have to.

  4. Just because this was the best strategy I could think of in no way means that it is the best there is. There might have been something much smarter that I did not think of, and if there is then my strategy will sabotage everyone else.

    Game theorists do think of all sorts of weird strategies that you wouldn't expect could exist. I wrote an article about one a few years back.

  5. Going in the other direction, even if !!n-1!! of the smartest people all agree on the smartest possible strategy, if the !!n!!th person is Leeroy Jenkins, he is going to ruin it for everyone.

  6. If I were grading this exam, I might give full marks to anyone who wrote down either !!10!! or !!-10(n-1)!!, even if the average came out to something else.

  7. For a similar and also interesting but less slippery question, see Wikipedia's article on Guess ⅔ of the average. Much of the discussion there is directly relevant. For example, “For Nash equilibrium to be played, players would need to assume both that everyone else is rational and that there is common knowledge of rationality. However, this is a strong assumption.” LEEROY JENKINS!!

  8. People sometimes suggest that the real Schelling point is for everyone to write !!\infty!!. (Or perhaps !!-\infty!!.)

    Feh.

  9. If the class knows ahead of time what the question will be, the strategy becomes a great deal more complicated! Say there are six students. At most five of them can win. So they get together and draw straws to see who will make a sacrifice for the common good. Vidkun gets the (unique) short straw, and agrees to write !!-50!!. The others accordingly write !!10!!, but they discover that instead of !!-50!!, Vidkun has written !!22!! and is the only person to have guessed correctly.

    I would be interested to learn if there is a playable Nash equilibrium under these circumstances. It might be that the optimal strategy is for everyone to play as if they didn't know what the question was beforehand!

    Suppose the players agree to follow the strategy I outlined, each rolling a die and writing !!-50!! with probability !!\frac16!!, and !!10!! otherwise. And suppose that although the others do this, Vidkun skips the die roll and unconditionally writes !!10!!. As before, the !!n-1!! players who wrote !!10!! (including Vidkun) win if exactly one of the die-rollers draws the short straw. Vidkun's chance of winning increases. Intuitively, the other players' chances of winning ought to decrease. But by how much? I think I keep messing up the calculation, because I keep getting zero. If that were actually correct, it would be a fascinating paradox! (There is a simulation sketch below, for anyone who wants to check empirically.)
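    Here is the promised sketch, a Monte Carlo in Python that estimates an honest player's chance of winning with and without Vidkun's defection (the trial count is an arbitrary choice of mine):

        import random

        def answers(n, defector=None):
            # Honest players write -10(n-1) with probability 1/n, else 10.
            # The defector, if any, writes 10 unconditionally.
            return [10 if i == defector else
                    (-10 * (n - 1) if random.randrange(n) == 0 else 10)
                    for i in range(n)]

        def honest_win_rate(n, trials, defector=None):
            # Player 0 is always honest; they win by writing average + 10.
            return sum(a[0] == sum(a) / n + 10
                       for a in (answers(n, defector) for _ in range(trials))) / trials

        n, trials = 6, 200_000
        print(honest_win_rate(n, trials))               # everyone honest
        print(honest_win_rate(n, trials, defector=5))   # Vidkun always writes 10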


[Other articles in category /math] permanent link

Mon, 05 Aug 2024

Everyday examples of morphisms with one-sided inverses

Like almost everyone except Alexander Grothendieck, I understand things better with examples. For instance, how do you explain that

$$(f\circ g)^{-1} = g^{-1} \circ f^{-1}?$$

Oh, that's easy. Let !!f!! be putting on your shoes and !!f^{-1}!! be taking off your shoes. And let !!g!! be putting on your socks and !!g^{-1}!! be taking off your socks.

Now !!f\circ g!! is putting on your socks and then your shoes. And !!g^{-1} \circ f^{-1}!! is taking off your shoes and then your socks. You can't use !!f^{-1} \circ g^{-1}!!; that says to take your socks off before your shoes.

(I see a topologist jumping up and down in the back row, desperate to point out that the socks were never inside the shoes to begin with. Sit down please!)

Sometimes operations commute, but not in general. If you're teaching group theory to high school students and they find nonabelian operations strange, the shoes-and-socks example is an unrebuttable demonstration that not everything is abelian.

(Subtraction is not a good example here, because subtracting !!a!! and then !!b!! is the same as subtracting !!b!! and then !!a!!. When we say that subtraction isn't commutative, we're talking about something else.)

Anyway this weekend I was thinking about very very elementary category theory (the only kind I know) and about left and right inverses. An arrow !!f : A\to B!! has a left inverse !!g!! if

$$g\circ f = 1_A.$$

Examples of this are easy. If !!f!! is putting on your shoes, then !!g!! is taking them off again. !!A!! is the state of shoelessness and !!B!! is the state of being shod. This !!f!! has a left inverse and no right inverse. You can't take the shoes off before you put them on.

But I wanted an example of an !!f!! with right inverse and no left inverse:

$$f\circ h = 1_B$$

and I was pretty pleased when I came up with one involving pouring the cream pitcher into your coffee, which has no left inverse that gets you back to black coffee. But you can ⸢unpour⸣ the cream if you do it before mixing it with the coffee: if you first put the cream back into the carton in the refrigerator, then the pouring does get you to black coffee.

But now I feel silly. There is a trivial theorem that if !!g!! is a left inverse of !!f!!, then !!f!! is a right inverse of !!g!!. So the shoe example will do for both. If !!f!! is putting on your shoes, then !!g!! is taking them off again. And just as !!f!! has a left inverse and no right inverse, because you can't take your shoes off before putting them on, !!g!! has a right inverse (!!f!!) and no left inverse, because you can't take your shoes off before putting them on.
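For anyone who wants to poke at this with finite sets instead of footwear, here is a minimal sketch in Python; the particular sets and maps are my own toy choices:

    from itertools import product

    A = [0, 1]
    B = [0, 1, 2]

    f = {0: 0, 1: 1}          # f : A -> B is injective but not surjective
    g = {0: 0, 1: 1, 2: 0}    # g : B -> A collapses 2 down to 0

    # g is a left inverse of f: g(f(a)) = a for every a in A ...
    assert all(g[f[a]] == a for a in A)

    # ... which is the same fact as "f is a right inverse of g".
    # But f has no right inverse: f(h(b)) = b fails for every h : B -> A,
    # because nothing in A is sent to 2 by f.
    assert not any(all(f[h[b]] == b for b in B)
                   for h in (dict(zip(B, vals))
                             for vals in product(A, repeat=len(B))))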

This reminds me a little of the time I tried to construct an example to show that “is a blood relation of" is not a transitive relation. I had this very strange and elaborate example involving two sets of sisters-in-law. But the right example is that almost everyone is the blood relative of both of their parents, who nevertheless are not (usually) blood relations.


[Other articles in category /math] permanent link

Fri, 05 Jul 2024

A triviality about numbers that look like abbc

Looking at license plates the other day I noticed that if you have a four-digit number !!N!! with digits !!abbc!!, and !!a+c=b!!, then !!N!! will always be a multiple of !!37!!. For example, !!4773 = 37\cdot 129!! and !!1776 = 37\cdot 48!!.

Mathematically this is uninteresting. The proof is completely trivial. (Such a number is simply !!1110a +111c!!, and !!111=3\cdot 37!!.)
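If you'd like to check it exhaustively rather than trust the algebra, here is a tiny brute force in Python:

    # Every four-digit number abbc with a + c = b should be a multiple of 37.
    for a in range(1, 10):
        for b in range(10):
            for c in range(10):
                if a + c == b:
                    assert (1000 * a + 110 * b + c) % 37 == 0
    print("confirmed")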

But I thought that if someone had pointed this out to me when I was eight or nine, I would have been very pleased. Perhaps if you have a mathematical eight- or nine-year-old in your life, they will be pleased if you share this with them.


[Other articles in category /math] permanent link

Tue, 23 Apr 2024

Well, I guess I believe everything now!

The principle of explosion is that in an inconsistent system everything is provable: if you prove both !!P!! and not-!!P!! for any !!P!!, you can then conclude !!Q!! for any !!Q!!:

$$(P \land \lnot P) \to Q.$$

This is, to put it briefly, not intuitive. But it is awfully hard to get rid of because it appears to follow immediately from two principles that are intuitive:

  1. If we can prove that !!A!! is true, then we can prove that at least one of !!A!! or !!B!! is true. (In symbols, !!A\to(A\lor B)!!.)

  2. If we can prove that at least one of !!A!! or !!B!! is true, and we can prove that !!A!! is false, then we may conclude that !!B!! is true. (Symbolically, !!(A\lor B) \to (\lnot A\to B)!!.)

Then suppose that we have proved that !!P!! is both true and false. Since we have proved !!P!! true, we have proved that at least one of !!P!! or !!Q!! is true. But because we have also proved that !!P!! is false, we may conclude that !!Q!! is true. Q.E.D.

This proof is as simple as can be. If you want to get rid of this, you have a hard road ahead of you. You have to follow Graham Priest into the wilderness of paraconsistent logic.
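In fact the derivation is so mechanical that a proof assistant will accept it as a one-liner. A minimal sketch in Lean 4, where (as far as I know) the standard library's `absurd` packages exactly this step:

    -- From a proof of P and a proof of ¬P, conclude any Q whatsoever.
    theorem explosion (P Q : Prop) (hP : P) (hnP : ¬P) : Q :=
      absurd hP hnP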

Raymond Smullyan observes that although logic is supposed to model ordinary reasoning, it really falls down here. Nobody, on discovering the fact that they hold contradictory beliefs, or even a false one, concludes that therefore they must believe everything. In fact, says Smullyan, almost everyone does hold contradictory beliefs. His argument goes like this:

  1. Consider all the things I believe individually, !!B_1, B_2, \ldots!!. I believe each of these, considered separately, is true.

  2. However, I also believe that I'm not infallible, and that at least one of !!B_1, B_2, \ldots!! is false, although I don't know which ones.

  3. Therefore I believe both !!\bigwedge B_i!! (because I believe each of the !!B_i!! separately) and !!\lnot\bigwedge B_i!! (because I believe that not all the !!B_i!! are true).

And therefore, by the principle of explosion, I ought to believe that I believe absolutely everything.

Well anyway, none of that was exactly what I planned to write about. I was pleased because I noticed a very simple, specific example of something I believed that was clearly inconsistent. Today I learned that K2, the second-highest mountain in the world, is in Asia, near the border of Pakistan and westernmost China. I was surprised by this, because I had thought that K2 was in Kenya somewhere.

But I also knew that the highest mountain in Africa was Kilimanjaro. So my simultaneous beliefs were flatly contradictory:

  1. K2 is the second-highest mountain in the world.
  2. Kilimanjaro is not the highest mountain in the world, but it is the highest mountain in Africa.
  3. K2 is in Africa.

Well, I guess until this morning I must have believed everything!


[Other articles in category /math/logic] permanent link

Sat, 13 Apr 2024

3-coloring the vertices of an icosahedron

I don't know that I have a point about this, other than that it makes me sad.

A recent Math SE post (since deleted) asked:

How many different ways are there to color the vertices of the icosahedron with 3 colors such that no two adjacent vertices have the same color?

I would love to know what was going on here. Is this homework? Just someone idly wondering?

Because the interesting thing about this question is (assuming that the person knows what an icosahedron is, etc.) it should be solvable in sixty seconds by anyone who makes the least effort. If you don't already see it, you should try. Try what? Just take an icosahedron, color the vertices a little, see what happens. Here, I'll help you out, here's a view of part of the end of an icosahedron, although I left out most of it. Try to color it with 3 colors so that no two adjacent vertices have the same color, surely that will be no harder than coloring the whole icosahedron.

The explanation below is a little belabored, it's what OP would have discovered in seconds if they had actually tried the exercise.

Let's color the middle vertex, say blue.

The five vertices around it can't be blue, so they must be the other two colors, say red and green, and the two colors must alternate:

Oops. Five is odd, so the colors can't alternate all the way around the ring, and there's no color left for the fifth vertex.

The phrasing of the question, “how many” makes the problem sound harder than it is: the answer is zero because we can't even color half the icosahedron.

If OP had even tried, even a little bit, they could have discovered this. They didn't need to have had the bright idea of looking at a partial icosahedron. They could have grabbed one of the pictures from Wikipedia and started coloring the vertices. They would have gotten stuck the same way. They didn't have to try starting in the middle of my diagram, starting at the edge works too: if the top vertex is blue, the three below it must be green-red-green, and then the bottom two are forced to be blue, which isn't allowed. If you just try it, you win immediately. The only way to lose is not to play.
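If you would rather fool around with a computer than with crayons, the brute-force count is also easy. A minimal sketch in Python, building the icosahedron from the standard coordinates (cyclic permutations of !!(0, \pm 1, \pm\varphi)!!):

    from itertools import product

    phi = (1 + 5 ** 0.5) / 2
    verts = [v for s, t in product((1, -1), repeat=2)
             for v in [(0, s, t * phi), (s, t * phi, 0), (t * phi, 0, s)]]

    # Edges join vertices at the minimum distance (squared edge length 4).
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    edges = [(i, j) for i in range(12) for j in range(i + 1, 12)
             if abs(d2(verts[i], verts[j]) - 4) < 1e-9]
    assert len(edges) == 30

    # Count colorings of the 12 vertices with 3 colors, no edge monochromatic.
    count = sum(all(c[i] != c[j] for i, j in edges)
                for c in product(range(3), repeat=12))
    print(count)   # 0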

Before the post was deleted I suggested in a comment “Give it a try, see what happens”. I genuinely hoped this might be helpful. I'll probably never know if it was.

Like I said, I would love to know what was going on here. I think maybe this person could have used a dose of Lower Mathematics.

Just now I wondered for the first time: what would it look like if I were to try to list the principles of Lower Mathematics? “Try it and see” is definitely in the list.

Then I thought: How To Solve It has that sort of list and something like “try it and see” is probably on it. So I took it off the shelf and found: “Draw a figure”, “If you cannot solve the proposed problem”, “Is it possible to satisfy the condition?”. I didn't find anything called “fuck around with it and see what you learn”, but it is probably in there under a different name; I haven't read the book in a long time. To this important principle I would like to add “fuck around with it and maybe you will stumble across the answer by accident”, as happened here.

Mathematics education is too much method, not enough heuristic.


[Other articles in category /math] permanent link

Wed, 06 Mar 2024

Optimal boxes with and without lids

Sometime around 1986 or so I considered the question of the dimensions that a closed cuboidal box must have to enclose a given volume while using as little material as possible. (That is, its surface area should be minimized.) It is an elementary calculus exercise and it is unsurprising that the optimal shape is a cube.

Then I wondered: what if the box is open at the top, so that it has only five faces instead of six? What are the optimal dimensions then?

I did the calculus, and it turned out that the optimal lidless box has a square base like the cube, but it should be exactly half as tall.

For example the optimal box-with-lid enclosing a cubic meter is a 1×1×1 cube with a surface area of !!6!!.

Obviously if you just cut off the lid of the cubical box and throw it away you have a one-cubic-meter lidless box with a surface area of !!5!!. But the optimal box-without-lid enclosing a cubic meter is shorter, with a larger base. It has dimensions $$2^{1/3} \cdot 2^{1/3} \cdot \frac{2^{1/3}}2$$

and a total surface area of only !!3\cdot2^{2/3} \approx 4.76!!. It is what you would get if you took an optimal complete box, a cube, that enclosed two cubic meters, cut it in half, and threw the top half away.
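Here is the calculus exercise re-done in Python with SymPy, a minimal sketch (the variable names are mine):

    import sympy as sp

    x, V = sp.symbols('x V', positive=True)   # x is the side of the square base

    # Lidless box of volume V: height is V/x^2, surface is base plus four sides.
    S = x**2 + 4 * x * (V / x**2)

    best_x = sp.solve(sp.diff(S, x), x)[0]    # 2^(1/3) * V^(1/3)
    height = sp.simplify(V / best_x**2)       # exactly half of best_x
    print(best_x, height, sp.simplify(best_x / height))   # ratio is 2
    print(sp.simplify(S.subs(x, best_x).subs(V, 1)))      # 3*2^(2/3), about 4.76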

I found it striking that the optimal lidless box was the same proportions as the optimal complete box, except half as tall. I asked Joe Keane if he could think of any reason why that should be obviously true, without requiring any calculus or computation. “Yes,” he said. I left it at that, imagining that at some point I would consider it at greater length and find the quick argument myself.

Then I forgot about it for a while.

Last week I remembered again and decided it was time to consider it at greater length and find the quick argument myself. Here's the explanation.

Take the cube and saw it into two equal halves. Each of these is a lidless five-sided box like the one we are trying to construct. The original cube enclosed a certain volume with the minimum possible material. The two half-cubes each enclose half the volume with half the material.

If there were a way to do better than that, you would be able to make a lidless box enclose half the volume with less than half the material. Then you could take two of those and glue them back together to get a complete box that enclosed the original volume with less than the original amount of material. But we already knew that the cube was optimal, so that is impossible.


[Other articles in category /math] permanent link

Sat, 02 Dec 2023

Math SE report 2023-10: Peano's definition of addition is not a tautology, and what was great about Ramanujan?

Content warning: grumpy complaining. This was a frustrating month.

Need an intuitive example for how "P is necessary for Q" means "Q⇒P"?

This kind of thing comes up pretty often. Why are there so many ways that the logical expression !!Q\implies P!! can appear in natural language?

  • If !!Q!!, then !!P!!
  • !!Q!! implies !!P!!
  • !!P!! if !!Q!!
  • !!Q!! is sufficient for !!P!!
  • !!P!! is necessary for !!Q!!

Strange, isn't it? !!Q\land P!! is much simpler: “Both !!Q!! and !!P!! are true” is pretty much it.

Anyway this person wanted an intuitive example of “!!P!! is necessary for !!Q!!”.

I suggested:

Suppose that it is necessary to have a ticket (!!P!!) in order to board a certain train (!!Q!!). That is, if you board the train (!!Q!!), then you have a ticket (!!P!!).

Again this follows the principle that rule enforcement is a good thing when you are looking for intuitive examples. Keeping ticketless people off the train is something that the primate brain is wired up to do well.

My first draft had “board a train” in place of “board a certain train”. One commenter complained:

many people travel on trains without a ticket, worldwide

I was (and am) quite disgusted by this pettifogging.

I said “Suppose that…”. I was not claiming that the condition applies to every train in all of history.

OP had only asked for an example, not some universal principle.

Does ...999.999... = 0?

This person is asking one of those questions that often puts Math StackExchange into the mode of insisting that the idea is completely nonsensical, when it is actually very close to perfectly mundane mathematics. (Previously: [1] [2] [3] ) That didn't happen this time, which I found very gratifying.

Normally, decimal numerals have a finite integer part on the left of the decimal point, and an infinite fractional part on the right of the decimal point, as with (for example) !!\frac{13}{3} = 4.333\ldots!!. It turns out to work surprisingly well to reverse this, allowing an infinite integer part on the left and a finite fractional part on the right, for example !!\frac1{15} = \ldots 333.4!!. For technical reasons we usually do this in base !!p!! where !!p!! is prime; it doesn't work as well in base !!10!!.

But it works well enough to use: if we have the base-10 numeral !!\ldots 9999.0!! and we add !!1!!, using the ordinary elementary-school right-to-left addition algorithm, the carry in the units place goes to the tens place as usual, then the next carry goes to the hundreds place, and so on to infinity, leaving us with !!\ldots 0000.0!!. So !!\ldots 9999.0!! can be considered a representation of the number !!-1!!, and that means we don't need negation signs.

In fact this system is fundamental to the way numbers are represented in computer arithmetic. Inside the computer the integer !!-1!! is literally represented as the base-2 numeral !!11111111\;11111111\;11111111\;11111111!!, and when we add !!1!! to it the carry bit wanders off toward infinity on the left. (In the computer the numeral is finite, so we simulate infinity by just discarding the carry bit when it gets too far away.)
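You can watch both of these happen in a few lines of Python, keeping a finite window of digits and letting carries fall off the left end (the window sizes here are arbitrary choices of mine):

    W = 10 ** 6                   # keep six decimal digits; carries past this vanish

    nines = W - 1                 # ...999999, truncated to the window
    print((nines + 1) % W)        # 0: so ...9999 behaves exactly like -1

    print((3 * 666667) % W)       # 1: so ...6667 behaves like 1/3

    # The binary version is two's complement:
    print(hex(-1 & 0xFFFFFFFF))   # 0xffffffff: a register full of 1 bits is -1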

Once you've seen this a very reasonable next question is whether you can have numbers that have an infinite sequence of digits on both sides. I think something goes wrong here — for one thing it is no longer clear how to actually do arithmetic. For the infinite-to-the-left numerals arithmetic is straightforward (elementary-school algorithms go right-to-left anyway) and for the standard infinite-to-the-right numerals we can sort of fudge it. (Try multiplying the infinite decimal for !!\sqrt 2!! by itself and see what trouble you get into. Or simpler: What's !!4.666\ldots \times 3!!?)

OP's actual question was: If !!\ldots 9999.0 !! can be considered to represent !!-1!!, and if !!0.9999\ldots!! can be considered to represent !!1!!, can we add them and conclude that !!\ldots 9999.9999\ldots = 0!!?

This very deserving question got a good answer from someone who was not me. This was a relief, because my shameful answer was pure shitpostery. It should have been heavily downvoted, but wasn't. The gods of Math SE karma are capricious.

Why define addition with successor?

Ugh, so annoying. OP had read (Bertrand Russell's explanation of) the Peano definition of addition, and did not understand it. Several people tried hard to explain, but communication was not happening. Or, perhaps, OP was more interested in having an argument than in arriving at an understanding. I lost a bit of my temper when they claimed:

Russell's so-called definition of addition (as quoted in my question) is nothing but a tautology: ….

I didn't say:

If you think Bertrand Russell is stupid, it's because you're stupid.

although I wanted to at first. The reply I did make is still not as measured as I would like, and although it leaves this point implicit, the point is still there. I did at least shut up after that. I had answered OP's question as well as I was able, and carrying on a complex discussion in the comments is almost never of value.
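For whatever it's worth, the definition in question is short enough to execute, which makes it clear that it is a recursion, not a tautology. A minimal sketch in Python, building naturals from a zero object and a successor wrapper instead of machine integers (the names are mine):

    class Zero:
        pass

    class Succ:
        def __init__(self, pred):
            self.pred = pred

    # Peano addition:  a + 0 = a,   and   a + S(b) = S(a + b).
    def add(a, b):
        if isinstance(b, Zero):
            return a
        return Succ(add(a, b.pred))

    def to_int(n):   # only for display
        return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

    two = Succ(Succ(Zero()))
    three = Succ(two)
    print(to_int(add(two, three)))   # 5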

Why is Ramanujan considered a great mathematician?

This was easily my best answer of the month, but the question was deleted, so you will only be able to see it if you have enough Math SE reputation.

OP asked a perfectly reasonable question: Ramanujan gets a lot of media hype for stuff like this:

$${\sqrt {\phi +2}}-\phi ={\cfrac {e^{{-2\pi /5}}}{1+{\cfrac {e^{{-2\pi }}}{1+{\cfrac {e^{{-4\pi }}}{1+{\cfrac {e^{{-6\pi }}}{1+\,\cdots }}}}}}}}$$

which is not of any obvious use, so “why is it given such high regard?”

OP appeared to be impugning a famous mathematician, and Math SE always responds badly to that; their heroes must not be questioned. And even worse, OP mentioned the notorious non-fact that $$1+2+3+\ldots =-\frac1{12}$$ which drives Math SE people into a frothing rage.

One commenter argued:

Mathematics is not inherently about its "usefulness". Even if you can't find practical use for those formulas, you still have to admit that they are by no means trivial

I think this is fatuous. OP is right here, and the commenter is wrong. Mathematicians are not considered great because they produce wacky and impractical equations. They are considered great because they solve problems, invent techniques that answer previously impossible questions, and because they contribute insights into deep and complex issues.

Some blockhead even said:

Most of the mathematical results are useless. Mathematics is more like an art.

Bullshit. Mathematics is about trying to understand stuff, not about taping a banana to the wall. I replied:

I don't think “mathematics is not inherently about its usefulness" is an apt answer here. Sometimes mathematical results have application to physics or engineering. But for many mathematical results the application is to other parts of mathematics, and mathematicians do judge the ‘usefulness’ of results on this basis. Consider for example Mochizuki's field of “inter-universal Teichmüller theory”. This was considered interesting only as long as it appeared that it might provide a way to prove the !!abc!! conjecture. When that hope collapsed, everyone lost interest in it.

My answer to OP elaborated on this point:

The point of these formulas wasn't that they were useful in themselves. It's that in order to find them he had to have a deep understanding of matters that were previously unknown. His contribution was the deep understanding.

I then discussed Hardy's book on the work he did with Ramanujan and Hardy's own estimation of Ramanujan's work:

The first chapter is somewhat negative, as it summarizes the parts of Ramanujan's work that he felt didn't have lasting value — because Hardy's next eleven chapters are about the work that he felt did have value.

So if OP wanted a substantive and detailed answer to their question, that would be the first place to look.

I also did an arXiv search for “Ramanujan” and found many recent references, including one with “applications to the Ramanujan !!τ!!-function”, and concluded:

The !!\tau!!-function is the subject of the entire chapter 10 of Hardy's book and appears to still be of interest as recently as last Monday.

The question was closed as “opinion-based” (a criticism that I think my answer completely demolishes) and then it was deleted. Now if someone else tries to find out why Ramanujan is held in high regard, they will not be able to find my factual, substantive answer.

Screw you, Math SE. This month we both sucked.


[Other articles in category /math/se] permanent link

Mon, 27 Nov 2023

Uncountable sets for seven-year-olds

I was recently talking to a friend whose seven-year-old had been reading about the Hilbert Hotel paradoxes. One example: The hotel is completely full when a bus arrives with 53 passengers seeking rooms. Fortunately the hotel has a countably infinite number of rooms, and can easily accommodate 53 more guests even when already full.

My friend mentioned that his kid had been unhappy with the associated discussion of uncountable sets, since the explanation he got involved something about people whose names are infinite strings, and it got confusing. I said yes, that is a bad way to construct the example, because names are not infinite strings, and even one infinite string is hard to get your head around. If you're going to get value out of the hotel metaphor, you waste an opportunity if you abandon it for some weird mathematical abstraction. (“Okay, Tyler, now let !!\mathscr B!! be a projection from a vector bundle onto a compact Hausdorff space…”)

My first attempt on the spur of the moment involved the guests belonging to clubs, which meet in an attached convention center with a countably infinite sequence of meeting rooms. The club idea is good but my original presentation was overcomplicated and after thinking about the issue a little more I sent this email with my ideas for how to explain it to a bright seven-year-old.


Here's how I think it should go. Instead of a separate hotel and convention center, let's just say that during the day the guests vacate their rooms so that clubs can meet in the same rooms. Each club is assigned one guest room that they can use for their meeting between the hours of 10 AM and 4 PM. The guest has to get out of the room while that is happening, unless they happen to be a member of the club that is meeting there, in which case they may stay.

If you're a guest in the hotel, you might be a member of the club that meets in your room, or you might not be a member of the club that meets in your room, in which case you have to leave and go to a meeting of one of your clubs in some other room.

We can paint the guest room doors blue and green: blue, if the guest there is a member of the club that meets in that room during the day, and green if they aren't. Every door is now painted blue or green, but not both.

Now I claim that when we were assigning clubs to rooms, there was a club we missed that has nowhere to meet. It's the Green Doors Club of all the guests who are staying in rooms with green doors.

If we did assign the Green Doors Club a guest room in which to meet, that door would be painted green or blue.

The Green Doors Club isn't meeting in a room with a blue door. The Green Doors Club only admits members who are staying in rooms with green doors. But a blue door means that the guest staying there belongs to the club that meets in their room, and that club can't be the Green Doors Club, because the guest's door is blue, not green.

But the Green Doors Club isn't meeting in a room with a green door either. We paint a door green when the guest is not a member of the club that meets in their room, and this guest, staying behind a green door, is a member of the Green Doors Club.

So however we assigned the clubs to the rooms, we must have missed out on assigning a room to the Green Doors Club.

One nice thing about this is that it works for finite hotels too. Say you have a hotel with 4 guests and 4 rooms. Well, obviously you can't assign a room to each club because there are 16 possible clubs and only 4 rooms. But the blue-green argument still works: you can assign any four clubs you want to the four rooms, then paint the doors, then figure out who is in the Green Doors Club, and then observe that, in fact, the Green Doors Club is not one of the four clubs that got a room.

Then you can reassign the clubs to rooms, this time making sure that the Green Doors Club gets a room. But now you have to repaint the doors, and when you do you find out that membership in the Green Doors Club has changed: some new members were admitted, or some former members were expelled, so the club that meets there is no longer the Green Doors Club, it is some other club. (Or if the Green Doors Club is meeting somewhere, you will find that you have painted the doors wrong.)


I think this would probably work. The only thing that's weird about it is that some clubs have an infinite number of members so that it's hard to see how they could all squeeze into the same room. That's okay, not every member attends every meeting of every club they're in, that would be impossible anyway because everyone belongs to multiple clubs.

But one place you could go from there is: what if we only guarantee rooms to clubs with a finite number of members? There are only a countably infinite number of clubs then, so they do all fit into the hotel! Okay, Tyler, but what happens to the Green Doors Club then? I said all the finite clubs got rooms, and we know the Green Doors Club never gets a room, so what can we conclude?
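The four-room version is small enough to check mechanically. A minimal sketch in Python; the particular assignment of clubs to rooms is an arbitrary choice of mine, and the final assertion holds no matter which four clubs you pick:

    from itertools import combinations

    guests = [0, 1, 2, 3]
    clubs = [frozenset(c) for r in range(5) for c in combinations(guests, r)]
    assert len(clubs) == 16

    # Assign one club to each guest's room, any way you like.
    assignment = {g: clubs[g] for g in guests}

    # Green door: the guest is not a member of the club meeting in their room.
    green_doors_club = frozenset(g for g in guests if g not in assignment[g])

    # However the assignment was chosen, this club got no room.
    assert green_doors_club not in assignment.values()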

It's tempting to try to slip in a reference to Groucho Marx, but I think it's unlikely that that will do anything but confuse matters.

[ Previously ]

[ Update: My friend said he tried it and it didn't go over as well as I thought it might. ]


[Other articles in category /math] permanent link

Fri, 24 Nov 2023

Math SE report 2023-09: Sense and reference, Wason tasks, what is a sequence?

Proving there is only one proof?

OP asks:

In mathematics, is it possible to prove that there is only one (shortest) proof of a given theorem (say, in ZFC)?

This was actually from back in July, when there was a fairly substantive answer. But it left out what I thought was a simpler, non-substantive answer: For a given theorem !!T!! it's actually quite simple to prove that there is (or isn't) only one shortest proof of !!T!!: just generate all possible proofs in order by length until you find the shortest proofs of !!T!!, and then stop before you generate anything longer than those. There are difficult and subtle issues in provability theory, but this isn't one of them.

I say “non-substantive” because it doesn't address any of the possibly interesting questions of why a theorem would have only one proof, or multiple proofs, or what those proofs would look like, or anything like that. It just answers the question given: is it possible to prove that there is only one shortest proof.

So depending on what OP was looking for, it might be very unsatisfying. Or it might be hugely enlightening, to discover that this seemingly complicated question actually has a simple answer, just because proofs can be systematically enumerated.

This comes in handy in more interesting contexts. Gödel showed that arithmetic contains a theorem whose shortest proof is at least one million steps long! He did it by constructing an arithmetic formula !!G!! which can be interpreted as saying:

!!G!! cannot be proved in less than one million steps.

If !!G!! is false, it can be proved (in less than one million steps) and our system is inconsistent. So assuming that our axioms are consistent, then !!G!! is true and either:

  1. There is no proof of at all of !!G!!, or
  2. There are proofs of !!G!! but the shortest one is at least a million steps

Which is it? It can't be (1), because there is a proof of !!G!!: simply generate every single proof of one million steps or fewer, and check the last line of each one to make sure that it is not !!G!!. So it must be (2).

What counts as a sequence, and how would we know that it isn't deceiving?

This is a philosophical question: What is a sequence, really? And:

if I write down random numbers with no pattern at all except for the fact that it gets larger, is it a viable sequence?

And several other related questions that are actually rather subtle: Is a sequence defined by its elements, or by some external rule? If the former, how can you know when a sequence is linear, when you can only hope to examine a finite prefix?

I think this is a great question, because a sequence, properly construed, is both a rule and its elements. The definition says that a sequence of elements of !!S!! is simply a function !!f:\Bbb N\to S!!. This definition is a sort of spherical cow: it's a nice, simple model that captures many of the mathematical essentials of the thing being modeled. It works well for many purposes, but you get into trouble if you forget that it's just a model. It captures the denotation, but not the sense. I wouldn't yak so much about this if it wasn't so often forgotten. But the sense is the interesting part. If you forget about it, you lose the ability to ask questions like

Are sequences !!s_1!! and !!s_2!! the same sequence?

If all you have is the denotation, there's only one way to answer this question:

By definition, yes, if and only if !!s_1!! and !!s_2!! are the same function.

and there is nothing further to say about it. The question is pointless and the answer is useless. Sometimes the meaning is hidden a little deeper. Not this time. If we push down into the denotation, hoping for meaning, we find nothing but more emptiness:

Q: What does it mean to say that !!s_1!! and !!s_2!! are the same function?

A: It means that the sets $$S_1 = \{ \langle i, s_1(i) \rangle \mid i\in \Bbb N\}$$ and $$S_2 = \{ \langle i, s_2(i) \rangle \mid i\in \Bbb N\}$$ have exactly the same elements.

We could keep going down this road, but it goes nowhere and having gotten to the end we would have seen nothing worth seeing.

But we do ask and answer this kind of question all the time. For example:

  1. !!S_1(n)!! is the infinite sequence of odd numbers starting at !!1!!
  2. !!S_2(n)!! is the infinite sequence of numbers that are the difference between a square and its previous square, starting at !!1^2-0^2!!

Are sequences !!S_1!! and !!S_2!! the same sequence? Yes, yes, of course they are, don't focus on the answer. Focus on the question! What is this question actually asking?

The real essence of the question is not about the denotation, about just the elements. Rather: we're given descriptions of two possible computations, and the question is asking if these two computations will arrive at the same results in each case. That's the real question.

Well, I started this blog article back in October and it's still not ready because I got stuck writing about this question. I think the answer I gave on SE is pretty good, OP asked what is essentially a philosophical question and the backbone of my answer is on the level of philosophy rather than mathematics.

[ Addendum: On review, I am pleasantly surprised that this section of the blog post turned out both coherent and relevant. I really expected it to be neither. A Thanksgiving miracle! ]

Can inequalities be added the way that equations can be added?

OP says:

Suppose you have !!x + y > 6!! and !!x - y > 4!!. Adding the inequalities, the !!y!! terms cancel and you end up with … !!x > 5!!. It is not intuitively obvious to me that this holds true … I can see that you can't subtract inequalities, but is it always okay to add them?

I have a theory that if someone is having trouble with the intuitive meaning of some mathematical property, it's a good idea to turn it into a question about fair allocation of resources, or who has more of some commodity, because human brains are good at monkey tasks like seeing who got cheated when the bananas were shared out.

About ten years ago someone asked for an intuitive explanation of why you could add !!\frac a2!! to both sides of !!\frac a2 < \frac b2!! to get !!\frac a2+\frac a2 < \frac a2 + \frac b2!!. I said:

Say I have half a bag of cookies, that's !!\frac a2!! cookies, and you have half a carton of cookies, that's !!\frac b2!! cookies, and the carton is bigger than the bag, so you have more than me, so that !!\frac a2 < \frac b2!!.

Now a friendly djinn comes along and gives you another half a bag of cookies, !!\frac a2!!. And to be fair he gives me half a bag too, also !!\frac a2!!.

So you had more cookies before, and the djinn gave each of us an extra half a bag. Then who has more now?

I tried something similar this time around:

Say you have two bags of cookies, !!a!! and !!b!!. A friendly baker comes by and offers to trade with you: you will give the baker your bag !!a!! and in return you will get a larger bag !!c!! which contains more cookies. That is, !! a \lt c !!. You like cookies, so you agree.

Then the baker also trades your bag !!b!! for a bigger bag !!d!!.

Is it possible that you might not have more cookies than before you made the trades? … But that's what it would mean if !! a\lt c !! and !! b\lt d !! but not !! a+b \lt c+d !! too.

Someday I'll write up a whole blog article about this idea, that puzzles in arithmetic sometimes become intuitively obvious when you turn them into questions about money or commodities, and that puzzles in logic sometimes become intuitively obvious when you turn them into questions about contract and rule compliance.

I don't remember why I decided to replace the djinn with a baker this time around. The cookies stayed the same though. I like cookies. Here's another cookie example, this time to explain why !!1\div 0.5 = 2!!.

What is the difference between "for all" and "there exists" in set builder notation?

This is the same sort of thing again. OP was asking about

$$B = \{n \in \mathbb{N} : \forall x \in \mathbb{N} \text{ and } n=2^x\}$$

but attempting to understand this is trying to swallow two pills at once. One pill is the logic part (what role is the !!\forall!! playing) and the other pill is the arithmetic part having to do with powers of !!2!!. If you're trying to understand the logic part and you don't have an instantaneous understanding of powers of !!2!!, it can be helpful to simplify matters by replacing the arithmetic with something you understand intuitively. In place of the relation !!a = 2^b!! I like to use the relation “!!a!! is the mother of !!b!!”, which everyone already knows.

Are infinities included in the closure of the real set !!\overline{\mathbb{R}}!!

This is a good question by the Chip Buchholtz criterion: The answer is much longer than the question was. OP wants to know if the closure of !!\Bbb R!! is just !!\Bbb R!! or if it's some larger set like !![-\infty, \infty]!!. They are running up against the idea that topological closure is not an absolute notion; it only makes sense in the context of an enclosing space.

I tried to draw an analogy between the closure and the complement of a set: Does the complement of the real numbers include the number !!i!!? Well, it depends on the context.

OP preferred someone else's answer, and I did too, saying:

I thought your answer was better because it hit all the important issues more succinctly!

I try to make things very explicit, but the downside of that is that it makes my answers longer, and shorter is generally better than longer. Sometimes it works, and sometimes it doesn't.

Vacuous falsehood - does it exist, and are there examples?

I really liked this question because I learned something from it. It brought me up short: “Huh,” I said. “I never thought about that.” Three people downvoted the question, I have no idea why.

I didn't know what a vacuous falsity would be either but I decided that since the negation of a vacuous truth would be false it was probably the first thing to look at. I pulled out my stock example of vacuous truth, which is:

All my rubies are red.

This is true, because all rubies are red, but vacuously so because I don't own any rubies.

Since this is a vacuous truth, negating it ought to give us a vacuous falsity, if there is such a thing:

I have a ruby that isn't red.

This is indeed false. And not in the way one would expect! A more typical false claim of this type would be:

I have a belt that isn't leather.

This is also false, in rather a different way. It's false, but not vacuously so, because to disprove it you have to get my belts out of the closet and examine them.

Now though I'm not sure I gave the right explanation in my answer. I said:

In the vacuously false case we don't even need to read the second half of the sentence:

there is a ruby in my vault that …

… The irrelevance of the “…is not red” part is mirrored exactly in the irrelevance of the “… are red” part in the vacuously true statement:

all the rubies in my vault are …

But is this the right analogy? I could have gone the other way:

In the vacuously false case we don't even need to read the first half of the sentence:

there is a ruby … that is not red

… The irrelevance of the “… in my vault …” part is mirrored exactly in the irrelevance of the “… are red” part in the vacuously true statement:

all the rubies in my vault are …

Ah well, this article has been drying out on the shelf for a month now, I'm making an editorial decision to publish it without thinking about it any more.


[Other articles in category /math/se] permanent link

Fri, 20 Oct 2023

The discrete logarithm, shorter and simpler

I recently discussed the “discrete logarithm” method for multiplying integers, and I feel like I took too long and made it seem more complicated and mysterious than it should have been. I think I'm going to try again.

Suppose for some reason you found yourself needing to multiply a lot of powers of !!2!!. What's !!4096·512!!? You could use the conventional algorithm:

$$ \begin{array}{cccccccc}
 & & & & 4 & 0 & 9 & 6 \\
× & & & & & 5 & 1 & 2 \\ \hline
 & & & & 8 & 1 & 9 & 2 \\
 & & & 4 & 0 & 9 & 6 & \\
 & 2 & 0 & 4 & 8 & 0 & & \\ \hline
 & 2 & 0 & 9 & 7 & 1 & 5 & 2
\end{array} $$

but that's a lot of trouble, and a simpler method is available. You know that $$2^i\cdot 2^j = 2^{i+j}$$

so if you had an easy way to convert $$2^i\leftrightarrow i$$ you could just convert the factors to exponents, add the exponents, and convert back. And all that's needed is a simple table:

$$\begin{array}{rr} i & 2^i \\ \hline 0 & 1\\ 1 & 2\\ 2 & 4\\ 3 & 8\\ 4 & 16\\ 5 & 32\\ 6 & 64\\ 7 & 128\\ 8 & 256\\ 9 & 512\\ 10 & 1\,024\\ 11 & 2\,048\\ 12 & 4\,096\\ 13 & 8\,192\\ 14 & 16\,384\\ 15 & 32\,768\\ 16 & 65\,536\\ 17 & 131\,072\\ 18 & 262\,144\\ 19 & 524\,288\\ 20 & 1\,048\,576\\ 21 & 2\,097\,152\\ \vdots & \vdots \\ \end{array}$$

We check the table, and find that $$4096\cdot512 = 2^{12}\cdot 2^9 = 2^{12+9} = 2^{21} = 2097152.$$ Easy-peasy.

That is all very well but how often do you find yourself having to multiply a lot of powers of !!2!!? This was a lovely algorithm but with very limited application.

What Napier (the inventor of logarithms) realized was that while not every number is an integer power of !!2!!, every number is an integer power of !!1.00001!!, or nearly so. For example, !!23!! is very close to !!1.00001^{313\,551}!!. Napier made up a table, just like the one above, except with powers of !!1.00001!! instead of powers of !!2!!. Then to multiply !!x\cdot y!! you would just find numbers close to !!x!! and !!y!! in Napier's table and use the same algorithm. (Napier's original table used powers of !!0.9999!!, but it works the same way for the same reason.)

There's another way to make it work. Consider the integers mod !!101!!, called !!\Bbb Z_{101}!!. In !!\Bbb Z_{101}!!, every number is an integer power of !!2!!.

For example, !!27!! is a power of !!2!!. It's simply !!2^7!!, because if you multiply out !!2^7!! you get !!128!!, and !!128\equiv 27\pmod{101}!!.

Or:

$$\begin{array}{rcll} 14 & \stackrel{\pmod{101}}{\equiv} & 10\cdot 101 & + 14 \\ & = & 1010 & + 14 \\ & = & 1024 \\ & = & 2^{10} \end{array} $$

Or:

$$\begin{array}{rcll} 3 & \stackrel{\pmod{101}}{\equiv} & 5844512973848570809\cdot 101 & + 3 \\ & = & 590295810358705651709 & + 3 \\ & = & 590295810358705651712 \\ & = & 2^{69} \end{array} $$

Anyway that's the secret. In !!\Bbb Z_{101}!! the silly algorithm that quickly multiplies powers of !!2!! becomes more practical, because in !!\Bbb Z_{101}!!, every number is a power of !!2!!.

What works for !!101!! works in other cases, larger and more interesting. It doesn't work to replace !!101!! with !!7!! (try it and see what goes wrong), but we can replace it with !!107!!, !!797!!, or !!297779!!. The key is that if we want to replace !!101!! with !!n!! and !!2!! with !!a!!, we need to be sure that there is a solution to !!a^i=b\pmod n!! for every possible !!b!!. (The jargon term here is that !!a!! must be a “primitive root mod !!n!!”. !!2!! is a primitive root mod !!101!!, but not mod !!7!!.)
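Here is the whole scheme in a few lines of Python, a minimal sketch (the function name is mine); Python's built-in three-argument `pow(2, i, p)` computes !!2^i \bmod p!!:

    p = 101   # 2 is a primitive root mod 101

    exp_table = {i: pow(2, i, p) for i in range(p - 1)}   # i -> 2^i mod p
    log_table = {v: i for i, v in exp_table.items()}      # 2^i mod p -> i

    def multiply(a, b):
        # One addition and three table lookups; exponents live mod p-1.
        return exp_table[(log_table[a] + log_table[b]) % (p - 1)]

    assert multiply(27, 14) == (27 * 14) % p   # 2^7 * 2^10 = 2^17
    assert all(multiply(a, b) == (a * b) % p
               for a in range(1, p) for b in range(1, p))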

Is this actually useful for multiplication? Perhaps not, but it does have cryptographic applications. Just as multiplying is easy but factoring seems difficult, computing !!a^i\pmod n!! for given !!a, i, n!! is easy, but nobody knows a quick way in general to reverse the calculation and compute the !!i!! for which !!a^i\equiv m\pmod n!! for a given !!m!!. When !!n!! is small we can simply construct a lookup table with !!n-1!! entries. But if !!n!! is a !!600!!-digit number, the table method is impractical. Because of this, Alice and Bob can find a way to compute a number !!2^i!! that they both know, but someone else, seeing !!2^i!!, can't easily figure out what the original !!i!! was. See Diffie-Hellman key exchange for more details.
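A toy version of that exchange, using the same tiny modulus (a sketch only; real systems use enormous primes):

    import random

    p, g = 101, 2                    # public: the modulus and a primitive root

    a = random.randrange(1, p - 1)   # Alice's secret exponent
    b = random.randrange(1, p - 1)   # Bob's secret exponent

    A = pow(g, a, p)                 # Alice announces 2^a mod p
    B = pow(g, b, p)                 # Bob announces 2^b mod p

    # Each combines their own secret with the other's public number:
    assert pow(B, a, p) == pow(A, b, p)   # both get 2^(ab) mod p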

[ Also previously: Percy Ludgate's weird variation on this ]


[Other articles in category /math] permanent link

Sun, 15 Oct 2023

The discrete logarithm

[ Addendum 20231020: This came out way longer than it needed to be, so I took another shot at it, and wrote a much simpler explanation of the same thing that is only one-third as long. ]

A couple days ago I discussed the weird little algorithm of Percy Ludgate's, for doing single-digit multiplication using a single addition and three scalar table lookups. In Ludgate's algorithm, there were two tables, !!T_1!! and !!T_2!!, satisfying the following properties:

$$ \begin{align} T_2(T_1(n)) & = n \tag{$\color{darkgreen}{\spadesuit}$} \\ T_2(T_1(a) + T_1(b)) & = ab. \tag{$\color{purple}{\clubsuit}$} \end{align} $$

This has been called the “Irish logarithm” method because of its resemblance to ordinary logarithms. Normally in doing logarithms we have a magic logarithm function !!\ell!! with these properties:

$$ \begin{align} \ell^{-1}(\ell(n)) & = n \tag{$\color{darkgreen}{\spadesuit}$} \\ \ell^{-1}(\ell(a) + \ell(b)) & = ab. \tag{$\color{purple}{\clubsuit}$} \end{align} $$

(The usual notation for !!\ell(x)!! is of course “!!\log x!!” or “!!\ln x!!” or something of that sort, and !!\ell^{-1}(x)!! is usually written !!e^x!! or !!10^x!!.)

The properties of Ludgate's !!T_1!! and !!T_2!! are formally identical, with !!T_1!! playing the role of the logarithm function !!\ell!! and !!T_2!! playing the role of its inverse !!\ell^{-1}!!. Ludgate's versions are highly restricted, to reduce the computation to something simple enough that it can be implemented with brass gears.

Both !!T_1!! and !!T_2!! map positive integers to positive integers, and can be implemented with finite lookup tables. The ordinary logarithm does more, but is technically much more difficult. With the ordinary logarithm you are not limited to multiplying single digit integers, as with Ludgate's weird little algorithm. You can multiply any two real numbers, and the multiplication still requires only one addition and three table lookups. But the cost is huge! The tables are much larger and more complex, and to use them effectively you have to deal with fractional numbers, perform table interpolation, and worry about error accumulation.

It's tempting at this point to start explaining the history and use of logarithm tables, slide rules, and so on, but this article has already been delayed once, so I will try to resist. I will do just one example, with no explanation, to demonstrate the flavor. Let's multiply !!7!! by !!13!!.

  1. I look up !!7!! in my table of logarithms and find that !!\log_{10} 7 \approx 0.84510!!.

  2. I look up !!13!! similarly and find that !!\log_{10} 13 \approx 1.11394!!.

  3. I add !!0.84510 + 1.11394 = 1.95904!!.

  4. I do a reverse lookup on !!1.95904!! and find that the result is approximately !!91.00!!.

If I were multiplying !!7.236!! by !!13.877!!, I would be willing to accept all these costs, and generations of scientists and engineers did accept them. But for !!7.0000×13.000 = 91.000!! the process is ridiculous. One might wonder if there isn't some analogous technique that would retain the small, finite tables and permit multiplication of integers, using only integer calculations throughout. And there is!

Now I am going to demonstrate an algorithm, based on logarithms, that exactly multiplies any two integers !!a!! and !!b!!, as long as !!ab ≤ 100!!. Like Ludgate's and the standard algorithm, it will use one addition and three lookups in tables. Unlike the standard algorithm, the tables will be small, and will contain only integers.

Here is the table of the !!\ell!! function, which corresponds to Ludgate's !!T_1!!:

$$ \begin{array}{rrrrrrrrrrr} {\tiny\color{gray}{1}} & 0, & 1, & \color{darkblue}{69}, & 2, & 24, & 70, & \color{darkgreen}{9}, & 3, & 38, & 25, \\ {\tiny\color{gray}{11}} & 13, & \color{darkblue}{71}, & \color{darkgreen}{66}, & 10, & 93, & 4, & 30, & 39, & 96, & 26, \\ {\tiny\color{gray}{21}} & 78, & 14, & 86, & 72, & 48, & 67, & 7, & 11, & 91, & 94, \\ {\tiny\color{gray}{31}} & 84, & 5, & 82, & 31, & 33, & 40, & 56, & 97, & 35, & 27, \\ {\tiny\color{gray}{41}} & 45, & 79, & 42, & 15, & 62, & 87, & 58, & 73, & 18, & 49, \\ {\tiny\color{gray}{51}} & 99, & 68, & 23, & 8, & 37, & 12, & 65, & 92, & 29, & 95, \\ {\tiny\color{gray}{61}} & 77, & 85, & 47, & 6, & 90, & 83, & 81, & 32, & 55, & 34, \\ {\tiny\color{gray}{71}} & 44, & 41, & 61, & 57, & 17, & 98, & 22, & 36, & 64, & 28, \\ {\tiny\color{gray}{81}} & \color{darkred}{76}, & 46, & 89, & 80, & 54, & 43, & 60, & 16, & 21, & 63, \\ {\tiny\color{gray}{91}} & 75, & 88, & 53, & 59, & 20, & 74, & 52, & 19, & 51, & 50\hphantom{,} \\ \end{array} $$

(If we only want to multiply numbers with !!1\le a, b \le 9!! we only need the first row, but with the full table we can also compute things like !!7·13=91!!.)

Like !!T_2!!, this is not really a two-dimensional array. It is just a list of !!100!! numbers, arranged in rows to make it easy to find the !!81!!st number when you need it. The small gray numerals in the margin are a finding aid. If you want to look up !!\ell(81)!! you can see that it is !!\color{darkred}{76}!! without having to count up !!81!! elements. This element is highlighted in red in the table above.

Note that the elements are numbered from !!1!! to !!100!!, whereas all the other tables in these articles have been zero-indexed. I wondered if there was a good way to fix this, but there really isn't. !!\ell!! is analogous to a logarithm function, and the one thing a logarithm function really must do is to have !!\log 1 = 0!!. So too here; we have !!\ell(1) = 0!!.

We also need an !!\ell^{-1}!! table analogous to Ludgate's !!T_2!!:

$$ \begin{array}{rrrrrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, & 27, & 54, & 7, \\ {\tiny\color{gray}{10}} & 14, & 28, & 56, & 11, & 22, & 44, & 88, & 75, & 49, & 98, \\ {\tiny\color{gray}{20}} & 95, & 89, & 77, & 53, & 5, & 10, & 20, & 40, & 80, & 59, \\ {\tiny\color{gray}{30}} & 17, & 34, & 68, & 35, & 70, & 39, & 78, & 55, & 9, & 18, \\ {\tiny\color{gray}{40}} & \color{darkblue}{36}, & 72, & 43, & 86, & 71, & 41, & 82, & 63, & 25, & 50, \\ {\tiny\color{gray}{50}} & 100, & 99, & 97, & 93, & 85, & 69, & 37, & 74, & 47, & 94, \\ {\tiny\color{gray}{60}} & 87, & 73, & 45, & 90, & 79, & 57, & 13, & 26, & 52, & 3, \\ {\tiny\color{gray}{70}} & 6, & 12, & 24, & 48, & 96, & \color{darkgreen}{91}, & \color{darkred}{81}, & 61, & 21, & 42, \\ {\tiny\color{gray}{80}} & 84, & 67, & 33, & 66, & 31, & 62, & 23, & 46, & 92, & 83, \\ {\tiny\color{gray}{90}} & 65, & 29, & 58, & 15, & 30, & 60, & 19, & 38, & 76, & 51\hphantom{,} \\ \end{array} $$

Like !!\ell!! and !!T_2!!, this is just a list of !!100!! numbers in order.

As the notation suggests, !!\ell^{-1}!! and !!\ell!! are inverses. We already saw that the first table had !!\ell(81)=\color{darkred}{76}!! and !!\ell(1) = 0!!. Going in the opposite direction, we see from the second table that !!\ell^{-1}(76)= \color{darkred}{81}!! (again in red) and !!\ell^{-1}(0)=1!!. The elements of !!\ell!! tell you where to find numbers in the !!\ell^{-1}!! table. Where is !!17!! in the second table? Look at the !!17!!th element in the first table. !!\ell(17) = 30!!, so !!17!! is at position !!30!! in the second table.

Before we go too deeply into how these were constructed, let's try the !!7×13!! example we did before. The algorithm is just !!\color{purple}{\clubsuit}!!:

$$ \begin{align} % \ell^{-1}(\ell(a) + \ell(b)) & = ab\tag{$\color{purple}{\clubsuit}$} \\ 7·13 &= \ell^{-1}(\ell(7) + \ell(13)) \\ &= \ell^{-1}(\color{darkgreen}{9} + \color{darkgreen}{66}) \\ &= \ell^{-1}(75) \\ &= \color{darkgreen}{91} \end{align} $$

(The relevant numbers are picked out in green in the two tables.)

As promised, with three table lookups and a single integer addition.

What if the sum in the middle exceeds !!99!!? No problem, the !!\ell^{-1}!! table wraps around, so that element !!100!! is the same as element !!0!!:

$$ \begin{align} % \ell^{-1}(\ell(a) + \ell(b)) & = ab\tag{$\color{purple}{\clubsuit}$} \\ 3·12 &= \ell^{-1}(\ell(3) + \ell(12)) \\ &= \ell^{-1}(\color{darkblue}{69} + \color{darkblue}{71}) \\ &= \ell^{-1}(140) \\ &= \ell^{-1}(40) &\text{(wrap around)}\\ &= \color{darkblue}{36} \end{align} $$

How about that.

(This time the relevant numbers are picked out in blue.)

I said this only computes !!ab!! when the product is at most !!100!!. That is not quite true. If you are willing to ignore a small detail, this algorithm will multiply any two numbers. The small detail is that the multiplication will be done mod !!101!!. That is, instead of the exact answer, you get one that differs from it by a multiple of !!101!!. Let's do an example to see what I mean when I say it works even for products bigger than !!100!!:

$$ \begin{align} % \ell^{-1}(\ell(a) + \ell(b)) & = ab\tag{$\color{purple}{\clubsuit}$} \\ 16·26 &= \ell^{-1}(\ell(16) + \ell(26)) \\ &= \ell^{-1}(4 + 67) \\ &= \ell^{-1}(71) \\ &= 12 \end{align} $$

This tells us that !!16·26 = 12!!. The correct answer is actually !!16·26 = 416!!, and indeed !!416-12 = 404!!, which is a multiple of !!101!!. The reason this happens is that the elements of the second table, !!\ell^{-1}!!, are not true integers; they are mod !!101!! integers.

Okay, so what is the secret here? Why does this work? It should jump out at you that it is often the case that an entry in the !!\ell^{-1}!! table is twice the previous entry:

$$\ell^{-1}(1+n) = 2\cdot \ell^{-1}(n)$$

In fact, this is true everywhere, if you remember that the numbers are not ordinary integers but mod !!101!! integers. For example, the number that follows !!64!!, in place of !!64·2=128!!, is !!27!!. But !!27\equiv 128\pmod{101}!! because they differ by a multiple of !!101!!. From a mod !!101!! point of view, it doesn't matter whether we put !!27!! or !!128!! after !!64!!, as they are the same thing.

Those two facts are the whole secret of the !!\ell^{-1}!! table:

  1. Each element is twice the one before, but
  2. The elements are not quite ordinary numbers, but mod !!101!! numbers where !!27=128=229=330=\ldots!!.

Certainly !!\ell^{-1}(0) = 2^0 = 1!!. And every entry in the !!\ell^{-1}!! table is twice the previous one, if you are thinking in mod !!101!!. The two secrets are actually one secret:

$$\ell^{-1}(n) = 2^n\pmod{101}.$$

This is why the multiplication algorithm works. Say we want to multiply !!7!! and !!13!! again. We look up !!7!! and !!13!! in !!\ell!!, and find !!\ell(7)=9!! and !!\ell(13)=66!!. What this is really telling us is that

$$ \begin{align} 7 & = 2^{9\hphantom6} \pmod{101} \\ 13 & = 2^{66} \pmod{101} \\ \end{align} $$

so that multiplying !!7\cdot13!! mod !!101!! is the same as multiplying $$2^9\cdot 2^{66}.$$

But multiplying exact powers of !!2!! is easy, since you just add the exponents: !!2^9\cdot2^{66} = 2^{9+66} = 2^{75}!!, whether you are doing it in regular numbers or mod !!101!! numbers. And the !!\ell^{-1}!! table tells us directly that !!2^{75} = 91\pmod{101}!!.

The !!\ell!! function, which is analogous to the regular logarithm, is called a discrete logarithm.

What's going on with Percy Ludgate's algorithm? It's a sort of compressed, limited version of the discrete logarithm.

I had a hope that maybe we could reimplement Ludgate's thing by basing it more directly on discrete logarithms. Say we had the !!\ell^{-1}!! table encoded in a wheel of some sort, with the !!100!! entries in order around the rim. There's a “current position” !!p!!, initially !!0!!, and a “current number” !!2^p!!, initially !!1!!.

On the same axle as the wheel, mount a gear with exactly 100 teeth. We can easily turn the wheel exactly !!q!! positions by taking a straight bar with !!q!! teeth and using it to turn the gear, which turns the wheel. We easily multiply the current number by !!2!! just by turning the wheel one position clockwise.

Multiplying by !!7!! isn't too hard, just turn the wheel !!9!! positions clockwise. We can do this by constructing a short bar with exactly !!9!! teeth and using it to turn the gear. Or maybe we have a meshing gear with !!9!! teeth, on another axle, which we give one full turn. Either way, if the current number was !!5!! before, it's !!35!! after.

Multiplying by !!3!! is rather more of a pain, because we have to turn the wheel !!69!! positions, so we need a bar or a meshing gear with 69 teeth.

(We could get away with one with only !!31!! teeth, if we could turn the wheel the other way, but that seems like it might be more complicated. Hmm, I suppose it would work to use a meshing gear with 31 teeth that engages a second gear (with any number of teeth) that engages the main gear.)

Anyway, I took a look to see if there were any better tables to use, and the answer is: maybe! If, instead of a table of !!2^n!!, we use a table of !!26^n!!, then the brass wheel approach performs a little better:

$$ \begin{array}{rrrrrrrrrr} {\tiny\color{gray}{0}} & 1, & 26, & 70, & 2, & 52, & 39, & 4, & 3, & 78, & 8, \\ {\tiny\color{gray}{10}}& 6, & 55, & 16, & 12, & 9, & 32, & 24, & 18, & 64, & 48, \\ {\tiny\color{gray}{20}}& 36, & 27, & 96, & 72, & 54, & 91, & 43, & 7, & 81, & 86, \\ {\tiny\color{gray}{30}}& 14, & 61, & 71, & 28, & 21, & 41, & 56, & 42, & 82, & 11, \\ {\tiny\color{gray}{40}}& 84, & 63, & 22, & 67, & 25, & 44, & 33, & 50, & 88, & 66, \\ {\tiny\color{gray}{50}}& 100, & 75, & 31, & 99, & 49, & 62, & 97, & 98, & 23, & 93, \\ {\tiny\color{gray}{60}}& 95, & 46, & 85, & 89, & 92, & 69, & 77, & 83, & 37, & 53, \\ {\tiny\color{gray}{70}}& 65, & 74, & 5, & 29, & 47, & 10, & 58, & 94, & 20, & 15, \\ {\tiny\color{gray}{80}}& 87, & 40, & 30, & 73, & 80, & 60, & 45, & 59, & 19, & 90, \\ {\tiny\color{gray}{90}}& 17, & 38, & 79, & 34, & 76, & 57, & 68, & 51, & 13, & 35\hphantom{,} \\ \end{array} $$

Multiplying by !!2!! is no longer as simple as turning the wheel one notch; you have to turn it !!3!! positions clockwise. But that seems pretty easy. Multiplying by !!3!! is also rather easy: just turn the wheel !!7!! positions. If the table above is !!T_2!!, then the analogue of Ludgate's !!T_1!! table is:

$$ \begin{array}{cccccccccc} \tiny\color{gray}{1} & \tiny\color{gray}{2} & \tiny\color{gray}{3} & \tiny\color{gray}{4} & \tiny\color{gray}{5} & \tiny\color{gray}{6} & \tiny\color{gray}{7} & \tiny\color{gray}{8} & \tiny\color{gray}{9} \\ 0 & 3 & 7 & 6 & 72 & 10 & 27 & 9 & 14 \\ \end{array} $$

That is, if you want to compute !!3·6!!, you start with the wheel in position !!0!!, then turn it by !!T_1(3) = 7!! positions, then by !!T_1(6) = 10!!, and now it's at position !!17!!, where the current number is !!18!!.

The numbers in the !!T_1!! table are all pretty small, except that to multiply by !!5!! you have to turn by !!72!! positions, which is kinda awful. Still it's only a little worse than in the powers-of-2 version where to multiply by !!3!! you would have to turn the wheel by !!69!! positions. And overall the powers-of-26 table is better: the sum of the !!9!! entries is only !!148!!, which is optimal; the corresponding sum of the entries for the powers-of-2 table is !!216!!.
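(That optimality claim is easy to check by brute force. Here is a Python sketch, which tries every base, discards the ones whose powers don't cover all of !!1!! through !!100!!, and minimizes the total turning:)

    def turn_table(g):
        # Position of each number on a wheel of powers of g mod 101,
        # or None if g is not a primitive root
        pos = {pow(g, n, 101): n for n in range(100)}
        if len(pos) < 100:
            return None
        return [pos[k] for k in range(1, 10)]

    best = min((sum(t), g) for g in range(2, 101)
               if (t := turn_table(g)) is not None)
    print(best)   # the minimal total should be 148, per the claim above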

Who knows, it might work, and even if it didn't work well it might be pretty cool.


[Other articles in category /math] permanent link

Mon, 02 Oct 2023

Irish logarithm forward instead of backward

Yesterday I posted about the so-called “Irish logarithm”, Percy Ludgate's little algorithm for single-digit multiplication.

Hacker News user sksksfpuzhpx said:

There's a much simpler way to derive Ludgate's logarithms

and referred to Brian Coghlan's article “Percy Ludgate's Logarithmic indices”.

Whereas I was reverse-engineering Ludgate's tables with a sort of ad-hoc backtracking search, if you do it right you can do it more easily with a simple greedy search.

Uh oh, I thought, I will want to write this up before I move on to the thing I planned to do next, which made it all the more likely that I never would get to the thing I had planned to do next. But Shreevatsa R. came to my rescue and wrote up the Coghlan thing at least as well as I could have myself. Definitely check it out.

Thank you, Shreevatsa!

[ Update 20231015: A different kind of all-integer logarithm: the discrete logarithm. ]

[ Update 20231020: Better explanation of the discrete logarithm. ]


[Other articles in category /math] permanent link

Sun, 01 Oct 2023

The Irish logarithm

The Wikipedia article on “Irish logarithm” presents this rather weird little algorithm, invented by Percy Ludgate. Suppose you want to multiply !!a!! and !!b!!, where both are single-digit numbers !!0≤a,b≤9!!.

Normally you would just look it up on a multiplication table, but please bear with me for a bit.

To use Ludgate's algorithm you need a different little table:

$$ \begin{array}{rl} T_1 = & \begin{array}{cccccccccc} \tiny\color{gray}{0} & \tiny\color{gray}{1} & \tiny\color{gray}{2} & \tiny\color{gray}{3} & \tiny\color{gray}{4} & \tiny\color{gray}{5} & \tiny\color{gray}{6} & \tiny\color{gray}{7} & \tiny\color{gray}{8} & \tiny\color{gray}{9} \\ 50 & 0 & 1 & 7 & 2 & 23 & 8 & 33 & 3 & 14 \\ \end{array} \end{array} $$

and a different bigger one:

$$ \begin{array}{rl} T_2 = & % \left( \begin{array}{rrrrrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, & 3, & 6, & 12, \\ {\tiny\color{gray}{10}} & 24, & 48, & 0, & 0, & 9, & 18, & 36, & 72, & 0, & 0, \\ {\tiny\color{gray}{20}} & 0, & 27, & 54, & 5, & 10, & 20, & 40, & 0, & 81, & 0, \\ {\tiny\color{gray}{30}} & 15, & 30, & 0, & 7, & 14, & 28, & 56, & 45, & 0, & 0, \\ {\tiny\color{gray}{40}} & 21, & 42, & 0, & 0, & 0, & 0, & 25, & 63, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 35, & 0, & 0, & 0, \\ {\tiny\color{gray}{60}} & 0, & 0, & 0, & 0, & 0, & 0, & 49\hphantom{,} \end{array} % \right) \end{array} $$

I've formatted !!T_2!! in rows for easier reading, but it's really just a zero-indexed list of !!101!! numbers. So for example !!T_2(23)!! is !!5!!.

The tiny gray numbers in the margin are not part of the table, they are counting the elements so that it is easy to find element !!23!!.

Ludgate's algorithm is simply:

$$ ab = T_2(T_1(a) + T_1(b)) $$

Let's see an example. Say we want to multiply !!4×7!!. We first look up !!4!! and !!7!! in !!T_1!!, and get !!2!! and !!33!!, which we add, getting !!35!!. Then !!T_2(35)!! is !!28!!, which is the correct answer.
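Here's a sketch of the whole thing in Python, if you want to verify it exhaustively. (I fill the don't-care slots of !!T_2!! with zeroes, the way Ludgate does; the construction only succeeds because no two conflicting products ever land in the same slot.)

    T1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]

    # Build T2 so that T2[T1[a] + T1[b]] = a*b for all digits a, b
    T2 = [0] * 101
    for a in range(10):
        for b in range(10):
            T2[T1[a] + T1[b]] = a * b

    def ludgate(a, b):
        return T2[T1[a] + T1[b]]

    assert all(ludgate(a, b) == a * b
               for a in range(10) for b in range(10))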

This isn't useful for paper-and-pencil calculation, because it only works for products up to !!9×9!!, and an ordinary multiplication table is easier to use and remember. But Ludgate invented this for use in a mechanical computing engine, for which it is much better-suited.

The table lookups are mechanically very easy. They are simple one-dimensional lookups: to find !!T_1(6)!! you just look at entry !!6!! in the !!T_1!! table, which can be implemented as a series of ten metal rods of different lengths, or something like that. Looking things up in a multiplication table is harder because it is two-dimensional.

The single addition in Ludgate's algorithm can also be performed mechanically: to add !!T_1(a)!! and !!T_1(b)!!, you have some thingy that slides up by !!T_1(a)!! units, and then by !!T_1(b)!! more, and then wherever it ends up is used to index into !!T_2!! to get the answer. The !!T_2!! table doesn't have to be calculated on the fly, it can be made up ahead of time, and machined from brass or something, and incorporated directly into the machine. (It's tempting to say “hardcoded”.)

The tables look a little uncouth at first but it is not hard to figure out what is going on. First off, !!T_1!! is the inverse of !!T_2!! in the sense that $$T_2(T_1(n)) = n\tag{$\color{darkgreen}{\spadesuit}$}$$

whenever !!n!! is in range — that is when !!0≤ n ≤ 9!!.

!!T_2!! is more complex. We must construct it so that

$$T_2(T_1(a) + T_1(b)) = ab.\tag{$\color{purple}{\clubsuit}$}$$

for all !!a!! and !!b!! of interest, which means that !!0\le a, b\le 9!!.

If you look over the table you should see that the entry !!n!! is often followed by !!2n!!. That is, !!T_2(i+1) = 2T_2(i)!!, at least some of the time. In fact, this is true in all the cases we care about, where !!2n = ab!! for some single digits !!a, b!!.

The second row could just as well have started with !!24, 48, 96, 192!!, but Ludgate doesn't need the !!96, 192!! entries, so he made them zero, which really means “I don't care”. This will be important later.

The algorithm says that if we want to compute !!2n!!, we should compute $$ \begin{align} 2n & = T_2(T_1(2) + T_1(n)) && \text{Because $\color{purple}{\clubsuit}$} \\ & = T_2(1 + T_1(n)) \\ & = 2T_2(T_1(n)) && \text{Because moving one space right doubles the value}\\ & = 2n && \text{Because $\color{darkgreen}{\spadesuit}$} \end{align} $$

when !!0≤n≤9!!.

I formatted !!T_2!! in rows of !!10!! because that makes it easy to look up examples like !!T_2(35) = 28!!, and because that's how Wikipedia did it. But this is very misleading, and not just because it makes !!T_2!! appear to be a !!10×10!! table when it's really a vector. !!T_2!! is actually more like a compressed version of a !!7×5×3×3!! table.

Let's reformat the table so that the rows have length !!7!! instead of !!10!!:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 5, & 10, & 20, & 40, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & 15, & 30, & 0, & 7, & 14, \\ {\tiny\color{gray}{35}} & 28, & 56, & 45, & 0, & 0, & 21, & 42, \\ {\tiny\color{gray}{42}} & 0, & 0, & 0, & 0, & 25, & 63, & 0, \\ {\tiny\color{gray}{49}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{56}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{63}} & 0, & 0, & 0, & 49 \\ \end{array} $$

We have already seen that moving one column right usually multiplies the entry by !!2!!. Similarly, moving down by one row is seen to triple the !!T_2!! value — not always, but in all the cases of interest. Since the rows have length !!7!!, moving down one row from !!T_2(i)!! gets you to !!T_2(i+7)!!, and this is why !!T_1(3) = 7!!: to compute !!3n!!, one does:

$$ \begin{align} 3n & = T_2(T_1(3) + T_1(n)) && \text{Because $\color{purple}{\clubsuit}$} \\ & = T_2(7 + T_1(n)) \\ & = 3T_2(T_1(n)) && \text{Because moving down triples the value}\\ & = 3n && \text{Because $\color{darkgreen}{\spadesuit}$} \end{align} $$

Now here is where it gets clever. It would be straightforward to build !!T_2!! as a stack of !!5×7!! tables, with each layer in the stack having entries quintuple the layer above, like this:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & 0, & 0, & 0, & 0, & 0, \\ \\ {\tiny\color{gray}{35}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{42}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{49}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{56}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{63}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ \\ {\tiny\color{gray}{70}} & 25, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{77}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{84}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{91}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{98}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ \end{array} $$

This works, if we make !!T_1(5)!! the correct offset, which is !!7·5 = 35!!. But it wastes space, and the larger !!T_2!! is, the more complicated and expensive is the brass thingy that encodes it. The last six entries of each layer in the stack are don't-cares, so we can just omit them:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{43}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{57}} & 0, \\ \\ {\tiny\color{gray}{58}} & 25, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{65}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{72}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{79}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{86}} & 0\hphantom{,} \\ \end{array} $$

And to compensate we make !!T_1(5) = 29!! instead of !!35!!: you now move down one layer in the stack by skipping !!29!! entries forward, instead of !!35!!.

The table is still missing all the multiples of !!7!!, but we can repeat the process. The previous version of !!T_2!! can now be thought of as a !!29×3!! table, and we can stack another !!29×3!! table below it, with all the entries in the new layer being !!7!! times the original one:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{43}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{57}} & 0, \\ \\ {\tiny\color{gray}{58}} & 25, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{65}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{72}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{79}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{86}} & 0, \\ \\ \hline \\ {\tiny\color{gray}{87}} & 7, & 14, & 28, & 56, & 0, & 0, & 0, \\ {\tiny\color{gray}{94}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{101}} & 63, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{108}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{115}} & 0, \\ \\ {\tiny\color{gray}{116}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{123}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{130}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{137}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{144}} & 0, \\ \\ {\tiny\color{gray}{145}} & 0, & 0, & 0, & 0, & 0, & 0, & \ldots \\ \\ \hline \\ {\tiny\color{gray}{174}} & 49\hphantom{,} \\ \end{array} $$

Each layer in the stack has !!29·3 = 87!! entries, so we could take !!T_1(7) = 87!! and it would work, but the last !!28!! entries in every layer are zero, so we can discard those and reduce the layers to !!59!! entries each.

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{43}} & 45, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{50}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{57}} & 0, \\ \\ {\tiny\color{gray}{58}} & 25, \\ \\ \hline \\ {\tiny\color{gray}{59}} & 7, & 14, & 28, & 56, & 0, & 0, & 0, \\ {\tiny\color{gray}{66}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{73}} & 63, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{80}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{87}} & 0, \\ \\ {\tiny\color{gray}{88}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{95}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{102}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{109}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{116}} & 0, \\ \\ {\tiny\color{gray}{117}} & 0, \\ \\ \hline \\ {\tiny\color{gray}{118}} & 49\hphantom{,} \\ \end{array} $$

Doing this has reduced the layers from !!87!! to !!59!! elements each, but Ludgate has another trick up his sleeve. The last few numbers in the top layer are !!45, 25,!! and a lot of zeroes. If he could somehow finesse !!45!! and !!25!!, he could trim the top two layers all the way back to only 38 entries each:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, \\ \\ \hline \\ {\tiny\color{gray}{38}} & 7, & 14, & 28, & 56, & 0, & 0, & 0, \\ {\tiny\color{gray}{45}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{52}} & 63, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{59}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{66}} & 0, \\ \\ {\tiny\color{gray}{67}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{74}} & 0, & 0, \\ \hline \\ {\tiny\color{gray}{76}} & 49\hphantom{,} \\ \end{array} $$

We're now missing !!25!! and we need to put it back. Fortunately the place we want to put it is !!T_1(5) + T_1(5) = 29+29 = 58!!, and that slot contains a zero anyway. And similarly we want to put !!45!! at position !!14+29 = 43!!, also empty:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{28}} & 81, \\ \\ {\tiny\color{gray}{29}} & 5, & 10, & 20, & 40, & 0, & 0, & 0, \\ {\tiny\color{gray}{36}} & 15, & 30, \\ \\ \hline \\ {\tiny\color{gray}{38}} & 7, & 14, & 28, & 56, & 0, & \color{purple}{45}, & 0, \\ {\tiny\color{gray}{45}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{52}} & 63, & 0, & 0, & 0, & 0, & 0, & \color{purple}{25}, \\ {\tiny\color{gray}{59}} & 0, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{66}} & 0, \\ \\ {\tiny\color{gray}{67}} & 35, & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{74}} & 0, & 0, \\ \\ \hline \\ {\tiny\color{gray}{76}} & 49\hphantom{,} \\ \end{array} $$

The arithmetic pattern is no longer as obvious, but property !!\color{purple}{\clubsuit}!! still holds.

We're not done yet! The table still has a lot of zeroes we can squeeze out. If we change !!T_1(5)!! from !!29!! to !!23!!, the !!5,10,20,40!! group will slide backward to just after the !!54!!, and the !!15, 30!! will move to the row below that.

We will also have to move the other multiples of !!5!!. The !!5!! itself moved back by six entries, and so did everything after that in the table, including the !!35!! (from position !!32+29!! to !!32+23!!) and the !!45!! (from position !!14+29!! to !!14+23!!) so those are still in the right places. Note that this means that !!7!! has moved from position !!38!! to position !!32!!, so we now have !!T_1(7) = 32!!.

But the !!25!! is giving us trouble. It needed to move back twice as far as the others, from !!29+29 = 58!! to !!23+23 = 46!!, and unfortunately it now collides with !!63!! which is currently at position !!7+7+32 = 46!!.

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & \color{purple}{5}, & \color{purple}{10}, & \color{purple}{20}, & \color{purple}{40}, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & \color{purple}{15} & \color{purple}{30}, \\ \\ \hline \\ {\tiny\color{gray}{32}} & 7, & 14, & 28, & 56, & 0, & \color{darkgreen}{45}, & 0, \\ {\tiny\color{gray}{39}} & 21, & 42, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{46}} & {63\atop\color{darkred}{¿25?}} & 0, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{53}} & 0, & 0, & \color{darkgreen}{35}, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{60}} & 0, & 0, & 0, & 0, \\ \\ \hline \\ {\tiny\color{gray}{64}} & 49\hphantom{,} \\ \end{array} $$

We need another tweak to fix !!25!!. !!7!! is currently at position !!32!!. We can't move !!7!! any farther back to the left without causing more collisions. But we can move it forward, and if we move it forward by one space, the !!63!! will move up one space also and the collision with !!25!! will be solved. So we insert a zero between !!30!! and !!7!!, which moves up !!7!! from position !!32!! to !!33!!:

$$ \begin{array}{rrrrrrrr} {\tiny\color{gray}{0}} & 1, & 2, & 4, & 8, & 16, & 32, & 64, \\ {\tiny\color{gray}{7}} & 3, & 6, & 12, & 24, & 48, & 0, & 0, \\ {\tiny\color{gray}{14}} & 9, & 18, & 36, & 72, & 0, & 0, & 0, \\ {\tiny\color{gray}{21}} & 27, & 54, & 5, & 10, & 20, & 40, & 0, \\ {\tiny\color{gray}{28}} & 81, & 0, & 15 & 30, \\ \\ \hline \\ {\tiny\color{gray}{32}} & \color{purple}{0}, & \color{darkgreen}{7}, & \color{darkgreen}{14}, & \color{darkgreen}{28}, & \color{darkgreen}{56}, & 45, & 0, \\ {\tiny\color{gray}{39}} & 0, & \color{darkgreen}{21}, & \color{darkgreen}{42}, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{46}} & 25,& \color{darkgreen}{63}, & 0, & 0, & 0, & 0, & 0, \\ {\tiny\color{gray}{53}} & 0, & 0, & 0, & \color{darkgreen}{35}, & 0, & 0, & 0, \\ {\tiny\color{gray}{60}} & 0, & 0, & 0, & 0, \\ \\ \hline \\ {\tiny\color{gray}{64}} & \color{purple}{0}, & \color{purple}{0}, & \color{darkgreen}{49}\hphantom{,} \\ \end{array} $$

All the other multiples of !!7!! moved up by one space, but not the non-multiples !!25!! and !!45!!. Also !!49!! had to move up by two, but that's no problem at all, since it was at the end of the table and has all the space it needs.

And now we are done! This is exactly Ludgate's table, which has the property that

$$T_2(p + 7q + 23r + 33s) = 2^p3^q5^r7^s$$

whenever !!2^p3^q5^r7^s = ab!! for some !!0≤a,b≤9!!. Moving right by one space multiplies the !!T_2!! entry by !!2!!, at least for the entries we care about. Moving right by seven spaces multiplies the entry by !!3!!. To multiply by !!5!! or !!7!! we move right by !!23!! or by !!33!!, respectively.

These are exactly the values in the !!T_1!! table:

$$\begin{align} T_1(2) & = 1\\ T_1(3) & = 7\\ T_1(5) & = 23\\ T_1(7) & = 33 \end{align}$$

The rest of the !!T_1!! table can be obtained by remembering !!\color{darkgreen}{\spadesuit}!!, that !!T_2(T_1(n)) = n!!, so for example !!T_1(6) = 8!! because !!T_2(8) = 6!!. Or we can get !!T_1(6)!! by multiplication, using !!\color{purple}{\clubsuit}!!: multiplying by !!6!! is the same as multiplying by !!2!! and then by !!3!!, which means you move right by !!1!! and then by !!7!!, for a total of !!8!!. Here's !!T_1!! again for reference:

$$ \begin{array}{rl} T_1 = & \begin{array}{cccccccccc} 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline 50 & 0 & 1 & 7 & 2 & 23 & 8 & 33 & 3 & 14 \\ \end{array} \end{array} $$
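In code, the rest of !!T_1!! falls right out of the four prime entries; this little Python sketch (mine, not Ludgate's) is just !!\color{purple}{\clubsuit}!! applied once per prime factor:

    offset = {2: 1, 3: 7, 5: 23, 7: 33}

    def t1(n):
        # Sum the offsets of n's prime factors, with multiplicity
        total = 0
        for p, d in offset.items():
            while n % p == 0:
                total += d
                n //= p
        return total

    print([t1(n) for n in range(1, 10)])
    # [0, 1, 7, 2, 23, 8, 33, 3, 14], matching T1(1) through T1(9)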

(Actually I left out a detail: !!T_1(0) = 50!!. Ludgate wants !!T_2(T_1(0) + T_1(b)) = 0!! for all !!b!!. So we need !!T_2(T_1(0) + k) = 0!! for each !!k!! in !!T_1!!. !!T_1(0) = 50!! is the smallest value that works. This is rather painful, because it means that the !!67!!-item table above is not sufficient. Ludgate has to extend !!T_2!! all the way out to !!101!! items in order to handle the seemingly trivial case of !!0\cdot 0 = T_2(50 + 50)!!. But the last !!34!! entries are all zeroes, so the brass widget probably doesn't have to be too much more complicated.)

Wasn't that fun? A sort of mathematical engineering of a kind that has not been really useful for at least fifty years.

But actually that was not what I planned to write about! (Did you guess that was coming?) I thought I was going to write this bit as a brief introduction to something else, but the brief introduction turned out to be 2500 words and a dozen complicated tables.

We can only hope that part 2 is forthcoming. I promise nothing.

[ Update 20231002: Rather than the ad-hoc backtracking approach I described here, one can construct !!T_1!! and !!T_2!! in a simpler and more direct way. Shreevatsa R. explains. ]

[ Update 20231015: Part 2 has arrived! It discusses a different kind of all-integer logarithm called the “discrete” logarithm. ]

[ Update 20231020: I think this is a clearer explanation of the discrete logarithm. Shorter, anyway. ]


[Other articles in category /math] permanent link

Tue, 05 Sep 2023

Math SE report 2023-06: funky-looking Hasse diagrams, and what is a polynomial anyway?

Is !!x^4-x^4 = 0!! a fourth-degree equation?

This is actually a really good question! (You can tell because it's quick to ask and complicated to answer.) It goes back to a very fundamental problem that trips up beginners in mathematics, which is that there is a difference between an object's true nature and the way it happens to be written down. And often these problems are compounded because there is no way to talk about the object except by referring to how it is written down.

OP says:

The best definition I could find for the degree of an equation is the following:

The highest power of the unknown term whose coefficient isn't zero in a given equation

And they are bothered by this, and rightly so. I was almost derailed at this point into writing an article about what an equation is, but I'm going to put it off for another day, because I think to get to this person's question what we really need to do is to say what a polynomial is.

One way is to describe it as an expression in a certain form, but this is a bit roundabout. It's like describing a rational number as an expression of the form !!\frac n d!! where !!n!! and !!d!! are relatively prime integers. Under this sort of definition, !!x^4-x^4!! isn't a polynomial at all, because it's not an expression of the correct form.

But I think the right way to define a polynomial is that it's an element of the free ring over some ring !!C!! of coefficients. This leaves completely open the question of how a polynomial is written, or what it looks like. It becomes a theorem that polynomials are in one-to-one correspondence with finite sequences of elements of !!C!!. Then we can say that the “degree” of a polynomial is one less than the length of the corresponding finite sequence, or something like that.

[ Sometimes we make an exception for the zero polynomial and say its degree is !!-\infty!!, to preserve the law !!\operatorname{deg}(pq) = \operatorname{deg}(p)+\operatorname{deg}(q)!!. ]

In this view the zero polynomial is simply the zero element of the ring. The polynomial called “!!x^4!!” is the fourth power of the free element !!x!!.

Since the polynomials are elements of a ring, addition, subtraction, and multiplication come along automatically, and we can discuss the value of the expression !!x^4-x^4!!, which by the usual properties of !!-!! is also the zero polynomial.
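Here is the same distinction in miniature, as a Python sketch (mine, and deliberately simple-minded): an expression is a coefficient sequence, the polynomial itself is the normal form, and the degree is a property of the normal form.

    def normalize(coeffs):
        # Strip trailing zero coefficients: many expressions, one polynomial
        coeffs = list(coeffs)
        while coeffs and coeffs[-1] == 0:
            coeffs.pop()
        return tuple(coeffs)

    def degree(coeffs):
        n = normalize(coeffs)
        return len(n) - 1 if n else None   # None plays the role of -∞

    x4 = (0, 0, 0, 0, 1)                    # x⁴
    zero = [a - b for a, b in zip(x4, x4)]  # x⁴ - x⁴, as written
    assert normalize(zero) == ()            # the zero polynomial
    assert degree(zero) is None             # not a fourth-degree polynomial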

Anyway that all is pretty much what I said:

!!x^4-x^4!! is just a way to write the polynomial !!0!!, which is not a fourth-degree polynomial. Similarly !!x^5+x^4-x^5!! is not a fifth-degree polynomial.

There's an underlying reality here, the abstract elements of the ring !!R[x]!!. And then there's a representation theorem, which is that elements of !!R[x]!! are in one-to-one correspondence with finite sequences of elements of !!R!!. The ring laws give us ways to normalize arbitrary expressions involving polynomials. And then there's also the important functor that turns every polynomial ring into a ring of functions, turning the polynomial !!x^4!! into the function !!x\mapsto x^4!!.

This kind of abstract approach isn't usually explained in secondary or tertiary education, and I'm not sure it would be an improvement if it were. (You'd have to explain rings, for one thing.) But the main conceptual point is that there is a difference between the thing itself (say, !!x^4!!) and the way we happen to write the thing (say, !!x^5+x^4-x^5!!), and some properties are properties of the thing itself and others are properties of expressions, of the way the thing has been written. The degree of a polynomial is a property of the thing itself, not of the way it happens to be written, so both of those expressions are ways to write the same polynomial, which is fourth-degree, regardless of the fact that in one of them, “the highest power of the unknown term whose coefficient isn't zero” is five.

There is one example of this abstraction that everyone learns in childhood, rational numbers. I lean hard on this example, because most people know it pretty well, even if they don't realize it yet. !!\frac15!! and !!\frac6{30}!! are the same thing, written in two different ways. Mathematicians will, without even thinking about it, speak of the numerator of a rational number, and without batting an eyelash will say that the numerator of the rational number !!\frac{6}{30}!! is !!1!!. The fraction !!\frac6{30}!! is a mere notation that represents a rational number, in this case the rational number !!\frac15!!, and this rational number has a numerator of !!1!!.

Beginning (and intermediate) computer programmers also have this issue, that the thing itself, usually some embedding of data in the computer's memory, may be serialized into a string in multiple ways. There's a string that represents the thing, and then there's the thing itself, but it's hard to talk about the thing itself because anything you can write is a string. I wish this were made explicit in CS education. Computer programmers who don't pick up on this crucial point, at least on an intuitive level, do not progress past intermediate.

What are the names given to statements that can be true or false?

I think I totally flubbed this one. OP is really concerned with open and closed formulas. For example, “!!x > 2!!” is true, or false, depending on the value of !!x!!. And OP astutely noted that while “!!x>4 \to x> 2!!” is always true, its meaning still depends on the value of !!x!!.

I did get around to explaining that part of the issue, eventually. The crucial point, which is that there are formulas which may have free variables and then there are statements which do not, is buried at the end after a lot of barely-relevant blather about Quinian quasiquotation. What was I thinking? Perhaps I should go back and shorten the answer to just the relevant part.

How does one identify the weakest preconditions in Hoare triples?

I wrote a detailed explanation of how one identifies weakest preconditions in Hoare triples, before realizing that actually OP understood this perfectly, and their confusion was because their book wrote “{x≠1}” where it should have had “{x≠-1}”.

Sheesh.

Artifacts of mathematical logic

This was fun. OP wants actual physical artifacts that embody concepts from mathematical logic, the way models of the Platonic solids embody concepts from solid geometry.

I couldn't think of anything good, but then Michael Weiss brought up Lewis Carroll's Game of Logic. This reminded me that Martin Gardner had written a book about embodiments of simple logic, including the Carroll one, so I mentioned that. It's a fun book. Check out the account of Ramon Llull, who missed being canonized because his martyrdom looked a bit too much like FAFO.

I find this answer a little unsatisfying though. The logic machines in Gardner's book do only a little boolean algebra, or maybe at best a tiny bit of predicate logic. But I'd love to see a physical system that somehow encapsulated sequent calculus or natural deduction or something like that. Wouldn't it be cool to have blocks stuck together with magnets or physical couplings, and if you could physically detach the !!B!! from !!A\to B!! only if you already had an assemblage that matched !!A!! exactly? I have no idea how you'd do it. Maybe a linear logic model would be more feasible: once you used !!A!! with !!A\to B!! to get !!B!!, you wouldn't be able to use either one again.

We need some genius to come and invent some astonishing mechanism that formerly seemed impossible. I wonder if Ernő Rubik is available?

Joachim Breitner's Incredible Proof Machine is a fun thing along these lines, but it's not at all an artifact.

Is there a name for this refinement of the subset ordering?

This was my question. I've never seen this ordering elsewhere, but it has a simple description. We start with a totally ordered finite set !!S!!. Then if !!A!! and !!B!! are subsets of !!S!!, we deem !!A \preceq B!! if there is an injective mapping !!f:A\to B!! where !!a \le f(a)!! for each !!a\in A!!.

So for example, if !!S!! has three elements !!a\lt b\lt c!! then the ordering I want, on !!2^S!!, has this Hasse diagram:

[Diagram: at far left, least, is the empty set; then, in a line to the right, !!\{a\}!!, then !!\{b\}!!, then both !!\{a,b\}!! and !!\{c\}!!; right of both of these is !!\{a,c\}!!, then !!\{b,c\}!!, then !!S=\{a,b,c\}!!.]

!!\{b\}\prec\{a,b\}!! because we can match the !!b!!'s. And !!\{a,b\}\prec \{a, c\}!! because we can match !!a!! with !!a!! and !!b!! with !!c!!. But !!\{c\}\not\prec\{a, b\}!! because we can't match !!c!! with either !!a!! or with !!b!!, and !!\{a,b\}\not\prec\{c\}!! because, while we can match either of !!a!! or !!b!! with !!c!!, we aren't allowed to match both of them with !!c!!.
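If you'd like to experiment with this order, here's a brute-force sketch in Python; it just searches for a suitable injection:

    from itertools import permutations

    def precedes(A, B):
        # A ⪯ B iff some injection f : A → B has a ≤ f(a) for every a in A
        A = sorted(A)
        return any(all(a <= fa for a, fa in zip(A, image))
                   for image in permutations(sorted(B), len(A)))

    assert precedes({'b'}, {'a', 'b'})
    assert precedes({'a', 'b'}, {'a', 'c'})
    assert not precedes({'c'}, {'a', 'b'})
    assert not precedes({'a', 'b'}, {'c'})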

Here's the corresponding Hasse diagram for !!|S|=4!!:

[Diagram: a more complicated Hasse diagram with 16 nodes, one for each subset of !!\{a,b,c,d\}!!.]

Maybe a better way to describe this is: the bottom element is !!\varnothing!!. To go up the lattice one step, you either increment one of the elements of the current set, or you insert an !!a!! if there isn't one already. So from !!\{b,d\}!! you can either increment the !!b!! to move up to !!\{c, d\}!! or you can insert an !!a!! to move up to !!\{a, b, d\}!!.

This ordering comes up in connection with a problem I've thought about a lot: Say you have a number !!N!! and you want to find !!AB=N!! with !!A!! and !!B!! as close together as possible. Even if you have the prime factorization of !!N!! available, it's not usually clear what the answer is. (In general it's NP-hard.)

If !!N!! is the product of two primes, !!N=p_1p_2!! the answer is obvious. And if !!N!! is a product of three primes !!N =p_1p_2p_3!! there is a definitive answer. Without loss of generality, !!p_1 ≤ p_2 ≤ p_3!!, and the answer is simply that !!A=p_1p_2, B=p_3!! is always optimal.

But if !!N =p_1p_2p_3p_4!! it can go two different ways. Assuming !!p_1 ≤ p_2 ≤ p_3 ≤ p_4!!, it usually turns out that the optimal solution is !!A=p_1p_4, B=p_2p_3!!. But sometimes the optimal solution is !!A=p_1p_2p_3, B=p_4!!. These are the only two possibilities.
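(A brute-force sketch in Python, exponential of course, that finds the optimal split from the factorization; both of the four-prime cases show up:)

    from itertools import combinations
    from math import prod

    def best_split(primes):
        # Try every sub-multiset of the factors as A; B is what's left
        n = prod(primes)
        _, a = min((abs(d - n // d), d)
                   for r in range(len(primes) + 1)
                   for ix in combinations(range(len(primes)), r)
                   for d in [prod(primes[i] for i in ix)])
        return a, n // a

    print(best_split([2, 3, 5, 7]))    # (14, 15): p1·p4 and p2·p3
    print(best_split([2, 3, 5, 113]))  # (30, 113): p1·p2·p3 and p4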

Which ways of splitting the prime factors might be optimal relates to those Hasse diagrams above. The possibly-optimal splits between !!A!! and !!B!! correspond to nodes that are just at the boundary of the left and right halves of the diagram.

Nobody had an answer for what this order was called, so I could not look it up. This is OK, I will figure it all out eventually.


[Other articles in category /math/se] permanent link

Mon, 04 Sep 2023

Gantō's axe does have computational content

A while back I was thinking about this theorem of intuitionistic logic:

$$((P\to Q)\land ((P\to \bot) \to Q)) \to {(Q\to\bot)\to \bot} \tag{$\color{darkred}{\heartsuit}$}$$

Since it's intuitionistically valid, its proof can be converted into a program with the corresponding type. But the function that you get seemed to be computationally vacuous, since it could only be called with arguments that were impossible to obtain, or else with arguments that render the whole thing silly. For the former, you can make it work in general if you have at hand a function of type !!(P\to\bot)\to Q!! — but how could you? And for the latter, you can make it work if !!Q=\mathtt{int}!!, in which case you don't really need the program at all since !!(Q\to\bot)\to\bot!! is trivially true.

Several people replied to me about this, but the most interesting response I got was from Simon Tatham, who observed that !!\color{darkred}{\heartsuit}!! is still intuitionistically valid if you replace !!\bot!! with arbitrary !!R!!:

$$((P\to Q)\land ((P\to R) \to Q)) \to {(Q\to R)\to R} \tag{$\color{purple}{\spadesuit}$}$$

The proof is essentially the same:

  1. Suppose you have: !!f : P\to Q!! and !!g:(P\to R) \to Q!!
  2. then if you also have !!h: Q\to R!!, you can compose it with !!f!! to get !!h\circ f : P\to R!!
  3. which you can feed to !!g!! to get !!g(h\circ f) : Q!!
  4. and then feed that to !!h!! to get !!h(g(h\circ f)) : R!!.
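Under the propositions-as-types reading, that proof is a program. Here is a sketch in Python, with the types in comments, since Python won't check them for us:

    def spadesuit(f, g):
        # f : P -> Q,   g : (P -> R) -> Q
        def result(h):                      # h : Q -> R
            h_after_f = lambda p: h(f(p))   # step 2: h∘f : P -> R
            q = g(h_after_f)                # step 3: g(h∘f) : Q
            return h(q)                     # step 4: h(g(h∘f)) : R
        return result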

But unlike !!\color{darkred}{\heartsuit}!!, this is computationally interesting.

M. Tatham gives this example: Take !!R!! to be !!\mathtt{bool}!!. We can understand a function !!X \to R!! as a subset of !!X!!, represented by its characteristic function. Say that !!Q!! is the type of subsets of !!P!!, perhaps in some representation that is more concrete than characteristic functions. If you'd like a more specific example, take !!P!! to be the natural numbers, and !!Q!! to be (recursive) sets of natural numbers as represented by (possibly infinite) lists in strictly increasing order.

Then:

  • !!f!! is some arbitrary function that assigns, to each element of !!P!!, a related subset of !!P!!. Say for example that !!f(p)!! is the semispatulated closure of !!p!!.

  • !!g!! could be the isomorphism that converts a subset of !!P!!, represented as a characteristic function, into one represented as a !!Q!!.

  • !!h : Q\to \mathtt{bool}!! is a single subset of !!Q!!, represented as a characteristic function on !!Q!!. Say, !!h(q)!! is true if and only if the set !!q\subset P!! has the Cosell property.

With these interpretations, !!h\circ f : P\to \mathtt{bool}!! tells you, for each !!p!!, whether !!p!!'s semispatulated closure has the Cosell property, and !!g(h\circ f)!! is the element of !!Q!! that represents, !!Q!!-style, the set of all such !!p!!.

Then !!h(g(h\circ f)) : R!! computes whether this set itself also has the Cosell property: true if so, false if not.
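To see it actually run, here is a small finite instance in Python, with the made-up properties replaced by real but boring ones (divisor sets and even cardinality; any other choices would do as well):

    P = range(10)

    # f : P -> Q assigns each p a subset of P (here, p's divisors in P)
    f = lambda p: frozenset(d for d in P if d != 0 and p % d == 0)

    # g : (P -> bool) -> Q turns a characteristic function into a set
    g = lambda chi: frozenset(p for p in P if chi(p))

    # h : Q -> bool is a predicate on subsets (here, "has even size")
    h = lambda q: len(q) % 2 == 0

    h_after_f = lambda p: h(f(p))  # does p's divisor set have even size?
    q = g(h_after_f)               # the set of all such p
    print(h(q))                    # does that set itself have even size?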

This is certainly nontrivial, and if the made-up properties were replaced by real ones, it could even be interesting.

One could also replace !!R=\mathtt{bool}!! with some other set, and then instead of a characteristic function, !!X\to R!! would be some sort of valuation function.

I had asked:

Is the whole thing just trivial because there is no interesting way to instantiate data objects with the right types? Or is there some real computational content here?

I think M. Tatham's example shows that there is, and that the apparent vacuity of the theorem !!\color{darkred}{\heartsuit}!! arose only because !!R!! had been replaced with the empty set.


[Other articles in category /math/logic] permanent link

Tue, 01 Aug 2023

Computational content of Gantō's axe

Lately I have been thinking about the formula

$$((P\to Q)\land (\lnot P \to Q)) \to Q \tag{$\color{darkgreen}{\heartsuit}$}$$

which is a theorem of classical logic, but not of intuitionistic logic. This shouldn't be surprising. In CL you know that one of !!P!! and !!\lnot P!! is true (although perhaps not which), and whichever it is, it implies !!Q!!. In IL you don't know that one of !!P!! and !!\lnot P!! is provable, so you can't conclude anything.

Except you almost can. There is a family of transformations !!T!! where, if !!C!! is classically valid, then !!T(C)!! is intuitionistically valid even if !!C!! itself isn't.

For example, if !!C!! is classically valid, then !!\lnot\lnot C!! is intuitionistically valid whether or not !!C!! is. IL won't prove that !!(\color{darkgreen}{\heartsuit})!! is true, but it will prove that it isn't false.

I woke up in the middle of the night last month with the idea that even though I can't prove !!(\color{darkgreen}{\heartsuit})!!, I should be able to prove !!(\color{darkred}{\heartsuit})!!:

$$((P\to Q)\land (\lnot P \to Q)) \to \color{darkred}{\lnot\lnot Q} \tag{$\color{darkred}{\heartsuit}$}$$

This is correct; !!(\color{darkred}{\heartsuit})!! is intuitionistically valid. Understanding !!\lnot X!! as an abbreviation for !!X\to\bot!! (as is usual in IL), and assuming $$ \begin{array}{rlc} P\to Q & & (1) \\ \lnot P\to Q & & (2) \\ \lnot Q & (≡ Q\to\bot) & (3) \end{array} $$

we can combine !!P\to Q!! and !!Q\to\bot!! to get !!P\to\bot!! which is the definition of !!\lnot P!!. Then detach !!Q!! from !!(2)!!. Then from !!Q!! and !!(3)!! we get !!\bot!!, and discharging the three assumptions we conclude:

$$ \begin{align} \color{darkblue}{(P\to Q)\to (\lnot P \to Q)} & \to \color{darkgreen}{\lnot Q \to \bot} \\ ≡ \color{darkblue}{((P\to Q)\land (\lnot P \to Q))} & \to \color{darkgreen}{\lnot\lnot Q} \tag{$\color{darkred}{\heartsuit}$} \end{align}$$
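Translated into a program (a Python sketch, reading !!\bot!! as a type with no values), the proof is:

    def heartsuit(f, g):
        # f : P -> Q,   g : ¬P -> Q        (¬X abbreviates X -> ⊥)
        def refute(h):                     # h : ¬Q
            not_p = lambda p: h(f(p))      # ¬P, by composing h with f
            return h(g(not_p))             # ⊥
        return refute                      # refute : ¬¬Q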

But what is going on here? It makes sense to me that !!(P\to Q)\land (\lnot P \to Q)!! doesn't prove !!Q!!. That wasn't the part that puzzled me. The puzzling part was that it proves anything at all. And if !!(P\to Q)\land (\lnot P\to Q)!! can prove !!\lnot\lnot Q!!, then why can't it prove anything else?

This isn't a question about the formal logical system. It's a question about the deeper meaning: how are we to understand this? Does it make sense?

I think the answer is that !!Q\to\bot!! is an extremely strong assumption, in fact the strongest possible statement you can make about !!Q!!. So it's the easiest possible thing you can disprove about !!Q!!. Even though !!(P\to Q)\land(\lnot P\to Q)!! is not enough to prove anything positive, it is enough, just barely, to disprove the strongest possible statement about !!Q!!.

When you assume !!Q\to \bot!!, you are restricting your attention to a possible world where !!Q!! is actually false. When you find yourself in such a world, you discover that both !!P\to Q!! and !!\lnot P\to Q!! are much stronger than you suspected.

My high school friends and I used to joke about “very strong theorems”: “I'm trying to prove that a product of Lindelöf spaces is also a Lindelöf space” one of us would say, and someone would reply “I think that is a very strong theorem,” meaning, facetiously or perhaps sarcastically, that it was false. But facetious or sarcastic, it's funny because it's correct. False theorems are really strong, that's why they are so hard to prove! We've been trying for thousands of years to prove a false theorem, but every time we think we have done it, there turns out to be a mistake in the proof.

My puzzlement about why !!(P\to Q)\land (\lnot P\to Q)!! can prove anything, translated into computational language, looks like this: I have a function !!P\to Q!! (but I don't have any !!P!!) and a function !!\lnot P\to Q!! (but I don't have any !!\lnot P!!). Intuitionistic logic says that I can't use these functions to actually get any !!Q!!, which is not at all surprising, because I don't have anything to use as arguments. But IL says that I can get !!\lnot\lnot Q!!. The question is, how can I get anything from these functions when I don't have anything to use as arguments?

Translating the proof of the theorem into computations, the answer one gets is quite unsatisfying. The proof observes that if I also had a !!Q\to\bot!! function, I could compose it with the first function to make a !!P\to\bot\equiv \lnot P!! which I could then feed to the second function and get !!Q!! from nowhere. Which is very strange, since operationally, where does that !!Q!! actually come from? It's manufactured by the !!\lnot P\to Q!! function, which was rather suspicious to begin with. What does such a function actually look like? What functions of this type can actually be implemented? It all seems rather unlikely: how on earth would you turn a !!P \to \bot!! value into a !!Q!! value?

One reasonable answer is that if !!Q = \lnot P!!, then it's easy to write that suspicious !!\lnot P\to Q!! function. But if !!Q=\lnot P!! then the claim that I also have a !!P\to Q!! function looks extremely dubious.

An answer that looks good at first but flops is that if !!Q=\mathtt{int}!! or something, then it's quite easy to produce the required functions, both !!P\to Q!! and !!\lnot P\to Q!!. The constant function that always returns !!23!! will do for either or both. But this approach does not answer the question, because in such a case we can deduce not only !!\lnot\lnot Q!! but !!Q!! itself (the !!23!! again), so we didn't need the functions in the first place.

Is the whole thing just trivial because there is no interesting way to instantiate data objects with the right types? Or is there some real computational content here? And if there is, what is it, and how does that translate into the logic? Does this argument ever allow us to conclude something actually interesting? Or is it always just reasoning about vacuities?

Note

As far as I know the formula !!(\color{darkgreen}{\heartsuit})!! was first referred to as “Gantō's Axe” by Douglas Hofstadter. This is a facetious reference to a certain Zen koan, which says, in part:

Ganto picked up an axe and went to the hut where the two monks were meditating. He raised the axe, saying, “If you say a word I will cut off your heads. If you do not say anything, I will also behead you.”

(See Kubose, Gyomay M. Zen Koans, p.178.)

Addendum 20230904

Simon Tatham observed that this is a special case of the theorem:

$$((P\to Q)\land ((P\to R) \to Q)) \to {(Q\to R)\to R}$$

which definitely has nontrivial computational content, and that the vacuity of !!\color{darkred}{\heartsuit}!! arises because !!R!! has been replaced by the empty set. Full details are here.


[Other articles in category /math/logic] permanent link

Fri, 09 Jun 2023

Math SE report 2023-05: Arguments that don't work, why I am a potato, and set theory as a monastery

How to shift a power series to be centered at !!a!!?

OP observed that while the Taylor series for !!\sin x!!, centered at zero, is a good approximation near !!x=0!!, it is quite inaccurate for computing !!\sin 4!!:

[Graph: !!\sin x!! and its fifth-degree Taylor polynomial, which are not at all the same for !!x!! bigger than about !!2.2!!.]

They wanted to know how to use it to compute a good approximation for !!\sin 4!!. But the Taylor series centered around !!4!! is no good for this, because it only tells you that when !!x!! is close to !!4!!, $$\sin x \approx \sin 4 + (x-4)\cos 4 + \ldots, $$ which is obviously useless: put !!x=4!! and you get !!\sin 4 = \sin 4!!.

I'd written about Taylor series centering at some length before, but that answer was too long and detailed to repeat this time. It was about theory (why do we do it at all) and not about computation.

So I took a good suggestion from the comments, which is that if you want to compute !!\sin 4!! you should start with the Taylor series centered around !!π!!:

$$\begin{align} \sin x & \approx \sin \pi + (x-\pi)\cos \pi - \frac{(x-\pi)^2}{2}\sin \pi - \frac{(x-\pi)^3}{6}\cos \pi + \ldots \\ & = -(x-\pi) + \frac{(x-\pi)^3}{6} - \frac{(x-\pi)^5}{120} + \ldots \end{align} $$

because the !!\sin \pi!! terms vanish and !!\cos \pi = -1!!. I did some nice rainbow-colored graphs in Desmos.

[Graphs: the first three nonzero terms of the Taylor series (a dotted straight line !!-(x-\pi)!!, a third-degree term, and a fifth-degree term), and the three- and five-term approximations fitting the sinusoid well between around !!x=2!! and !!x=4!!.]
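(A quick numeric check, in Python, that the three nonzero terms really do handle !!x=4!!:)

    from math import pi, sin

    x = 4
    approx = -(x - pi) + (x - pi)**3 / 6 - (x - pi)**5 / 120
    print(approx, sin(x))   # both are about -0.7568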

I just realized I already wrote this up last month. And do you know why? It's because I copied this article from last month's, forgot to change the subject line from “2023-04” to “2023-05”, and because of that forgot that I was doing May and not April. Wheeee! This is what comes of writing blog articles at 3 AM.

Well anyway, continuing with May, we have…

Rational solutions for !!x^3+y^3=1!! where both x and y are non-negative

OP wanted solutions to $$x^3 + y^3 = 1,$$ and had done some research, finding a relevant blog post that they didn't understand, which observed that if !!x!! and !!y!! were solutions, so too would be certain functions of !!x!! and !!y!!, and this allows an infinite family of solutions to be developed if one knows a solution to begin with.

Unfortunately, there are no nontrivial rational solutions to !!x^3 + y^3 = 1!!, as has been known for some time. The blog post that OP found was discussing !!x^3 + y^3 = 9!!, for which !!\langle x, y\rangle = \langle 1, 2\rangle !! is an obvious starting point.

OP asked a rather odd question in the comments:

Why is !!(0, 1)!! not a start?

Had they actually tried this, they would have seen that if they started with !!\langle x, y\rangle = \langle 0, 1\rangle !!, when they computed the two functions that were supposed to give them another solution, they got !!\langle 0, 1\rangle !! back again. I told OP to try it and see what happened. (Surprising how often people forget this. Lower Mathematics!)

This reminds me a bit of a post I replied to long ago that asked why we can't use induction to prove the Goldbach conjecture. Well, what happens when you try? The base case is trivial, so far so good. The induction case says here you go, for every even number !!k < n!! I give you primes !!p_k!! and !!q_k!! with !!p_k+q_k = k!!. Your job is to use these to find primes !!p_{k+2}!! and !!q_{k+2}!! with !!p_{k+2}+q_{k+2} = k+2!!. Uhhh? What now?

Proving !!n(n^2+5)!! is always even

Mathematically this is elementary, but the pedagogy is interesting.

OP had already proved this by considering even and odd cases separately, but wanted to know if an induction proof was possible. They had started one, but gotten stuck.

Three people, apparently not reading the question, provided proofs by considering even and odd cases separately. One other provided a proof by induction that was “a bit hairy”. But I think a better answer engages with OP's attempt at an induction proof: Instead of “here's a way it could be done”, it's better to provide “here's how you could have made your way work”.

I used a trick, which is that instead of taking !!\Phi(x)!! to mean “!!f(x)!! is even”, and proving !!\Phi(x)!! for all !!x!! by induction, I took !!\Phi(x)!! to mean “!!f(x)!! is even and !!f(x+1)!! is also even”. You have to prove more, but you have more to work with. For a similar approach to a similar problem, see Proof that every third Fibonacci number is even.
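(For the record, here is my reconstruction of the arithmetic in the induction step, not necessarily the exact computation in the answer:)

$$f(n+2) - f(n) = \bigl((n+2)^3 + 5(n+2)\bigr) - (n^3 + 5n) = 6n^2 + 12n + 18,$$

which is even, so !!f(n+2)!! is even whenever !!f(n)!! is, and the strengthened hypothesis carries both parities forward together.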

The key feature that makes this a good answer is where it says:

For !!f(n+2)!! we will use your method. …. Subtracting !!n(n^2+5) = n^3 + 5n!! as you suggested ….

It's important to point out to the student when their idea would have worked. This is important in code reviews too. The object is not to make the junior programmer do it the same way you would have, it's to help them make their own idea work well. I ought to write an article about that.

Is an argument valid if assuming its premises and conclusion leads to no contradiction?

This was one of those questions where OP proposed some logical principle that was totally invalid and asked why it isn't allowed, something about why you can't assume the conclusion and show that it satisfies the required properties.

It's a curious question because there's such a failure of instruction here: OP has not grasped what it means to be a valid deduction, that the logic used in mathematics is the same logic that is used everywhere else, and that mathematical arguments are valid or invalid for the same reasons that those same arguments are valid or invalid when thinking about anything else: the invalid arguments lead you to the wrong conclusions!

Anyway, I don't want to quote my whole answer here, but you should check it out, it's amusing. OP didn't like it though.

Proving or disproving that if !!A^2X=λ^2X!! then !!AX=λX!!

OP did like this one, and so do I, it's hilarious. The question is apparently something about linear transformations and eigenvalues and stuff like that, which I never learned as well as I should have, owing to my undergraduate linear algebra class being very poor. (Ugh, so many characteristic polynomials.)

Someone else posted a linear algebra (dis)proof which was very reasonable and which got several upvotes. But I realized that this is not actually a question about eigenvalues! It is elementary algebra: since !!\lambda^2 = (-\lambda)^2!!, any example with !!AX=\lambda X!! and !!\lambda\ne 0!! also satisfies !!A^2X=(-\lambda)^2X!!, while certainly !!AX\ne -\lambda X!!; so !!-\lambda!! is a counterexample to the claim. OP was pleased with this and accepted my answer instead of the smart one with the upvotes.
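To make that concrete: take !!A!! to be the !!2\times 2!! identity matrix, !!\lambda = -1!!, and !!X!! any nonzero vector. Then !!A^2X = X = \lambda^2 X!!, but !!AX = X \ne -X = \lambda X!!.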

This kind of thing is why my Math SE avatar is a potato.

Can we treat two equal sets as being distinct mathematical objects?

There was an answer to this that I felt was subtly wrong. It said:

The axiom that answers your question is known as Extensionality: Sets are uniquely determined by their elements.

and then started talking about ZFC, which seems to me to be an irrelevant confusion.

The formal idea of sets comes from the axioms, but the axioms themselves come from a sort of preformal idea of sets. We want to study what happens when we have these things-that-have-elements, and when we ignore any other properties that they might have. The axiom is just a more formal statement of that. Do sets have properties, such as identities, other than their elements? It's tempting to say “no” as this other person did. But I think the more correct answer is “it doesn't matter”.

Think of a monastery where, to enter, you must renounce all your worldly possessions. Must you legally divest yourself of the possessions in order to enter the monastery? Will the monks refuse you entry if, in the view of the outside world, you still own a Lamborghini? No, they won't, because they don't care. The renunciation is what counts. If you are a monk and you ask another monk whether you still own the Lamborghini, they will just be puzzled. You have renounced your possessions, so why are you asking this? Monks are not concerned with Lamborghinis.

Set theory is a monastery where the one requirement for entry is that you must renounce your interest in properties of sets other than those that come from their elements. Whether a set owns a Lamborghini is of no consequence to set theorists.


[Other articles in category /math/se] permanent link

Sun, 30 Apr 2023

Why use cycle notation for permutations?

Katara is taking a discrete mathematics course this semester, and was asked to prove that if !!D_n!! is the number of derangements of !!n!! objects, then !!D!! satisfies the recurrence $$D_n = (n-1)D_{n-1} + (n-1)D_{n-2}\tag{$\star$}$$ for !!n\gt 2!!. (A derangement is a permutation with no fixed points.) This is OEIS A000166. (We also have !!D_0=1, D_1=0!!).

She was struggling with this task, and I learned why: She was still thinking of permutations like this:

$$\begin{align} 1 & \to 2 \\ 2 & \to 4 \\ 3 & \to 5 \\ 4 & \to 1 \\ 5 & \to 3 \end{align} $$

when she should have been thinking of them like this:

$$(1\; 2\; 4)(3\; 5).$$

“Didn't they teach you about cycle notation?” I asked. Apparently not. (I'm not sure it's a very good class, but it's also possible that this was discussed in one of the lectures that she did not attend.)
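(For the programmers: converting from the function form to the cycle form is a few lines of work. A throwaway Python sketch, representing a permutation as a dict from each element to its image:)

    def cycles(perm):
        """Convert a permutation {i: perm(i)} to a list of cycles."""
        seen, result = set(), []
        for start in perm:
            if start in seen:
                continue
            cycle, i = [], start
            while i not in seen:        # follow i until the cycle closes
                seen.add(i)
                cycle.append(i)
                i = perm[i]
            result.append(tuple(cycle))
        return result

    print(cycles({1: 2, 2: 4, 3: 5, 4: 1, 5: 3}))   # [(1, 2, 4), (3, 5)]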

Anyway the proof is not difficult, if you think of the cycle structure of the permutations. Let !!p!! be a derangement of size !!n>1!! and consider the location of !!n!! in its cycle structure. The element !!n!! is in some cycle, say one with length !!c!!. !!p!! has no fixed points, so !!c!! is at least !!2!!. If !!c!! is at least !!3!!, we can drop !!n!! from its cycle and still have a derangement of size !!n-1!!:

$$(n\; \ldots)c_2c_3\cdots \Rightarrow (\ldots)c_2c_3\cdots.\tag1$$

If !!n!! is in a cycle of length !!c=2!! and we drop it, then we no longer have a derangement, because we have a permutation which has the fixed point !!p(n)!!. But dropping that fixed point as well does give us another derangement, of size !!n-2!!:

$$(n\; p(n))c_2c_3\cdots \Rightarrow \underbrace{\color{darkred}{(p(n))}}_{\text{fixed point!}} c_2c_3\cdots \Rightarrow c_2c_3\cdots.\tag2$$

In the former case !!(1)!! there are !!n-1!! positions at which we can insert !!n!! back into the smaller derangement, and each one produces a different permutation, so there are !!(n-1)D_{n-1}!! derangements we can obtain in this way. In the latter case !!(2)!! there are !!n-1!! choices for !!p(n)!!, the element that will share the !!2!!-cycle with !!n!!, so there are !!(n-1)D_{n-2}!! derangements we can obtain in this way. The total is the sum of these, given by !!(\star)!!.
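(If you'd rather check the recurrence than trust the argument, a brute-force Python sketch is quick:)

    from itertools import permutations

    def D(n):
        """Count the derangements of n objects directly."""
        return sum(all(p[i] != i for i in range(n))
                   for p in permutations(range(n)))

    vals = [D(n) for n in range(9)]
    print(vals)   # [1, 0, 1, 2, 9, 44, 265, 1854, 14833]
    assert all(vals[n] == (n - 1) * (vals[n - 1] + vals[n - 2])
               for n in range(2, 9))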

But if you don't know about permutation cycle structure, you are missing the basic tool that lets you find this argument. To find it, you will probably end up reinventing the idea of cycle structure.

And if you don't know about the representation of permutations in cycle notation, you will have trouble thinking about the cycle structure.

Katara asked me about this in the parking lot outside her dorm just as I was about to drop her off at night, so I was not able to intelligently explain why the cycle structure is nearly always the right way to think about permutations. “It's just better,” I asserted baldly and unconvincingly. After I got home, I tried to think: why is the cycle notation better? It is better, but this is an empirical observation. What is the theoretical cause of its being better?

Most obviously, it's more compact, but this is not the main reason.

A permutation is a bijective function on some carrier set of labels: $$p: S\to S$$ so the only things you need to know about it are: what is !!S!!, and what is !!p!!? In some applications, !!S!! matters. For example, we might want to think of permutations of the English alphabet that leave the set !!\{A, E, I , O, U\}!! fixed.

But typically !!S!! is unimportant, as it is in the derangements problem. In counting derangements, we don't care whether the things being deranged are letters or numbers or hats or fruits or whatever; the identity of !!S!! is completely irrelevant and we took it to be !!\{1, 2, 3, \ldots, n\}!! only because the positive integers are a plentiful supply of items with short names.

And:

If you aren't interested in the labels, the cycle structure of a permutation is the only remaining property it has, because two permutations with the same cycle structure are identical under some change of labels.

Or in jargon, if you are thinking of the permutations as elements of the abstract symmetric group !!S_n!!, relabeling is a conjugation of !!S_n!!, and the cycle classes are exactly the conjugacy classes.

So the cycle structure is exactly the structure of a permutation that remains if you ignore the actual labels, and the cycle notation brings that structure to the foreground.


[Other articles in category /math] permanent link

Fri, 28 Apr 2023

Show how the student could have solved it

A few days ago I offered these maxims about pedagogy:

  1. It's not enough to show the student the answer; you should try to show them how to find the answer.

  2. It's not enough to show the student how you can find the answer; you should try to show them how they could have found the answer.

A nice illustration popped up on Math SE this morning. OP asks:

If all eigenvalues of a matrix are 0 or 1, does that imply the matrix is idempotent?

Shortly afterward a comment from PrincessEev said, opaquely:

The matrices $$\left[\begin{matrix} 0&x\\ 0&0 \end{matrix}\right]$$ are obvious counterexamples for !!x\ne 0!!.

Uh, they are? It wasn't obvious to me. I mean, I think I see why the eigenvalues must be zero, without doing the calculation. But where did this example come from?

But then later they redeemed themselves by adding another comment:

it was just my first instinct to try a few examples with what felt like a bold claim: matrices with enough well-placed zeroes tend to vanish when raising them to powers

I understand now! Yeah, I could have thought of that, but didn't. So the second comment actually taught me something, not what the answer is, which is not very useful, because who cares?, but how to find the answer, which contains knowledge that might be generally useful.
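For the record, the verification is one line: $$\left[\begin{matrix} 0&x\\ 0&0 \end{matrix}\right]^2 = \left[\begin{matrix} 0&0\\ 0&0 \end{matrix}\right],$$ so the matrix is nilpotent, both of its eigenvalues are !!0!!, and yet it is not idempotent unless !!x=0!!.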


[Other articles in category /math/se] permanent link

Thu, 20 Apr 2023

Math SE report 2023-04: Simplest-possible examples, pointy regions, and nearly-orthogonal vectors

Polyhedra has more corners than facets

This one was a bit puzzling because it asked:

Is it true that [a polyhedron] has always more/as many corners than facets? I haven't found a counterexample…

(By ‘facets’ I assume OP meant ‘faces’.)

This is puzzling because there are so many counterexamples. For example, every dipyramid has this property. A dipyramid is what you get if you take two pyramids and glue their bases together. Maybe you want to say this is obscure, but an octahedron is a dipyramid and one might expect anyone asking about polyhedra to know about octahedra. I wonder what examples this person did consider?

In fact, for any polyhedron with !!F!! faces and !!V!! vertices, there is a corresponding “dual” polyhedron with !!V!! faces and !!F!! vertices, so for almost any polyhedron you can think of, if that polyhedron is not already a counterexample, then its dual is. A cube is not, but its dual is — this is the octahedron again.

Finding sets !!A!!, !!B!!, and !!C!! such that !!A\in B!!, !!B \in C!!, but !!A \notin C!!

I thought this one was pedagogically interesting. OP made a mistake in their approach that is quite common:

The problem tells us !!B = \{A,b_1,b_2,\ldots\}!! and !!C = \{B,c_1,c_2\}!!.

The mistake OP made here was to start by trying to find the most general possible example. Yes, if !!A\in B!! then in general !!B = \{A,b_1,b_2,\ldots\}!!. This might be a more helpful observation if the question had asked for some universal property of all such !!A, B, C!!. Then you could add constraints to the general case and see if you had anything left at the end. But this problem only asked for one example. So instead of considering the most general case of !!A\in B!!, and therefore the most complex form of the idea, the first thing one should try is the simplest possible example of !!A\in B!!, which is just !!B = \{A\}.!!

Then similarly one should try !!C = \{B\}!!. Obviously the required properties !!A\in B!! and !!B\in C!! are satisfied. What about !!A\notin C!!?

Since the only element of !!C!! is !!B!!, the answer is easy: !!A\notin C!! unless !!A=B = \{A\}!!.

So now we just have to avoid !!A=\{A\}!!. Again let's try the simplest thing that could possibly work: !!A=\emptyset!!. And then we win, because indeed !!\emptyset\ne \{\emptyset\}!!, since the left side is empty and the right side isn't.
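Assembled, the whole example is $$A = \emptyset, \qquad B = \{\emptyset\}, \qquad C = \{\{\emptyset\}\}.$$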

Did we get lucky here? No! The axiom of foundation guarantees that literally any !!A!! will work. But you don't have to know that to find an example; just try something and it will work.

This is Lower Mathematics in action. The abstract approach is useful if you are trying to prove some theorem, but if all you want is to find an example, the abstract approach is overkill.

Volume obtained by rotating a region around two different lines

OP considered the region bounded by the curves !!y=x^2!! and !!y=\sqrt x!! for !!0\le x, y\le 1!!, and then the solids of revolution obtained by revolving this region around the lines !!y=0!! and !!y=1!!. They said:

I expected the volume obtained by rotating about !!y=1!! to be identical with the volume obtained by rotating about the !!x!!-axis. To my surprise, calculation shows different results.

Many people would have posted an answer to this that simply did the calculation, sometimes one with no words in it. But I think this misses the point of the question, which is about OP's intuition. Why were they wrong?

Something that has been on my mind lately is an elaboration of a certain principle of pedagogy. Everyone knows the principle:

It's not enough to show the student the answer; you should try to show them how to find the answer.

Not everyone follows this, but at least most people are aware of it.

But my decades of experience watching people teach math have led me to believe that this is insufficient. There's a higher-order version of this principle that is also important:

It's not enough to show the student how you can find the answer; you should try to show them how they could have found the answer.

And by ‘they’ I don't mean ‘a student’; I mean the student, the specific one sitting in front of you, who knows what they know and can do what they can do.

This is hard.

(I think the !!A\in B, B\in C, A\notin C!! thing above is another example of this. Three people answered that question by pulling solutions out of thin air, but how much does that help OP solve the next problem of this type?)

Anyway, I digress. The region in question looks like this:

The region is bounded by two parabolic arcs. It resembles a knife blade, having a mirror symmetry along the diagonal x=y, but its upper right corner is much sharper and pointier than the lower left corner.

I observed that the upper end is much narrower than the lower end. You could count boxes to verify this, but I think it's obvious, and said:

Which end would you rather be poked with?

Then I pointed out that if you revolve the region around !!y=1!!, the thick end travels a long way and sweeps out a large volume, whereas if you revolve it around !!y=0!! the thick end is closer to the axis of revolution, so does not sweep out so much volume. So just from looking at the picture, one might guess that the volume will be larger when revolved around !!y=1!!, which is what OP originally reported.

I did not actually do the calculation, so it's conceivable that I was utterly wrong, but I suspect not.
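(For the reader who would rather check than guess, here's a quick sympy sketch of the washer-method computation. It reports !!\frac{3\pi}{10}!! for revolution about !!y=0!! and !!\frac{11\pi}{30}!! about !!y=1!!, so the intuition holds up.)

    from sympy import symbols, sqrt, integrate, pi

    x = symbols('x')
    # region between y = x**2 (below) and y = sqrt(x) (above), 0 <= x <= 1
    about_y0 = pi * integrate(sqrt(x)**2 - (x**2)**2, (x, 0, 1))
    about_y1 = pi * integrate((1 - x**2)**2 - (1 - sqrt(x))**2, (x, 0, 1))
    print(about_y0, about_y1)   # 3*pi/10  11*pi/30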

Definition of Graph Isomorphism

This was not that interesting, but it is a demonstration of a couple of things:

  1. Finding the simplest possible example

    Because it's usually easier for someone to understand a simple example well enough to generalize it than it is for them to understand an abstract, general construction well enough to specialize it to an example.

  2. Math SE will often ignore subtle answers to challenging questions, while giving many upvotes to trivialities

    This post and the octahedron one were my most upvoted posts of the month and also the most trivial. This is why one should ignore upvotes: they are not correlated with anything of real importance.

Help understanding proof: classifying groups of order 21

I may have kinda blown this one. My answer was way too long. OP was asking about specific steps in some group theory proof, ultimately related to the formula $$(aba^{-1})^n = ab^na^{-1}.$$

Algebraically this is quite easy to show, and I did. But it also has deep and essential intuitive content, which I summarized like this:

It says that if you are going to repeat several times the operation of turning your head, then doing !!b!!, then turning your head back, you can skip some of the head-turning and just turn your head once, do operation !!b!! repeatedly, and turn your head back at the end.

This !!aba^{-1}!! thing, called “conjugation”, is incredibly important in group theory, and I have often felt that my group theory course did not make this clear. As I recall, the course observed that the mapping !!\varphi_a : x\mapsto axa^{-1}!! is always a group automorphism, and went on from there. Which indeed it is, but so what? Why do we care about that particular transformation, anyway?

But the intuitive content of the statement about the automorphism is that the symmetries of an object don't change when you turn your head. That's why it's important!

When are two rotations of a sphere conjugate? Exactly when they rotate by the same amount around their respective axes. (“Turn your head!”)

Why are two permutations conjugate if and only if they have the same cycle structure? Because this is exactly when they are equivalent under renaming of the objects being permuted; renaming the objects is analogous to "turning your head" for this kind of symmetry.

So whenever the topic of conjugation comes up, I am tempted to launch into a long explanation of the significance of conjugation and its intuitive understanding. Which might have been helpful in this case, but it might have been a completely unnecessary distraction, and I should probably have resisted.

What definition of "nearly orthogonal" would result in "In a 10,000-dimensional space there are millions of nearly orthogonal vectors"?

This was one of those cases where OP asked a very slightly under-baked question and several people jumped in to say it made no mathematical sense at all. (“It's a figure of speech” says one comment. No, it isn't. “I doubt that reference is to a precisely defined concept”, says another. There are more things in heaven and earth, Horatio… “I call bullshit, or imprecise speech,” says a third. Heavens, such foul language!)

I have complained about this at length in the past: I think Math SE persons are too quick to jump from “I have not heard of that” to “it does not exist” and then to “it cannot exist”, or from “I don't quite understand that” to “nobody can understand that” and then to “that is incomprehensible nonsense”.

Two vectors !!u!! and !!v!! are said to be orthogonal if their inner product !!\langle u,v\rangle!! is exactly zero. So if you don't know what “nearly orthogonal” means, you might guess that it means that the inner product is nearly zero: $$\left\lvert \langle u,v\rangle \right\rvert < \epsilon$$ for some small specified !!\epsilon!!. The angle between !!u!! and !!v!! would then be approximately between !!\frac\pi2 - \epsilon!! and !!\frac\pi2 + \epsilon!!, which is nearly a right angle; hence “nearly orthogonal”. This is not exactly subtle thinking.
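Numerically this is easy to poke at. In the sketch below (the !!\epsilon!!, the sample size, and the choice of random unit vectors are mine, not OP's), a thousand random unit vectors in !!\Bbb R^{10000}!! turn out to be pairwise nearly orthogonal with !!\epsilon\approx 0.05!!, since their inner products have standard deviation about !!\frac1{\sqrt{10000}} = 0.01!!:

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 10_000, 1_000
    V = rng.standard_normal((m, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # m random unit vectors
    dots = (V @ V.T)[np.triu_indices(m, k=1)]       # all pairwise inner products
    print(np.abs(dots).max())                       # around 0.05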

Another user helpfully linked to a Math Overflow post that discussed essentially the same question, with the title “Almost orthogonal vectors”. So bullshit it isn't, and the question there was sufficiently clear that six people thought it was worth answering, including some guy named Tim Gowers.

I didn't know the answer (although I do now!), but if you don't know the answer, you can still sometimes be useful by writing up the answers of other people who are smarter than yourself. That is called “scholarship”.

In writing it up I almost made a horrible mistake. At one point my draft said something like:

The top answer there gives a bound and claims it is implied by the Johnson-Lindenstrauss lemma. I think the bound might not be quite correct, because the Johnson-Lindenstrauss lemma seems to apply to a somewhat different situation, and…

Fortunately, I realized before posting that the person who had written that answer was in fact William B. Johnson, after whom the Johnson-Lindenstrauss lemma was named, and there was quite a good chance that he did not misapply his own theorem. Heh. Yikes.

It's funny now, but if I had actually made that mistake I would have been mortified.

Also

I have an article with pretty diagrams about how to expand a Taylor series around !!x=\pi!!, with a nice Desmos demonstration that you might enjoy playing with. Press the little ▶️ button in the box that defines the parameter !!a!! and watch the cubic polynomials whip back and forth.

[ Addendum 20230421: Eric Roode says that the animation reminds him of “the robot from Lost in Space sliding back and forth, waving its arms wildly, and saying ‘Danger, Will Robinson! Danger!’”. Same. ]

I have already written a separate article about this post that asks how to compute the integral $$\int_0^{2000} e^{x/2-\lfloor x/2\rfloor}\; dx$$

which you might like to read if you didn't already.


[Other articles in category /math/se] permanent link

Sat, 15 Apr 2023

I liked this simple calculus exercise

A recent Math SE question asked for help computing the value of $$\int_0^{2000} e^{x/2-\left\lfloor x/2\right\rfloor}\; dx.\tag{$\star$}$$

(!!\left\lfloor \frac x2 \right\rfloor!! means !!\frac x2!! rounded down to the nearest integer.)

Often when I see someone's homework problems I exclaim “what blockhead TA assigned this?” But I think this is a really good exercise. Here's why.

In a calculus class, some people will have learned to integrate common functions by rote manipulation of the expressions. They have learned a set of rules for converting $$\int_a^b x^k\; dx$$ to $$\left.\frac{x^{k+1}}{k+1}\right\rvert_a^b$$ and then to $$\frac{b^{k+1}}{k+1}- \frac{a^{k+1}}{k+1}$$ and such like, and they grind through the algebra. If this is all someone knows how to do, they are going to have a lot of trouble with !!(\star)!!. They might say “But nobody ever taught us how to integrate functions with !!\left\lfloor \frac x2\right\rfloor!!”.

A calculus tyro trying to deal with this analytically might also try rewriting $$e^{x/2-\left\lfloor x/2\right\rfloor}$$ as $$\frac{e^{x/2}}{e^{\left\lfloor x/2\right\rfloor}}$$ but that makes the problem harder, not easier.

To solve this, the student has to actually understand what the integral is computing, and if they don't they will have to learn something about it. The integral is computing the area under a curve. If you graph the function $$\frac x2-\left\lfloor \frac x2\right\rfloor$$

you find that it looks like this:

The part between 0 and 2 is a straight diagonal line from ⟨0,0⟩ to ⟨2,1⟩, and this line repeats over and over on every interval of length 2.

If the interval of integration in !!(\star)!! were only !!(0,2)!! instead of !!(0, 2000)!!, the problem would be very easy because, on this interval, the complicated exponent is identically equal to !!\frac x2!!:

$$\begin{align} \int_0^2 e^{x/2-\left\lfloor x/2\right\rfloor}\; dx & = \int_0^2 e^{x/2}\; dx \\ & = \left. 2e^{x/2} \right\rvert_0^2 \\ & = 2e-2 \end{align} $$

Since the function is completely periodic, integrating over any of the !!1000!! intervals of length !!2!! will produce the same value, so the final answer is simply $$1000\cdot (2e-2).$$
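(A numerical sanity check, exploiting the same periodicity, in case you distrust the symbol-pushing as much as I do:)

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.exp(x/2 - np.floor(x/2))
    one_period, _ = quad(f, 0, 2)    # the integrand is smooth on one period
    print(1000 * one_period)         # 3436.5636...
    print(1000 * (2*np.e - 2))       # the closed form, same value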

But just pushing around the symbols won't get you there; to solve this problem you have to actually know something about calculus.

The student who overcomes this problem might learn the following useful techniques:

  • If some expression looks complicated, try graphing it and see if you get any insight into how it behaves.

  • Some complicated functions can be understood by breaking them into simple parts and dealing with the parts separately.

  • Piecewise-continuous functions can be integrated by breaking them into continuous intervals and integrating the intervals separately.

  • You can exploit symmetry to reduce the amount of calculation required.

None of this is deep stuff, but it's all valuable technique. Also they might make the valuable observation that not every problem should be solved by pushing around the symbols.


[Other articles in category /math/se] permanent link

Mon, 13 Mar 2023

This ONE WEIRD TRICK for primality testing… doesn't work

This morning I was driving Lorrie to the train station early and trying to factor the four-digit numbers on the license plates as I often do. Along the way I ran into the partial factor 389. Is this prime?

The largest prime less than !!\sqrt{389}!! is !!19!!, so I thought I would have to test primes up to !!19!!. Dividing by !!19!! can be troublesome. But I had a brain wave: !!389 = 289+100!! is obviously a sum of two squares. And !!19!! is a !!4k+3!! prime, not a !!4k+1!! prime. So if !!389!! were divisible by !!19!! it would have to be divisible by !!19^2!!, which it obviously isn't. Tadaa, I don't have to use trial division to check if it's a multiple of !!19!!.

Well, that was not actually useful, since the trial division by !!19!! is trivial: !!389 = 380 + 9!!.

Maybe the trick could be useful in other cases, but it's not very likely, because I don't usually notice that a number is a sum of two squares.

[ Addendum 20230323: To my surprise, the same trick came in handy again. I wanted to factor !!449 = 400 + 49!!, and I could skip checking it for divisibility by !!19!!. ]


[Other articles in category /math] permanent link

Tue, 28 Feb 2023

Uniform descriptions of subspaces of the n-cube

This must be well-known, but I don't remember seeing it before. Consider a !!3!!-cube. It has !!8!! vertices, which we can name !!000, 001, 010, \ldots, 111!! in the obvious and natural way:

Two vertices share an edge exactly when they agree in two of the three components. For example, !!001!! and !!011!! have a common edge. We can call this common edge !!0X1!!, where the !!X!! means “don't care”.

The other edges can be named similarly:

$$ \begin{array}{} 00X & 0X0 & X00 \\ 01X & 0X1 & X01 \\ 10X & 1X0 & X10 \\ 11X & 1X1 & X11 \end{array} $$

A vertex !!abc!! is contained in three of the !!12!! edges, namely !!Xbc, aXc,!! and !!abX!!. For example, here's vertex !!001!!, which is incident to edges !!X01, 0X1, !! and !!00X!!.

Each edge contains two vertices, obtained by replacing its single !!X!! with either !!0!! or !!1!!. For example, in the picture above, edge !!00X!! is incident to vertices !!000!! and !!001!!.

We can label the faces similarly. Each face has a label with two !!X!!es:

$$ \begin{array}{} 0XX & 1XX \\ X0X & X1X \\ XX0 & XX1 \\ \end{array} $$

The front face in the diagram contains vertices !!000, 001, 100, !! and !!101!! and is labeled with !!X0X!!:

  • Each vertex !!abc!! is contained in three of the !!6!! faces, namely !!XXc, XbX,!! and !!aXX!!. For example, vertex !!100!! is in face !!X0X!!, as shown above but also face !!1XX!! (the right side face) and !!XX0!! (the top face).
  • A face contains four vertices, obtained by replacing its two !!X!!es with zeroes and ones in four different ways. The illustration above shows how this works with face !!X0X!!.
  • An edge is contained in two faces, obtained by replacing one of the edge's two non-!!X!! components with an !!X!!. For example the edge !!X00!! lies in faces !!X0X!! (shown) and !!XX0!! (the top face).
  • A face contains four edges, obtained by first selecting one of its two !!X!!es and then by replacing that !!X!! with either a zero or a one.

The entire cube itself can be labeled with !!XXX!!. Here's a cube with all the vertices and edges labeled. (I left out the face and body labelings because the picture was already very cluttered.)

But here's the frontmost face of that cube, detached and displayed head-on:

Every one of the nine labels has a !!0!! in the middle component. The back face is labeled exactly the same, but all the middle zeroes are changed to ones.

Here's the right-side face; every label has a !!1!! in the first component:

If any of those faces were an independent square, not part of a cube, we would just drop the redundant components, dropping the leading !!1!! from the subspaces of the !!1XX!! face, or the middle !!0!! from the subspaces of the !!X0X!! face. The result is the same in any case:

What's with those !!X!!es

In a 3-cube, every edge is parallel to one of the three coordinate axes. There are four edges parallel to each axis, that is four pointing in each of three directions. The edges whose labels have !!X!! in the first component are the ones that are parallel to the !!x!!-axis. Labels with an !!X!! in the second or third component are those that are parallel to the !!y!!- or !!z!!-axes, respectively.

Faces have two !!X!!es because they are parallel to two of the three coordinate axes. The faces !!X0X!! and !!X1X!! are parallel to both the !!x!!- and !!z!!-axes.

Vertices have no !!X!!es because they are points, and don't have a direction.

Higher dimensions

None of this would be very interesting if it didn't generalize flawlessly to !!n!! dimensions.

  • A subspace of an !!n!!-cube (such as an edge, a face, etc.) is described by an !!n!!-tuple of !!\{0, 1, X\}!!.

  • A subspace's dimension is equal to the number of !!X!!es in its label. (The number of zeroes and ones is often called the codimension.)

  • Subspace !!A!! is contained in subspace !!B!! if every component of !!B!!'s label is either an !!X!! or matches the corresponding component of !!A!!'s label.

  • Two labels designate parallel subspaces if the !!X!!es of one are a subset of the !!X!!es of the other. (‘Parallel’ here has exactly its ordinary geometric meaning.)
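These rules are mechanical enough to code up directly. A little Python sketch, with my own convention that a label is a string over the alphabet !!\{0, 1, X\}!!:

    def dim(label):
        """The dimension of a subspace is the number of X'es in its label."""
        return label.count('X')

    def contains(b, a):
        """Does subspace b contain subspace a?  (Labels of equal length.)"""
        return all(y == 'X' or x == y for x, y in zip(a, b))

    def parallel(a, b):
        """Parallel means the X-positions of one are a subset of the other's."""
        xa = {i for i, c in enumerate(a) if c == 'X'}
        xb = {i for i, c in enumerate(b) if c == 'X'}
        return xa <= xb or xb <= xa

    assert dim('X0X') == 2
    assert contains('X0X', '00X')    # face X0X contains edge 00X
    assert parallel('X0X', 'X1X')    # opposite faces are parallel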

Subspace intersections

Two subspaces intersect if their labels agree in all the components where neither one has an !!X!!.

If they do intersect, the label of the intersection can be obtained by combining the corresponding letters in their labels with the following operator:

$$\begin{array}{c|ccc} & 0 & 1 & X \\ \hline 0 & 0 & – & 0 \\ 1 & – & 1 & 1 \\ X & 0 & 1 & X \end{array}$$

where !!–!! means that the labels are incompatible and the two subspaces don't intersect at all.

For example, in a !!3!!-cube, edges !!1X0!! and !!X00!! have the common vertex !!100!!. Faces !!0XX!! and !!X0X!! share the common edge !!00X!!.

Face !!0XX!! contains edge !!01X!!, because the intersection is all of !!01X!!. But face !!0XX!! intersects edge !!X11!! without containing it; the intersection is the vertex !!011!!.

Face !!0XX!! and vertex !!101!! don't intersect, because the first components don't match.
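The intersection operator is just as mechanical. A sketch, using the same string-label convention as before:

    def meet(a, b):
        """The label of the intersection of two subspaces, or None if disjoint."""
        out = []
        for x, y in zip(a, b):
            if x == 'X':
                out.append(y)
            elif y == 'X' or x == y:
                out.append(x)
            else:
                return None          # 0 meets 1: incompatible
        return ''.join(out)

    assert meet('1X0', 'X00') == '100'   # two edges meeting in a vertex
    assert meet('0XX', 'X0X') == '00X'   # two faces sharing an edge
    assert meet('0XX', 'X11') == '011'   # face meets edge in a vertex only
    assert meet('0XX', '101') is None    # face and vertex don't intersect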

Counting the labels tells us that in an !!n!!-cube, every !!k!!-dimensional subspace contains !!2^i \binom ki!! subspaces of dimension !!k-i!!, and belongs to !!\binom{n-k}{j}!! super-subspaces of dimension !!k+j!!. For example, in the !!n=3!!-cube, each edge (dimension !!k=1!!) contains !!2^1\binom 11 = 2!! vertices and belongs to !!\binom{3-1}{1} = 2!! faces; each face (dimension !!k=2!!) contains !!2^1\binom 21 = 4!! edges and belongs to !!\binom{3-2}{1} =1 !! cube.

For the !!3!!-cube this is easy to visualize. Where I find it useful is in thinking about the higher-dimensional cubes. This table of the subspaces of a !!4!!-cube shows how many subspaces of each type are included in each subspace of a higher dimension. For example, the !!3!! in the !!E!! row and !!C!! column says that each edge inhabits three of the cubical cells.

$$ \begin{array}{c|ccccc} & V & E & F & C & T \\ \hline V & 1 & 4 & 6 & 4 & 1 \\ E & & 1 & 3 & 3 & 1 \\ F & & & 1 & 2 & 1 \\ C & & & & 1 & 1 \\ T & & & & & 1 \end{array} $$

The other half of the table shows how many edges inhabit each cubical cell of a !!4!!-cube: twelve, because the cubical cells of a !!4!!-cube are just ordinary cubes, each with !!12!! edges.

$$ \begin{array}{c|ccccc} & V & E & F & C & T \\ \hline V & 1 & & & & \\ E & 2 & 1 & & & \\ F & 4 & 4 & 1 & & \\ C & 8 & 12& 6 & 1 & \\ T & 16& 32& 24& 8 & 1 \end{array} $$

In a !!4!!-cube, the main polytope contains a total of !!8!! cubical cells.

More properties

Two subspaces of the same dimension are opposite if their labels have !!X!!es in the same positions, but with all the zeroes and ones switched. For example, in a !!4!!-cube, the face opposite to !!0XX1!! is !!1XX0!!, and the vertex opposite to !!1011!! is !!0100!!.

In a previous article, we needed to see when two vertices of the !!4!!-cube shared a face. The pair !!0000!! and !!1100!! is prototypical here: the two vertices have two matching components and two mismatching, and the shared face replaces the mismatches with !!X!!es: !!XX00!!. How many vertices share a face with some vertex !!v!!? We can pick two of the components of !!v!! to flip, creating two mismatches with !!v!!; there are !!4!! components, so !!\binom42 = 6!! ways to pick, and so !!6!! vertices sharing a face with !!v!!.

We can use this notation to observe a fascinating phenomenon of four-dimensional geometry. In three dimensions, two intersecting polyhedral faces always share an edge. In four dimensions this doesn't always happen. In the !!4!!-cube, the faces !!XX00!! and !!00XX!! intersect, but don't share an edge! Their intersection is the single vertex !!0000!!. This is analogous to the way, in three dimensions, a line and a plane can intersect in a single point, but the Flatlanders can't imagine a plane intersecting a line without containing the whole line.

Finally, the total number of subspaces in an !!n!!-cube is seen to be !!3^n!!, because the subspaces are in correspondence with elements of !!\{0, 1, X\}^n!!. For example, a square has !!3^2 = 9!! subspaces: !!4!! vertices, !!4!! edges, and !!1!! square.

[ Previously related: standard analytic polyhedra ]


[Other articles in category /math] permanent link

More about the seventh root of a 14-digit number

I recently explained how to quickly figure out the seventh root of the number !!19203908986159!! without a calculator, or even without paper if you happen to know a few things.

The key insight is that the answer has only two digits. To get the tens digit, I just estimated the size. But Roger Crew pointed out that there is another way. Suppose the number we want to find, !!n!!, is written as !!10p+q!!. We already know that !!q=9!!, so write this as !!n = 10(p+1)-1!! as in the previous article. Then expanding with the binomial theorem as before:

$$ \begin{align} n^7 & = \sum_{k=0}^7 \binom 7k\; (10(p+1))^{7-k}\; (-1)^k \\ & = (10(p+1))^ 7 + \ldots + \binom71 (10(p+1)) - 1 \\ \end{align} $$

All the terms except the last two are multiples of 100, because they are divisible by !!(10(p+1))^i!! for !!i\ge 2!!. So if we consider this equation mod-!!100!!, those terms all vanish, leaving:

$$ \begin{align} 19203908986159 & \equiv \binom71 (10(p+1)) - 1 & \pmod{100} \\ 59 & \equiv 70(p+1) - 1 & \pmod{100} \\ 90 & \equiv 70p & \pmod{100} \\ 9 & \equiv 7p & \pmod{10} \\ \end{align} $$

and the (only, because !!\gcd(7, 10) = 1!!) solution to this has !!p=7!! since !!7\cdot 7=49!!.
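(A two-line check, if you have a computer handy after all:)

    p = next(p for p in range(10) if 7 * p % 10 == 9)
    print(p, (10 * p + 9) ** 7)   # 7 19203908986159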

This does seem cleaner somehow, and my original way seems to depend on a lucky coincidence between the original number being close to !!2\cdot 10^{14}!! and my being able to estimate !!8^7 = 2^{21} \approx 2000000!! because !!2^{10}!! is luckily close to !!1000!!. On the other hand, I did it the way I did it, so in some sense it was good enough. As longtime Gentle Readers know already, I am a mathematical pig-slaughterer.


[Other articles in category /math] permanent link

Mon, 20 Feb 2023

Construing cube faces as pairs of something or other

(Previously)

A couple of days ago I discussed a Math Stack Exchange question where OP, observing that a square has four sides, a cube has six faces, and !!6=\binom 42!!, asked if there was some way to understand this as a real structural property and not a coincidence. They did not put it that way, but I think that is partly what they were after:

I was thinking maybe there exists some bijective map from any 2 given edges of a square to faces of a cube but I'm not really getting anywhere.

I provided a different approach, and OP declared themselves satisfied, but I kept thinking about it. My approach did provide a combinatorial description of the cube that related its various parts to the square's corresponding parts. But it didn't produce the requested map from pairs of objects (the !!\binom 42!! part) to the six faces.

Looking around for something that a cube has four of, I considered the four body diagonals. A cube does have six interior bisecting planes, and they do correspond naturally with the pairs of body diagonals. But there's no natural way to assign each of the six faces a distinct pair of body diagonals; the mapping isn't one-to-one but two-to-two. So that seemed like a dead end.

But then I had a happy thought. A cube has eight vertices, which, if we think of them as points in 3-space, have coordinates !!\langle 0,0,0\rangle, \langle 0,0,1\rangle, \dots, \langle 1,1,1\rangle!!. The vertices fall naturally into two groups of four, according to whether the sum of their coordinates is even or odd.

In this picture the even vertices are solid red, and the odd vertices are hollow. (Or maybe it's the other way around; it doesn't matter.)

The four even vertices, red in the picture above, are a geometrically natural subset to focus on. They form a regular tetrahedron, for example, and no two share an edge.

And, crucially for this question, every face of the cube contains exactly two of the four even vertices, and every pair of even vertices resides in a single common face. This is the !!6=\binom 42!! correspondence we were looking for. Except that the !!4!! here isn't anything related to the square; instead the !!4!! is actually !!\frac122^3!!, half the total number of vertices.

In this diagram I've connected every pair of even vertices with a different-colored line. There are six colored lines, and each one is a diagonal of one of the six faces of the cube.

Six faces, six pairs of even vertices. Perfect!

This works just fine for lower dimensions. In two dimensions there are !!\frac122^2 = 2!! even vertices (namely !!\langle0,0\rangle!! and !!\langle1,1\rangle!!) so exactly one pair of even vertices, and this corresponds to the one “face” of a square. In one dimension there is only one even vertex, so there are no pairs, and no faces either.

But in extending this upward to four dimensions there is a complication. There are !!16!! vertices total, and !!8!! of these are even. There are !!\binom82 = 28!! pairs of even vertices, but only !!24!! faces, so you know something must have gone wrong. But what?

Let's focus on one vertex in particular, say !!\langle 0,0,0,0\rangle!!. This vertex shares a face with only six of the seven other even vertices. But its relationship with the last even vertex, !!\langle 1,1,1,1\rangle!!, is special — that vertex is all the way on the opposite corner, too far from !!\langle 0,0,0,0\rangle!! to share a face with it. Nothing like that happened in the three-dimensional case, where the vertex opposite an even vertex was an odd one.

The faces of the 4-cube correspond to pairs of even vertices, except that there are four pairs of opposite even vertices that are too far apart to share a face, and instead of $$\binom 82$$ faces there are only $$\binom 82 - 4 = 24; $$ that's the !!\binom82!! pairs of even vertices, minus the four pairs that are opposite.

We can fix this. Another way to look at it is that two even vertices are on the same face of an !!n!!-cube if they differ in exactly two coordinates. The even vertices in the 3-cube are:

$$ \langle0,0,0\rangle \\ \langle0,1,1\rangle \\ \langle1,0,1\rangle \\ \langle1,1,0\rangle $$

and notice that any two differ in exactly two of the three positions. In the !!4!!-cube this was usually the case, but then we had some antipodal pairs like !!\langle 0,0,0,0\rangle!! and !!\langle 1,1,1,1\rangle!! that differed in four coordinate positions instead of the required two.

Let !!E!! be an even vertex in an !!n!!-cube. We will change two of its coordinates, to obtain another even vertex, and call the other vertex !!E'!!. The point !!E!! has !!n!! coordinates, and there are !!\binom n2!! ways to choose the two coordinate positions to switch to obtain !!E'!!. We have !!\frac122^n!! choices for !!E!!, then !!\binom n2!! choices of which pair of coordinates to switch to get !!E'!!, and then !!E!! and !!E'!! together determine a single face of the !!n!!-cube. But that double-counts the faces, since the pair !!\{ E, E'\}!! determines the same face as !!\{ E', E\}!!. So to count the faces we need to divide the final answer by !!2!!. Counting by this method tells us that the number of faces in an !!n!!-cube is:

$$\frac122^n\cdot \binom n2 \cdot \frac12$$

where the !!\frac122^n!! counts the number of ways to choose an even vertex !!E!!, the !!\binom n2!! counts the number of ways to choose an even vertex !!E'!! that shares a face with !!E!!, and the extra !!\frac12!! adjusts for the double-counting in the order of !!E!! and !!E'!!. We can simplify this to $$2^{n-2}\binom n2.$$

When !!n=3!! we get $$ 2^{3-2}\binom 32 = 2\cdot 3 = 6 $$

and it is correct for all !!n!!, since !!\binom n2 = 0!! when !!n<2!!:

$$ \begin{array}{r|rrrrr} n& 0&1&2&3&4&5 \\ \text{faces} = 2^{n-2}\binom n2 & 0 & 0 & 1 & 6 & 24 & 80 \\ \end{array} $$
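A brute-force enumeration agrees with the formula. This sketch pairs up even vertices that differ in exactly two coordinates and counts the distinct faces the pairs determine:

    from itertools import combinations, product

    def faces(n):
        evens = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]
        found = set()
        for a, b in combinations(evens, 2):
            diff = [i for i in range(n) if a[i] != b[i]]
            if len(diff) == 2:       # the two vertices share a face
                found.add(tuple('X' if i in diff else a[i] for i in range(n)))
        return found

    print([len(faces(n)) for n in range(6)])   # [0, 0, 1, 6, 24, 80]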

That's not quite the !!\binom{2^{n-1}}2!! or the !!\left({{(n-1)2^{n-2}}\atop 2}\right)!! we were looking for, but it at least does produce a bijective map between faces and pairs of something.


[Other articles in category /math] permanent link

Sun, 19 Feb 2023

Math SE report 2023-02

I had an unusually interesting batch of Math Stack Exchange posts recently.

I think all of my answers to these questions are worth reading in full, and if you like the math posts on my blog, you will like reading these SE posts also. Well, most of them. Maybe.

Summaries follow.

Confusion about equality: mathematical objects versus the symbols that describe them

This one is from last September but I'm really happy with it because it thoroughly addresses a very common misconception about mathematical notation:

Based on my understanding of equality, the statement !!(1+1)+1=1!!, contains no mathematical content beyond !!1=1!!, since the group element !!(1+1)+1!! literally is the group element !!1!!. This bothers me...

My answer begins with

It should; it's wrong.

I'm frequently surprised by how often this fallacy shows up on Math SE, often asserted as an obvious truth by people I thought would know better. So it's worth explaining in detail. I expect I'll be able to refer people to this answer when it comes up in the future.

A brief summary of my answer is:

  • Mathematical expressions denote computations, not values.
  • !!A=B!! means that the two computations eventually produce the same value.
  • This does not, in general, mean that the computations have the same meaning.

Check it out.

What does italic i mean in integral calculator?

The i means the imaginary unit, that !!i^2=-1!! thing. No surprise there. But the reason was a bit interesting. OP had Wolfram α compute some horrendous double integral:

screengrab of Wolfram α double integral formula where the integrand is a big expression with an inverse hyperbolic tangent and square roots and fractions and stuff. The result is approximately -2297 plus an imaginary part on the order of 10^{-13}

and the answer should have been a real number, so what was !!i!! doing in there?

The answer: Floating-point roundoff error. Check out Claude Leibovici's detailed explanation of where the roundoff error comes from; it's much smarter than what I said, which was to mumble something about how Wolfram α “probably … used … some advanced technique …”, which sounds wise but actually I had no idea what it might have done. Claude Leibovici actually has an explanation.

I was going to leave this out but I wanted to remind you all how much I despise floating-point arithmetic.

Can Peano's 9th axiom be expressed using a self-referential set definition?

This is one of those not-quite-baked questions where the initial answers act like it does not make sense (tier 4 or 5). But it does make sense and there is a good answer (tier 1).

The question asks if you can define the set of natural numbers by saying something like

If !!K = \{0\} \cup \{S(k)\mid k \in K\}!!, then !!K=\mathbb{N}!!.

The initial comments said no, it's self-referential. But so is:

$$ n! = \begin{cases} 1, & \text{if $n$ = 0} \\ n\cdot (n-1)!& \text{otherwise} \end{cases} $$

and nobody bats an eyelash at that. (The author of the comments later retracted his rejection.)

In fact it requires only a little bit of elaboration to make sense of such “circular” definitions. To interpret $$X = f(X)$$ you need to do two things. First, think of !!f!! as a mapping, and ask if it has any fixed points, any arguments !!x!! for which !!x=f(x)!! holds. And then, from the set of fixed points, find some unambiguous way to identify one of the fixed points as the one you want. If !!f!! is a mapping from sets to sets, it often happens that the family of fixed points is closed under intersections, and you can select the minimal fixed point, the one that is a subset of all the others.
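Here is a sketch of that procedure for the natural-numbers definition above, computing successive approximations to the least fixed point from below, Kleene-style:

    def step(K):
        return {0} | {k + 1 for k in K}

    K = set()            # start from the bottom element, the empty set
    for _ in range(5):
        K = step(K)
    print(sorted(K))     # [0, 1, 2, 3, 4]; the limit of this process is all of N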

This was all formalized by Dana Scott in the 1960s and it continues to underlie formal treatments of programming language semantics.

My answer has more details.

Is there a scenario for when changing the order of different quantifiers in a nested quantifier retain the original meaning?

This is interesting because some of the replies make the mistake of conflating the meaning of an expression with its value, a problem I discussed above in connection with something else. Two expressions of first-order logic may be logically equivalent, but this does not imply that they have the same meaning.

The question also looks superficially like “What is the difference between !!\forall x\exists y. R(x,y)!! and !!\exists y\forall x. R(x,y)!!?”, which is a FAQ. But it is not that question.

The question concerned expressions of the type !!\forall x.\exists y.P(x,y)!! and was further complicated by the implicit quantifier on the !!P!!. Are we asking if !!\forall x.\exists y.P(x,y)!! always has a different meaning from !!\forall y.\exists x.P(x,y)!! for all !!P!!? Or for a particular !!P!!? There are several similar-sounding questions that could be asked here, and my thinking about the variations is still not clear to me.

English (and standard mathematical terminology) is not well-equipped to discuss this sort of thing intelligibly. Or perhaps I just don't know how to do it. I had to work hard to write something I was satisfied with.

Further details.

Cantor set - is it made of !![a,b]!! intervals or exclusively of singletons?

This question is a bit confused (every set is made of singletons) and I was worried that some know-it-all would jump in and tell this person that really the Cantor set is very simple, when actually the Cantor set is really weird; this is why it is such an important counterexample to so many plausible-seeming conjectures. As Von Neumann supposedly said, in mathematics one doesn't understand things, one just gets used to them. It can be hard for people who have gotten used to the Cantor set to remember what it is like for people who are grappling with it for the first time — or to remember that they themselves may not understand as well as they imagine they do.

When I write an answer to a question like this, in which I need to say “your idea is somewhat confused”, I like to place that remark in close proximity to “… because the situation is a confusing one” so that OP doesn't feel that they are the only person in the world who is puzzled by the Cantor set.

(Sometimes they are the only person in the world who is puzzled by whatever it is, and then it's okay for them to feel that way. I wouldn't lie and say that the situation was a confusing one when I thought it wasn't. If the matter is actually simple it's better to say so, because that can be valuable information. Beginners often overthink simple issues. But the Cantor set is not one of those situations!)

A valuable pedagogical strategy is finding a simpler example. The Cantor set does not have all the same properties as !!\Bbb Q!!. But !!\Bbb Q!! does seem to share with the Cantor set the specific properties that were troubling this person. Does !!\Bbb Q!! contain any intervals? Like the Cantor set, no. Is !!\Bbb Q!! a union of singletons? It's not clear what OP meant by this, but, uh, probably? And if not we can at least find out more about what OP thought they meant, by asking about !!\Bbb Q!!. So it's a good idea to take the focus off of the Cantor set, which is weird, complicated, and unfamiliar, and put it on !!\Bbb Q!!, which is much less weird, somewhat less complicated, and much more familiar. Then with that foundation laid, you are in a better position to climb up to !!\Bbb R\setminus\Bbb Q!! (Similar to !!\Bbb Q!!, but uncountable) and then to the Cantor set itself.

Here I am talking about the Cantor set.

Deriving that a cube has six sides via a square and combinatorics

This is probably my favorite question of the month, because it seems quite half-baked, but there is an excellent answer available. As often happens with half-baked questions, the people who don't know the answer jump to the conclusion that no answer is possible, and say dumb stuff like:

What is the definition of a "cube" in your problem?

This is going the wrong direction. The point is to find the ‘right’ definition of the cube; if OP could define “cube” in the way they wanted, they wouldn't need to ask the question.

A better way to answer this question is to understand that what OP is looking for is actually a suitable definition of “cube”. A more mathematically sophisticated person might have asked:

How can we understand the cube as a combinatorial object, developed from the square?

The word “cube” in this question does not mean some specific mathematical object, but rather the informal intuitive cube. A correct answer will explain how to approach the informal idea of the cube in a mathematical way.

There is a nice (tier 1!) answer in this case: A segment is composed of an interior !!i!! and two endpoints, so we can represent it as !!S=i+2!!. Then !!S^3!! is a cube and its analogous combinatorial description is !!(i+2)^3 =i^3+6i^2+12i+8!!. Ta daa! The answer has a more detailed explanation.
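(If you want to watch the generating function at work, sympy will expand it in any dimension. The !!n=4!! line, !!i^4+8i^3+24i^2+32i+16!!, matches the tesseract's !!16!! vertices, !!32!! edges, !!24!! faces, and !!8!! cells:)

    from sympy import symbols, expand

    i = symbols('i')
    for n in range(1, 5):
        print(n, expand((i + 2)**n))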

There were a couple of followup comments that annoyed me, objecting that what I had presented was not a proof. That was a feature, not a bug. The question hadn't asked for a proof, and I had not tried to provide one.

One of the comments went further, and called it “a nice coincidence”. It's not, it's just generating functions.

I think the “coincidence” person has a profound misunderstanding of how mathematics operates. I wrote several hundred words explaining why but then realized that I had finally been able to articulate an idea I've been groping around to get hold of for decades. This is too precious to me to stick in at the tail end of an anthology article; it deserves its own article. So I am saving the next five paragraphs for next week. Or next year. Whenever I can do it justice.

Algebraic descriptions of the cube.

Thanks for reading.

[ Addendum 20230221: The original question also wanted to identify the faces of a cube with pairs of something there were four of, maybe the sides or the corners of a square. I did find a way to identify faces of a cube with pairs of something interesting. ]


[Other articles in category /math/se] permanent link

Sat, 18 Feb 2023

Finding the seventh root of 19203908986159

This Math SE question seems destined for deletion, so before that happens I'm repatriating my answer here. The question is simply:

Let !!n!! be an integer, and you are given that !!n^7=19203908986159!!. How would you solve for !!n!! without using a calculator?

If you haven't seen this sort of thing before, you may not realize that it's easier than it looks because !!n!! will have only two digits.


The number !!n^7!! is !!14!! digits long so its seventh root is !!\left\lceil \frac {14}7\right\rceil=2!! digits long. Let's say the digits of !!n!! are !!p!! and !!q!! so that the number we seek is !!n=10p+q!!.

The units digit of !!n^7!! is odd so !!q!! is odd. Clearly !!q\ne 1!! and !!q\ne 5!!. Units digits of numbers ending in !!3,7,9!! repeat in patterns:

$$ \begin{array}{rl} 3 & 3, 9, 7, 1, 3, 9, \mathbf 7, \ldots \\ 7 & 7, 9, 3, 1, 7, 9, \mathbf 3, \ldots \\ 9 & 9, 1, 9, 1, 9, 1, \mathbf 9, \ldots \end{array} $$

so that !!(10p+q)^7!! can end in !!9!! only if !!q=9!!. Let's rewrite !!10p+9!! as !!10(p+1)-1!!.
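(A one-line check of that units-digit table:)

    print({q: q**7 % 10 for q in (1, 3, 5, 7, 9)})   # {1: 1, 3: 7, 5: 5, 7: 3, 9: 9}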

Expanding !!(10(p+1)-1)^7!! with the binomial theorem, we get

$$10^7(p+1)^7 - 7\cdot10^6(p+1)^6 + \binom72 10^5(p+1)^5 - \cdots$$

The first term here is by far the largest; it is more than !!\frac{10}7p!! times as large as the second largest. Ignoring all but the first term, we want $$(p+1)^7 \approx 1920390.$$

!!2^7!! is only !!128!!, far too small. !!5^7!! is only !!78125!!, also too small. But !!8^7 = 2^{21} \approx 2000000!! because !!2^{10}\approx 1000!!. This is just right, so !!p+1=8!! and the final answer is $$79.$$

If the !!n!! were a three-digit number, these kinds of tricks wouldn't be sufficient, and I would use more systematic and general methods. But I think it's a nice example of how far you can get with mere tricks and barely any theory.


This post was brought to you by the P.D.Q. Bernoulli Institute of Lower Mathematics. It is dedicated to Gian Pietro Farina. I told him I planned to blog more, and he said he was especially looking forward to my posts about math. And then I posted like eleven articles in a row about Korean dog breed names and goose snot.

[ Addendum 20230219: Roger Crew pointed out that I forgot the binomial coefficients in the binomial theorem expansion. I have corrected this. ]

[ Addendum 20230228: Roger Crew also pointed out a possibly simpler way to find !!p!!. ]


[Other articles in category /math] permanent link

Thu, 08 Sep 2022

Pope Fibonacci

When Albino Luciani was crowned Pope, he chose his papal name by concatenating the names of his two predecessors, John XXIII and Paul VI, to become John Paul. He died shortly after, and was succeeded by Karol Wojtyła who also took the name John Paul. Wojtyła missed a great opportunity to adopt Luciani's strategy. Had he concatenated the names of his predecessors, he would have been Paul John Paul. In this alternate universe his successor, Benedict XVI, would have been John Paul Paul John Paul, and the current pope, Francis, would have been Paul John Paul John Paul Paul John Paul. Each pope would have had a unique name, at the minor cost of having the names increase exponentially in length.
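(The construction is just a Fibonacci word, each new name being the predecessor's predecessor's name followed by the predecessor's. A two-line Python sketch:)

    names = ["John", "Paul"]
    for _ in range(4):
        names.append(names[-2] + " " + names[-1])
    print(names[-1])   # Paul John Paul John Paul Paul John Paul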

(Now I wonder if any dynasty has ever adopted the less impractical strategy of naming their rulers after binary numerals, say:

King Juan
King Juan Cyril
King Juan Juan
King Juan Cyril Cyril
King Juan Cyril Juan
King Juan Juan Cyril
(etc)

Or perhaps !!0, 1, 00, 01, 10, 11, 000, \ldots!!. There are many variations, some actually reasonable.)

I sometimes fantasize that Philadelphia-area Interstate highways are going to do this. The main east-west highway around here is I-76. (Not so-called, as many imagine, in honor of Philadelphia's role in the American revolution of 1776, but simply because it lies south of I-78 and north of I-74 and I-70.) A connecting segment that branches off of I-76 is known as I-676. Driving to one or the other I often see signs that offer both:

Street view of Philadelphia. In the center is a lamp post with two red and blue Interstate highway shields affixed, indicating that entrances to I-76 West and I-676 East are ahead.

I often fantasize that this is a single sign for I-76676, and that this implies an infinite sequence of highways designated I-67676676, I-7667667676676, and so on.

Finally, I should mention the cleverly-named fibonacci salad, which you make by combining the leftovers from yesterday's salad and the previous day's.

[ Addendum: Do you know the name of the swagman in Waltzing Matilda? It's Juan. The song says so: “Juan's a jolly swagman…” ]


[Other articles in category /math] permanent link

Mon, 25 Apr 2022

Unordered pairs and the axiom of binary choice

A few days ago I demanded a way to construct an unordered pair !![x, y]!! with the property that $$[x, y] = [y, x]$$ for all !!x!! and !!y!!, and also formulas !!P_1!! and !!P_2!! that would extract the components again, not necessarily in any particular order (since there isn't one) but so that $$[P_1([x, y]), P_2([x, y])] = [x, y]$$ for all !!x!! and !!y!!.

Several readers pointed out that such formulas surely do not exist, as their existence would prove the axiom of binary choice. This is a sort of baby brother of the infamous Axiom of Choice. The full Axiom of Choice (“AC”) can be stated this way:

Let !!\mathcal I!! be some index set.

Let !!\mathscr F!! be a family of nonempty sets indexed by !!\mathcal I!!; we can think of !!\mathscr F!! as a function that takes an element !!i \in \mathcal I !! and produces a nonempty set !!\mathscr F_i!!.

Then (Axiom of Choice) there is a function !!\mathcal C!! such that, for each !!i\in \mathcal I!!, $$\mathcal C(i) \in \mathscr F_i.$$

(From this it also follows that $$\{ \mathcal C(i) \mid i\in \mathcal I \},$$ the collection of elements selected by !!\mathcal C!!, is itself a set.)

This is all much more subtle than it may appear, and was the subject of a major investigation between 1914 and 1963. The standard axioms of set theory, called ZF, are consistent with the truth of Axiom of Choice, but also consistent with its falsity. Usually we work in models of set theory in which we assume not just ZF but also AC; then the system is called ZFC. Models of ZF where AC does not hold are very weird. (Models where AC does hold are weird also, but much less so.)

One can ask about all sorts of weaker versions of AC. For example, what if !!\mathcal I!! and !!\mathscr F!! are required to be countable? The resulting “axiom of countable choice” is strictly weaker than AC: ZFC obviously includes countable choice as a restricted case, but there are models of ZF that satisfy countable choice but not AC in its fullest generality.

Rather than restricting the size of !!\mathscr F!! itself, we can consider what happens when we restrict the size of its elements. For example, what if !!\mathscr F!! is a collection of finite sets? Then we get the “axiom of finite choice”. This is also known to be independent of ZF.

What if we restrict the elements of !!\mathscr F!! yet further, so that !!\mathscr F!! is a family of sets, each of which has exactly two elements? Then we have the “axiom of binary choice”. Finite choice obviously implies binary choice. But the converse implication does not hold. This is not at all obvious. Binary choice is known to imply the corresponding version of choice in which each member of !!\mathscr F!! has exactly four elements, and is known not to imply the version in which each member has exactly three elements. (Further details in this Math SE post.)

But anyway, readers pointed out that, if there were a first-order formula !!P_1!! with the properties I requested, it would prove binary choice. We could understand each set !!\mathscr F_i!! as an unordered pair !![a_i, b_i]!! and then

$$ \{ \langle i, P_1([a_i, b_i])\rangle \mid i\in \mathcal I \},$$

which is the function !!i\mapsto P_1(\mathscr F_i)!!, would be a choice function for !!\mathscr F!!. But this would constitute a proof of binary choice in ZF, which we know is impossible.

Thanks to Carl Witty, Daniel Wagner, Simon Tatham, and Gareth McCaughan for pointing this out.


[Other articles in category /math] permanent link

Fri, 15 Apr 2022

Unordered pairs à la Wiener

A great deal of attention has been given to the encoding of ordered pairs as sets. I lately discussed the usual Kuratowski definition:

$$\langle a, b \rangle = \{\{a\}, \{a, b\}\}$$

but also the advantages of the earlier Wiener definition:

$$\langle a, b \rangle = \{\{\{a\},\emptyset\}, \{ \{ b\}\}\}$$

One advantage of the Wiener construction is that the Kuratowski pair has an odd degenerate case: if !!a=b!! it is not really a pair at all, it's a singleton. The Wiener pair always has exactly two elements.
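
(A quick way to see the degenerate case in action: here's a sketch with Python frozensets standing in for sets. The function names are mine:)

    empty = frozenset()

    def kuratowski(a, b):
        # {{a}, {a, b}}
        return frozenset({frozenset({a}), frozenset({a, b})})

    def wiener(a, b):
        # {{{a}, 0}, {{b}}}
        return frozenset({frozenset({frozenset({a}), empty}),
                          frozenset({frozenset({b})})})

    x = frozenset({empty})        # some arbitrary set, here {0}
    print(len(kuratowski(x, x)))  # 1: the degenerate "pair" is a singleton
    print(len(wiener(x, x)))      # 2: the Wiener pair really has two elements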

Unordered pairs don't get the same attention because the implementation is simple and obvious. The unordered pair !![a, b]!! can be defined to be !! \{a, b\}!! which has the desired property. The desired property is:

$$[a, b] = [c, d] \\ \text{if and only if} \\ a=c \land b=d\quad \text{or} \quad a=d\land b=c $$

But the implementation as !!\{a, b\}!! suffers from the same drawback as the Kuratowski pair: if !!a=b!!, it's not actually a pair!

So I wonder:

Is there a set !![a, b]!! with the following properties:

  1. !![a, b] = [c, d]!! if and only if !!a=c \land b=d!! or !! a=d\land b=c!!

  2. !![a, b]!! has exactly two elements for all !!a!! and !!b!!

Put that way, a solution is $$ [a, b] = \{ \{ a, b \}, \emptyset \}\tag{$\color{darkred}{\spadesuit}$}$$ but that is very unsatisfying. There must be some further property I want the solution to have, which is not possessed by !!(\color{darkred}{\spadesuit})!!, but I don't know yet what it is. Is it that I want it to be possible to extract the two elements again? I am not sure what that means, but whatever it means, if !!\{ a, b\}!! does it, then so does !!(\color{darkred}{\spadesuit})!!.

But that does also suggest another property that neither of those enjoys:

There should be formulas !!F_1!! and !!F_2!! such that for all !![a, b]!!:

  1. !!F_1([a, b]) = a!! or !!b!!
  2. !!F_2([a, b]) = a!! or !!b!!
  3. !!F_1([a, b]) = F_2([a, b])!! if and only if !!a=b!!

I think this can be abbreviated to simply:

!![F_1([a, b]), F_2([a, b])] = [a, b]!!

There may be some symmetry argument why there are no such formulas, but if so I can't think of it offhand. Perhaps further consideration of !![a, a]!! will show that what I want is incoherent.

Today is the birthday of Leonhard Euler. Happy 315th, Lenny!

[ Addendum 20220422: Several readers pointed out that the !!F_i!! formulas are effectively choice functions, so there can be no simple solution. Further details. ]


[Other articles in category /math] permanent link

Mon, 11 Apr 2022

At last, an internet commenter I can agree with

Browsing around Math StackExchange today, I encountered this question, ‘Unique’ doesn't have a unique meaning, which pointed out that the phrase “Every boy has a unique shirt” is at least confusing. (Do all the boys share a single shirt?)

“Aha,” I said. “I know what's wrong there: it should be ‘every boy has a distinct shirt’.” I scrolled down to see if I should write that as an answer. But I noticed that the question had been posted in 2012, and guessed that probably someone had already said what I was going to say. Indeed, when I looked at the comments, I saw that the third one said:

If I meant that no shirt belongs to two boys, I would say "every boy has a distinct shirt".

Okay, that saves me the trouble of replying at least. I went to click the upvote button on the comment, but there was no button,

because

the comment had been posted, in August of 2012, by me.


[Other articles in category /math/se] permanent link

Wed, 09 Mar 2022

Bad but interesting mathematical notation idea

Zaz Brown showed up on Math SE yesterday with a proposal to make mathematical notation more uniform. It's been pointed out several times that the expressions

$$y^n = x \qquad n = \log_y x \qquad y=\sqrt[n]x $$

all mean the same thing, and yet look completely different. This has led to proposals to try to unify the three notations, although none has gone anywhere. (For example, this Math SE thread.)

!!\def\o{\overline}\def\u{\underline}!!

In this new thread, M. Brown has an interesting observation: exponentiation also unifies addition and multiplication. So write !!\o x!! to mean !!e^x!!, and !!\u x!! to mean !!\ln x!!, and leave multiplication as it is. Now !!x^y!! can be written as !!\o{\u x y}!! and !!x+y!! can be written as !!\u{\bar x \! \bar y}!!.

Well, this is a terrible idea, and I'll explain why I think so in some detail. But I really hope nobody will think I mean this as any sort of criticism of its author. I have a lot of ideas too, and most of them are amazingly bad, way worse than this one. Having bad ideas doesn't make someone a bad person. And just because an idea is bad, doesn't mean it wasn't worth considering; thinking about ideas is how you decide which ones are bad and which aren't. M. Brown's idea was interesting enough for me to think about it and write an article. That's a compliment, not a criticism.

I'm deeply interested in notation. I think mathematicians don't yet understand the power of mathematical notation and what it does. We use it, but we don't understand it. I've observed before that you can solve algebraic equations or calculus problems just by “pushing around the symbols”. But why can you do that? Where is the meaning, and how do the symbols capture the meaning? How does that work? The fact that symbols in general can somehow convey meaning is a deep philosophical mystery, not just in mathematics but in all communication, and nobody understands how it works. Mathematical symbols can be even more amazing: they don't just tell you what other people were thinking, they tell you things themselves. You rearrange them in a certain way and they smile and whisper secrets: “now you can see this function is everywhere zero”, “this is evidently unbounded” or “the result is undefined when !!\lvert x_1\rvert > \frac 23!!”. It's almost as if the symbols are doing some of the thinking for you.

Anyway this particular idea is not good, but maybe we can learn something from its failure modes?

Here's how you would write !!x^2+x!!: $$\u{\o{\o{2\u x}}{\o x}}$$

Zaz Brown suggested that this expression might be better written as !!x{\u{\o x \o 1}}!!, which is analogous to !!x(x+1)!!, but I think that reply misses a very important point: you need to be able to write both expressions so that you can equate them, or transform one into the other. The expression !!x(x+1)!! is useful because you can see at a glance that it is composite for all integer !!x!! larger than 1, and actually twice a composite for sufficiently large !!x!!. (This is the kind of thing I had in mind when I said the symbols whisper secrets to you.) !!x^2+x!! is useful in different ways: you can see that it's !!\Theta(x^2)!! and it's !!(x+1)^2 - (x+1)!! and so on. Both are useful and you need to be able to turn one into the other easily. Good notation facilitates that sort of conversion.

M. Brown's proposal actually has at least two components. One component is its choice of multiplication, exponentials and logarithms as the only first-class citizens. The other is the specific way that was chosen to write these, with the over- and underbars. This second component is no good at all, for purely typographic reasons. These three expressions look almost identical but have completely different meanings: $$ \u{\o a\, \o c}\qquad \u{\o { ac}} \qquad \o{\u a\, \u c}.$$

In fact, the two on the right were almost indistinguishable until I told MathJax to put in some extra space. I'm sure you can imagine similar problems with !!\u{\o{\o{2\u x}}}{\o x}!! turning into !!\u{\o{\o{2\u x x}}}!! or !!\u{\o{\o{2\u x }x}}!! or whatever. Think of how easy it is to drop a minus sign; this is much worse.

[ Addendum 20220308: Earlier, I had said that !!x+y!! could be written as !!\u{\bar x\bar y}!!. A Gentle Reader pointed out that the bar on the bottom wasn't connected but should have been, as on the far right of this screenshot:

Screenshot of blog text “x+y can be written as (xy) (xy)” where in each case both the x and the y have overbars, and the whole thing has an underbar, except that on the right the underbar has a tiny break, and on the left the x and y have been squished together uncomfortably to eliminate the break in the underbar.

I meant it to be connected and what I wrote asked for it to be connected, but MathJax, which formats the math formulas on the blog, didn't connect it. To remove the gap, I had to explicitly subtract space between the !!x!! and the !!y!!. ]

But maybe the other component of the proposal has something to it and we will find out what it is if we fix the typographic problem with the bars. What's a good alternative?

Maybe !!\o x = x^\bullet!! and !!\u x = x_\bullet!! ? On the one hand we get the nice property that !!x^\bullet_\bullet = x!!. But I think the dots would make my head swim. Perhaps !!\o x = x\top!! and !!\u x = x\bot!!? Let's try.

Good notation facilitates transformation of expressions into equal expressions. The !!\top\bot!! notation allows us to easily express the simple identities $$a\top\bot \quad = \quad a\bot\top \quad = \quad a.$$ That kind of thing is good, although the dots did it better. But I couldn't find anything else like it.

Let's see what the distributive law looks like. In standard notation it is $$a(b+c) = ab + ac.$$ In the original bar notation it was $$a\u{\o b\o c} = \u{\o{ab}\, \o{ac}}.$$ This looks uncouth but perhaps would not be worse once one got used to it.

With the !!\top\bot!! idea we have

$$ a(b\top c\top)\bot = ((ab)\top(ac)\top)\bot. $$

I had been hoping that by making the !!\top!! and !!\bot!! symbols postfix we'd be able to avoid parentheses. That didn't happen: without the parentheses you can't distinguish between !!(ab)\top!! and !!a(b\top)!!. Postfix notation is famous for allowing you to omit parentheses, but that's only if your operators all have fixed arity. Here the invisible variadic multiplication ruins that. And making it visible dyadic multiplication is not really an improvement:

$$ ab\top c\top\cdot\bot\cdot = ab\cdot\top ac\cdot \top\cdot \bot. $$

You know what I think would happen if we actually tried to use this idea? Someone would very quickly invent an abbreviation for !!\u{\o {x_1}\, \o {x_2} \cdots \o{x_k}}!!, I don't know, something like “!!x_1 + x_2 + \ldots + x_k!!” maybe. (It looks crazy, I know, but it might just work.) Because people might like to discuss the fact that $$ \u{\o 2\, \o 3 } = 5$$ and without an addition sign there seems to be no way to explain why this should be.

Well, I have been turning away from the real issue for a while now, but !!a(b\top c\top)\bot = !! !!((ab)\top(ac)\top)\bot!! forces me to confront it. The standard expression of the distributive law equates a computation with two operations and another with three. The computations expressed by the new notation involve five and six operations respectively. Put this way, the distributive law is no longer simple!

This reminds me of the earlier suggestion that if !!x^2+x!! is too complicated, one can write !!x(x+1)!! instead. But expressions don't only express a result, they express a way of arriving at that result. The purpose of an equation is to state that two different computations arrive at the same result. Yes, it's true that $$a+b = \ln e^ae^b,$$ but the two computations are not the same! If they were, the statement would be vacuous. Instead, it says that the simple computation on the left arrives at the same result as the complicated one on the right, an interesting thing to know. “!!2+3=5!!” might imply that !!e^2\cdot e^3=e^5!! but it doesn't say the same thing.

Here's my takeaway from consideration of the Zaz Brown proposal:

It's not sufficient for a system of notation to have a way of expressing every result; it has to be able to express every possible computation.

Put that way, other instructive examples come to mind. Consider Egyptian fractions. It's known that every rational number between !!0!! and !!1!! can be written in the form $$\frac1{a_1} + \frac1{a_2} + \ldots + \frac1{a_n}$$ where !!\{ a_i\}!! is a strictly increasing sequence of positive integers. For example $$\frac 7{23} = \frac 14 + \frac1{19} + \frac1{583} + \frac1{1019084}$$ or with a bit more ingenuity, $$\frac7{23} = \frac16 + \frac1{12} + \frac1{23} + \frac1{138} + \frac1{276},$$ longer but less messy. The ancient Egyptians did in fact write numbers this way, and when they wanted to calculate !!2\cdot\frac17!!, they had to look it up in a table, because writing !!\frac27!! was not an expressible computation, it had to be expressed in terms of reciprocals and sums, so !!2\cdot\frac 17 = \frac14 + \frac1{28}!!. They could write all the numbers, but they couldn't write all the ways of making the numbers.
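
(The first, messier expansion of !!\frac7{23}!! above is exactly the one produced by the greedy method: at each step, take the largest unit fraction that fits. Here's a sketch:)

    from fractions import Fraction

    def egyptian(r):
        """Greedy expansion of 0 < r < 1 as a sum of distinct unit fractions."""
        denominators = []
        while r > 0:
            q = -(-r.denominator // r.numerator)  # ceiling of 1/r
            denominators.append(q)
            r -= Fraction(1, q)
        return denominators

    print(egyptian(Fraction(7, 23)))  # [4, 19, 583, 1019084]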

(Neither can we. We can write the real root of !!x^3-2!! as !!\sqrt[3]2!!, but there is no effective notation for the real root of !!x^5-x-1!!. The best we can do is something like “!!1.16730\ldots!!”, which is even less effective than how the Egyptians had to write !!\frac27!! as !!\frac14+\frac1{28}!!.)

Anyway I think my conclusion from all this is that a practical mathematical notation really must have a symbol for addition, which is not at all surprising. But it was fun and interesting to see what happened without it. It didn't work well, but maybe the next idea will be better.

Thanks again, Zaz Brown.

[ Addendum 20230422: I discussed the Egyptians’ table of !!\frac 2n!! a couple of years ago, and why a more general table wasn't needed. ]


[Other articles in category /math/se] permanent link

Sat, 26 Feb 2022

I vent my rage at dumbass Math SE comments

[ Content warning: ranting ]

An article I've had in progress for a while is an essay about the dogmatic slogan that “infinity is not a number”. As research for that article I got Math Stack Exchange to disgorge all the comments that used that phrase. There were several dozen.

Most of them were just inane, or ill-considered; some contained genuine technical errors. But this one was so annoying that I have paused to complain about it individually:

One thing many laypeople do not understand or realize is that infinity is not a number, it's not equal to any number, and that two infinities can be different (or the same) in size from one another.

That is not “one thing”. It is three things.

A person who is unclear on the distinction between !!1!! and !!3!! should withhold their opinions about the nature of infinity.

[ Addendum 20220301: I did not clearly communicate which side of the “infinity is not a number” issue I am on. Here's my preliminary statement on the matter: The facile and prevalent claim that “infinity is not a number”, to the extent that it isn't inane, is false. I hope this is sufficiently clear. ]


[Other articles in category /math/se] permanent link

Thu, 24 Feb 2022

More about the axioms of infinity and the empty set

[ Previously: [1] [2] ]

[ Content warning: inconclusive nattering ]

Yesterday I discussed how one can remove the symbol !!\varnothing!! from the statement of the axiom of infinity (“!!A_\infty!!”). Normally, !!A_\infty!! looks like this:

$$\exists S (\color{darkblue}{\varnothing \in S}\land (\forall x\in S) x\cup\{x\}\in S).$$

But the “!!\varnothing!!” is just an abbreviation for “some set !!Z!! with the property !!\forall y. y\notin Z!!”, so one should understand the statement above as shorthand for:

$$\exists S (\color{darkblue}{(\exists Z.(\forall y. y\notin Z)\land (Z \in S))} \\ \land (\forall x\in S) x\cup\{x\}\in S).\tag{$\heartsuit$}$$

(The !!\cup!! and !!\{x\}!! signs should be expanded analogously, as abbreviations for longer formulas, but we will ignore them today.)

Thinking on it a little more, I realized that you could conceivably get into big trouble by doing this, for a reason very much like the one that concerned me initially. Suppose that, in !!(\heartsuit)!!, in place of $$\exists Z.(\color{darkgreen}{\forall y. y\notin Z})\land (Z \in S),$$ we had $$\exists Z.(\color{darkred}{\forall y. y\in Z\iff y\notin y})\land (Z \in S).$$

Now instead of !!Z!! having the empty-set property, it is required to have the Russell set property, and we demand that the infinite set !!S!! include an element with that property. But there is provably no such !!Z!!, which makes the axioms inconsistent. My request that !!\varnothing!! be proved to exist before it be used in the construction of !!S!! was not entirely silly.

My original objection partook of a few things:

  • You ought not to use the symbol “!!\varnothing!!” without defining it

I believe that expanding abbreviations as we did above addresses this issue adequately.

  • But to be meaningful, any such definition requires an existence proof and perhaps even a uniqueness proof

“Meaningful” is the wrong word here. I'm willing to agree that the defined symbol necessarily has a sense. But without an existence proof the symbol may not refer to anything. This is still a live issue, because if the symbol doesn't denote anything, your axiom has a big problem and ruins the whole theory. The axiom has a sense, but if it asserts the existence of some derived object, as this one does, the theory is inconsistent, and if it asserts the universality of some derived property, the theory is vacuous.

I think embedding the existence claim inside another axiom, as is done in !!(\heartsuit)!!, makes it easier to overlook the existence issue. Why use complicated axioms when you could use simpler ones? But technically this is not a big deal: if !!Z!! doesn't exist, then neither does !!S!!, and the axioms are inconsistent, regardless of whether we chose to embed the definition of !!Z!! in !!A_\infty!! or leave it separate.

One reason to prefer simpler axioms is that we hope it will be easier to detect that something is wrong with them. But set theorists do spend a lot of time thinking about the consistency of their theories, and understand the consistency issues much better than I do. If they think it's not a problem to embed the axiom of the empty set into !!A_\infty!!, who am I to disagree?


[Other articles in category /math] permanent link

Wed, 23 Feb 2022

My mistake about errors in the presentation of axiomatic set theory

[ Content warning: highly technical mathematics ]

A couple of weeks ago I claimed:

Many presentations of axiomatic set theory contain an error

I realized recently that there's a small but significant error in many presentations of the Zermelo-Frankel set theory: Many authors omit the axiom of the empty set, claiming that it is omittable. But it is not.

Well, it sort of is and isn't at the same time. But the omission that bothered me is not really an error. The experts were right and I was mistaken.

(Maybe I should repeat my disclaimer that I never thought there was a substantive error, just an error of presentation. Only a crackpot would reject the substance of ZF set theory, and I am not prepared to do that this week.)

My argument was something like this:

  • You want to prevent the axioms from being vacuous, so you need to be able to prove that at least one set exists.

  • One way to do this is with an explicit “axiom of the empty set”: $$\exists Z. \forall y. y\notin Z$$

  • But many presentations omit this, remarking that the axiom of infinity (“!!A_\infty!!”) also asserts the existence of a set, and the empty set can be obtained from that one via specification.

  • The axiom of infinity is usually stated in this form: $$\exists S (\varnothing \in S\land (\forall x\in S) x\cup\{x\}\in S).$$ But, until you prove that the empty set actually exists, it is not meaningful to include the symbol !!\varnothing!! in your axiom, since it does not actually refer to anything, and the formula above is formally meaningless.

I ended by saying:

You really do need an explicit axiom [of the empty set]. As far as I can tell, you cannot get away without it.

Several people tried to explain my error, pointing out that !!\varnothing!! is not part of the language of set theory, so the actual formal statement of !!A_\infty!! doesn't include the !!\varnothing!! symbol anyway. But I didn't understand the point until I read Eike Schulte's explanation. M. Schulte delved into the syntactic details of what we really mean by abbreviations like !!\varnothing!!, and why they are meaningful even before we prove that the abbreviation refers to something. Instead of explicitly mentioning !!\varnothing!!, which had bothered me, M. Schulte suggested this version of !!A_\infty!!:

$$\exists S (\color{darkblue}{(\exists Z.(\forall y. y\notin Z)\land (Z \in S))} \\ \land (\forall x\in S) x\cup\{x\}\in S).$$

We don't have to say that !!S!! (the infinite set) includes !!\varnothing!!, which is subject to my quibble about !!\varnothing!! not being meaningful. Instead we can just say that !!S!! includes some element !!Z!! that has the property !!\forall y.y\notin Z!!; that is, it includes an element !!Z!! that happens to be empty.

A couple of people had suggested something like this before M. Schulte did, but I either didn't understand or I felt this didn't contradict my original point. I thought:

I claimed that you can't get rid of the empty set axiom. And it hasn't been gotten rid of; it is still there, entire, just embedded in the statement of !!A_\infty!!.

In a conversation elsewhere, I said:

You could embed the axiom of pairing inside the axiom of infinity using the same trick, but I doubt anyone would be happy with your claim that the axiom of pairing was thereby unnecessary.

I found Schulte's explanation convincing though. The !!A_\infty!! that Schulte suggested is not a mere conjunction of axioms. The usual form of !!A_\infty!! states that the infinite set !!S!! must include !!\varnothing!!, whatever that means. The rewritten form has the same content, but more explicit: !!S!! must include some element !!Z!! that has the emptiness property (!!\forall y. y\notin Z!!) that we want !!\varnothing!! to have.

I am satisfied. I hereby recant the mistaken conclusion of that article.

Thanks to everyone who helped me out with this: Ben Zinberg, Asaf Karagila, Nick Drozd, and especially to Eike Schulte. There are now only 14,823,901,417,522 things remaining that I don't know. Onward to zero!

[ Addendum 20220224: A bit more about this. ]


[Other articles in category /math] permanent link

Mon, 21 Feb 2022

What to do if you're about to fail real analysis for the second time?

A few months ago a Reddit user came to r/math with this tale of woe:

I failed real analysis horrifically the first time … and my resit takes place in a few days. I still feel completely and utterly unprepared. I can't do the homework questions and I can't do the practice papers. I'm really quite worried that I'll fail and have to resit the whole year (can't afford) or get kicked out of uni (fuck that).

Does anyone have tips or advice, or just any general words of comfort to help me through this mess? Cheers.

The first thing that came to mind for me was “wow, you're fucked”. There was a time in my life when I might have posted that reply but I'm a little more mature now and I know better.

I read some of the replies. The top answer was a link to a pirated copy of Aksoy and Khamsi's A Problem Book in Real Analysis. A little too late for that, I think. Hapless OP must re-sit the exam in a few days and can't do the homework questions or practice papers; the answer isn't simply “more practice”, because there isn't time.

The second-highest-voted reply was similar: “Pick up Stephen Abbott's Understanding Analysis”. Same. It's much too late for that.

The third reply was fatuous: “understand the proofs done in the textbook/lecture completely, since a lot of the techniques used to prove those statements you will probably need to use while doing problems”. <sarcasm>Yes, great advice, to pass the exam just understand the material completely, so simple, why doesn't everyone just do that?</sarcasm>

Here's one I especially despised:

Don't worry. Real analysis is a lot of work and you never have the time to understand everything there.

“Don't worry”! Don't worry about having to repeat the year? Don't worry about getting kicked out of university? I honestly think “wow, you're fucked” would be less damaging. In the book of notoriously ineffective problem-solving strategies, chapter 1 is titled “Pretend there is No Problem”.

This was amusing but unhelpful:

We are all going to die. Compared to death, real analysis is nothing to fear.

(However, another user disagreed: “Compared to real analysis, death is nothing to fear”.)

Some comments offered hints: focus on topology, try proof by contradiction. Too little, and much too late.

Most of the practical suggestions, in my opinion, were answering the wrong question. They all started from the premise that it would be possible for Hapless OP to pass the exam. I see no evidence that this was the case. If Hapless OP had showed up on Reddit having failed the midterm, or even a few weeks ahead of the final, that would be a very different situation. There would still have been time to turn things around. OP could get tutoring. They could go to office hours regularly. They could organize a study group. They could work hard with one of those books that the other replies mentioned. But with “a few days” left? Not a chance.

The problem to solve here isn't “how can OP pass the exam”. It's “how can OP deal with the inescapable fact that they cannot and will not pass the exam”.

Way downthread there was some advice (from user tipf) that was gloomy but which, unlike the rest of the comments, engaged the real issue, that Hapless OP wasn't going to pass the exam the following week:

I would seriously rethink getting a pure math major. It's not a very marketable major outside academia, and sinking a bunch of money into it (e.g. re-taking a whole year) is just not a good idea under almost any circumstance.

Pessimistic, yes, but unlike the other suggestions, it actually engages with Hapless OP's position as they described it (“have to resit the whole year (can't afford)”).

When we reframe the question as “how can OP deal with the fact that they won't pass the exam”, some new paths become available for exploration. I suggested:

[OP] should go consult with the math department people immediately, today, explain that they are not prepared and ask what can be done. Perhaps there is no room for negotiation, but in that case OP would be no worse off than they are now. But there may be an administrative solution.

For example, just hypothetically, what if the math department administrative assistant said:

You can get special permission to re-sit the exam in three months time, if you can convince the Dean that you had special extenuating hardships.

This is not completely implausible, and if true might put Hapless OP in a much better position!

Now you might say “Dominus, you just made that up as an example, there is no reason to think it is actually true.” And you would be quite correct. But we could make up fifty of these, and the chances would be pretty good that one of them was actually true. The key is to find out which one.

And OP can't find out what is available unless they go talk to someone in the math department. Certainly not by moping in their room reading Reddit. Every minute spent moping is time that could be better spent tracking down the Dean, or writing the letter, or filing the forms, or whatever might be required to improve the situation.

Along the same lines, I suggested:

perhaps the department already has a plan for what to do with people who can't pass real analysis. Maybe they will say something constructive like “many people in your situation change to a Statistics major” or something like that.

I don't know if Hapless OP would have been happy with a Statistics major; they didn't say. But again, the point is, there may be options that are more attractive than “get kicked out of uni”, and OP should go find out what they are.

The higher-level advice here, which I think is generally good, is that while asking on Reddit is quick and easy, it's not likely to produce anything of value. It's like looking for your lost wallet under the lamppost because the light is good. But it doesn't work; you need to take the question to the people who actually know what the solutions might be and who are in a position to actually do something about the problem.

[ Addendum 20220222: Still higher-level advice is: if you're losing the game, try instead playing the different game that is one level up. ]


[Other articles in category /math] permanent link

Fri, 11 Feb 2022

Geometric proof that the mediant lies between its arguments

The mediant of two fractions !!\frac ab!! and !!\frac cd!! is simply !!\frac{a+c}{b+d}!!. It appears often in connection with the theory of continued fractions, and a couple of months ago I put it to use in this post about Newton's method. There the crucial property was that if $$\frac ab < \frac cd$$ then $$\frac ab < \frac{a+c}{b+d} < \frac cd.$$

This can be proved with straightforward algebra:

$$\begin{align} \frac ab & < \frac cd \\ ad & < bc \\ ab + ad & < ab + bc \\ a(b+d) & < (a+c) b \\ \frac ab & < \frac{a+c}{b+d} \end{align}$$

and similarly for the !!\frac cd!! side.
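
(A numeric spot-check, if you don't feel like doing the algebra; this sketch uses Python's exact Fraction arithmetic:)

    from fractions import Fraction
    from random import randint

    def mediant(p, q):
        # Note: Fraction reduces to lowest terms, so this is the mediant
        # of the reduced forms; the inequality holds for those too.
        return Fraction(p.numerator + q.numerator,
                        p.denominator + q.denominator)

    for _ in range(10000):
        p = Fraction(randint(1, 99), randint(1, 99))
        q = Fraction(randint(1, 99), randint(1, 99))
        if p != q:
            lo, hi = min(p, q), max(p, q)
            assert lo < mediant(lo, hi) < hi  # never fails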

But Reddit user asenseofbeauty recently suggested a lovely visual proof that makes the result intuitively clear:

!!\def\pt#1#2{\langle{#1},{#2}\rangle}!! The idea is simply this: !!\frac ab!! is the slope of the line from the origin !!O!! through the point !!P=\pt ba!! (blue) and !!\frac cd!! is the slope of the line through !!Q=\pt dc!! (red). The point !!\pt{b+d}{a+c}!! is the fourth vertex of the parallelogram with vertices at !!O, P, Q!!, and !!\frac{a+c}{b+d}!! is the slope of the parallelogram's diagonal. Since the diagonal lies between the two sides, the slope must also lie in the middle somewhere.

The embedded display above should be interactive. You can drag around the red and blue points and watch the diagonal with slope !!\frac{a+c}{b+d}!! slide around to match.

In case the demo doesn't work, here's a screenshot showing that !!\frac 25 < \frac{2+4}{5+3} < \frac 43!!:

[ Addendum 20220215: The source was this Reddit comment from asenseofbeauty. ]


[Other articles in category /math] permanent link

Sat, 05 Feb 2022

Factoring composite numbers into nearly equal factors

Pennsylvania license plate numbers have four digits and when I'm driving I habitually try to factor these. (This hasn't yet led to any serious injury or property damage…) In general factoring is a hardish problem but when !!n<10000!! the worst case is !!9991 = 97·103!! which is not out of reach. The toughest part is when you find a factor like !!661!! or !!667!! and have to decide if it is prime. For !!667!! you might notice right off that it is !!676-9 = (26+3)(26-3)!! but for !!661!! you have to wonder if maybe there is something like that and you just haven't thought of it yet. (There isn't.)

A related problem is the Nearly Equal Factors (NEF) problem: given !!n!!, find !!a!! and !!b!!, as close as possible, with !!ab=n!!. If !!n!! has one large prime factor, as it often does, this is quite easy. For example suppose we are driving on the Interstate and are behind a car with license plate GJA 6968. First we divide this by !!8!!, so !!871!!, which is obviously not divisible by !!2, 3, 5, 7,!! or !!11!!. So try !!13!!: !!871-780 = 91!! so !!871 = 13·67!!. If we throw the !!67!! into the !!a!! pile and the other factors into the !!b!! pile we get !!6968 = 67· 104!! and it's obvious we can't divide up the factors more evenly: the !!67!! has to go somewhere and if we put anything else with it, its pile is now at least !!2·67 = 134!! which is already bigger than !!104!!. So !!67·104!! is the best we can do.

When I first started thinking about this I thought maybe there could be a divide-and-conquer algorithm. For example, suppose !!n=4m!!. Then if we could find an optimal !!ab=m!!, we could conclude that the optimal factorization of !!n!! would simply be !!n = 2a· 2b!!. Except no, that is completely wrong; a counterexample is !!n=20!! where the optimal factorization is !!5·4!!, not !!(2·5)·(2·1)!!. So it's not that simple.

It's tempting to conclude that NEF is NP-hard, because it does look a lot like Partition. In Partition someone hands you a list of numbers and demands to know if they can be divided into two piles with equal sums. This is NP-hard, and so the optimization version of it, where you are asked to produce two piles as nearly equal as possible, is at least as hard. The NEF problem seems similar: if you know the prime factors !!n=p_1p_2…p_k!! then you can imagine that someone handed you the numbers !!\log p_1, … \log p_k!! and asked you to partition them into two nearly-equal piles. But this reduction is in the wrong direction; it only proves that Partition is at least as hard as NEF. Could there be a reduction in the other direction? I don't see anything obvious, but maybe there is something known about Partition or Knapsack that shows that even this restricted version is hard. [ Addendum: see below. ]

In practice, the first-fit-decreasing (FFD) algorithm usually performs well for this sort of problem. In FFD we go through the prime factors in decreasing order, and throw each one into the bin that is least full. This always works when there is one large prime factor. For example with !!6968!! we throw the !!67!! into the !!a!! bin, then the !!13!! and two of the !!2!!s into the !!b!! bin, at which point we have !!67·52!!, so the final !!2!! goes into the !!b!! bin also, and this is optimal. FFD does find the optimal solution much of the time, but not always. It works for !!20!! but fails for !!72 = 2^33^2!! because the first thing it does is to put the threes into separate bins. But the optimal solution !!72=9·8!! puts them in the same bin. Still it works for nearly all small numbers.
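
Here's a sketch of FFD for this problem; the trial-division factoring is just to keep it self-contained, and any factoring method would do:

    def prime_factors(n):
        """Prime factors of n (with multiplicity) by trial division."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    def ffd(n):
        """First-fit decreasing: each prime factor goes into the emptier bin."""
        a = b = 1
        for p in sorted(prime_factors(n), reverse=True):
            if a <= b:
                a *= p
            else:
                b *= p
        return max(a, b), min(a, b)

    print(ffd(6968))  # (104, 67), which is optimal
    print(ffd(72))    # (12, 6), but the optimum is (9, 8)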

I would like to look into whether FFD produces optimal results almost all of the time, and if so how almost? Wikipedia seems to say that the corresponding FFD algorithm for Partition, called LPT-first scheduling, is guaranteed to produce a larger total which is no more than !!\frac76!! as big as the optimal, which would mean that for the NEF problem !!n=ab!! and !!a\ge b!! it will produce an !!a!! value no more than !!OPT^{7/6}!! where !!OPT!! is the minimum possible !!a!!.

Some small-number cases where FFD fails are:

$$ \begin{array}{rccc} n & \text{FFD} = ab & \text{Optimal} & \frac{\log(a)}{\log(OPT)} (≤ 1.167) \\ 72 & 12·6 & 9·8 & 1.131 \\ 180 & 18·10 & 15·12 & 1.067 \\ 240 & 20·12 & 16·15 & 1.080 \\ 288 & 24·12 & 18·16 & 1.100 \\ 336 & 28·12 & 21·16 & 1.094 \\ 540 & 30·18 & 27·20 & 1.032 \\ \end{array} $$

I wrote code to compute these and then I lost it.

I would also like to look at algorithms for NEF that don't begin by factoring !!n!!. We can guarantee to find optimal solutions with brute force, and in some cases this works very well. Consider !!240!!: we begin by computing (in at most !!O(\log^2 n)!! time) the integer square root of !!240!!, which is !!15!!. Then since !!240!! is a multiple of !!15!! we have !!240=16·15!! and we win. In general of course it is not so easy, and it fails even in some cases where !!n!! is easy to factor. !!n=p^{2k+1}!! is especially unfortunate. Say !!n=243!!; the square root is !!15!!, which is not a factor of !!243!!. So we try !!14!!, then !!13, 12, 11, 10!!, and at last, at !!9!!, we find !!243=27·9!!.
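
(A minimal sketch of that descent:)

    from math import isqrt

    def nef(n):
        """Walk down from the integer square root to the nearest divisor."""
        b = isqrt(n)
        while n % b:
            b -= 1
        return n // b, b

    print(nef(240))  # (16, 15): found immediately
    print(nef(243))  # (27, 9): only after trying 15, 14, 13, 12, 11, 10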

[ Addendum 20220207: Dan Brumleve points out that it is NP-complete to decide whether, given numbers !!L, U, N!!, there is a number !!f!! in !! [L, U]!! that divides !!N!!. Using this, he shows that it is probably NP-complete to decide whether a given !!N!! is a product of two integers with !!\lvert a-b\rvert ≤ N^{1/4}!!. “Probably” here means that the reduction from Partition is polynomial time if Cramér's conjecture is correct. ]


[Other articles in category /math] permanent link

Sun, 23 Jan 2022

Annoying mathematical notation

Recently I've been thinking that maybe the thing I really dislike about set theory might be the power set axiom. I need to do a lot more research about this, so any blog articles about it will be in the distant future. But while looking into it I ran across an example of a mathematical notation that annoyed me.

This paper of Gitman, Hamkins, and Johnstone considers a subtheory of ZFC, which they call “!!ZFC-!!”, obtained by omitting the power set axiom. Fine so far. But the main point of the paper:

Nevertheless, these deficits of !!ZFC-!! are completely repaired by strengthening it to the theory !!ZFC^−!!, obtained by using collection rather than replacement in the axiomatization above.

Got that? They are comparing two theories that they call “!!ZFC-!!” and “!!ZFC^-!!”.

(Blog post by Gitman (archived))

[ Previously ]


[Other articles in category /math] permanent link

Fri, 21 Jan 2022

A proposal for improved language around divisibility

Divisibility and modular residues are among the most important concepts in elementary number theory, but the terminology for them is clumsy and hard to pronounce.

  • !!n!! is divisible by !!5!!
  • !!n!! is a multiple of !!5!!
  • !!5!! divides !!n!!

The first two are 8 syllables long. The last one is tolerably short but is backwards. Similarly:

  • The mod-!!5!! residue of !!n!! is !!3!!

is awful. It can be abbreviated to

  • !!n!! has the form !!5k+3!!

but that is also long, and introduces a dummy !!k!! that may be completely superfluous. You can say “!!n!! is !!3!! mod !!5!!” or “!!n!! mod !!5!! is !!3!!” but people find that confusing if there is a lot of it piled up.

Common terms should be short and clean. I wish there were a mathematical jargon term for “has the form !!5k+3!!” that was not so cumbersome. And I would like a term for “mod-5 residue” that is comparable in length and simplicity to “fifth root”.

For mod-!!2!! residues we have the special term “parity”. I wonder if something like “!!5!!-ity” could catch on? This doesn't seem too barbaric to me. It's quite similar to the terminology we already use for !!n!!-gons. What is the name for a polygon with !!33!! sides? Is it a triskadekawhatever? No, it's just a !!33!!-gon, simple.

Then one might say things like:

  • “Primes larger than !!3!! have !!6!!-ity of !!±1!!”

  • “The !!4!!-ity of a square is !!0!! or !!1!!” or “a perfect square always has !!4!!-ity of !!0!! or !!1!!”

  • “A number is a sum of two squares if and only if its prime factorization includes every prime with !!4!!-ity !!3!! an even number of times.”

  • “For each !!n!!, the set of numbers of !!n!!-ity !!1!! is closed under multiplication”

For “multiple of !!n!!” I suggest that “even” and “odd” be extended so that "!!5!!-even" means a multiple of !!5!!, and "!!5!!-odd" means a nonmultiple of !!5!!. I think “!!n!! is 5-odd” is a clear improvement on “!!n!! is a nonmultiple of 5”:

  • “The sum or product of two !!n!!-even numbers is !!n!!-even; the product of two !!n!!-odd numbers is !!n!!-odd, if !!n!! is prime, but the sum may not be. (!!n=2!! is a special case)”

  • “If the sum of three squares is !!5!!-even, then at least one of the squares is !!5!!-even, because !!5!!-odd squares have !!5!!-ity !!±1!!, and you cannot add three !!±1's!! to get zero”

  • “A number is !!9!!-even if the sum of its digits is !!9!!-even”

It's conceivable that “5-ity” could be mistaken for “five-eighty” but I don't think it will be a big problem in practice. The stress is different, the vowel is different, and also, numbers like !!380!! and !!580!! just do not come up that often.

The next mouth-full-of-marbles term I'd want to take on would be “is relatively prime to”. I'd want it to be short, punchy, and symmetric-sounding. I wonder if it would be enough to abbreviate “least common multiple” and “greatest common divisor” to “join” and “meet” respectively? Then “!!m!! and !!n!! are relatively prime” becomes “!!m!! meet !!n!! is !!1!!” and we get short phrasings like “If !!m!! is !!n!!-even, then !!m!! join !!n!! is just !!m!!”. We might abbreviate a little further: “!!m!! meet !!n!! is 1” becomes just “!!m!! meets !!n!!”.

[ Addendum: Eirikr Åsheim reminds me that “!!m!! and !!n!! are coprime” is already standard and is shorter than “!!m!! is relatively prime to !!n!!”. True, I had forgotten. ]


[Other articles in category /math] permanent link

Thu, 20 Jan 2022

Testing for divisibility by 8

I recently wrote:

Instead of multiplying the total by 3 at each step, you can multiply it by 2, which gives you a (correct but useless) test for divisibility by 8.

But one reader was surprised that I called it “useless”, saying:

I only know of one test for divisibility by 8: if the last three digits of a number are divisible by 8, so is the original number. Fine … until the last three digits are something like 696.

Most of these divisibility tricks are of limited usefulness, because they are not less effort than short division, which takes care of the general problem. I discussed short division in the first article in this series with this example:

Suppose you want to see if 1234 is divisible by 7. It's 1200-something, so take away 700, which leaves 500-something. 500-what? 530-something. So take away 490, leaving 40-something. 40-what? 44. Now take away 42, leaving 2. That's not 0, so 1234 is not divisible by 7.

For a number like 696, take away 640, leaving 56. 56 is divisible by 8, so 696 is also. Suppose we were doing 996 instead? From 996 take away 800 leaving 196, and then take away 160 leaving 36, which is not divisible by 8. For divisibility by 8 you can ignore all but the last three digits, but short division works quite well for other small divisors too, even when the dividend is large.

This is not what I usually do myself, though. My own method is a bit hard to describe but I will try. The number has the form !!ABB!! where !!BB!! is a multiple of 4, or else we would not be checking it in the first place. The !!BB!! part has a ⸢parity⸣, it is either an even multiple of 4 (that is, a multiple of 8) or an odd multiple of 4 (otherwise). This ⸢parity⸣ must match the (ordinary) parity of !!A!!. !!ABB!! is divisible by 8 if and only if the parities match. For example, 104 is divisible by 8 because both parts are ⸢odd⸣. Similarly 696 where both parts are ⸢even⸣. But 852 is not divisible by 8, because the 8 is even but the 52 is ⸢odd⸣.
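
For concreteness, here is the same test as a sketch in code; since the multiple-of-4 check covers the premise, it amounts to a complete test for divisibility by 8:

    def div8(n):
        """The last two digits must be a multiple of 4 whose ⸢parity⸣
        (as a multiple of 4) matches the ordinary parity of the
        hundreds digit."""
        a = (n // 100) % 10  # the hundreds digit
        bb = n % 100         # the last two digits
        return bb % 4 == 0 and a % 2 == (bb // 4) % 2

    print(div8(104), div8(696), div8(852))  # True True False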


[Other articles in category /math] permanent link

Wed, 19 Jan 2022

The squares are kinda Fibonacci-like

I got a cute little surprise today. I was thinking: suppose someone gives you a large square integer and asks you to find the next larger square. You can't really do any better than to extract the square root, add 1, and square the result. But if someone gives you two consecutive square numbers, you can find the next one with much less work. Say the two squares are !!b = n^2!! and !!a = n^2+2n+1!!, where !!n!! is unknown. Then you want to find !!n^2+4n+4!!, which is simply !!2a-b+2!!. No square rooting is required.

So the squares can be defined by the recurrence $$\begin{align} s_0 & = 0 \\ s_1 & = 1 \\ s_{n+1} & = 2s_n - s_{n-1} + 2\tag{$\ast$} \end{align} $$

This looks a great deal like the Fibonacci recurrence:

$$\begin{align} f_0 & = 0 \\ f_1 & = 1 \\ f_{n+1} & = f_n + f_{n-1} \end{align} $$

and I was a bit surprised because I thought all those Fibonacci-ish recurrences turned out to be approximately exponential. For example, !!f_n = O(\phi^n)!! where !!\phi=\frac12(1 + \sqrt 5)!!. And actually the !!f_0!! and !!f_1!! values don't matter, whatever you start with you get !!f_n = O(\phi^n)!!; the differences are small and are hidden in the Landau sign.

Similarly, if the recurrence is !!g_{n+1} = 2g_n + g_{n-1}!! you get !!g_n = O((1+\sqrt2)^n)!!, exponential again. So I was surprised that !!(\ast)!! produced squares instead of something exponential.

But as it turns out, it is producing something exponential. Sort of. Kind of. Not really.

!!\def\sm#1,#2,#3,#4{\left[\begin{smallmatrix}{#1}&{#2}\\{#3}&{#4}\end{smallmatrix}\right]}!!

There are a number of ways to explain the appearance of the !!\phi!! constant in the Fibonacci sequence. Feel free to replace this one with whatever you prefer: The Fibonacci recurrence can be written as $$\left[\matrix{1&1\\1&0}\right] \left[\matrix{f_n\\f_{n-1}}\right] = \left[\matrix{f_{n+1}\\f_n}\right] $$ so that $$\left[\matrix{1&1\\1&0}\right]^n \left[\matrix{1\\0}\right] = \left[\matrix{f_{n+1}\\f_n}\right] $$

and !!\phi!! appears because it is the positive eigenvalue of the square matrix !!\sm1,1,1,0!!. Similarly, !!1+\sqrt2!! is the positive eigenvalue of the matrix !!\sm 2,1,1,0!! that arises in connection with the !!g_n!! sequences that obey !!g_{n+1} = 2g_n + g_{n-1}!!.

For !!s_n!! the recurrence !!(\ast)!! is !!s_{n+1} = 2s_n - s_{n-1} + 2!!. Briefly disregarding the !!+2!!, we get the matrix form

$$\left[\matrix{2&-1\\1&0}\right]^n \left[\matrix{s_1\\s_0}\right] = \left[\matrix{s_{n+1}\\s_n}\right] $$

and the eigenvalues of !!\sm2,-1,1,0!! are both exactly !!1!!. Where the Fibonacci sequence had !!f_n \approx k\cdot\phi^n!! we get instead !!s_n \approx k\cdot1^n!!, and instead of exploding, the exponential part remains well-behaved and the lower-order contributions remain significant.

If the two initial terms are !!t_0!! and !!t_1!!, then the !!n!!th term of the sequence is simply !!t_0 + n(t_1-t_0)!!. That extra !!+2!! I temporarily disregarded in the previous paragraph is making all the interesting contributions: $$0, 0, 2, 6, 12, 20, \ldots, n(n-1) \ldots$$ and when you add the !!t_0 + n(t_1-t_0)!! and put !!t_0=0, t_1=1!! you get the squares.
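
All of this is easy to check numerically; here's a quick sketch:

    def seq(t0, t1, count, c=2):
        """Iterate s_{n+1} = 2·s_n - s_{n-1} + c."""
        s = [t0, t1]
        while len(s) < count:
            s.append(2*s[-1] - s[-2] + c)
        return s

    print(seq(0, 1, 8))       # [0, 1, 4, 9, 16, 25, 36, 49]: the squares
    print(seq(0, 0, 8))       # [0, 0, 2, 6, 12, 20, 30, 42]: n(n-1)
    print(seq(0, 1, 8, c=0))  # [0, 1, 2, 3, 4, 5, 6, 7]: t0 + n(t1 - t0)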

So the squares can be considered a sort of Fibonacci-ish approximately exponential sequence, except that the exponential part doesn't matter because the base of the exponent is !!1!!.

How about that.


[Other articles in category /math] permanent link

Sun, 09 Jan 2022

Many presentations of axiomatic set theory contain an error

[ Content warning: highly technical mathematics ]

[ Addendum 20220223: Also, I was mistaken. ]

I realized recently that there's a small but significant error in many presentations of the Zermelo-Frankel set theory: Many authors omit the axiom of the empty set, claiming that it is omittable. But it is not.

The overarching issue is as follows. Most of the ZF axioms are of this type:

If !!\mathcal A!! is some family of sets, then [something derived from !!\mathcal A!!] is also a set.

The axiom of union is a typical example. It states that if !!\mathcal A!! is some family of sets, then there is also a set !!\bigcup \mathcal A!!, which is the union of the members of !!\mathcal A!!. The other axioms of this type are the axioms of pairing, specification, power set, replacement, and choice.

There is a minor technical problem with this approach: where do you get the elements of !!\mathcal A!! to begin with? If the axioms only tell you how to make new sets out of old ones, how do you get started? The theory is a potentially vacuous one in which there aren't any sets! You can prove that if there were any sets they would have certain properties, but not that there actually are any such things.

This isn't an entirely silly quibble. Prior to the development of axiomatic set theory, mathematicians had been using a model called naïve set theory, and after about thirty years it transpired that the theory was inconsistent. Thirty years of work about a theory of sets, and then it turned out that there was no possible universe of sets that satisfied the requirements of the theory! This precipitated an upheaval in mathematics a bit similar to the quantum revolution in physics: the top-down view is okay, but the most basic underlying theory is just wrong.

If we can't prove that our new theory is consistent, we would at least like to be sure it isn't trivial, so we would like to be sure there are actually some sets. To ensure this, the very least we can get away with is this axiom:

!!A_S!!: There exists a set !!S!!.

This is enough! From !!A_S!! and specification, we can prove that there is an empty subset of !!S!!. Then from extension, we can prove that this empty subset is the unique empty set. This justifies assigning a symbol to it, usually !!\varnothing!! or just !!0!!. Once we have the empty set, pairing gives us !!\{0,0\} = \{0\} = 1!!, then !!\{0, 1\} = 2!! , and so on. Once we have these, the axioms of union and infinity show that !!\omega!! is a set, then from that the axiom of power sets gets us uncountable sets, and the sky is the limit. But we need something like !!A_S!! to get started.
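
(The successor construction is easy to play with mechanically; here's a sketch using Python frozensets as a stand-in for sets:)

    # Von Neumann naturals: 0 is the empty set, and n+1 = n ∪ {n}.
    def succ(n):
        return n | frozenset({n})

    zero = frozenset()
    one = succ(zero)   # {0}
    two = succ(one)    # {0, 1}
    three = succ(two)  # {0, 1, 2}
    print(len(zero), len(one), len(two), len(three))  # 0 1 2 3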

In place of !!A_S!! one can have:

!!A_\varnothing!!: There exists a set !!\varnothing!! with the property that for all !!x!!, !!x\notin\varnothing!!.

Presentations of ZF sometimes include this version of the axiom. It is easily seen to be equivalent to !!A_S!!, in the sense that from either one you can prove the other.

I wanted to see how this was handled in Thomas Jech's Set Theory, which is a standard reference text for axiomatic set theory. Jech includes a different version of !!A_S!!, initially given (page 3) as:

!!A_∞!!: There exists an infinite set.

This is also equivalent to !!A_S!! and !!A_\varnothing!!, if you are willing to tolerate the use of the undefined term “infinite”. Jech of course is perfectly aware that while this is an acceptable intuitive introduction to the axiom of infinity, it's not formally meaningful without a definition of “infinite”. When he's ready to give the formal version of the axiom, he states it like this:

$$\exists S (\varnothing \in S\land (\forall x\in S) x\cup\{x\}\in S).$$

(“There is a set !!S!! that includes !!\varnothing!! and, whenever it includes some !!x!!, also includes !!x\cup\{x\}!!.” (3rd edition, p. 12))

Except, oh no, “!!\varnothing!!” has not yet been defined, and it can't be, because the thing we want it to refer to cannot, at this point, be proved to actually exist.

Maybe you want to ask why we can't use it without proving that it exists. That is exactly what went wrong with naïve set theory, and we don't want to repeat that mistake.

I brought this up on math Stack Exchange and Asaf Karagila, the resident axiomatic set theory expert, seemed to wonder why I complained about !!\varnothing!! but not about !!\{x\}!! and !!\cup!!. But the issue doesn't come up with !!\{x\}!! and !!\cup!!, which can be independently defined using the axioms of pairing and union, and then used to state the axiom of infinity. In contrast, if we're depending on the axiom of infinity to prove the existence of !!\varnothing!!, it's circular for us to assume it exists while writing the statement of the axiom. We can't depend on !!A_∞!! to define !!\varnothing!! if the very meaning of !!A_∞!! depends on !!\varnothing!! itself.

That's the error: the axioms, as stated by Jech, are ill-founded. This is a little hard to see because of the way he prevaricates about the actual statement of the axiom of infinity. On page 8 he states !!A_\varnothing!!, which would work if it were included, but he says “we have not included [!!A_\varnothing!!] among the axioms, because it follows from the axiom of infinity.”

But this is wrong. You really do need an explicit axiom like !!A_\varnothing!! or !!A_S!!. As far as I can tell, you cannot get away without it.


This isn't specifically a criticism of Jech or the book; a great many presentations of axiomatic set theory make the same mistake. I used Jech as an example because his book is a well-known authority. (Otherwise people will say “well perhaps, but a more careful writer would have…”. Jech is a careful writer.)

This is also not a criticism of axiomatic set theory, which does not collapse just because we forgot to include the axiom of the empty set.

[ Addendum 20220223: As could perhaps have been predicted, I was mistaken. Details here. Thanks to Math SE user Eike Schulte for explaining my error in a way I could understand. ]

[ Addendum 20220224: A bit more about this. ]


[Other articles in category /math] permanent link

Thu, 06 Jan 2022

Another test for divisibility by 7

[ Previously ]

Recently I thought of another way to check for divisibility by !!7!!. Let's consider !!\color{darkblue}{3269}!!. The rule is: take the current total (initially 0), triple it, and add the next digit to the right. So here we do:

$$ \begin{align} \color{darkblue}{3}·3 & + \color{darkblue}{2} & = && \color{darkred}{11} \\ \color{darkred}{11}·3 & + \color{darkblue}{6} & = && \color{darkred}{39} \\ \color{darkred}{39}·3 & + \color{darkblue}{9} & = && \color{darkred}{126} \\ \end{align} $$

and the final number, !!\color{darkred}{126} !!, is a multiple of !!7!! if and only if the initial one was. If you're not sure about !!126!! you can check it the same way:

$$ \begin{align} \color{darkblue}{1} ·3 & + \color{darkblue}{2} & = && \color{darkred}{5} \\ \color{darkred}{5} ·3 & + \color{darkblue}{6} & = && \color{darkred}{21} \\ \end{align} $$

If you're not sure about !!\color{darkred}{21} !!, you calculate !!2·3+1=7!! and if you're not sure about !!7!!, I can't help.

You can simplify the arithmetic by reducing everything mod !!7!! whenever it gets inconvenient, so checking !!3269!! really looks like this:

$$ \begin{align} \color{darkblue}{3} ·3 & + \color{darkblue}{2} & = && 11 = \color{darkred}{4} \\ \color{darkred}{4} ·3 & + \color{darkblue}{6} & = && 18 = \color{darkred}{4} \\ \color{darkred}{4} ·3 & + \color{darkblue}{9} & = && 21 = \color{darkred}{0} \\ \end{align} $$

This is actually practical.

I'm so confident this is already in the Wikipedia article about divisibility testing that I didn't bother to actually check. But I did check the email that Eric Roode sent me in 2018 about divisibility testing, and confirmed that it was in there.

Instead of multiplying the total by 3 at each step, you can multiply it by 2, which gives you a (correct but useless) test for divisibility by 8. Or you can multiply it by 1, which gives you the usual (correct and useful) test for divisibility by 9. Or you can multiply it by 0, which gives you a slightly silly (but correct) version of the usual test for divisibility by 10. Or you can multiply it by -1, which gives you exactly the usual test for divisibility by 11.

You can of course push it farther in either direction, but none of the results seems particularly interesting as a practical divisibility test.

I wish I had known about this as a kid, though, because I would probably have been interested to discover that the pattern continues to work: if at each step you multiply by !!k!!, you get a test for divisibility by !!10-k!!. Sure, you can take !!k=9!! or !!k=10!! if you like, go right ahead, it still works.

And if you do it for base-!!r!! numerals, you get a test for divisibility by !!r-k!!, so this is a sort of universal key to divisibility tests. In base 16, the triple-and-add method tests not for divisibility by 7 but for divisibility by 13. If you want to test for divisibility by !!7!! you can use double-and-add instead, which is a nice wrinkle.
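
In code, the whole family of tests is one small function; this is only a sketch, with the digits processed left to right as in the examples above:

    def divisible(n, d, base=10):
        """Multiply-and-add test. It works because base ≡ k (mod d),
        where k = base % d, so the running total tracks n mod d."""
        digits = []
        while n:
            n, r = divmod(n, base)
            digits.append(r)
        k, total = base % d, 0
        for digit in reversed(digits):  # left to right
            total = (total * k + digit) % d
        return total == 0

    print(divisible(3269, 7))         # True: triple-and-add, as above
    print(divisible(49, 7, base=16))  # True: 49 is 0x31; double-and-add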

The tests you get aren't in general any easier than just doing short division, of course. At least they are easy to remember!


[Other articles in category /math] permanent link

Tue, 04 Jan 2022

All polynomials of degree 3 or greater factor over the reals

A couple of years ago I wrote:

One day when I was in high school, I bumped into the fact that !!\sqrt{7 + 4 \sqrt 3}!!, which looks just like a 4th-degree number, is actually a 2nd-degree number. It's numerically equal to !!2 + \sqrt 3!!. At the time, I was totally boggled.

I had a kind of similar surprise around the same time in connection with the polynomial !!x^4+1!!.

Everyone in high school algebra learns that !!x^2-1 = (x-1)(x+1)!! but that !!x^2+1!! does not similarly factor over the reals; in the jargon it is irreducible.

Every cubic polynomial does factor over the reals, though, because every cubic polynomial has a real root, and a polynomial with real root !!r!! has !!x-r!! as a factor; this is Descartes’ theorem. (It's easy to explain why all cubic polynomials have roots. Every cubic polynomial !!P(x)!! has the form !! ax^3!! plus some lower-order terms. As !!x!! goes to !!±∞!! the lower-order terms are insignificant and !!P(x)!! goes to !!a·±∞!!. Since the value of !!P(x)!! changes sign, !!P(x)!! must be zero at some point.)

For example, $$\begin{align} x^3+1 & = (x+1)(x^2- x+1) \\ x^3-1 & = (x-1)(x^2+ x+1) \\ \end{align}$$

So: polynomials with real roots always factor, cubics always have roots, so cubics factor. Also !!x^2+1!! has no real roots, and doesn't factor. And !!x^4+1!!, which looks pretty much the same as !!x^2+1!!, also has no real roots, and so behaves the same as !!x^2+1!! so doesn't factor…

Wrong! It has no real roots, true, but it still factors over the reals:

$$x^4+1 = (x^2 + \sqrt2· x + 1) (x^2 - \sqrt2· x + 1)$$

Neither of the two factors has a real root. I was kinda blown away by this, sometime back in the 1980s.
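These days you don't have to take anyone's word for it; a couple of lines of sympy (if you have it) confirm the factorization:

    from sympy import expand, factor, sqrt, symbols

    x = symbols('x')
    left = (x**2 + sqrt(2)*x + 1) * (x**2 - sqrt(2)*x + 1)
    print(expand(left))                         # x**4 + 1
    print(factor(x**4 + 1, extension=sqrt(2)))  # recovers the two factors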

The fundamental theorem of algebra tells us that the only irreducible real polynomials have degree 1 or 2. Every polynomial of degree 3 or higher can be expressed as a product of polynomials of degrees 1 and 2. I knew this, but somehow didn't put the pieces together in my head.

Raymond Smullyan observes that almost everyone has logically inconsistent beliefs. His example is that while you individually believe a large number of separate claims, you probably also believe that at least one of those claims is false, so you don't believe their conjunction. This is an example of a completely different type: I simultaneously believed that every polynomial had roots over the complex numbers, and also that !!x^4+1!! was irreducible.

[ Addendum 20220221: I didn't remember when I wrote this that I had already written essentially the same article back in 2006. ]


[Other articles in category /math] permanent link

Thu, 30 Dec 2021

A little more about the pedagogy of what it means to be transcendental

[ This is a followup to In simple English, what does it mean to be transcendental? ]

A while back a Math SE user posted a comment on my simple explanation of transcendental and algebraic numbers that asked why my explanation had contained some redundancies:

Are there any numbers for which division is necessary? I can't think of any. Likewise, subtracting integers is redundant; I believe all you need is "you can add any integer; you can multiply by any non-zero integer; you can multiply by x."

This is true! I had said:

We will play a game. Suppose you have some number !!x!!. You start with !!x!! and then you can add, subtract, multiply, or divide by any integer, except zero. You can also multiply by !!x!!. You can do these things as many times as you want. If the total becomes zero, you win.

and you don't need subtraction or division. (The underlying mathematical fact that motivated this answer is that integer polynomials are the free ring over the integers. For a ring you only need addition and multiplication.) So why did I mention subtraction and division? They're not mathematically necessary, doesn't it make the answer more complicated to put them in?

I had considered this carefully, and had decided it was simpler this way. The target audience is a person with no significant mathematical training. To a mathematician, it's obvious that inclusion of integers includes subtraction as a special case because you can simply add a negative integer. But non-mathematicians are not used to thinking this way. They have been taught that there are four arithmetic operations. If I mention all four, they will understand that all the operations of basic arithmetic are allowed. But if I had said only "addition and multiplication" many people would have been distracted and wondered "why just those two? Why not some other two?". Including all four avoids this distraction.

I could have said only “addition and multiplication” and later on explained that allowing subtraction and division doesn't change anything. I think this would have been an inferior choice. It's best to get to the point as quickly as possible. In this case the point is that all the operations of basic arithmetic are allowed. The fact that you can omit two is not relevant. My version is shorter and clearer, and avoids the whole issue.

If my version were less technically correct, that would be a major drawback. Sacrificing correctness for clarity is a seductive but usually harmful choice. The result may appear more clear, when it actually isn't, because of the subtle errors that have been papered over. In this case, though, nothing was sacrificed. It's 100% correct both ways. Mathematicians might prefer the minimal statement, but the whole point of this answer is that it is correct even though it is not written in the way that a mathematician would prefer.

I'd like to boil this down to a pithy maxim, but I'm not sure I can do it without being inane. There's something in it about how, when you write something for non-mathematicians, you should try to write every part of it for non-mathematicians, not just at the surface presentation but in the deeper layers too.

There's also something about how you should be very careful to distinguish the underlying mathematical truth on the one hand, from the practices that mathematicians have developed to help them in their day-to-day business, or to help them communicate with other mathematicians, or that are merely historical accidents, on the other. The underlying truth is the important part. The rest can be jettisoned.


[Other articles in category /math/se] permanent link

Mon, 22 Nov 2021

Consecutive squareful numbers

On Saturday I was thinking about how each of !!48, 49, 50!! is a multiple of a square number, and similarly !!98, 99, 100!!. No such sequence of four numbers came immediately to mind. The smallest example turns out to be !!242, 243, 244, 245!!.

Let's say a number is “squareful” if it has the form $$a\cdot b^2$$ for !!b>1!!. The opposite, “squarefree”, is a standard term, but “non-squarefree” sounds even worse than “squareful”. Do ten consecutive squareful numbers exist, and if so, how can we find them?

I did a little algebraic tinkering but didn't come up with anything. If !!n,n+1, n+2!! are consecutive squareful numbers, then so are !!2n, 2n+2, 2n+4!!, except they aren't consecutive, but maybe we could find the right !!n!! so that !!2n+1!! and !!2n+3!! are also squareful. I couldn't make this work though, so I wrote some brute-force search programs to get the lay of the land.
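A brute-force sketch in Python might look something like this (a sieve; the helper names are mine):

    def squareful_sieve(limit):
        """sq[n] is True iff some square bigger than 1 divides n."""
        sq = [False] * limit
        d = 2
        while d * d < limit:
            for m in range(d * d, limit, d * d):
                sq[m] = True
            d += 1
        return sq

    def first_runs(limit):
        """First start of each run-length of consecutive squareful numbers."""
        sq, found, run = squareful_sieve(limit), {}, 0
        for n in range(limit):
            run = run + 1 if sq[n] else 0
            if run and run not in found:
                found[run] = n - run + 1
        return found

    runs = first_runs(1_200_000)
    print([runs[k] for k in range(3, 9)])
    # [48, 242, 844, 22020, 217070, 1092747]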

The computer quickly produced sequences of length 7:

$$\begin{array}{rcrr} 217070 & = & 4430 \ · & 7^2 \\ 217071 & = & 24119 \ · & 3^2 \\ 217072 & = & 54268 \ · & 2^2 \\ 217073 & = & 17 \ · & 113^2 \\ 217074 & = & 1794 \ · & 11^2 \\ 217075 & = & 8683 \ · & 5^2 \\ 217076 & = & 54269 \ · & 2^2 \end{array} $$

and length 8:

$$\begin{array}{rcrr} 1092747 & = & 3027 \ · & 19^2 \\ 1092748 & = & 273187 \ · & 2^2 \\ 1092749 & = & 22301 \ · & 7^2 \\ 1092750 & = & 43710 \ · & 5^2 \\ 1092751 & = & 9031 \ · & 11^2 \\ 1092752 & = & 273188 \ · & 2^2 \\ 1092753 & = & 121417 \ · & 3^2 \\ 1092754 & = & 6466 \ · & 13^2 \\ \end{array} $$

Neither of these suggested anything to me, and nor did any of the other outputs, so I stuck !!1092747!! into OEIS. With numbers like that you don't even have to ask for the whole sequence, you only need to ask for the one number. Six sequences came up but the first five are all the same and were what I was looking for: A045882: Smallest term of first run of (at least) n consecutive integers which are not squarefree.

This led me to Louis Marmet's paper First occurrences of square-free gaps and an algorithm for their computation. The paper provides the earliest sequences of !!k!! consecutive squareful numbers, for !!k≤ 18!!, and bounds for how far out these sequences must be when !!k≤24!!. This is enough to answer the questions I originally asked:

  • Are there ten consecutive squareful numbers? Yes, the smallest example starts at !!262\,315\,467!!, found in 1999 by D. Bernier.

  • How can we find them? Marmet gives a sieve method that starts simple and becomes increasingly elaborate.

The paper is from 2007, so it seems plausible that the same algorithms on 2021 computers could produce some previously unknown results.

[ Addendum 20211124: The original version of this article ended “The general problem, of whether there are arbitrarily long sequences of squareful numbers, seems to be open.” This is completely wrong. Daniel Wagner and Shreevatsa R. pointed out that the existence of arbitrarily long sequences is quite elementary. Pick any squares you like that do not share a common factor, say for example !!49, 9, 4,!! and !!25!!. Now (because Chinese remainder theorem) you can find consecutive numbers that are multiples of those specific squares; in this case !!28322, 28323, 28324, 28325!!. ]
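Sympy's CRT solver will reproduce that example directly, if you want to watch it happen:

    from sympy.ntheory.modular import crt

    # n divisible by 49, n+1 by 9, n+2 by 4, n+3 by 25
    n, modulus = crt([49, 9, 4, 25], [0, -1, -2, -3])
    print(n, modulus)   # 28322 44100

Every !!n \equiv 28322 \pmod{44100}!! starts such a run, so there are infinitely many of them.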

[ In only vaguely related news, I was driving Toph to school this morning, and a car in front of mine had license plate number !!5625 = 75^2!!. ]


[Other articles in category /math] permanent link

Thu, 18 Nov 2021

In simple English, what does it mean to be transcendental?

I've been meaning to write this up for a while, but somehow never got around to it. In my opinion, it's the best Math Stack Exchange post I've ever written. And also remarkable: its excellence was widely recognized. Often I work hard and write posts that I think are really good, and they get one or two upvotes; that's okay, because the work is its own reward. And sometimes I write posts that are nothing at all that get a lot of votes anyway, and that is okay because the Math SE gods are fickle. But this one was great and it got what it deserved.

I am really proud of it, and in this post I am going to boast as shamelessly as I can.

The question was:

In simple English, what does it mean to be transcendental?

There were several answers posted immediately that essentially recited the definition, some better than others. At the time I arrived, the most successful of these was by Akiva Weinberger, which already had around fifty upvotes.

… Numbers like this, that satisfy polynomial equations, are called algebraic numbers. … A real (or complex) number that's not algebraic is called transcendental.

If you're going to essentially quote the definition, I don't think you can do better than to explain it the way Akiva Weinberger did. It was a good answer!

Once one answer gets several upvotes, it moves to the top of the list, right under the question itself. People see it first, and they give it more votes. A new answer has zero votes, and is near the bottom of the page, so people tend to ignore it. It's really hard for new answers to surpass a highly-upvoted previous answer. And while fifty upvotes on some stack exchanges is not a large number, on Math SE fifty is a lot; less than 0.2% of answers score so high.

I was unhappy with the several quoting-the-definition answers. Because honestly "numbers… that satisfy polynomial equations" is not “simple English” or “layman's terms” as the OP requested. Okay, transcendental numbers have something to do with polynomial equations, but why do we care about polynomial equations? It's just explaining one obscure mathematical abstraction in terms of a second one.

I tried to think a little deeper. Why do we care about polynomials? And I decided: it's because the integer polynomials are the free ring over the integers. That's not simple English either, but the idea is simple and I thought I could explain it simply. Here's what I wrote:

We will play a game. Suppose you have some number !!x!!. You start with !!x!! and then you can add, subtract, multiply, or divide by any integer, except zero. You can also multiply by !!x!!. You can do these things as many times as you want. If the total becomes zero, you win.

For example, suppose !!x!! is !!\frac23!!. Multiply by !!3!!, then subtract !!2!!. The result is zero. You win!

Suppose !!x!! is !!\sqrt[3] 7!!. Multiply by !!x!!, then by !!x!! again, then subtract !!7!!. You win!

Suppose !!x!! is !!\sqrt2 +\sqrt3!!. Here it's not easy to see how to win. But it turns out that if you multiply by !!x!!, subtract 10, multiply by !!x!! twice, and add !!1!!, then you win. (This is not supposed to be obvious; you can try it with your calculator.)

But if you start with !!x=\pi!!, you cannot win. There is no way to get from !!\pi!! to !!0!! if you add, subtract, multiply, or divide by integers, or multiply by !!\pi!!, no matter how many steps you take. (This is also not supposed to be obvious. It is a very tricky thing!)

Numbers like !!\sqrt 2+ \sqrt 3!! from which you can win are called algebraic. Numbers like !!\pi!! with which you can't win are called transcendental.

Why is this interesting? Each algebraic number is related arithmetically to the integers, and the winning moves in the game show you how so. The path to zero might be long and complicated, but each step is simple and there is a path. But transcendental numbers are fundamentally different: they are not arithmetically related to the integers via simple steps.
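If you'd rather let the computer push the calculator buttons for the !!\sqrt 2 + \sqrt 3!! example, the winning moves transcribe directly into Python (a check, not part of the answer):

    from math import sqrt

    x = sqrt(2) + sqrt(3)
    total = x
    total = total * x        # multiply by x
    total = total - 10       # subtract 10
    total = total * x * x    # multiply by x twice
    total = total + 1        # add 1
    print(total)             # 0.0, up to floating-point error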

This answer was an immediate hit. It rocketed past the previous top answer into the stratosphere. Of 190,000 Math SE answers, there are twenty with scores over 500; mine is 13th.

The original version left off the final paragraph (“Why is this interesting?”). Fortunately, someone posted a comment pointing out the lack. They were absolutely right, and I hastened to fix it.

I love this answer for several reasons:

  • It's not as short as possible, but it's short enough.

  • It's almost completely jargonless. It doesn't use the word “coefficient”. You don't have to know what a polynomial is. You only have to understand grade-school arithmetic. You don't even need to know what a square root is; you can still try the example if you have a calculator with a square root button.

  • Sometimes to translate a technical concept into plain language, one must sacrifice perfect accuracy, or omit important details. This explanation is technically flawless.

  • One often sees explanations of “irrational number” that refer to the fact that such a number has a nonrepeating decimal expansion. While this is true, it's not what irrationality is really about, but a secondary property. The true core of the matter is that an irrational number is not the ratio of any two integers.

    My post didn't use the word “polynomial” and took a somewhat different path than the typical explanation, but it nevertheless hit directly at the core of the topic, not at a side issue. The “path to zero” thing isn't some property that algebraic numbers happen to have, it's the crucial issue, only phrased a little differently.

  • Also I had some unusually satisfying exchanges with critical commenters. There are a few I want to call out for triumphant mockery, but I have a policy of not mocking private persons on this blog, and this is just the kind of situation I intended to apply it to.

This is some good work. When I stand in judgment and God asks me if I did my work as well as I could, this is going to be one of the things I bring up.

[ Addendum 20211230: More about one of the finer points of this answer's pedagogical approach. ]


[Other articles in category /math/se] permanent link

Fri, 12 Nov 2021

Stack Exchange is a good place to explain initial and terminal objects in the category of sets

The fact that singleton sets are terminal in the category of sets, and the empty set is initial, is completely elementary, so it's often passed over without discussion. But understanding it requires understanding the behavior of empty functions, and while there is nothing complex about that, novices often haven't thought it through, because empty functions are useless except for the important role they play in Set. So it's not unusual to see questions like this one:

I have trouble understanding the difference between initial and terminal objects in category theory. … Why there can be morphism from empty set to any other set? And why there is not morphism to empty set as well?

I'm happy with the following answer, which is of the “you already knew this, you only thought you didn't” type. It doesn't reveal any new information, it doesn't present any insights. All it does is connect together some things that the querent hasn't connected before.

This kind of connecting is an important part of pedagogy, one that Math Stack Exchange is uniquely well-suited to deal with. It is not well-handled by the textbook (which should not be spending time or space on such an elementary issue) or in lectures (likewise). In practice it's often handled by the TA (or the professor), during office hours, which isn't a good way to do it: the TA will get bored after the second time, and most students never show up to office hours anyway. It can be well-handled if the class has a recitation section where a subset of the students show up at a set time for a session with the TA, but upper-level classes like category theory don't usually have enough students to warrant this kind of organization. When I taught at math camp, we would recognize this kind of thing on the fly and convene a tiny recitation section just to deal with the one issue, but again, very few category theory classes take place at math camp.

Stack Exchange, on the other hand, is a great place to do this. There are no time or space limitations. One person can write up the answer, and then later querents can be redirected to the pre-written answer.


Your confusion seems to be not so much about initial and terminal objects, but about what those look like in the category of sets. Looking at the formal definition of “function” will help make clear some of the unusual cases such as functions with empty domains.

A function from !!A!! to !!B!! can be understood as a set of pairs $$\langle a,b\rangle$$ where !!a\in A!! and !!b\in B!!. And:

There must be exactly one pair !!\langle a,b\rangle!! for each element !!a!! of !!A!!.

Exactly one, no more and no less, or the set of pairs is not a function.

For example, the function that takes an integer !!n!! and yields its square !!n^2!! can be understood as the (infinite) set of ordered pairs:

$$\{ \ldots ,\langle -2, 4\rangle, \langle -1, 1\rangle, \langle 0, 0\rangle ,\langle 1, 1\rangle, \langle 2, 4\rangle\ldots\}$$

And for each integer !!n!! there is exactly one pair !!\langle n, n^2\rangle!!. Some numbers can be missing on the right side (for example, there is no pair !!\langle n, 3\rangle!!) and some numbers can be repeated on the right (for example the function contains both !!\langle -2, 4\rangle!! and !!\langle 2, 4\rangle!!) but on the left each number appears exactly once.


Now suppose !!A!! is some set !!\{a_1, a_2, \ldots\}!! and !!B!! is a set with only one element !!\{b\}!!. What does a function from !!A!! to !!B!! look like? There is only one possible function: it must be: $$\{ \langle a_1, b\rangle, \langle a_2, b\rangle, \ldots\}.$$ There is no choice about the left-side elements of the pairs, because there must be exactly one pair for each element of !!A!!. There is also no choice about the right-side element of each pair. !!B!! has only one element, !!b!!, so the right-side element of each pair must be !!b!!.

So, if !!B!! is a one-element set, there is exactly one function from !!A!! to !!B!!. This is the definition of “terminal”, and one-element sets are terminal.

Now what if it's !!A!! that has only one element? We have !!A=\{a\}!! and !!B=\{b_1, b_2, \ldots\}!!. How many functions are there now? Only one?

One function is $$\{\langle a, b_1\rangle\}$$ another is $$\{\langle a, b_2\rangle\}$$ and another is $$\{\langle a, b_3\rangle\}$$ and so on. Each function is a set of pairs where the left-side elements come from !!A!!, and each element of !!A!! is in exactly one pair. !!A!! has only one element, so there can only be one pair in each function. Still, the functions are all different.

You said:

I would find it more intuitive if one-element set would be initial object too.

But for a one-element set !!A!! to be initial, there must be exactly one function !!A\to B!! for each !!B!!. And we see above that usually there are many functions !!A\to B!!.


Now we do functions on the empty set. Suppose !!A!! is !!\{a_1, a_2, \ldots\}!! and !!B!! is empty. What does a function from !!A\to B!! look like? It must be a set of pairs, it must have exactly one pair for each element of !!A!!, and the right-side of each pair must be an element of !!B!!. But !!B!! has no elements, so this is impossible: $$\{\langle a_1, ?\rangle, \langle a_2, ?\rangle, \ldots\}.$$

There is nothing to put on the right side of the pairs. So there are no functions !!A\to\varnothing!!. (There is one exception to this claim, which we will see in a minute.)

What if !!A!! is empty and !!B!! is not, say !!\{b_1, b_2, \ldots\}!!? A function !!A\to B!! is a set of pairs that has exactly one pair for each element of !!A!!. But !!A!! has no elements. No problem, the function has no pairs! $$\{\}$$

A function is a set of pairs, and the set can be empty. This is called the “empty function”. When !!A!! is the empty set, there is exactly one function from !!A\to B!!, the empty function, no matter what !!B!! is. This is the definition of “initial”, so the empty set is initial.

Does the empty set have an identity morphism? It does; the empty function !!\{ \}!! is its identity morphism. This is the one exception to the claim that there are no functions from !!A\to\varnothing!!: if !!A!! is also empty, the empty function is such a function, the only one.
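An aside that wasn't in the original answer: all of these counts are easy to check mechanically, since a function !!A\to B!! is just a set of pairs with exactly one pair per element of !!A!!. A few lines of Python enumerate them:

    from itertools import product

    def functions(A, B):
        """All functions from A to B, each one a set of (a, b) pairs."""
        A = list(A)
        if not A:
            return [set()]   # the empty function is the only one
        return [set(zip(A, choice)) for choice in product(B, repeat=len(A))]

    print(len(functions({1, 2, 3}, {'b'})))   # 1: one-element sets are terminal
    print(len(functions({'a'}, {1, 2, 3})))   # 3: one per element of the codomain
    print(len(functions(set(), {1, 2, 3})))   # 1: the empty function
    print(len(functions({1, 2, 3}, set())))   # 0: nothing for the right sides
    print(len(functions(set(), set())))       # 1: the empty set's identity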


The issue for topological spaces is exactly the same:

  • When !!B!! has only one element, there is exactly one continuous map !!A\to B!! for every !!A!!.
  • When !!A!! is empty, there is exactly one continuous map !!A\to B!! for every !!B!!: the empty function is that continuous map.
  • When !!A!! has only one element, there are usually many continuous maps !!A\to B!!, one different one for each element of !!B!!.

There are categories in which the initial and terminal objects are the same:

  • In the category of groups, the trivial group (with one element) is both initial and terminal.

  • A less important but simpler example is Set*, the category of pointed sets, whose objects are nonempty sets in which one element has been painted red. The morphisms of Set* are ordinary functions that map the red element in the domain to the red element of the codomain.

I hope this was some help.

[ Thanks to Rupert Swarbrick for pointing out that I wrote “homeomorphism” instead of “continuous map” ]


[Other articles in category /math/se] permanent link

Wed, 10 Nov 2021

Annoying Kuratowski pair projection formula

In a recent article I complained that it was too difficult to extract the second component from a Kuratowski pair. The formula given by Wikipedia is not simple:

$$\pi_2(p) = \bigcup\left\{\left. x \in \bigcup p\,\right|\,\bigcup p \neq \bigcap p \implies x \notin \bigcap p \right\}$$

I said “It's so non-obvious that I suspect that it's wrong”. Matthijs Blom then contacted me with this formal correctness proof using the Lean theorem prover. (Raw source code.)

I have also convinced myself that it is correct, using old-fashioned methods.

!!\def\pr#1#2{\langle{#1},{#2}\rangle}!! !!\def\kp#1#2{\{\{{#1}\}, \{{#1},{#2}\}\}}!!

Let's recall !!p = \pr ab = \kp ab!! and observe that then $$ \bigcup p = \{a, b\}\\ \bigcap p = \{a\} $$

where possibly !!a=b!!. Then the Wikipedia formula can be written as:

$$\pi_2(\pr ab) = \bigcup\left\{\left. x \in \{a, b\} \,\right| \,\{a, b\} \neq \{a\} \implies x \notin \{a\} \right\}.$$

!!\{a, b\} \ne \{a\}!! can be rewritten as !!b\ne a!!. The !!x\notin\{a\}!! can be rewritten as !!x\ne a!!. We can rewrite the whole !!S\implies T!! thing as !!\lnot S\lor T!!:

$$\pi_2(\pr ab) = \bigcup\left\{\left. x \in \{a, b\} \,\right| \,a = b \lor x \ne a \right\}.$$

When !!a≠b!! the !!a=b!! clause in the guard condition vanishes, leaving

$$\begin{align} \pi_2(\pr ab) & = \bigcup\left\{\left. x \in \{a, b\} \,\right| \,x \ne a \right\} \\ % & = \bigcup\{ x \in \{ b\} \} \\ & = \bigcup\{b\} \\ & = b. \end{align} $$

When !!a=b!!, the !!\{a,b\}!! expression reduces to !!\{b\}!! and the guard condition !!a = b \lor x \ne a!! is always true, so we get:

$$\begin{align} \pi_2(\pr ab) & = \bigcup\{ x \in \{b\}\} \\ & = \bigcup\{b\} \\ & = b. \end{align} $$
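The same calculation can be run mechanically, with frozensets standing in for sets and von Neumann ordinals as test objects; this is only a sanity check, not another proof:

    def kpair(a, b):
        return frozenset({frozenset({a}), frozenset({a, b})})

    def union(p):
        """Big union: all elements of elements of p."""
        return frozenset(x for s in p for x in s)

    def intersection(p):
        """Big intersection of the (nonempty) set p."""
        sets = iter(p)
        result = set(next(sets))
        for s in sets:
            result &= s
        return frozenset(result)

    def pi2(p):
        u, i = union(p), intersection(p)
        # keep x when (u != i) implies (x not in i)
        return union(frozenset(x for x in u if u == i or x not in i))

    zero = frozenset()
    one = frozenset({zero})
    two = frozenset({zero, one})

    assert pi2(kpair(one, two)) == two
    assert pi2(kpair(two, one)) == one
    assert pi2(kpair(one, one)) == one   # the a = b case works too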


[Other articles in category /math] permanent link

Tue, 02 Nov 2021

One way in which Wiener pairs are simpler than Kuratowski pairs

!!\def\kp#1#2{\{\{{#1}\}, \{{#1},{#2}\}\}}!!

!!\def\nwp#1#2{\{\{\{{#1}\}, \emptyset\}, \{\{{#2}\}\}\}}!!

In a recent article about Wiener's definition of ordered pairs, I said:

In any case the Kuratowski thing is still simpler. I wonder why Wiener didn't think of it first. … Perhaps he didn't consider the details important.

But today I thought of a specific technical advantage that Wiener's definition has over Kuratowski's. The Kuratowski definition is

$$\langle a, b\rangle = \kp ab$$

and in the special case where !!a=b!!, this reduces to:

$$\kp aa = \{\{a\}, \{a\}\} = \{\{a\}\}$$

This can complicate the proofs. For example, suppose you want to prove that if !!\langle p,q\rangle = \langle r,s\rangle!! then !!p=r!! and !!q=s!!. You might like to start by saying that each side represents a set of two elements, and then compare the elements. But you can't, because either might be a set with one element, so there's a special case. Or maybe you have to worry about what happens if !!b=\{a\}!!: is it still all right?

With the Wiener pair, it easy to see that nothing like this can happen:

$$\langle a, b\rangle = \nwp ab$$

Here, no matter what !!a!! and !!b!! might be, !!\langle a, b\rangle!! is a set with two elements. The set !!\{\{a\}, \emptyset\}!! must have exactly two elements; it's impossible that !!\{a\} = \emptyset!!. And since !!\{\{a\}, \emptyset\}!! has two elements, it's impossible for it to equal !!\{\{b\}\}!!, which has one element. So !!\langle a, b\rangle!! is always a two-element set !!\{P_a, P_b\}!!. When !!a=b!! there's no special case. You just get $$\nwp aa$$ which doesn't look any different.
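You can see the difference concretely by modeling both kinds of pair with Python frozensets (a toy model, with integers as the atoms):

    def kpair(a, b):
        return frozenset({frozenset({a}), frozenset({a, b})})

    def wpair(a, b):
        empty = frozenset()
        return frozenset({frozenset({frozenset({a}), empty}),
                          frozenset({frozenset({b})})})

    print(len(kpair(1, 1)))   # 1: the Kuratowski pair collapses
    print(len(wpair(1, 1)))   # 2: the Wiener pair does not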

This analysis explains another possibly puzzling feature of Wiener's definition: Why !!\nwp ab!! and not the apparently simpler !!{\{\{a, \emptyset\}, \{b\}\}}!!? But that proposal, like Kuratowski's, has annoying special cases. For example, the proposal collapses to !!\{\{\emptyset\}\}!! when !!a=b=\emptyset!!.

The components of a Wiener pair are easy to extract again: the !!P_b!! component is a singleton, satisfying !!\bigcup P_b = b!!, and the !!P_a!! component is a two-element set satisfying !!\bigcup P_a = a!!. This is all considerably simpler than with the Kuratowski pair, where you can't get !!b!! alone, you can only get !!\{a, b\}!! and then you have to take the !!a!! back out, except that if !!a=b!! you mustn't. To extract the second component from a Wiener pair is easy. The first thing I thought of was $$\pi_2(p) = \bigcup \{x\in p: |x| = 1\}$$ (“give me the contents of the singleton elements of !!p!!”) but one could also do $$\pi_2(p) = \{x: \{\{x\}\} \in p\},$$ and it's obvious that both of these are correct. [ Addendum 20211112: Obvious perhaps, but wrong; the first one is not correct! ]

The analogous formula given by Wikipedia for the Kuratowski pair is not at all obvious:

$$\pi_2(p) = \bigcup\left\{\left. x \in \bigcup p\,\right|\,\bigcup p \neq \bigcap p \implies x \notin \bigcap p \right\}$$

It's so non-obvious that I suspect that it's wrong. (Wikipedia.) But I don't want to put in the effort it would take to check it.

[ Addendum 20211110: Matthijs Blom and I have confirmed that the formula is correct. ]

[ Addendum 20211112: my suggested definition of !!\pi_2(p) = \bigcup \{x\in p: |x| = 1\}!! does not work, because at this point we have not yet defined what !!|x|!! means. And we can't, because it requires functions and relations, which we can't define without an adequate model for ordered pairs. ]


[Other articles in category /math] permanent link

Fri, 29 Oct 2021

The nonassociativity of the cartesian product

!!\def\pr#1#2{\langle{#1},{#2}\rangle}!!

Set theory doesn't include the ordered pair as a primitive type; ordered pairs have to be represented as sets somehow. This caused technical problems in Principia Mathematica, as I explained a couple of years ago:

In 1910 … a crucial piece of machinery was missing: the ordered pair.

This technical problem was solved the following year, by Norbert Wiener. We usually use the simpler definition invented later by Kazimierz Kuratowski:

!!\def\kp#1#2{\{\{{#1}\}, \{{#1},{#2}\}\}}!!

$$\pr ab = \kp ab$$

This works well enough (and it has worked well enough that we have used it for a hundred years) but it has some warts. We define the cartesian product of two sets like this: $$A\times B = \{\pr ab \mid a\in A\text{ and }b\in B\}$$

With Kuratowski pairs, and this definition of product, it is not the case that $$A\times(B\times C) = (A\times B)\times C\qquad\color{darkred}{Not!}$$ because on the left side the elements look like $$\pr a{\pr bc} = \kp a{\kp bc}$$ and on the right side the corresponding element looks like $$\pr{\pr ab}c = \kp{\kp ab}c.$$
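You can watch the two products come apart in a toy model, once again with frozensets standing in for sets:

    def kpair(a, b):
        return frozenset({frozenset({a}), frozenset({a, b})})

    def cartesian(A, B):
        return frozenset(kpair(a, b) for a in A for b in B)

    A, B, C = {1}, {2}, {3}
    print(cartesian(A, cartesian(B, C)) ==
          cartesian(cartesian(A, B), C))   # False: different sets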

This doesn't present any serious technical problems; see Henning Makholm's discussion here. He says:

The nitty-gritty of getting such details to work completely formally is not generally worth the trouble.

Even with highly fussy technical details like this, you can almost always find some mathematician who said “Ah, but you have to be a little bit careful, because…”. As far as I know, this matter is an exception. If anyone thinks there is something interesting going on here that deserves further examination, I have not heard of it. Everyone ignores the matter.

Category theory takes the high road away from the issue, breezily dismissing it with a remark about how !!A\times(B\times C)!! and !!(A\times B)\times C!! are isomorphic, and equal up to unique isomorphism. This is the right answer, because as Wiener said in 1911, the structure of the pairs as actual sets is “largely a matter of choice”, and the whole point of category theory is to ignore this sort of irrelevant distraction, focusing on the API rather than on the internal implementation.

But last week I wondered: what if you do actually want !!A\times(B\times C)!! to be the exact same set as !!(A\times B)\times C!!, and not just naturally isomorphic? Is there some clever way to define ordered pairs so that this happens?

I got the question all written up for Math Stack Exchange, when I realized the answer: No.

Consider the special case where !!A = \{a\}, B=\{b\},!! and !!C = \{c\}!!. Then !!A\times(B\times C) = (A\times B)\times C!! implies $$\pr a{\pr bc} = \pr{\pr ab}c.$$

But this violates the one property that we absolutely require of ordered pairs, which is that !!\pr pq = \pr rs!! if and only if !!p=r!! and !!q=s!!. To have !!\pr a{\pr bc} = \pr{\pr ab}c!! would imply !!a=\pr ab!!, which would imply !!\pr a{b'} = a = \pr ab!! even when !!b'\ne b!!.

Oh well, it would have been fun.


[Other articles in category /math] permanent link

Thu, 28 Oct 2021

Kuratowski pairs and Wiener pairs

I mentioned a couple of years back that Principia Mathematica was bloated with repetitive material because they hadn't been able to unify the idea of a relation and a set, because the ordered pair hadn't been invented yet. There's a section that defines set union, !!\cup!!, and then proves that it is commutative and associative and so on, and later there is a separate section that defines relation union, !!\dot\cup!!, and proves the exact same theorems in the same way. In 2021 (or even in 1921) we would say that a relation is a set of ordered pairs, and that relation union is just a special case of set union.

To do this we have to interpret ordered pairs set-theoretically. The method we usually use for this was invented by Kazimierz Kuratowski:

$$\langle a, b\rangle = \{\{a\}, \{a,b\}\}$$

But there were earlier developments that also sufficed. Hausdorff suggested the more intuitive, but technically more complex:

$$\langle a, b\rangle = \{\{a, 1\}, \{b, 2\}\}$$

where !!1!! and !!2!! are any two objects that are not among the things we want to include in our ordered pairs. And even earlier, the first interpretation of pairs as sets was in 1911 by Norbert Wiener. In modern notation, Wiener's definition is:

$$\langle a, b\rangle = \{\{\{a\}, \emptyset\}, \{\{b\}\}\}.$$

Wiener actually used the notation of Principia Mathematica, which I reproduce for your amusement:

$$\def\i{\iota`} \i(\i\i a\cup \i\Lambda)\cup\i\i\i b $$

The !!\i x!! notation means essentially the same as !!\{x\}!!. I thought Principia Mathematica had a better way to write !!\{x, y\}!! than as !!\i x\cup\i y!!, but if so I can't remember what it is.

Someone once told me that Wiener's definition is more complicated than Kuratowski's because it had to function in the context of Whitehead and Russell's type theory. Kuratowski was working later, in set theory, so could use a simpler definition that wouldn't function in type theory because the types didn't match up. I had never thought carefully about this until now but it seems to be wrong. The Kuratowski pair requires !!a!! and !!b!! to be the same type, or else you can't put them both into the class !!\{a, b\}!!. But the Wiener pair requires this also. Say !!a!! and !!b!! have type !!n!!. Then !!\{a\}!! and !!\{b\}!! have type !!n+1!!, and !!\{\{a\}, \emptyset\}!! and !!\{\{b\}\}!! have type !!n+2!!. And because they have the same type we can put them both into the class !!\{\{\{a\}, \emptyset\}, \{\{b\}\}\}!!. But for this to work, !!a!! and !!b!! have to have the same type to begin with.

I wanted to find out what Wiener said about this, and Wikipedia referred me to his paper A Simplification of the logic of relations, and helpfully pointed out that it was reprinted in van Heijenoort's Source Book in Mathematical Logic, which I have on the shelf. (I love when this happens. It makes me feel like a scholar.) Wiener agrees: !!a!! and !!b!! must have the same type. But, he points out, if !!a!! and !!b!! have different types you can still make it work by adjusting the nesting level of !!a!! or !!b!! accordingly. For example, if !!b!! had a type one higher than !!a!!, you could use !!\{\{\{a\}, \emptyset\}, \{b\}\}!! instead.

In any case the Kuratowski thing is still simpler. I wonder why Wiener didn't think of it first. But he does say “the particular method selected of doing this is largely a matter of choice”, so perhaps he didn't consider the details important. As in fact they aren't. The important point, and the real point of Wiener's paper, is that you can now construe a two-place relation as a class of ordered pairs.

The paper ends by observing that this fixes the !!\cup!!-versus-!!\dot\cup!! extravagance of Principia Mathematica, since now !!R\cup S = R\,\dot\cup\, S!!. Similarly the class of relations is now a subclass of the class of classes, and so forth.

[ Addendum 20211030: This construction makes the cartesian product nonassociative, but nobody cares. ]

[ Addendum 20211101: One way in which Wiener pairs are simpler than Kuratowski pairs. ]


[Other articles in category /math] permanent link

Sun, 03 Oct 2021

Simpson's paradox and Maxine Hong Kingston's mom

Yesterday's Simpson's paradox example reminds me of Maxine Hong Kingston's mother in The Woman Warrior:

When a sick person was about to die, my mother could read the fact of it a year ahead of time on the daughters-in-law’s faces. … My mother would take one look at the daughter-in-law who answered the door at the sick house and she’d say, “Find another doctor.” She would not touch death; therefore, untainted, she brought only health from house to house. “She must be a Jesus convert,” the people from the far villages said. “All her patients get well.”


[Other articles in category /math] permanent link

Sat, 02 Oct 2021

Simplest example of Simpson's paradox

I had read many times about Simpson's paradox but it never quite clicked for me. I saw many examples, but I couldn't quite get what was really going on. And I could never remember the numbers in the examples, so I couldn't ponder it while waiting for the bus or whatever.

Last month I sat down and thought about it, with the idea of coming up with the simplest and most memorable possible example. Here it is.

  • Imagine there is a disease that kills 50% of the people who get it.

  • There is a pill you can take if you get the disease. Of the people who take the pills, 80% die.

  • It looks like the pills are killing people. But they aren't, they are helpful.

This is Simpson's paradox.

  • What is really happening is that half the people with the disease have mild cases and half have severe cases. A patient with a mild case will get better on their own. But everyone with a severe case dies unless they receive treatment.

  • People with mild cases don't bother to take the pills, because they are going to get better anyway.

  • Only people with severe cases take the pills. 80% of them die, but without the pills they all would have died.

Now I understand Simpson's paradox.


[Other articles in category /math] permanent link

Fri, 01 Oct 2021

Shitpost

Imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of !!R(5, 5)!! or they will destroy our planet. In that case, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for yo mama's weight in pounds. In that case, we should attempt to destroy the aliens.

(With posthumous apologies to Paul Erdős)


[Other articles in category /math] permanent link

Wed, 01 Sep 2021

On having the right theorem

A recent Math StackExchange question asks “Prove every permutation of the alphabet contains a subset of six letters in order”. That is, you take a string of length 26 that contains each letter once; you can find a subsequence of six letters that is either increasing or decreasing.

Choosing a permutation at random, suppose we have:

    q y x u l i n g w o c k j d r p f v t s h e a z b m

Then the sequence x w v t s e b has length 7 and is in descending order. Or:

    t h e q u i c k b r o w n f x j m p s v l a z y d g

This contains the ascending sequence h q r s v y. Also the descending sequence t q o n m l d.

I thought about this for a while but couldn't make any progress. But OP had said “I know I have to construct a partially ordered set and possibly use Dilworth's Theorem…” so I looked up Dilworth's theorem.

It turns out I did actually know Dilworth's theorem. It is about partially-ordered sets. Dilworth's theorem says that the minimum number of totally-ordered subsets, called ‘chains’, needed to partition a partially-ordered set is exactly the size of the largest “antichain”. An antichain is a subset in which no two elements are comparable. (The easy half is that any partition needs at least as many chains as the largest antichain, since a chain can contain at most one element of any antichain; the content of the theorem is that the minimum is actually achieved.)

This was enough of a hint, and I found the solution pretty quickly after that. Say that !!S[i]!! is the position of letter !!i!! in the string !!S!!. Define the partial order !!\prec!!: $$ i\prec j \qquad \equiv \qquad i < j \text{ and } S[i] < S[j] $$

That is, !!i\prec j!! means that !!i!! is alphabetically earlier than !!j!! and its position in !!S!! is to the left of !!j!!. This is obviously a partial ordering of the letters of the alphabet. Chains under !!\prec!! are, by definition, ascending sequences of letters from !!S!!. It's easy to show that antichains are descending sequences.

Partition !!S!! into chains. If any chain has length !!6!! or more, that is an ascending sequence of letters and we win.

If not, then no ascending sequence, and hence no chain in any partition whatever, has more than 5 letters, so every partition must contain at least !!6!! chains, because !!5·5=25!! and there are !!26!! letters. In particular the minimal partition has at least !!6!! chains, and Dilworth's theorem tells us there is an antichain of at least !!6!! letters, and hence a descending sequence of that length, and we win.

Once you have the idea to use Dilworth's theorem, the problem collapses. (You also have to invent the right !!\prec!! relation, but there's really only one possible choice.)

Maybe I'll write a followup article about how just having a theorem doesn't necessarily mean you have an algorithm. Mathematicians will say “partition !!S!! into chains,” but actually programming something like that might be nontrivial. Then finding the antichain among the chains might also be nontrivial.
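For this particular problem, though, the direct route is not bad: a quadratic-time dynamic program finds a longest ascending or descending run without ever materializing the chains. A sketch:

    def longest_ascending(s):
        """One longest strictly ascending subsequence of s, by O(n^2) DP
        (n = 26 here, so nothing cleverer is needed)."""
        best = [1] * len(s)        # best[i]: longest run ending at i
        prev = [None] * len(s)
        for i in range(len(s)):
            for j in range(i):
                if s[j] < s[i] and best[j] + 1 > best[i]:
                    best[i], prev[i] = best[j] + 1, j
        i = max(range(len(s)), key=lambda k: best[k])
        run = []
        while i is not None:
            run.append(s[i])
            i = prev[i]
        return ''.join(reversed(run))

    def longest_descending(s):
        return longest_ascending(s[::-1])[::-1]

    s = 'qyxulingwockjdrpfvtsheazbm'
    print(longest_ascending(s), longest_descending(s))
    # the theorem guarantees at least one of the two has length 6 or more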


[Other articles in category /math] permanent link

Sat, 28 Aug 2021

Why is the S combinator an S?

The most important combinator in combinatory logic is the !!S!! combinator, defined simply:

$$ S x y z ⇒ (x z)(y z) $$

or in !!\lambda!!-calculus terms:

$$ S = \lambda x y z. (x z)(y z). $$

A wonderful theorem states that any !!\lambda!!-expression with no free variables can be converted into a combinator expression that contains only the combinators !!S, K,!! and !!I!!, where !!S!! is really the only interesting one of the three, !!I!! being merely the identity function, and !!K!! a constructor of constant functions:

$$ \begin{align} I x & = x \\ K x y & = x \\ \end{align} $$

In fact one can get along without !!I!! since !!S K K = I!!.
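Python's nested closures make a convenient toy model, if you want to watch !!S K K!! act as the identity:

    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x
    I = S(K)(K)

    print(I(42))     # 42: S K K behaves as the identity
    print(K(1)(2))   # 1:  K x y = x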

A not-too-infrequently-asked question is why the three combinators are named as they are. The !!I!! is an identity function and pretty obviously stands for “identity”.

Similarly the !!K!! constructs constant functions: !!K x!! is the combinator which ignores its argument and yields !!x!!. So it's not hard to imagine that !!K!! is short for Konstant, which is German for “constant”; no mystery there.

But why !!S!!? People typically guess that it stands for “substitution”, the idea being that if you have some application $$A\,B$$ then !!S!! allows one to substitute some term !!T!! for a free variable !!v!! in both !!A!! and !!B!! prior to the application:

$$ S\, A\, B\, T = A[v/T]\, B[v/T]. $$

Although this seems plausible, it's not correct.

Combinatory logic was introduced in a 1924 paper of Moses Schönfinkel. In it, he defines a family of combinators including the standard !!S!!, !!K!!, and !!I!!; he shows that only !!S!! and !!K!! are required. His initial set of combinators comprises the following:

$$ \begin{array}{cllrl} I & \textit{Identitätsfunktion} & \text{“identity function”}& I\,x =& x \\ C & \textit{Konstanzfunktion} & \text{“constancy function”} & C\,x\,y =& x \\ T & \textit{Vertauschungsfunktion} & \text{“swap function”} & T\,x\,y\,z=& x\,z\,y \\ Z & \textit{Zusammensetzungsfunktion} & \text{“composition function”} & Z\,x\,y\,z=& x\,(y\,z) \\ S & \textit{Verschmelzungsfunktion} & \text{“fusion function”} & S\,x\,y\,z=& x\,z\,(y\,z) \end{array} $$

(Schönfinkel also had combinators representing logical operations (one corresponding to the Sheffer stroke, which had been discovered in 1913), and to quantification, but those don't concern us right now.)

!!T!! and !!Z!! are now usually called !!C!! and !!B!!. These names probably originated in Curry's Grundlagen der kombinatorischen Logik (1930). Curry 1930 is probably also the origin of the change from !!C!! to !!K!!. I have no idea why Schönfinkel chose to abbreviate Konstanzfunktion as !!C!! instead of !!K!!. Curry notes that for !!I, K, B, C, S!! Schönfinkel has !!I, C, Z, T, S!!, but does not explain his changes. In Curry and Feys’ influential 1958 book on combinatory logic, the !!B!! and !!C!! combinators are given names that are literal translations of Schönfinkel's: “elementary permutator” and “elementary compositor”.

Returning to the !!S!! combinator, one sees that its German name in Schönfinkel's paper, Verschmelzungsfunktion, begins with the letter V, but so does Vertauschungsfunktion, so abbreviating either with V would have been ambiguous. Schönfinkel instead chose to abbreviate Verschmelzungsfunktion with S for its root schmelzen, “fusion”, and Vertauschungsfunktion with T for its root tauschen, “swap”. The word schmelzen is akin to English words “melt” and “smelt”.

The “swap” is straightforward: the !!T!! combinator swaps the order of the arguments to !!x!! in !!x\,y\,z!!: $$T\,x\,y\,z = x\,z\,y$$ but does not otherwise alter the structure of the expression.

But why is !!S!! the “melting” or “fusion” combinator? It's because Schönfinkel was interested in reducing arbitrary mathematical expressions to combinators. He will sometimes have an application !!(f\, x)(g\, x)!! and he wants to ‘fuse’ the two occurrences of !!x!!. He can do this by rewriting the expression as !!S\, f\, g\, x!!. Schönfinkel says:

Der praktische Nutzen der Function !!S!! besteht ersichtlich darin, daß sie es ermöglicht, mehrmals auftretende Veränderliche — und bis zu einem gewissen Grade auch individuelle Functionen — nur einmal auftreten zu lassen.

Clearly, the practical use of the function !!S!! will be to enable us to reduce the number of occurrences of a variable — and to some extent also of a particular function — from several to a single one.

(Translation from van Heijenoort, p. 362.)

So there you have it: the !!S!! combinator is so-named not for substitution, but because S is the first letter of schmelzen, ‘to fuse’.

References

  • Schönfinkel, M. “Über die Bausteine der mathematischen Logik” (“On the building-blocks of mathematical logic”), Mathematische Annalen 92 (1924), pp. 305–316; Berlin, Göttingen, Heidelberg.

    English translation in Van Heijenoort, Jean (ed.) From Frege to Gödel: a Source Book in Mathematical Logic, 1879–1931 (1967) pp. 355–366 Harvard University Press; Cambridge and London.

  • Curry, H.B. “Grundlagen der kombinatorischen Logik” (“Fundamentals of combinatory logic”), American Journal of Mathematics Vol. 52, No. 3 (Jul., 1930), pp. 509-536.

  • Curry, H.B. and Robert Feys Combinatory Logic (1958) p. 152 North-Holland Publishing Company, Amsterdam.


[Other articles in category /math] permanent link

Thu, 22 Jul 2021

The convergents of 2x

Take some real number !!\alpha!! and let its convergents be !!c_0, c_1, c_2, \ldots!!. Now consider the convergents of !!2\alpha!!. Sometimes they will include !!2c_0, 2c_1, 2c_2, \ldots!!, sometimes only some of these.

For example, the convergents of !!\pi!! and !!2\pi!! are

$$ \begin{array}{rlc} \pi & \approx & \color{darkblue}{3},&&& \color{darkblue}{\frac{22}{7}}, & \color{darkblue}{\frac{333}{106}}, && \color{darkblue}{\frac{355}{113}}, & \color{darkblue}{\frac{103993}{33102}}, && \frac{104348}{33215}, & \color{darkblue}{\frac{208341}{66317}}, & \ldots \\ 2\pi & \approx & \color{darkblue}{6}, & \frac{19}{3}, & \frac{25}{4}, & \color{darkblue}{\frac{44}{7}}, & \color{darkblue}{\frac{333}{53}}, && \color{darkblue}{\frac{710}{113}}, & \color{darkblue}{\frac{103993}{16551}}, & \frac{312689}{49766}, && \color{darkblue}{\frac{416682}{66317}}, & \ldots \end{array}
$$

Here are the analogous lists for !!\frac{1+\sqrt{5}}2!! and !!1+\sqrt5!!:

$$ \begin{array}{rlc} \frac12\left({1+\sqrt{5}}\right)& \approx & 1, & 2, & \color{darkblue}{\frac32}, & \frac53, & \frac85, & \color{darkblue}{\frac{13}8}, & \frac{21}{13}, & \frac{34}{21}, & \color{darkblue}{\frac{55}{34}}, & \frac{89}{55}, & \frac{144}{89}, & \color{darkblue}{\frac{233}{144}}, & \frac{377}{233}, &\frac{610}{377} , & \color{darkblue}{\frac{987}{610} }, & \ldots \\ 1+\sqrt{5} & \approx & & & \color{darkblue}{3}, &&& \color{darkblue}{\frac{13}4}, &&& \color{darkblue}{\frac{55}{17}}, &&& \color{darkblue}{\frac{233}{72}}, &&& \color{darkblue}{\frac{987}{305}}, & \ldots \end{array} $$

This time all the convergents in the second list are matched by convergents in the first list. The number !!\frac{1+\sqrt5}{2}!! is notorious because it's the real number whose convergents converge the most slowly. I'm surprised that !!1+\sqrt5!! converges so much more quickly; I would not have expected the factor of 2 to change the situation so drastically.
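These lists are easy to regenerate with the standard convergent recurrence; floating point only gets the first dozen or so partial quotients right, but that's plenty here. A sketch:

    from math import floor, pi, sqrt

    def convergents(x, n):
        """First n convergents of x as (numerator, denominator) pairs,
        via h = a*h' + h'' and k = a*k' + k''."""
        h_prev, h = 0, 1    # h(-2), h(-1)
        k_prev, k = 1, 0    # k(-2), k(-1)
        out = []
        for _ in range(n):
            a = floor(x)
            h_prev, h = h, a * h + h_prev
            k_prev, k = k, a * k + k_prev
            out.append((h, k))
            if x == a:
                break
            x = 1 / (x - a)
        return out

    print(convergents(pi, 5))           # (3,1), (22,7), (333,106), ...
    print(convergents(2 * pi, 7))       # (6,1), (19,3), (25,4), (44,7), ...
    print(convergents(1 + sqrt(5), 5))  # (3,1), (13,4), (55,17), ...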

I haven't thought about this at all yet, but it seems to me that a promising avenue would be to look at what Gosper's algorithm would do for the case !!x\mapsto 2x!! and see what simplifications can be done. This would probably produce some insight, and maybe a method for constructing a number !!\alpha!! so that all the convergents of !!2\alpha!! are twice those of !!\alpha!!.


[Other articles in category /math] permanent link

Thu, 08 Jul 2021

A simple dice-throwing game that seems hard to play

I ran into a fun math problem yesterday, easy to ask, easy to understand, but somewhat open-ended and seems to produce fairly complex behavior. It might be a good problem for a bright high school student to tinker with.

Consider the following one-player game. You start with a total of n points. On each turn, you choose to throw either a four-, six-, or eight-sided die, and then subtract the number thrown from your point total. The game continues until your total reaches zero (and you win) or goes below zero (and you lose).

This game seems surprisingly difficult to analyze. The computer analysis is quite easy, but what I mean is, if someone comes to you offering to pay you a dollar if you can win starting with !!n=9!! points, and it would be spoilsportish to say “just wait here for half an hour while I write this computer program”, what's your good move?

Is there even a way to make an educated guess, short of doing a full analysis? The !!n≤4!! strategy is obvious, but even for !!n=5!! you need to start calculating: rolling the d4 is safe. Rolling the d6 gives you a chance of wiping out, but also a chance of winning instantly; is that an improvement? (Spoiler: it is, quite substantially so! Your chance of winning increases from !!36\%!! to !!40.7\%!!.)

With the game as described, and optimal play, the probability of winning approaches !!45.66\%!! as the number of points increases, and the strategy is not simple: the best strategy for !!n≤20!! uses the d4 in 13 cases, the d6 in 4 cases, and the d8 in 3 cases:

$$\begin{array}{rcl} n & \text{Best play} & \text{Win %} \\ \hline 1 & 4\quad & 25.00\% \\ 2 & 4\quad & 31.25 \\ 3 & 4\quad & 39.06 \\ 4 & 4\quad & 48.83 \\ \hline 5 & 6 & 40.69\% \\ 6 & 6 & 47.47 \\ 7 & 4\quad & 44.01 \\ 8 & \quad8 & 47.04 \\ \hline 9 & 4\quad & 44.80\% \\ 10 & 4\quad & 45.83 \\ 11 & 4\quad & 45.42 \\ 12 & 4\quad & 45.77 \\ \hline 13 & 6 & 45.48 \% \\ 14 & \quad8 & 45.73 \\ 15 & 4\quad & 45.60 \\ 16 & \quad8 & 45.71 \\ \hline 17 & 4\quad & 45.63 \% \\ 18 & 4\quad & 45.67 \\ 19 & 4\quad & 45.65 \\ 20 & 6 & 45.67 \end{array} $$

It seems fairly clear (and not hard to prove) that when the die with fewest sides has !!d!! sides, the good numbers of points are multiples of !!d!!, with !!kd+1!! somewhat worse, and then !!kd+2, kd+3, \ldots!! generally better and better to the next peak at !!kd+d!!. But there are exceptions: even if !!d!! is not the smallest die, if you have a !!d!!-sided die, it is good to have !!d!! points, and when you do you should roll the !!d!!-sided die.

I did get a little more insight after making the chart above and seeing the 4-periodicity. In a comment on my Math SE post I observed:

There is a way to see quickly that the d4 is better for !!n=7!!. !!n=1!! is the worst possible position. !!n=2,3,!! and !!4!! are increasingly good; !!4!! is best because you can't lose and you might win outright. After that !!5!! is bad again, but not as bad as !!1!!, with !!6,7,8!! increasingly good. The pattern continues this way, with !!4k−3,4k−2,4k−1,4k!! being increasingly good, and then !!4k+1!! being worse again but better than !!4k−3!!. For !!n=7!!, the d6 allows one to land on !!\{1,2,3,4,5,6\}!!, and the d4 on !!\{3,4,5,6\}!!. But !!1!! is worse than !!5!! and !!2!! is worse than !!6!!, so prefer the d4.

The d4-d6-d8 case is unusually confusing, because for example it's not clear whether from 12 points you should throw d4, hoping to land on 8, or d6, hoping to land on 6. (I haven't checked but I imagine the two strategies perform almost equally well; similarly it probably doesn't matter much if you throw the d4 or the d6 first from !!n=10!!.)

That the d6 is best for !!n=13!! is very surprising to me.

Why !!45.66\%!!? I don't know. With only one die, the winning probability for large !!n!! converges to !!\frac2{n+1}!! which I imagine is a fairly straightforward calculation (but I have not done it). For more than one die, it seems much harder.

Is there a way to estimate the winning probability for large !!n!!, given the list of dice? Actually yes, a little bit: the probability of winning with just a d4 is !!\frac 25!!, and the d6 and d8 can't hurt, so we know the chance of winning with all three dice available will be somewhat more than !!40\%!!, as it is. The value of larger dice falls off rapidly with the number of sides, so for example with d4+d6 the chance of winning increases from !!40\%!! to almost !!45\%!!, and adding the d8 only nudges this up to !!45.66\%!!.

The probability of winning with a d2 is !!\frac 23!!, and if you have a d3 also the probability goes up to !!\frac 34!!, which seems simple enough, but if you add a d4 instead of the d3, it goes to !!68.965\%!!, whatever that is. And Dfan Schmidt tells me that d3 + d4 converges to !!\frac{512}{891}!!.

I wrote it up for Math StackExchange but nobody has replied yet.

Here's Python code to calculate the values. Enjoy.
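In case the link is inconvenient, the heart of the calculation is a few lines of memoized recursion; a minimal version (not necessarily the linked code) might be:

    from functools import lru_cache

    DICE = (4, 6, 8)

    def expected(n, d):
        """Win probability from n points if you now roll the d-sided die."""
        return sum(win(n - r) for r in range(1, d + 1)) / d

    @lru_cache(maxsize=None)
    def win(n):
        """Win probability from n points under optimal play."""
        if n == 0:
            return 1.0
        if n < 0:
            return 0.0
        return max(expected(n, d) for d in DICE)

    for n in range(1, 21):
        best = max(DICE, key=lambda d: expected(n, d))
        print(n, best, f"{win(n):.2%}")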

[ Addendum: Michael Lugo points out that the d2+d4 probability (“!!68.965\%!!, whatever that is”) is simply !!\frac{20}{29}!!, and gives some other similar results. One is that d3+d4+d5 has a winning probability of !!\frac{16}{27}!!; the small denominator is surprising. ]


[Other articles in category /math] permanent link

Fri, 01 Jan 2021

My big mistake about dense sets

I made a big mistake in a Math Stack Exchange answer this week. It turned out that I believed something that was completely wrong.

Here's the question, are terminating decimals dense in the reals?. It asks if the terminating decimals (that is, the rational numbers of the form !!\frac m{10^n}!!) are dense in the reals. “Dense in the reals” means that if an adversary names a real number !!r!! and a small distance !!d!!, and challenges us to find a terminating decimal !!t!! that is closer than !!d!! to point !!r!!, we can always do it. For example, is there a terminating decimal !!t!! that is within !!0.0000001!! of !!\sqrt 2!!? There is: !!\frac{14142135}{10^7} = 1.4142135!! is closer than that; the difference is less than !!0.00000007!!.

The answer to the question is ‘yes’ and the example shows why: every real number has a decimal expansion, and if you truncate that expansion far enough out, you get a terminating decimal that is as close as you like to the original number. This is the obvious and straightforward way to prove it, and it's just what the top-scoring answer did.

I thought I'd go another way, though. I said that it's enough to show that for any two terminating decimals, !!a!! and !!b!!, there is another one that lies between them. I remember my grandfather telling me long ago that this was a sufficient condition for a set to be dense in the reals, and I believed him. But it isn't sufficient, as Noah Schweber kindly pointed out.

(It is, of course, necessary, since if !!S!! is a subset of !!\Bbb R!!, and !!a,b\in S!! with no element of !!S!! between them, then there is no element of !!S!! within distance !!\frac{b-a}2!! of !!\frac{a+b}2!!. Both !!a!! and !!b!! are at exactly that distance, and every other point of !!S!! is farther away.)

The counterexample that M. Schweber pointed out can be explained quickly if you know what the Cantor middle-thirds set is: construct the Cantor set, and consider the set of midpoints of the deleted intervals; this set of midpoints has the property that between any two there is another, but it is not dense in the reals. I was going to do a whole thing with diagrams for people who don't know the Cantor set, but I think what follows will be simpler.

Consider the set of real numbers between 0 and 1. These can of course be represented as decimals, some terminating and some not. Our counterexample will consist of all the terminating decimals that end with !!5!!, and before that final !!5!! have nothing but zeroes and nines. So, for example, !!0.5!!. To the left and right of !!0.5!!, respectively, are !!0.05!! and !!0.95!!.

In between (and around) these three are: $$\begin{array}{l} \color{darkblue}{ 0.005 }\\ 0.05 \\ \color{darkblue}{ 0.095 }\\ 0.5 \\ \color{darkblue}{ 0.905 }\\ 0.95 \\ \color{darkblue}{ 0.995 }\\ \end{array}$$

(Dark blue are the new ones we added.)

And in between and around these are:

$$\begin{array}{l} \color{darkblue}{ 0.0005 }\\ 0.005 \\ \color{darkblue}{ 0.0095 }\\ 0.05 \\ \color{darkblue}{ 0.0905 }\\ 0.095 \\ \color{darkblue}{ 0.0995 }\\ 0.5 \\ \color{darkblue}{ 0.9005 }\\ 0.905 \\ \color{darkblue}{ 0.9095 }\\ 0.95 \\ \color{darkblue}{ 0.9905 }\\ 0.995 \\ \color{darkblue}{ 0.9995 }\\ \end{array}$$

Clearly, between any two of these there is another one, because around !!0.????5!! we've added !!0.????05!! before and !!0.????95!! after, which will lie between !!0.????5!! and any decimal with fewer !!?!! digits before it terminates. So this set does have the between-each-two-is-another property that I was depending on.

But it should also be clear that this set is not dense in the reals, because, for example, there is obviously no number of this type that is near !!0.7!!.

(This isn't the midpoints of the middle-thirds set, it's the midpoints of the middle-four-fifths set, but the idea is exactly the same.)
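
If you want to play with this set yourself, here's a little Python sketch, my addition rather than part of the original argument, that generates its members down to a given depth and confirms the gap around !!0.7!!:

    from itertools import product

    def counterexample(depth):
        # All terminating decimals that end in 5 and have only
        # 0s and 9s before that final 5, up to the given length.
        vals = []
        for n in range(depth):
            for digits in product("09", repeat=n):
                vals.append(float("0." + "".join(digits) + "5"))
        return sorted(vals)

    S = counterexample(10)
    print(min(S, key=lambda t: abs(t - 0.7)))
    # 0.5: nothing in the set is within 0.2 of 0.7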

Happy New Year, everyone!


[Other articles in category /math] permanent link

Sat, 21 Nov 2020

Testing for divisibility by 19

[ Previously, Testing for divisibility by 7. ]

A couple of nights ago I was keeping Katara company while she revised an essay on The Scarlet Letter (ugh) and to pass the time one of the things I did was tinker with the divisibility tests for 9 and 11. In the course of this I discovered the following method for divisibility by 19:

Double the last digit and add the next-to-last.
Double that and add the next digit over.
Repeat until you've added the leftmost digit.

The result will be a smaller number which is a multiple of 19 if and only if the original number was.

For example, let's consider, oh, I don't know, 2337. We calculate:

  • 7·2+3 = 17
  • 17·2 + 3 = 37
  • 37·2 + 2 = 76

76 is a multiple of 19, so 2337 was also. But if you're not sure about 76 you can compute 2·6+7 = 19 and if you're not sure about that you need more help than I can provide.

I don't claim this is especially practical, but it is fun, not completely unworkable, and I hadn't seen anything like it before. You can save a lot of trouble by reducing the intermediate values mod 19 when needed. In the example above, after the first step you get to 17, which you can reduce mod 19 to -2, and then the next step is -2·2+3 = -1, and the final step is -1·2+2 = 0.
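
Here's a little Python sketch of the method (my addition), with the mod-19 reduction built in:

    def reduces_to_zero_mod_19(n):
        # Apply the double-and-add rule, reducing mod 19 at each step.
        # The result is 0 if and only if n is divisible by 19.
        digits = [int(d) for d in str(n)]
        acc = digits.pop()                        # start with the last digit
        while digits:
            acc = (acc * 2 + digits.pop()) % 19   # double, add the next digit leftward
        return acc == 0

    print(reduces_to_zero_mod_19(2337))   # True: 2337 = 19 * 123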

Last time I wrote about this, Eric Roode sent me a whole compendium of divisibility tests, including one for divisibility by 19. It's a little like mine, but in reverse: group the digits in pairs, left to right; multiply each pair by 5 and then add the next pair. Here's 2337 again:

  • 23·5 + 37 = 152
  • 1·5 + 52 = 57

Again you can save a lot of trouble by reducing mod 19 before the multiplication. So instead of the first step being 23·5 + 37 you can reduce the 23·5 to 4·5 = 20 and then add the 37 to get 57 right away.

[ Addendum: Of course this was discovered long ago, and in fact Wikipedia mentions it. ]

[ Addendum 20201123: An earlier version of this article claimed that the double-and-add step I described preserves the mod-19 residue. It does not, of course; the doubling step doubles it. It is, however, true that it is zero afterward if and only if it was zero before. ]


[Other articles in category /math] permanent link

Sun, 18 Oct 2020

Newton's Method and its instability

While messing around with Newton's method for last week's article, I built this Desmos thingy:

The red point represents the initial guess; grab it and drag it around, and watch how the later iterations change. Or, better, visit the Desmos site and play with the slider yourself.

(The curve here is !!y = (x-2.2)(x-3.3)(x-5.5)!!; it has a local maximum at around !!2.7!!, and a local minimum at around !!4.64!!.)

Watching the attractor point jump around I realized I was arriving at a much better understanding of the instability of the convergence. Clearly, if your initial guess happens to be near an extremum of !!f!!, the next guess could be arbitrarily far away, rather than a small refinement of the original guess. But even if the original guess is pretty good, the refinement might be near an extremum, and then the following guess will be somewhere random. For example, although !!f!! is quite well-behaved in the interval !![4.3, 4.35]!!, as the initial guess !!g!! increases across this interval, the refined guess !!\hat g!! decreases from !!2.74!! to !!2.52!!, and in between these there is a local maximum that kicks the ball into the weeds. The result is that at !!g=4.3!! the method converges to the largest of the three roots, and at !!g=4.35!!, it converges to the smallest.
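
You can reproduce this numerically with a few lines of Python (mine, not from the original article); the derivative here is the hand-expanded cubic:

    def newton(f, fprime, g, steps=50):
        # Iterate the Newton refinement g <- g - f(g)/f'(g).
        for _ in range(steps):
            g = g - f(g) / fprime(g)
        return g

    # f(x) = (x-2.2)(x-3.3)(x-5.5) = x^3 - 11x^2 + 37.51x - 39.93
    f  = lambda x: (x - 2.2) * (x - 3.3) * (x - 5.5)
    fp = lambda x: 3 * x**2 - 22 * x + 37.51

    print(newton(f, fp, 4.30))   # 5.5: converges to the largest root
    print(newton(f, fp, 4.35))   # 2.2: converges to the smallest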

This is where the Newton basins come from:

Julia set for the rational function
associated to Newton's method for ƒ:z→z³−1.  The plane is divided
symmetrically into three regions, colored red, green, and blue.  The
boundary of each two regions (red and green, say) is inhabited by a
series of leaf shapes of the third color (blue), and the boundaries
between the main regions (green, say) and the (blue) leaves are
inhabited by smaller leaves again of the other color (red), and so on
ad infinitum.  The boundaries are therefore an infinitely detailed
filigree of smaller and smaller leaves of all three colors.

Here we are considering the function !!f:z\mapsto z^3 -1!! in the complex plane. Zero is at the center, and the obvious root, !!z=1!!, is to its right, deep in the large red region. The other two roots are at the corresponding positions in the green and blue regions.

Starting at any red point converges to the !!z=1!! root. Usually, if you start near this root, you will converge to it, which is why all the points near it are red. But some nearish starting points are near an extremum, so that the next guess goes wild, and then the iteration ends up at the green or the blue root instead; these areas are the rows of green and blue leaves along the boundary of the large red region. And some starting points on the boundaries of those leaves kick the ball into one of the other leaves…

Here's the corresponding basin diagram for the polynomial !!y = (x-2.2)(x-3.3)(x-5.5)!! from earlier:

Julia set for
the function (x-2.2)(x-3.3)(x-5.5).  There is a large pink region on
the left, a large yellow region on the right, and in between a large
blue hourglass-shaped region.  The boundary of yellow region is decorated
with pink bobbles that protrude into the blue region, and similarly
the boundary of the pink region is decorated with yellow
bobbles. Further details are in the text below.

The real axis is the horizontal hairline along the middle of the diagram. The three large regions are the main basins of attraction to the three roots (!!x=2.2, 3.3!!, and !!5.5!!) that lie within them.

But along the boundaries of each region are smaller intrusive bubbles where the iteration converges to a surprising value. A point moving from left to right along the real axis passes through the large pink !!2.2!! region, and then through a very small yellow bubble, corresponding to the values right around the local maximum near !!x=2.7!! where the process unexpectedly converges to the !!5.5!! root. Then things settle down for a while in the blue region, converging to the !!3.3!! root as one would expect, until the value gets close to the local minimum at !!4.64!! where there is a pink bubble because the iteration converges to the !!2.2!! root instead. Then as !!x!! increases from !!4.64!! to !!5.5!!, it leaves the pink bubble and enters the main basin of attraction to !!5.5!! and stays there.

If the picture were higher resolution, you would be able to see that the pink bubbles all have tiny yellow bubbles growing out of them (one is at !!4.39!!), and the tiny yellow bubbles have even tinier pink bubbles, and so on forever.

(This was generated by the Online Fractal Generator at usefuljs.net; the labels were added later by me. The labels’ positions are only approximate.)

[ Addendum: Regarding complex points and !!f : z\mapsto z^3-1!! I said “some nearish starting points are near an extremum”. But this isn't right; !!f!! has no extrema. It has an inflection point at !!z=0!! but this doesn't explain the instability along the lines !!\theta = \frac{2k+1}{3}\pi!!. So there's something going on here with the complex derivative that I don't understand yet. ]


[Other articles in category /math] permanent link

Fixed points and attractors, part 3

Last week I wrote about a super-lightweight variation on Newton's method, in which one takes this function: $$f_n : \frac ab \mapsto \frac{a+nb}{a+b}$$

or equivalently

$$f_n : x \mapsto \frac{x+n}{x+1}$$

Iterating !!f_n!! for a suitable initial value (say, !!1!!) converges to !!\sqrt n!!:

$$ \begin{array}{rr} x & f_3(x) \\ \hline 1.0 & 2.0 \\ 2.0 & 1.667 \\ 1.667 & 1.75 \\ 1.75 & 1.727 \\ 1.727 & 1.733 \\ 1.733 & 1.732 \\ 1.732 & 1.732 \end{array} $$
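
Here's that table as a two-line Python sketch, for what it's worth:

    def f(x, n=3):
        return (x + n) / (x + 1)

    x = 1.0
    for _ in range(7):
        x = f(x)
        print(x)   # 2.0, 1.666..., 1.75, 1.7272..., converging to sqrt(3) ~ 1.7320508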

Later I remembered that a few months back I wrote a couple of articles about a more general method that includes this as a special case:

The general idea was:

Suppose we were to pick a function !!f!! that had !!\sqrt 2!! as a fixed point. Then !!\sqrt 2!! might be an attractor, in which case iterating !!f!! will get us increasingly accurate approximations to !!\sqrt 2!!.

We can see that !!\sqrt n!! is a fixed point of !!f_n!!:

$$ \begin{align} f_n(\sqrt n) & = \frac{\sqrt n + n}{\sqrt n + 1} \\ & = \frac{\sqrt n(1 + \sqrt n)}{1 + \sqrt n} \\ & = \sqrt n \end{align} $$

And in fact, it is an attracting fixed point, because if !!x = \sqrt n + \epsilon!! then

$$\begin{align} f_n(\sqrt n + \epsilon) & = \frac{\sqrt n + \epsilon + n}{\sqrt n + \epsilon + 1} \\ & = \frac{(\sqrt n + \sqrt n\epsilon + n) - (\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1} \\ & = \sqrt n - \frac{(\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1} \end{align}$$

Disregarding the !!\epsilon!! in the denominator we obtain $$f_n(\sqrt n + \epsilon) \approx \sqrt n - \frac{\sqrt n - 1}{\sqrt n + 1} \epsilon $$

The error term !!-\frac{\sqrt n - 1}{\sqrt n + 1} \epsilon!! is strictly smaller than the original error !!\epsilon!!, because !!0 < \frac{x-1}{x+1} < 1!! whenever !!x>1!!. This shows that the fixed point !!\sqrt n!! is attractive.

In the previous articles I considered several different simple functions that had fixed points at !!\sqrt n!!, but I didn't think to consider this unusually simple one. I said at the time:

I had meant to write about Möbius transformations, but that will have to wait until next week, I think.

but I never did get around to the Möbius transformations, and I have long since forgotten what I planned to say. !!f_n!! is an example of a Möbius transformation, and I wonder if my idea was to systematically find all the Möbius transformations that have !!\sqrt n!! as a fixed point, and see what they look like. It is probably possible to automate the analysis of whether the fixed point is attractive, and if not to apply one of the transformations from the previous article to make it attractive.


[Other articles in category /math] permanent link

Tue, 13 Oct 2020

Newton's Method but without calculus — or multiplication

Newton's method goes like this: We have a function !!f!! and we want to solve the equation !!f(x) = 0.!! We guess an approximate solution, !!g!!, and it doesn't have to be a very good guess.

The graph of a wiggly
polynomial curve with roots between 2 and 2.5, between 3 and 3.5, and
between 5 and 6.  The middle root is labeled with a question mark.
One point of the curve, not too different from the root, is marked in
blue and labeled “⟨g,f(g)⟩”.

Then we calculate the line !!T!! tangent to !!f!! through the point !!\langle g, f(g)\rangle!!. This line intersects the !!x!!-axis at some new point !!\langle \hat g, 0\rangle!!, and this new value, !!\hat g!!, is a better approximation to the value we're seeking.

The same graph as
before, but with a tangent to the curve drawn through the blue point.
It intersects the x-axis quite close to the root.  The intersection
point is labeled “ĝ”.

In the left
margin, a close-up detail of the same curve as before, this time
showing the tangent line at ⟨ĝ,f(ĝ)⟩.  The diagram is a close-up
because the line intersects the x-axis extremely close to the actual
root.

Analytically, we have:

$$\hat g = g - \frac{f(g)}{f'(g)}$$

where !!f'(g)!! is the derivative of !!f!! at !!g!!.

We can repeat the process if we like, getting better and better approximations to the solution. (See detail at left; click to enlarge. Again, the blue line is the tangent, this time at !!\langle \hat g, f(\hat g)\rangle!!. As you can see, it intersects the axis very close to the actual solution.)


In general, this requires calculus or something like it, but in any particular case you can avoid the calculus. Suppose we would like to find the square root of 2. This amounts to solving the equation $$x^2-2 = 0.$$ The function !!f!! here is !!x^2-2!!, and !!f'!! is !!2x!!. Once we know (or guess) !!f'!!, no further calculus is needed. The method then becomes: Guess !!g!!, then calculate $$\hat g = g - \frac{g^2-2}{2g}.$$ For example, if our initial guess is !!g = 1.5!!, then the formula above tells us that a better guess is !!\hat g = 1.5 - \frac{2.25 - 2}{3} = 1.4166\ldots!!, and repeating the process with !!\hat g!! produces !!1.41421\mathbf{5686}!!, which is very close to the correct result !!1.41421\mathbf{3562}!!. If we want the square root of a different number !!n!! we just substitute it for the !!2!! in the numerator.

This method for extracting square roots works well and requires no calculus. It's called the Babylonian method and while there's no evidence that it was actually known to the Babylonians, it is quite ancient; it was first recorded by Hero of Alexandria about 2000 years ago.
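
As a Python sketch (mine, not Hero's), the whole method is a short loop:

    def babylonian_sqrt(n, g=1.5, steps=5):
        # Repeatedly replace the guess g with g - (g^2 - n)/(2g),
        # which is the same as the average of g and n/g.
        for _ in range(steps):
            g = g - (g * g - n) / (2 * g)
        return g

    print(babylonian_sqrt(2))   # 1.414213562373095, correct to machine precision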

How might this have been discovered if you didn't have calculus? It's actually quite easy. Here's a picture of the number line. Zero is at one end, !!n!! is at the other, and somewhere in between is !!\sqrt n!!, which we want to find.

A line with
the left endpoint marked “0”, the right endpoint marked “n”, and a
point in between marked “square root of n”.

Also somewhere in between is our guess !!g!!. Say we guessed too low, so !!0 \lt g < \sqrt n!!. Now consider !!\frac ng!!. Since !!g!! is too small to be !!\sqrt n!! exactly, !!\frac ng!! must be too large. (If !!g!! and !!\frac ng!! were both smaller than !!\sqrt n!!, then their product would be smaller than !!n!!, and it isn't.)

The previous
illustration, with green points marked “g” and “\frac{n}{g}”. The
first of these is to the left of the square root of n, the second to
its right.

Similarly, if the guess !!g!! is too large, so that !!\sqrt n < g!!, then !!\frac ng!! must be less than !!\sqrt n!!. The important point is that !!\sqrt n!! is between !!g!! and !!\frac ng!!. We have narrowed down the interval in which !!\sqrt n!! lies, just by guessing.

Since !!\sqrt n!! lies in the interval between !!g!! and !!\frac ng!! our next guess should be somewhere in this smaller interval. The most obvious thing we can do is to pick the point midway between !!g!! and !!\frac ng!!. So if we guess the average, $$\frac12\left(g + \frac ng\right),$$ this will probably be much closer to !!\sqrt n!! than !!g!! was:

The previous
illustration, but the point exactly midway between g and \frac{n}{g}
is marked in blue.  It is quite close to the point marked “square root
of n”.

This average is exactly what Newton's method would have calculated, because $$\frac12\left(g + \frac ng\right) = g - \frac{g^2-n}{2g}.$$

But we were able to arrive at the same computation with no calculus at all — which is why this method could have been, and was, discovered 1700 years before Newton's method itself.

If we're dealing with rational numbers then we might write !!g=\frac ab!!, and then instead of replacing our guess !!g!! with a better guess !!\frac12\left(g + \frac ng\right)!!, we could think of it as replacing our guess !!\frac ab!! with a better guess !!\frac12\left(\frac ab + \frac n{\frac ab}\right)!!. This simplifies to

$$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$

so that for example, if we are calculating !!\sqrt 2!!, and we start with the guess !!g=\frac32!!, the next guess is $$\frac{3^2 + 2\cdot2^2}{2\cdot3\cdot 2} = \frac{17}{12} = 1.4166\ldots$$ as we saw before. The approximation after that is !!\frac{289+288}{2\cdot17\cdot12} = \frac{577}{408} = 1.41421568\ldots!!. Used this way, the method requires only integer calculations, and converges very quickly.

But the numerators and denominators increase rapidly, which is good in one sense (it means you get to the accurate approximations quickly) but can also be troublesome because the numbers get big and also because you have to multiply, and multiplication is hard.

But remember how we figured out to do this calculation in the first place: all we're really trying to do is find a number in between !!g!! and !!\frac ng!!. We did that the first way that came to mind, by averaging. But perhaps there's a simpler operation that we could use instead, something even easier to compute?

Indeed there is! We can calculate the mediant. The mediant of !!\frac ab!! and !!\frac cd!! is simply $$\frac{a+c}{b+d}$$ and it is very easy to show that it lies between !!\frac ab!! and !!\frac cd!!, as we want.

So instead of the relatively complicated $$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$ operation, we can try the very simple and quick $$\frac ab \Rightarrow \operatorname{mediant}\left(\frac ab, \frac{nb}{a}\right) = \frac{a+nb}{b+a}$$ operation.

Taking !!n=2!! as before, and starting with !!\frac 32!!, this produces:

$$ \frac 32 \Rightarrow\frac{ 7 }{ 5 } \Rightarrow\frac{ 17 }{ 12 } \Rightarrow\frac{ 41 }{ 29 } \Rightarrow\frac{ 99 }{ 70 } \Rightarrow\frac{ 239 }{ 169 } \Rightarrow\frac{ 577 }{ 408 } \Rightarrow\cdots$$

which you may recognize as the convergents of !!\sqrt2!!. These are actually the rational approximations of !!\sqrt 2!! that are optimally accurate relative to the sizes of their denominators. Notice that !!\frac{17}{12}!! and !!\frac{577}{408}!! are in there as they were before, although it takes longer to get to them.
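
Here's the mediant iteration as a Python sketch (my addition). It deliberately keeps the numerator and denominator separate rather than using a fraction type, because, per the addendum below, the fractions should not be reduced to lowest terms:

    def mediant_step(a, b, n=2):
        # Replace a/b with the mediant of a/b and nb/a: (a + nb)/(b + a).
        return a + n * b, a + b

    a, b = 3, 2
    for _ in range(6):
        a, b = mediant_step(a, b)
        print(f"{a}/{b} = {a/b}")   # 7/5, 17/12, 41/29, 99/70, 239/169, 577/408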

I think it's cool that you can view it as a highly-simplified version of Newton's method.


[ Addendum: An earlier version of the last paragraph claimed:

None of this is a big surprise, because it's well-known that you can get the convergents of !!\sqrt n!! by applying the transformation !!\frac ab\Rightarrow \frac{a+nb}{a+b}!!, starting with !!\frac11!!.

Simon Tatham pointed out that this was mistaken. It's true when !!n=2!!, but not in general. The sequence of fractions that you get does indeed converge to !!\sqrt n!!, but it's not usually the convergents, or even in lowest terms. When !!n=3!!, for example, the numerators and denominators are all even. ]

[ Addendum: Newton's method as I described it, with the crucial !!g → g - \frac{f(g)}{f'(g)}!! transformation, was actually invented in 1740 by Thomas Simpson. Both Isaac Newton and Joseph Raphson had earlier described only special cases, as had several Asian mathematicians, including Seki Kōwa. ]

[ Previous discussion of convergents: Archimedes and the square root of 3; 60-degree angles on a lattice. A different variation on the Babylonian method. ]

[ Note to self: Take a look at what the AM-GM inequality has to say about the behavior of !!\hat g!!. ]

[ Addendum 20201018: A while back I discussed the general method of picking a function !!f!! that has !!\sqrt 2!! as a fixed point, and iterating !!f!!. This is yet another example of such a function. ]


[Other articles in category /math] permanent link

Fri, 21 Aug 2020

Mixed-radix fractions in Bengali

[ Previously, Base-4 fractions in Telugu. ]

I was really not expecting to revisit this topic, but a couple of weeks ago, looking for something else, I happened upon the following curiously-named Unicode characters:

    U+09F4 (e0 a7 b4): BENGALI CURRENCY NUMERATOR ONE [৴]
    U+09F5 (e0 a7 b5): BENGALI CURRENCY NUMERATOR TWO [৵]
    U+09F6 (e0 a7 b6): BENGALI CURRENCY NUMERATOR THREE [৶]
    U+09F7 (e0 a7 b7): BENGALI CURRENCY NUMERATOR FOUR [৷]
    U+09F8 (e0 a7 b8): BENGALI CURRENCY NUMERATOR ONE LESS THAN THE DENOMINATOR [৸]
    U+09F9 (e0 a7 b9): BENGALI CURRENCY DENOMINATOR SIXTEEN [৹]

Oh boy, more base-four fractions! What on earth does “NUMERATOR ONE LESS THAN THE DENOMINATOR” mean and how is it used?

An explanation appears in the Unicode proposal to add the related “ganda” sign:

     U+09FB (e0 a7 bb): BENGALI GANDA MARK [৻]

(Anshuman Pandey, “Proposal to Encode the Ganda Currency Mark for Bengali in the BMP of the UCS”, 2007.)

Pandey explains: prior to decimalization, the Bengali rupee (rupayā) was divided into sixteen ānā. Standard Bengali numerals were used to write rupee amounts, but there was a special notation for ānā. The sign ৹ always appears, and means sixteenths. Prefixed to this is a numerator symbol, which goes ৴, ৵, ৶, ৷ for 1, 2, 3, 4. So for example, 3 ānā is written ৶৹, which means !!\frac3{16}!!.

The larger fractions are made by adding the numerators, grouping by 4's:

    ৴ ৵ ৶            1, 2, 3
    ৷ ৷৴ ৷৵ ৷৶       4, 5, 6, 7
    ৷৷ ৷৷৴ ৷৷৵ ৷৷৶   8, 9, 10, 11
    ৸ ৸৴ ৸৵ ৸৶       12, 13, 14, 15

except that three fours (৷৷৷) is too many, and is abbreviated by the intriguing NUMERATOR ONE LESS THAN THE DENOMINATOR sign ৸ when more than 11 ānā are being written.
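
Here's a conjectural little Python function, based on my reading of the proposal, that renders a whole number of ānā in this notation:

    def ana_numeral(k):
        # 1 <= k <= 15 ana, written base-4 style: fours first, then units.
        fours, units = divmod(k, 4)
        prefix = "৸" if fours == 3 else "৷" * fours   # ৸ abbreviates ৷৷৷
        return prefix + ["", "৴", "৵", "৶"][units] + "৹"

    print(ana_numeral(3))    # ৶৹
    print(ana_numeral(9))    # ৷৷৴৹
    print(ana_numeral(15))   # ৸৶৹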

Historically, the ānā was divided into 20 gaṇḍā; the gaṇḍā amounts are written with standard (Bengali decimal) numerals instead of the special-purpose base-4 numerals just described. The gaṇḍā sign ৻ precedes the numeral, so 4 gaṇḍā (!!\frac15!! ānā) is written as ৻৪. (The ৪ is not an 8, it is a four.)

What if you want to write 17 rupees plus !!9\frac15!! ānā? That is 17 rupees plus 9 ānā plus 4 gaṇḍā. If I am reading this report correctly, you write it this way:

১৭৷৷৴৻৪

This breaks down into three parts as ১৭ ৷৷৴ ৻৪. The ১৭ is a 17, for 17 rupees; the ৷৷৴ means 9 ānā (the denominator ৹ is left implicit) and the ৻৪ means 4 gaṇḍā, as before. There is no separator between the rupees and the ānā. But there doesn't need to be, because different numerals are used! An analogous imaginary system in Latin script would be to write the amount as

17dda¢4

where the ‘17’ means 17 rupees, the ‘dda’ means 4+4+1=9 ānā, and the ¢4 means 4 gaṇḍā. There is no trouble seeing where the ‘17’ ends and the ‘dda’ begins.

Pandey says there was an even smaller unit, the kaṛi. It was worth ¼ of a gaṇḍā and was again written with the special base-4 numerals, but as if the gaṇḍā had been divided into 16. A complete amount might be written with decimal numerals for the rupees, base-4 numerals for the ānā, decimal numerals again for the gaṇḍā, and base-4 numerals again for the kaṛi. No separators are needed, because each section is written with symbols different from the ones in the adjoining sections.


[Other articles in category /math] permanent link

Wed, 05 Aug 2020

A maybe-interesting number trick?

I'm not sure if this is interesting, trivial, or both. You decide.

Let's divide the numbers from 1 to 30 into the following six groups:

A 1 2 4 8 16
B 3 6 12 17 24
C 5 9 10 18 20
D 7 14 19 25 28
E 11 13 21 22 26
F 15 23 27 29 30

Choose any two rows. Choose a number from each row, and multiply them mod 31. (That is, multiply them, and if the product is 31 or larger, divide it by 31 and keep the remainder.)

Regardless of which two numbers you chose, the result will always be in the same row. For example, any two numbers chosen from rows B and D will multiply to yield a number in row E. If both numbers are chosen from row F, their product will always appear in row A.
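
Here's a short Python sketch (my addition) that checks the claim exhaustively:

    from itertools import combinations_with_replacement, product

    groups = {
        'A': [1, 2, 4, 8, 16],
        'B': [3, 6, 12, 17, 24],
        'C': [5, 9, 10, 18, 20],
        'D': [7, 14, 19, 25, 28],
        'E': [11, 13, 21, 22, 26],
        'F': [15, 23, 27, 29, 30],
    }
    row_of = {n: r for r, members in groups.items() for n in members}

    for r1, r2 in combinations_with_replacement(groups, 2):
        # Every product of a member of r1 with a member of r2, taken
        # mod 31, should land in one single row.
        rows = {row_of[(a * b) % 31] for a, b in product(groups[r1], groups[r2])}
        assert len(rows) == 1
        print(r1, '*', r2, '->', rows.pop())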


[Other articles in category /math] permanent link

Sat, 01 Aug 2020

How are finite fields constructed?

Here's another recent Math Stack Exchange answer I'm pleased with.

OP asked:

I know this question has been asked many times and there is good information out there which has clarified a lot for me but I still do not understand how the addition and multiplication tables for !!GF(4)!! is constructed?

I've seen [links] but none explicity explain the construction and I'm too new to be told "its an extension of !!GF(2)!!"

The only “reasonable” answer here is “get an undergraduate abstract algebra text and read the chapter on finite fields”. Because come on, you can't expect some random stranger to appear and write up a detailed but short explanation at your exact level of knowledge.

But sometimes Internet Magic Lightning strikes  and that's what you do get! And OP set themselves up to be struck by magic lightning, because you can't get a detailed but short explanation at your exact level of knowledge if you don't provide a detailed but short explanation of your exact level of knowledge — and this person did just that. They understand finite fields of prime order, but not how to construct the extension fields. No problem, I can explain that!

I had special fun writing this answer because I just love constructing extensions of finite fields. (Previously: [1] [2])


For any given !!n!!, there is at most one field with !!n!! elements: only one, if !!n!! is a power of a prime number (!!2, 3, 2^2, 5, 7, 2^3, 3^2, 11, 13, \ldots!!) and none otherwise (!!6, 10, 12, 14\ldots!!). This field with !!n!! elements is written as !!\Bbb F_n!! or as !!GF(n)!!.

Suppose we want to construct !!\Bbb F_n!! where !!n=p^k!!. When !!k=1!!, this is easy-peasy: take the !!n!! elements to be the integers !!0, 1, 2\ldots p-1!!, and the addition and multiplication are done modulo !!n!!.

When !!k>1!! it is more interesting. One possible construction goes like this:

  1. The elements of !!\Bbb F_{p^k}!! are the polynomials $$a_{k-1}x^{k-1} + a_{k-2}x^{k-2} + \ldots + a_1x+a_0$$ where the coefficients !!a_i!! are elements of !!\Bbb F_p!!. That is, the coefficients are just integers in !!\{0, 1, \ldots, p-1\}!!, but with the understanding that the addition and multiplication will be done modulo !!p!!. Note that there are !!p^k!! of these polynomials in total.

  2. Addition of polynomials is done exactly as usual: combine like terms, but remember that the coefficients are added modulo !!p!! because they are elements of !!\Bbb F_p!!.

  3. Multiplication is more interesting:

    a. Pick an irreducible polynomial !!P!! of degree !!k!!. “Irreducible” means that it does not factor into a product of smaller polynomials. How to actually locate an irreducible polynomial is an interesting question; here we will mostly ignore it.

    b. To multiply two elements, multiply them normally, remembering that the coefficients are in !!\Bbb F_p!!. Divide the product by !!P!! and keep the remainder. Since !!P!! has degree !!k!!, the remainder must have degree at most !!k-1!!, and this is your answer.


Now we will see an example: we will construct !!\Bbb F_{2^2}!!. Here !!k=2!! and !!p=2!!. The elements will be polynomials of degree at most 1, with coefficients in !!\Bbb F_2!!. There are four elements: !!0x+0, 0x+1, 1x+0, !! and !!1x+1!!. As usual we will write these as !!0, 1, x, x+1!!. This will not be misleading.

Addition is straightforward: combine like terms, remembering that !!1+1=0!! because the coefficients are in !!\Bbb F_2!!:

$$\begin{array}{c|cccc} + & 0 & 1 & x & x+1 \\ \hline 0 & 0 & 1 & x & x+1 \\ 1 & 1 & 0 & x+1 & x \\ x & x & x+1 & 0 & 1 \\ x+1 & x+1 & x & 1 & 0 \end{array} $$

The multiplication as always is more interesting. We need to find an irreducible polynomial !!P!!. It so happens that !!P=x^2+x+1!! is the only one that works. (If you didn't know this, you could find out easily: a reducible polynomial of degree 2 factors into two linear factors. So the reducible polynomials are !!x^2, x·(x+1) = x^2+x!!, and !!(x+1)^2 = x^2+2x+1 = x^2+1!!. That leaves only !!x^2+x+1!!.)

To multiply two polynomials, we multiply them normally, then divide by !!x^2+x+1!! and keep the remainder. For example, what is !!(x+1)(x+1)!!? It's !!x^2+2x+1 = x^2 + 1!!. There is a theorem from elementary algebra (the “division theorem”) that we can find a unique quotient !!Q!! and remainder !!R!!, with the degree of !!R!! less than 2, such that !!PQ+R = x^2+1!!. In this case, !!Q=1, R=x!! works. (You should check this.) Since !!R=x!! this is our answer: !!(x+1)(x+1) = x!!.

Let's try !!x·x = x^2!!. We want !!PQ+R = x^2!!, and it happens that !!Q=1, R=x+1!! works. So !!x·x = x+1!!.

I strongly recommend that you calculate the multiplication table yourself. But here it is if you want to check:

$$\begin{array}{c|cccc} · & 0 & 1 & x & x+1 \\ \hline 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & x & x+1 \\ x & 0 & x & x+1 & 1 \\ x+1 & 0 & x+1 & 1 & x \end{array} $$
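
If you'd like to check the table mechanically, here is a small Python sketch (my own, not part of the answer) that represents each element !!a_1x + a_0!! as a pair of bits and reduces by !!x^2+x+1!!:

    def gf4_mul(u, v):
        a1, a0 = u
        b1, b0 = v
        # Multiply normally: c2*x^2 + c1*x + c0, coefficients mod 2...
        c2 = (a1 * b1) % 2
        c1 = (a1 * b0 + a0 * b1) % 2
        c0 = (a0 * b0) % 2
        # ...then reduce by x^2 + x + 1, i.e. replace x^2 with x + 1.
        return ((c1 + c2) % 2, (c0 + c2) % 2)

    name = {(0, 0): "0", (0, 1): "1", (1, 0): "x", (1, 1): "x+1"}
    for u in name:
        for v in name:
            print(f"({name[u]}) * ({name[v]}) = {name[gf4_mul(u, v)]}")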

To calculate the unique field !!\Bbb F_{2^3}!! of order 8, you let the elements be the 8 polynomials of degree at most 2: !!0, 1, x, \ldots, x^2+x, x^2+x+1!!, and instead of reducing by !!x^2+x+1!!, you reduce by !!x^3+x+1!!. (Not by !!x^3+x^2+x+1!!, because that factors as !!(x^2+1)(x+1)!!.) To calculate the unique field !!\Bbb F_{3^3}!! of order 27, you start with the 27 polynomials of degree at most 2 with coefficients in !!\{0,1,2\}!!, and you reduce by !!x^3+2x+1!! (I think).


The special notation !!\Bbb F_p[x]!! means the ring of all polynomials with coefficients from !!\Bbb F_p!!. !!\langle P \rangle!! means the ring of all multiples of polynomial !!P!!. (A ring is a set with an addition, subtraction, and multiplication defined.)

When we write !!\Bbb F_p[x] / \langle P\rangle!! we are constructing a thing called a “quotient” structure. This is a generalization of the process that turns the ordinary integers !!\Bbb Z!! into the modular-arithmetic integers we have been calling !!\Bbb F_p!!. To construct !!\Bbb F_p!!, we start with !!\Bbb Z!! and then agree that two elements of !!\Bbb Z!! will be considered equivalent if they differ by a multiple of !!p!!.

To get !!\Bbb F_p[x] / \langle P \rangle!! we start with !!\Bbb F_p[x]!!, and then agree that elements of !!\Bbb F_p[x]!! will be considered equivalent if they differ by a multiple of !!P!!. The division theorem guarantees that of all the equivalent polynomials in a class, exactly one of them will have degree less than that of !!P!!, and that is the one we choose as a representative of its class and write into the multiplication table. This is what we are doing when we “divide by !!P!! and keep the remainder”.


A particularly important example of this construction is !!\Bbb R[x] / \langle x^2 + 1\rangle!!. That is, we take the set of polynomials with real coefficients, but we consider two polynomials equivalent if they differ by a multiple of !!x^2 + 1!!. By the division theorem, each polynomial is then equivalent to some first-degree polynomial !!ax+b!!.

Let's multiply $$(ax+b)(cx+d).$$ As usual we obtain $$acx^2 + (ad+bc)x + bd.$$ From this we can subtract !!ac(x^2 + 1)!! to obtain the equivalent first-degree polynomial $$(ad+bc) x + (bd-ac).$$

Now recall that in the complex numbers, !!(b+ai)(d + ci) = (bd-ac) + (ad+bc)i!!. We have just constructed the complex numbers, with the polynomial !!x!! playing the role of !!i!!.


[ Note to self: maybe write a separate article about what makes this a good answer, and how it is structured. ]


[Other articles in category /math/se] permanent link

Fri, 31 Jul 2020

What does it mean to expand a function “in powers of x-1”?

A recent Math Stack Exchange poster was asked to expand the function !!e^{2x}!! in powers of !!(x-1)!! and was confused about what that meant, and what the point of it was. I wrote an answer I liked, which I am reproducing here.


You asked:

I don't understand what are we doing in this whole process

which is a fair question. I didn't understand this either when I first learned it. But it's important for practical engineering reasons as well as for theoretical mathematical ones.

Before we go on, let's see that your proposal is the wrong answer to this question, because it is the correct answer, but to a different question. You suggested: $$e^{2x}\approx1+2\left(x-1\right)+2\left(x-1\right)^2+\frac{4}{3}\left(x-1\right)^3$$

Taking !!x=1!! we get !!e^2 \approx 1!!, which is just wrong, since actually !!e^2\approx 7.39!!. As a comment pointed out, the series you have above is for !!e^{2(x-1)}!!. But we wanted a series that adds up to !!e^{2x}!!.

As you know, the Maclaurin series works here:

$$e^{2x} \approx 1+2x+2x^2+\frac{4}{3}x^3$$

so why don't we just use it? Let's try !!x=1!!. We get $$e^2\approx 1 + 2 + 2 + \frac43$$

This adds to !!6+\frac13!!, but the correct answer is actually around !!7.39!! as we saw before. That is not a very accurate approximation. Maybe we need more terms? Let's try ten:

$$e^{2x} \approx 1+2x+2x^2+\frac{4}{3}x^3 + \ldots + \frac{4}{2835}x^9$$

If we do this we get !!7.3887!!, which isn't too far off. But it was a lot of work! And we find that as !!x!! gets farther away from zero, the series above gets less and less accurate. For example, take !!x=3.1!!: the formula with four terms gives us !!66.14!!, which is dead wrong. Even if we use ten terms, we get !!444.3!!, which is still way off. The right answer is actually !!492.7!!.

What do we do about this? Just add more terms? That could be a lot of work and it might not get us where we need to go. (Some Maclaurin series just stop working at all too far from zero, and no amount of terms will make them work.) Instead we use a different technique.

Expanding the Taylor series “around !!x=a!!” gets us a different series, one that works best when !!x!! is close to !!a!! instead of when !!x!! is close to zero. Your homework is to expand it around !!x=1!!, and I don't want to give away the answer, so I'll do a different example. We'll expand !!e^{2x}!! around !!x=3!!. The general formula is $$e^{2x} \approx \sum \frac{f^{(i)}(3)}{i!} (x-3)^i\tag{$\star$}\ \qquad \text{(when $x$ is close to $3$)}$$

The !!f^{(i)}(x)!! is the !!i!!'th derivative of !! e^{2x}!! , which is !!2^ie^{2x}!!, so the first few terms of the series above are:

$$\begin{eqnarray} e^{2x} & \approx& e^6 + \frac{2e^6}1 (x-3) + \frac{4e^6}{2}(x-3)^2 + \frac{8e^6}{6}(x-3)^3\\ & = & e^6\left(1+ 2(x-3) + 2(x-3)^2 + \frac43(x-3)^3\right)\\ & & \qquad \text{(when $x$ is close to $3$)} \end{eqnarray} $$

The first thing to notice here is that when !!x!! is exactly !!3!!, this series is perfectly correct; we get !!e^6 = e^6!! exactly, even when we add up only the first term, and ignore the rest. That's a kind of useless answer because we already knew that !!e^6 = e^6!!. But that's not what this series is for. The whole point of this series is to tell us how different !!e^{2x}!! is from !!e^6!! when !!x!! is close to, but not equal to !!3!!.

Let's see what it does at !!x=3.1!!. With only four terms we get $$\begin{eqnarray} e^{6.2} & \approx& e^6(1 + 2(0.1) + 2(0.1)^2 + \frac43(0.1)^3)\\ & = & e^6 \cdot 1.22133 \\ & \approx & 492.72 \end{eqnarray}$$

which is very close to the correct answer, !!492.7!!. And that's with only four terms. Even if we didn't know an exact value for !!e^6!!, we could find out that !!e^{6.2}!! is about !!22.13\%!! larger, with hardly any calculation.
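
Here's a small Python sketch of this comparison (my addition):

    from math import exp, factorial

    def taylor_e2x(x, a=0.0, terms=4):
        # Truncated Taylor series of e^(2x) around x = a;
        # the i-th derivative at a is 2^i * e^(2a).
        return sum(2**i * exp(2 * a) / factorial(i) * (x - a)**i
                   for i in range(terms))

    print(taylor_e2x(3.1, a=0, terms=4))   # 66.14..., dead wrong
    print(taylor_e2x(3.1, a=3, terms=4))   # 492.72..., quite close
    print(exp(6.2))                        # 492.749...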

Why did this work so well? If you look at the expression !!(\star)!! you can see: The terms of the series all have factors of the form !!(x-3)^i!!. When !!x=3.1!!, these are !!(0.1)^i!!, which becomes very small very quickly as !!i!! increases. Because the later terms of the series are very small, they don't affect the final sum, and if we leave them out, we won't mess up the answer too much. So the series works well, producing accurate results from only a few terms, when !!x!! is close to !!3!!.

But in the Maclaurin series, which is around !!x=0!!, those !!(x-3)^i!! terms are !!x^i!! terms instead, and when !!x=3.1!!, they are not small, they're very large! They get bigger as !!i!! increases, and very quickly. (The !! i! !! in the denominator wins, eventually, but that doesn't happen for many terms.) If we leave out these many large terms, we get the wrong results.

The short answer to your question is:

Maclaurin series are only good for calculating functions when !!x!! is close to !!0!!, and become inaccurate as !!x!! moves away from zero. But a Taylor series around !!a!! has its “center” near !!a!! and is most accurate when !!x!! is close to !!a!!.


[Other articles in category /math/se] permanent link

Wed, 08 Jul 2020

Ron Graham has died

Ron Graham has died. He had a good run. When I check out I will probably not be as accomplished or as missed as Graham, even if I make it to 84.

I met Graham once and he was very nice to me, as he apparently was to everyone. I was planning to write up a reminiscence of the time, but I find I've already done it so you can read that if you care.

Graham's little book Rudiments of Ramsey Theory made a big impression on me when I was an undergraduate. Chapter 1, if I remember correctly, is a large collection of examples, which suited me fine. Chapter 2 begins by introducing a certain notation of Erdős and Rado: !!\left[{\Bbb N\atop k}\right]!! is the family of subsets of !!\Bbb N!! of size !!k!!, and

$$\left[{\Bbb N\atop k}\right] \to \left[{\Bbb N\atop k}\right]_r$$

is an abbreviation of the statement that for any !!r!!-coloring of members of !!\left[{\Bbb N\atop k}\right]!! there is always an infinite subset !!S\subset \Bbb N!! for which every member of !!\left[{S\atop k}\right]!! is the same color. I still do not find this notation perspicuous, and at the time, with much less experience, I was boggled. In the midst of my bogglement I was hit with the next sentence, which completely derailed me:

Scan of two lines from _Rudiments of Ramsey Theory_
including the sentence “We will occasionally use this arrow notation unless there is danger
of no confusion.”

After this I could no longer think about the mathematics, but only about the sentence.

Outside the mathematical community Graham is probably best-known for juggling, or for Graham's number, which Wikipedia describes:

At the time of its introduction, it was the largest specific positive integer ever to have been used in a published mathematical proof.

One of my better Math Stack Exchange posts was in answer to the question Graham's Number : Why so big?. I love the phrasing of this question! And that, even with the strange phrasing, there is an answer! This type of huge number is quite typical in proofs of Ramsey theory, and I answered in detail.

The sense of humor that led Graham to write “danger of no confusion” is very much on display in the paper that gave us Graham's number. If you are wondering about Graham's number, check out my post.


[Other articles in category /math] permanent link

Mon, 06 Jul 2020

Weird constants in math problems

Michael Lugo recently considered a problem involving the allocation of swimmers to swim lanes at random, ending with:

If we compute this for large !!n!! we get !!f(n) \sim 0.4323n!!, which agrees with the Monte Carlo simulations… The constant !!0.4323!! is $$\frac{(1-e^{-2})}2.$$

I love when stuff like this happens. The computer is great at doing a quick random simulation and getting you some weird number, and you have no idea what it really means. But mathematical technique can unmask the weird number and learn its true identity. (“It was Old Man Haskins all along!”)

A couple of years back Math Stack Exchange had Expected Number and Size of Contiguously Filled Bins, and although it wasn't exactly what was asked, I ended up looking into this question: We take !!n!! balls and throw them at random into !!n!! bins that are lined up in a row. A maximal contiguous sequence of all-empty or all-nonempty bins is called a “cluster”. For example, here we have 13 balls that I placed randomly into 13 bins:

13 boxes, some with blue balls.  The boxes
contain, respectively, 1, 0, 3, 0, 1, 2, 1, 1, 0, 1, 2, 1, 0 balls.

In this example, there are 8 clusters, of sizes 1, 1, 1, 1, 4, 1, 3, 1. Is this typical? What's the expected cluster size?

It's easy to use Monte Carlo methods and find that when !!n!! is large, the average cluster size is approximately !!2.15013!!. Do you recognize this number? I didn't.
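
The simulation is only a few lines of Python; here's a rough sketch of the kind of thing I mean (reconstructed, not the original code):

    import random

    def average_cluster_size(n, trials=100):
        # Throw n balls into n bins, then count maximal runs of
        # all-empty or all-nonempty bins; the average cluster size
        # is (total bins) / (total clusters).
        clusters = 0
        for _ in range(trials):
            bins = [0] * n
            for _ in range(n):
                bins[random.randrange(n)] += 1
            clusters += 1 + sum((bins[i] > 0) != (bins[i - 1] > 0)
                                for i in range(1, n))
        return n * trials / clusters

    print(average_cluster_size(10000))   # about 2.15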

But it's not hard to do the calculation analytically and discover that the reason it's approximately !!2.15013!! is that the actual answer is $$\frac1{2(e^{-1} - e^{-2})}$$ which is approximately !!2.15013!!.

Math is awesome and wonderful.

(Incidentally, I tried the Inverse Symbolic Calculator just now, but it was no help. It's also not in Plouffe's Miscellaneous Mathematical Constants)

[ Addendum 20200707: WolframAlpha does correctly identify the !!2.15013!! constant. ]


[Other articles in category /math] permanent link

Fri, 29 May 2020

Infinite zeroes with one on the end

I recently complained about people who claim:

you can't have a 1 after an infinite sequence of zeroes, because an infinite sequence of zeroes goes on forever.

When I read something like this, the first thing that usually comes to mind is the ordinal number !!\omega+1!!, which is a canonical representative of this type of ordering. But I think in the future I'll bring up this much more elementary example:

$$ S = \biggl\{-1, -\frac12, -\frac13, -\frac14, \ldots, 0\biggr\} $$

Even a middle school student can understand this, and I doubt they'd be able to argue seriously that it doesn't have an infinite sequence of elements that is followed by yet another final element.

Then we could define the supposedly problematic !!0, 0, 0, \ldots, 1!! thingy as a map from !!S!!, just as an ordinary sequence is defined as a map from !!\Bbb N!!.

[ Related: A familiar set with an unexpected order type. ]


[Other articles in category /math] permanent link

Thu, 13 Feb 2020

Gentzen's rules for natural deduction

Here is Gerhard Gentzen's original statement of the rules of Natural Deduction (“ein Kalkül für ‘natürliche’, intuitionistische Herleitungen”):

Screencap
from Gentzen's 1934 paper, titled “Die Schlußfiguren-Schemata”.  The
table is laid out in three lines, with the rules for ‘and’ and ‘or’,
then the rules for ‘exists’ and “for all’, and then the rules for
‘implies’, ‘not’, and ‘false’.  The variable names are written in
old-style German black-letter font, but otherwise the presentation is
almost identical to the modern form.

Natural deduction looks pretty much exactly the same as it does today, although the symbols are a little different. But only a little! Gentzen has not yet invented !!\land!! for logical and, and is still using !!\&!!. But he has invented !!\forall!!. The style of the !!\lnot!! symbol is a little different from what we use now, and he has that tent thingy !!⋏!! where we would now use !!\bot!!. I suppose !!⋏!! didn't catch on because it looks too much like !!\land!!. (He similarly used !!⋎!! to mean !!\top!!, but as usual, that doesn't appear in the deduction rules.)

We still use Gentzen's system for naming the rules. The notations “UE” and “OB” for example, stand for “und-Einführung” and “oder-Beseitigung”, which mean “and-introduction” and “or-elimination”.

Gentzen says (footnote 4, page 178) that he got the !!\lor, \supset, \exists!! signs from Russell, but he didn't want to use Russell's signs !!\cdot, \equiv, \sim, ()!! because they already had other meanings in mathematics. He took the !!\&!! from Hilbert, but Gentzen disliked his other symbols. Gentzen objected especially to the “uncomfortable” overbar that Hilbert used to indicate negation (“[Es] stellt eine Abweichung von der linearen Anordnung der Zeichen dar”: it deviates from the linear arrangement of the symbols). He attributes his symbols for logical equivalence (!!\supset\subset!!) and negation to Heyting, and explains that his new !!\forall!! symbol is analogous to !!\exists!!. I find it remarkable how quickly this caught on. Gentzen also later replaced !!\&!! with !!\land!!. Of the rest, the only one that didn't stick was !!\supset\subset!! in place of !!\equiv!!. But !!\equiv!! is much less important than the others, being merely an abbreviation.

Gentzen died at age 35, a casualty of the World War.

Source: Gerhard Gentzen, “Untersuchungen über das logische Schließen I”, pp. 176–210 Mathematische Zeitschrift v. 39, Springer, 1935. The display above appears on page 186.

[ Addendum 20200214: Thanks to Andreas Fuchs for correcting my German grammar. ]


[Other articles in category /math/logic] permanent link

Thu, 06 Feb 2020

Major screwups in mathematics: example 3

[ Previously: “Cases in which some statement S was considered to be proved, and later turned out to be false”. ]

In 1905, Henri Lebesgue claimed to have proved that if !!B!! is a subset of !!\Bbb R^2!! with the Borel property, then its projection onto a line (the !!x!!-axis, say) is a Borel subset of the line. This is false. The mistake was apparently noticed some years later by Andrei Souslin. In 1912 Souslin and Luzin defined an analytic set as the projection of a Borel set. All Borel sets are analytic, but, contrary to Lebesgue's claim, the converse is false. These sets are counterexamples to the plausible-seeming conjecture that all measurable sets are Borel.

I would like to track down more details about this. This Math Overflow post summarizes Lebesgue's error:

It came down to his claim that if !!{A_n}!! is a decreasing sequence of subsets in the plane with intersection !!A!!, the projected sets in the line intersect to the projection of !!A!!. Of course this is nonsense. Lebesgue knew projection didn't commute with countable intersections, but apparently thought that by requiring the sets to be decreasing this would work.


[Other articles in category /math] permanent link

Sun, 24 Nov 2019

The least common divisor and the greatest common multiple

One day Katara reported that her middle school math teacher had gone into a rage (whether real or facetious I don't know) about some kids’ use of the phrase “greatest common multiple”. “There is no greatest common multiple!” he shouted.

But that got me thinking. There isn't a greatest common multiple, but there certainly is a least common divisor; the least common divisor of !!a!! and !!b!! is !!1!!, for any !!a!! and !!b!!.

The least common multiple and greatest common divisor are beautifully dual, via the identity $$\operatorname{lcm}(a,b)\cdot\gcd(a,b) = ab,$$ so if there's a simple least common divisor I'd expect there would be a greatest common multiple also. I puzzled over this for a minute and realized what is going on. Contrary to Katara's math teacher, there is a greatest common multiple if you understand things properly.

There are two important orderings on the nonnegative integers. One is the familiar total ordering by magnitude: $$0 \le 1 \le 2 \le 3 \le 4 \le \ldots.$$ (The closely related !!\lt!! is nearly the same: !!a\lt b!! just means that !!a\le b!! and !!a\ne b!!.)

But there is another important ordering, more relevant here, in which the numbers are ordered not by magnitude but by divisibility. We say that !!a\preceq b!! only when !!b!! is a multiple of !!a!!. This has many of the same familiar and comforting properties that !!\le!! enjoys. Like !!\le!!, it is reflexive. That is, !!a\preceq a!! for all !!a!!, because !!a!! is a multiple of itself; it's !!a\cdot 1!!.

!!\preceq!! is also transitive: Whenever !!a\preceq b!! and !!b\preceq c!!, then also !!a\preceq c!!, because if !!b!! is a multiple of !!a!!, and !!c!! is a multiple of !!b!!, then !!c!! is a multiple of !!a!!.

Like !!\le!!, the !!\preceq!! relation is also antisymmetric. We can have both !!a≤b!! and !!b≤a!!, but only if !!a=b!!. Similarly, we can have both !!a\preceq b!! and !!b\preceq a!! but only if !!a=b!!.

But unlike the familiar ordering, !!\preceq!! is only a partial ordering. For the total ordering !!\le!!, given any !!a!! and !!b!! it must be the case that either !!a\le b!! or !!b \le a!!, with both occurring only when !!a=b!!. !!\preceq!! does not enjoy this property. Given !!a!! and !!b!!, it might be the case that !!a\preceq b!!, or !!b\preceq a!!, or both (if !!a=b!!), or neither. For example, neither of !!15\preceq 10!! and !!10 \preceq 15!! holds, because neither of !!10!! and !!15!! is a multiple of the other.

The total ordering !!≤!! lines up the numbers in an infinite queue, first !!0!!, then !!1!!, and so on to infinity, each one followed directly by its successor.

as
described in the previous paragraph

The partial ordering !!\preceq!! is less rigid and more complex. It is not anything like an infinite queue, and it is much harder to draw. A fragment is shown below:

one node
for thirty of the first 42 positive integers, arranged in a complex
graph; further description in the main text

The nodes here have been arranged so that each number is to the left of all the numbers it divides. The number !!1!!, which divides every other number, is leftmost. The next layer has all the prime numbers, each one with a single arrow pointing to its single divisor !!1!!, although I've omitted most of them from the picture. The second layer has all the numbers of the form !!p^2!! and !!pq!! where !!p!! and !!q!! are primes: !!4, 6, 9, 10, 14, \ldots!!, each one pointing to its prime divisors in the first layer. The third layer should have all the numbers of the form !!p^3, p^2q, !! and !!pqr!!, but I have shown only !!8, 12, 18, !! and !!20!!. The complete diagram would extend forever to the right.

In this graph, “least” and “greatest” have a sort-of-geometric interpretation, “least” roughly corresponding to “leftmost”. But you have to understand “leftmost” in the correct way: it really means “leftmost along the arrows”. In the picture !!11!! is left of !!4!!, but that doesn't count because it's not left of 4 along an arrow path. In the !!\preceq!! ordering, !!11\not \preceq 4!! and !!4\not \preceq 11!!. Similarly “greatest” means “rightmost, but only along the arrows”.

A divisor of a number !!n!! is exactly a node that can be reached from !!n!! by following the arrows leftward. A common divisor of !!n!! and !!m!! is a node that can be reached from both !!n!! and !!m!!. And the greatest common divisor is the rightmost such node.

Just the six nodes 1, 2, 4, 8,
12, 16 from the previous diagram.  1 is left of 2, which is left of 4,
which is left of 8 and 12.  8 is left of 16.

For example, the greatest common divisor of !!12!! and !!16!! is the rightmost node that can be reached by following arrows from both !!12!! and from !!16!!; that's !!4!!. The number !!8!! is right of !!4!! but can't be reached from !!12!!. The number !!12!! is right of !!4!! but can't be reached from !!16!!. The number !!2!! can be reached from both, but !!4!! is right of !!2!!. So the greatest common divisor is !!4!!.


Is there a least common divisor of !!n!! and !!m!!? Sure there is: it's the leftmost node that can be reached from both !!n!! and !!m!!. Clearly, it's always !!1!!, regardless of what !!n!! and !!m!! are.

Going in the other direction, a multiple of !!n!! is a node from which you can reach !!n!! by following the arrows. A common multiple of !!n!! and !!m!! is one from which you can reach both !!n!! and !!m!!, and the least common multiple of !!n!! and !!m!! is the leftmost such node.

But what about the greatest common multiple, angrily denied by Katara's math teacher? It should be the rightmost node from which you can reach both !!n!! and !!m!!. The diagram extends infinitely far to the right, so surely there isn't a rightmost such node?

Not so! There is a rightmost node! !!0!! is a multiple of !!n!! for every !!n!!, we have !!n \preceq 0!! for every !!n!!, and so zero is to the right of every other node. The diagram actually looks like this:

The same
as the previous, but with zero added on the right margin.  There are
no explicit connections with any of the other nodes, because in every
case there are (infinite many) nodes on the path between, which are
omitted from the diagram.

where I left out an infinite number of nodes in between !!0!! and the rest of the diagram.

It's a bit odd to think of !!0!! by itself there, all the way on the right, because in every other case, we have !!m \preceq n!! only when !!m≤ n!!. !!n=0!! is the only exception to that. But when you're thinking about divisibility, that's how you have to think about it: !!1!! is least, as you expect, not because it's smallest but because it's a divisor of every number. And dually, !!0!! is greatest, because it's a multiple of every number.
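
Incidentally, Python's standard library agrees with this convention (math.lcm needs Python 3.9 or later):

    from math import gcd, lcm

    print(gcd(12, 0))    # 12: every n divides 0, so gcd(n, 0) = n
    print(lcm(12, 0))    # 0: zero is a common multiple of everything
    print(gcd(12, 18) * lcm(12, 18) == 12 * 18)   # True: 6 * 36 = 216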

So that's the answer. Consideration of the seemingly silly question of the greatest common multiple has led us to a better understanding of the multiplicative structure of the integers, and we see that there is a greatest common multiple, which is exactly dual to the least common divisor, just as the least common multiple is dual to the greatest common divisor.


[Other articles in category /math] permanent link

Fri, 11 Oct 2019

More fair cake-cutting

In a recent article about fair cake division, I said:

Grandma can use the same method … to divide a regular 17-gonal cake into 23 equally-iced equal pieces.

I got to wondering what that would look like, and the answer is, not very interesting. A regular 17-gon is pretty close to a circle, and the 23 pieces, which are quite narrow, look very much like equal wedges of a circle:

A 17-gon divided into 23 equal pieces, as described in the
previous paragraph

This is generally true, and it becomes more nearly so both as the number of sides of the polygon increases (it becomes more nearly circular) and as the number of pieces increases (the very small amount of perimeter included in each piece is not very different from a short circular arc).

Recall my observation from last time that even in the nearly extreme case of a square divided into three slices, the central angles deviate from equality by only a few percent.

Of particular interest to me is this series of demonstrations of how to cut four pieces from a cake with an odd number of sides:

I think this shows that the whole question is a little bit silly: if you just cut the cake into equiangular wedges, the resulting slices are very close in volume and in frosting. If the nearly-horizontal cuts in the pentagon above had been perfectly straight and along the !!y!!-axis, they would have intersected the pentagon only 3% of a radius-length lower than they should have.

Some of the simpler divisions of simpler cakes are interesting. A solution to the original problem (of dividing a square cake into nine pieces) is highlighted.

The method as given works regardless of where you make the first cut. But the results do not look very different in any case:

The original SVG files are also available, as is the program that wrote them.


[Other articles in category /math] permanent link

Thu, 10 Oct 2019

Incenters of chocolate-iced cakes

A MathOverflow post asks:

Puzzle 1: Grandma made a cake whose base was a square of size 30 by 30 cm and the height was 10 cm. She wanted to divide the cake fairly among her 9 grandchildren. How should she cut the cake?

Okay, this is obvious.

Puzzle 2: Grandma made a cake whose base was a square of size 30 by 30 cm and the height was 10 cm. She put chocolate icing on top of the cake and on the sides, but not on the bottom. She wanted to divide the cake fairly among her 9 grandchildren so that each child would get an equal amount of the cake and the icing. How should she cut the cake?

This one stumped me; the best I could do was to cut the cake into 27 slabs, each !!\frac{10}3×10×10!! cm, and each with between 1 and 5 units of icing. Then we can give three slabs to each grandkid, taking care that each kid's slabs have a total of 7 units of icing. This seems like it might work for an actual cake, but I suspected that it wasn't the solution that was wanted, because the problem seems like a geometry problem and my solution is essentially combinatorial.

Indeed, there is a geometric solution, which is more interesting, and which cuts the cake into only 9 pieces.

I eventually gave up and looked at the answer, which I will discuss below. Sometimes when I give up I feel that if I had thought a little harder or given up a little later, I would have gotten the answer, but not this time. It depends on an elementary property of squares that I had been completely unaware of.

This is your last chance to avoid spoilers.


The solution given is this: Divide the perimeter of the square cake into 9 equal-length segments, each of length !!\frac{120}{9}!! cm. Some of these will be straight and others may have right angles; it does not matter. Cut from the center of the cake to the endpoints of these segments; the resulting pieces will satisfy the requirements.

“Wat,” I said. “If the perimeter lengths are equal, then the areas are equal? How can that be?”

This is obviously true for two pieces; if you cut the square from the center into two parts that divide the perimeter equally, then of course they are the same size and shape. But surely that is not the case for three pieces?

I could not believe it until I got home and drew some pictures on graph paper. Here Grandma has cut her cake into three pieces in the prescribed way:

A square with vertices
at ‹±3, ±3› and center at ‹0,0›. Three regions are marked on it: a
blue kite
with vertices
‹0,0›, ‹1,3›, ‹-3,3›, ‹-3,-1›; a pink irregular quadrilateral
with vertices ‹0,0›, ‹3,-3›, ‹3,3›, ‹1,3›; a green irregular
quadrilateral congruent to the pink one with vertices ‹0,0›, ‹-3,-1›,
‹-3,-3›, ‹3,-3›.  The three regions completely partition the square
with no overlap and nothing left over.

The three pieces are not the same shape! But each one contains one-third of the square's outer perimeter, and each has an area of 12 square units. (Note, by the way, that although the central angles may appear equal, they are not; the blue one is around 126.9° and the pink and green ones are only 116.6°.)

And indeed, any piece cut by Grandma from the center that includes one-third of the square's perimeter will have an area of one-third of the whole square:

A square with vertices
    at ‹±3, ±3› and center at ‹0,0›. Marked on it is an orange trapezoid
    with vertices ‹0,0›, ‹-3,-2›, ‹-3,3›, ‹0,3›. Also a pink pentagon with
    vertices ‹0,0›, ‹2,-3›, ‹3,-3›, ‹3,3›, ‹2,3›.  Both polygons
    include 8 units of the square’s 24 perimeter units, and both
    have an area of 12 square units.

The proof that this works is actually quite easy. Consider a triangle !!OAB!! where !!O!! is the center of the square and !!A!! and !!B!! are points on one of the square's edges.

The triangle's area is half its height times its base. The base is of course the length of the segment !!AB!!, and the height is the length of the perpendicular from !!O!! to the edge of the square. So for any such triangle, its area is proportional to the length of !!AB!!.

No two of the five triangles below are congruent, but each has the same base and height, and so each has the same area.

Five wedges
radiate downward in different directions from the center of the
square, each arriving at a different part of the edge but each with a
base of 1 unit.

Since the center of the square is the same distance from each of the four edges, the same is true for any two triangles, regardless of which edge they arise from: the area of each triangle is proportional to the length of the square's perimeter at its base. Any piece Grandma cuts in this way, from the center of the cake to the edge, is a disjoint union of triangular pieces of this type, so the total area of any such piece is also proportional to the length of the square's perimeter that it includes.

That's the crucial property of the square that I had not known before: if you make cuts from the center of a square, the area of the piece you get is proportional to the length of the perimeter that it contains. Awesome!
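
It's easy to check this numerically. Here is a quick sketch in Python (my own check, not part of the original argument): cut the square with corners at !!\langle\pm3, \pm3\rangle!! from the center at perimeter points one-third of the perimeter apart, starting from an arbitrary spot, and compute the areas with the shoelace formula.

    import math

    corners = [(3, 3), (-3, 3), (-3, -3), (3, -3)]   # CCW; each edge has length 6

    def point_at(s):
        # the point s units along the perimeter, counterclockwise from (3, 3)
        s %= 24
        i, t = int(s // 6), s % 6
        (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % 4]
        return (x1 + (x2 - x1) * t / 6, y1 + (y2 - y1) * t / 6)

    def piece(a, b):
        # polygon for the piece between perimeter positions a and b: the
        # center, the start point, any corners passed, and the end point
        pts = [(0.0, 0.0), point_at(a)]
        s = 6 * math.ceil(a / 6)
        while s < b:
            pts.append(point_at(s))
            s += 6
        pts.append(point_at(b))
        return pts

    def area(poly):
        # shoelace formula
        return abs(sum(x1 * y2 - x2 * y1
                       for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

    start = 1.7                       # an arbitrary place to make the first cut
    for k in range(3):
        print(area(piece(start + 8 * k, start + 8 * (k + 1))))

Each piece comes out to exactly 12, one third of the square's area of 36, no matter where the first cut is made.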

Here Grandma has used the same method to cut a pair of square cakes into ten equal-sized pieces that all have the same amount of icing.

A 10×10 square divided
    into five pieces from the center.  The pieces are three
    different shapes, but each piece contains 8 units
    of the square's perimeter and has an area of 20 square units.

The crucial property here was that the square’s center is the same distance from each of its four edges. This is really obvious, but not every polygon has an analogous point. The center of a regular polygon always has this property, and every triangle has a unique point, called its incenter, which enjoys this property. So Grandma can use the same method to divide a triangular cake into 7 equally-iced equal pieces, if she can find its incenter, or to divide a regular 17-gonal cake into 23 equally-iced equal pieces.

Not every polygon does have an incenter, though. Rhombuses and kites always do, but rectangles do not, except when they are square. If Grandma tries this method with a rectangular sheet cake, someone will get shortchanged. I learned today that polygons that have incenters are known as tangential polygons. They are the only polygons in which one can inscribe a single circle that is tangent to every side of the polygon. This property is easy to detect: these are exactly the polygons in which all the angle bisectors meet at a single point. Grandma should be able to fairly divide the cake and icing for any tangential polygon.

I have probably thought about this before, perhaps in high-school geometry but perhaps not since. Suppose you have two lines, !!m!! and !!n!!, that cross at an acute angle at !!P!!, and you consider the set of points that are equidistant from both !!m!! and !!n!!. Let !!\ell!! be a line through !!P!! which bisects the angle between !!m!! and !!n!!; clearly any point on !!\ell!! will be equidistant from !!m!! and !!n!! by a straightforward argument involving congruent triangles.

Now consider a triangle !!△ABC!!. Let !!P!! be the intersection of the angle bisectors of angles !!∠ A!! and !!∠ B!!.

The
triangle, as described, with the two angle bisectors drawn and their
intersection at P.  Centered at P is a circle that is tangent to all
of AB, BC, and CA at the same time.

!!P!! is the same distance from both !!AB!! and !!AC!! because it is on the angle bisector of !!∠ A!!, and similarly it is the same distance from both !!AB!! and !!BC!! because it is on the angle bisector of !!∠ B!!.

So therefore !!P!! is the same distance from both !!AC!! and !!BC!! and it must be on the angle bisector of angle !!∠ C!! also. We have just shown that a triangle's three angle bisectors are concurrent! I knew this before, but I don't think I knew a proof.

[ Addendum 20191011: Many illustrated examples. ]


[Other articles in category /math] permanent link

Tue, 17 Sep 2019

Breaking pills

Suppose you have a bottle that contains !!N!! whole pills. Each day you select a pill at random from the bottle. If it is a whole pill you eat half and put the other half back in the bottle. If it is a half pill, you eat it. How many half-pills can you expect to have in the bottle the day after you break the last whole pill?

Let's write !!H(N)!! for the expected number of half-pills left, if you start with !!N!!. It's easily seen that !!H(N) = 0, 1, \frac32!! for !!N=0,1,2!!, and it's not hard to calculate that !!H(3) = \frac{11}{6}!!.

For larger !!N!! it's easy to use Monte Carlo simulation, and find that !!H(30)!! is almost exactly !!4!!. But it's also easy to use dynamic programming and compute that $$H(30) = \frac{9304682830147}{2329089562800}$$ exactly, which is a bit less than 4, only !!3.994987!!. Similarly, the dynamic programming approach tells us that $$H(100) = \frac{14466636279520351160221518043104131447711}{2788815009188499086581352357412492142272}$$ which is about !!5.187!!.
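
Here is a minimal sketch of the dynamic-programming calculation, with exact rational arithmetic. (This is my own illustration, not necessarily the program I actually used.)

    from fractions import Fraction
    from functools import lru_cache

    # State: w whole pills and h half pills in the bottle.
    @lru_cache(maxsize=None)
    def E(w, h):
        # expected number of half-pills present just after the last
        # whole pill is broken, starting from w whole and h half pills
        if w == 0:
            return Fraction(h)
        total = w + h
        result = Fraction(w, total) * E(w - 1, h + 1)   # drew a whole pill
        if h > 0:
            result += Fraction(h, total) * E(w, h - 1)  # drew a half pill
        return result

    def H(n):
        return E(n, 0)

    print(H(3))     # 11/6
    print(H(30))    # 9304682830147/2329089562800, just under 4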

(I hate the term “dynamic programming”. It sounds so cool, but then you find out that all it means is “I memoized the results in a table”. Ho hum.)

As you'd expect for a distribution with a small mean, you're much more likely to end with a small number of half-pills than a large number. In this graph, the red line shows the probability of ending with various numbers of half-pills for an initial bottle of 100 whole pills; the blue line for an initial bottle of 30 whole pills, and the orange line for an initial bottle of 5 whole pills. The data were generated by this program.

screenshot of the graph
described above.  In each case, the probability starts relatively
high, then drops rapidly to nearly zero.

The !!H!! function appears to increase approximately logarithmically. It first exceeds !!2!! at !!N=4!!, !!3!! at !!N=11!!, !!4!! at !!N=31!!, and !!5!! at !!N=83!!. The successive ratios of these !!N!!-values are !!2.75, 2.81,!! and !!2.68!!. So we might guess that !!H(N)!! first exceeds 6 around !!N=228!! or so, and indeed !!H(226) < 6 < H(227)!!. So based on purely empirical considerations, we might guess that $$H(N) \approx \frac{\log{\frac{15}{22}N}}{\log 2.75}.$$

(The !!\frac{15}{22}!! is a fudge factor to get the curves to line up properly.)

I don't have any theoretical justification for this, but I think it might be possible to get a bound.

I don't think modeling this as a Markov process would work well. There are too many states, and it is not ergodic.

[ Addendum 20190919: Ben Handley informs me that !!H(n)!! is simply the harmonic numbers, !!H(n) = \sum_{i=1}^n \frac1i!!. I feel a little foolish that I did not notice that the first four terms matched. The appearance of !!H(3)=\frac{11}6!! should have tipped me off. Thank you, M. Handley. ]

[ Addendum 20190920: I was so fried when I wrote this that I also didn't notice that the denominator I guessed, !!2.75!!, is almost exactly !!e!!. (The correct value is !!e!!.) I also suspect that the !!\frac{15}{22}!! is just plain wrong, and ought to be !!e^\gamma!! or something like that, but I need to take a closer look. ]

[ Addendum 20191004: More about this ]


[Other articles in category /math] permanent link

Sat, 14 Sep 2019

Convergence of Taylor series

[ (Previously) ]

Wikipedia has an animation that shows the convergence of the Maclaurin series for the exponential function:

Wikipedia's
graphic, which focuses on x between -3 and +3, and y between -5 and
+20.  It displays the curve
y=exp(x) overlaid with its sequence of Maclaurin series
approximations, one after the other.

This makes it look a lot better-behaved than it is. First, because it only shows a narrow region around the center of convergence. But also, it emphasizes the right-hand quadrant, where the convergence is monotone.

If you focus on the left-hand side instead, you see the approximations thrashing back and forth like a screen door in a hurricane:

The same,
but it focuses on x between -6 and 0, and y between -5 and 1.

This is the same as Wikipedia's animation, just focused on a slightly different region of the plane. But how different it looks!

(Individual frames are available)


[Other articles in category /math] permanent link

Sun, 08 Sep 2019

The exponential function is a miracle

The Maclaurin series for the exponential function converges for every complex number !!x!!: $$1 - x + \frac{x^2}{2} - \frac{x^3}{6} + \frac{x^4}{24} - \ldots = e^{-x} $$

Say that !!x!! is any reasonably large number, such as 5. Then !!e^{-x}!! is close to zero. But the terms of the series are not close to zero. For !!x=5!! we have: $$ 1 - 5 + 12.5 - 20.83 + 26.04 - 26.04 + 21.7 - 15.5 + 9.69 - 5.38 + \ldots \approx {\Large 0}$$

Somehow all these largish random numbers manage to cancel out almost completely. And the larger we make !!x!!, the more of these largish random numbers there are, the larger they are, and yet the more exactly they cancel out. For even as small an argument as !!x=20!!, the series begins with 52 terms that vary between 1 and forty-three million, and these somehow cancel out almost entirely. The sum of these 52 numbers is !!-0.4!!.
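
The cancellation is easy to reproduce with exact rational arithmetic; here is a quick sketch checking the numbers above:

    from fractions import Fraction
    from math import exp, factorial

    # The first 52 terms of the series for e^{-20}, computed exactly.
    x = 20
    terms = [Fraction((-x) ** k, factorial(k)) for k in range(52)]
    print(float(max(abs(t) for t in terms)))   # about 4.3e7: forty-three million
    print(float(sum(terms)))                   # about -0.4
    print(exp(-20))                            # the true sum, about 2.06e-9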

In this graph, the red lines are the various partial sums (!!1-x, 1-x+\frac{x^2}2, !! etc.) and the blue line is the total sum !!e^{-x}!!.

As you can see, each red line is a very bad approximation to the blue one, except within a rather narrow region. And yet somehow, it all works out in the end.

[ (Addendum) ]


[Other articles in category /math] permanent link

Wed, 07 Aug 2019

Technical devices for reducing the number of axioms

In a recent article, I wrote:

I guessed it was a mere technical device, similar to the one that we can use to reduce five axioms of group theory to three. …

The fact that you can discard two of the axioms is mildly interesting, but of very little practical value in group theory.

There was a sub-digression, which I removed, about a similar sort of device that does have practical value. Suppose you have a group !!\langle G, \ast \rangle!! with a nonempty subset !!H\subset G!!, and you want to show that !!\langle H, \ast \rangle!! is a subgroup of !!G!!. To do this it is sufficient to show three things:

  1. !!H!! is closed under !!\ast!!
  2. !!G!!'s identity element is in !!H!!
  3. For each element !!h!! of !!H!!, its inverse !!h^{-1}!! is also in !!H!!

Often, however, it is more convenient to show instead:

For each !!a, b\in H!!, the product !!ab^{-1}!! is also in !!H!!

which takes care of all three at once: taking !!a=b!! (possible because !!H!! is nonempty) shows that the identity !!e!! is in !!H!!; taking !!a=e!! then shows that !!b^{-1}\in H!! for each !!b\in H!!; and for closure, given !!a,b\in H!! we also have !!b^{-1}\in H!!, so !!a(b^{-1})^{-1} = ab \in H!!.


[Other articles in category /math] permanent link

Tue, 23 Jul 2019

Obtuse axiomatization of category theory

About ten years ago I started an article, addressed to my younger self, reviewing various books in category theory. I doubt I will ever publish this. But it contained a long, plaintive digression about Categories, Allegories by Peter Freyd and Andre Scedrov:

I keep this one around on the shelf just so that I can pick it up every few months and marvel at its opacity.

It is a superb example of the definition-theorem-remark style of mathematics textbooks. I have heard that this was a style pioneered by a book you are already familiar with, John Kelley's General Topology of 1955. If so, all I can say is, sometimes it works, and sometimes it doesn't. It worked for Kelley in 1955 writing about topology.

Here is an example of what is wrong with this book.

Everyone who knows anything about category theory knows that a category is a sort of abstraction of a domain of mathematical objects, like sets, groups, or topological spaces. A category has a bunch of "objects", which are the sets, the groups, or the topological spaces, and it has a bunch of "morphisms", which are maps between the objects that preserve the objects' special structure, be it algebraic, topological, or whatever. If the objects are sets, the morphisms are simply functions. If the objects are groups, the morphisms are group homomorphisms; if the objects are topological spaces, the morphisms are continuous maps. The basic point of category theory is to study the relationships between these structure-preserving maps, independent of the underlying structure of the objects themselves. We ignore the elements of the sets or groups, and the points in the topological spaces, and instead concentrate on the relationships between whole sets, groups, and spaces, by way of these "morphisms".

Here is the opening section of Categories, Allegories:

1.1 BASIC DEFINITIONS

The theory of CATEGORIES is given by two unary operations and a binary partial operation. In most contexts lower-case variables are used for the 'individuals' which are called morphisms or maps. The values of the operations are denoted and pronounced as:
!!□x!! the source of !!x!!,
!!x□!! the target of !!x!!,
!!xy!! the composition of !!x!! and !!y!!,
The axioms:
!!xy!! is defined iff !!x□ = □y!!,
!!(□x)□ = □x!! and !!□(x□) = x□!!,
!!(□x)x = x!! and !!x(x□) = x!!,
!!□(xy) = □(x(□y))!! and !!(xy)□ = ((x□)y)□!!,
!!x(yz) = (xy)z!!.

In light of my capsule summary of category theory, can you figure out what is going on here? Even if you already know what is supposed to be going on you may not be able to make much sense of this. What to make of the axiom that !!□(xy) = □(x(□y))!!, for example?

The explanation is that Freyd has presented a version of category theory in which the objects are missing. Since every object !!X!! in a category is associated with a unique identity morphism !!{\text{id}}_X!! from !!X!! to itself, Freyd has identified each object with its identity morphism. If !!x:C\to D!!, then !!□x!! is !!{\text{id}}_C!! and !!x□!! is !!{\text{id}}_D!!. The axiom !!(□x)□ = □x!! is true because both sides are equal to !!{\text{id}}_C!!.

Still, why phrase it this way? And what about that !!□(x(□y))!! thing? I guessed it was a mere technical device, similar to the one that we can use to reduce five axioms of group theory to three. Normally, one defines a group to have an identity element !!e!! such that !!ex=xe=x!! for all !!x!!, and each element !!x!! has an inverse !!x^{-1}!! such that !!xx^{-1} = x^{-1}x = e!!. But if you are trying to be clever, you can observe that it is sufficient for there to be a left identity and a left inverse:

There must be an identity !!e!! such that !!ex=x!! for all !!x!!, and for each !!x!! there must be an !!x^{-1}!! such that !!x^{-1}x=e!!.

We no longer require !!xe=x!! or !!xx^{-1}=e!!, but it turns out that you can prove these anyway, from what is left. The fact that you can discard two of the axioms is mildly interesting, but of very little practical value in group theory.

I thought that probably the !!□(x(□y))!! thing was some similar bit of “cleverness”, and that perhaps by adopting this one axiom Freyd was able to reduce his list of axioms. For example, from that mysterious fourth axiom !!□(xy) = □(x(□y))!! you can conclude that !!xy!! is defined if and only if !!x(□y)!! is, and therefore, by the first axiom, that !!x□ = □y!! if and only if !!x□ = □(□y)!!, so that !!□y = □(□y)!!. So perhaps the phrasing of the axiom was chosen to allow Freyd to dispense with an additional axiom stating that !!□y = □(□y)!!.

Today I tinkered with it a little bit and decided I think not.

Freyd has:

$$\begin{align} xy \text{ is defined if and only if } x□ & = □y \tag{1} \\ (□x)□ & = □x \tag{2} \\ (□x)x & = x \tag{3} \\ □(xy) & = □(x(□y)) \tag{4} \end{align} $$

and their duals. Also composition is associative, which I will elide.

In place of 4, let's try this much more straightforward axiom:

$$ □(xy) = □x\tag{$4\star$} $$

I can now show that !!1, 2, 3, 4\star!! together imply !!4!!.

First, a lemma: !!□(□x) = □x!!. Axiom !!3!! says !!(□x)x = x!!, so therefore !!□((□x)x) = □x!!. By !!4\star!!, the left-hand side reduces to !!□(□x)!!, and we are done.

Now I want to show !!4!!, that !!□(xy) = □(x(□y))!!. Before I can even discuss the question I first need to show that !!x(□y)!! is defined whenever !!xy!! is; that is, whenever !!x□ = □y!!. But by the lemma, !!□y=□(□y)!!, so !!x□ = □(□y)!!, which is just what we needed.

At this point, !!4\star!! implies !!4!! directly: both sides of !!4!! have the form !!□(xz)!!, and !!4\star!! tells us that both are equal to !!□x!!.

Conversely, !!4!! implies !!4\star!!. So why didn't Freyd use !!4\star!! instead of !!4!!? I emailed him to ask, but he's 83 so I may not get an answer. Also, knowing Freyd, there's a decent chance I won't understand the answer if I do get one.
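
In the meantime, the object-free presentation is concrete enough to play with directly. Here is a toy model (my own, not from the book): the category with two objects !!A, B!! and a single non-identity arrow !!f:A\to B!!, with the axioms checked mechanically.

    # Morphisms are names; the objects appear only as the identity
    # morphisms idA and idB.
    src = {'idA': 'A', 'idB': 'B', 'f': 'A'}
    tgt = {'idA': 'A', 'idB': 'B', 'f': 'B'}
    ident = {'A': 'idA', 'B': 'idB'}

    def box_l(x): return ident[src[x]]   # □x, the source of x
    def box_r(x): return ident[tgt[x]]   # x□, the target of x

    def comp(x, y):
        # xy, in diagrammatic order, defined iff x□ = □y (axiom 1)
        if box_r(x) != box_l(y):
            return None
        if x == box_l(y):
            return y                     # identity on the left
        if y == box_r(x):
            return x                     # identity on the right
        return None                      # no other composites exist here

    for x in src:
        assert box_r(box_l(x)) == box_l(x) and box_l(box_r(x)) == box_r(x)  # 2
        assert comp(box_l(x), x) == x and comp(x, box_r(x)) == x            # 3
        for y in src:
            if comp(x, y) is not None:
                assert box_l(comp(x, y)) == box_l(comp(x, box_l(y)))        # 4
                assert box_l(comp(x, y)) == box_l(x)                        # 4*
    print("axioms 2, 3, 4, and 4* all hold")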

My plaintive review of this book continued:

Another, briefer complaint about this book: Early on, no later than page 13, Freyd begins to allude to "Lazard sheaves". These are apparently an important example. Freyd does not define or explain what "Lazard sheaves" are. Okay, you are expected to do some background reading, perhaps. Fair enough. But you are doomed, because "Lazard sheaves" is Freyd's own private coinage, and you will not be able to look it up under that name.

Apparently some people like this book. I don't know why, and perhaps I never will.


[Other articles in category /math] permanent link

Tue, 25 Jun 2019

Projection morphisms are (not) epic

I don't have a good catalog in my head of basic theorems of category theory. Every time I try to think about category theory, I get stuck on really basic lemmas like “can I assume that a product !!1×A!! is canonically isomorphic to !!A!!?” Or “Suppose !!f:A→B!! is both monic and epic. Must it be an isomorphism?” Then I get sidetracked trying to prove those lemmas, or else I assume they are true, go ahead, and even if I'm right I start to lose my nerve.

So for years I've wanted to make up a book of every possible basic theorem of category theory, in order from utterly simple to most difficult, and then prove every theorem. Then I'd know all the answers, or if I didn't, I could just look in the book. There would be a chapter on products with a long list of plausible-seeming statements:

  1. If !!P_1!! and !!P_2!! are both products of some !!A!! and !!B!!, then !!P_1\cong P_2!!
  2. !!A×B \cong B×A!!
  3. !!(A×B)×C \cong A×(B×C)!!
  4. !!1×A\cong A!!
  5. !!0×A\cong 0!!
  6. !!A×A\cong A!! if and only if !!A!! is initial
  7. If !!f!! and !!g!! are both monic, then so is !!f×g!!
  8. If !!f×g!! is monic, so are !!f!! and !!g!!
  9. (etc…)

and each one would either be annotated, Snopes-style, with “True”, or with a brief description of a counterexample.

On Sunday I thought I'd give it a shot, and I started with:

Suppose !!A×B!! is a product with projection morphism !!π_1:A×B→A!!. Then !!π_1!! is epic.

This seems very plausible, because how could the product possibly work if its left-hand component couldn't contain any possible element of !!A!!?

I struggled with this for rather a long time but I just got more and more stuck. To prove that !!π_1!! is an epimorphism I need to have !!g,h:A→C!! and then show that !!g ∘ π_1 = h ∘ π_1!! only when !!g=h!!. But !!π_1!! being a projection arrow doesn't help with this, because products are all about arrows into !!A!! and !!B!! and here I need to show something about arrows out of !!A!!.

And there's no hope that I could get any leverage by introducing some arrows into !!A!! and !!B!!, because there might not be any arrows into !!B!!. (Other than !!\text{id}_B!!, of course, but then you need an arrow !!B→A!! and that might not exist either.) Or what if I consider how the arrows from !!A×B!! factor through !!C×B!! — ah ah ah, not so fast! !!C×B!! might not exist!

I eventually gave up and looked it up online, and discovered that, in fact, the claim is not true in general. It's not even true in Set. The left projection !!π_1: X×\emptyset → X!! is not epic. (Which answers my rhetorical question above that asks “how could the product possibly work if…”)
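
The counterexample is concrete enough to compute with. Here is a sketch in Set, with functions represented as Python dicts (my own illustration):

    # X = {0, 1} and X×∅ = ∅, so the left projection pi1 is the empty
    # function from ∅ to X.
    pi1 = {}                       # the empty function

    def compose(g, f):
        # g ∘ f, for functions-as-dicts
        return {x: g[f[x]] for x in f}

    g = {0: 'a', 1: 'a'}           # two different functions X -> C ...
    h = {0: 'b', 1: 'b'}

    # ... whose composites with pi1 agree (both are empty), so pi1 is
    # not an epimorphism.
    assert compose(g, pi1) == compose(h, pi1)
    assert g != h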

So, uh, victory, I guess? I set out to prove something that is false, so failing to produce a proof is the best possible outcome.

And I can make lemonade out of the lemons. I couldn't prove the theorem, and my ideas about why not were basically right. Now I ought to be able to look carefully at what additional tools I might be able to use to make the proof go through anyway, and those then become part of the statement of the theorem, which then would become something like “If all binary products exist in a category with an initial object, then projection morphisms are epic.”


[Other articles in category /math] permanent link

Sat, 08 Jun 2019

The inside-outness of category theory

I have pondered category theory periodically for the past 35 years, but not often enough to really stay comfortable with it. Today I was pondering again. I wanted to prove that !!1×A \cong A!! and I was having trouble. I eventually realized my difficulty: my brain had slipped out of category theory mode so that the theorem I was trying to prove did not even make sense.

In most of mathematics, !!1\times A!! would denote some specific entity and we would then show that that entity had a certain property. For example, in set theory we might define !!1\times A!! to be some set, say the set of all Kuratowski pairs !!\langle \varnothing, a\rangle!! where !!a\in A!!:

$$ 1×A =_{\text{def}} \{ z : \exists a\in A : z = \{\{\varnothing\}, \{\varnothing, a\}\} \} $$

and then we would explicitly construct a bijection !!f:A\leftrightarrow 1×A!!:

$$ f(a) = \{\{\varnothing\}, \{\varnothing, a\}\}$$

In category theory, this is not what we do. Everything is less concrete. !!\times!! looks like an operator, one that takes two objects and yields a third. It is not. !!1\times A!! does not denote any particular entity with any particular construction. (Nor does !!1!!, for that matter.) Instead, it denotes an unspecified entity, which happens to have a certain universal property, perhaps just one of many such entities with that property, and there is no canonical representative of the class. It's a mistake to think of it as a single thing, and it's also a mistake to ask the question the way I did ask it. You can't show that !!1×A!! has any specific property, because it's not a specific thing. All you can do is show that anything with the one required universal property must also have the other property. We should rephrase the question like this:

Let !!1×A!! be a product of !!1!! and !!A!!. Then !!1×A\cong A!!.

Maybe a better phrasing is:

Let !!1×A!! be some object that is a product of !!1!! and !!A!!. Then !!1×A\cong A!!.

The notation is still misleading, because it looks like !!1×A!! denotes the result of some operation, and it isn't. We can do a little better:

Let !!B!! be a product of !!1!! and !!A!!. Then !!B\cong A!!.

That's it; that's the real theorem. It seems at first to be more difficult — where do we start? But it's actually easier! Because now it's enough to simply prove that !!A!! itself is a product of !!1!! and !!A!!, which is easily done: its projection morphisms are evidently !!! !! and !!{\text{id}}_A!!. And by a previous theorem that all products are isomorphic, any other product, such as !!B!!, must be isomorphic to this one, which is !!A!! itself.

(We can similarly imagine that any theorem that mentions !!1!! is preceded by the phrase “Let !!1!! be some terminal object.”)


[Other articles in category /math] permanent link

Fri, 17 May 2019

What's the difference between 0/0 and 1/0?

Last year a new Math Stack Exchange user asked What's the difference between !!\frac00!! and !!\frac10!!?.

I wrote an answer I thought was pretty good, but the question was downvoted and deleted as “not about mathematics”. This is bullshit, but what can I do?

I can repatriate my answer here, anyway.


This long answer has two parts. The first one is about the arithmetic, and is fairly simple, and is not very different from the other answers here: neither !!\frac10!! nor !!\frac00!! has any clear meaning. But your intuition is a good one: if one looks at the situation more carefully, !!\frac10!! and !!\frac00!! behave rather differently, and there is more to the story than can be understood just from the arithmetic part. The second half of my answer tries to go into these developments.

The notation !!\frac ab!! has a specific meaning:

The number !!x!! for which $$x\cdot b=a.$$

Usually this is simple enough. There is exactly one number !!x!! for which !!x\cdot 7=21!!, namely !!3!!, so !!\frac{21}7=3!!. There is exactly one number !!x!! for which !!x\cdot 4=7!!, namely !!\frac74!!, so !!\frac74\cdot4=7!!.

But when !!b=0!! we can't keep the promise that is implied by the word "the" in "The number !!x!! for which...". Let's see what goes wrong. When !!b=0!! the definition says:

The number !!x!! for which $$x\cdot 0=a.$$

When !!a\ne 0!! this goes severely wrong. The left-hand side is zero and the right-hand side is not, so there is no number !!x!! that satisfies the condition. It's as if we had said that !!x!! is the ugliest gorilla on the dairy farm. But the farm has no gorillas, only cows. Any further questions you have about !!x!! are pointless: is !!x!! a male or female gorilla? Is its fur black or dark gray? Does !!x!! prefer bananas or melons? There is no such !!x!!, so the questions are unanswerable.

When !!a!! and !!b!! are both zero, something different goes wrong:

The number !!x!! for which $$x\cdot 0=0.$$

It still doesn't work to speak of "The number !!x!! for which..." because any !!x!! will work. Now it's like saying that !!x!! is ‘the’ cow from the dairy farm. But there are many cows, so questions about !!x!! are still pointless, although in a different way: Does !!x!! have spots? I dunno man, what is !!x!!?

Asking about this !!x!!, as an individual number, never makes sense, for one reason or the other, either because there is no such !!x!! at all (!!\frac a0!! when !!a≠0!!) or because the description is not specific enough to tell you anything (!!\frac 00!!).


If you are trying to understand this as a matter of simple arithmetic, using analogies about putting cookies into boxes, this is the best you can do. That is a blunt instrument, and for a finer understanding you need to use more delicate tools. In some contexts, the two situations (!!\frac00!! and !!\frac10!!) are distinguishable, but you need to be more careful.

Suppose !!f!! and !!g!! are some functions of !!x!!, each with definite values for all numbers !!x!!, and in particular !!g(0) = 0!!. We can consider the quantity $$q(x) = \frac{f(x)}{g(x)}$$ and ask what happens to !!q(x)!! when !!x!! gets very close to !!0!!. The quantity !!q(0)!! itself is undefined, because at !!x=0!! the denominator is !!g(0)=0!!. But we can still ask what happens to !!q!! when !!x!! gets close to zero, but before it gets all the way there. It's possible that as !!x!! gets closer and closer to zero, !!q(x)!! might get closer and closer to some particular number, say !!Q!!; we can ask if there is such a number !!Q!!, and if so what it is.

It turns out we can distinguish quite different situations depending on whether the numerator !!f(0)!! is zero or nonzero. When !!f(0)\ne 0!!, we can state decisively that there is no such !!Q!!. For if there were, it would have to satisfy !!Q\cdot 0=f(0)!! which is impossible; !!Q!! would have to be a gorilla on the dairy farm. There are a number of different ways that !!q(x)!! can behave in such cases, when its denominator approaches zero and its numerator does not, but all of the possible behaviors are bad: !!q(x)!! can increase or decrease without bound as !!x!! gets close to zero; or it can do both depending on whether we approach zero from the left or the right; or it can oscillate more and more wildly, but in no case does it do anything like gently and politely approaching a single number !!Q!!.

But if !!f(0) = 0!!, the answer is more complicated, because !!Q!! (if it exists at all) would only need to satisfy !!Q\cdot 0=0!!, which is easy. So there might actually be a !!Q!! that works; it depends on further details of !!f!! and !!g!!, and sometimes there is and sometimes there isn't. For example, when !!f(x) = x^2+2x!! and !!g(x) = x!! then !!q(x) = \frac{x^2+2x}{x}!!. This is still undefined at !!x=0!! but at any other value of !!x!! it is equal to !!x+2!!, and as !!x!! approaches zero, !!q(x)!! slides smoothly in toward !!2!! along the straight line !!x+2!!. When !!x!! is close to (but not equal to) zero, !!q(x)!! is close to (but not equal to) !!2!!; for example when !!x=0.001!! then !!q(x) = \frac{0.002001}{0.001} = 2.001!!, and as !!x!! gets closer to zero !!q(x)!! gets even closer to !!2!!. So the number !!Q!! we were asking about does exist, and is in fact equal to !!2!!. On the other hand if !!f(x) = x!! and !!g(x) = x^2!! then there is still no such !!Q!!.
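
A few quick evaluations (my own illustration, not part of the original answer) make the difference vivid:

    # Two 0/0 quotients, evaluated closer and closer to zero.
    def q1(x): return (x**2 + 2*x) / x   # approaches Q = 2
    def q2(x): return x / x**2           # increases without bound

    for x in (0.1, 0.01, 0.001):
        print(x, q1(x), q2(x))
    # q1: 2.1, 2.01, 2.001, ...  while q2 blows up: 10, 100, 1000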

The details of how this all works, when there is a !!Q!! and when there isn't, and how to find it, are very interesting, and are the basic idea that underpins all of calculus. The calculus part was invented first, but it bothered everyone because although it seemed to work, it depended on an incoherent idea about how division by zero worked. Trying to frame it as a simple matter of putting cookies into boxes was no longer good enough. Getting it properly straightened out was a long process that took around 150 years, but we did eventually get there and now I think we understand the difference between !!\frac10!! and !!\frac00!! pretty well. But to really understand the difference you probably need to use the calculus approach, which may be more delicate than what you are used to. But if you are interested in this question, and you want the full answer, that is definitely the way to go.


[Other articles in category /math] permanent link

Thu, 25 Apr 2019

Sometimes it matters how you get there

Katara was given the homework exercise of rationalizing the denominator of $$\frac1{\sqrt2+\sqrt3+\sqrt5}$$ which she found troublesome. You evidently need to start by multiplying the numerator and denominator by !!-\sqrt2 + \sqrt 3 + \sqrt 5!!, obtaining $$ \frac1{(\sqrt2+\sqrt3+\sqrt5)}\cdot \frac{-\sqrt2 + \sqrt 3 + \sqrt 5}{-\sqrt2 + \sqrt 3 + \sqrt 5} = \frac{-\sqrt2 + \sqrt 3 + \sqrt 5}{(-2 +3 + 5 + 2\sqrt{15})} = \frac{-\sqrt2 + \sqrt 3 + \sqrt 5}{6 + 2\sqrt{15}} $$ and then you go from there, multiplying the top and bottom by !!6 - 2\sqrt{15}!!. It is a mess.

But when I did it, it was much quicker. Instead of using !!-\sqrt2 + \sqrt 3 + \sqrt 5!!, I went with !!\sqrt2 + \sqrt 3 - \sqrt 5!!, not for any reason, but just at random. This got me: $$ \frac1{\sqrt2+\sqrt3+\sqrt5}\cdot \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{\sqrt2 + \sqrt 3 - \sqrt 5} = \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{(2 +3 - 5 + 2\sqrt{6})} = \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{2\sqrt{6}} $$

with the !!2+3-5!! vanishing in the denominator. Then the next step is quite easy; just get rid of the !!\sqrt6!!: $$ \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{2\sqrt{6}}\cdot \frac{\sqrt6}{\sqrt6} = \frac{\sqrt{12}+\sqrt{18}-\sqrt{30}}{12} $$

which is correct.
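
A quick numeric check (mine, after the fact) confirms it:

    import math

    lhs = 1 / (math.sqrt(2) + math.sqrt(3) + math.sqrt(5))
    rhs = (math.sqrt(12) + math.sqrt(18) - math.sqrt(30)) / 12
    print(lhs, rhs)                 # both are 0.18579...
    assert math.isclose(lhs, rhs)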

I wish I could take credit for this, but it was pure dumb luck.


[Other articles in category /math] permanent link

Wed, 02 Jan 2019

More about the happy numeric coincidence

[ Previously ]

Observing three small examples where the digital sum of !!n^k!! was equal to !!n!!, I said:

I supposed that there were few other examples, probably none … Still I hoped there might be one or two…

but in fact there are a great many; for example the digital sum of !!217^{22}!! is !!217!!.

Dave McKee asked what my intuition was. It was something like this. Fix !!n!! and consider the sequence !!n^2, n^3, …!!. The elements become increasingly sparse, and for their digital sum to equal !!n!! requires increasingly improbable coincidences. For example, taking !!n=3!!, we would need to have !!k!! such that $$3^k = 2·10^p + 1.$$ While I can't immediately rule out this possibility, it seems extremely unlikely. This also resembles the Catalan conjecture, which is that the only nontrivial solution of $$ \left\lvert2^i - 3^j\right\rvert = 1$$ is !!i=3, j=2!!. The Catalan conjecture is indeed true, but it was an open problem for 158 years.

[ Addendum 20190205: I have since learned that the Catalan conjecture actually concerns the more general claim about the equation !! \left\lvert a^i - b^j\right\rvert = 1!!. The special case of !!a=2, b=3!! turns out to be elementary. ]

I was quite mistaken, so what went wrong? First, there is one nontrivial solution to the Catalan conjecture, and here we have not one sequence of !!n^k!! but an infinite family, each of which might have an exceptional solution or two. And second, the case of !!n=3!! is atypically bleak. For !!3^k!! to have a digital sum of !!3!! is a tough order because there are very few sequences of digits whose sum is 3. But the number of sequences of !!d!! digits whose sum is !!n!! grows quite rapidly with !!n!!, and for small !!n!! is very misleading.


[Other articles in category /math] permanent link

Tue, 01 Jan 2019

A happy numeric coincidence

A couple of days ago I was pleased to notice the following coincidence:

!!9^2 = 81!! and !!8 + 1 = 9!!
!!8^3 = 512!! and !!5 + 1 + 2 = 8!!
!!7^4 = 2401!! and !!2 + 4 + 0 + 1 = 7!!

I supposed that there were few other examples, probably none, and that at least I could prove that there were only a finite number of examples. My expected proof of this didn't work, but I still supposed that there would be no further examples. Still I hoped there might be one or two, so I set the computer to look for them if there were.
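
A search like this is only a few lines of Python; here is a sketch (not necessarily the program I used). Because Python's integers are exact at any size, a search written this way avoids the representation problems mentioned in the addenda below.

    # Look for n and k where the decimal digit sum of n**k equals n.
    def digit_sum(m):
        return sum(int(d) for d in str(m))

    for n in range(2, 120):
        for k in range(2, 40):
            if digit_sum(n ** k) == n:
                print(f"{n}^{k} = {n ** k}")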

My first run produced:

\begin{array}{rcr} 17^3 &=& 4\;913 \\ 18^3 &=& 5\;832\\ 22^4 &=& 234\;256\\ 25^4 &=& 390\;625\\ 26^3 &=& 17\;576\\ 27^3 &=& 19\;683\\ 28^4 &=& 614\;656\\ 36^4 &=& 1\;679\;616\\ \end{array}

Well, that was a happy surprise.

Wait a minute, though:

\begin{array}{rcr} 18 ^{ 6 } &=& 34\;012\;224 \\ 18 ^{ 7 } &=& 612\;220\;032 \\ 27 ^{ 7 } &=& 10\;460\;353\;203 \\ 28 ^{ 5 } &=& 17\;210\;368 \\ 31 ^{ 7 } &=& 27\;512\;614\;111 \\ 34 ^{ 7 } &=& 52\;523\;350\;144 \\ 35 ^{ 5 } &=& 52\;521\;875 \\ 36 ^{ 5 } &=& 60\;466\;176 \\ 43 ^{ 7 } &=& 271\;818\;611\;107 \\ 45 ^{ 6 } &=& 8\;303\;765\;625 \\ 46 ^{ 5 } &=& 205\;962\;976 \\ 46 ^{ 8 } &=& 20\;047\;612\;231\;936 \\ 53 ^{ 7 } &=& 1\;174\;711\;139\;837 \\ 54 ^{ 6 } &=& 24\;794\;911\;296 \\ 54 ^{ 8 } &=& 72\;301\;961\;339\;136 \\ 54 ^{ 9 } &=& 3\;904\;305\;912\;313\;344 \\ 58 ^{ 7 } &=& 2\;207\;984\;167\;552 \\ 63 ^{ 8 } &=& 248\;155\;780\;267\;521 \\ 64 ^{ 6 } &=& 68\;719\;476\;736 \\ 68 ^{ 7 } &=& 6\;722\;988\;818\;432 \\ 71 ^{ 9 } &=& 45\;848\;500\;718\;449\;031 \\ 81 ^{ 9 } &=& 150\;094\;635\;296\;999\;121 \\ 82 ^{ 10 } &=& 13\;744\;803\;133\;596\;058\;624 \\ 85 ^{ 10 } &=& 19\;687\;440\;434\;072\;265\;625 \\ 94 ^{ 10 } &=& 53\;861\;511\;409\;489\;970\;176 \\ 97 ^{ 10 } &=& 73\;742\;412\;689\;492\;826\;049 \\ 106 ^{ 10 } &=& 179\;084\;769\;654\;285\;362\;176 \\ 117 ^{ 10 } &=& 480\;682\;838\;924\;478\;847\;449 \\ \end{array}

Oh my.

\begin{array}{rcr} 20 ^{ 13 } &=& 81\;920 & · 10^{12} \\ 40 ^{ 13 } &=& 671\;088\;640 & · 10^{12} \\ 80 ^{ 17 } &=& 225\;179\;981\;368\;524\;800 & · 10^{15} \\ \hline 80 ^{ 19 } &=& 1\;441\;151\;880\;758\;558\;720 & · 10^{18} \\ 86 ^{ 13 } &=& 14\;076\;019\;706\;120\;526\;112\;710\;656 \\ 90 ^{ 19 } &=& 13\;508\;517\;176\;729\;920\;890 & · 10^{18} \\ \hline 90 ^{ 20 } &=& 1\;215\;766\;545\;905\;692\;880\;100 & · 10^{18} \\ 90 ^{ 21 } &=& 109\;418\;989\;131\;512\;359\;209 & · 10^{21} \\ 90 ^{ 22 } &=& 9\;847\;709\;021\;836\;112\;328\;810 & · 10^{21} \\ \hline 90 ^{ 28 } &=& 5\;233\;476\;330\;273\;605\;372\;135\;115\;210 & · 10^{27} \\ 91 ^{ 14 } &=& 2\;670\;419\;511\;272\;061\;205\;254\;504\;361 \\ 98 ^{ 11 } &=& 8\;007\;313\;507\;497\;959\;524\;352 \\ \hline 103 ^{ 13 } &=& 146\;853\;371\;345\;156\;431\;381\;127\;623 \\ 104 ^{ 13 } &=& 166\;507\;350\;731\;038\;802\;170\;609\;664 \\ 106 ^{ 13 } &=& 213\;292\;826\;014\;568\;334\;917\;410\;816 \\ \hline 107 ^{ 11 } &=& 21\;048\;519\;522\;998\;348\;950\;643 \\ 107 ^{ 13 } &=& 240\;984\;500\;018\;808\;097\;135\;911\;707 \\ 107 ^{ 15 } & = & 2\;759\;031\;540\;715\;333\;904\;109\;053\;133 & & \\ & & 443 & & \\ \hline 108 ^{ 11 } &=& 23\;316\;389\;970\;546\;096\;340\;992 \\ 108 ^{ 12 } &=& 2\;518\;170\;116\;818\;978\;404\;827\;136 \\ 118 ^{ 14 } &=& 101\;472\;439\;712\;019\;470\;540\;189\;876\;224 \\ \hline 126 ^{ 13 } &=& 2\;017\;516\;459\;574\;609\;153\;391\;845\;376 \\ 127 ^{ 14 } &=& 283\;956\;682\;347\;124\;706\;942\;551\;243\;009 \\ 133 ^{ 16 } & = & 9\;585\;753\;470\;490\;322\;141\;591\;520\;062 & & \\ & & 265\;281 & & \\ \hline 134 ^{ 13 } &=& 4\;491\;199\;828\;872\;408\;503\;792\;328\;704 \\ 134 ^{ 15 } & = & 80\;643\;984\;127\;232\;967\;094\;095\;054\;209 & & \\ & & 024 & & \\ 135 ^{ 13 } &=& 4\;946\;966\;739\;525\;117\;513\;427\;734\;375 \\ \hline 135 ^{ 14 } &=& 667\;840\;509\;835\;890\;864\;312\;744\;140\;625 \\ 136 ^{ 15 } & = & 100\;712\;557\;719\;971\;285\;024\;106\;952\;523 & & \\ & & 776 & & \\ 140 ^{ 25 } &=& 449\;987\;958\;058\;483\;731\;145\;152\;266\;240 & · 10^{24} \\ \hline 142 ^{ 16 } & = & 27\;328\;356\;228\;554\;426\;163\;172\;505\;624 & & \\ & & 313\;856 & & \\ 143 ^{ 17 } & = & 4\;372\;327\;021\;734\;283\;642\;004\;853\;327 & & \\ & & 592\;915\;343 & & \\ 152 ^{ 15 } & = & 534\;138\;422\;146\;939\;893\;094\;821\;310\;496 & & \\ & & 768 & & \\ \hline 154 ^{ 14 } & = & 4\;219\;782\;742\;781\;494\;680\;756\;610\;809 & & \\ & & 856 & & \\ 154 ^{ 15 } & = & 649\;846\;542\;388\;350\;180\;836\;518\;064\;717 & & \\ & & 824 & & \\ 155 ^{ 19 } & = & 413\;335\;079\;574\;020\;313\;162\;122\;296\;733 & & \\ & & 856\;201\;171\;875 & & \\ \hline 157 ^{ 19 } & = & 527\;343\;255\;303\;841\;790\;870\;720\;812\;082 & & \\ & & 050\;804\;460\;293 & & \\ 160 ^{ 28 } & = & 51\;922\;968\;585\;348\;276\;285\;304\;963\;292 & & \\ & & 200\;960 & · 10^{27} & \\ 163 ^{ 16 } & = & 248\;314\;265\;639\;726\;167\;358\;751\;235\;626 & & \\ & & 296\;641 & & \\ \hline 169 ^{ 16 } & = & 442\;779\;263\;776\;840\;698\;304\;313\;192\;148 & & \\ & & 785\;281 & & \\ 169 ^{ 22 } & = & 10\;315\;908\;977\;942\;302\;627\;204\;470\;186 & & \\ & & 314\;316\;211\;062\;255\;002\;161 & & \\ 170 ^{ 31 } & = & 1\;392\;889\;173\;388\;510\;144\;614\;180\;174 & & \\ & & 894\;677\;204\;330 & · 10^{30} & \\ \hline 170 ^{ 33 } & = & 40\;254\;497\;110\;927\;943\;179\;349\;807\;054 & & \\ & & 456\;171\;205\;137 & · 10^{33} & \\ 171 ^{ 17 } & = & 91\;397\;407\;411\;741\;874\;683\;083\;843\;738 & & \\ & & 640\;173\;291 & & \\ 171 ^{ 19 } & = & 
2\;672\;551\;590\;126\;744\;157\;608\;054\;674 & & \\ & & 761\;577\;307\;202\;131 & & \\ \hline 172 ^{ 18 } & = & 17\;358\;494\;027\;033\;103\;736\;099\;033\;196 & & \\ & & 316\;709\;617\;664 & & \\ 173 ^{ 19 } & = & 3\;333\;311\;951\;341\;729\;629\;204\;978\;703 & & \\ & & 084\;632\;004\;627\;637 & & \\ 181 ^{ 18 } & = & 43\;472\;473\;122\;830\;653\;562\;489\;222\;659 & & \\ & & 449\;707\;872\;441 & & \\ \hline 181 ^{ 19 } & = & 7\;868\;517\;635\;232\;348\;294\;810\;549\;301 & & \\ & & 360\;397\;124\;911\;821 & & \\ 181 ^{ 20 } & = & 1\;424\;201\;691\;977\;055\;041\;360\;709\;423 & & \\ & & 546\;231\;879\;609\;039\;601 & & \\ 189 ^{ 19 } & = & 17\;896\;754\;443\;176\;031\;520\;198\;514\;559 & & \\ & & 819\;163\;143\;441\;509 & & \\ \hline 193 ^{ 22 } & = & 191\;540\;580\;003\;116\;921\;429\;323\;712\;183 & & \\ & & 642\;218\;614\;831\;262\;597\;249 & & \\ 199 ^{ 21 } & = & 1\;887\;620\;149\;539\;230\;539\;058\;375\;534 & & \\ & & 310\;517\;606\;114\;631\;604\;199 & & \\ 207 ^{ 20 } & = & 20\;864\;448\;472\;975\;628\;947\;226\;005\;981 & & \\ & & 267\;194\;447\;042\;584\;001 & & \\ \hline 211 ^{ 25 } & = & 12\;795\;621\;425\;112\;113\;141\;935\;148\;247 & & \\ & & 655\;082\;376\;252\;275\;523\;500\;373\;035\;251 & & \\ 217 ^{ 22 } & = & 2\;524\;144\;100\;572\;738\;110\;818\;511\;483 & & \\ & & 976\;079\;134\;636\;146\;367\;057\;489 & & \\ 221 ^{ 25 } & = & 40\;719\;913\;064\;560\;249\;818\;128\;041\;081 & & \\ & & 360\;346\;218\;088\;750\;603\;039\;104\;925\;501 & & \\ \hline 225 ^{ 22 } & = & 5\;597\;774\;487\;475\;881\;147\;025\;802\;420 & & \\ & & 102\;991\;163\;730\;621\;337\;890\;625 & & \\ 234 ^{ 23 } & = & 3\;104\;307\;401\;943\;398\;225\;947\;002\;752 & & \\ & & 118\;451\;297\;846\;365\;869\;366\;575\;104 & & \\ 236 ^{ 25 } & = & 210\;281\;019\;656\;164\;214\;863\;109\;519\;134 & & \\ & & 145\;127\;118\;463\;502\;897\;144\;582\;373\;376 & & \\ \hline 250 ^{ 40 } & = & 827\;180\;612\;553\;027\;674\;871\;408\;692\;069 & & \\ & & 962\;853\;565\;812\;110\;900\;878\;906\;250 & · 10^{39} & \\ 265 ^{ 28 } & = & 70\;938\;903\;323\;020\;164\;041\;464\;952\;207 & & \\ & & 191\;804\;150\;246\;813\;586\;391\;508\;579\;254 & & \\ & & 150\;390\;625 & & \\ \end{array}

Holy cow.

I should have been able to foresee some of those, like !!20^{13} = 8192·10^{13}!!, which in hindsight is obvious. But seriously, !!163^{16}!!?

(It rather looks like !!265^{28}!! might be the last one where !!n!! is not a multiple of !!10!!, however. Or maybe that is a misleading artifact of the calculating system I am using? I'm not sure yet.)

Happy new year, all.

[ Addendum 20190102: Several readers have confirmed that there are many examples past !!265^{28}!!. I am probably running into some mismatch between the way the computer represents numbers and the way they actually work. ]

[ Addendum 20190102: Dave McKee asked why I thought there would be few examples. ]

[ Addendum 20220521: I have learned that these numbers are sometimes known as Dudeney numbers after famous puzzlist Henry Dudeney. ]


[Other articles in category /math] permanent link

Sun, 02 Dec 2018

Necklaces and bracelets

There are combinatorial objects called necklaces and bracelets and I can never remember which is which.

Both are finite sequences of things (typically symbols) where the start point is not important. So the bracelet ABCDE is considered to be the same as the bracelets BCDEA, CDEAB, DEABC, and EABCD.

One of the two also disregards the direction you go, so that ABCDE is also considered the same as EDCBA (and so also DCBAE, etc.). But which? I have to look it up every time.

I have finally thought of a mnemonic. In a necklace, the direction is important, because to reverse an actual necklace you have to pull it off over the wearer's head, turn it over, and put it back on. But in a bracelet the direction is not important, because it could be on either wrist, and a bracelet on the left wrist is the same as the reversed bracelet on the right wrist.

Okay, silly, maybe, but I think it's going to work.
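
The two equivalences are also easy to encode, which makes the distinction concrete; a sketch:

    # Canonical forms: necklaces are equivalent under rotation only;
    # bracelets also under reversal.
    def necklace(s):
        return min(s[i:] + s[:i] for i in range(len(s)))

    def bracelet(s):
        return min(necklace(s), necklace(s[::-1]))

    print(necklace("ABCDE") == necklace("CDEAB"))   # True: rotations agree
    print(necklace("ABCDE") == necklace("EDCBA"))   # False: direction matters
    print(bracelet("ABCDE") == bracelet("EDCBA"))   # True: bracelets ignore it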


[Other articles in category /math] permanent link

Thu, 29 Nov 2018

How many kinds of polygonal loops? (part 2)

I recently asked about these:


Four displays, each with five dark gray dots arranged at the vertices
of a regular pentagon.  In each display the dots are connected with
purple lines, but each in a different order.  If the dots were
numbered 0-1-2-3-4 in clockwise order, the four figures have purple
lines connecting them, respectively, in the orders
0-1-2-3-4-0, 0-1-3-2-4-0, 0-2-1-4-3-0, and 0-2-4-1-3-0.  The first of
these is a plain pentagon, and the last is a five-pointed star.  The
middle two are less symmetric.

And I said I thought there were nine analogous figures with six points.

Rahul Narain referred me to a recent discussion of almost this exact question on Math Stackexchange. (Note that the discussion there considers two figures different if they are reflections of one another; I consider them the same.) The answer turns out to be OEIS A000940. I had said:

… for !!N=6!!, I found it hard to enumerate. I think there are nine shapes but I might have missed one, because I know I kept making mistakes in the enumeration …

I missed three. The nine I got were:

This time
there are nine displays, each with six dots.  The connections are,
respectively, 
012345 (a hexagon), 015432, 032145 (two diamonds), 
015234 (highly irregular), 014523 (three triangles that share a vertex
in the center), 013254, 023154, 031254, and 035142.

And the three I missed are:

Three
more hexagons, this time connected as follows:
014253, 013524, and 015342

I had tried to break them down by the arrangement of the outside ring of edges, which can be described by a composition. The first two of these have type !!1+1+1+1+2!! (which I missed completely; I thought there were none of this type) and the other has type !!1+2+1+2!!, the same as the !!015342!! one in the lower right of the previous diagram.

I had ended by saying:

I would certainly not trust myself to hand-enumerate the !!N=7!! shapes.

Good call, Mr. Dominus! I considered filing this under “oops” but I decided that although I had gotten the wrong answer, my confidence in it had been adequately low. On one level it was a mistake, but on a higher and more important level, I did better.

I am going to try the (Cauchy-Frobenius-)Burnside-(Redfield-)Pólya lemma on it next and see if I can get the right answer.
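
In the meantime, a brute-force count is easy to sketch (my own code): canonicalize each loop's edge set under the !!2N!! rotations and reflections, and count the distinct results.

    from itertools import permutations

    def loop_shapes(n):
        # Loops through n equally spaced points, up to the dihedral group.
        def edges(cycle):
            return frozenset(frozenset((cycle[i], cycle[(i + 1) % n]))
                             for i in range(n))

        def canonical(es):
            # least relabeling of the form i -> (±i + k) mod n
            keys = []
            for k in range(n):
                for sign in (1, -1):
                    mapped = frozenset(frozenset((sign * v + k) % n for v in e)
                                       for e in es)
                    keys.append(tuple(sorted(tuple(sorted(e)) for e in mapped)))
            return min(keys)

        return len({canonical(edges((0,) + p))
                    for p in permutations(range(1, n))})

    print([loop_shapes(n) for n in range(3, 8)])    # [1, 2, 4, 12, 39]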

Thanks again to Rahul Narain for bringing this to my attention.


[Other articles in category /math] permanent link

How many kinds of polygonal loops?

Take !!N!! equally-spaced points on a circle. Now connect them in a loop: each point should be connected to exactly two others, and each point should be reachable from the others. How many geometrically distinct shapes can be formed?

For example, when !!N=5!!, these four shapes can be formed:


Four displays, each with five dark gray dots arranged at the vertices
of a regular pentagon.  In each display the dots are connected with
purple lines, but each in a different order.  If the dots were
numbered 0-1-2-3-4 in clockwise order, the four figures have purple
lines connecting them, respectively, in the orders
0-1-2-3-4-0, 0-1-3-2-4-0, 0-2-1-4-3-0, and 0-2-4-1-3-0.  The first of
these is a plain pentagon, and the last is a five-pointed star.  The
middle two are less symmetric.

(I phrased this like a geometry problem, but it should be clear it's actually a combinatorics problem. But it's much easier to express as a geometry problem; to talk about the combinatorics I have to ask you to consider a permutation !!P!! where !!P(i±1)≠P(i)±1!! blah blah blah…)

For !!N<5!! it's easy. When !!N=3!! it is always a triangle. When !!N=4!! there are only two shapes: a square and a bow tie.

But for !!N=6!!, I found it hard to enumerate. I think there are nine shapes but I might have missed one, because I know I kept making mistakes in the enumeration, and I am not sure they are all corrected:

This time
there are nine displays, each with six dots.  The connections are,
respectively, 
012345 (a hexagon), 015432, 032145 (two diamonds), 
015234 (highly irregular), 014523 (three triangles that share a vertex
in the center), 013254, 023154, 031254, and 035142.

It seems like it ought not to be hard to generate and count these, but so far I haven't gotten a good handle on it. I produced the !!N=6!! display above by considering the compositions of the number !!6!!:

Composition     How many loops?
6                      1
1+5                    0
2+4                    1
3+3                    1
1+1+4                  0
1+2+3                  1
2+2+2                  2
1+1+1+3                0
1+1+2+2                1
1+2+1+2                1
1+1+1+1+2              0
1+1+1+1+1+1            1
Total                  9 (?)

(Actually it's the compositions, modulo bracelet symmetries — that is, modulo the action of the dihedral group.)

But this is fraught with opportunities for mistakes in both directions. I would certainly not trust myself to hand-enumerate the !!N=7!! shapes.

[ Addendum: For !!N=6!! there are 12 figures, not 9. For !!N=7!!, there are 39. Further details. ]


[Other articles in category /math] permanent link

Sun, 25 Nov 2018

60-degree angles on a lattice

A couple of years back I was thinking about how to draw a good approximation to an equilateral triangle on a piece of graph paper. There are no lattice points that are exactly the vertices of an equilateral triangle, but you can come close, and one way to do it is to find integers !!a!! and !!b!! with !!\frac ba\approx \sqrt 3!!, and then !!\langle 0, 0\rangle, \langle 2a, 0\rangle,!! and !!\langle a, b\rangle!! are almost an equilateral triangle.

But today I came back to it for some reason and I wondered if it would be possible to get an angle closer to 60°, or numbers that were simpler, or both, by not making one of the sides of the triangle perfectly horizontal as in that example.

So okay, we want to find !!P = \langle a, b\rangle!! and !!Q = \langle c,d\rangle!! so that the angle !!\alpha!! between the rays !!\overrightarrow{OP}!! and !!\overrightarrow{OQ}!! is as close as possible to !!\frac\pi 3!!.

The first thing I thought of was that the dot product !!P\cdot Q = |P||Q|\cos\alpha!!, and !!P\cdot Q!! is super-easy to calculate, it's just !!ac+bd!!. So we want $$\frac{ac+bd}{|P||Q|} = \cos\alpha \approx \frac12,$$ and everything is simple, except that !!|P||Q| = \sqrt{a^2+b^2}\sqrt{c^2+d^2}!!, which is not so great.

Then I tried something else, using high-school trigonometry. Let !!\alpha_P!! and !!\alpha_Q!! be the angles that the rays make with the !!x!!-axis. Then !!\alpha = \alpha_Q - \alpha_P = \tan^{-1} \frac dc - \tan^{-1} \frac ba!!, which we want close to !!\frac\pi3!!.

Taking the tangent of both sides and applying the formula $$\tan(q-p) = \frac{\tan q - \tan p}{1 + \tan q \tan p}$$ we get $$ \frac{\frac dc - \frac ba}{1 + \frac dc\frac ba} \approx \sqrt3.$$ Or simplifying a bit, the super-simple $$\frac{ad-bc}{ac+bd} \approx \sqrt3.$$

After I got there I realized that my dot product idea had almost worked. To get rid of the troublesome !!|P||Q|!! you should consider the cross product also. Observe that the magnitude of !!P\times Q!! is !!|P||Q|\sin\alpha!!, and is also $$\begin{vmatrix} a & b & 0 \\ c & d & 0 \\ 1 & 1 & 1 \end{vmatrix} = ad - bc$$ so that !!\sin\alpha = \frac{ad-bc}{|P||Q|}!!. Then if we divide, the !!|P||Q|!! things cancel out nicely: $$\tan\alpha = \frac{\sin\alpha}{\cos\alpha} = \frac{ad-bc}{ac+bd}$$ which we want to be as close as possible to !!\sqrt 3!!.

Okay, that's fine. Now we need to find some integers !!a,b,c,d!! that do what we want. The usual trick, “see what happens if !!a=0!!”, is already used up, since that's what the previous article was about. So let's look under the next-closest lamppost, and let !!a=1!!. Actually we'll let !!b=1!! instead to keep things more horizontal. Then, taking !!\frac74!! as our approximation for !!\sqrt3!!, we want

$$\frac{ad-c}{ac+d} = \frac74$$

or equivalently $$\frac dc = \frac{7a+4}{4a-7}.$$

Now we just tabulate !!7a+4!! and !!4a-7!! looking for nice fractions:

!!a!!   !!d = 7a+4!!   !!c = 4a-7!!
 2           18             1
 3           25             5
 4           32             9
 5           39            13
 6           46            17
 7           53            21
 8           60            25
 9           67            29
10           74            33
11           81            37
12           88            41
13           95            45
14          102            49
15          109            53
16          116            57
17          123            61
18          130            65
19          137            69
20          144            73
Each of these gives us a !!\langle c,d\rangle!! point, but some are much better than others. For example, in the line with !!a=3!!, we could take !!\langle 5,25\rangle!!, but we can use !!\langle 1,5\rangle!! instead, which gives the same !!\frac dc!! and is simpler. We still get !!\frac{ad-bc}{ac+bd} = \frac 74!! as we want.

Doing this gives us the two points !!P=\langle 3,1\rangle!! and !!Q=\langle 1, 5\rangle!!. The angle between !!\overrightarrow{OP}!! and !!\overrightarrow{OQ}!! is then !!60.255°!!. This is exactly the same as in the approximately equilateral !!\langle 0, 0\rangle, \langle 8, 0\rangle,!! and !!\langle 4, 7\rangle!! triangle I mentioned before, but the numbers could not possibly be easier to remember. So the method is a success: I wanted simpler numbers or a better approximation, and I got the same approximation with simpler numbers.

To draw a 60° angle on graph paper, mark !!P=\langle 3,1\rangle!! and !!Q=\langle 1, 5\rangle!!, draw lines to them from the origin with a straightedge, and there is your 60° angle, to better than a half a percent.

A graph of the lines 3y=x and y=5x, with the points
(3,1) and (1,5) marked, demonstrating that the angle between the
lines is very close to 60 degrees.

There are some other items in the table (for example row 18 gives !!P=\langle 18,1\rangle!! and !!Q=\langle 1, 2\rangle!!) but because of the way we constructed the table, every row is going to give us the same angle of !!60.255°!!, because we approximated !!\sqrt3\approx\frac74!! and !!60.255° = \tan^{-1}\frac74!!. And the chance of finding numbers better than !!\langle 3,1\rangle!! and !!\langle 1, 5\rangle!! seems slim. So now let's see if we can get the angle closer to exactly !!60°!! by using a better approximation to !!\sqrt3!! than !!\frac 74!!.

The next convergents to !!\sqrt 3!! are !!\frac{19}{11}!! and !!\frac{26}{15}!!. I tried the same procedure for !!\frac{19}{11}!! and it was a bust. But !!\frac{26}{15}!! hit the jackpot: !!a=4!! gives us !!15a-26 = 34!! and !!26a+15=119!!, both of which are multiples of 17. So the points are !!P=\langle 4,1\rangle!! and !!Q=\langle 2, 7\rangle!!, and this time the angle between the rays is !!\tan^{-1}\frac{26}{15} = 60.018°!!. This is as accurate as anyone drawing on graph paper could possibly need; on a circle with a one-mile radius it is an error of 20 inches.

A very similar plot, this time with the lines 4y=x and 7y=2x

Of course, the triangles you get are no longer equilateral, not even close. That first one has sides of !!\sqrt{10}, \sqrt{20}, !! and !!\sqrt{26}!!, and the second one has sides of !!\sqrt{17}, \sqrt{40}, !! and !!\sqrt{53}!!. But! The slopes of the lines are so simple, it's easy to construct equilateral triangles with a straightedge and a bit of easy measuring. Let's do it on the !!60.018°!! angle and see how it looks.

!!\overrightarrow{OP}!! has slope !!\frac14!!, so the perpendicular to it has slope !!-4!!, which means that you can draw the perpendicular by placing the straightedge between !!P!! and some point !!P+x\langle -1, 4\rangle!!, say !!\langle 2, 9\rangle!! as in the picture. The straightedge should have slope !!-4!!, which is very easy to arrange: just imagine the little squares grouped into stacks of four, and have the straightedge go through opposite corners of each stack. The line won't necessarily intersect !!\overrightarrow{OQ}!! anywhere great, but it doesn't need to, because we can just mark the intersection, wherever it is:

Same as the previous plot, but now there is another line y-1=-4(x-4)
drawn through (4, 1) and its intersection with the other line, near
(2¼, 8).

Let's call that intersection !!A!! for “apex”.

The point opposite to !!O!! on the other side of !!P!! is even easier; it's just !!P'=2P =\langle 8, 2\rangle!!. And the segment !!P'A!! is the third side of our equilateral triangle:

The previous diagram, with the third side of the equilateral triangle drawn in.

This triangle is geometrically similar to a triangle with vertices at !!\langle 0, 0\rangle, \langle 30, 0\rangle,!! and !!\langle 15, 26\rangle!!, and the angles are just as close to 60°, but it is much smaller.
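Here is a small numerical sketch that checks this: it computes !!A!! by intersecting the two lines, and then measures the three sides of the triangle:

import math

# A is where the slope -4 line through P = (4,1) meets line OQ (y = 3.5x):
# 3.5x = 17 - 4x, so x = 17/7.5
x = 17 / 7.5
A = (x, 3.5 * x)               # approximately (2.267, 7.933)
Pp = (8, 2)                    # P' = 2P
O = (0, 0)

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

print(dist(O, Pp), dist(O, A), dist(Pp, A))
# all three sides come out around 8.25; the triangle is equilateral
# to within a tiny fraction of a percent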

Woot!


[Other articles in category /math] permanent link

Mon, 19 Nov 2018

I love the dodecahedron

I think I forgot to mention that I was looking recently at hamiltonian cycles on a dodecahedron. The dodecahedron has 30 edges and 20 vertices, so a hamiltonian cycle contains 20 edges and omits 10. It turns out that it is possible to color the edges of the dodecahedron in three colors so that:

  • Every vertex is incident to one edge of each color
  • The edges of any two of the three colors form a hamiltonian cycle

Dear visually impaired people: I try to provide clear
descriptions of illustrations, but here I was stumped about what I
could say that would be helpful that I had not already said.  I would
be grateful for your advice and suggestions.

Marvelous!

(In this presentation, I have taken one of the vertices and sent it away to infinity. The three edges with arrowheads are all attached to that vertex, off at infinity, and the three faces incident to it have been stretched out to cover the rest of the plane.)
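(Such a coloring can also be found by brute-force search. The following sketch, which assumes networkx and its built-in dodecahedral_graph, backtracks over proper 3-edge-colorings and keeps the first one in which every two color classes form a single cycle; it may take a little while to run, and it is only a verification, not how the coloring was originally constructed.)

import networkx as nx
from itertools import combinations

G = nx.dodecahedral_graph()          # 20 vertices, 30 edges
edges = list(G.edges())

def union_is_hamiltonian(coloring, c1, c2):
    # the union of two of the color classes is 2-regular, so it is a
    # single hamiltonian cycle exactly when it is connected
    H = nx.Graph(e for e in edges if coloring[e] in (c1, c2))
    return H.number_of_nodes() == 20 and nx.is_connected(H)

def search(i, coloring, used):
    if i == len(edges):
        if all(union_is_hamiltonian(coloring, a, b)
               for a, b in combinations(range(3), 2)):
            return dict(coloring)
        return None
    u, v = edges[i]
    for c in range(3):
        if c not in used[u] and c not in used[v]:
            coloring[edges[i]] = c
            used[u].add(c); used[v].add(c)
            found = search(i + 1, coloring, used)
            if found: return found
            del coloring[edges[i]]
            used[u].remove(c); used[v].remove(c)
    return None

print(search(0, {}, {v: set() for v in G}))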

Every face has five edges and there are only three colors, so the colors can't be distributed evenly around a face. Each face is surrounded by two edges of one color, two of a second color, and only one of the last color. This naturally divides the 12 faces into three classes, depending on which color is assigned to only one edge of that face.

This is
the same picture as before, but each of the 12 regions of the plane
has been annotated with a large colored circle, the same as the color
of which the region has only one boundary edge.

Each class contains two pairs of two adjacent pentagons, and each adjacent pair is adjacent to the four pairs in the other classes but not to the other pair in its own class.

Each pair shares a single edge, which we might call its “hinge”. Each pair has 8 vertices, of which two are on its hinge, four are adjacent to the hinge, and two are not near the hinge. These last two vertices are always part of the hinges of the pairs of a different class.

I could think about this for a long time, and probably will. There is plenty more to be seen, but I think there is something else I was supposed to be doing today, let me think…. Oh yes! My “job”! So I will leave you to go on from here on your own.

[ Addendum 20181203: David Eppstein has written a much longer and more detailed article about triply-Hamiltonian edge colorings, using this example as a jumping-off point. ]


[Other articles in category /math] permanent link

Sat, 17 Nov 2018

How do you make a stella octangula?

Yesterday Katara asked me out of nowhere “When you make a stella octangula, do you build it up from an octahedron or a tetrahedron?” Stella octangula was her favorite polyhedron eight years ago.

“Uh,” I said. “Both?”

Then she had to make one to see what I meant. You can start with a regular octahedron:

a regular octahedron made of six steel
ball bearings and twelve blue and green magnetic struts

Then you erect spikes onto four of the octahedron's faces; this produces a regular tetrahedron:

A very similar octahedron, this time
with an orange tripod attached to four of its eight faces, forming an
orange tetrahedron with a blue and green octahedron embedded in it

Then you erect spikes onto the other four of the octahedron's faces. Now you have a stella octangula.

The octahedron from before, but with
red tripods attached to its other four faces, making eight tripods in
all.  The final result looks like interpenetrating red and orange
tetrahedra, with the original octahedron in their intersection

So yeah, both. Or instead of starting with a unit octahedron and erecting eight spikes of size 1, you can start with a unit tetrahedron and erect four spikes of size ½. It's both at once.


[Other articles in category /math] permanent link

Tue, 13 Nov 2018

Counting paths through polyhedra

A while back someone asked on math stack exchange how many paths there were of length !!N!! from one vertex of a dodecahedron to the opposite vertex. The vertices are distance 5 apart, so for !!N<5!! the answer is zero, but the paths need not be simple, so the number grows rapidly with !!N!!; there are 58 million paths of length 19.
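(The computer calculation is just a matrix power. A sketch, assuming networkx for the graph and taking “opposite” to mean any vertex at graph distance 5:)

import networkx as nx
import numpy as np

G = nx.dodecahedral_graph()
A = nx.to_numpy_array(G, dtype=np.int64)

# any vertex at graph distance 5 from vertex 0 is antipodal to it
opposite = next(v for v, d in nx.shortest_path_length(G, 0).items() if d == 5)

# the (0, opposite) entry of A^N counts the length-N paths between them
print(np.linalg.matrix_power(A, 19)[0, opposite])   # about 58 million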

This is the kind of thing that the computer answers easily, so that's where I started, and unfortunately that's also where I finished, saying:

I'm still working out a combinatorial method of calculating the answer, and I may not be successful.

Another user reminded me of this and I took another whack at it. I couldn't remember what my idea had been last year, but my idea this time around was to think of the dodecahedron as the Cayley graph for a group, and then the paths are expressions that multiply out to a particular group element.

I started by looking at a tetrahedron instead of at a dodecahedron, to see how it would work out. Here's a tetrahedron.

Let's say we're counting paths from the center vertex to one of the others, say the one at the top. (Tetrahedra don't have opposite vertices, but that's not an important part of the problem.) A path is just a list of edges, and each edge is labeled with a letter !!a!!, !!b!!, or !!c!!. Since each vertex has exactly one edge with each label, every sequence of !!a!!'s, !!b!!'s, and !!c!!'s represents a distinct path from the center to somewhere else, although not necessarily to the place we want to go. Which of these paths end at the top vertex?

The edge labeling I chose here lets us play a wonderful trick. First, since any edge has the same label at both ends, the path !!x!! always ends at the same place as !!xaa!!, because the first !!a!! goes somewhere else and then the second !!a!! comes back again, and similarly !!xbb!! and !!xcc!! also go to the same place. So if we have a path that ends where we want, we can insert any number of pairs !!aa, bb, !! or !!cc!! and the new path will end at the same place.

But there's an even better trick available. For any starting point, and any letters !!x!! and !!y!!, the path !!xy!! always ends at the same place as !!yx!!. For example, if we start at the middle and follow edge !!b!!, then !!c!!, we end at the lower left; similarly if we follow edge !!c!! and then !!b!! we end at the same place, although by a different path.

Now suppose we want to find all the paths of length 7 from the middle to the top. Such a path is a sequence of a's, b's, and c's of length 7. Every such sequence specifies a different path out of the middle vertex, but how can we recognize which sequences end at the top vertex?

Since !!xy!! always goes to the same place as !!yx!!, the order of the seven letters doesn't matter. A complicated-seeming path like abacbcb must go to the same place as !!aabbbcc!!, the same path with the letters in alphabetical order. And since !!xx!! always goes back to where it came from, the path !!aabbbcc!! goes to the same place as !!b!!.

Since the paths we want are those that go to the same place as the trivial path !!c!!, we want paths that have an even number of !!a!!s and !!b!!s and an odd number of !!c!!s. Any path fitting that description will go to same place as !!c!!, which is the top vertex. It's easy to enumerate such paths:

Prototypical path    How many?
ccccccc                  1
cccccaa                 21
cccccbb                 21
cccaaaa                 35
cccaabb                210
cccbbbb                 35
caaaaaa                  7
caaaabb                105
caabbbb                105
cbbbbbb                  7
Total                  547

Here something like “cccbbbb” stands for all the paths that have three c's and four b's, in some order; there are !!\frac{7!}{4!3!} = 35!! possible orders, so 35 paths of this type. If we wanted to consider paths of arbitrary length, we could use Burnside's lemma, but I consider the tetrahedron to have been sufficiently well solved by the observations above (we counted 547 paths by hand in under 60 seconds) and I don't want to belabor the point.
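(A brute-force count in Python agrees:)

from itertools import product

# length-7 strings over {a,b,c} with an even number of a's and an even
# number of b's, hence an odd number of c's
print(sum(1 for w in product("abc", repeat=7)
          if w.count("a") % 2 == 0 and w.count("b") % 2 == 0))   # 547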

Okay! Easy-peasy!

Now let's try cubes:

Here we'll consider paths between two antipodal vertices in the upper left and the lower right, which I've colored much darker gray than the other six vertices.

The same magic happens as in the tetrahedron. No matter where we start, and no matter what !!x!! and !!y!! are, the path !!xy!! always gets us to the same place as !!yx!!. So again, if some complicated path gets us where we want to go, we can permute its components into any order and get a different path of the same length to the same place. For example, starting from the upper left, bcba, abcb, and abbc all go to the same place.

And again, because !!xx!! always makes a trip along one edge and then back along the same edge, it never goes anywhere. So the three paths in the previous paragraph also go to the same place as ac and ca and also aa bcba bb aa aa aa aa bb cc cc cc bb.

We want to count paths from one dark vertex to the other. Obviously abc is one such, and so too must be bac, cba, acb, and so forth. There are six paths of length 3.

To get paths of length 5, we must insert a pair of matching letters into one of the paths of length 3. Without loss of generality we can assume that we are inserting aa. There are 20 possible orders for aaabc, and three choices about which pair to insert, for a total of 60 paths.

To get paths of length 7, we must insert two pairs. If the two pairs are the same, there are !!\frac{7!}{5!} = 42!! possible orders and 3 choices about which letters to insert, for a total of 126. If the two pairs are different, there are !!\frac{7!}{3!3!} = 140!! possible orders and again 3 choices about which pairs to insert, for a total of 420, and a grand total of !!420+126 = 546!! paths of length 7. Counting the paths of length 9 is almost as easy. For the general case, again we could use Burnside's lemma, or at this point we could look up the unusual sequence !!6, 60, 546!! in OEIS and find that the number of paths of length !!2n+1!! is already known to be !!\frac34(9^n-1)!!.
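(The adjacency matrix of the cube confirms these counts; a sketch, again assuming networkx:)

import networkx as nx
import numpy as np

Q3 = nx.hypercube_graph(3)               # vertices are 0/1 triples
A = nx.to_numpy_array(Q3, dtype=np.int64)
nodes = list(Q3.nodes())
i, j = nodes.index((0, 0, 0)), nodes.index((1, 1, 1))
for N in (3, 5, 7, 9):
    print(N, np.linalg.matrix_power(A, N)[i, j])
# prints 6, 60, 546, 4920, matching (3/4)(9^n - 1)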

So far this technique has worked undeservedly well. The original problem wanted to use it to study paths on a dodecahedron. Where, unfortunately, the magic property !!xy=yx!! doesn't hold. It is possible to label the edges of the dodecahedron so that every sequence of labels determines a unique path:

but there's nothing like !!xy=yx!!. Well, nothing exactly like it. !!xy=yx!! is equivalent to !!(xy)^2=1!!, and here instead we have !!(xy)^{10}=1!!. I'm not sure that helps. I will probably need another idea.

The method fails similarly for the octahedron — which is good, because I can use the octahedron as a test platform to try to figure out a new idea. On an octahedron we need to use four kinds of labels because each vertex has four edges emerging from it:

Here again we don't have !!(xy)^2=1!! but we do have !!(xy)^3 = 1!!. So it's possible that if I figure out a good way to enumerate paths on the octahedron I may be able to adapt the technique to the dodecahedron. But the octahedron will be !!\frac{10}3!! times easier.

Viewed as groups, by the way, these path groups are all examples of Coxeter groups. I'm not sure this is actually a useful observation, but I've been wanting to learn about Coxeter groups for a long time and this might be a good enough excuse.


[Other articles in category /math] permanent link

A puzzle about representing numbers as a sum of 3-smooth numbers

I think this would be fun for a suitably-minded bright kid of maybe 12–15 years old.

Consider the following table of numbers of the form !!2^i3^j!!:

1 3 9 27 81 243
2 6 18 54 162
4 12 36 108
8 24 72 216
16 48 144
32 96
64 192
128

Given a number !!n!!, it is possible to represent !!n!! as a sum of entries from the table, with the following constraints:

  • No more than one entry from any column
  • An entry may only be used if it is in a strictly higher row than any entry to its left.

For example, one may not represent !!23 = 2 + 12 + 9!!, because the !!12!! is in a lower row than the !!2!! to its left.

1 3 9 27
2 6 18 54
4 12 36 108

But !!23 = 8 + 6 + 9!! is acceptable, because 6 is higher than 8, and 9 is higher than 6.

1 3 9 27
2 6 18 54
4 12 36 108
8 24 72 216

Or, put another way: can we represent any number !!n!! in the form $$n = \sum_i 2^{a_i}3^{b_i}$$ where the !!a_i!! are strictly decreasing and the !!b_i!! are strictly increasing?

Spoiler:

maxpow3 1 = 1
maxpow3 2 = 1
maxpow3 n = 3 * maxpow3 (n `div` 3)

rep :: Integer -> [Integer]
rep 0 = []
rep n = if even n then map (* 2) (rep (n `div` 2))
                  else (rep (n - mp3)) ++ [mp3]
  where mp3 = maxpow3 n
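(For example, rep 23 yields [8,6,9], which is exactly the representation of !!23!! found above.)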

Sadly, the representation is not unique. For example, !!8+3 = 2+9!!, and !!32+24+9 = 32+6+27 = 8+12+18+27!!.


[Other articles in category /math] permanent link

Mon, 24 Sep 2018

Zipfian regression

A long time ago, I wrote up a blog article about how to derive the linear regression formulas from first principles. Then I decided it was not of general interest, so I didn't publish it. (Sometime later I posted it to math stack exchange, so the effort wasn't wasted.)

The basic idea is, you have some points !!(x_i, y_i)!!, and you assume that they can be approximated by a line !!y=mx+b!!. You let the error be a function of !!m!! and !!b!!: $$\varepsilon(m, b) = \sum (mx_i + b - y_i)^2$$ and you use basic calculus to find !!m!! and !!b!! for which !!\varepsilon!! is minimal. Bing bang boom.

I knew this for a long time but it didn't occur to me until a few months ago that you could use basically the same technique to fit any other sort of curve. For example, suppose you think your data is not a line but a parabola of the type !!y=ax^2+bx+c!!. Then let the error be a function of !!a, b, !! and !!c!!:

$$\varepsilon(a,b,c) = \sum (ax_i^2 + bx_i + c - y_i)^2$$

and again minimize !!\varepsilon!!. You can even get a closed form as you can with ordinary linear regression.

I especially wanted to try fitting hyperbolas to data that I expected to have a Zipfian distribution. For example, take the hundred most popular names for girl babies in Illinois in 2017. Is there a simple formula which, given an ordinal number like 27, tells us approximately how many girls were given the 27th most popular name that year? (“Scarlett”? Seriously?)

I first tried fitting a hyperbola of the form !!y = c + \frac ax!!. We could, of course, take !!y_i' = \frac 1{y_i}!! and then try to fit a line to the points !!\langle x_i, y_i'\rangle!! instead. But this will distort the measurement of the error. It will tolerate gross errors in the points with large !!y!!-coordinates, and it will be extremely intolerant of errors in points close to the !!x!!-axis. This may not be what we want, and it wasn't what I wanted. So I went ahead and figured out the Zipfian regression formulas:

$$ \begin{align} a & = \frac{HY-NQ}D \\ c & = \frac{HQ-JY}D \end{align} $$

Where:

$$\begin{align} H & = \sum x_i^{-1} \\ J & = \sum x_i^{-2} \\ N & = \sum 1\\ Q & = \sum y_ix_i^{-1} \\ Y & = \sum y_i \\ D & = H^2 - NJ \end{align} $$

When I tried to fit this to some known hyperbolic data, it worked just fine. For example, given the four points !!\langle1, 1\rangle, \langle2, 0.5\rangle, \langle3, 0.333\rangle, \langle4, 0.25\rangle!!, it produces the hyperbola $$y = \frac{1.00018461538462}{x} - 0.000179487179486797.$$ This is close enough to !!y=\frac1x!! to confirm that the formulas work; the slight error in the coefficients is because we used !!\bigl\langle3, \frac{333}{1000}\bigr\rangle!! rather than !!\bigl\langle3, \frac13\bigr\rangle!!.

Unfortunately these formulas don't work for the Illinois baby data. Or rather, the hyperbola fits very badly. The regression produces !!y = \frac{892.765272442475}{x} + 182.128894972025:!!

A graph showing
the actual data (as blue dots) and the best-fit hyperbola (as a red
line).  They do not match well.  The hyperbola drops down much too
quickly, then makes a sharp turn to catch up with the right tail it
has overshot. It levels off at 182.12, instead of at around 100.

I think maybe I need to be using some hyperbola with more parameters, maybe something like !!y = \frac a{x-b} + c!!.

In the meantime, here's a trivial script for fitting !!y = \frac ax + c!! hyperbolas to your data:

while (<>) {
  chomp;
  my ($x, $y) = split;
  # one-column input: use the line number as x
  ($x, $y) = ($., $x) if not defined $y;
  $H += 1/$x;       # H = sum of 1/x
  $J += 1/($x*$x);  # J = sum of 1/x^2
  $N += 1;          # N = number of points
  $Q += $y/$x;      # Q = sum of y/x
  $Y += $y;         # Y = sum of y
}

my $D = $H*$H - $J*$N;
my $c = ($Q*$H - $J*$Y)/$D;
my $a = ($Y*$H - $Q*$N)/$D;

print "y = $a / x + $c\n";

[ Addendum 20180925: Shreevatsa R. asked a related question on StackOverflow and summarized the answers. The problem is more complex than it might first appear. Check it out. ]


[Other articles in category /math] permanent link

Fri, 14 Sep 2018

How not to remember the prime numbers under 1,000

A while back I said I wanted to memorize all the prime numbers under 1,000, because I am tired of getting some number like 851 or 857, or even 307, and then not knowing whether it is prime.

The straightforward way to deal with this is: just memorize the list. There are only 168 of them, and I have the first 25 or so memorized anyway.

But I had a different idea also. Say that a set of numbers from !!10n!! to !!10n+9!! is a “decade”. Each decade contains at most 4 primes, so 4 bits are enough to describe the primes in a single decade. Assign a consonant to each of the 16 possible patterns, say “b” when none of the four numbers is a prime, “d” when only !!10n+1!! is prime, “f” when only !!10n+3!! is, and so on.

Now memorizing the primes in the 90 decades is reduced to memorizing 90 consonants. Inserting vowels wherever convenient, we have now turned the problem into one of memorizing around 45 words. A word like “potato” would encode the constellation of primes in three consecutive decades. 45 words is only a few sentences, so perhaps we could reduce the list of primes to a short and easily-remembered paragraph. If so, memorizing a few sentences should be much easier than memorizing the original list of primes.

The method has several clear drawbacks. We would have to memorize the mapping from consonants to bit patterns, but this is short and maybe not too difficult.

More significant is that if we're trying to decide if, say, 637 is prime, we have to remember which consonant in which word represents the 63rd decade. This can be fixed, maybe, by selecting words and sentences of standard length. Say there are three sentences and each contains 30 consonants. Maybe we can arrange that words always appear in patterns, say four words with 1 or 2 consonants each that together total 7 consonants, followed by a single long word with three consonants. Then each sentence can contain three of these five-word groups and it will be relatively easy to locate the 23rd consonant in a sentence: it is early in the third group.

Katara and I tried this, with not much success. But I'm not ready to give up on the idea quite yet. A problem we encountered early on is that we remember consonants not by how words are spelled but by how they sound. So we don't want a word like “hammer” to represent the consonant pattern h-m-m but rather just h-m.

Another problem is that some constellations of primes are much more common than others. We initially assigned consonants to constellations in order. This assigned letter “b” to the decades that contain no primes. But this is the most common situation, so the letter “b” tended to predominate in the words we needed for our mnemonic. We need to be careful to assign the most common constellations to the best letters.
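Counting how often each constellation occurs is easy to mechanize; here is a sketch (the real consonant assignment is in select_words.py):

from collections import Counter

def isprime(n):
    if n < 2: return False
    d = 2
    while d * d <= n:
        if n % d == 0: return False
        d += 1
    return True

# the 4-bit constellation of each decade from 100 to 999
constellations = [
    tuple(int(isprime(10 * n + d)) for d in (1, 3, 7, 9))
    for n in range(10, 100)
]
print(Counter(constellations).most_common())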

Some consonants in English like to appear in clusters, and it's not trivial to match these up with the common constellations. The mapping from prime constellations to consonants must be carefully chosen to work with English. We initially assigned “s” to the constellation “☆•☆☆” (where !!10n+1, 10n+7,!! and !!10n+9!! are prime but !!10n+3!! is not) and “t” to the constellation “☆☆••” (where !!10n+1!! and !!10n+3!! are prime but !!10n+7!! and !!10n+9!! are not) but these constellations cannot appear consecutively, since at least one of !!10n+7, 10n+9, 10n+11!! is composite. So any word with “s” and “t” with no intervening consonants was out of play. This eliminated a significant fraction of the entire dictionary!

I still think it could be made to work, maybe. If you're interested in playing around with this, the programs I wrote are available on Github. The mapping from decade constellations to consonant clusters is in select_words.py.


[Other articles in category /math] permanent link

Mon, 27 Aug 2018

Linear mappings of projective space

[ Epistemic status: uninformed musings; anything and everything in here might be wrong or ill-conceived. ]

Suppose !!V!! is some vector space, and let !!V_n!! be the family of all !!n!!-dimensional subspaces of !!V!!. In particular !!V_1!! is the family of all one-dimensional subspaces of !!V!!. When !!V!! is euclidean space, !!V_1!! is the corresponding projective space.

If !!L!! is a nonsingular linear mapping !!V\to V!!, then !!L!! induces a mapping !!L_n!! from !!V_n\to V_n!!. (It does not make sense to ask at this point if the induced mapping is linear because we do not have any obvious linear structure on the projective space. Or maybe there is but I have not heard of it.)

The eigenspaces of !!L!! are precisely the fixed points of !!L_n!!.

(Wrong! Any subspace generated by an !!n!!-set of eigenvectors is a fixed point of !!L_n!!. But such a subspace is not in general an eigenspace. (Note this includes the entire space as a special case.) The converse, however, does hold, since any eigenspace is generated by a subset of eigenvectors.)

Now it is an interesting and useful fact that for typical mappings, say from !!\Bbb R\to\Bbb R!!, fixed points tend to be attractors or repellers. (For example, see this earlier article.) This is true of !!L_1!! also. One-dimensional eigenspaces whose eigenvalues are larger than !!1!! are attractors of !!L_1!!, and those whose eigenvalues are smaller than !!1!! are repellers, and this is one reason why the eigenspaces are important: if !!L!! represents the evolution of state space, then vectors in !!V!! will tend to evolve toward being eigenvectors.
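(This is the familiar power-iteration phenomenon. A numerical sketch, with a made-up diagonal operator whose eigenvalues are 3, 2, and 1:)

import numpy as np

rng = np.random.default_rng(0)
L = np.diag([3.0, 2.0, 1.0])    # hypothetical operator, for illustration
v = rng.standard_normal(3)
for _ in range(50):
    v = L @ v
    v /= np.linalg.norm(v)      # renormalize; only the direction matters
print(v)   # approaches ±(1, 0, 0), the eigenline of the largest eigenvalue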

So consider, say, the projective plane !!\Bbb P^2!!, under the induced mapping of some linear operator on !!\Bbb R^3!!. There will be (typically) three special points in !!\Bbb P^2!! and other points will typically tend to gravitate towards one or more of these. Isn't this interesting? Is the three-dimensional situation more interesting than the two-dimensional one? What if a point attracts in one dimension and repels in the other? What can the orbits look like?

Or consider the Grassmannian space !!Gr(2, \Bbb R^3)!! of all planes in !!\Bbb R^3!!. Does a linear operator on !!\Bbb R^3!! tend to drive points in !!Gr(2, \Bbb R^3)!! toward three fixed points? (In general, I suppose !!Gr(k, \Bbb R^n)!! will have !!n\choose k!! fixed points, some of which might attract and some repel.) Is there any geometric intuition for this?

I have been wanting to think about Grassmannian spaces more for a long time now.


[Other articles in category /math] permanent link

Thu, 02 Aug 2018

How to explain infinity to kids

A professor of mine once said to me that all teaching was a process of lying, and then of replacing the lies with successively better approximations of the truth. “I say it's like this,” he said, “and then later I say, well, it's not actually like that, it's more like this, because the real story is too complicated to explain all at once.” I wouldn't have phrased it like this, but I agree with him in principle. One of the most important issues in pedagogical practice is deciding what to leave out, and for how long.

Kids inevitably want to ask about numerical infinity, and many adults will fumble the question, mumbling out some vague or mystical blather. Mathematics is prepared to offer a coherent and carefully-considered answer. Unfortunately, many mathematically-trained people also fumble the question, because mathematics is prepared to offer too many answers. So the mathematical adult will often say something like “well, it's a lot of things…” which for this purpose is exactly not what is wanted. When explaining the concept for the very first time, it is better to give a clear and accurate partial explanation than a vague and imprecise overview. This article suggests an answer that is short, to the point, and also technically correct.

In mathematics “infinity” names a whole collection of not always closely related concepts from analysis, geometry, and set theory. Some of the concepts that come under this heading are:

  • The !!+\infty!! and !!-\infty!! one meets in real analysis, which can be seen either as a convenient fiction (with “as !!x!! goes to infinity” being only a conventional rephrasing of “as !!x!! increases without bound”) or as the endpoints in the two-point compactification of !!\Bbb R!!.

  • The !!\infty!! one meets in complex analysis, which is a single point one adds to compactify the complex plane into the Riemann sphere.

  • The real version of the preceding, the “point at infinity” on the real projective line.

  • The entire “line at infinity” that bounds the real projective plane.

  • The vast family of set-theoretic infinite cardinals !!\aleph_0, \aleph_1, \ldots!!.

  • The vast family of set-theoretic infinite ordinals, !!\omega, \omega+1, \ldots, \epsilon_0, \ldots, \Omega, \ldots!!.

I made a decision ahead of time that when my kids first asked what infinity was, I would at first adopt the stance that “infinity” referred specifically to the set-theoretic ordinal !!\omega!!, and that the two terms were interchangeable. I could provide more details later on. But my first answer to “what is infinity” was:

It's the smallest number you can't count to.

As an explanation of !!\omega!! for kids, I think this is flawless. It's brief and it's understandable. It phrases the idea in familiar terms: counting. And it is technically unimpeachable. !!\omega!! is, in fact, precisely the unique smallest number you can't count to.

How can there be a number that you can't count to? Kids who ask about infinity are already quite familiar with the idea that the sequence of natural numbers is unending, and that they can count on and on without bound. “Imagine taking all the numbers that you could reach by counting,” I said. “Then add one more, after all of them. That is infinity.” This is a bit mind-boggling, but again it is technically unimpeachable, and the mind-bogglyness of it is nothing more than the intrinsic mind-bogglyness of the concept of infinity itself. None has been added by vagueness or imprecise metaphor. When you grapple with this notion, you are grappling with the essence of the problem of the completed infinity.

In my experience all kids make the same move at this point, which is to ask “what comes after infinity?” By taking “infinity” to mean !!\omega!!, we set ourselves up for an answer that is much better than the perplexing usual one “nothing comes after infinity”, which, if infinity is to be considered a number, is simply false. Instead we can decisively say that there is another number after infinity, which is called “infinity plus one”. This suggests further questions. “What comes after infinity plus one?” is obvious, but a bright kid will infer the existence of !!\omega\cdot 2!!. A different bright kid might ask about !!\omega-1!!, which opens a different but fruitful line of discussion: !!\omega!! is not a successor ordinal, it is a limit ordinal.

Or the kid might ask if infinity plus one isn't equal to infinity, in which case you can discuss the non-commutativity of ordinal addition: if you add the “plus one” at the beginning, it is the same as !!\omega!! itself; the picture has just been shifted over on the page:

But if you add the new item at the other end, it is a different picture:

Before there was only one extra item on the right, and now there are two. The first picture exemplifies the Dedekind property, which is an essential feature of infinity. But the existence of an injection from !!\omega+1!! to !!\omega!! does not mean that every such map is a bijection.

Just use !!\omega!!. Later on the kid may ask questions that will need to be answered with “Earlier, I did not tell you the whole story.” That is all right. At that point you can reveal the next thing.

[ Addendum 20220226: Joel D. Hamkins wrote a Math Stack Exchange post in which he made the same suggestion: “In my experience with children, one of the easiest-to-grasp concepts of infinity is provided by the transfinite ordinals…”. ]


[Other articles in category /math] permanent link

Thu, 26 Jul 2018

Testing for divisibility by 7

There are well-known tests if a number (represented as a base-10 numeral) is divisible by 2, 3, 5, 9, or 11. What about 7?

Let's look at where the divisibility-by-9 test comes from. We add up the digits of our number !!n!!. The sum !!s(n)!! is divisible by !!9!! if and only if !!n!! is. Why is that?

Say that !!d_nd_{n-1}\ldots d_0!! are the digits of our number !!n!!. Then

$$n = \sum 10^id_i.$$

The sum of the digits is

$$s(n) = \sum d_i$$

which differs from !!n!! by $$\sum (10^i-1)d_i.$$ Since !!10^i-1!! is a multiple of !!9!! for every !!i!!, every term in the last sum is a multiple of !!9!!. So by passing from !!n!! to its digit sum, we have subtracted some multiple of !!9!!, and the residue mod 9 is unchanged. Put another way:

$$\begin{align} n &= \sum 10^id_i \\ &\equiv \sum 1^id_i \pmod 9 \qquad\text{(because $10 \equiv 1\pmod 9$)} \\ &= \sum d_i \end{align} $$

The same argument works for the divisibility-by-3 test.

For !!11!! the analysis is similar. We add up the digits !!d_0+d_2+\ldots!! and !!d_1+d_3+\ldots!! and check if the sums are equal mod 11. Why alternating digits? It's because !!10\equiv -1\pmod{11}!!, so $$n\equiv \sum (-1)^id_i \pmod{11}$$

and the sum is zero only if the sum of the positive terms is equal to the sum of the negative terms.

The same type of analysis works similarly for !!2, 4, 5, !! and !!8!!. For !!4!! we observe that !!10^i\equiv 0\pmod 4!! for all !!i>1!!, so all but two terms of the sum vanish, leaving us with the rule that !!n!! is a multiple of !!4!! if and only if !!10d_1+d_0!! is. We could simplify this a bit: !!10\equiv 2\pmod 4!! so !!10d_1+d_0 \equiv 2d_1+d_0\pmod 4!!, but we don't usually bother. Say we are investigating !!571496!!; the rule tells us to just consider !!96!!. The "simplified" rule says to consider !!2\cdot9+6 = 24!! instead. It's not clear that that is actually easier.

This approach works badly for divisibility by 7, because !!10^i\bmod 7!! is not simple. It repeats with period 6.

$$\begin{array}{c|cccccc|cccccc|c} i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & \cdots \\ 10^i\bmod 7 & 1 & 3 & 2 & 6 & 4 & 5 & 1 & 3 & 2 & 6 & 4 & 5 & \cdots \\ \end{array} $$

The rule we get from this is:

Take the units digit. Add three times the tens digit, twice the hundreds digit, six times the thousands digit… (blah blah blah) and the original number is a multiple of !!7!! if and only if the sum is also.

For example, considering !!12345678!! we must calculate $$\begin{align} 12345678 & \Rightarrow & 3\cdot1 + 1\cdot 2 + 5\cdot 3 + 4\cdot 4 + 6\cdot 5 + 2\cdot6 + 3\cdot 7 + 1\cdot 8 & = & 107 \\\\ 107 & \Rightarrow & 2\cdot1 + 3\cdot 0 + 1\cdot7 & = & 9 \end{align} $$ and indeed !!12345678\equiv 107\equiv 9\pmod 7!!.
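(Mechanically, the rule looks like this; a sketch:)

WEIGHTS = [1, 3, 2, 6, 4, 5]    # 10^i mod 7, repeating with period 6

def weighted_digit_sum(n):
    return sum(int(d) * WEIGHTS[i % 6]
               for i, d in enumerate(reversed(str(n))))

n = 12345678
while n >= 10:
    n = weighted_digit_sum(n)   # 12345678 -> 107 -> 9
print(n)                        # 9, and 9 ≡ 12345678 (mod 7)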

My kids were taught the practical divisibility tests in school, or perhaps learned them from YouTube or something like that. Katara was impressed by my ability to test large numbers for divisibility by 7 and asked how I did it. At first I didn't think about my answer enough, and just said “Oh, it's not hard, just divide by 7 and look at the remainder.” (“Just count the legs and divide by 4.”) But I realized later that there are several tricks I was using that are not obvious. First, she had never learned short division. When I was in school I had been tormented extensively with long division, which looks like this:

This was all Katara had been shown, so when I said “just divide by 7” this is what she was thinking of. But you only need long division for large divisors. For simple divisors like !!7!!, I was taught short division, an easier technique:

Yeah, I wrote 4 when I meant 3. It doesn't matter, we don't care about the quotient anyway.

But that's one of the tricks I was using that wasn't obvious to Katara: we don't care about the quotient anyway, only the remainder. So when I did this in my head, I discarded the parts of the calculation that were about the quotient, and only kept the steps that pertained to the remainder. The way I was actually doing this sounded like this in my mind:

7 into 12 leaves 5. 7 into 53 leaves 4. 7 into 44 leaves 2. 7 into 25 leaves 4. 7 into 46 leaves 4. 7 into 47 leaves 5. 7 into 58 leaves 2. The answer is 2.

At each step, we consider only the leftmost part of the number, starting with !!12!!. !!12\div 7 !! has a remainder of 5, and to this 5 we append the next digit of the dividend, 3, giving 53. Then we continue in the same way: !!53\div 7!! has a remainder of 4, and to this 4 we append the next digit, giving 44. We never calculate the quotient at all.
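(Keeping only the remainders turns short division into a very short program; a sketch:)

def remainder(n, d=7):
    r = 0
    for digit in str(n):
        # append the next digit to the running remainder, then reduce
        r = (r * 10 + int(digit)) % d
    return r

print(remainder(12345678))      # 2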

I explained the idea with a smaller example, like this:

Suppose you want to see if 1234 is divisible by 7. It's 1200-something, so take away 700, which leaves 500-something. 500-what? 530-something. So take away 490, leaving 40-something. 40-what? 44. Now take away 42, leaving 2. That's not 0, so 1234 is not divisible by 7.

This is how I actually do it. For me this works reasonably well up to 13, and after that it gets progressively more difficult until by 37 I can't effectively do it at all. A crucial element is having the multiples of the divisor memorized. If you're thinking about the mod-13 residue of 680-something, it is a big help to know immediately that you can subtract 650.

A year or two ago I discovered a different method, which I'm sure must be ancient, but is interesting because it's quite different from the other methods I described.

Suppose that the final digit of !!n!! is !!b!!, so that !!n=10a+b!!. Then !!-2n = -20a-2b!!, and this is a multiple of !!7!! if and only if !!n!! is. But !!-20a\equiv a\pmod7 !!, so !!a-2b!! is a multiple of !!7!! if and only if !!n!! is. This gives us the rule:

To check if !!n!! is a multiple of 7, chop off the last digit, double it, and subtract it from the rest of the number. Repeat until the answer becomes obvious.

For !!1234!! we first chop off the !!4!! and subtract !!2\cdot4!! from !!123!! leaving !!115!!. Then we chop off the !!5!! and subtract !!2\cdot5!! from !!11!!, leaving !!1!!. This is not a multiple of !!7!!, so neither is !!1234!!. But with !!1239!!, which is a multiple of !!7!!, we get !!123-2\cdot 9 = 105!! and then !!10-2\cdot5 = 0!!, and we win.
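(This rule is easy to mechanize too; a sketch:)

def multiple_of_7(n):
    while n >= 70:
        # chop off the last digit, double it, subtract from the rest
        n = abs(n // 10 - 2 * (n % 10))
    return n % 7 == 0

print(multiple_of_7(1234), multiple_of_7(1239))   # False True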

In contrast to the other methods in this article, this method does not tell you the remainder on dividing because it does not preserve the residue mod 7. When we started with !!1234!! we ended with !!1!!. But !!1234\not\equiv 1\pmod 7!!; rather !!1234\equiv 2!!. Each step in this method multiplies the residue by -2, or, if you prefer, by 5. The original residue was 2, so the final residue is !!2\cdot-2\cdot-2 = 8 \equiv 1\pmod 7!!. (Or, if you prefer, !!2\cdot 5\cdot 5= 50 \equiv 1\pmod 7!!.) But if we only care about whether the residue is zero, multiplying it by !!-2!! doesn't matter.

There are some shortcuts in this method too. If the final digit is !!7!!, then rather than doubling it and subtracting 14 you can just chop it off and throw it away, going directly from !!10a+7!! to !!a!!. If your number is !!10a+8!! you can subtract !!7!! from it to make it easier to work with, getting !!10a+1!! and then going to !!a-2!! instead of to !!a-16!!. Similarly when your number ends in !!9!! you can go to !!a-4!! instead of to !!a-18!!. And on the other side, if it ends in !!4!! it is easier to go to !!a-1!! instead of to !!a-8!!.

But even with these tricks it's not clear that this is faster or easier than just doing the short division. It's the same number of steps, and it seems like each step is about the same amount of work.

Finally, I once wowed Katara on an airplane ride by showing her this:

To check !!1429!! using this device, you start at ⓪. The first digit is !!1!!, so you follow one black arrow, to ①, and then a blue arrow, to ③. The next digit is !!4!!, so you follow four black arrows, back to ⓪, and then a blue arrow which loops around to ⓪ again. The next digit is !!2!!, so you follow two black arrows to ② and then a blue arrow to ⑥. And the last digit is 9 so you then follow 9 black arrows to ① and then stop. If you end where you started, at ⓪, the number is divisible by 7. This time we ended at ①, so !!1429!! is not divisible by 7. But if the last digit had been !!1!! instead, then in the last step we would have followed only one black arrow from ⑥ to ⓪, before we stopped, so !!1421!! is a multiple of 7.

This probably isn't useful for mental calculations, but I can imagine that if you were stuck on a long plane ride with no calculator and you needed to compute a lot of mod-7 residues for some reason, it could be quicker than the short division method. The chart is easy to construct and need not be memorized. The black arrows obviously point from !!n!! to !!n+1!!, and the blue arrows all point from !!n!! to !!10n!!.
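(The device is a tiny finite-state machine. Here is a sketch that builds the two arrow tables and follows them:)

BLACK = [(n + 1) % 7 for n in range(7)]     # black arrow: n -> n+1
BLUE  = [(10 * n) % 7 for n in range(7)]    # blue arrow:  n -> 10n

def follow(number):
    state = 0
    digits = str(number)
    for i, d in enumerate(digits):
        for _ in range(int(d)):             # one black arrow per unit
            state = BLACK[state]
        if i < len(digits) - 1:             # blue arrow between digits
            state = BLUE[state]
    return state                            # 0 means divisible by 7

print(follow(1429), follow(1421))           # 1 0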

I made up a whole set of these diagrams and I think it's fun to see how the conventional divisibility rules turn up in them. For example, the rule for divisibility by 3 that says just add up the digits:

Or the rule for divisibility by 5 that says to ignore everything but the last digit:

[ Addendum 20201122: Testing for divisibility by 19. ]

[ Addendum 20220106: Another test for divisibility by 7. ]


[Other articles in category /math] permanent link

Mon, 23 Jul 2018

Operations that are not quite associative

The paper “Verification of Identities” (1997) of Rajagopalan and Schulman discusses fast ways to test whether a binary operation on a finite set is associative. In general, there is no method that is faster than the naïve algorithm of simply checking whether $$a\ast(b\ast c) = (a\ast b)\ast c$$ for all triples !!\langle a, b, c\rangle!!. This is because:

For every !!n≥3!!, there exists an operation with just one nonassociative triple.

(Page 3.)

But R&S do not give an example. I now have a very nice example, and I think the process that led me to it is a good example of Lower Mathematics at work.

Let's say that an operation !!\ast!! is “good” if it is associative except in exactly one case. We want to find a “good” operation. My first idea was that if we could find a primitive good operation on a set of 3 elements, we could probably extend it to give a good operation on a larger set. Say the set is !!\{a_0, a_1, a_2, b_0, b_1, \ldots, b_{n-4}\}!!. We just need to define the extended operation so that if either of the operands is !!b_i!!, it is associative, and if both operands are !!a_i!! then it is the same as the primitive good operation we already found. The former part is easy: just make it constant, say !!x\ast y = b_0!! except when !!x,y\in\{a_0, a_1, a_2\}!!.

So now all we need to do is find a single good primitive operation, and I did not expend any thought on this at all. There are only !!3^9=19683!! binary operations, and we could quite easily write a program to check them all. In fact, we can do better: generate a binary operation at random and check it. If it's not the primitive good operation we want, just throw it away and try again. This could take longer to run than the exhaustive search, if there happen to be very few good operations and if the program is unlucky, but the program is easier to write, and the run time will be utterly insignificant either way.
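(Reconstructed after the fact, the program might look something like this:)

import random
from itertools import product

def random_op():
    return [[random.randrange(3) for _ in range(3)] for _ in range(3)]

while True:
    op = random_op()
    bad = [(a, b, c) for a, b, c in product(range(3), repeat=3)
           if op[a][op[b][c]] != op[op[a][b]][c]]
    if len(bad) == 1:       # exactly one nonassociative triple
        break

print(op, bad)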

I wrote the program, which instantly produced:

$$ \begin{array}{c|ccc} \ast & 0 & 1 & 2 \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 2 & 0 & 1 & 2 \end{array} $$

This is associative except for the case !!1\ast(2\ast 1) \ne (1\ast 2)\ast 1!!. This solves the problem. But confirming that this is a good operation requires manually checking 27 cases, or a perhaps not-immediately-obvious case analysis.

But on a later run of the program, I got lucky and it found a simpler operation, which I can explain without a table:

!!a\ast b = 0!!, except !!2\ast 1=2!!.

Now we don't have to check 27 cases. The operation is simpler, so the proof is too: We know !!b\ast c\ne 1!!, so !!a\ast(b\ast c) !! must be !!0!!. And the only way !!(a \ast b)\ast c \ne 0!! can occur is when !!a\ast b=2!! and !!c=1!!, so !!a=2, b=1, c=1!!.

Now we can even dispense with the construction that extended the operation from !!3!! to !!n!! elements, because the description of the extended operation is the same. We wanted to extend it to be constant whenever !!a!! or !!b!! was larger than !!2!!, and that's what the description already says!

So that reduces the whole thing to about two sentences, which explain the example and why it works. But when reduced in that way, you see how the example works but not how you might go about finding it. I think the interesting part is to see how to get there, and quite a lot of mathematical education tends to over-emphasize analysis (“how can we show this is a good operation?”) at the expense of synthesis (“how can we find a good operation?”).

The exhaustive search would probably have produced the simple operation early on in its run, so there is something to be said for that approach too.


[Other articles in category /math] permanent link

Mon, 18 Jun 2018

Yesterday's metric has a name

Yesterday I presented as a counterexample the topology induced by the following metric:

A circle, with the
center marked.  A shortest-distance path is drawn in blue between two
blue points on the same radius, and in red between two red points on different
radii.  The blue path goes straight from one blue point to the other.
The red path goes from one point straight to the origin,
then straight to the other point.

I asked:

It seems like this example could be useful in other circumstances too. Does it have a name?

Several Gentle Readers have written in to tell me that this metric is variously known as the British Rail metric, French Metro metric, or SNCF metric. (SNCF = Société nationale des chemins de fer français, the French national railway company). In all cases the conceit is the same (up to isomorphism): to travel to a destination on another railway line one must change trains in London / Paris, where all the lines converge.

Wikipedia claims this is called the post office metric, again I suppose because all the mail comes to the central post office for sorting. I have not seen it called the FedEx metric, but it could have been, with the center of the disc in Memphis.

[ Addendum 20180621: Thanks to Brent Yorgey for correcting my claim that the FedEx super hub is in Nashville. It is in Memphis. ]


[Other articles in category /math] permanent link

Sun, 17 Jun 2018

Bounded does not imply totally bounded

I somehow managed to miss the notion of totally bounded when I was learning topology, and it popped up on stack exchange recently. It is a stronger version of boundedness for metric spaces: a space !!M!! is totally bounded if, for any chosen !!\epsilon!!, !!M!! can be covered by a finite family of balls of radius !!\epsilon!!.

This is a strictly stronger property than ordinary boundedness, so the question immediately comes up: what is an example of a space that is bounded but not totally bounded? Many examples are well-known. For example, the infinite-dimensional unit ball is bounded but not totally bounded. But I didn't think of this right away.

Instead I thought of the following rather odd example: Let !!S!! be the closed unit disc and assign each point a polar coordinate !!\langle r,\theta\rangle!! as usual. Now consider the following metric:

$$ d(\langle r_1, \theta_1\rangle, \langle r_2, \theta_2\rangle) = \begin{cases} r_1, & \qquad \text{ if $r_2 = 0$} \\ \lvert r_1 - r_2 \rvert, & \qquad\text{ if $\theta_1 = \theta_2$} \\ r_1 + r_2 & \qquad\text{ otherwise} \\ \end{cases} $$

The idea is this: you can travel between points only along the radii of the disc. To get from !!p_1!! to !!p_2!! that are on different radii, you must go through the origin:

A circle, with the
center marked.  A shortest-distance path is drawn in blue between two
blue points on the same radius, and in red between two red points on different
radii.  The blue path goes straight from one blue point to the other.
The red path goes from one point straight to the origin,
then straight to the other point.

Now clearly when !!\epsilon < \frac12!!, the !!\epsilon!!-ball that covers each point !!\left\langle 1, \theta\right\rangle!! lies entirely within one of the radii, and so an uncountable number of such balls are required to cover the disc.
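(In code, the metric is short; a sketch, with points given as (r, θ) pairs:)

def d(p, q):
    (r1, t1), (r2, t2) = p, q
    if r1 == 0 or r2 == 0:
        return r1 + r2         # one point is the origin
    if t1 == t2:
        return abs(r1 - r2)    # same radius: travel along it
    return r1 + r2             # different radii: travel via the origin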

It seems like this example could be useful in other circumstances too. Does it have a name?

[ Addendum 2018-07-18: Several Gentle Readers have informed me that this metric has not just one name, but several. ]


[Other articles in category /math] permanent link

Mon, 07 May 2018

Katara constructs finite projective planes

This weekend I got a very exciting text message from Katara:

I have a math question for you

Oh boy! I hope it's one I can answer.

Okay

there's this game called spot it where you have cards with 8 symbols on them like so

The back of the 'Spot It' box shows four white circular cards,
each with 8 little pictures on it. A red double-headed arrow between
the first two cards shows that both cards have a picture of a
ladybug.  Another arrow between the second and third cards shows that
both cards have a picture of a red heart.  The next arrow connects the
trees on cards 2 and 3, and a fourth arrow shows that cards 1 and 4
both have a purple cat. Even though part of card 3 is cut off, we can
see that they share a pair of lips. Cards 2 and 4 both have a gray
padlock picture, even though there's no arrow pointing it out.

and the goal is to find the one matching symbol on your card and the one in the middle

how is it possible that given any pair of cards, there is exactly one matching symbol

Well, whatever my failings as a dad, this is one problem I can solve. I went a little overboard in my reply:

You need a particular kind of structure called a projective plane.

They only exist for certain numbers of symbols

A simpler example has 7 cards with 3 symbols each.

One thing that's cool about it is that the symbols and the cards are "dual": say you give each round card a name. Then make up a new deck of square cards. There's one square card for each symbol. So there's a square "Ladybug" card. The Ladybug card has on it the names of the round cards that have the Ladybug. Now you can play Spot It with the square cards instead of the round ones: each two square cards have exactly one name in common.

In a geometric plane any two points lie on exactly one common line and any two lines intersect in exactly one common point. This is a sort of finite model of that, with cards playing the role of lines and symbols playing the role of points. Or vice versa, it doesn't matter.

More than you wanted to know 😂

ah thank you, I'm pretty sure I understand, sorry for not responding, my phone was charging

I still couldn't shut up about the finite projective planes:

No problem! No response necessary.

It is known that all finite projective planes have n²+n+1 points for some n. So I guess the Spot deck has either 31, 57, or 73 cards and the same number of symbols. Is 57 correct?

Must be 57 because I see from your picture that each card has 8 symbols.

Katara was very patient:

I guess, I would like to talk about this some more when i get home if that's okay

Any time.

(The game costs $13.)

Anyway this evening I cut up some index cards, and found a bunch of stickers in a drawer, and made Katara a projective plane of order 3. This has 13 cards, each with 4 different stickers, and again, every two cards share exactly one sticker. She was very pleased and wanted to know how to make them herself.

Each set of cards has an order, which is a non-negative integer. Then there must be !!n^2 + n + 1!! cards, each with !!n+1!! stickers or symbols. When !!n!! is a prime power, you can use field theory to construct a set of cards from the structure of the (unique) field of order !!n!!.

Fields to projective planes

Order 2

I'll describe the procedure using the plane of order !!n=2!!, which is unusually simple. There will be !!2^2+2+1 = 7!! cards, each with !!3!! of the !!7!! symbols.

Here is the finite field of order 2, called !!GF(2)!!:

+ 0 1
0 0 1
1 1 0
× 0 1
0 0 0
1 0 1
  • The stickers correspond to ordered triples of elements of !!GF(2)!!, except that !!\langle 0,0,0\rangle!! is always omitted. So they are:

    $$\require{cancel}\begin{array}{cc} \cancel{\langle 0,0,0\rangle} & \langle 1,0,0\rangle \\ \langle 0,0,1\rangle & \langle 1,0,1\rangle \\ \langle 0,1,0\rangle & \langle 1,1,0\rangle \\ \langle 0,1,1\rangle & \langle 1,1,1\rangle \\ \end{array} $$

    Of course, you probably do not want to use these symbols exactly. You might decide that !!\langle 1,0,0\rangle!! is a sticker with a picture of a fish, and !!\langle 0,1,0\rangle!! is a sticker with a ladybug.

  • Each card will have !!n+1 = 3!! stickers. To generate a card, pick any two stickers that haven't appeared together before and put them on the card. Say these stickers correspond to the triples !!\langle a,b,c\rangle!! and !!\langle x,y,z\rangle!!. To find the triple for the third sticker on the card, just add the first two triples componentwise, obtaining !!\langle a+x,b+y,c+z\rangle!!. Remember that the addition must be done according to the !!GF(2)!! addition table above! So for example if a card has !!\langle 1,0,1\rangle!! and !!\langle 0,1,1\rangle!!, its third triple will be

    $$\begin{align} \langle 1,0,1 \rangle + \langle 0,1,1 \rangle & = \\ \langle 1+0,0+1,1+1 \rangle & = \\ \langle 1,1,0 \rangle \end{align} $$

    Observe that it doesn't matter which two triples you add; you always get the third one!

Okay, well, that was simple.
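In fact the whole order-2 construction fits in a few lines of Python; a sketch, where each card is the set of three nonzero triples that sum to zero:

from itertools import product

triples = [t for t in product((0, 1), repeat=3) if any(t)]

def add(p, q):                         # componentwise addition in GF(2)
    return tuple((a + b) % 2 for a, b in zip(p, q))

cards = {frozenset((p, q, add(p, q)))
         for p, q in product(triples, repeat=2) if p < q}
print(len(cards))                      # 7 cards of 3 stickers each
# any two distinct cards share exactly one sticker:
assert all(len(c & d) == 1 for c in cards for d in cards if c != d)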

Larger order

After Katara did the order 2 case, which has 7 cards, each with 3 of the 7 kinds of stickers, she was ready to move on to something bigger. I had already done the order 3 deck so she decided to do order 4. This has !!4^2+4+1 = 21!! cards each with 5 of the 21 kinds of stickers. The arithmetic is more complicated too; it's !!GF(2^2)!! instead of !!GF(2)!!:

+ 0 1 2 3
0 0 1 2 3
1 1 0 3 2
2 2 3 0 1
3 3 2 1 0
× 0 1 2 3
0 0 0 0 0
1 0 1 2 3
2 0 2 3 1
3 0 3 1 2

When the order !!n!! is larger than 2, there is another wrinkle. There are !!4^3 = 64!! possible triples, and we are throwing away !!\langle 0,0,0\rangle!! as usual, so we have 63. But we need !!4^2+4+1 = 21!!, not !!63!!.

Each sticker is represented not by one triple, but by three. The triples !!\langle a,b,c\rangle, \langle 2a,2b,2c\rangle,!! and !!\langle 3a,3b,3c\rangle!! must be understood to represent the same sticker, all the multiplications being done according to the table above. Then each group of three triples corresponds to a sticker, and we have 21 as we wanted.

Each triple must have a leftmost non-zero entry, and in each group of three similar triples, there will be one where this leftmost non-zero entry is a !!1!!; we will take this as the canonical representative of its class, and it can wear a costume or a disguise that makes it appear to begin with a !!2!! or a !!3!!.

We might assign stickers to triples like this:

$$ \begin{array}{rl} \cancel{\langle 0,0,0\rangle} & \\ \langle 0,0,1 \rangle & \text{apple} \\ \hline \langle 0,1,0 \rangle & \text{bicycle} \\ \langle 0,1,1 \rangle & \text{carrot} \\ \langle 0,1,2 \rangle & \text{dice} \\ \langle 0,1,3 \rangle & \text{elephant} \\ \hline \langle 1,0,0 \rangle & \text{frog} \\ \langle 1,0,1 \rangle & \text{goat} \\ \langle 1,0,2 \rangle & \text{hat} \\ \langle 1,0,3 \rangle & \text{igloo} \\ \langle 1,1,0 \rangle & \text{jellyfish} \\ \langle 1,1,1 \rangle & \text{kite} \\ \langle 1,1,2 \rangle & \text{ladybug} \\ \langle 1,1,3 \rangle & \text{mermaid} \\ \langle 1,2,0 \rangle & \text{nose} \\ \langle 1,2,1 \rangle & \text{octopus} \\ \langle 1,2,2 \rangle & \text{piano} \\ \langle 1,2,3 \rangle & \text{queen} \\ \langle 1,3,0 \rangle & \text{rainbow} \\ \langle 1,3,1 \rangle & \text{shoe} \\ \langle 1,3,2 \rangle & \text{trombone} \\ \langle 1,3,3 \rangle & \text{umbrella} \\ \end{array} $$

We can stop there, because everything after !!\langle 1,3,3 \rangle!! begins with a !!2!! or a !!3!!, and so is some other triple in disguise. For example what sticker goes with !!\langle 0,2,3 \rangle!!? That's actually !!\langle 0,1,2 \rangle!! in disguise, it's !!2·\langle 0,1,2 \rangle!!, which is “dice”. Okay, how about !!\langle 3,3,1 \rangle!!? That's the same as !!3\cdot\langle 1,1,2 \rangle!!, which is “ladybug”. There are !!21!!, as we wanted. Note that the !!21!! naturally breaks down as !!1+4+4^2!!, depending on how many zeroes are at the beginning; that's where the !!n^2+n+1!! formula comes from.

Now, just like before, to make a card, we pick two triples that have not yet gone together, say !!\langle 0,0,1 \rangle!! and !!\langle 0,1,0 \rangle!!. We start adding these together as before, obtaining !!\langle 0,1,1 \rangle!!. But we must also add together the disguised versions of these triples, !!\langle 0,0,2 \rangle!! and !!\langle 0,0,3 \rangle!! for the first, and !!\langle 0,2,0 \rangle!! and !! \langle 0,3,0 \rangle!! for the second. This gets us two additional sums, !!\langle 0,2,3 \rangle!!, which is !!\langle 0,1,2 \rangle!! in disguise, and !!\langle 0,3,2 \rangle!!, which is !!\langle 0,1,3 \rangle!! in disguise.

It might seem like it also gets us !!\langle 0,2,2 \rangle!! and !!\langle 0,3,3 \rangle!!, but these are just !!\langle 0,1,1 \rangle!! again, in disguise. Since there are three disguises for !!\langle 0,0,1 \rangle!! and three for !!\langle 0,1,0 \rangle!!, we have nine possible sums, but it turns out that the nine sums are only three different triples, each in three different disguises. So our nine sums get us three additional triples, and, including the two we started with, that makes five, which is exactly how many we need for the first card. The first card gets the stickers for triples !!\langle 0,0,1 \rangle, \langle 0,1,0 \rangle, \langle 0,1,1 \rangle, \langle 0,1,2 \rangle,!! and !!\langle 0,1,3 \rangle!!, which are apple, bicycle, carrot, dice, and elephant.

That was anticlimactic. Let's do one more. We don't have a card yet with ladybug and trombone. These are !!\langle 1,1,2 \rangle!! and !!\langle 1,3,2 \rangle!!, and we must add them together, and also the disguised versions:

$$\begin{array}{c|ccc} & \langle 1,1,2 \rangle & \langle 2,2,3 \rangle & \langle 3,3,1 \rangle \\ \hline \langle 1,3,2 \rangle & \langle 0,2,0 \rangle & \langle 3,1,1 \rangle & \langle 2,0,3 \rangle \\ \langle 2,1,3 \rangle & \langle 3,0,1 \rangle & \langle 0,3,0 \rangle & \langle 1,2,2 \rangle \\ \langle 3,2,1 \rangle & \langle 2,3,3 \rangle & \langle 1,0,2 \rangle & \langle 0,1,0 \rangle \\ \end{array}$$

These nine results do indeed pick out three triples in three disguises each, and it's easy to select the three of these that are canonical: they have a 1 in the leftmost nonzero position, so the three sums are !!\langle 0,1,0 \rangle,!! !!\langle 1,0,2 \rangle,!! and !!\langle 1,2,2 \rangle!!, which are bicycle, hat, and piano. So the one card that has a ladybug and a trombone also has a bicycle, a hat, and a piano, which should not seem obvious. Note that this card does have the required single overlap with the other card we constructed: both have bicycles.
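All of this bookkeeping is a fine job for a computer. Here is a sketch of the order-4 construction, with the !!GF(4)!! tables above entered as lists:

from itertools import product

ADD = [[0,1,2,3],[1,0,3,2],[2,3,0,1],[3,2,1,0]]
MUL = [[0,0,0,0],[0,1,2,3],[0,2,3,1],[0,3,1,2]]
INV = [None, 1, 3, 2]                  # multiplicative inverses in GF(4)

def canon(t):
    # scale so the leftmost nonzero entry is 1 (strip off the disguise)
    lead = next(x for x in t if x)
    return tuple(MUL[INV[lead]][x] for x in t)

def vadd(p, q):
    return tuple(ADD[a][b] for a, b in zip(p, q))

def vscale(s, p):
    return tuple(MUL[s][x] for x in p)

stickers = {canon(t) for t in product(range(4), repeat=3) if any(t)}

cards = set()
for p, q in product(stickers, repeat=2):
    if p != q:
        card = {p, q} | {canon(vadd(p, vscale(s, q))) for s in (1, 2, 3)}
        cards.add(frozenset(card))

print(len(stickers), len(cards))       # 21 stickers, 21 cards
assert all(len(c) == 5 for c in cards)
assert all(len(c & d) == 1 for c in cards for d in cards if c != d)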

Well, that was fun. Katara did hers with colored dots instead of stickers:

Katara's set of 21 cards.  Each
has five colored dots of 21 total different colors, and each colored
dot is labeled with a letter of the alphabet in the same color.  There
are some mistakes that are crossed out and corrected, and a couple of
mistakes that are not yet crossed out or corrected.

The ABCDE card is in the upper left; the bicycle-hat-ladybug-piano-trombone one is the second row from the bottom, second column from the left. The colors look bad in this picture; the light is too yellow and so all the blues and purples look black.

After I took this picture, we checked these cards and found a couple of calculation errors, which we corrected. A correct set of cards is:

$$ \begin{array}{ccc} \text{abcde} & \text{bhlpt} & \text{dgmpr} \\ \text{afghi} & \text{bimqu} & \text{dhjou} \\ \text{ajklm} & \text{cfkpu} & \text{diknt} \\ \text{anopq} & \text{cgjqt} & \text{efmot} \\ \text{arstu} & \text{chmns} & \text{eglnu} \\ \text{bfjnr} & \text{cilor} & \text{ehkqr} \\ \text{bgkos} & \text{dflqs} & \text{eijps} \\ \end{array} $$

Fun facts about finite projective planes:

  • This construction always turns a finite field of order !!n!! into a finite projective plane of order !!n!!.

  • A finite field of order !!n!! exists exactly when !!n!! is a prime power, and then there is exactly one field of that order. So this construction gives us finite projective planes of orders !!2,3,4,5,7,8,9,11,13,16!!, but not of orders !!6,10,12,14,15!!. Do finite projective planes of those latter orders exist? It turns out there is no plane of order !!6!! or !!14!! (by the Bruck–Ryser theorem) or of order !!10!! (by an enormous computer search), and nobody knows about orders !!12!! and !!15!!.

  • Is this method the only way to construct a finite projective plane? Yes, when !!n<9!!. But there are four non-isomorphic projective planes of order !!9!!, and this only constructs one of them.

    What about for !!n≥11!!? Nobody knows.


[Other articles in category /math] permanent link

Wed, 18 Apr 2018

Lower mathematics solves an easy problem

[ Warning: this article is mathematically uninteresting. ]

I woke up in the middle of the night last night and while I was waiting to go back to sleep, I browsed math Stack Exchange. At four in the morning I am not at my best, but sometimes I can learn something and sometimes I can even contribute.

The question that grabbed my attention this time was Arithmetic sequence where every term is prime?. OP wants to know if the arithmetic sequence $$d\mapsto a+d b$$ contains composite elements for all fixed positive integers !!a,b!!.

Now of course the answer is yes, or else a counterexample would give us a quick and simple method for constructing prime numbers, and finding such a method has been an open problem for thousands of years. OP was certainly aware of this, but had not been able to find a simple proof. Their searching was confounded by more advanced matters relating to the Green-Tao theorem and the like, which, being more interesting, are much more widely discussed.

There are a couple of remarkable things about the answers that were given. First, even though the problem is easy, the first two answers posted were actually wrong, and another (quickly deleted) was so complicated that I couldn't tell if it was right or not.

One user immediately commented:

What happens if !!d=a!!?

which is very much to the point; when !!d=a!! then the element of the sequence is !!a+ab!! which is necessarily composite…

…unless !!a=1!!. So the comment does not quite take care of the whole question. A second user posted an answer with this same omission, and had to correct it later.

I might not have picked up on this case either, during the daytime. But at 4 AM I was not immediately certain that !!a+ab!! was composite and I had to think about it. I factored it to get !!a(1+b)!! and then I saw that if !!a=1!! or !!1+b=1!! then we lose. (!!1+b=1!! is impossible. !!a+ab!! might of course be composite even if !!a=1!!, but further argumentation is needed.)

So I did pick up on this, and gave a complete answer, of which the important part is:

Just take !!d=kb+k+a!! for any !!k\ge 1!!. Then the element is $$kb^2+(k+a)b+a = (kb+a)(b+1)$$ which is composite.

Okay, fine. But OP asked how I came up with that and if it was pure “insight”, so I thought I'd try to reconstruct how I got there at 4 AM. The problem is simple enough that I think I can remember most of how I got to the answer.

As I've mentioned before, I am not a pure insight kind of person. While better mathematicians are flitting swiftly from peak to peak, I plod along in the dark and gloomy valleys. I did not get !!d=kb+k+a!! in a brilliant flash of inspiration.

Instead, my thought process, as well as I can remember it, went like this:

  • “That doesn't work when !!a=1!!. But there must be composite numbers in sequences of the form !!d b+1!!. I wonder what they look like?”

  • “I guess I'll try an example. How about !!4d+1!!?”

  • “Hmm, it starts at 5, 5 is composite… No it isn't.”

  • “But it's increasing by 4 each time so it must hit each mod-5 residue class in some order.”

I should cut in at this point to add that my thinking was nowhere near this articulate or even verbal. The thing about the sequence hitting all the residue classes was more like a feeling in my body, like when I am recognizing a familiar place. When !!a!! and !!b!! are relatively prime, that means that when you are taking steps of size !!b!!, you hit all the !!a!!'s and don't skip any; that's what relatively prime is all about.

So maybe that counts as “insight”? Or “intuition about relative primality”? I think that description makes it sound much more impressive than it really is. I do not want a lot of credit for this. Maybe a better way to describe it is that I had been in this familiar place many times before, and I recognized it again.

Anyway I continued something like this:

  • “If it hits every residue class over and over in sequence it must hit residue class !!0!! infinitely often. How does that go? It hits !!5!!, so it hits !!5+4·5 = 25!! also.”

That was good enough for me; I did not even consider the next hit, !!45!!, perhaps because that number was too big for me to calculate at that moment.

I didn't use the phrase “residue class” either. That's just my verbal translation of my 4AM nonverbal thinking. At the time it was more like: there are some good things to hit and some bad ones, and the good ones are evenly spaced out, so if we hit each position in the even spacing-ness we must periodically hit some of the good things.

  • “Okay, then the first element of !!1+d b!! is !!1+b!!, and then every !!1+b!! steps after that it hits another multiple of !!1+b!!, so we want every !!1+b!! elements, so take !!d = 1 + k(1+b)!!.”

Then I posted the answer, saying that when !!a=1!! you take !!d=1+k(1+b)!! and the sequence element is !!1+b+kb+kb^2 = (1+b)(1+kb)!!.

Then I realized that I had the same feeling in my body even when !!a≠1!!, because it only depended on the way the residue classes repeated, and changing !!a!! doesn't affect that, it just slides everything left or right by a constant amount. So I went back to edit the !!1+k(1+b)!! to be !!a+k(1+b)!! instead.
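
For the record, the final formula is easy to check mechanically; here's a minimal sketch (the bounds are arbitrary):

    -- (Haskell)
    -- With d = a + k(1+b), verify the factorization
    --   a + d*b == (a + k*b) * (1 + b),
    -- whose factors both exceed 1 when a, b, k >= 1.
    check :: Integer -> Integer -> Integer -> Bool
    check a b k = a + d * b == (a + k * b) * (1 + b)
      where d = a + k * (1 + b)

    main :: IO ()
    main = print (and [ check a b k
                      | a <- [1 .. 20], b <- [1 .. 20], k <- [1 .. 20] ])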

I have no particular conclusion to draw about this.


[Other articles in category /math] permanent link

Mon, 16 Apr 2018

A familiar set with an unexpected order type

I dreamed this one up in high school and I recommend it as an exercise for kids at an appropriate level.

Consider the set of all Roman numerals

$$\{ \text{I}, \text{II}, \text{III}, \text{IV}, \text{V}, \ldots, \text{XIII}, \text{XIV}, \text{XV}, \text{XVI}, \ldots, \\ \text{XXXVIII}, \text{XXXIX}, \text{XL}, \text{XLI}, \ldots, \text{XLIX}, \text{L},\ldots,\\ \text{C}, \ldots , \text{D}, \ldots, \text{M}, \ldots, \text{MM}, \ldots, \text{MMM}, \ldots, \text{MMMM}, \ldots, \text{MMMMM}, \ldots \}$$

where we allow an arbitrarily large number of M's on the front, so that every number has a unique representation. For example the number 10,067 is represented by !!\text{MMMMMMMMMMLXVII}!!.

Now sort the list into alphabetical order.

It is easy to show that it begins with !!\text{C}, \text{CC}, \text{CCC}, \text{CCCI}, \ldots!! and ends !!\text{XXXVII}, \text{XXXVIII}!!. But it's still an infinite list! Instead of being infinite at one end or the other, or even both, like most infinite lists, it's infinite in the middle.
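
If you'd rather not show it by hand, a short program will confirm both ends; here's a minimal sketch, truncating the list at 4000, since a longer prefix of M's can't change either end:

    -- (Haskell)
    import Data.List (sort)

    toRoman :: Int -> String
    toRoman 0 = ""
    toRoman n = s ++ toRoman (n - v)
      where (v, s) = head [p | p@(v', _) <- table, v' <= n]
            table  = [ (1000,"M"), (900,"CM"), (500,"D"), (400,"CD")
                     , (100,"C"), (90,"XC"), (50,"L"), (40,"XL")
                     , (10,"X"), (9,"IX"), (5,"V"), (4,"IV"), (1,"I") ]

    main :: IO ()
    main = do
      let sorted = sort (map toRoman [1 .. 4000])
      print (take 4 sorted)             -- ["C","CC","CCC","CCCI"]
      print (take 2 (reverse sorted))   -- ["XXXVIII","XXXVII"]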

Of course once you have the idea it's easy to think of more examples (!!\left\{ \frac1n\mid n\in\Bbb Z, n\ne 0\right\}!! for instance) but I hadn't seen anything like this before and I was quite pleased.


[Other articles in category /math] permanent link

Sun, 15 Apr 2018

On the smallest natural number

The earliest known mathematics book printed in Europe is an untitled arithmetic text published in Treviso in 1478, originally written in Venetian dialect.

The Treviso Arithmetic states unequivocally:

Number is a multitude brought together or assembled from several units, and always from two at least, as in the case of 2, which is the first and the smallest number.

(original Venetian)

And a little later:

Of [the digits] the first figure, 1, is not called a number but the source of number.

(original Venetian)

(English translations are from David Eugene Smith, A Source Book in Mathematics (1959). A complete translation appears in Frank J. Swetz, Capitalism and Arithmetic: The New Math of the Fifteenth Century (1987).)

Complete text (in Venetian)

By the way, today is the 311th birthday of Leonhard Euler.


[Other articles in category /math] permanent link

Fri, 13 Apr 2018

Exciting homology group news!

As an undergraduate I wondered and wondered about how manifolds and things are classified in algebraic topology, but I couldn't find any way into the subject. All the presentations I found were too abstract and I never came out of it with any concrete idea of how you would actually calculate any specific fundamental groups. I knew that the fundamental group of the circle was !!\Bbb Z!! and the group of the torus was !!\Bbb Z^2!! and I understood basically why, but I didn't know how you would figure this out without geometric intuition.

This was fixed for me in the very last undergrad math class I took, at Columbia University with Johan Tysk. That was the lowest point of my adult life, but the algebraic topology was the one bright spot in it. I don't know what might have happened to me if I hadn't had that class to sustain my spirit. And I learned how to calculate homotopy groups!

(We used Professor Tysk's course notes, supplemented by William Massey's introduction to algebraic topology. I didn't buy a copy of Massey and I haven't read it all, but I think I can recommend it for this purpose. The parts I have read seemed clear and direct.)

Anyway there things stood for a long time. Over the next few decades I made a couple of superficial attempts to find out about homology groups, but again the presentations were too abstract. I had been told that the homology approach was preferred to the homotopy approach because the groups were easier to actually calculate. But none of the sources I found seemed to tell me how to actually calculate anything concrete.

Then a few days ago I was in the coffee shop working on a geometry problem involving an icosi-dodecahedron, and the woman next to me asked me what I was doing. Usually when someone asks me this in a coffee shop, they do not want to hear the answer, and I do not want to give it, because if I do their eyes will glaze over and then they will make some comment that I have heard before and do not want to hear again. But it transpired that this woman was a math postdoc at Penn, and an algebraic topologist, so I could launch into an explanation of what I was doing, comfortable in the knowledge that if I said something she didn't understand she would just stop me and ask a question. Yay, fun!

Her research is in “persistent homology”, which I had never heard of. So I looked that up and didn't get very far, also because I still didn't know anything about homology. (Also, as she says, the Wikipedia article is kinda crappy.) But I ran into her again a couple of days later and she explained the persistent part, and I know enough about what homology is that the explanation made sense.

Her research involves actually calculating actual homology groups of actual manifolds on an actual computer, so I was inspired to take another crack at understanding homology groups. I did a couple of web searches and when I searched for “betti number tutorial” I hit paydirt: these notes titled “persistent homology tutorial” by Xiaojin Zhu of the University of Wisconsin at Madison. They're only 37 slides long, and I could skip the first 15. Then slide 23 gives the magic key. Okay! I have not yet calculated any actual homology groups, so this post might be premature, but I expect I'll finish the slides in a couple of days and try my hand at the calculations and be more or less successful. And the instructions seem clear enough that I can imagine implementing a computer algorithm to calculate the homology groups for a big ugly complex, as this math postdoc does.
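
I can't resist trying the boundary-matrix idea on the smallest interesting example. This is only a sketch of my current understanding, using !!\Bbb Z_2!! coefficients (so computing ranks is easy and Smith normal form never comes up), applied to a hollow triangle, which is topologically a circle:

    -- (Haskell)
    -- Rank of a 0/1 matrix over Z/2Z, by forward elimination.
    rank2 :: [[Int]] -> Int
    rank2 []     = 0
    rank2 (x:xs)
      | all (== 0) x = rank2 xs
      | otherwise    = 1 + rank2 (map reduce xs)
      where j        = length (takeWhile (== 0) x)
            reduce y = if y !! j == 1
                       then zipWith (\a b -> (a + b) `mod` 2) x y
                       else y

    -- Boundary matrix of the hollow triangle: one row per edge, one
    -- column per vertex, with a 1 where the edge touches the vertex.
    d1 :: [[Int]]
    d1 = [[1,1,0], [1,0,1], [0,1,1]]

    main :: IO ()
    main = do
      let r1 = rank2 d1
      print (3 - r1)       -- beta_0 = #vertices - rank d1
      print (3 - r1 - 0)   -- beta_1 = (#edges - rank d1) - rank d2, and d2 = 0

That prints 1 and 1: one connected component and one loop, as a circle should have.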

I had heard before that the advantage of the homology approach over the homotopy approach is that the homologies are easier to actually calculate with, and now I see why. I could have programmed a computer to do homotopy group calculations, but the output would in general have been some quotient of a free group given by a group presentation, and this is basically useless as far as further computation goes. For example the question of whether two differently-presented groups are isomorphic is undecidable, and I think similar sorts of questions, such as whether the group is abelian, or whether it is infinite, are similarly undecidable.

Sometimes you get a nice group, but usually you don't. For example the homotopy group of the Klein bottle is the quotient of the free group on two generators under the smallest equivalence relation in which !!aba = b!!; that is:

$$\langle a, b\mid abab^{-1}\rangle$$

which is not anything I have seen in any other context. Even the question of whether two given group elements are equal is in general undecidable. So you get an answer, but then you can't actually do anything with it once you have it. (“You're in a balloon!”) The homology approach throws away a lot of information, enough to render the results comprehensible, but it also leaves enough to do something with.


[Other articles in category /math] permanent link

Mon, 19 Mar 2018

Funky coordinate systems

I had a fun idea this morning. As a kid I was really interested in polar coordinates and kind of disappointed that there didn't seem to be any other coordinate systems to tinker with. But this morning I realized there were a lot.

Let !!F(c)!! be some parametrized family of curves that partition the plane, or almost all of the plane, say except for a finite number of exceptions. If you have two such families !!F_1(c)!! and !!F_2(c)!!, and if each curve in !!F_1!! intersects each curve in !!F_2!! in exactly one point (again with maybe a few exceptions) then you have a coordinate system: almost every point !!P!! lies on !!F_1(a)!! and !!F_2(b)!! for some unique choice of !!\langle a, b\rangle!!, and these are its coordinates in the !!F_1–F_2!! system.

For example, when !!F_1(c)!! is the family of lines !!x=c!! and !!F_2(c)!! is the family of lines !!y=c!! then you get ordinary Cartesian coordinates, and when !!F_1(c)!! is the family of circles !!x^2+y^2=c!! and !!F_2(c)!! is the family !!y=cx!! (plus also !!x=0!!) you get standard polar coordinates, which don't quite work because the origin is in every member of !!F_2!!, but it's the only weird exception.

But there are many other families that work. To take a particularly simple example you can pick some constant !!k!! and then take

$$\begin{align} F_1(c): && x & =c \\ F_2(c): && y & =kx+c. \end{align} $$

This is like Cartesian coordinates except the axes are skewed. I did know about this when I was a kid but I considered it not sufficiently interesting.

For a more interesting example, try

$$\begin{align} F_1(c): && x^2-y^2 & =c \\ F_2(c): && xy & =c \end{align} $$

which looks like this:

Two families of hyperbolas, as in the displayed equation
immediately preceding. The hyperbolas !!x^2-y^2 = c!! (in blue) and !!xy=c!! (in red)

I've seen that illustration before but I don't think I thought of using it as a coordinate system. Well, okay, every pair of hyperbolas intersects in two points, not one. So it's a parametrization of the boundary of real projective space or something, fine. Still fun!
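
Going from a point to its coordinates in this system is trivial, and makes the two-points-per-coordinate-pair issue concrete; a quick sketch:

    -- (Haskell)
    -- The point (x, y) lies on x^2 - y^2 = c1 and on xy = c2, so its
    -- coordinates in the hyperbolic system are just (c1, c2).
    coords :: (Double, Double) -> (Double, Double)
    coords (x, y) = (x*x - y*y, x*y)

    main :: IO ()
    main = do
      print (coords (3, 1))     -- (8.0,3.0)
      print (coords (-3, -1))   -- also (8.0,3.0); antipodal points collide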

In the very nice cases (such as the hyperbolas) each pair of curves is orthogonal at their point of intersection, but that's not a requirement, as with the skew Cartesian system. I'm pretty sure that if you have one family !!F!! you can construct a dual family !!F'!! that is orthogonal to it everywhere by letting !!F'!! be the paths of gradient descent or something. I'm not sure what the orthogonality is going to be important for but I bet it's sometimes useful.

You can also mix and match families, so for example take:

$$\begin{align} F_1(c): && x & =c \\ F_2(c): && xy & =c \end{align} $$

Some examples work better than others. The !!xy=c!! hyperbolas are kind of a mess when !!c=0!!, and they don't go together with the !!x^2+y^2=c!! circles in the right way at all: each circle intersects each hyperbola in four points. But it occurs to me that as with the projective plane thingy, we don't have to let that be a problem. Take !!S!! to be the quotient space of the plane where two points are identified if their !!F_1–F_2!!-coordinates are the same and then investigate !!S!!. Or maybe go more directly and take !!S = F_1 \times F_2!! (literally the Cartesian product), and then topologize !!S!! in some reasonably natural way. Maybe just give it the product topology. I dunno, I have to think about it.

(I was a bit worried about how to draw the hyperbola picture, but I tried Google Image search for “families of orthogonal hyperbolas”, and got just what I needed. Truly, we live in an age of marvels!)


[Other articles in category /math] permanent link

Sun, 11 Mar 2018

Quick and dirty prime counting

I've been thinking for a while that I probably ought to get around to memorizing all the prime numbers under 1,000, so that I don't have to wonder about things like 893 all the time, and last night in the car I started thinking about it again, and wondered how hard it would be. There are 25 primes under 100, so presumably fewer than 250 under 1,000, which is not excessive. But I wondered if I could get a better estimate.

The prime number theorem tells us that the number of primes less than !!n!! is !!O(\frac n{\log n})!! and I think the logarithm is a natural one, but maybe there is some constant factor in there or something, I forget and I did not want to think about it too hard because I was driving. Anyway I cannot do natural logarithms in my head.

But we don't need to do any actual logarithms. Let's estimate the fraction of primes up to !!n!! as !!\frac 1{c\log n}!! where !!c!! is unknown and the base of the logarithm is then unimportant. The denominator scales linearly with the power of !!n!!, so the difference between the denominators for !!n=10!! and !!n=100!! is the same as the difference between the denominators for !!n=100!! and !!n=1000!!.

There are 4 primes less than 10, or !!\frac25!!, so the denominator is 2.5. And there are 25 primes less than 100, so the denominator here is 4. The difference is 1.5, so the denominator for !!n=1000!! ought to be around 5.5, and that means that about !!\frac2{11}!! of the numbers up to 1000 are prime. This yields an estimate of 182.

I found out later that the correct number is 186, so I felt pretty good about that.

[ Addendum: The correct number is 168, not 186, so I wasn't as close as I thought. ]
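
For anyone who wants to check both the count and the estimate, here is a minimal sieve-based sketch:

    -- (Haskell)
    primes :: [Int]
    primes = sieve [2 ..]
      where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    main :: IO ()
    main = do
      print (length (takeWhile (< 1000) primes))   -- 168
      print (2 / 11 * 1000 :: Double)              -- the estimate above, ~181.8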


[Other articles in category /math] permanent link

Wed, 07 Feb 2018

The many faces of the Petersen graph

(Actually the Petersen graph cannot really be said to have faces, as it is nonplanar. HA! HA! I MAKE JOKE!​!1!)

This article was going to be about how GraphViz renders the Petersen graph, but instead it turned out to be about how GraphViz doesn't render the Petersen graph. The GraphViz stuff will be along later.

Here we have the Petersen graph, which, according to Donald Knuth, “serves as a counterexample to many optimistic predictions about what might be true for graphs in general.” It is not that the Petersen graph is stubborn! But it marches to the beat of a different drummer. If you have not met it before, prepare to be delighted.

The Petersen
graph has two sets of five vertices each.  Each set is connected into
a pentagonal ring.  There are five more edges between vertices in
opposite rings, but instead of being connected 0–0 1–1 2–2 3–3 4–4,
they are connected 0–0 1–2 2–4 3–1 4–3.

This is the basic structure: a blue 5-cycle, and a red 5-cycle. Corresponding vertices in the two cycles are connected by five purple edges. But there is a twist! Notice that the vertices in the red cycle are connected in the order 1–3–5–2–4.

There are different ways to lay out the Petersen graph that showcase its many interesting properties. For example, the standard presentation, above, demonstrates that the Petersen graph is nonplanar, since it obviously contracts to !!K_5!!. The presentation below obscures this, but it is good for seeing that the graph has diameter only 2:

Wait, what? Where did the pentagons go?

Try this instead:

The Petersen
graph laid out as a tree, with a root attached to three level-1 nodes,
each attached to 2 level-2 nodes.  The six level-2 nodes are then
connected into a ring so that each level-2 node is at distance 1 or
distance 2 from each other level-2 node.

Again the red vertices are connected in the order 1–3–5–2–4.

Okay, that is indeed the Petersen graph, but how does it help us see that the graph has diameter 2? Color the nodes by how far down they are from the root:

  • Obviously, the root node (black) has distance at most 2 to every other node, because the tree has only depth 2.

  • Each of the three second-level nodes (red) is distance 2 from the other two, via a path through the root.

  • The six third-level nodes (blue) are linked in a 6-cycle (dotted lines), so that each third-level node is at most two steps away along the cycle from the others, except for the one furthest away, but that is its sibling in the tree, and it has a path of length 2 through their common parent.

  • And since each third-level node (say, the one with the red ring) is connected by a dotted edge (orange) to cousins in both of the other branches of the tree, it's only distance 2 from both of its red uncle nodes.
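
If you distrust this case analysis, a brute-force check is easy. Here's a minimal sketch, with the vertices of the standard presentation numbered 0–4 around the outer pentagon and 5–9 around the inner ring:

    -- (Haskell)
    -- Outer pentagon, inner 1-3-5-2-4 ring, and the five spokes.
    edges :: [(Int, Int)]
    edges =  [(i, (i + 1) `mod` 5) | i <- [0 .. 4]]
          ++ [(5 + i, 5 + (i + 2) `mod` 5) | i <- [0 .. 4]]
          ++ [(i, i + 5) | i <- [0 .. 4]]

    neighbors :: Int -> [Int]
    neighbors v = [b | (a, b) <- edges, a == v] ++ [a | (a, b) <- edges, b == v]

    -- Distance by breadth-first expansion; fine for ten vertices.
    dist :: Int -> Int -> Int
    dist u v = length (takeWhile (v `notElem`) (iterate grow [u]))
      where grow vs = vs ++ concatMap neighbors vs

    main :: IO ()
    main = print (maximum [dist u v | u <- [0 .. 9], v <- [0 .. 9]])  -- 2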

Looking at the pentagonal version, you would not suspect the Petersen graph of also having a sixfold symmetry, but it does. We'll get there in two steps. Again, here's a version where it's not so easy to see that it's actually the Petersen graph, but whatever it is, it is at least clear that it has an automorphism of order six (give it a one-sixth turn):

The Petersen graph drawn with an obvious sixfold symmetry; three of the vertices are drawn superimposed at a single spot.

The superimposed spot represents three vertices, one in each color. In the picture they are drawn on top of one another, but in the actual graph, no pair of the three is connected by an edge. Instead, each of the three is connected not to the others but to a tenth vertex that I omitted from the diagram entirely.

Let's pull apart the three vertices and reveal the hidden tenth vertex and its three edges:

The same drawing, with the three superimposed vertices pulled apart, and the hidden tenth vertex and its three edges restored.

Here is the same drawing, recolored to match the tree diagram from before; the outer hexagon is just the 6-cycle formed by the six blue leaf nodes:

The same drawing, recolored to match the tree diagram; the outer hexagon is the 6-cycle of blue leaf vertices.

But maybe it's easier to see if we look for red and blue pentagons. There are a couple of ways to do that:

Two versions of the drawing: one bringing out the red pentagon, the other the blue pentagon.

As always, the red vertices are connected in the order 1–3–5–2–4.

Finally, here's a presentation you don't often see. It demonstrates that the Petersen graph also has fourfold symmetry:

The Petersen graph drawn with fourfold symmetry; two of the vertices are stretched out into dumbbell shapes.

Again, the two dumbbell shapes each represent a single vertex, stretched out. The diagram only shows 14 of the 15 edges; the fifteenth connects the two dumbbells.

The pentagons are deeply hidden here. Can you find them? (Spoiler)

Even though this article was supposed to be about GraphViz, I found it impossible to get it to render the diagrams I wanted it to, and I had to fall back on Inkscape. Fortunately Inkscape is a ton of fun.


[Other articles in category /math] permanent link

Mon, 29 Jan 2018

The rubber duck explains coherence spaces

I've spent a chunk of the past week, at least, trying to understand the idea of a coherence space (or coherent space). This appears in Jean-Yves Girard's Proofs and Types, and it's a model of a data type. For example, the type of integers and the type of booleans can be modeled as coherence spaces.

The definition is one of those simple but bafflingly abstract ones that you often meet in mathematics: There is a set !!\lvert \mathcal{A}\rvert!! of tokens, and the points of the coherence space !!\mathcal{A}!! (“cliques”) are sets of tokens. The cliques are required to satisfy two properties:

  1. If !!a!! is a clique, and !!a'\subset a!!, then !!a'!! is also a clique.
  2. Suppose !!\mathcal M!! is some family of cliques such that !!a\cup a'!! is a clique for each !!a, a'\in \mathcal M!!. Then !!\bigcup {\mathcal M}!! is also a clique.

To beginning math students it often seems like these sorts of definitions are generated at random. Okay, today we're going to study Eulerian preorders with no maximum element that are closed under finite unions; tomorrow we're going to study semispatulated coalgebras with countably infinite signatures and the weak Cosell property. Whatever, man.

I have a long article about this in progress, but I'll summarize: they are never generated at random. The properties are carefully chosen because we have something in mind that we are trying to model, and until you understand what that thing is, and why someone thinks those properties are the important ones, you are not going to get anywhere.

So when I see something like this I must stop immediately and ask ‘wat’. I can try to come up with the explanation myself, or I can read on hoping for an explanation, or I can do some of each, but I am not going to progress very far until I understand what it is about. And I'm not sure anyone short of Alexander Grothendieck would have any more success trying to move on with the definition itself and nothing else.

Girard explains shortly after:

The aim is to interpret a type by a coherence space !!\mathcal{A}!!, and a term of this type as a point [clique] of !!\mathcal{A}!!, infinite in general…

Okay, fine. I understand the point of the project, although not why the definition is what it is. I know a fair amount about types. And Girard has given two examples: booleans and integers. But these examples are unusually simple, because none of the cliques has more than one element, and so the examples are not as illuminating as they might be.

Some of the ways I tried to press onward were:

  1. Read ahead and see if there is more explanation. I tried this but I still wasn't getting it. The next section seemed clear: the cliques define a “coherence” relation on the tokens, from which the cliques can be recovered. Consider a graph, where the vertices are tokens and there is an edge !!a—b!! exactly when !!\{a, b\}!! is a clique; we say that !!a!! and !!b!! are coherent. Then the cliques of the coherence space are exactly the cliques of the graph; hence the name. The graph is called the web of the space, and from the web one can recover the original space.

    But after that part came stable functions, which I couldn't figure out, and I got stuck again.

  2. Read ahead and see if there is a more complicated specific example. There wasn't.

  3. Read ahead and see if any of the derived concepts are familiar, and if so then work backward. For instance, if I had been able to recognize that I already knew what stable functions were, I might have been able to leverage that into an understanding of what was going on with the coherence spaces. But for me they were just another problem of the same sort: what is a stable function supposed to be modeling?

  4. Read someone else's explanation instead. I tried several without much success. They all seemed to be written for someone who already had a clue what was going on. (That is a large part of the reason I have written up this long and clueless explanation.)

  5. Try to construct some examples and see if they make sense in the context of what comes later. For example, I know what the coherence space of booleans looks like because Girard showed me. Can I figure out the structure of the coherence space for the type of “wrapped booleans”?

    -- (Haskell)
    data WrappedBoolean = W Bool
    

    Can I figure it out for the type of pairs of booleans?

    -- (Haskell)
    type BooleanPair = (Bool, Bool)
    

None of this was working. I had several different ideas about what the coherence spaces might look like for other types, but none of them seemed to fit with what Girard was doing. I couldn't come up with any consistent story.

So I prepared to ask on StackExchange, and I spent about an hour writing up my question, explaining all the things I had tried and what the problems were with each one. And as I drew near to the end of this, the clouds parted! I never had to post the question. I was in the middle of composing this paragraph:

In section 8.4 Girard defines a direct product of coherence spaces, but it doesn't look like the direct product I need to get a product type; it looks more like a disjoint union type. If the coherence space for Pairbool is the square of the coherence space for !!{{\mathcal B}ool}!!, how? It has 4 2-cliques, but if those are the total elements of !!{{\mathcal B}ool}^2!!, then what do the 1-cliques mean?

I decided I hadn't made enough of an effort to understand the direct product. So even though I couldn't see how it could possibly give me anything like what I wanted, I followed its definition for !!{{\mathcal B}ool}^2!! — and the light came on.

Here's the puzzling coproduct-like definition of the product of two coherence spaces, from page 61:

If !!{\mathcal A}!! and !!{\mathcal B}!! are two coherence spaces, we define !!{\mathcal A}\&{\mathcal B}!! by:

!!|{\mathcal A}\&{\mathcal B}| = |{\mathcal A}| + |{\mathcal B}| = \{1\}×|{\mathcal A}| \cup \{2\}×|{\mathcal B}|!!

That is, the tokens in the product space are literally the disjoint union of the tokens in the component spaces.

And the edges in the product's web are whatever they were in !!{\mathcal A}!!'s web (except lifted from !!|{\mathcal A}|!! to !!\{1\}×|{\mathcal A}|!!), whatever they were in !!{\mathcal B}!!'s web (similarly), and also there is an edge between every token !!\langle1, a\rangle!! and each token !!\langle2, b\rangle!!. For !!{{\mathcal B}ool}^2!! the web looks like this:

There is no edge between !!\langle 1, \text{True}\rangle!! and !!\langle 1, \text{False}\rangle!! because in !!{{\mathcal B}ool}!! there is no edge between !!\text{True}!! and !!\text{False}!!.

This graph has nine cliques. Here they are ordered by set inclusion:

(In this second diagram I have abbreviated the pair !!\langle1, \text{True}\rangle!! to just !!1T!!. The top nodes in the diagram are each labeled with a set of two ordered pairs.)
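
Counting the cliques mechanically makes a nice sanity check. A minimal sketch, where a token is a pair (component, boolean) and two distinct tokens are coherent exactly when their components differ:

    -- (Haskell)
    import Data.List (subsequences)

    tokens :: [(Int, Bool)]
    tokens = [(i, b) | i <- [1, 2], b <- [False, True]]

    -- No edge within a component (True and False are incoherent in
    -- Bool); an edge between every pair of tokens from different
    -- components.
    coherent :: (Int, Bool) -> (Int, Bool) -> Bool
    coherent (i, _) (j, _) = i /= j

    isClique :: [(Int, Bool)] -> Bool
    isClique ts = and [coherent a b | a <- ts, b <- ts, a /= b]

    main :: IO ()
    main = print (length (filter isClique (subsequences tokens)))  -- 9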

What does this mean? The ordered pairs of booleans are being represented by functions. The boolean pair !!\langle x, y\rangle!! is represented by the function that takes as its argument a number, either 1 or 2, and then returns the corresponding component of the pair: the first component !!x!! if the argument was 1, and the second component !!y!! if the argument was 2.

The nodes in the bottom diagram represent functions. The top row consists of fully-defined functions. For example, !!\{1F, 2T\}!! is the function with !!f(1) = \text{False}!! and !!f(2) = \text{True}!!, representing the boolean pair !!\langle\text{False}, \text{True}\rangle!!. Similarly if we were looking at a space of infinite lists, we could consider it a function from !!\Bbb N!! to whatever the type of the list's elements was. Then the top row of nodes in the coherence space would be infinite sets of pairs of the form !!\langle n, \text{(list element)}\rangle!!.

The lower nodes are still functions, but they are functions about which we have only incomplete information. The node !!\{2T\}!! is a function for which !!f(2) = \text{True}!!. But we don't yet know what !!f(1)!! is because we haven't yet tried to compute it. And the bottommost node !!\varnothing!! is a function where we don't know anything at all — yet. As we test the function on various arguments, we move up the graph, always following the edges. The lower nodes are approximations to the upper ones, made on the basis of incomplete information about what is higher up.

Now the importance of finite approximants on page 56 becomes clearer. !!{{\mathcal B}ool}^2!! is already finite. But in general the space is infinite because the type is functions on an infinite domain, or infinite lists, or something of that sort. In such a space we can't get all the way to the top row of nodes because to do that we would have to call the function on all its possible arguments, or examine every element of the list, which is impossible. Girard says “Above all, there are enough finite approximants to a.” I didn't understand what he meant by “enough”. But what he means is that each clique !!a!! is the union of its finite approximants: each bit of information in the function !!a!! is obtainable from some finite approximation of !!a!!. The “stable functions” of section 8.3 start to become less nebulous also.

I had been thinking that the !!\varnothing!! node was somehow like the !!\bot!! element in a Scott domain, and then I struggled to identify anything like !!\langle \text{False}, \bot\rangle!!. It looks at first like you can do it somehow, because there are the right number of nodes at the middle level. Trouble arises in other coherence spaces.
For the WrappedBoolean type, for example, the type has four elements: !! W\ \text{True}, W\ \text{False}, W\ \bot,!! and !!\bot!!. I think the coherence space for WrappedBoolean is just like the one for !!{{\mathcal B}ool}!!:

Presented with a value from WrappedBoolean, you don't initially know what it is. Then you examine it, and you know whether it is !!W\ \text{True}!! or !!W\ \text{False}!!. You are now done.

I think there isn't anything like !!\bot!! or !!W\ \bot!! in the coherence space. Or maybe they are there but sharing the !!\varnothing!! node. But I think it more likely that partial objects will appear in some other way.

Whew! Now I can move along.

(If you don't understand why “rubber duck”, Wikipedia explains:

Many programmers have had the experience of explaining a problem to someone else, possibly even to someone who knows nothing about programming, and then hitting upon the solution in the process of explaining the problem.

[“Rubber duck”] is a reference to a story in the book The Pragmatic Programmer in which a programmer would carry around a rubber duck and debug their code by forcing themselves to explain it, line-by-line, to the duck.

I spent a week on this but didn't figure it out until I tried formulating my question for StackExchange. The draft question, never completed, is here if for some reason you want to see what it looked like.)


[Other articles in category /math/logic] permanent link

Mon, 22 Jan 2018

The gods of Stackexchange karma are fickle

A few years ago an se.math user asked why their undistinguished answer to some humdrum question had gotten so many upvotes. I replied:

It's the gods of Stackexchange Karma evening your score for that really clever answer you posted two weeks ago that still has 0 upvotes.

If you're going to do Stack Exchange, I think it's important not to stress out about the whys of the votes, and particularly important not to take them personally. The karma gods do not always show the most refined taste. As Brandon Tartikoff once said, “All hits are flukes”.

Today I got an unusually flukey hit. But first, here are some nice examples in the opposite direction, posts that I put a lot of trouble and effort into, which were clear and useful, helpful to the querent, and which received no upvotes at all.

Here the querent asked, suppose I have several nonstandard dice with various labelings on their faces. Player A rolls some of the dice, and Player B rolls a different set of dice. How do I calculate things like “Player A has an X% chance of rolling a higher total”?

Mathematicians are not really the right people to ask this to, because many of them will reply obtusely, informing you that it depends on how many dice are rolled and on how their faces are actually labeled, and that the question did not specify these, but that if it had, the problem would be trivial. (I thought there were comments to that effect on this question, but if there were they have been deleted. In any case nobody else answered.) But this person was writing a computer game and wanted to understand how to implement a computer algorithm for doing the calculation. There is a lot one can do to help this person. I posted an answer that I thought was very nice. The querent liked it, but it got zero upvotes. This happens quite often. Which is fine, because the fun of doing it and the satisfaction of helping someone are reward enough.

Like that one, many of these questions are ignored because they aren't mathematically interesting. Or sometimes the mathematics is simple but the computer implementation is not.

Sometimes I do them just because I want to know the particular answer. (How much of an advantage does Player A get from being allowed to add 1 to their total?)

Sometimes there's an interesting pedagogical problem, such as: how do I give a hint that will point in the right direction without giving away the whole secret? Or: how do I wade through this person's confused explanation and understand what they are really asking, or what they are really confused about? That last person was given a proof that glossed over six or seven steps in the reasoning, focusing instead on the technically interesting induction proof in the middle. The six or seven steps are straightforward, if you already have enough practice with logical reasoning about quantified statements. But this querent didn't follow the reasoning that led up to the induction, so they didn't understand why the induction was useful or what it was for.

Some people are just confused about what the question is. That person has a complicated-seeming homework exercise about the behavior of the logistic map, and needs help interpreting it. The answer didn't require any research effort, and it's mathematically uninteresting, but it did require attention from someone with a better idea of what was going on. The querent was happy, I was happy, and nobody else noticed.

I never take the voting personally, because the gods of Stackexchange karma are so fickle, and today I got a reminder of that. Today's runaway hit was for a complete triviality that I knocked off in two minutes. It currently has 94 upvotes and seems likely to get a gold medal. (That gold medal, plus $4.25, will get me a free latte at Starbuck's!)

The question asks how to find a prime factor of 7,999,973 without using a calculator.

It's one of those easy-if-you-happen-to-know-the-trick things, and I just happened to get there before one of the (many) other people who happens to know the trick.
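
(The trick, for anyone who wants the spoiler: !!7{,}999{,}973 = 8{,}000{,}000 - 27!! is a difference of two cubes, so $$7999973 = 200^3 - 3^3 = (200-3)(200^2 + 200\cdot 3 + 3^2) = 197\cdot 40609,$$ and !!197!! is a prime factor.)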

As we used to say in the system administration biz, some days you're an idiot if you can't explain how to do real-time robot arm control in Unix, other days you're a genius if you fix their terminal by plugging it in.

[ Addendum 20180123: I got the gold medal. Woo-hoo, free latte! ]


[Other articles in category /math] permanent link

Fri, 22 Dec 2017

Orthogonal polygons

A couple of years ago I wrote here about some interesting projects I had not finished. One of these was to enumerate and draw orthogonal polygons.

An orthogonal polygon is simply one whose angles are all right angles. All rectangles are orthogonal polygons, but there are many other types. For example, here are examples of orthogonal decagons:

orthogonal decagons

If you ignore the lengths of the edges, and pay attention only to the direction that the corners turn, the orthogonal polygons fall into types. The rectangle is the only type with four sides. There is also only one type with six sides; it is an L-shaped hexagon. There are four types with eight sides, and the illustration shows the eight types with ten sides.

Contributing to OEIS was a life goal of mine and I was thrilled when I was able to contribute the sequence of the number of types of orthogonal !!2n!!-gons.

Enumerating the types is not hard. For !!2n!!-gons, there is one type for each “bracelet” of !!n-2!! nonnegative integers whose sum is !!n+2!!.[1]

In the illustration above, !!n=5!! and each type is annotated with its !!5-2=3!! numbers whose sum is !!n+2=7!!. But the number of types increases rapidly with the number of sides, and it soon becomes infeasible to draw them by hand as I did above. I had wanted to write a computer program that would take a description of a type (the sequence) and render a drawing of one of the polygons of that type.
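
The counting part really is easy; here's a minimal sketch that enumerates the bracelets directly, representing each bracelet by the lexicographically least of its rotations and reflections:

    -- (Haskell)
    import Data.List (nub)

    -- All rotations of a sequence and of its reversal.
    variants :: [Int] -> [[Int]]
    variants xs = rots xs ++ rots (reverse xs)
      where rots ys = [drop k ys ++ take k ys | k <- [0 .. length ys - 1]]

    canon :: [Int] -> [Int]
    canon = minimum . variants

    -- Compositions of s into k nonnegative parts.
    comps :: Int -> Int -> [[Int]]
    comps 1 s = [[s]]
    comps k s = [x : xs | x <- [0 .. s], xs <- comps (k - 1) (s - x)]

    -- Number of types of orthogonal 2n-gons, for n >= 3.
    types :: Int -> Int
    types n = length (nub (map canon (comps (n - 2) (n + 2))))

    main :: IO ()
    main = print (map types [3, 4, 5])  -- [1,4,8]: hexagons, octagons, decagons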

The tricky part is how to keep the edges from crossing, which is not allowed. I had ideas for how to do this, but it seemed troublesome, and also it seemed likely to produce ugly, lopsided examples, so I did not implement it. And eventually I forgot about the problem.

But Brent Yorgey did not forget, and he had a completely different idea. He wrote a program to convert a type description to a set of constraints on the !!x!! and !!y!! coordinates of the vertices, and fed the constraints to an SMT solver, which is a system for finding solutions to general sets of constraints. The outcome is as handsome as I could have hoped. Here is M. Yorgey's program's version of the hand-drawn diagram above:

M. Yorgey rendered beautiful pictures of all types of orthogonal polygons up to 12 sides. Check it out on his blog.

[1] “Bracelet” is combinatorist jargon for a sequence of things where the ends are joined together: you can start in the middle, run off one end and come back to the other end, and that is considered the same bracelet. So for example ABCD and BCDA and CDAB are all the same bracelet. And also, it doesn't matter which direction you go, so that DCBA, ADCB, and BADC are also the same bracelet again. Every bracelet made from A, B, C, and D is equivalent to exactly one of ABCD, ACDB, or ADBC.

[ Addendum 20180202: M. Yorgey has more to say about it on his blog. ]


[Other articles in category /math] permanent link

Fri, 15 Dec 2017

Wasteful and frugal proofs in Ramsey theory

This math.se question asks how to show that, among any 11 integers, one can find a subset of exactly six that add up to a multiple of 6. Let's call this “Ebrahimi’s theorem”.

This was the last thing I read before I put away my phone and closed my eyes for the night, and it was a race to see if I would find an answer before I fell asleep. Sleep won the race this time. But the answer is not too hard.

  1. First, observe that among any five numbers there are three that sum to a multiple of 3: Consider the remainders of the five numbers upon division by 3. There are three possible remainders. If all three remainders are represented, then the remainders are !!\{0,1,2\}!! and the sum of their representatives is a multiple of 3. Otherwise there is some remainder with three representatives, and the sum of these three is a multiple of 3.

  2. Now take the 11 given numbers. Find a group of three whose sum is a multiple of 3 and set them aside. From the remaining 8 numbers, do this a second time. From the remaining 5 numbers, do it a third time.

  3. We now have three groups of three numbers that each sum to a multiple of 3. Two of these sums must have the same parity. The six numbers in those two groups have an even sum that is a multiple of 3, and we win.

Here is a randomly-generated example:

$$3\quad 17\quad 35\quad 42\quad 44\quad 58\quad 60\quad 69\quad 92\quad 97\quad 97$$

Looking at the first 5 numbers !!3\ 17\ 35\ 42\ 44!! we see that on division by 3 these have remainders !!0\ 2\ 2\ 0\ 2!!. The remainder !!2!! is there three times, so we choose those three numbers !!\langle17\ 35\ 44\rangle!!, whose sum is a multiple of 3, and set them aside.

Now we take the leftover !!3!! and !!42!! and supplement them with three more unused numbers !!58\ 60\ 69!!. The remainders are !!0\ 0\ 1\ 0\ 0!! so we take !!\langle3\ 42\ 60\rangle!! and set them aside as a second group.

Then we take the five remaining unused numbers !!58\ 69\ 92\ 97\ 97!!. The remainders are !!1\ 0\ 2\ 1\ 1!!. The first three !!\langle 58\ 69\ 92\rangle!! have all different remainders, so let's use those as our third group.

The three groups are now !! \langle17\ 35\ 44\rangle, \langle3\ 42\ 60\rangle, \langle58\ 69\ 92\rangle!!. The first one has an even sum and the second has an odd sum. The third group has an odd sum, which matches the second group, so we choose the second and third groups, and that is our answer:

$$3\qquad 42\qquad 60\qquad 58 \qquad 69 \qquad 92$$

The sum of these is !!324 = 6\cdot 54!!.
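
Since only the residues mod 6 matter, a computer can also verify Ebrahimi’s theorem exhaustively, by checking every multiset of 11 residues. A minimal sketch:

    -- (Haskell)
    -- All k-element subsets of a list.
    choose :: Int -> [a] -> [[a]]
    choose 0 _      = [[]]
    choose _ []     = []
    choose k (x:xs) = map (x :) (choose (k - 1) xs) ++ choose k xs

    -- All k-element multisets drawn from a list.
    multisets :: Int -> [a] -> [[a]]
    multisets 0 _      = [[]]
    multisets _ []     = []
    multisets k (x:xs) = [x : m | m <- multisets (k - 1) (x:xs)]
                      ++ multisets k xs

    hasGoodSix :: [Int] -> Bool
    hasGoodSix xs = any (\s -> sum s `mod` 6 == 0) (choose 6 xs)

    main :: IO ()
    main = print (all hasGoodSix (multisets 11 [0 .. 5]))  -- True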

This proves that 11 input numbers are sufficient to produce one output set of 6 whose sum is a multiple of 6. Let's write !!E(n, k)!! to mean that !!n!! inputs are enough to produce !!k!! outputs. That is, !!E(n, k)!! means “any set of !!n!! numbers contains !!k!! distinct 6-element subsets whose sum is a multiple of 6.” Ebrahimi’s theorem, which we have just proved, states that !!E(11, 1)!! is true, and obviously it also proves !!E(n, 1)!! for all larger !!n!!.

I would like to consider the following questions:

  1. Does this proof suffice to show that !!E(10, 1)!! is false?
  2. Does this proof suffice to show that !!E(11, 2)!! is false?

I am specifically not asking whether !!E(10, 1)!! or !!E(11, 2)!! are actually false. There are easy counterexamples that can be found without reference to the proof above. What I want to know is if the proof, as given, contains nontrivial information about these questions.

The reason I think this is interesting is that I think, upon more careful examination, that I will find that the proof above does prove at least one of these, perhaps with a very small bit of additional reasoning. But there are many similar proofs that do not work this way. Here is a famous example. Let !!W(n, k)!! be shorthand for the following claim:

Let the integers from 1 to !!n!! be partitioned into two sets. Then one of the two sets contains !!k!! distinct subsets of three elements of the form !!\{a, a+d, a+2d\}!! for integers !!a, d!!.

Then:

Van der Waerden's theorem: !!W(325, 1)!! is true.

!!W()!!, like !!E()!!, is monotonic: van der Waerden's theorem trivially implies !!W(n, 1)!! for all !!n!! larger than 325. Does it also imply that !!W(n, 1)!! is false for smaller !!n!!? No, not at all; this is actually untrue. Does it also imply that !!W(325, k)!! is false for !!k>1!!? No, this is false also.

Van der Waerden's theorem takes 325 inputs (the integers) and among them finds one output (the desired set of three). But this is extravagantly wasteful. A better argument shows that only 9 inputs were required for the same output, and once we know this it is trivial that 325 inputs will always produce at least 36 outputs, and probably a great many more.

Proofs of theorems in Ramsey theory are noted for being extravagant in exactly this way. But the proof of Ebrahimi's theorem is different. It is not only frugal, it is optimally so. It uses no more inputs than are absolutely necessary.

What is different about these cases? What is the source of the frugality of the proof of Ebrahimi’s theorem? Is there a way that we can see from examination of the proof that it will be optimally frugal?

Ebrahimi’s theorem shows !!E(11, 1)!!. Suppose instead we want to show !!E(n, 2)!! for some !!n!!. From Ebrahimi’s theorem itself we immediately get !!E(22, 2)!! and indeed !!E(17, 2)!!. Is this the best we can do? (That is, is !!E(16, 2)!! false?) I bet it isn't. If it isn't, what went wrong? Or rather, what went right in the !!k=1!! case that stopped working when !!k>1!!?

I don't know.


[Other articles in category /math] permanent link

Wed, 22 Nov 2017

Mathematical pettifoggery and pathological examples

Last week I said:

Mathematicians do tend to be the kind of people who quibble and pettifog over the tiniest details. This is because in mathematics, quibbling and pettifogging does work.

This example is technical, but I think I can explain it in a way that will make sense even for people who have no idea what the question is about. Don't worry if you don't understand the next paragraph.

In this math SE question, a user asks for an example of a connected topological space !!\langle X, \tau\rangle!! where there is a strictly finer topology !!\tau'!! for which !!\langle X, \tau'\rangle!! is disconnected. This is a very easy problem if you go about it the right way, and the right way follows a very typical pattern which is useful in many situations.

The pattern is “TURN IT UP TO 11!​!” In this case:

  1. Being disconnected means you can find some things with a certain property.
  2. If !!\tau'!! is finer than !!\tau!!, that means it has all the same things, plus even more things of the same type.
  3. If you could find those things in !!\tau!!, you can still find them in !!\tau'!! because !!\tau'!! has everything that !!\tau!! does.
  4. So although perhaps making !!\tau!! finer could turn a connected space into a disconnected one, by adding the things you needed, it definitely can't turn a disconnected space into a connected one, because the things will still be there.
  5. So a way to look for a connected space that becomes disconnected when !!\tau!! becomes finer is:

    Start with some connected space. Then make !!\tau!! as fine as you possibly can and see if that is enough.

  6. If that works, you win. If not, you can look at the reason it didn't work, and maybe either fix up the space you started with, or else use that as the starting point in a proof that the thing you're looking for doesn't exist.

I emphasized the important point here. It is: Moving toward finer !!\tau!! can't hurt the situation and might help, so the first thing to try is to turn the fineness knob all the way up and see if that is enough to get what you want. Many situations in mathematics call for subtlety and delicate reasoning, but this is not one of those.

The technique here works perfectly. There is a topology !!\tau_d!! of maximum possible fineness, called the “discrete” topology, so that is the thing to try first. And indeed it answers the question as well as it can be answered: If !!\langle X, \tau\rangle!! is a connected space, and if there is any refinement !!\tau'!! for which !!\langle X, \tau'\rangle!! is disconnected, then !!\langle X, \tau_d\rangle!! will be disconnected. It doesn't even matter what connected space you start with, because !!\tau_d!! is always a refinement of !!\tau!!, and because !!\langle X, \tau_d\rangle!! is always disconnected, except in trivial cases. (When !!X!! has fewer than two points.)

Right after you learn the definition of what a topology is, you are presented with a bunch of examples. Some are typical examples, which showcase what the idea is really about: the “open sets” of the real line topologize the line, so that topology can be used as a tool for studying real analysis. But some are atypical examples, which showcase the extreme limits of the concept that are as different as possible from the typical examples. The discrete space is one of these. What's it for? It doesn't help with understanding the real numbers, that's for sure. It's a tool, it's the knob on the topology machine that turns the fineness all the way up.[1] If you want to prove that the machine does something or other for the real numbers, one way is to show that it always does that thing. And sometimes part of showing that it always does that thing is to show that it does that even if you turn the knob all the way to the right.

So often the first thing a mathematician will try is:

What happens if I turn the knob all the way to the right? If that doesn't blow up the machine, nothing will!

And that's why, when you ask a mathematician a question, often the first thing they will say is “that fails when !!x=0!!” or “that fails when all the numbers are equal” or “that fails when one number is very much bigger than the other” or “that fails when the space is discrete” or “that fails when the space has fewer than two points.” [2]

After the last article, Kyle Littler reminded me that I should not forget the important word “pathological”. One of the important parts of mathematical science is figuring out what the knobs are, how far they can go, what happens if you turn them all the way up, and what are the limits on how they can be set if we want the machine to behave more or less like the thing we are trying to study.

We have this certain knob for how many dents and bumps and spikes we can put on a sphere and have it still be a sphere, as long as we do not actually puncture or tear the surface. And we expected that no matter how far we turned this knob, the sphere would still divide space into two parts, a bounded inside and an unbounded outside, and that these regions should behave basically the same as they do when the sphere is smooth.[3]

But no, we are wrong, the knob goes farther than we thought. If we turn it to the “Alexander horned sphere” setting, smoke starts to come out of the machine and the red lights begin to blink.[4] Useful! Now if someone has some theory about how the machine will behave nicely if this and that knob are set properly, we might be able to add the useful observation “actually you also have to be careful not to turn that “dents bumps and spikes” knob too far.”

The word for these bizarre settings where some of the knobs are in the extreme positions is “pathological”. The Alexander sphere is a pathological embedding of !!S^2!! into !!\Bbb R^3!!.


[1] The leftmost setting on that knob, with the fineness turned all the way down, is called the “indiscrete topology” or the “trivial topology”.

[2] If you claim that any connected space can be disconnected by turning the “fineness” knob all the way to the right, a mathematican will immediately turn the “number of points” knob all the way to the left, and say “see, that only works for spaces with at least two points”. In a space with fewer than two points, even the discrete topology is connected.

[3] For example, if you tie your dog to a post outside the sphere, and let it wander around, its leash cannot get so tangled up with the sphere that you need to walk the dog backwards to untangle it. You can just slip the leash off the sphere.

[4] The dog can get its leash so tangled around the Alexander sphere that the only way to fix it is to untie the dog and start over. But if the “number of dimensions” knob is set to 2 instead of to 3, you can turn the “dents bumps and spikes” knob as far as you want and the leash can always be untangled without untying or moving the dog. Isn't that interesting? That is called the Jordan curve theorem.


[Other articles in category /math] permanent link

An instructive example of expected value

I think this example is very illuminating of something, although I'm not sure yet what.

Suppose you are making a short journey somewhere. You leave two minutes later than planned. How does this affect your expected arrival time? All other things being equal, you should expect to arrive two minutes later than planned. If you're walking or driving, it will probably be pretty close to two minutes no matter what happens.

Now suppose the major part of your journey involves a train that runs every hour, and you don't know just what the schedule is. Now how does your two minutes late departure affect your expected arrival time?

The expected arrival time is still two minutes later than planned. But it is not uniformly distributed. With probability !!\frac{58}{60}!!, you catch the train you planned to take. You are unaffected by your late departure, and arrive at the same time. But with probability !!\frac{2}{60}!! you miss that train and have to take the next one, arriving an hour later than you planned. The expected amount of lateness is

$$0 \text{ minutes}·\frac{58}{60} + 60 \text{ minutes}·\frac{2}{60} = 2 \text{ minutes}$$

the same as before.

[ Addendum: Richard Soderberg points out that one thing illuminated by this example is that the mathematics fails to capture the emotional pain of missing the train. Going in a slightly different direction, I would add that the expected value reduces a complex situation to a single number, and so must necessarily throw out a lot of important information. I discussed this here a while back in connection with lottery tickets.

But also I think this failure of the expected value is also a benefit: it does capture something interesting about the situation that might not have been apparent before: Considering the two minutes as a time investment, there is a sense in which the cost is knowable; it costs exactly two minutes. Yes, there is a chance that you will be hit by a truck that you would not have encountered had you left on time. But this is exactly offset by the hypothetical truck that passed by harmlessly two minutes before you arrived on the scene but which would have hit you had you left on time. ]


[Other articles in category /math] permanent link

Mon, 20 Nov 2017

Mathematical jargon for quibbling

Mathematicians tend not to be the kind of people who shout and pound their fists on the table. This is because in mathematics, shouting and pounding your fist does not work. If you do this, other mathematicians will just laugh at you. Contrast this with law or politics, which do attract the kind of people who shout and pound their fists on the table.

However, mathematicians do tend to be the kind of people who quibble and pettifog over the tiniest details. This is because in mathematics, quibbling and pettifogging does work.

Mathematics has a whole subjargon for quibbling and pettifogging, and also for excluding certain kinds of quibbles. The word “nontrivial” is preeminent here. To a first approximation, it means “shut up and stop quibbling”. For example, you will often hear mathematicians having conversations like this one:

A: Mihăilescu proved that the only solution of Catalan's equation !!a^x - b^y = 1!! is !!3^2 - 2^3!!.

B: What about when !!a!! and !!b!! are consecutive and !!x=y=1!!?

A: The only nontrivial solution.

B: Okay.

Notice that A does not explain what “nontrivial” is supposed to mean here, and B does not ask. And if you were to ask either of them, they might not be able to tell you right away what they meant. For example, if you were to inquire specifically about !!2^1 - 1^y!!, they would both agree that that is also excluded, whether or not that solution had occurred to either of them before. In this example, “nontrivial” really does mean “stop quibbling”. Or perhaps more precisely “there is actually something here of interest, and if you stop quibbling you will learn what it is”.

In some contexts, “nontrivial” does have a precise and technical meaning, and needs to be supplemented with other terms to cover other types of quibbles. For example, when talking about subgroups, “nontrivial” is supplemented with “proper”:

If a nontrivial group has no proper nontrivial subgroup, then it is a cyclic group of prime order.

Here the “proper nontrivial” part is not merely to head off quibbling; it's the crux of the theorem. But the first “nontrivial” is there to shut off a certain type of quibble arising from the fact that 1 is not considered a prime number. By this I mean if you omit “proper”, or the second “nontrivial”, the statement is still true, but inane:

If a nontrivial group has no subgroup, then it is a cyclic group of prime order.

(It is true, but vacuously so.) In contrast, if you omit the first “nontrivial”, the theorem is substantively unchanged:

If a group has no proper nontrivial subgroup, then it is a cyclic group of prime order.

This is still true, except in the case of the trivial group that is no longer excluded from the premise. But if 1 were considered prime, it would be true either way.

Looking at this issue more thoroughly would be interesting and might lead to some interesting conclusions about mathematical methodology.

  • Can these terms be taxonomized?
  • How do mathematical pejoratives relate? (“Abnormal, irregular, improper, degenerate, inadmissible, and otherwise undesirable”) Kelley says we use these terms to refer to “a problem we cannot handle”; that seems to be a different aspect of the whole story.
  • Where do they fit in Lakatos’ Proofs and Refutations theory? Sometimes inserting “improper” just heads off a quibble. In other cases, it points the way toward an expansion of understanding, as with the “improper” polyhedra that violate Euler's theorem and motivate the introduction of the Euler characteristic.
  • Compare with the large and finely-wrought jargon that distinguishes between proofs that are “elementary”, “easy”, “trivial”, “straightforward”, or “obvious”.
  • Is there a category-theoretic formulation of what it means when we say “without loss of generality, take !!x\lt y!!”?

[ Addendum: Kyle Littler reminds me that I should not forget “pathological”. ]

[ Addendum 20240706: I forgot to mention that I wrote a followup article that discusses why this sort of quibbling is actually useful. ]


[Other articles in category /math] permanent link

Wed, 15 Nov 2017

Down with the negation sign!

[ Credit where it is due: This was entirely Darius Bacon's idea. ]

In connection with “Recognizing when two arithmetic expressions are essentially the same”, I had several conversations with people about ways to normalize numeric expressions. In that article I observed that while everyone knows the usual associative law for addition $$ (a + b) + c = a + (b + c)$$ nobody ever seems to mention the corresponding law for subtraction: $$ (a+b)-c = a + (b-c).$$

And while everyone “knows” that subtraction is not associative because $$(a - b) - c ≠ a - (b-c)$$ nobody ever seems to observe that there is an associative law for subtraction: $$\begin{align} (a - b) + c & = a - (b - c) \\ (a -b) -c & = a-(b+c).\end{align}$$

This asymmetry is kind of a nuisance, and suggests that a more symmetric notation might be better. Darius Bacon suggested a simple change that I think is an improvement:

Write the negation of !!a!! as $$a\star$$ so that one has, for all !!a!!, $$a+a\star = a\star + a = 0.$$

The !!\star!! operation obeys the following elegant and simple laws: $$\begin{align} a\star\star & = a \\ (a+b)\star & = a\star + b\star \end{align} $$

Once we adopt !!\star!!, we get a huge payoff: We can eliminate subtraction:

Instead of !!a-b!! we now write !!a+b\star!!.

The negation of !!a+b\star!! is $$(a+b\star)\star = a\star + b{\star\star} = a\star +b.$$

We no longer have the annoying notational asymmetry between !!a-b!! and !!-b + a!! where the plus sign appears from nowhere. Instead, one is !!a+b\star!! and the other is !!b\star+a!!, which is obviously just the usual commutativity of addition.

The !!\star!! is of course nothing but a synonym for multiplication by !!-1!!. But it is a much less clumsy synonym. !!a\star!! means !!a\cdot(-1)!!, but with less inkjunk.

In conventional notation the parentheses in !!a(-b)!! are essential and if you lose them the whole thing is ruined. But because !!\star!! is just a special case of multiplication, it associates with multiplication and division, so we don't have to worry about parentheses in !!(a\star)b = a(b\star) = (ab)\star!!. They are all equal to just !!ab\star!!, and you can drop the parentheses or include them or write the terms in any order, just as you like, just as you would with !!abc!!.

The surprising associativity of subtraction is no longer surprising, because $$(a + b) - c = a + (b - c)$$ is now written as $$(a + b) + c\star = a + (b + c\star)$$ so it's just the usual associative law for addition; it is not even disguised. The same happens for the reverse associative laws for subtraction that nobody mentions; they become variations on $$ \begin{align} (a + b\star) + c\star & = a + (b\star + c\star) \\ & = a + (b+c)\star \end{align} $$ and such like.
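Another identity that students are made to memorize, !!a-(b-c) = a-b+c!!, dissolves the same way. Working it out with nothing but the two !!\star!! laws above: $$a + (b + c\star)\star = a + b\star + c{\star\star} = a + b\star + c.$$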

The !!\star!! is faster to read and faster to say. Instead of “minus one” or “negative one” or “times negative one”, you just say “star”.

The !!\star!! is just a number, and it behaves like a number. Its role in an expression is the same as any other number's. It is just a special, one-off notation for a single, particularly important number.

Open questions:

  1. Do we now need to replace the !!\pm!! sign? If so, what should we replace it with?

  2. Maybe the idea is sound, but the !!\star!! itself is a bad choice. It is slow to write. It will inevitably be confused with the * that almost every programming language uses to denote multiplication.

  3. The real problem here is that the !!-!! symbol is overloaded. Instead of changing the negation symbol to !!\star!! and eliminating the subtraction symbol, what if we just eliminated subtraction? None of the new notation would be incompatible with the old notation: !!-(a+-b) = b+-a!! still means exactly what it used to. But you are no longer allowed to abbreviate it to !!-(a-b) = b-a!!.

    This would fix the problem of the !!\star!! taking too long to write: we would just use !!-!! in its place. It would also fix the concern of point 2: !!a\pm b!! now means !!a+b!! or !!a+-b!! which is not hard to remember or to understand. Another happy result: notations like !!-1!! and !!-2!! do not change at all.

Curious footnote: While I was writing up the draft of this article, it had a reminder in it: “How did you and Darius come up with this?” I went back to our email to look, and I discovered the answer was:

  1. Darius suggested the idea to me.
  2. I said, “Hey, that's a great idea!”

I wish I could take more credit, but there it is. Hmm, maybe I will take credit for inspiring Darius! That should be worth at least fifty percent, perhaps more.

[ This article had some perinatal problems. It escaped early from the laboratory, in a not-quite-finished state, so I apologize if you are seeing it twice. ]


[Other articles in category /math] permanent link

Tue, 07 Nov 2017

A modern translation of the 1+1=2 lemma

A while back I blogged an explanation of the “!!1+1=2!!” lemma from Whitehead and Russell's Principia Mathematica:

W. Ethan Duckworth of the Department of Mathematics and Statistics at Loyola University translated this into modern notation and has kindly given me permission to publish it here:

I think it is interesting and instructive to compare the two versions. One thing to notice is that there is no perfect translation. As when translating between two natural languages (German and English, say), the meaning cannot be preserved exactly. Whitehead and Russell's language is different from the modern language not only because the notation is different but because the underlying concepts are different. To really get what Principia Mathematica is saying you have to immerse yourself in the Principia Mathematica model of the world.

The best example of this here is the symbol “1”. In the modern translation, this means the number 1. But at this point in Principia Mathematica, the number 1 has not yet been defined, and to use it here would be circular, because proposition ∗54.43 is an important step on the way to defining it. In Principia Mathematica, the symbol “1” represents the class of all sets that contain exactly one element.[1] Following the definition of ∗52.01, in modern notation we would write something like:

$$1 \equiv_{\text{def}} \{x \mid \exists y . x = \{ y \} \}$$

But in many modern universes, that of ZF set theory in particular, there is no such object.[2] The situation in ZF is even worse: the purported definition is meaningless, because the comprehension is unrestricted.

The Principia Mathematica notation for !!|A|!!, the cardinality of set !!A!!, is !!Nc\,‘A!!, but again this is only an approximate translation. The meaning of !!Nc\,‘A!! is something close to

the unique class !!C!! such that !!x\in C!! if and only if there exists a one-to-one relation between !!A!! and !!x!!.

(So for example one might assert that !!Nc\,‘\Lambda = 0!!, and in fact this is precisely what proposition ∗101.1 does assert.) Even this doesn't quite capture the Principia Mathematica meaning, since the modern conception of a relation is that it is a special kind of set, but in Principia Mathematica relations and sets are different sorts of things. (We would also use a one-to-one function, but here there is no additional mismatch between the modern concept and the Principia Mathematica one.)

It is important, when reading old mathematics, to try to understand in modern terms what is being talked about. But it is also dangerous to forget that the ideas themselves are different, not just the language.[3] I extract a lot of value from switching back and forth between different historical views, and comparing them. Some of this value is purely historiological. But some is directly mathematical: looking at the same concepts from a different viewpoint sometimes illuminates aspects I didn't fully appreciate. And the different viewpoint I acquire is one that most other people won't have.

One of my current low-priority projects is reading W. Burnside's important 1897 book Theory of Groups of Finite Order. The value of this, for me, is not so much the group-theoretic content, but in seeing how ideas about groups have evolved. I hope to write more about this topic at some point.


[1] Actually the situation in Principia Mathematica is more complicated. There is a different class 1 defined at each type. But the point still stands.

[2] In ZF, if !!1!! were to exist as defined above, the set !!\{1\}!! would exist also, and we would have !!\{1\} \in 1!! which would contradict the axiom of foundation.

[3] This was a recurring topic of study for Imre Lakatos, most famously in his little book Proofs and Refutations. Also important is his article “Cauchy and the continuum: the significance of nonstandard analysis for the history and philosophy of mathematics.” Math. Intelligencer 1 (1978), #3, p.151–161, which I discussed here earlier, and which you can read in its entirety by paying the excellent people at Elsevier the nominal and reasonable—nay, trivial—sum of only US$39.95.


[Other articles in category /math] permanent link

Tue, 31 Oct 2017

The Blind Spot and the cut rule

[ The Atom and RSS feeds have done an unusually poor job of preserving the mathematical symbols in this article. It will be much more legible if you read it on my blog. ]

Lately I've been enjoying The Blind Spot by Jean-Yves Girard, a very famous logician. (It is translated from French; the original title is Le Point Aveugle.) This is an unusual book. It is solidly full of deep thought and technical detail about logic, but it is also opinionated, idiosyncratic and polemical. Chapter 2 (“Incompleteness”) begins:

It is out of question to enter into the technical arcana of Gödel’s theorem, this for several reasons:

(i) This result, indeed very easy, can be perceived, like the late paintings of Claude Monet, but from a certain distance. A close look only reveals fastidious details that one perhaps does not want to know.

(ii) There is no need either, since this theorem is a scientific cul-de-sac : in fact it exposes a way without exit. Since it is without exit, nothing to seek there, and it is of no use to be expert in Gödel’s theorem.

(The Blind Spot, p. 29)

He continues a little later:

Rather than insisting on those tedious details which «hide the forest», we shall spend time on objections, from the most ridiculous to the less stupid (none of them being eventually respectable).

As you can see, it is not written in the usual dry mathematical-text style, presenting the material as a perfect and aseptic distillation of absolute truth. Instead, one sees the history of logic, the rise and fall of different theories over time, the interaction and relation of many mathematical and philosophical ideas, and Girard's reflections about it all. It is a transcription of a lecture series, and reads like one, including all of the speaker's incidental remarks and offhand musings, but written down so that each can be weighed and pondered at length. Instead of wondering in the moment what he meant by some intriguing remark, then having to abandon the thought to keep up with the lecture, I can pause and ponder the significance. Girard is really, really smart, and knows way more about logic than I ever will, and his offhand remarks reward this pondering. The book is profound in a way that mathematics books often aren't. I wanted to provide an illustrative quotation, but to briefly excerpt a profound thought is to destroy its profundity, so I will have to refrain.[1]

The book really gets going with its discussion of Gentzen's sequent calculus in chapter 3.

Between around 1890 (when Peano and Frege began to liberate logic from its medieval encrustations) and 1935 when the sequent calculus was invented, logical proofs were mainly in the “Hilbert style”. Typically there were some axioms, and some rules of deduction by which the axioms could be transformed into other formulas. A typical example consists of the axioms $$A\to(B\to A)\\ (A \to (B \to C)) \to ((A \to B) \to (A \to C)) $$ (where !!A, B, C!! are understood to be placeholders that can be replaced by any well-formed formulas) and the deduction rule modus ponens: having proved !!A\to B!! and !!A!!, we can deduce !!B!!.

In contrast, sequent calculus has few axioms and many deduction rules. It deals with sequents, which are claims of implication. For example: $$p, q \vdash r, s$$ means that if we can prove all of the formulas on the left of the ⊢ sign, then we can conclude some of the formulas on the right. (Perhaps only one, but at least one.)

A typical deductive rule in sequent calculus is:

$$ \begin{array}{c} Γ ⊢ A, Δ \qquad Γ ⊢ B, Δ \\ \hline Γ ⊢ A ∧ B, Δ \end{array} $$

Here !!Γ!! and !!Δ!! represent any lists of formulas, possibly empty. The premises of the rule are:

  1. !!Γ ⊢ A, Δ!!: If we can prove all of the formulas in !!Γ!!, then we can conclude either the formula !!A!! or some of the formulas in !!Δ!!.

  2. !!Γ ⊢ B, Δ!!: If we can prove all of the formulas in !!Γ!!, then we can conclude either the formula !!B!! or some of the formulas in !!Δ!!.

From these premises, the rule allows us to deduce:

!!Γ ⊢ A ∧ B, Δ!!: If we can prove all of the formulas in !!Γ!!, then we can conclude either the formula !!A \land B!! or some of the formulas in !!Δ!!.

The only axioms of sequent calculus are utterly trivial:

$$ \begin{array}{c} \phantom{A} \\ \hline A ⊢ A \end{array} $$

There are no premises; we get this deduction for free: if we can prove !!A!!, we can prove !!A!!. (!!A!! here is a metavariable that can be replaced with any well-formed formula.)

One important point that Girard brings up, which I had never realized despite long familiarity with sequent calculus, is the symmetry between the left and right sides of the turnstile ⊢. As I mentioned, the interpretation of !!Γ ⊢ Δ!! I had been taught was that it means that if every formula in !!Γ!! is provable, then some formula in !!Δ!! is provable. But instead let's focus on just one of the formulas !!A!! on the right-hand side, hiding in the list !!Δ!!. The sequent !!Γ ⊢ Δ, A!! can be understood to mean that to prove !!A!!, it suffices to prove all of the formulas in !!Γ!!, and to disprove all the formulas in !!Δ!!. And now let's focus on just one of the formulas on the left side: !!Γ, A ⊢ Δ!! says that to disprove !!A!!, it suffices to prove all the formulas in !!Γ!! and disprove all the formulas in !!Δ!!.

The all-some correspondence, which had previously caused me to wonder why it was that way and not something else, perhaps the other way around, has turned into a simple relationship about logical negation: the formulas on the left are positive, and the ones on the right are negative.[2] With this insight, the sequent calculus negation laws become not merely simple but trivial:

$$ \begin{array}{cc} \begin{array}{c} Γ, A ⊢ Δ \\ \hline Γ ⊢ \lnot A, Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A, Δ \\ \hline Γ, \lnot A ⊢ Δ \end{array} \end{array} $$

For example, in the right-hand deduction: what is sufficient to prove !!A!! is also sufficient to disprove !!¬A!!.

(Compare also the rule I showed above for ∧: It now says that if proving everything in !!Γ!! and disproving everything in !!Δ!! is sufficient for proving !!A!!, and likewise sufficient for proving !!B!!, then it is also sufficient for proving !!A\land B!!.)

But none of that was what I planned to discuss; this article is (intended to be) about sequent calculus's “cut rule”.

I never really appreciated the cut rule before. Most of the deductive rules in the sequent calculus are intuitively plausible and so simple and obvious that it is easy to imagine coming up with them oneself.

But the cut rule is more complicated than the rules I have already shown. I don't think I would have thought of it easily:

$$ \begin{array}{c} Γ ⊢ A, Δ \qquad Λ, A ⊢ Π \\ \hline Γ, Λ ⊢ Δ, Π \end{array} $$

(Here !!A!! is a formula and !!Γ, Δ, Λ, Π!! are lists of formulas, possibly empty lists.)

Girard points out that the cut rule is a generalization of modus ponens: taking !!Γ, Δ, Λ!! to be empty and !!Π = \{B\}!! we obtain:

$$ \begin{array}{c} ⊢ A \qquad A ⊢ B \\ \hline ⊢ B \end{array} $$

The cut rule is also a generalization of the transitivity of implication:

$$ \begin{array}{c} X ⊢ A \qquad A ⊢ Y \\ \hline X ⊢ Y \end{array} $$

Here we took !!Γ = \{X\}, Π = \{Y\}!!, and !!Δ!! and !!Λ!! empty.

This all has given me a much better idea of where the cut rule came from and why we have it.

In sequent calculus, the deduction rules all come in pairs. There is a rule about introducing ∧, which I showed before. It allows us to construct a sequent involving a formula with an ∧, where perhaps we had no ∧ before. (In fact, it is the only way to do this.) There is a corresponding rule (actually two rules) for getting rid of ∧ when we have it and we don't want it:

$$ \begin{array}{cc} \begin{array}{c} Γ ⊢ A\land B, Δ \\ \hline Γ ⊢ A, Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A\land B, Δ \\ \hline Γ ⊢ B, Δ \end{array} \end{array} $$

Similarly there is a rule (actually two rules) about introducing !!\lor!! and a corresponding rule about eliminating it.

The cut rule seems to lie outside this classification. It is not paired.

But Girard showed me that it is part of a pair. The axiom

$$ \begin{array}{c} \phantom{A} \\ \hline A ⊢ A \end{array} $$

can be seen as an introduction rule for a pair of !!A!!s, one on each side of the turnstile. The cut rule is the corresponding rule for eliminating !!A!! from both sides.

Sequent calculus proofs are much easier to construct than Hilbert-style proofs. Suppose one wants to prove !!B!!. In a Hilbert system the only deduction rule is modus ponens, which requires that we first prove !!A\to B!! and !!A!! for some !!A!!. But what !!A!! should we choose? It could be anything, and we have no idea where to start or how big it could be. (If you enjoy suffering, try to prove the simple theorem !!A\to A!! in the Hilbert system I described at the beginning of the article.) (Solution)

In sequent calculus, there is only one way to prove each kind of thing, and the premises in each rule are simply related to the consequent we want. Constructing the proof is mostly a matter of pushing the symbols around by following the rules to their conclusions. (Or, if this is impossible, one can conclude that there is no proof, and why.[3]) Construction of proofs can now be done entirely mechanically!
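For example, here is a complete cut-free proof of !!A\land B ⊢ B\land A!!, assembled mechanically from just the rules shown so far: the axiom, the two ∧-elimination rules, and the ∧-introduction rule. (The example is mine, not Girard's.)

$$ \begin{array}{c} \begin{array}{c} \begin{array}{c} \phantom{A} \\ \hline A\land B ⊢ A\land B \end{array} \\ \hline A\land B ⊢ B \end{array} \qquad \begin{array}{c} \begin{array}{c} \phantom{A} \\ \hline A\land B ⊢ A\land B \end{array} \\ \hline A\land B ⊢ A \end{array} \\ \hline A\land B ⊢ B\land A \end{array} $$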

Except! The cut rule does require one to guess a formula: If one wants to prove !!Γ, Λ ⊢ Δ, Π!!, one must guess what !!A!! should appear in the premises !!Γ ⊢ A, Δ!! and !!Λ, A ⊢ Π!!. And there is no constraint at all on !!A!!; it could be anything, and we have no idea where to start or how big it could be.

The good news is that Gentzen, the inventor of sequent calculus, showed that one can dispense with the cut rule: it is unnecessary:

In Hilbert-style systems, based on common sense, the only rule is (more or less) the Modus Ponens : one reasons by linking together lemmas and consequences. We just say that one can get rid of that : it is like crossing the English Channel with fists and feet bound !

(The Blind Spot, p. 61)

Gentzen's demonstration of this shows how one can take any proof that involves the cut rule, and algorithmically eliminate the cut rule from it to obtain a proof of the same result that does not use cut. Gentzen called this the “Hauptsatz” (“principal theorem”) and rightly so, because it reduces construction of logical proofs to an algorithm and is therefore the ultimate basis for algorithmic proof theory.

The bad news is that the cut-elimination process can super-exponentially increase the size of the proof, so it does not lead to a practical algorithm for deciding provability. Girard analyzed why, and what he discovered amazed me. The only problem is in the contraction rules, which had seemed so trivial and innocuous—uninteresting, even—that I had never given them any thought:

$$ \begin{array}{cc} \begin{array}{c} Γ, A, A ⊢ Δ \\ \hline Γ, A ⊢ Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A, A, Δ \\ \hline Γ ⊢ A, Δ \end{array} \end{array} $$

And suddenly Girard's invention of linear logic made sense to me. In linear logic, contraction is forbidden; one must use each formula in one and only one deduction.

Previously it had seemed to me that this was a pointless restriction. Now I realized that it was no more of a useless hair shirt than the intuitionistic rejection of proof by contradiction: not a stubborn refusal to use an obvious tool of reasoning, but a restriction of proofs to produce better reasoning. With the rejection of contraction, cut-elimination no longer explodes proof size, and automated theorem proving becomes practical:

This is why its study is, implicitly, at the very heart of these lectures.

(The Blind Spot, p. 63)

The book is going to get into linear logic in the next chapter. I have read descriptions of linear logic before, but never understood what it was up to. (It has two logical and operators, and two logical or operators; why?) But I am sure Girard will explain it marvelously.


  1. In place of a profound excerpt, I will present the following, which isn't especially profound but struck a chord for me: “By the way, nothing is more arbitrary than a modal logic « I am done with this logic, may I have another one ? » seems to be the motto of these guys.” (p. 24)
  2. Compare my alternative representation of arithmetic expressions that collapses addition and subtraction, and multiplication and division.
  3. A typically Girardian remark is that analytic tableaux are “the poor man's sequents”.

A brief but interesting discussion of The Blind Spot on Hacker News.

Review by Felipe Zaldivar.


[Other articles in category /math/logic] permanent link

Sun, 15 Oct 2017

Counting increasing sequences with Burnside's lemma

[ I started this article in March and then forgot about it. Ooops! ]

Back in February I posted an article about how there are exactly 715 nondecreasing sequences of 4 digits. I said that !!S(10, 4)!! was the set of such sequences and !!C(10, 4)!! was the number of such sequences, and in general $$C(d,n) = \binom{n+d-1}{d-1} = \binom{n+d-1}{n}$$ so in particular $$C(10,4) = \binom{13}{4} = 715.$$

I described more than one method of seeing this, but I didn't mention the method I had found first, which was to use the Cauchy-Frobenius-Redfield-Pólya-Burnside counting lemma. I explained the lemma in detail some time ago, with beautiful illustrated examples, so I won't repeat the explanation here. The Burnside lemma is a kind of big hammer to use here, but I like big hammers. And the results of this application of the big hammer are pretty good, and justify it in the end.

To count the number of distinct sequences of 4 digits, where some sequences are considered “the same”, we first identify a symmetry group whose orbits are the equivalence classes of sequences. Here the symmetry group is !!S_4!!, the group that permutes the elements of the sequence, because two sequences are considered “the same” if they have exactly the same digits but possibly in a different order, and the elements of !!S_4!! act on the sequences by permuting their elements into different orders.

Then you tabulate how many of the 10,000 original sequences are left fixed by each element !!p!! of !!S_4!!; this depends only on the number of cycles of !!p!!. (I have also discussed cycle classes of permutations before.) If !!p!! contains !!n!! cycles, then !!p!! leaves exactly !!10^n!! of the !!10^4!! sequences fixed.

    Number of cycles    How many permutations?    Sequences left fixed (each)
    4                   1                         10,000
    3                   6                         1,000
    2                   3 + 8 = 11                100
    1                   6                         10
    Total               24                        17,160

(Skip this paragraph if you already understand the table. The four rows above are an abbreviation of the full table, which has 24 rows, one for each of the 24 permutations of order 4. The “How many permutations?” column says how many times each row should be repeated. So for example the second row abbreviates 6 rows, one for each of the 6 permutations with three cycles, which each leave 1,000 sequences fixed, for a total of 6,000 in the second row, and the total for all 24 rows is 17,160. There are two different types of permutations that have two cycles, with 3 and 8 permutations respectively, and I have collapsed these into a single row.)

Then the magic happens: We average the number left fixed by each permutation and get !!\frac{17160}{24} = 715!! which we already know is the right answer.
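For concreteness, here is a short Perl program (a sketch; not necessarily the program I actually used) that performs exactly this computation, generating all of !!S_4!! and averaging !!10^{\text{cycles}}!!:

    use strict;
    use warnings;

    # Count the cycles of a permutation given as a list mapping i -> p[i].
    sub cycle_count {
        my @p = @_;
        my @seen = (0) x @p;
        my $cycles = 0;
        for my $start ( 0 .. $#p ) {
            next if $seen[$start];
            $cycles++;
            my $i = $start;
            until ( $seen[$i] ) { $seen[$i] = 1; $i = $p[$i] }
        }
        return $cycles;
    }

    # Generate all permutations of a list, naively and recursively.
    sub permutations {
        my @items = @_;
        return ( [] ) unless @items;
        my @perms;
        for my $i ( 0 .. $#items ) {
            my @rest = @items;
            my ($x) = splice @rest, $i, 1;
            push @perms, [ $x, @$_ ] for permutations(@rest);
        }
        return @perms;
    }

    my ( $d, $n ) = ( 10, 4 );
    my @perms = permutations( 0 .. $n - 1 );
    my $total = 0;
    $total += $d ** cycle_count(@$_) for @perms;
    print $total / @perms, "\n";    # prints 715

Changing $d and $n gives !!C(d,n)!! for other cases, although enumerating all !!n!!! permutations limits it to small !!n!!.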

Now suppose we knew how many permutations there were with each number of cycles. Let's write !!\def\st#1#2{\left[{#1\atop #2}\right]}\st nk!! for the number of permutations of !!n!! things that have exactly !!k!! cycles. For example, from the table above we see that $$\st 4 4 = 1,\quad \st 4 3 = 6,\quad \st 4 2 = 11,\quad \st 4 1 = 6.$$

Then applying Burnside's lemma we can conclude that $$C(d, n) = \frac1{n!}\sum_i \st ni d^i .\tag{$\spadesuit$}$$ So for example the table above computes !!C(10,4) = \frac1{24}\sum_i \st 4i 10^i = 715!!.

At some point in looking into this I noticed that $$\def\rp#1#2{#1^{\overline{#2}}}% \def\fp#1#2{#1^{\underline{#2}}}% C(d,n) = \frac1{n!}\rp dn$$ where !!\rp dn!! is the so-called “rising power” of !!d!!: $$\rp dn = d\cdot(d+1)(d+2)\cdots(d+n-1).$$ I don't think I had a proof of this; I just noticed that !!C(d, 1) = d!! and !!C(d, 2) = \frac12(d^2+d)!! (both obvious), and the Burnside's lemma analysis of the !!n=4!! case had just given me !!C(d, 4) = \frac1{24}(d^4 +6d^3 + 11d^2 + 6d)!!. Even if one doesn't immediately recognize this latter polynomial it looks like it ought to factor and then on factoring it one gets !!d(d+1)(d+2)(d+3)!!. So it's easy to conjecture !!C(d, n) = \frac1{n!}\rp dn!! and indeed, this is easy to prove from !!(\spadesuit)!!: The !!\st n k!! obey the recurrence $$\st{n+1}k = n \st nk + \st n{k-1}\tag{$\color{green}{\star}$}$$ (by an easy combinatorial argument1) and it's also easy to show that the coefficients of !!\rp nk!! obey the same recurrence.2

In general !!\rp nk = \fp{(n+k-1)}k!!, so we have !!C(d, n) = \frac1{n!}\rp dn = \frac1{n!}\fp{(n+d-1)}n = \binom{n+d-1}n = \binom{n+d-1}{d-1}!!, which ties the knot with the formula from the previous article. In particular, !!C(10,4) = \binom{13}4 = \binom{13}9 = 715!!.
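(For a quick numeric check of this, note that the numerator !!10·11·12·13 = 17160!! is the same total that appeared in the Burnside table above, exactly as !!(\spadesuit)!! requires:)

    my ( $d, $n ) = ( 10, 4 );
    my ( $num, $den ) = ( 1, 1 );
    for my $i ( 0 .. $n - 1 ) {
        $num *= $d + $i;    # rising power d·(d+1)···(d+n-1)
        $den *= $i + 1;     # n!
    }
    print $num / $den, "\n";    # 715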

I have a bunch more to say about this but this article has already been in the oven long enough, so I'll cut the scroll here.


[1] The combinatorial argument that justifies !!(\color{green}{\star})!! is as follows: The Stirling number !!\st nk!! counts the number of permutations of order !!n!! with exactly !!k!! cycles. To get a permutation of order !!n+1!! with exactly !!k!! cycles, we can take one of the !!\st nk!! permutations of order !!n!! with !!k!! cycles and insert the new element into one of the existing cycles after any of the !!n!! elements. Or we can take one of the !!\st n{k-1}!! permutations with only !!k-1!! cycles and add the new element in its own cycle.

[2] We want to show that the coefficients of !!\rp nk!! obey the same recurrence as !!(\color{green}{\star})!!. Let's say that the coefficient of the !!n^i!! term in !!\rp nk!! is !!c_i!!. We have $$\rp n{k+1} = \rp nk\cdot (n+k) = \rp nk \cdot n + \rp nk \cdot k $$ so the coefficient of the !!n^i!! term on the left is !!c_{i-1} + kc_i!!.


[Other articles in category /math] permanent link

Sun, 27 Aug 2017

Miscellanea about 24 puzzles

This is a collection of leftover miscellanea about twenty-four puzzles. In case you forgot what that is:

The puzzle «4 6 7 9 ⇒ 24» means that one should take the numbers 4, 6, 7, and 9, and combine them with the usual arithmetic operations of addition, subtraction, multiplication, and division, to make the number 24. In this case the unique solution is $$6\times\frac{7 + 9}{4}.$$ When the target number is 24, as it often is, we omit it and just write «4 6 7 9».

Prior articles on this topic:

How many puzzles have solutions?

For each value of !!T!!, there are 715 puzzles «a b c d ⇒ T». (I discussed this digression in two earlier articles: [1] [2].) When the target !!T = 24!!, 466 of the 715 puzzles have solutions. Is this typical? Many solutions of «a b c d» puzzles end with a multiplication of 6 and 4, or of 8 and 3, or sometimes of 12 and 2—so many that one quickly learns to look for these types of solutions right away. When !!T=23!!, there won't be any solutions of this type, and we might expect that relatively few puzzles with prime targets have solutions.

This turns out to be the case:

[Chart: for each target number !!T!! from 0 to 300, the number of solvable puzzles «a b c d ⇒ T», colored by the number of prime factors of !!T!!; details below.]

The x-axis is the target number !!T!!, with 0 at the left, 300 at right, and vertical guide lines every 25. The y-axis is the number of solvable puzzles out of the maximum possible of 715, with 0 at the bottom, 715 at the top, and horizontal guide lines every 100.

Dots representing prime number targets are colored black. Dots for numbers with two prime factors (4, 6, 9, 10, 14, 15, 21, 22, etc.) are red; dots with three, four, five, six, and seven prime factors are orange, yellow, green, blue, and purple respectively.

Two countervailing trends are obvious: Puzzles with smaller targets have more solutions, and puzzles with highly-composite targets have more solutions. No target number larger than 24 has as many as 466 solvable puzzles.

These are only trends, not hard rules. For example, there are 156 solvable puzzles with the target 126 (4 prime factors) but only 93 with target 128 (7 prime factors). Why? (I don't know. Maybe because there is some correlation with the number of different prime factors? But 72, 144, and 216 have many solutions, and only two different prime factors.)

The smallest target you can't hit is 417. The following numbers 418 and 419 are also impossible. But there are 8 sets of four digits that can be used to make 416 and 23 sets that can be used to make 420. The largest target that can be hit is obviously !!6561 = 9⁴!!; the largest target with two solutions is !!2916 = 4·9·9·9 = 6·6·9·9!!.

(The raw data are available here).

There is a lot more to discover here. For example, from looking at the chart, it seems that the locally-best target numbers often have the form !!2^n3^m!!. What would we see if we colored the dots according to their largest prime factor instead of according to their number of prime factors? (I tried doing this, and it didn't look like much, but maybe it could have been done better.)

Making zero

As the chart shows, 705 of the 715 puzzles of the type «a b c d ⇒ 0» are solvable. This suggests an interesting inverse puzzle that Toph and I enjoyed: find four digits !!a,b,c,d!! that cannot be used to make zero. (The answers).

Identifying interesting or difficult problems

(Caution: this section contains spoilers for many of the most interesting puzzles.)

I spent quite a while trying to get the computer to rank puzzles by difficulty, with indifferent success.

Fractions

Seven puzzles require the use of fractions. One of these is the notorious «3 3 8 8» that I mentioned before. This is probably the single hardest of this type. The other six are:

    «1 3 4 6»
    «1 4 5 6»
    «1 5 5 5»
    «1 6 6 8»
    «3 3 7 7»
    «4 4 7 7»

(Solutions to these (formatted image); solutions to these (plain text))

«1 5 5 5» is somewhat easier than the others, but they all follow pretty much the same pattern. The last two are pleasantly symmetrical.

Negative numbers

No puzzles require the use of negative intermediate values. This surprised me at first, but it is not hard to see why. Subexpressions with negative intermediate values can always be rewritten to have positive intermediate values instead.

For instance, !!3 × (9 + (3 - 4))!! can be rewritten as !!3 × (9 - (4 - 3))!! and !!(5 - 8)×(1 -9)!! can be rewritten as !!(8 - 5)×(9 -1)!!.

A digression about tree shapes

In one of the earlier articles I asserted that there are only two possible shapes for the expression trees of a puzzle solution:

    Form A: (((a # b) # c) # d)
    Form B: ((a # b) # (c # d))

Lindsey Kuper pointed out that there are five possible shapes, not two. Of course, I was aware of this (it is a Catalan number), so what did I mean when I said there were only two? It's because I had the idea that any tree that wasn't already in one of those two forms could be put into form A by using transformations like the ones in the previous section.
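For the record, the five shapes for four leaves are these (the first and third are Forms A and B):

    (((a # b) # c) # d)
    ((a # (b # c)) # d)
    ((a # b) # (c # d))
    (a # ((b # c) # d))
    (a # (b # (c # d)))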

For example, the expression !!(4×((1+2)÷3))!! isn't in either form, but we can commute the × to get the equivalent !!((1+2)÷3)×4!!, which has form A. Sometimes one uses the associative laws, for example to turn !!a ÷ (b × c)!! into !!(a ÷ b) ÷ c!!.

But I was mistaken; not every expression can be put into either of these forms. The expression !!8×(9-(2·3))!! is an example.

Unusual intermediate values

The most interesting thing I tried was to look for puzzles whose solutions require unusual intermediate numbers.

For example, the puzzle «3 4 4 4» looks easy (the other puzzles with just 3s and 4s are all pretty easy) but it is rather tricky because its only solution goes through the unusual intermediate number 28: !!4 × (3 + 4) - 4!!.

I ranked puzzles as follows: each possible intermediate number appears in a certain number of puzzle solutions; this is the score for that intermediate number. (Lower scores are better, because they represent rarer intermediate numbers.) The score for a single expression is the score of its rarest intermediate value. So for example !!4 × (3 + 4) - 4!! has the intermediate values 7 and 28. 7 is extremely common, and 28 is quite unusual, appearing in only 151 solution expressions, so !!4 × (3 + 4) - 4!! receives a fairly low score of 151 because of the intermediate 28.

Then each puzzle received a difficulty score which was the score of its easiest solution expression. For example, «2 2 3 8» has two solutions, one (!!(8+3)×2+2!!) involving the quite unusual intermediate value 22, which has a very good score of only 79. But this puzzle doesn't count as difficult because it also admits the obvious solution !!8·3·\frac22!! and this is the solution that gives it its extremely bad score of 1768.
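In code, the ranking works out to something like this (a hypothetical sketch; the %popularity table and the per-solution lists of intermediate values are assumed to come from the solver):

    # $popularity->{$v} is the number of solution expressions, over all
    # puzzles, in which the intermediate value $v appears.  Each solution
    # is an arrayref of its intermediate values.
    sub expression_score {
        my ( $popularity, $intermediates ) = @_;
        my ($rarest) = sort { $a <=> $b } map { $popularity->{$_} } @$intermediates;
        return $rarest;    # an expression scores as its rarest intermediate
    }

    sub puzzle_score {
        my ( $popularity, @solutions ) = @_;
        my ($easiest) = sort { $b <=> $a }
            map { expression_score( $popularity, $_ ) } @solutions;
        return $easiest;    # a puzzle scores as its easiest solution
    }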

Under this ranking, the best-scoring twenty-four puzzles, and their scores, were:

      «1 2 7 7» 3
    * «4 4 7 7» 12
    * «1 4 5 6» 13
    * «3 3 7 7» 14
    * «1 5 5 5» 15
      «5 6 6 9» 23
      «2 5 7 9» 24
      «2 2 5 8» 25
      «2 5 8 8» 45
      «5 8 8 8» 45
      «2 2 2 9» 47
    * «1 3 4 6» 59
    * «1 6 6 8» 59
      «2 4 4 9» 151
      «3 4 4 4» 151
    * «3 3 8 8» 152
      «6 8 8 9» 152
      «2 2 2 7» 155
      «2 2 5 7» 155
      «2 3 7 7» 155
      «2 4 7 7» 155
      «2 5 5 7» 155
      «2 5 7 7» 156
      «4 4 8 9» 162

(Something is not quite right here. I think «2 5 7 7» and «2 5 5 7» should have the same score, and I don't know why they don't. But I don't care enough to do it over.)

Most of these are at least a little bit interesting. The seven puzzles that require the use of fractions appear; I have marked them with stars. The top item is «1 2 7 7», whose only solution goes through the extremely rare intermediate number 49. The next items require fractions, and the one after that is «5 6 6 9», which I found difficult. So I think there's some value in this procedure.

But is there enough value? I'm not sure. The last item on the list, «4 4 8 9», goes through the unusual number 36. Nevertheless I don't think it is a hard puzzle.

(I can also imagine that someone might see the answer to «5 6 6 9» right off, but find «4 4 8 9» difficult. The whole exercise is subjective.)

Solutions with unusual tree shapes

I thought about looking for solutions that involved unusual sequences of operations. Division is much less common than the other three operations.

To get it right, one needs to normalize the form of expressions, so that the shapes !!(a + b) + (c + d)!! and !!a + (b + (c + d))!! aren't counted separately. The Ezpr library can help here. But I didn't go that far because the preliminary results weren't encouraging.

There are very few expressions totaling 24 that have the form !!(a÷b)÷(c÷d)!!. But if someone gives you a puzzle with a solution in that form, then !!(a×d)÷(b×c)!! and !!(a÷b)×(d÷c)!! are also solutions, and one or another is usually very easy to see. For example, the puzzle «1 3 8 9» has the solution !!(8÷1)÷(3÷9)!!, which has an unusual form. But this is an easy puzzle; someone with even a little experience will find the solution !!8 × \frac93 × 1!! immediately.

Similarly there are relatively few solutions of the form !!a÷((b-c)÷d)!!, but they can all be transformed into !!a×d÷(b-c)!! which is not usually hard to find. Consider $$\frac 8{\left(\frac{6 - 4}6\right)}.$$ This is pretty weird-looking, but when you're trying to solve it one of the first things you might notice is the 8, and then you would try to turn the rest of the digits into a 3 by solving «4 6 6 ⇒ 3», at which point it wouldn't take long to think of !!\frac6{6-4}!!. Or, coming at it from the other direction, you might see the sixes and start looking for a way to make «4 6 8 ⇒ 4», and it wouldn't take long to think of !!\frac8{6-4}!!.

Ezpr shape

Ezprs (see previous article) correspond more closely than abstract syntax trees do with our intuitive notion of how expressions ought to work, so looking at the shape of the Ezpr version of a solution might give better results than looking at the shape of the expression tree. For example, one might look at the number of nodes in the Ezpr or the depth of the Ezpr.

Ad-hockery

When trying to solve one of these puzzles, there are a few things I always try first. After adding up the four numbers, I then look for ways to make !!8·3, 6·4,!! or !!12·2!!; if that doesn't work I start branching out looking for something of the type !!ab\pm c!!.

Suppose we take a list of all solvable puzzles, and remove all the very easy ones: the puzzles where one of the inputs is zero, or where one of the inputs is 1 and there is a solution of the form !!E×1!!.

Then take the remainder and mark them as “easy” if they have solutions of the form !!a+b+c+d, 8·3, 6·4,!! or !!12·2!!. Also eliminate puzzles with solutions of the type !!E + (c - c)!! or !!E×\left(\frac cc\right)!!.

How many are eliminated in this way? Perhaps most? The remaining puzzles ought to have at least intermediate difficulty, and perhaps examining just those will suggest a way to separate them further into two or three ranks of difficulty.

I give up

But by this time I have solved so many twenty-four puzzles that I am no longer sure which ones are hard and which ones are easy. I suspect that I have seen and tried to solve most of the 466 solvable puzzles; certainly more than half. So my brain is no longer a reliable gauge of which puzzles are hard and which are easy.

Perhaps looking at puzzles with five inputs would work better for me now. These tend to be easy, because you have more to work with. But there are 2002 puzzles and probably some of them are hard.

Close, but no cigar

What's the closest you can get to 24 without hitting it exactly? The best I could do was !!5·5 - \frac89!!. Then I asked the computer, which confirmed that this is optimal, although I felt foolish when I saw the simpler solutions that are equally good: !!6·4 \pm\frac 19!!.

The paired solutions $$5 × \left(4 + \frac79\right) < 24 < 7 × \left(4 - \frac59\right)$$ are very handsome.

Phone app

The search program that tells us when a puzzle has solutions is only useful if we can take it with us in the car and ask it about license plates. A phone app is wanted. I built one with Code Studio.

Code Studio is great. It has a nice web interface, and beginners can write programs by dragging blocks around. It looks very much like MIT's Scratch project, which is much better-known. But Code Studio is a much better tool than Scratch. In Scratch, once you reach the limits of what it can do, you are stuck, and there is no escape. In Code Studio when you drag around those blocks you are actually writing JavaScript underneath, and you can click a button to see and edit the underlying JavaScript code you have written.

Suppose you need to convert A to 1 and B to 2 and so on. Scratch does not provide an ord function, so with Scratch you are pretty much out of luck; your only choice is to write a 26-way if-else tree, which means dragging around something like 104 stupid blocks. In Code Studio, you can drop down to the JavaScript level and type in ord to use the standard ord function. Then if you go back to blocks, the ord will look like any other built-in function block.

In Scratch, if you want to use a data structure other than an array, you are out of luck, because that is all there is. In Code Studio, you can drop down to the JavaScript level and use or build any data structure available in JavaScript.

In Scratch, if you want to initialize the program with bulk data, say a precomputed table of the solutions of the 466 twenty-four puzzles, you are out of luck. In Code Studio, you can upload a CSV file with up to 1,000 records, which then becomes available to your program as a data structure.

In summary, you spend a lot of your time in Scratch working around the limitations of Scratch, and what you learn doing that is of very limited applicability. Code Studio is real programming and if it doesn't do exactly what you want out of the box, you can get what you want by learning a little more JavaScript, which is likely to be useful in other contexts for a long time to come.

Once you finish your Code Studio app, you can click a button to send the URL to someone via SMS. They can follow the link in their phone's web browser and then use the app.

Code Studio is what Scratch should have been. Check it out.

Thanks

Thanks to everyone who contributed to this article, including:

  • my daughters Toph and Katara
  • Shreevatsa R.
  • Dr. Lindsey Kuper
  • Darius Bacon
  • everyone else who emailed me


[Other articles in category /math] permanent link

Sun, 20 Aug 2017

Recognizing when two arithmetic expressions are essentially the same

[ Warning: The math formatting in the RSS / Atom feed for this article is badly mutilated. I suggest you read the article on my blog. ]

In this article, I discuss “twenty-four puzzles”. The puzzle «4 6 7 9 ⇒ 24» means that one should take the numbers 4, 6, 7, and 9, and combine them with the usual arithmetic operations of addition, subtraction, multiplication, and division, to make the number 24. In this case the unique solution is !!6·\frac{7 + 9}{4}!!.

When the target number after the “⇒” is 24, as it often is, we omit it and just write «4 6 7 9». Every example in this article has target number 24.

This is a continuation of my previous articles on this topic:

My first cut at writing a solver for twenty-four puzzles was a straightforward search program. It had a couple of hacks in it to cut down the search space by recognizing that !!a+E!! and !!E+a!! are the same, but other than that there was nothing special about it and I've discussed it before.

It would quickly and accurately report whether any particular twenty-four puzzle was solvable, but as it turned out that wasn't quite good enough. The original motivation for the program was this: Toph and I play this game in the car. Pennsylvania license plates have three letters and four digits, and if we see a license plate FBV 2259 we try to solve «2 2 5 9». Sometimes we can't find a solution and then we wonder: is it because there isn't one, or is it because we just didn't get it yet? So the searcher turned into a phone app, which would tell us whether there was a solution, so we'd know whether to give up or keep searching.

But this wasn't quite good enough either, because after we would find that first solution, say !!2·(5 + 9 - 2)!!, we would wonder: are there any more? And here the program was useless: it would cheerfully report that there were three, so we would rack our brains to find another, fail, ask the program to tell us the answer, and discover to our disgust that the three solutions it had in mind were:

$$ 2 \cdot (5 + (9 - 2)) \\ 2 \cdot (9 + (5 - 2)) \\ 2 \cdot ((5 + 9) - 2) $$

The computer thinks these are different, because it uses different data structures to represent them. It represents them with an abstract syntax tree, which means that each expression is either a single constant, or is a structure comprising an operator and its two operand expressions—always exactly two. The computer understands the three expressions above as having these structures:
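(The tree diagrams don't reproduce well in text, so here are the three structures rendered in nested operator-first form:)

    (× 2 (+ 5 (− 9 2)))
    (× 2 (+ 9 (− 5 2)))
    (× 2 (− (+ 5 9) 2))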

It's not hard to imagine that the computer could be taught to understand that the first two trees are equivalent. Getting it to recognize that the third one is also equivalent seems somewhat more difficult.

Commutativity and associativity

I would like the computer to understand that these three expressions should be considered “the same”. But what does “the same” mean? This problem is of a kind I particularly like: we want the computer to do something, but we're not exactly sure what that something is. Some questions are easy to ask but hard to answer, but this is the opposite: the real problem is to decide what question we want to ask. Fun!

Certainly some of the question should involve commutativity and associativity of addition and multiplication. If the only difference between two expressions is that one has !!a + b!! where the other has !!b + a!!, they should be considered the same; similarly !!a + (b + c)!! is the same expression as !!(a + b) + c!! and as !!(b + a) + c!! and !!b + (a + c)!! and so forth.

The «2 2 5 9» example above shows that commutativity and associativity are not limited to addition and multiplication. There are commutative and associative properties of subtraction also! For example, $$a+(b-c) = (a+b)-c$$ and $$(a+b)-c = (a-c)+b.$$ There ought to be names for these laws but as far as I know there aren't. (Sure, it's just commutativity and associativity of addition in disguise, but nobody explaining these laws to school kids ever seems to point out that subtraction can enter into it. They just observe that !!(a-b)-c ≠ a-(b-c)!!, say “subtraction isn't associative”, and leave it at that.)

Closely related to these identities are operator inversion identities like !!a-(b+c) = (a-b)-c!!, !!a-(b-c) = (a-b)+c!!, and their multiplicative analogues. I don't know names for these algebraic laws either.

One way to deal with all of this would be to build a complicated comparison function for abstract syntax trees that tried to transform one tree into another by applying these identities. A better approach is to recognize that the data structure is over-specified. If we want the computer to understand that !!(a + b) + c!! and !!a + (b + c)!! are the same expression, we are swimming upstream by using a data structure that was specifically designed to capture the difference between these expressions.

Instead, I invented a data structure, called an Ezpr (“Ez-pur”), that can represent expressions, but in a somewhat more natural way than abstract syntax trees do, and in a way that makes commutativity and associativity transparent.

An Ezpr has a simplest form, called its “canonical” or “normal” form. Two Ezprs represent essentially the same mathematical expression if they have the same canonical form. To decide if two abstract syntax trees are the same, the computer converts them to Ezprs, simplifies them, and checks to see if the resulting canonical forms are identical.

The Ezpr

Since associativity doesn't matter, we don't want to represent it. When we (humans) think about adding up a long column of numbers, we don't think about associativity because we don't add them pairwise. Instead we use an addition algorithm that adds them all at once in a big pile. We don't treat addition as a binary operation; we normally treat it as an operator that adds up the numbers in a list. The Ezpr makes this explicit: its addition operator is applied to a list of subexpressions, not to a pair. Both !!a + (b + c)!! and !!(a + b) + c!! are represented as the Ezpr

    SUM [ a b c - ]

which just says that we are adding up !!a!!, !!b!!, and !!c!!. (The - sign is just punctuation; ignore it for now.)

Similarly the Ezpr MUL [ a b c ÷ ] represents the product of !!a!!, !!b!!, and !!c!!. (Please ignore the ÷ sign for the time being.)

To handle commutativity, we want those [ a b c ] lists to be bags. Perl doesn't have a built-in bag object, so instead I used arrays and required that the array elements be in sorted order. (Exactly which sorted order doesn't really matter.)

Subtraction and division

This doesn't yet handle subtraction and division, and the way I chose to handle them is the only part of this that I think is at all clever. A SUM object has not one but two bags, one for the positive and one for the negative part of the expression. An expression like !!a - b + c - d!! is represented by the Ezpr:

SUM [ a c - b d ]

and this is also the representation of !!a + c - b - d!!, of !!c + a - d - b!!, of !!c - d + a - b!!, and of any other expression of the idea that we are adding up !!a!! and !!c!! and then deducting !!b!! and !!d!!. The - sign separates the terms that are added from those that are subtracted.

Either of the two bags may be empty, so for example !!a + b!! is just SUM [ a b - ].

Division is handled similarly. Here conventional mathematical notation does a little bit better than in the sum case: MUL [ a c ÷ b d ] is usually written as !!\frac{ac}{bd}!!.

Ezprs handle the associativity and commutativity of subtraction and division quite well. I pointed out earlier that subtraction has an associative law !!(a + b) - c = a + (b - c)!! even though it's not usually called that. No code is required to understand that those two expressions are equal if they are represented as Ezprs, because they are represented by completely identical structures:

        SUM [ a b - c ]

Similarly there is a commutative law for subtraction: !!a + b - c = a - c + b!! and once again that same Ezpr does for both.
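To make the representation concrete, here is a toy version in Perl of the two-bag SUM node and a canonical string form. (This is only an illustrative sketch, not the actual Ezpr module; note how keeping the bags sorted makes commutativity come for free.)

    use strict;
    use warnings;

    sub sum_node {
        my ( $pos, $neg ) = @_;
        return { op => 'SUM', pos => $pos, neg => $neg };
    }

    # Render a node to a canonical string.  Sorting the bags means that
    # two expressions differing only by commutativity render identically.
    sub canonical {
        my ($e) = @_;
        return $e unless ref $e;    # a bare number or variable
        my @pos = sort map { canonical($_) } @{ $e->{pos} };
        my @neg = sort map { canonical($_) } @{ $e->{neg} };
        return "$e->{op} [ @pos - @neg ]";
    }

    # a - b + c - d  and  c + a - d - b  come out identical:
    my $x = sum_node( [ 'a', 'c' ], [ 'b', 'd' ] );
    my $y = sum_node( [ 'c', 'a' ], [ 'd', 'b' ] );
    print canonical($x) eq canonical($y) ? "same\n" : "different\n";    # "same"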

Ezpr laws

Ezprs are more flexible than binary trees. A binary tree can represent the expressions !!(a+b)+c!! and !!a+(b+c)!! but not the expression !!a+b+c!!. Ezprs can represent all three and it's easy to transform between them. Just as there are rules for building expressions out of simpler expressions, there are a few rules for combining and manipulating Ezprs.

Lifting and flattening

The most important transformation is lifting, which is the Ezpr version of the associative law. In the canonical form of an Ezpr, a SUM node may not have subexpressions that are also SUM nodes. If you have

  SUM [ a SUM [ b c - ] - … ]

you should lift the terms from the inner sum into the outer one:

  SUM [ a b c - … ]

effectively transforming !!a+(b+c)!! into !!a+b+c!!. More generally, in

   SUM [ a SUM [ b - c ]
       - d SUM [ e - f ] ]

we lift the terms from the inner Ezprs into the outer one:

   SUM [ a b f - c d e ]

This effectively transforms !!a + (b - c) - d - (e - f)!! to !!a + b + f - c - d - e!!.
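In the toy representation from earlier, the lifting step for SUM nodes might look like this (again only a sketch, applied bottom-up so that the children have already been lifted):

    sub lift_sums {
        my ($node) = @_;
        my ( @pos, @neg );
        for my $t ( @{ $node->{pos} } ) {
            if ( ref $t && $t->{op} eq 'SUM' ) {
                push @pos, @{ $t->{pos} };    # in a + (b - c), b stays positive
                push @neg, @{ $t->{neg} };    # ... and c stays negative
            }
            else { push @pos, $t }
        }
        for my $t ( @{ $node->{neg} } ) {
            if ( ref $t && $t->{op} eq 'SUM' ) {
                push @pos, @{ $t->{neg} };    # in a - (b - c), c becomes positive
                push @neg, @{ $t->{pos} };    # ... and b becomes negative
            }
            else { push @neg, $t }
        }
        return { op => 'SUM', pos => \@pos, neg => \@neg };
    }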

Similarly, when a MUL node contains another MUL, we can flatten the structure.

Say we are converting the expression !!7 ÷ (3 ÷ (6 × 4))!! to an Ezpr. The conversion function is recursive and the naïve version computes this Ezpr:

      MUL [ 7 ÷ MUL [ 3 ÷ MUL [ 6 4 ÷ ] ] ]

But then at the bottom level we have a MUL inside a MUL, so the 4 and 6 in the innermost MUL are lifted upward:

      MUL [ 7 ÷ MUL [ 3 ÷ 6 4 ] ]

which represents !!\frac7{\frac{3}{6\cdot 4}}!!. Then again we have a MUL inside a MUL, and again the subexpressions of the innermost MUL can be lifted:

      MUL [ 7 6 4 ÷ 3 ]

which we can imagine as !!\frac{7·6·4}3!!.

The lifting only occurs when the sub-node has the same type as its parent; we may not lift terms out of a MUL into a SUM or vice versa.

Trivial nodes

The Ezpr SUM [ a - ] says we are adding up just one thing, !!a!!, and so it can be eliminated and replaced with just !!a!!. Similarly SUM [ - a ] can be replaced with the constant !!-a!!, if !!a!! is a constant. MUL can be handled similarly.

An even simpler case is SUM [ - ] which can be replaced by the constant 0; MUL [ ÷ ] can be replaced with 1. These sometimes arise as a result of cancellation.

Cancellation

Consider the puzzle «3 3 4 6». My first solver found 49 solutions to this puzzle. One is !!(3 - 3) + (4 × 6)!!. Another is !!(4 + (3 - 3)) × 6!!. A third is !!4 × (6 + (3 - 3))!!.

I think these are all the same: the solution is to multiply the 4 by the 6, and to get rid of the threes by subtracting them to make a zero term. The zero term can be added onto the rest of the expression or to any of its subexpressions—there are ten ways to do this—and it doesn't really matter where.

This is easily explained in terms of Ezprs: If the same subexpression appears in both of a node's bags, we can drop it. For example, the expression !!(4 + (3 - 3)) × 6!! starts out as

    MUL [ 6 SUM [ 3 4 - 3 ] ÷ ]

but the duplicate threes in SUM [ 3 4 - 3 ] can be canceled, to leave

    MUL [ 6 SUM [ 4 - ] ÷ ]

The sum is now trivial, as described in the previous section, so can be eliminated and replaced with just 4:

    MUL [ 6 4 ÷ ]

This Ezpr records the essential feature of each of the three solutions to «3 3 4 6» that I mentioned: they all are multiplying the 6 by the 4, and then doing something else unimportant to get rid of the threes.

Another solution to the same puzzle is !!(6 ÷ 3) × (4 × 3)!!. Mathematically we would write this as !!\frac63·4·3!! and we can see this is just !!6×4!! again, with the threes gotten rid of by multiplication and division, instead of by addition and subtraction. When converted to an Ezpr, this expression becomes:

    MUL [ 6 4 3 ÷ 3 ]

and the matching threes in the two bags are cancelled, again leaving

    MUL [ 6 4 ÷ ]

In fact there aren't 49 solutions to this puzzle. There is only one, with 49 trivial variations.
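In the sketch, cancellation just removes matching items from the two bags. Plain equality is enough for numbers; comparing structured subexpressions would take a little more care:

    def cancel(node):
        """Drop each subexpression that appears in both bags, one matching
        occurrence at a time."""
        plus, minus = list(node.plus), list(node.minus)
        for item in list(plus):
            if item in minus:
                plus.remove(item)
                minus.remove(item)
        return Ezpr(node.kind, plus, minus)

    print(cancel(Ezpr("SUM", [3, 4], [3])))      # SUM [ 4 - ]
    print(cancel(Ezpr("MUL", [6, 4, 3], [3])))   # MUL [ 6 4 ÷ ]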

Identity elements

In the preceding example, many of the trivial variations on the !!4×6!! solution involved multiplying some subexpression by !!\frac 33!!. When one of the input numbers in the puzzle is a 1, one can similarly obtain a lot of useless variations by choosing where to multiply the 1.

Consider «1 3 3 5»: We can make 24 from !!3 × (3 + 5)!!. We then have to get rid of the 1, but we can do that by multiplying it onto any of the five subexpressions of !!3 × (3 + 5)!!:

$$ 1 × (3 × (3 + 5)) \\ (1 × 3) × (3 + 5) \\ 3 × (1 × (3 + 5)) \\ 3 × ((1 × 3) + 5) \\ 3 × (3 + (1×5)) $$

These should not be considered different solutions. Whenever we see any 1's in either of the bags of a MUL node, we should eliminate them. The first expression above, !!1 × (3 × (3 + 5))!!, is converted to the Ezpr

 MUL [ 1 3 SUM [ 3 5 - ] ÷ ]

but then the 1 is eliminated from the MUL node leaving

 MUL [ 3 SUM [ 3 5 - ] ÷ ]

The fourth expression, !!3 × ((1 × 3) + 5)!!, is initially converted to the Ezpr

 MUL [ 3 SUM [ 5 MUL [ 1 3 ÷ ] - ] ÷ ]

When the 1 is eliminated from the inner MUL, this leaves a trivial MUL [ 3 ÷ ] which is then replaced with just 3, leaving:

 MUL [ 3 SUM [ 5 3 - ] ÷ ]

which is the same Ezpr as before.

Zero terms in the bags of a SUM node can similarly be dropped.
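In the sketch this rule is a pair of list filters:

    def drop_identities(node):
        """Remove 1's from either bag of a MUL node, and 0's from either
        bag of a SUM node."""
        unit = 0 if node.kind == "SUM" else 1
        return Ezpr(node.kind,
                    [x for x in node.plus if x != unit],
                    [x for x in node.minus if x != unit])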

Multiplication by zero

One final case is that MUL [ 0 … ÷ … ] can just be simplified to 0.

The question about what to do when there is a zero in the denominator is a bit of a puzzle. In the presence of division by zero, some of our simplification rules are questionable. For example, when we have MUL [ a ÷ MUL [ b ÷ c ] ], the lifting rule says we can simplify this to MUL [ a c ÷ b ]—that is, that !!\frac a{\frac bc} = \frac{ac}b!!. This is correct, except that when !!b=0!! or !!c=0!! it may be nonsense, depending on what else is going on. But since zero denominators never arise in the solution of these puzzles, there is no issue in this application.

Results

The Ezpr module is around 200 lines of Perl code, including everything: the function that converts abstract syntax trees to Ezprs, functions to convert Ezprs to various notations (both MUL [ 4 ÷ SUM [ 3 - 2 ] ] and 4 ÷ (3 - 2)), and the two versions of the normalization process described in the previous section. The normalizer itself is about 35 lines.

Associativity is taken care of by the Ezpr structure itself, and commutativity is not too difficult; as I mentioned, it would have been trivial if Perl had a built-in bag structure. I find it much easier to reason about transformations of Ezprs than abstract syntax trees. Many operations are much simpler; for example the negation of SUM [ A - B ] is simply SUM [ B - A ]. Pretty-printing is also easier because the Ezpr better captures the way we write and think about expressions.
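For instance, in the Python sketch, negation is just a swap of the two bags:

    def negate(sum_node):
        # The negation of SUM [ A - B ] is SUM [ B - A ].
        return Ezpr("SUM", sum_node.minus, sum_node.plus)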

It took me a while to get the normalization tuned properly, but the results have been quite successful, at least for this problem domain. The current puzzle-solving program reports the number of distinct solutions to each puzzle. When it reports two different solutions, they are really different; and when it fails to report the exact form of a solution that Toph or I found, the solution it does report is essentially the same one. (There are some small exceptions, which I will discuss below.)

Since there is no specification for “essentially the same” there is no hope of automated testing. But we have been using the app for several months looking for mistakes, and we have not found any. If the normalizer failed to recognize that two expressions were essentially similar, we would be very likely to notice: we would be solving some puzzle, be unable to find the last of the solutions that the program claimed to exist, and then when we gave up and saw what it was we would realize that it was essentially the same as one of the solutions we had found. I am pretty confident that there are no errors of this type, but see “Arguable points” below.

A harder error to detect is whether the computer has erroneously conflated two essentially dissimilar expressions. To detect this we would have to notice that an expression was missing from the computer's solution list. I am less confident that nothing like this has occurred, but as the months have gone by I feel better and better about it.

I consider the problem of “how many solutions does this puzzle really have?” to be satisfactorily solved. There are some edge cases, but I think we have identified them.

Code for my solver is on Github. The Ezpr code is in the Ezpr package in the Expr.pm file. This code is all in the public domain.

Some examples

The original program claims to find 35 different solutions to «4 6 6 6». The revised program recognizes that these are of only two types:

    !!4 × 6 × 6 ÷ 6!!       MUL [ 4 6 ÷ ]
    !!(6 - 4) × (6 + 6)!!   MUL [ SUM [ 6 - 4 ] SUM [ 6 6 - ] ÷ ]

Some of the variant forms of the first of those include:

$$ 6 × (4 + (6 - 6)) \\ 6 + ((4 × 6) - 6) \\ (6 - 6) + (4 × 6) \\ (6 ÷ 6) × (4 × 6) \\ 6 ÷ ((6 ÷ 4) ÷ 6) \\ 6 ÷ (6 ÷ (4 × 6)) \\ 6 × (6 × (4 ÷ 6)) \\ (6 × 6) ÷ (6 ÷ 4) \\ 6 ÷ ((6 ÷ 6) ÷ 4) \\ 6 × (6 - (6 - 4)) \\ 6 × (6 ÷ (6 ÷ 4)) \\ \ldots
$$

In an even more extreme case, the original program finds 80 distinct expressions that solve «1 1 4 6», all of which are trivial variations on !!4·6!!.

Of the 715 puzzles, 466 (65%) have solutions; for 175 of these the solution is unique. There are 3 puzzles with 8 solutions each («2 2 4 8», «2 3 6 9», and «2 4 6 8»), one with 9 solutions («2 3 4 6»), and one with 10 solutions («2 4 4 8»).

The 10 solutions for «2 4 4 8» are as follows:

    !!4 × 8 - 2 × 4!!       SUM [ MUL [ 4 8 ÷ ] - MUL [ 2 4 ÷ ] ]
    !!4 × (2 + 8 - 4)!!     MUL [ 4 SUM [ 2 8 - 4 ] ÷ ]
    !!(8 - 4) × (2 + 4)!!   MUL [ SUM [ 8 - 4 ] SUM [ 2 4 - ] ÷ ]
    !!4 × (4 + 8) ÷ 2!!     MUL [ 4 SUM [ 4 8 - ] ÷ 2 ]
    !!(4 - 2) × (4 + 8)!!   MUL [ SUM [ 4 - 2 ] SUM [ 4 8 - ] ÷ ]
    !!8 × (2 + 4/4)!!       MUL [ 8 SUM [ 1 2 - ] ÷ ]
    !!2 × 4 × 4 - 8!!       SUM [ MUL [ 2 4 4 ÷ ] - 8 ]
    !!8 + 2 × (4 + 4)!!     SUM [ 8 MUL [ 2 SUM [ 4 4 - ] ÷ ] - ]
    !!4 + 4 + 2 × 8!!       SUM [ 4 4 MUL [ 2 8 ÷ ] - ]
    !!4 × (8 - 4/2)!!       MUL [ 4 SUM [ 8 - MUL [ 4 ÷ 2 ] ] ÷ ]

A complete listing of every essentially different solution to every «a b c d» puzzle is available here. There are 1,063 solutions in all.

Arguable points

There are a few places where we have not completely pinned down what it means for two solutions to be essentially the same; I think there is room for genuine disagreement.

  1. Any solution involving !!2×2!! can be changed into a slightly different solution involving !!2+2!! instead. These expressions are arithmetically different but numerically equal. For example, I mentioned earlier that «2 2 4 8» has 8 solutions. But two of these are !! 8 + 4 × (2 + 2)!! and !! 8 + 4 × 2 × 2!!. I am willing to accept these as essentially different. Toph, however, disagrees.

  2. A similar but more complex situation arises in connection with «1 2 3 7». Consider !!3×7+3!!, which equals 24. To get a solution to «1 2 3 7», we can replace either of the threes in !!3×7+3!! with !!(1+2)!!, obtaining !!((1 + 2) × 7) + 3!! or !! (3×7)+(1 +2)!!. My program considers these to be different solutions. Toph is unsure.

It would be pretty easy to adjust the normalization process to handle these the other way if the user wanted that.

Some interesting puzzles

«1 2 7 7» has only one solution, quite unusual. (Spoiler) «2 2 6 7» has two solutions, both somewhat unusual. (Spoiler)

Somewhat similar to «1 2 7 7» is «3 9 9 9» which also has an unusual solution. But it has two other solutions that are less surprising. (Spoiler)

«1 3 8 9» has an easy solution but also a quite tricky solution. (Spoiler)

One of my neighbors has the license plate JJZ 4631. «4 6 3 1» is one of the more difficult puzzles.

What took so long?

Back in March, I wrote:

I have enough material for at least three or four more articles about this that I hope to publish here in the coming weeks.

But the previous article on this subject ended similarly, saying

I hope to write a longer article about solvers in the next week or so.

and that was in July 2016, so don't hold your breath.

And here we are, five months later!

This article was a huge pain to write. Sometimes I sit down to write something and all that comes out is dreck. I sat down to write this one at least three or four times and it never worked. The tortured Git history bears witness. In the end I had to abandon all my earlier drafts and start over from scratch, writing a fresh outline in an empty file.

But perseverance paid off! WOOOOO.

[ Addendum 20170825: I completely forgot that Shreevatsa R. wrote a very interesting article on the same topic as this one, in July of last year soon after I published my first article in this series. ]

[ Addendum 20170829: A previous version of this article used the notations SUM [ … # … ] and MUL [ … # … ], which I said I didn't like. Zellyn Hunter has persuaded me to replace these with SUM [ … - … ] and MUL [ … ÷ … ]. Thank you M. Hunter! ]

[ Yet more on this topic! ]


[Other articles in category /math] permanent link

Tue, 08 Aug 2017

That time I met Erdős

I should have written about this sooner; by now it has been so long that I have forgotten most of the details.

I first encountered Paul Erdős in the middle 1980s at a talk by János Pach about almost-universal graphs. Consider graphs with a countably infinite set of vertices. Is there a "universal" graph !!G!! such that, for any finite or countable graph !!H!!, there is a copy of !!H!! inside of !!G!!? (Formally, this means that there is an injection from the vertices of !!H!! to the vertices of !!G!! that preserves adjacency.) The answer is yes; it is quite easy to construct such a !!G!! and in fact nearly all random graphs have this property.

But then the questions become more interesting. Let !!K_\omega!! be the complete graph on a countably infinite set of vertices. Say that !!G!! is “almost universal” if it includes a copy of !!H!! for every finite or countable graph !!H!! except those that contain a copy of !!K_\omega!!. Is there an almost universal graph? Perhaps surprisingly, no! (Sketch of proof.)

I enjoyed the talk, and afterward in the lobby I got to meet Ron Graham and Joel Spencer and talk to them about their Ramsey theory book, which I had been reading, and about a problem I was working on. Graham encouraged me to write up my results on the problem and submit them to Mathematics Magazine, but I unfortunately never got around to this. Graham was there babysitting Erdős, who was one of Pach's collaborators, but I did not actually talk to Erdős at that time. I think I didn't recognize him. I don't know why I was able to recognize Graham.

I find the almost-universal graph thing very interesting. It is still an open research area. But none of this was what I was planning to talk about. I will return to the point. A couple of years later Erdős was to speak at the University of Pennsylvania. He had a stock speech for general audiences that I saw him give more than once. Most of the talk would be a description of a lot of interesting problems, the bounties he offered for their solutions, and the progress that had been made on them so far. He would intersperse the discussions with the sort of Erdősism that he was noted for: referring to the U.S. and the U.S.S.R. as “Sam” and “Joe” respectively; his ever-growing series of styles (Paul Erdős, P.G.O.M., A.D., etc.) and so on.

One remark I remember in particular concerned the $3000 bounty he offered for proving what is sometimes known as the Erdős–Turán conjecture: if !!S!! is a subset of the natural numbers, and if !!\sum_{n\in S}\frac 1n!! diverges, then !!S!! contains arbitrarily long arithmetic progressions. (A special case of this is that the primes contain arbitrarily long arithmetic progressions, which was proved in 2004 by Green and Tao, but which at the time was a long-standing conjecture.) Although the $3000 was at the time the largest bounty ever offered by Erdős, he said it was really a bad joke, because to solve the problem would require so much effort that the per-hour payment would be minuscule.

I made a special trip down to Philadelphia to attend the talk, with the intention of visiting my girlfriend at Bryn Mawr afterward. I arrived at the Penn math building early and wandered around the halls to kill time before the talk. And as I passed by an office with an open door, I saw Erdős sitting in the antechamber on a small sofa. So I sat down beside him and started telling him about my favorite graph theory problem.

Many people, preparing to give a talk to a large roomful of strangers, would have found this annoying and intrusive. Some people might not want to talk about graph theory with a passing stranger. But most people are not Paul Erdős, and I think what I did was probably just the right thing; what you don't do is sit next to Erdős and then ask how his flight was and what he thinks of recent politics. We talked about my problem, and to my great regret I don't remember any of the mathematical details of what he said. But he did not know the answer offhand, he was not able to solve it instantly, and he did say it was interesting. So! I had a conversation with Erdős about graph theory that was not a waste of his time, and I think I can count that as one of my lifetime accomplishments.

After a little while it was time to go down to the auditorium for the talk, and afterward one of the organizers saw me, perhaps recognized me from the sofa, and invited me to the guest dinner, which I eagerly accepted. At the dinner, I was thrilled because I secured a seat next to Erdős! But this was a beginner mistake: he fell asleep almost immediately and slept through dinner, which, I learned later, was completely typical.


[Other articles in category /math] permanent link

Thu, 15 Jun 2017

Base-4 fractions in Telugu

Rik Signes brought to my attention that since version 5.1 Unicode has contained the following excitingly-named characters:

    0C78 ౸ TELUGU FRACTION DIGIT ZERO FOR ODD POWERS OF FOUR
    0C79 ౹ TELUGU FRACTION DIGIT ONE FOR ODD POWERS OF FOUR
    0C7A ౺ TELUGU FRACTION DIGIT TWO FOR ODD POWERS OF FOUR
    0C7B ౻ TELUGU FRACTION DIGIT THREE FOR ODD POWERS OF FOUR

    0C7C ౼ TELUGU FRACTION DIGIT ONE FOR EVEN POWERS OF FOUR
    0C7D ౽ TELUGU FRACTION DIGIT TWO FOR EVEN POWERS OF FOUR
    0C7E ౾ TELUGU FRACTION DIGIT THREE FOR EVEN POWERS OF FOUR

I looked into this a little and found out what they are for. It makes a lot of sense! The details were provided by “Telugu Measures and Arithmetic Marks” by Nāgārjuna Venna.

Telugu is the third-most widely spoken language in India, spoken mostly in the southeast part of the country. Traditional Telugu units of measurement are often divided into four or eight subunits. For example, the tūmu is divided into four kuṁcamulu, the kuṁcamu into four mānikalu, and the mānika into four sōlalu.

These days they mainly use liters like everyone else. But the traditional measurements are mostly divided into fours, so amounts are written with a base-10 integer part and a base-4 fractional part. The characters above are the base-4 fractional digits.

To make the point clearer, I hope, let's imagine that we are using the Telugu system, but with the familiar western-style symbols 0123456789 instead of the Telugu digits ౦౧౨౩౪౫౬౭౮౯. (The Telugu had theirs first of course.) And let's use 0-=Z as our base-four fractional digits, analogous to Telugu ౦౼౽౾. (As in Telugu, we'll use the same zero symbol for both the integer and the fractional parts.) Then to write the number of gallons (7.4805195) in a cubic foot, we say

7.-Z=Z0

which is 7 gallons plus one (-) quart plus three (Z) cups plus two (=) quarter-cups plus three (Z) tablespoons plus zero (0) drams, a total of 7660 drams almost exactly. Or we could just round off to 7.=, seven and a half gallons.

(For the benefit of readers who might be a bit rusty on the details of these traditional European measurements, I should mention that there are four drams in a tablespoon, four tablespoons in a quarter cup, four quarter cups in a cup, four cups in a quart, and four quarts in a gallon, so 4⁵ = 1024 drams in a gallon and 7.4805195·4⁵ = 7660.052 drams in a cubic foot. Note also that these are volume (fluid) drams, not mass drams, which are different.)

We can omit the decimal point (as the Telugu did) and write

7-Z=Z0

and it is still clear where the integer part leaves off and the fraction begins, because we are using special symbols for the fractional part. But no, this isn't quite enough, because if we wrote 20ZZ= it might not be clear whether we meant 20.ZZ= or 2.0ZZ=.

So the system has an elaboration. In the odd positions, we don't use the 0-=Z symbols; we use Q|HN instead. And we don't write 7-Z=Z0, we write

7|ZHZQ

This is always unambiguous: 20.ZZ= is actually written 20NZH and 2.0ZZ= is written 2QZN=, quite different.
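The encoding is easy to mechanize. Here is a Python sketch using the fanciful 0-=Z and Q|HN symbols from above, with five fractional digits as in the example:

    def telugu_style(x, ndigits=5):
        EVEN, ODD = "0-=Z", "Q|HN"      # base-4 digits for even and odd positions
        whole, frac = divmod(x, 1)
        out = str(int(whole))
        for pos in range(1, ndigits + 1):
            frac *= 4
            digit = int(frac)           # next base-4 fractional digit
            frac -= digit
            out += (ODD if pos % 2 else EVEN)[digit]
        return out

    print(telugu_style(7.4805195))      # 7|ZHZQ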

This is all fanciful in English, but Telugu actually did this. Instead of 0-=Z they had ౦౼౽౾ as I mentioned before. And instead of Q|HN they had ౸౹౺౻. So if the Telugu were trying to write 7.4805195, where we had 7|ZHZQ they might have written ౭౹౾౺౾౸. Like us, they then appended an abbreviation for the unit of measurement. Instead of “gal.” for gallon they might have put ఘ (letter “gha”), so ౭౹౾౺౾౸ఘ. It's all reasonably straightforward, and also quite sensible. If you have ౭౹౾౺ tūmu, you can read off instantly that there are ౺ (two) sōlalu left over, just as you can see that $7.43 has three pennies left over.

Notice that both sets of Telugu fraction digits are easy to remember: the digits for 3 have either three horizontal strokes ౾ or three vertical strokes ౻, and the others similarly.

I have an idea that the alternating vertical-horizontal system might have served as an error-detection mechanism: if a digit is omitted, you notice right away because the next symbol is wrong.

I find this delightful. A few years back I read all of The Number Concept: Its Origin and Development (1931) by Levi Leonard Conant, hoping to learn something really weird, and I was somewhat disappointed. Conant spends most of his book describing the number words and number systems used by dozens of cultures and almost all of them are based on ten, and a few on five or twenty. (“Any number system which passes the limit 10 is reasonably sure to have either a quinary, a decimal, or a vigesimal structure.”) But he does not mention Telugu!

[ Addendum 20200821: Bengali currency amounts do something similar, but the sections alternate between base-10 and base-4 numerals! ]


[Other articles in category /math] permanent link

Sun, 05 Mar 2017

Solving twenty-four puzzles

Back in July, I wrote:

Lately my kids have been interested in puzzles of this type: You are given a sequence of four digits, say 1,2,3,4, and your job is to combine them with ordinary arithmetic operations (+, -, ×, and ÷) in any order to make a target number, typically 24. For example, with 1,2,3,4, you can go with $$((1+2)+3)×4 = 24$$ or with $$4×((2×3)×1) = 24.$$

I said I had found an unusually difficult puzzle of this type, which is to make 2,5,6,6 total to 17. This is rather difficult. (I will reveal the solution later in this article.) Several people independently wrote to advise me that it is even more difficult to make 3,3,8,8 total to 24. They were right; it is amazingly difficult. After a couple of weeks I finally gave up and asked the computer, and when I saw the answer I didn't feel bad that I hadn't gotten it myself. (The solution is here if you want to give up without writing a program.)

From now on I will abbreviate the two puzzles of the previous paragraph as «2 5 6 6 ⇒ 17» and «3 3 8 8 ⇒ 24», and others similarly.

The article also inspired a number of people to write their own solvers and send them to me, and comparing them was interesting. My solver followed the tree search technique that I described in chapter 5 of Higher-Order Perl, and which has become so familiar to me that by now I can implement it without thinking about it very hard:

  1. Invent a data structure that represents the state of a possibly-incomplete search. This is just a list of the stuff one needs to keep track of while searching. (Let's call this a node.)

  2. Build a function which recognizes when a node represents a successful search.

  3. Build a function which takes a node, computes all the ways the search could proceed from that point, and returns a list of nodes for those slightly-more-advanced searches.

  4. Initialize a queue with a node representing a search that has just begun.

  5. Do this:

      until ( queue.is_empty() ) {
        current_node = queue.get_next()
        if ( is_successful( current_node ) ) { print the solution }
        queue.push( slightly_more_complete_searches( current_node ) )
      }
    

This is precisely a breadth-first search. To make it into depth-first search, replace the queue with a stack. To make a heuristically directed search, replace get_next with a function that looks at the queue and chooses the best-looking node from which to proceed. Many other variations are possible, which is the advantage of this synthetic approach over letting the search arise organically from a recursive searcher. (Higher-Order Perl says “Recursive functions naturally perform depth-first searches.” (page 203)) In Python or Ruby one would be able to use yield and would not have to manage the queue explicitly, but in this case the queue management is trivial.

In my solver, each node contains a list of available expressions, annotated with its numerical value. Initially, the expressions are single numbers and the values are the same, say

    [ [ "2" => 2 ], [ "3" => 3 ], [ "4" => 4 ], [ "6" => 6 ] ]

Whether you represent expressions as strings or as something more structured depends on what you need to do with them at the end. If you just need to print them out, strings are good enough and are easy to handle.

A node represents a successful search if it contains only a single expression and if the expression's value is the target sum, say 24:

    [ [ "(((6÷2)+3)×4)" => 24 ] ]

From a node, the search should proceed by selecting two of the expressions, removing them from the node, selecting a legal operation, combining the two expressions into a single expression, and inserting the result back into the node. For example, from the initial node shown above, the search might continue by subtracting the fourth expression from the second:

    [ [ "2" => 2 ], [ "4" => 4 ], [ "(3-6)" => -3 ] ]

or by multiplying the second and the third:

    [ [ "2" => 2 ], [ "(3×4)" => 12 ], [ "6" => 6 ] ]

When the program encounters that first node it will construct both of these, and many others, and put them all into the queue to be investigated later.

From

    [ [ "2" => 2 ], [ "(3×4)" => 12 ], [ "6" => 6 ] ]

the search might proceed by dividing the first expression by the third:

    [ [ "(3×4)" => 12 ], [ "(2÷6)" => 1/3 ] ]

Then perhaps by subtracting the first from the second:

    [ [ "((2÷6)-(3×4))" => -35/3 ] ]

From here there is no way to proceed, so when this node is removed from the queue, nothing is added to replace it. Had it been a winner, it would have been printed out, but since !!-\frac{35}3!! is not the target value of 24, it is silently discarded.

To solve a puzzle of the «a b c d ⇒ t» sort requires examining a few thousand nodes. On modern hardware this takes approximately zero seconds.
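To make the search concrete, here is a compact Python rendition of the same idea. It is only a sketch, not my actual Perl program, and it reports many trivially-different variants of each solution, a matter I take up in another article in this series. The exact rational arithmetic and the division-by-zero guard anticipate the pitfalls described in the next section:

    from collections import deque
    from fractions import Fraction
    from itertools import combinations

    def solve(numbers, target):
        """Breadth-first search: each node is a tuple of (expression, value)
        pairs, and combining two entries produces a slightly more complete
        node that goes back on the queue."""
        start = tuple((str(n), Fraction(n)) for n in numbers)
        queue, found = deque([start]), set()
        while queue:
            node = queue.popleft()
            if len(node) == 1 and node[0][1] == target:
                found.add(node[0][0])
            for i, j in combinations(range(len(node)), 2):
                (ex, vx), (ey, vy) = node[i], node[j]
                rest = tuple(p for k, p in enumerate(node) if k not in (i, j))
                results = [(f"({ex}+{ey})", vx + vy), (f"({ex}×{ey})", vx * vy),
                           (f"({ex}-{ey})", vx - vy), (f"({ey}-{ex})", vy - vx)]
                if vy != 0: results.append((f"({ex}÷{ey})", vx / vy))
                if vx != 0: results.append((f"({ey}÷{ex})", vy / vx))
                for r in results:
                    queue.append(rest + (r,))
        return found

    print(solve([2, 5, 6, 6], 17))   # finds 6×(2+(5÷6)), in some parenthesization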

The actual code for my solver is a lot of Perl gobbledygook that may not be of general interest so I will provide a link for people who are interested in deciphering it. It also represents my second attempt: I lost the code that I described in the earlier article and had to rewrite it. It is rather bigger than I would have liked.

My puzzle solver in Perl.

Stuff goes wrong

People showed me a lot of programs to solve this, and many didn't work. There are a few hard cases that several of them get wrong.

Fractions

Some puzzles require that some subexpressions have fractional values. Many of the programs people showed me used integer arithmetic (sometimes implicitly and unintentionally) and failed to solve those puzzles. We can detect this by asking for a solution to «2 5 6 6 ⇒ 17», which requires a fraction. The solution is !!6×(2+(5÷6))!!. A program using integer arithmetic will calculate !!5÷6 = 0!! and fail to recognize the solution.

Several people on Twitter made this mistake and then mistakenly claimed that there was no solution at all. Usually it was possible to correct their programs by changing

        inputs = [ 2, 5, 6, 6 ]

to

        inputs = [ 2.0, 5.0, 6.0, 6.0 ]

or something like that.
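Another way out, at least in Python, is exact rational arithmetic, which sidesteps the integer-division trap entirely (and the floating-point trouble of the next section). This is an illustration, not anyone's actual fix:

    from fractions import Fraction

    print(5 // 6)                      # 0: integer division loses the fraction
    print(Fraction(5, 6))              # 5/6: exact
    print(6 * (2 + Fraction(5, 6)))    # 17: the solution checks out exactly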

Some people also surprised me by claiming that I had lied when I stated that the puzzle could be solved without any “underhanded tricks”, and that the use of intermediate fractions was itself an underhanded trick. Your Honor, I plead not guilty. I originally described the puzzle this way:

You are given a sequence of four digits, say 1,2,3,4, and your job is to combine them with ordinary arithmetic operations (+, -, ×, and ÷) in any order to make a target number, typically 24.

The objectors are implicitly claiming that when you combine 5 and 6 with the “ordinary arithmetic operation” of division, you get something other than !!\frac56!!. This is an indefensible claim.

I wasn't even trying to be tricky! It never occurred to me that fractions were something that some people would consider underhanded, and now that it has been suggested, I reject the suggestion. Folks, the result of division can be a fraction. Fractions are not some sort of obscure mathematical pettifoggery. They have been with us for at least 3,500 years now, so it is time everyone got used to them.

Floating-point error

Some programs used floating-point arithmetic to deal with the fractions and then fell foul of floating-point error. I will defer discussion of this to a future article.

I've complained about floating-point numbers on this blog before. ( 1 2 3 4 5 ) God, how I loathe them.
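The flavor of the problem, in any language with IEEE 754 double-precision floats:

    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004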

[ Addendum 20170825: Looking back on our old discussion from July 2016, I see that Lindsey Kuper said to me:

One nice thing about using Racket or Scheme is that it handles the numeric stuff so nicely. If you weren't careful, I could imagine in Python a solution failing because it evaluated to 16.99999999999999997 or something.

Good call, Dr. Kuper! ]

Expression construction

A more subtle error that several programs made was to assume that all expressions can be constructed by combining a previous expression with a single input number. For example, to solve «2 3 5 7 ⇒ 24», you multiply 3 by 7 to get 21, then add 5 to get 26, then subtract 2 to get 24.

But not every puzzle can be solved this way. Consider «2 3 5 7 ⇒ 41». You start by multiplying 2 by 3 to get 6, but if you try to combine the 6 with either 5 or 7 at this point you will lose. The only solution is to put the 6 aside and multiply 5 by 7 to get 35. Then add the 6 and the 35 to get 41.

Another way to put this is that an unordered binary tree with 4 leaves can take two different shapes. (Imagine filling the green circles with numbers and the pink squares with operators.)

The right-hand type of structure is sometimes necessary, as with «2 3 5 7 ⇒ 41». But several of the proposed solutions produced only expressions with structures like that on the left.

Here's Sebastian Fischer's otherwise very elegant Haskell solution, in its entirety:

    import Data.List ( permutations )

    solution = head
      [ (a,x,(b,y,(c,z,d)))
        | [a,b,c,d] <- permutations [2,5,6,6],
           ops <- permutations [((+),'+'),((-),'-'),((*),'*'),((/),'/')],
           let [u,v,w] = map fst $ take 3 ops,
           let [x,y,z] = map snd $ take 3 ops,
           (a `u` (b `v` (c `w` d))) == 17
      ]

You can see the problem in the last line. a, b, c, and d are numbers, and u, v, and w are operators. The program evaluates an expression to see if it has the value 17, but the expression always has the left-hand shape. (The program has another limitation: it never uses the same operator twice in the expression. That second permutations should be (sequence . take 3 . repeat) or something. It can still solve «2 5 6 6 ⇒ 17», however.)

Often the way these programs worked was to generate every possible permutation of the inputs and then apply the operators to the input lists stackwise: pop the first two values, combine them, push the result, and repeat. Here's a relevant excerpt from a program by Tim Dierks, this time in Python:

  for ordered_values in permutations(values):
    for operations in product(ops, repeat=len(values)-1):
      result, formula = calc_result(ordered_values, operations)

Here the expression structure is implicit, but the current result is always made by combining one of the input numbers with the old result.

I have seen many people get caught by this and similar traps in the past. I once posed the problem of enumerating all the strings of balanced parentheses of a given length, and several people assumed that all such strings have the form ()S, S(), or (S), where S is a shorter string of the same type. This seems plausible, and it works up to length 6, but (())(()) does not have that form.

Division by zero

A less common error exhibited by some programs was a failure to properly deal with division by zero. «2 5 6 6 ⇒ 17» has a solution, and if a program dies while checking !!2+(5÷(6-6))!! and doesn't find the solution, that's a bug.

Programs that worked

Ingo Blechschmidt (Haskell)

Ingo Blechschmidt showed me a solution in Haskell. The code is quite short. M. Blechschmidt's program defines a synthetic expression type and an evaluator for it. It defines a function arb which transforms an ordered list of numbers into a list of all possible expressions over those numbers. Reordering the list is taken care of earlier, by Data.List.permutations.

By “synthetic expression type” I mean this:

    data Exp a
        = Lit  a
        | Sum  (Exp a) (Exp a)
        | Diff (Exp a) (Exp a)
        | Prod (Exp a) (Exp a)
        | Quot (Exp a) (Exp a)
        deriving (Eq, Show)

Probably 80% of the Haskell programs ever written have something like this in them somewhere. This approach has a lot of boilerplate. For example, M. Blechschmidt's program then continues:

    eval :: (Fractional a) => Exp a -> a
    eval (Lit x) = x
    eval (Sum  a b) = eval a + eval b
    eval (Diff a b) = eval a - eval b
    eval (Prod a b) = eval a * eval b
    eval (Quot a b) = eval a / eval b

Having made up our own synonyms for the arithmetic operators (Sum for !!+!!, etc.) we now have to explain to Haskell what they mean. (“Not expressions, but an incredible simulation!”)

I spent a while trying to shorten the code by using a less artificial expression type:

    data Exp a
        = Lit  a
        | Op ((a -> a -> a), String) (Exp a) (Exp a)

but I was disappointed; I was only able to cut it down by 18%, from 34 lines to 28. I hope to discuss this in a future article. By the way, “Blechschmidt” is German for “tinsmith”.

Shreevatsa R. (Python)

Shreevatsa R. showed me a solution in Python. It generates every possible expression and prints it out with its value. If you want to filter the voluminous output for a particular target value, you do that later. Shreevatsa wrote up an extensive blog article about this which also includes a discussion about eliminating duplicate expressions from the output. This is a very interesting topic, and I have a lot to say about it, so I will discuss it in a future article.

Jeff Fowler (Ruby)

Jeff Fowler of the Recurse Center wrote a compact solution in Ruby that he described as “hot garbage”. Did I say something earlier about Perl gobbledygook? It's nice that Ruby is able to match Perl's level of gobbledygookitude. This one seems to get everything right, but it fails mysteriously if I replace the floating-point constants with integer constants. He did provide a version that was not “egregiously minified” but I don't have it handy.

Lindsey Kuper (Scheme)

Lindsey Kuper wrote a series of solutions in the Racket dialect of Scheme, and discussed them on her blog along with some other people’s work.

M. Kuper's first draft was 92 lines long (counting whitespace) and when I saw it I said “Gosh, that is way too much code” and tried writing my own in Scheme. It was about the same size. (My Perl solution is also not significantly smaller.)

Martin Janecke (PHP)

I saved the best for last. Martin Janecke showed me an almost flawless solution in PHP that uses a completely different approach than anyone else's program. Instead of writing a lot of code for generating permutations of the input, M. Janecke just hardcoded them:

    $zahlen = [
      [2, 5, 6, 6],
      [2, 6, 5, 6],
      [2, 6, 6, 5],
      [5, 2, 6, 6],
      [5, 6, 2, 6],
      [5, 6, 6, 2],
      [6, 2, 5, 6],
      [6, 2, 6, 5],
      [6, 5, 2, 6],
      [6, 5, 6, 2],
      [6, 6, 2, 5],
      [6, 6, 5, 2]
    ]

Then three nested loops generate the selections of operators:

 $operatoren = [];
 foreach (['+', '-', '*', '/'] as $x) {
   foreach (['+', '-', '*', '/'] as $y) {
     foreach (['+', '-', '*', '/'] as $z) {
       $operatoren[] = [$x, $y, $z];
     }
   }
 }

Expressions are constructed from templates:

        $klammern = [
          '%d %s %d %s %d %s %d',
          '(%d %s %d) %s %d %s %d',
          '%d %s (%d %s %d) %s %d',
          '%d %s %d %s (%d %s %d)',
          '(%d %s %d) %s (%d %s %d)',
          '(%d %s %d %s %d) %s %d',
          '%d %s (%d %s %d %s %d)',
          '((%d %s %d) %s %d) %s %d',
          '(%d %s (%d %s %d)) %s %d',
          '%d %s ((%d %s %d) %s %d)',
          '%d %s (%d %s (%d %s %d))'
         ];

(I don't think those templates are all necessary, but hey, whatever.) Finally, another set of nested loops matches each ordering of the input numbers with each selection of operators, uses sprintf to plug the numbers and operators into each possible expression template, and uses @eval to evaluate the resulting expression to see if it has the right value:

   foreach ($zahlen as list ($a, $b, $c, $d)) {
     foreach ($operatoren as list ($x, $y, $z)) {
       foreach ($klammern as $vorlage) {
         $term = sprintf ($vorlage, $a, $x, $b, $y, $c, $z, $d);
         if (17 == @eval ("return $term;")) {
           print ("$term = 17\n");
         }
       }
     }
   }

If loving this is wrong, I don't want to be right. It certainly satisfies Larry Wall's criterion of solving the problem before your boss fires you. The same approach is possible in most reasonable languages, and some unreasonable ones, but not in Haskell, which was specifically constructed to make this approach as difficult as possible.

M. Janecke wrote up a blog article about this, in German. He says “It's not an elegant program and PHP is probably not an obvious choice for arithmetic puzzles, but I think it works.” Indeed it does. Note that the use of @eval traps the division-by-zero exceptions, but unfortunately falls foul of floating-point roundoff errors.

Thanks

Thanks to everyone who discussed this with me. In addition to the people above, thanks to Stephen Tu, Smylers, Michael Malis, Kyle Littler, Jesse Chen, Darius Bacon, Michael Robert Arntzenius, and anyone else I forgot. (If I forgot you and you want me to add you to this list, please drop me a note.)

Coming up

I have enough material for at least three or four more articles about this that I hope to publish here in the coming weeks.

But the previous article on this subject ended similarly, saying

I hope to write a longer article about solvers in the next week or so.

and that was in July 2016, so don't hold your breath.

[ Addendum 20170820: the next article is ready. I hope you weren't holding your breath!  ]

[ Addendum 20170828: yet more about this ]


[Other articles in category /math] permanent link

Tue, 07 Feb 2017

How many 24 puzzles are there?

[ Note: The tables in this article are important, and look unusually crappy if you read this blog through an aggregator. The properly-formatted version on my blog may be easier to follow. ]

A few months ago I wrote about puzzles of the following type: take four digits, say 1, 2, 7, 7, and, using only +, -, ×, and ÷, combine them to make the number 24. Since then I have been accumulating more and more material about these puzzles, which will eventually appear here. But meantime here is a delightful tangent.

In the course of investigating this I wrote programs to enumerate the solutions of all possible puzzles, and these programs were always much faster than I expected at first. It appears as if there are 10,000 possible puzzles, from «0,0,0,0» through «9,9,9,9». But a moment's thought shows that there are considerably fewer, because, for example, the puzzles «7,2,7,1», «1,2,7,7», «7,7,2,1», and «2,7,7,1» are all the same puzzle. How many puzzles are there really?

A back-of-the-envelope estimate is that only about 1 in 24 puzzles is really distinct (because there are typically 24 ways to rearrange the elements of a puzzle) and so there ought to be around !!\frac{10000}{24} \approx 417!! puzzles. This is an undercount, because there are fewer duplicates of many puzzles; for example there are not 24 variations of «1,2,7,7», but only 12. The actual number of puzzles turns out to be 715, which I think is not an obvious thing to guess.

Let's write !!S(d,n)!! for the set of sequences of length !!n!! containing up to !!d!! different symbols, with the duplicates removed: when two sequences are the same except for the order of their symbols, we will consider them the same sequence.

Or more concretely, we may imagine that the symbols are sorted into nondecreasing order, so that !!S(d,n)!! is the set of nondecreasing sequences of length !!n!! of !!d!! different symbols.

Let's also write !!C(d,n)!! for the number of elements of !!S(d,n)!!.

Then !!S(10, 4)!! is the set of puzzles whose input is four digits. The claim that there are !!715!! such puzzles is just that !!C(10,4) = 715!!. A tabulation of !!C(\cdot,\cdot)!! reveals that it is closely related to binomial coefficients, and indeed that $$C(d,n)=\binom{n+d-1}{d-1}.\tag{$\heartsuit$}$$

so that the surprising !!715!! is actually !!\binom{13}{9}!!. This is not hard to prove by induction, because !!C(\cdot,\cdot)!! is easily shown to obey the same recurrence as !!\binom\cdot\cdot!!: $$C(d,n) = C(d-1,n) + C(d,n-1).\tag{$\spadesuit$}$$

To see this, observe that an element of !!C(d,n)!! either begins with a zero or with some other symbol. If it begins with a zero, there are !!C(d,n-1)!! ways to choose the remaining !!n-1!! symbols in the sequence. But if it begins with one of the other !!d-1!! symbols it cannot contain any zeroes, and what we really have is a length-!!n!! sequence of the symbols !!1\ldots (d-1)!!, of which there are !!C(d-1, n)!!.
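Both !!(\heartsuit)!! and !!(\spadesuit)!! are easy to check mechanically. A throwaway Python check, counting the nondecreasing sequences directly:

    from itertools import combinations_with_replacement
    from math import comb

    def C(d, n):
        # the number of nondecreasing length-n sequences over d symbols
        return sum(1 for _ in combinations_with_replacement(range(d), n))

    print(C(10, 4), comb(13, 9))          # 715 715, as claimed
    print(C(5, 4) == C(4, 4) + C(5, 3))   # True: the recurrence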

«0 a b c»    «a+1 b+1 c+1»
0 0 0 0      1 1 1
0 0 0 1      1 1 2
0 0 0 2      1 1 3
0 0 0 3      1 1 4
0 0 1 1      1 2 2
0 0 1 2      1 2 3
0 0 1 3      1 2 4
0 0 2 2      1 3 3
0 0 2 3      1 3 4
0 0 3 3      1 4 4
0 1 1 1      2 2 2
0 1 1 2      2 2 3
0 1 1 3      2 2 4
0 1 2 2      2 3 3
0 1 2 3      2 3 4
0 1 3 3      2 4 4
0 2 2 2      3 3 3
0 2 2 3      3 3 4
0 2 3 3      3 4 4
0 3 3 3      4 4 4

Now we can observe that !!\binom74=\binom73!! (they are both 35) so that !!C(5,3) = C(4,4)!!. We might ask if there is a combinatorial proof of this fact, consisting of a natural bijection between !!S(5,3)!! and !!S(4,4)!!. Using the relation !!(\spadesuit)!! we have:

$$ \begin{eqnarray} C(4,4) & = & C(3, 4) + & C(4,3) \\ C(5,3) & = & & C(4,3) + C(5,2) \\ \end{eqnarray}$$

so part of the bijection, at least, is clear: There are !!C(4,3)!! elements of !!S(4,4)!! that begin with a zero, and also !!C(4,3)!! elements of !!S(5, 3)!! that do not begin with a zero, so whatever the bijection is, it ought to match up these two subsets of size 20. This is perfectly straightforward; simply match up !!«0, a, b, c»!! (blue) with !!«a+1, b+1, c+1»!! (pink), as shown in the table above.

But finding the other half of the bijection, between !!S(3,4)!! and !!S(5,2)!!, is not so straightforward. (Both have 15 elements, but we are looking for not just any bijection but for one that respects the structure of the elements.) We could apply the recurrence again, to obtain:

$$ \begin{eqnarray} C(3,4) & = \color{darkred}{C(2, 4)} + \color{darkblue}{C(3,3)} \\ C(5,2) & = \color{darkblue}{C(4,2)} + \color{darkred}{C(5,1)} \end{eqnarray}$$

and since $$ \begin{eqnarray} \color{darkred}{C(2, 4)} & = \color{darkred}{C(5,1)} \\ \color{darkblue}{C(3,3)} & = \color{darkblue}{C(4,2)} \end{eqnarray}$$

we might expect the bijection to continue in that way, mapping !!\color{darkred}{S(2,4) \leftrightarrow S(5,1)}!! and !!\color{darkblue}{S(3,3) \leftrightarrow S(4,2)}!!. Indeed there is such a bijection, and it is very nice.

To find the bijection we will take a detour through bitstrings. There is a natural bijection between !!S(d, n)!! and the bit strings that contain !!d-1!! zeroes and !!n!! ones. Rather than explain it with pseudocode, I will give some examples, which I think will make the point clear. Consider the sequence !!«1, 1, 3, 4»!!. Suppose you are trying to communicate this sequence to a computer. It will ask you the following questions, and you should give the corresponding answers:

  • “Is the first symbol 0?” (“No”)
  • “Is the first symbol 1?” (“Yes”)
  • “Is the second symbol 1?” (“Yes”)
  • “Is the third symbol 1?” (“No”)
  • “Is the third symbol 2?” (“No”)
  • “Is the third symbol 3?” (“Yes”)
  • “Is the fourth symbol 3?” (“No”)
  • “Is the fourth symbol 4?” (“Yes”)

At each stage the computer asks about the identity of the next symbol. If the answer is “yes” the computer has learned another symbol and moves on to the next element of the sequence. If it is “no” the computer tries guessing a different symbol. The “yes” answers become ones and “no” answers become zeroes, so that the resulting bit string is 0 1 1 0 0 1 0 1.

It sometimes happens that the computer figures out all the elements of the sequence before using up its !!n+d-1!! questions; in this case we pad out the bit string with zeroes, or we can imagine that the computer asks some pointless questions to which the answer is “no”. For example, suppose the sequence is !!«0, 1, 1, 1»!!:

  • “Is the first symbol 0?” (“Yes”)
  • “Is the second symbol 0?” (“No”)
  • “Is the second symbol 1?” (“Yes”)
  • “Is the third symbol 1?” (“Yes”)
  • “Is the fourth symbol 1?” (“Yes”)

The bit string is 1 0 1 1 1 0 0 0, where the final three 0 bits are the padding.

We can reverse the process, simply taking over the role of the computer. To find the sequence that corresponds to the bit string 0 1 1 0 1 0 0 1, we ask the questions ourselves and use the bits as the answers:

  • “Is the first symbol 0?” (“No”)
  • “Is the first symbol 1?” (“Yes”)
  • “Is the second symbol 1?” (“Yes”)
  • “Is the third symbol 1?” (“No”)
  • “Is the third symbol 2?” (“Yes”)
  • “Is the fourth symbol 2?” (“No”)
  • “Is the fourth symbol 3?” (“No”)
  • “Is the fourth symbol 4?” (“Yes”)

We have recovered the sequence !!«1, 1, 2, 4»!! from the bit string 0 1 1 0 1 0 0 1.
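The guessing game is easy to transcribe into code. Here is a Python sketch of both directions; the function names are invented:

    def to_bits(seq, d):
        """Encode a nondecreasing sequence over the symbols 0..d-1 as the
        answers to the guessing game: 1 for "yes", 0 for "no"."""
        bits, guess = [], 0
        for symbol in seq:
            while guess < symbol:
                bits.append(0)          # "Is the next symbol `guess`?"  "No."
                guess += 1
            bits.append(1)              # "Yes."
        bits += [0] * (len(seq) + d - 1 - len(bits))   # the pointless final questions
        return bits

    def from_bits(bits):
        """Decode by playing the computer's role, using the bits as answers."""
        seq, guess = [], 0
        for b in bits:
            if b:
                seq.append(guess)       # "yes": another symbol learned
            else:
                guess += 1              # "no": try the next symbol
        return seq

    print(to_bits([1, 1, 3, 4], d=5))            # [0, 1, 1, 0, 0, 1, 0, 1]
    print(from_bits([0, 1, 1, 0, 1, 0, 0, 1]))   # [1, 1, 2, 4]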

This correspondence establishes relation !!(\heartsuit)!! in a different way from before: since there is a natural bijection between !!S(d, n)!! and the bit strings with !!d-1!! zeroes and !!n!! ones, there are certainly !!\binom{n+d-1}{d-1}!! of them as !!(\heartsuit)!! says because there are !!n+d-1!! bits and we may choose any !!d-1!! to be the zeroes.

We wanted to see why !!C(5,3) = C(4,4)!!. The detour above shows that there is a simple bijection between

!!S(5,3)!! and the bit strings with 4 zeroes and 3 ones

on one hand, and between

!!S(4,4)!! and the bit strings with 3 zeroes and 4 ones

on the other hand. And of course the bijection between the two sets of bit strings is completely obvious: just exchange the zeroes and the ones.

The table below shows the complete bijection between !!S(4,4)!! and its descriptive bit strings (on the left in blue) and between !!S(5, 3)!! and its descriptive bit strings (on the right in pink) and that the two sets of bit strings are complementary. Furthermore the top portion of the table shows that the !!S(4,3)!! subsets of the two families correspond, as they should—although the correct correspondence is the reverse of the one that was displayed earlier in the article, not the suggested !!«0, a, b, c» \leftrightarrow «a+1, b+1, c+1»!! at all. Instead, in the correct table, the initial digit of the !!S(4,4)!! entry says how many zeroes appear in the !!S(5,3)!! entry, and vice versa; then the increment to the next digit says how many ones, and so forth.

!!S(4,4)!!   (bits)           (complement bits)   !!S(5,3)!!
0 0 0 0    1 1 1 1 0 0 0    0 0 0 0 1 1 1    4 4 4
0 0 0 1    1 1 1 0 1 0 0    0 0 0 1 0 1 1    3 4 4
0 0 0 2    1 1 1 0 0 1 0    0 0 0 1 1 0 1    3 3 4
0 0 0 3    1 1 1 0 0 0 1    0 0 0 1 1 1 0    3 3 3
0 0 1 1    1 1 0 1 1 0 0    0 0 1 0 0 1 1    2 4 4
0 0 1 2    1 1 0 1 0 1 0    0 0 1 0 1 0 1    2 3 4
0 0 1 3    1 1 0 1 0 0 1    0 0 1 0 1 1 0    2 3 3
0 0 2 2    1 1 0 0 1 1 0    0 0 1 1 0 0 1    2 2 4
0 0 2 3    1 1 0 0 1 0 1    0 0 1 1 0 1 0    2 2 3
0 0 3 3    1 1 0 0 0 1 1    0 0 1 1 1 0 0    2 2 2
0 1 1 1    1 0 1 1 1 0 0    0 1 0 0 0 1 1    1 4 4
0 1 1 2    1 0 1 1 0 1 0    0 1 0 0 1 0 1    1 3 4
0 1 1 3    1 0 1 1 0 0 1    0 1 0 0 1 1 0    1 3 3
0 1 2 2    1 0 1 0 1 1 0    0 1 0 1 0 0 1    1 2 4
0 1 2 3    1 0 1 0 1 0 1    0 1 0 1 0 1 0    1 2 3
0 1 3 3    1 0 1 0 0 1 1    0 1 0 1 1 0 0    1 2 2
0 2 2 2    1 0 0 1 1 1 0    0 1 1 0 0 0 1    1 1 4
0 2 2 3    1 0 0 1 1 0 1    0 1 1 0 0 1 0    1 1 3
0 2 3 3    1 0 0 1 0 1 1    0 1 1 0 1 0 0    1 1 2
0 3 3 3    1 0 0 0 1 1 1    0 1 1 1 0 0 0    1 1 1
1 1 1 1    0 1 1 1 1 0 0    1 0 0 0 0 1 1    0 4 4
1 1 1 2    0 1 1 1 0 1 0    1 0 0 0 1 0 1    0 3 4
1 1 1 3    0 1 1 1 0 0 1    1 0 0 0 1 1 0    0 3 3
1 1 2 2    0 1 1 0 1 1 0    1 0 0 1 0 0 1    0 2 4
1 1 2 3    0 1 1 0 1 0 1    1 0 0 1 0 1 0    0 2 3
1 1 3 3    0 1 1 0 0 1 1    1 0 0 1 1 0 0    0 2 2
1 2 2 2    0 1 0 1 1 1 0    1 0 1 0 0 0 1    0 1 4
1 2 2 3    0 1 0 1 1 0 1    1 0 1 0 0 1 0    0 1 3
1 2 3 3    0 1 0 1 0 1 1    1 0 1 0 1 0 0    0 1 2
1 3 3 3    0 1 0 0 1 1 1    1 0 1 1 0 0 0    0 1 1
2 2 2 2    0 0 1 1 1 1 0    1 1 0 0 0 0 1    0 0 4
2 2 2 3    0 0 1 1 1 0 1    1 1 0 0 0 1 0    0 0 3
2 2 3 3    0 0 1 1 0 1 1    1 1 0 0 1 0 0    0 0 2
2 3 3 3    0 0 1 0 1 1 1    1 1 0 1 0 0 0    0 0 1
3 3 3 3    0 0 0 1 1 1 1    1 1 1 0 0 0 0    0 0 0

Observe that since !!C(d,n) = \binom{n+d-1}{d-1} = \binom{n+d-1}{n} = C(n+1, d-1)!! we have in general that !!C(d,n) = C(n+1, d-1)!!, which may be surprising. One might have guessed that since !!C(5,3) = C(4,4)!!, the relation was !!C(d,n) = C(d+1, n-1)!! and that !!S(d,n)!! would have the same structure as !!S(d+1, n-1)!!, but it isn't so. The two arguments exchange roles. Following the same path, we can identify many similar ‘coincidences’. For example, there is a simple bijection between the original set of 715 puzzles, which was !!S(10,4)!!, and !!S(5,9)!!, the set of nondecreasing sequences of !!0\ldots 4!! of length !!9!!.

[ Thanks to Bence Kodaj for a correction. ]

[ Addendum 20170829: Conway and Guy, in The Book of Numbers, describe the same bijection, but a little differently; see their discussion of the Sweet Seventeen deck on pages 70–71. ]

[ Addendum 20171015: More about this, using Burnside's lemma. ]


[Other articles in category /math] permanent link

Thu, 15 Dec 2016

Let's decipher a thousand-year-old magic square

The Parshvanatha temple in Madhya Pradesh, India was built around 1,050 years ago. Carved at its entrance is this magic square:

The digit signs have changed in the past thousand years, but it's a quick and fun puzzle to figure out what they mean using only the information that this is, in fact, a magic square.

A solution follows. No peeking until you've tried it yourself!


There are 9 one-digit entries and 7 two-digit entries, so we can guess that the entries are the numbers 1 through 16, as is usual, and the magic sum is 34. (The carved digit signs cannot be reproduced in text, so I will write the not-yet-identified signs as letters, and two-digit numerals by juxtaposition: !!1b!! is the numeral whose digits are 1 and !!b!!.) The same sign appears in the tens position of all the two-digit numbers, so it's the digit 1. The other digit of the numeral for ten must be zero. If it were otherwise, it would appear on its own, as does for example the !!e!! from !!1e!! or the !!g!! from !!1g!!.

It is tempting to imagine that the sign !!a!! is 4. But we can see it's not so. The rightmost column is carved !!1a!!, 11, !!b!!, !!a!!, and adding it up, we get

!!1a + 11 + b + a!! =
!!(10 + a) + 11 + b + a!! = 34,

so that !!2a + b = 13!! and !!b!! must be an odd number. We know it isn't 1 (because the sign for 1 is already identified), and it can't be 7 or 9 because !!b!! appears in the bottom row as part of the two-digit numeral !!1b!! and there is no 17 or 19. So !!b!! must be 3 or 5.

Now if !!b!! were 3, then !!1b!! would be 13, and the third column, carved 1, !!c!!, 10, !!1b!!, would be

!!1 + c + 10 + 1b!! =
!!1 + c + 10 + 13!! = 34,

and then !!c!! would be 10, which is too big. So !!b!! must be 5, and this means that !!a!! is 4 and !!c!! is 8. (!!c!! appears only as a single-digit numeral, which is consistent with it being 8, since the only digits that never appear inside a two-digit numeral are 7, 8, and 9.)

The top row is carved !!d!!, !!1e!!, 1, !!1a!!, and has

!!d + 1e + 1 + 1a!! =
!!d + (10 + e) + 1 + 14!! = 34,

so that !!d + e = 9!!. !!d!! only appears as a single digit, and of 7, 8, and 9 we already used 8, so !!d!! must be 7 or 9. But 9 is too big, since !!e!! would then be zero, so it must be 7, and then !!e!! is 2.

!!f!! is the only remaining sign that appears just as a single-digit numeral, and we already know 7 and 8, so !!f!! is 9. The leftmost column adds up to !!7 + 2 + 1h + 9 = 34!!, which tells us that !!1h!! is 16. The last two entries, !!1g!! and !!g!!, are easily discovered to be 13 and 3. The decoded square is:

     7 12  1 14
     2 13  8 11
    16  3 10  5
     9  6 15  4

I like that people look at the right-hand column and immediately see 18 + 11 + 4 + 8 but it's actually 14 + 11 + 5 + 4.

This is an extra-special magic square: not only do the ten rows, columns, and diagonals all add up to 34, so do all the four-cell subsquares, so do any four squares arranged symmetrically about the center, and so do all the broken diagonals that you get by wrapping around at the edges.
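All of these claims are easy to verify mechanically. A quick Python check; every one of the listed sums comes out to 34:

    square = [[ 7, 12,  1, 14],
              [ 2, 13,  8, 11],
              [16,  3, 10,  5],
              [ 9,  6, 15,  4]]

    sums  = [sum(row) for row in square]                         # 4 rows
    sums += [sum(row[j] for row in square) for j in range(4)]    # 4 columns
    # 8 broken diagonals (wrapping at the edges), including the two main ones
    sums += [sum(square[i][(i + k) % 4] for i in range(4)) for k in range(4)]
    sums += [sum(square[i][(k - i) % 4] for i in range(4)) for k in range(4)]
    # 16 two-by-two subsquares, wrapping at the edges
    sums += [sum(square[(i + di) % 4][(j + dj) % 4]
                 for di in (0, 1) for dj in (0, 1))
             for i in range(4) for j in range(4)]

    print(set(sums))   # {34}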

[ Addendum: It has come to my attention that the digit symbols in the magic square are not too different from the current forms of the digit symbols in the Gujarati script. ]

[ Addendum 20161217: The temple is not very close to Gujarat or to the area in which Gujarati is common, so I guess that the digit symbols in Indian languages have evolved in the past thousand years, with the Gujarati versions remaining closest to the ancient forms, or else perhaps Gujarati was spoken more widely a thousand years ago. I would be interested to hear about this from someone who knows. ]

[ Addendum 20170130: Shreevatsa R. has contributed a detailed discussion of the history of the digit symbols. ]


[Other articles in category /math] permanent link

Fri, 29 Jul 2016

Decomposing a function into its even and odd parts

As I have mentioned before, I am not a sudden-flash-of-insight person. Every once in a while it happens, but usually my thinking style is to minutely examine a large mass of examples and then gradually synthesize some conclusion about them. I am a penetrating but slow thinker. But there have been a few occasions in my life when the solution to a problem struck me suddenly out of the blue.

One such occasion was on the first day of my sophomore honors physics class in 1987. This was one of the best classes I took in my college career. It was given by Professor Stephen Nettel, and it was about resonance phenomena. I love when a course has a single overarching theme and proceeds to examine it in detail; that is all too rare. I deeply regret leaving my copy of the course notes in a restaurant in 1995.

The course was very difficult, but also very satisfying. It was also somewhat hair-raising, because of Professor Nettel's habit of saying, all through the second half “Don't worry if it doesn't seem to make any sense, it will all come together for you during the final exam.” This was not reassuring. But he was right! It did all come together during the final exam.

The exam had two sets of problems. The problems on the left side of the exam paper concerned some mechanical system, I think a rod fixed at one end and free at the other, or something like that. This set of problems asked us to calculate the resonant frequency of the rod, its rate of damping at various driving frequencies, and related matters. The right-hand problems were about an electrical system involving a resistor, capacitor, and inductor. The questions were the same, and the answers were formally identical, differing only in the details: on the left, the answers involved length, mass and stiffness of the rod, and on the right, the resistance, capacitance, and inductance of the electrical components. It was a brilliant exam, and I have never learned so much about a subject during the final exam.

Anyway, I digress. After the first class, we were assigned homework. One of the problems was

Show that every function is the sum of an even function and an odd function.

(Maybe I should explain that an even function is one which is symmetric across the !!y!!-axis; formally it is a function !!f!! for which !!f(x) = f(-x)!! for every !!x!!. For example, the function !!x^2-4!!, shown below left. An odd function is one which is symmetric under a half-turn about the origin; formally it satisfies !!f(x) = -f(-x)!! for all !!x!!. For example !!\frac{x^3}{20}!!, shown below right.)


I found this claim very surprising, and we had no idea how to solve it. Well, not quite no idea: I knew that functions could be expanded in Fourier series, as the sum of a sine series and a cosine series, and the sine part was odd while the cosine part was even. But this seemed like a bigger hammer than was required, particularly since new sophomores were not expected to know about Fourier series.

I had the privilege to be in that class with Ron Buckmire, and I remember we stood outside the class building in the autumn sunshine and discussed the problem. I might have been thinking that perhaps there was some way to replace the negative part of !!f!! with a reflected copy of the positive part to make an even function, and maybe that !!f(x) + f(-x)!! was always even, when I was hit from the blue with the solution:

$$ \begin{align} f_e(x) & = \frac{f(x) + f(-x)}2 \text{ is even},\\ f_o(x) & = \frac{f(x) - f(-x)}2 \text{ is odd, and}\\ f(x) &= f_e(x) + f_o(x) \end{align} $$

So that was that problem solved. I don't remember the other three problems in that day's homework, but I have remembered that one ever since.

But for some reason, it didn't occur to me until today to think about what those functions actually looked like. Of course, if !!f!! itself is even, then !!f_e = f!! and !!f_o = 0!!, and similarly if !!f!! is odd. But most functions are neither even nor odd.

For example, consider the function !!2^x!!, which is neither even nor odd. Then we get

$$ \begin{align} f_e(x) & = \frac{2^x + 2^{-x}}2\\ f_o(x) & = \frac{2^x - 2^{-x}}2 \end{align} $$

The graph is below left. The solid red line is !!2^x!!, and the blue and purple dotted lines are !!f_e!! and !!f_o!!. The red line is the sum of the blue and purple lines. I thought this was very interesting-looking, but a little later I realized that I had already known what these graphs would look like, because !!2^x!! is just like !!e^x!!, and for !!e^x!! the even and odd components are exactly the familiar !!\cosh!! and !!\sinh!! functions. (Below left, !!2^x!!; below right, !!e^x!!.)
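In code the decomposition is one line for each part. A small Python sketch (the names are invented):

    def even_odd_parts(f):
        f_e = lambda x: (f(x) + f(-x)) / 2   # the even part
        f_o = lambda x: (f(x) - f(-x)) / 2   # the odd part
        return f_e, f_o

    f_e, f_o = even_odd_parts(lambda x: 2 ** x)
    for x in (-1.0, 0.5, 2.0):
        # the last column recovers 2**x, up to floating-point rounding
        print(x, f_e(x), f_o(x), f_e(x) + f_o(x))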

I wasn't expecting polynomials to be more interesting, but they were. (Polynomials whose terms are all odd powers of !!x!!, such as !!x^{13} - 4x^5 + x!!, are always odd functions, and similarly polynomials whose terms are all even powers of !!x!! are even functions.) For example, consider !!(x-1)^2!!, which is neither even nor odd. We don't even need the !!f_e!! and !!f_o!! formulas to separate this into even and odd parts: just expand !!(x-1)^2!! as !!x^2 - 2x + 1!! and separate it into odd and even powers, !!-2x!! and !!x^2 + 1!!:

Or we could do !!\frac{(x-1)^3}3!! similarly, expanding it as !!\frac{x^3}3 - x^2 + x -\frac13!! and separating this into !!-x^2 -\frac13!! and !!\frac{x^3}3 + x!!:

I love looking at these and seeing how the even blue line and the odd purple line conspire together to make whatever red line I want.
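
For polynomials the separation really is purely mechanical: sort the terms by parity of exponent. A sketch (the function name is my own invention):

    def split_poly(coeffs):
        """coeffs[k] is the coefficient of x**k; return (even, odd) parts."""
        even = [c if k % 2 == 0 else 0 for k, c in enumerate(coeffs)]
        odd  = [c if k % 2 == 1 else 0 for k, c in enumerate(coeffs)]
        return even, odd

    # (x-1)^2 = 1 - 2x + x^2, constant term first:
    print(split_poly([1, -2, 1]))   # ([1, 0, 1], [0, -2, 0]), i.e. x^2+1 and -2x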

I kept wanting to try familiar simple functions, like !!\frac1x!!, but many of these are either even or odd, and so are uninteresting for this application. But you can make an even or an odd function into a neither-even-nor-odd function just by translating it horizontally, which you do by replacing !!x!! with !!x-c!!. So the next function I tried was !!\frac1{x+1}!!, which is the translation of !!\frac 1x!!. Here I got a surprise. I knew that !!\frac1{x+1}!! was undefined at !!x=-1!!, so I graphed it only for !!x>-1!!. But the even component is !!\frac12\left(\frac1{1+x}+\frac1{1-x}\right)!!, which is undefined at both !!x=-1!! and at !!x=+1!!. Similarly the odd component is undefined at two points. So the !!f = f_o + f_e!! formula does not work quite correctly, failing to produce the correct value at !!x=1!!, even though !!f!! is defined there. In general, if !!f!! is undefined at some !!x=c!!, then the decomposition into even and odd components fails at !!x=-c!! as well. The limit $$\lim_{x\to -c} f(x) = \lim_{x\to -c} \left(f_o(x) + f_e(x)\right)$$ does hold, however. The graph below shows the decomposition of !!\frac1{x+1}!!.

Vertical translations are uninteresting: they leave !!f_o!! unchanged and translate !!f_e!! by the same amount, as you can verify algebraically or just by thinking about it.

Following the same strategy I tried a cosine wave. The evenness of the cosine function is one of its principal properties, so I translated it and used !!\cos (x+1)!!. The graph below is actually for !!5\cos(x+1)!! to prevent the details from being too compressed:

This reminded me of the time I was fourteen and graphed !!\sin x + \cos x!! and was surprised to see that it was another perfect sinusoid. But I realized that there was a simple way to understand this. I already knew that !!\sin(x + y) = \sin x\cos y + \sin y \cos x!!. If you take !!y=\frac\pi4!! and multiply the whole thing by !!\sqrt 2!!, you get $$\sqrt2\sin\left(x + \frac\pi4\right) = \sqrt2\sin x\cos\frac\pi4 + \sqrt2\cos x\sin\frac\pi4 = \sin x + \cos x$$ so that !!\sin x + \cos x!! is just a shifted, scaled sinusoid. The decomposition of !!\cos(x+1)!! is even simpler because you can work forward instead of backward and find that !!\cos(x+1) = \cos x\cos 1 - \sin x\sin 1!!, and the first term is even while the second term is odd, so that !!\cos(x+1)!! decomposes as a sum of an even and an odd sinusoid as you see in the graph above.

Finally, I tried a Poisson distribution, which is highly asymmetric. The formula for the Poisson distribution is !!\frac{\lambda^xe^{-\lambda}}{x!}!!, for some constant !!\lambda!!. The !!x! !! in the denominator is only defined for non-negative integer !!x!!, but you can extend it to fractional and negative !!x!! in the usual way by using !!\Gamma(x+1)!! instead, where !!\Gamma!! is the Gamma function. The !!\Gamma!! function is undefined at zero and negative integers, but fortunately what we need here is the reciprocal gamma function !!\frac1{\Gamma(x)}!!, which is perfectly well-behaved. The results are spectacular. The graph below has !!\lambda = 0.8!!.

The part of this with !!x\ge 0!! is the most interesting to me, because the Poisson distribution has a very distinctive shape, and once again I like seeing the blue and purple !!\Gamma!! functions working together to make it. I think it's just great how the red line goes gently to zero as !!x!! increases, even though the even and the odd components are going wild. (!!x! !! increases rapidly with !!x!!, so the reciprocal !!\Gamma!! function goes rapidly to zero. But the even and odd components also have a !!\frac1{\Gamma(1-x)}!! part, and this is what dominates the blue and purple lines when !!x >4!!.)

On the !!x\lt 0!! side it has no meaning for me, and it's just wiggly lines. It hadn't occurred to me before that you could extend the Poisson distribution function to negative !!x!!, and I still can't imagine what it could mean, but I suppose why not. Probably some statistician could explain to me what the Poisson distribution is about when !!x<0!!.

You can also consider the function !!\sqrt x!!, which breaks down completely, because either !!\sqrt x!! or !!\sqrt{-x}!! is undefined except when !!x=0!!. So the claim that every function is the sum of an even and an odd function fails here too. Except perhaps not! You could probably consider the extension of the square root function to the complex plane, and take one of its branches, and I suppose it works out just fine. The geometric interpretations of evenness and oddness are very different, of course, and you can't really draw the graphs unless you have four-dimensional vision.

I have no particular point to make, except maybe that math is fun, even elementary math (or perhaps especially elementary math) and it's fun to see how it works out.

The beautiful graphs in this article were made with Desmos. I had dreaded having to illustrate my article with graphs from Gnuplot (ugh) or Wolfram|α (double ugh) and was thrilled to find such a handsome alternative.

[ Addendum: I've just discovered that in Desmos you can include a parameter in the functions that it graphs, and attach the parameter to a slider. So for example you can arrange to have it display !!(x+k)^3!! or !!e^{-(x+k)^2}!!, with the value of !!k!! controlled by the slider, and have the graph move left and right on the plane as you adjust the slider, with its even and odd parts changing in real time to match. ]

[ For example, check out travelling Gaussians or varying sinusoid. ]


[Other articles in category /math] permanent link

Tue, 12 Jul 2016

A simple but difficult arithmetic puzzle

Lately my kids have been interested in puzzles of this type: You are given a sequence of four digits, say 1,2,3,4, and your job is to combine them with ordinary arithmetic operations (+, -, ×, and ÷) in any order to make a target number, typically 24. For example, with 1,2,3,4, you can go with $$((1+2)+3)×4 = 24$$ or with $$4×((2×3)×1) = 24.$$

We were stumped trying to make 6,6,5,2 total 24, so I hacked up a solver; then we felt a little foolish when we saw the solutions, because it is not that hard. But in the course of testing the solver, I found the most challenging puzzle of this type that I've ever seen. It is:

Given 6,6,5,2, make 17.

There are no underhanded tricks. For example, you may not concatenate 2 and 5 to make 25; you may not say !!6÷6=1!! and !!5+2=7!! and concatenate 1 and 7 to make !!17!!; you may not interpret the 17 as a base 12 numeral, etc.
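
In the meantime, here is a sketch of a brute-force solver of the kind I mean — not the one I actually wrote, but the same idea: repeatedly pick two of the remaining numbers, combine them with one of the four operators, and recurse. Exact rational arithmetic avoids floating-point lies:

    from fractions import Fraction

    OPS = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b if b != 0 else None,
    }

    def solutions(digits, target):
        """Yield fully parenthesized expressions that evaluate to target."""
        target = Fraction(target)
        def combine(items):   # items: list of (value, expression) pairs
            if len(items) == 1:
                if items[0][0] == target:
                    yield items[0][1]
                return
            for i in range(len(items)):
                for j in range(len(items)):
                    if i == j:
                        continue
                    rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                    (av, ae), (bv, be) = items[i], items[j]
                    for sym, fn in OPS.items():
                        v = fn(av, bv)
                        if v is not None:
                            yield from combine(rest + [(v, "(%s%s%s)" % (ae, sym, be))])
        yield from combine([(Fraction(d), str(d)) for d in digits])

    print(next(solutions([1, 2, 3, 4], 24)))    # finds something like (((1+2)+3)*4)
    # print(next(solutions([6, 6, 5, 2], 17)))  # spoiler — uncomment at your own risk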

I hope to write a longer article about solvers in the next week or so.

[ Addendum 20170305: The next week or so, ha ha. Anyway, here it is. ]


[Other articles in category /math] permanent link

Tue, 19 Apr 2016

The sage and the seven horses

A classic puzzle of mathematics goes like this:

A father dies and his will states that his elder daughter should receive half his horses, the son should receive one-quarter of the horses, and the younger daughter should receive one-eighth of the horses. Unfortunately, there are seven horses. The siblings are arguing about how to divide the seven horses when a passing sage hears them. The siblings beg the sage for help. The sage donates his own horse to the estate, which now has eight. It is now easy to portion out the half, quarter, and eighth shares, and having done so, the sage's horse is unaccounted for. The three heirs return the surplus horse to the sage, who rides off, leaving the matter settled fairly.

(The puzzle is, what just happened?)

It's not hard to come up with variations on this. For example, picking three fractions at random, suppose the will says that the eldest child receives half the horses, the middle child receives one-fifth, and the youngest receives one-seventh. But the estate has only 59 horses and an argument ensues. All that is required for the sage to solve the problem is to lend the estate eleven horses. There are now 70, and after taking out the three bequests, !!70 - 35 - 14 - 10 = 11!! horses remain and the estate settles its debt to the sage.

But here's a variation I've never seen before. This time there are 13 horses and the will says that the three children should receive shares of !!\frac12, \frac13,!! and !!\frac14!!, respectively. Now the problem seems impossible, because !!\frac12 + \frac13 + \frac14 \gt 1!!. But the sage is equal to the challenge! She leaps into the saddle of one of the horses and rides out of sight before the astonished heirs can react. After a day of searching the heirs write off the lost horse and proceed with executing the will. There are now only 12 horses, and the eldest takes half, or six, while the middle sibling takes one-third, or 4. The youngest heir should get three, but only two remain. She has just opened her mouth to complain about her unfair treatment when the sage rides up from nowhere and hands her the reins to her last horse.
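
For the record, the arithmetic works out because $$12\cdot\frac12 + 12\cdot\frac13 + 12\cdot\frac14 = 6 + 4 + 3 = 13,$$ so the three shares, computed on twelve horses, use up all thirteen original horses exactly — the thirteenth being the one the sage borrowed and returned.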


[Other articles in category /math] permanent link

Fri, 18 Dec 2015

Math SE report 2015-08

I only posted three answers in August, but two of them were interesting.

  • In why this !!\sigma\pi\sigma^{-1}!! keeps apearing in my group theory book? (cycle decomposition) the querent asked about the “conjugation” operation that keeps cropping up in group theory. Why is it important? I sympathize with this; it wasn't adequately explained when I took group theory, and I had to figure it out a long time later. Unfortunately I don't think I picked the right example to explain it, so I am going to try again now.

    Consider the eight symmetries of the square. They are of five types:

    1. Rotation clockwise or counterclockwise by 90°.
    2. Rotation by 180°.
    3. Horizontal or vertical reflection.
    4. Diagonal reflection.
    5. The trivial (identity) symmetry.

    What is meant when I say that a horizontal and a vertical reflection are of the same ‘type’? Informally, it is that the horizontal reflection looks just like the vertical reflection, if you turn your head ninety degrees. We can formalize this by observing that if we rotate the square 90°, then give it a horizontal flip, then rotate it back, the effect is exactly to give it a vertical flip. In notation, we might represent the horizontal flip by !!H!!, the vertical flip by !!V!!, the clockwise rotation by !!\rho!!, and the counterclockwise rotation by !!\rho^{-1}!!; then we have

    $$ \rho H \rho^{-1} = V$$

    and similarly

    $$ \rho V \rho^{-1} = H.$$

    Vertical flips do not look like diagonal flips—the diagonal flip leaves two of the corners in the same place, and the vertical flip does not—and indeed there is no analogous formula with !!H!! replaced with one of the diagonal flips. However, if !!D_1!! and !!D_2!! are the two diagonal flips, then we do have

    $$ \rho D_1 \rho^{-1} = D_2.$$

    In general, when !!a!! and !!b!! are two symmetries, and there is some symmetry !!x!! for which

    $$xax^{-1} = b$$

    we say that !!a!! is conjugate to !!b!!. One can show that conjugacy is an equivalence relation, which means that the symmetries of any object can be divided into separate “conjugacy classes” such that two symmetries are conjugate if and only if they are in the same class. For the square, the conjugacy classes are the five I listed earlier.

    This conjugacy thing is important for telling when two symmetries are group-theoretically “the same”, and have the same group-theoretic properties. For example, the horizontal and vertical flips move all four vertices, while the diagonal flips do not. Another example is that a horizontal flip is self-inverse (if you do it again, it cancels itself out), but a 90° rotation is not (you have to do it four times before it cancels out). But the horizontal flip shares all its properties with the vertical flip, because it is the same if you just turn your head.

    Identifying this sameness makes certain kinds of arguments much simpler. For example, in counting squares, I wanted to count the number of ways of coloring the faces of a cube, and instead of dealing with the 24 symmetries of the cube, I only needed to deal with their 5 conjugacy classes.

    The example I gave in my math.se answer was maybe less perspicuous. I considered the symmetries of a sphere, and talked about how two rotations of the sphere by 17° are conjugate, regardless of what axis one rotates around. I thought of the square at the end, and threw it in, but I wish I had started with it.
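
    If you like, you can check the five classes above by computer. Here's a sketch in Python, with the square's corners numbered 0–3 and each symmetry represented as a permutation (the representation and names are my own):

        def compose(p, q):                     # apply q first, then p
            return tuple(p[q[i]] for i in range(4))

        def inverse(p):
            inv = [0] * 4
            for i, j in enumerate(p):
                inv[j] = i
            return tuple(inv)

        rot  = (1, 2, 3, 0)                    # rotation by 90°
        flip = (1, 0, 3, 2)                    # a reflection

        # generate the whole 8-element group from the two generators
        group = {(0, 1, 2, 3)}
        while True:
            bigger = group | {compose(g, h) for g in group for h in (rot, flip)}
            if bigger == group:
                break
            group = bigger

        # the conjugacy class of a is { x a x^-1 : x in the group }
        classes = {frozenset(compose(compose(x, a), inverse(x)) for x in group)
                   for a in group}
        print(len(group), len(classes))        # 8 5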

  • How to convert a decimal to a fraction easily? was the month's big winner. OP wanted to know how to take a decimal like !!0.3760683761!! and discover that it can be written as !!\frac{44}{117}!!. The right answer to this is of course to use continued fraction theory, but I did not want to write a long treatise on continued fractions, so I stripped down the theory to obtain an algorithm that is slower, but much easier to understand.

    The algorithm is just binary search, but with a twist. If you are looking for a fraction for !!x!!, and you know !!\frac ab < x < \frac cd!!, then you construct the mediant !!\frac{a+c}{b+d}!! and compare it with !!x!!. This gives you a smaller interval in which to search for !!x!!, and the reason you use the mediant instead of using !!\frac12\left(\frac ab + \frac cd\right)!! as usual is that if you use the mediant you are guaranteed to exactly nail all the best rational approximations of !!x!!. This is the algorithm I described a few years ago in your age as a fraction, again; there the binary search proceeds down the branches of the Stern-Brocot tree to find a fraction close to !!0.368!!.
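
    The whole algorithm fits in a few lines. A sketch, for positive !!x!! (the function name and the tolerance are my own choices):

        def to_fraction(x, tolerance=1e-9):
            """Walk down the Stern-Brocot tree toward x via mediants."""
            a, b, c, d = 0, 1, 1, 0          # the interval from 0/1 to 1/0
            while True:
                m, n = a + c, b + d          # the mediant of a/b and c/d
                if abs(m / n - x) < tolerance:
                    return m, n
                if m / n < x:
                    a, b = m, n              # x is in the right half-interval
                else:
                    c, d = m, n              # x is in the left half-interval

        print(to_fraction(0.3760683761))     # (44, 117)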


I did ask a question this month: I was looking for a simpler version of the dogbone space construction. The dogbone space is a very peculiar counterexample of general topology, originally constructed by R.H. Bing. I mentioned it here in 2007, and said, at the time:

[The paper] is on my desk, but I have not read this yet, and I may never.

I did try to read it, but I did not try very hard, and I did not understand it. So my question this month was if there was a simpler example of the same type. I did not receive an answer, just a followup comment that no, there is no such example.


[Other articles in category /math/se] permanent link

Sun, 16 Aug 2015

Math SE report 2015-07

My overall SE posting volume was down this month, and not only did I post relatively few interesting items, I've already written a whole article about the most interesting one. So this will be a short report.

  • I already wrote up Building a box from smaller boxes on the blog here. But maybe I have a couple of extra remarks. First, the other guy's proposed solution is awful. It's long and complicated, which is forgivable if it had answered the question, but it doesn't. And the key point is “blah blah blah therefore code a solver which visits all configurations of the search space”. Well heck, if this post had just been one sentence that ended with “code a solver which visits all configurations of the search space” I would not have any complaints about that.

    As an undergraduate I once gave a talk on this topic. One of my examples was the problem of packing 31 dominoes into a chessboard from which two squares have been deleted. There is a simple combinatorial argument why this is impossible if the two deleted squares are the same color, say if they are opposite corners: each domino must cover one square of each color. But if you don't take time to think about the combinatorial argument you could waste a lot of time on computer search learning that there is no solution in that case, and completely miss the deeper understanding that it brings you. So this has been on my mind for a long time.

  • I wrote a few posts this month where I thought I gave good hints. In How to scale an unit vector !!u!! in such way that !!a u\cdot u=1!! where !!a!! is a scalar I think I did a good job identifying the original author's confusion; he was conflating his original unit vector !!u!! with the scaled one, leading him to write !!au\cdot u=1!!. This is sure to lead to confusion. So I led him to the point of writing !!a(bv)\cdot(bv)=1!! and let him take it from there. The other proposed solution is much more rote and mechanical. (“Divide this by that…”)

    In Find numbers !!\overline{abcd}!! so that !!\overline{abcd}+\overline{bcd}+\overline{cd}+d+1=\overline{dcba}!! the OP got stuck partway through and I specifically addressed the stuckness; other people solved the problem from the beginning. I think that's the way to go, if the original proposal was never going to work, especially if you stop and say why it was never going to work, but this time OP's original suggestion was perfectly good and she just didn't know how to get to the next step. By the way, the notation !!\overline{abcd}!! here means the number !!1000a+100b+10c+d!!.

    In Help finding the limit of this series !!\frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots!! it would have been really easy to say “use the formula” or to analyze the series de novo, but I think I almost hit the nail on the head here: it's just like !!1+\frac12 + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots!!, which I bet OP already knows, except a little different. But I pointed out the wrong difference: I observed that the first sequence is one-fourth the second one (which it is) but it would have been simpler to observe that it's just the second one without the !!1+\frac12!!. I had to review it just now to give the simpler explanation, but I sure wish I'd thought of it at the time. Nobody else pointed it out either. Best of all would have been to mention both methods. If you can notice both of them you can solve the problem without advance knowledge of the value of !!1+\frac12+\frac14+\ldots!!, because you have !!4S = 1+\frac12 + S!! and then solve for !!S!!.

    In Visualization of Rhombus made of Radii and Chords it seemed that OP just needed to see a diagram (“I really really don't see how two circles can form a rhombus?”), so I drew one.


[Other articles in category /math/se] permanent link

Tue, 28 Jul 2015

Another ounce of theory

A few months ago I wrote an article here called an ounce of theory is worth a pound of search and I have a nice followup.

When I went looking for that article I couldn't find it, because I thought it was about how an ounce of search is worth a pound of theory, and that I was writing a counterexample. I am quite surprised to discover that I have several times discussed how a little theory can replace a lot of searching, and not vice versa, but perhaps that is because the search is my default.

Anyway, the question came up on math StackExchange today:

John has 77 boxes each having dimensions 3×3×1. Is it possible for John to build one big box with dimensions 7×9×11?

OP opined no, but had no argument. The first answer that appeared was somewhat elaborate and outlined a computer search strategy which claimed to reduce the search space to only 14,553 items. (I think the analysis is wrong, but I agree that the search space is not too large.)

I almost wrote the search program. I have a program around that is something like what would be needed, although it is optimized to deal with a few oddly-shaped tiles instead of many similar tiles, and would need some work. Fortunately, I paused to think a little before diving in to the programming.

For there is an easy answer. Suppose John solved the problem. Look at just one of the 7×11 faces of the big box. It is a 7×11 rectangle that is completely filled by 1×3 and 3×3 rectangles. But 7×11 is not a multiple of 3. So there can be no solution.

Now how did I think of this? It was a very geometric line of reasoning. I imagined a 7×11×9 carton and imagined putting the small boxes into the carton. There can be no leftover space; every one of the 693 cells must be filled. So in particular, we must fill up the bottom 7×11 layer. I started considering how to pack the bottommost 7×11×1 slice with just the bottom parts of the small boxes and quickly realized it couldn't be done; there is always an empty cell left over somewhere, usually in the corner. The argument about considering just one face of the large box came later; I decided it was clearer than what I actually came up with.

I think this is a nice example of the Pólya strategy “solve a simpler problem” from How to Solve It, but I was not thinking of that specifically when I came up with the solution.

For a more interesting problem of the same sort, suppose you have six 2×2×1 slabs and three extra 1×1×1 cubes. Can you pack the nine pieces into a 3×3×3 box?


[Other articles in category /math] permanent link

Sat, 18 Jul 2015

Math SE report 2015-04

[ Notice: I originally published this report at the wrong URL. I moved it so that I could publish the June 2015 report at that URL instead. If you're seeing this for the second time, you might want to read the June article instead. ]

A lot of the stuff I've written in the past couple of years has been on Mathematics StackExchange. Some of it is pretty mundane, but some is interesting. I thought I might have a little meta-discussion in the blog and see how that goes. These are the noteworthy posts I made in April 2015.

  • Languages and their relation : help is pretty mundane, but interesting for one reason: OP was confused about a statement in a textbook, and provided a reference, which OPs don't always do. The text used the symbol !!\subset_\ne!!. OP had interpreted it as meaning !!\not\subseteq!!, but I think what was meant was !!\subsetneq!!.

    I dug up a copy of the text and groveled over it looking for the explanation of !!\subset_\ne!!, which is not standard. There was none that I could find. The book even had a section with a glossary of notation, which didn't mention !!\subset_\ne!!. Math professors can be assholes sometimes.

  • Is there an operation that takes !!a^b!! and !!a^c!!, and returns !!a^{bc}!! is more interesting. First off, why is this even a reasonable question? Why should there be such an operation? But note that there is an operation that takes !!a^b!! and !!a^c!! and returns !!a^{b+c}!!, namely, multiplication, so it's plausible that the operation that OP wants might also exist.

    But it's easy to see that there is no operation that takes !!a^b!! and !!a^c!! and returns !!a^{bc}!!: just observe that although !!4^2=2^4!!, the putative operation (call it !!f!!) should take !!f(2^4, 2^4)!! and yield !!2^{4\cdot4} = 2^{16} = 65536!!, but it should also take !!f(4^2, 4^2)!! and yield !!4^{2\cdot2} = 4^4 = 256!!. So the operation is not well-defined. And you can take this even further: !!2^4!! can be written as !!e^{4\log 2}!!, so !!f!! should also take !!f(e^{2\log 4}, e^{2\log 4})!! and yield !!e^{4(\log 4)^2} \approx 2180.37!!.

    The key point is that the representation of a number, or even an integer, in the form !!a^b!! is not unique. (Jargon: "exponentiation is not injective".) You can compute !!a^b!!, but having done so you cannot look at the result and know what !!a!! and !!b!! were, which is what !!f!! needs to do.

    But if !!f!! can't do it, how can multiplication do it when it multiplies !!a^b!! and !!a^c!! and gets !!a^{b+c}!!? Does it somehow know what !!a!! is? No, it turns out that it doesn't need !!a!! in this case. There is something magical going on there, ultimately related to the fact that if some quantity is increasing by a factor of !!x!! every !!t!! units of time, then there is some !!t_2!! for which it is exactly doubling every !!t_2!! units of time. Because of this there is a marvelous group homomorphism $$\log : \langle \Bbb R^+, \times\rangle \to \langle \Bbb R ,+\rangle$$ which can change multiplication into addition without knowing what the base numbers are.

    In that thread I had a brief argument with someone who thinks that operators apply to expressions rather than to numbers. Well, you can say this, but it makes the question trivial: you can certainly have an "operator" that takes expressions !!a^b!! and !!a^c!! and yields the expression !!a^{bc}!!. You just can't expect to apply it to numbers, such as !!16!! and !!16!!, because those numbers are not expressions in the form !!a^b!!. I remembered the argument going on longer than it did; I originally ended this paragraph with a lament that I wasted more than two comments on this guy, but looking at the record, it seems that I didn't. Good work, Mr. Dominus.

  • how 1/0.5 is equal to 2? wants a simple explanation. Very likely OP is a primary school student. The question reminds me of a similar question, asking why the long division algorithm is the way it is. Each of these is a failure of education to explain what division is actually doing. The long division answer is that long division is an optimization for repeated subtraction; to compute !!450\div 3!! you want to know how many shares of three cookies each you can get from !!450!! cookies. Long division is simply a notation for keeping track of removing !!100!! shares, leaving !!150!! cookies, then !!5\cdot 10!! further shares, leaving none.

    In this question there was a similar answer. !!1/0.5!! is !!2!! because if you have one cookie, and want to give each kid a share of !!0.5!! cookies, you can get out two shares. Simple enough.

    I like division examples that involve giving cookies to kids, because cookies are easy to focus on, and because the motivation for equal shares is intuitively understood by everyone who has kids, or who has been one.

    There is a general pedagogical principle that an ounce of examples is worth a pound of theory. My answer here is a good example of that. When you explain the theory, you're telling the student how to understand it. When you give an example, though, if it's the right example, the student can't help but understand it, and when they do they'll understand it in their own way, which is better than if you told them how.

  • How to read a cycle graph? is interesting because hapless OP is asking for an explanation of a particularly strange diagram from Wikipedia. I'm familiar with the eccentric Wikipedian who drew this, and I was glad that I was around to say "The other stuff in this diagram is nonstandard stuff that the somewhat eccentric author made up. Don't worry if it's not clear; this author is notorious for that."

  • In Expected number of die tosses to get something less than 5, OP calculated as follows: The first die roll is a winner !!\frac23!! of the time. The second roll is the first winner !!\frac13\cdot\frac23!! of the time. The third roll is the first winner !!\frac13\cdot\frac13\cdot\frac23!! of the time. Summing the series !!\sum_{n\ge 1} n\cdot\frac23\left(\frac13\right)^{n-1}!! we eventually obtain the answer, !!\frac32!!. The accepted answer does it this way also.

    But there's a much easier way to solve this problem. What we really want to know is: how many rolls before we expect to have seen one good one? And the answer is: the expected number of winners per die roll is !!\frac23!!, expectations are additive, so the expected number of winners per !!n!! die rolls is !!\frac23n!!, and so we need !!n=\frac32!! rolls to expect one winner. Problem solved!

    I first discovered this when I was around fifteen, and wrote about it here a few years ago.
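
    If you trust neither calculation, a thirty-second simulation settles it (a sketch; the function name is mine):

        import random

        def rolls_until_small():
            """Roll a d6 until we see something less than 5; count the rolls."""
            n = 1
            while random.randint(1, 6) >= 5:
                n += 1
            return n

        trials = 10 ** 6
        print(sum(rolls_until_small() for _ in range(trials)) / trials)  # ≈ 1.5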

    As I've mentioned before, this is one of the best things about mathematics: not that it works, but that you can do it by whatever method occurs to you and you get the same answer. This is where mathematics pedagogy goes wrong most often: it prescribes that you must get the answer by method X, rather than that you must get the answer by hook or by crook. If the student uses method Y, and it works (and if it is correct) that should be worth full credit.

    Bad instructors always say "Well, we need to test to see if the student knows method X." No, we should be testing to see if the student can solve problem P. If we are testing for method X, that is a failure of the test or of the curriculum. Because if method X is useful, it is useful because for some problems, it is the only method that works. It is the instructor's job to find one of these problems and put it on the test. If there is no such problem, then X is useless and it is the instructor's job to omit it from the curriculum. If Y always works, but X is faster, it is the instructor's job to explain this, and then to assign a problem for the test where Y would take more time than is available.

    I see now I wrote the same thing in 2006. It bears repeating. I also said it again a couple of years ago on math.se itself in reply to a similar comment by Brian Scott:

    If the goal is to teach students how to write proofs by induction, the instructor should damned well come up with problems for which induction is the best approach. And if even then a student comes up with a different approach, the instructor should be pleased. ... The directions should not begin [with "prove by induction"]. I consider it a failure on the part of the instructor if he or she has to specify a technique in order to give students practice in applying it.


[Other articles in category /math/se] permanent link

Fri, 03 Jul 2015

The annoying boxes puzzle: solution
I presented this logic puzzle on Wednesday:

There are two boxes on a table, one red and one green. One contains a treasure. The red box is labelled "exactly one of the labels is true". The green box is labelled "the treasure is in this box."

Can you figure out which box contains the treasure?

It's not too late to try to solve this before reading on. If you want, you can submit your answer here:

The treasure is in the red box
The treasure is in the green box
There is not enough information to determine the answer
Something else:

Results

There were 506 total responses up to Fri Jul 3 11:09:52 2015 UTC; I kept only the first response from each IP address, leaving 451. I read all the "something else" submissions and where it seemed clear I recoded them as votes for "red", for "not enough information", or as spam. (Several people had the right answer but submitted "other" so they could explain themselves.) There was also one post that attempted to attack my (nonexistent) SQL database. Sorry, Charlie; I'm not as stupid as I look.

	 66.52%  300 red
	 25.72   116 not-enough-info
	  3.55    16 green
	  2.00     9 other
	  1.55     7 spam
	  0.44     2 red-with-qualification
	  0.22     1 attack

	100.00   451 TOTAL
One-quarter of respondents got the right answer, that there is not enough information given to solve the problem. Two-thirds of respondents said the treasure was in the red box. This is wrong. The treasure is in the green box.

What?

Let me show you. I stated:

There are two boxes on a table, one red and one green. One contains a treasure. The red box is labelled "exactly one of the labels is true". The green box is labelled "the treasure is in this box."

The labels are as I said. Everything I told you was literally true.

The treasure is definitely not in the red box.

No, it is actually in the green box.

(It's hard to see, but one of the items in the green box is the gold and diamond ring made in Vienna by my great-grandfather, which is unquestionably a real treasure.)

So if you said the treasure must be in the red box, you were simply mistaken. If you had a logical argument why the treasure had to be in the red box, your argument was fallacious, and you should pause and try to figure out what was wrong with it.

I will discuss it in detail below.

Solution

The treasure is undeniably in the green box. However, the correct answer to the puzzle is "no, you cannot figure out which box contains the treasure". There is not enough information given. (Notice that the question was not “Where is the treasure?” but “Can you figure out…?”)

(Fallacious) Argument A

Many people erroneously conclude that the treasure is in the red box, using reasoning something like the following:

  1. Suppose the red label is true. Then exactly one label is true, and since the red label is true, the green label is false. Since it says that the treasure is in the green box, the treasure must really be in the red box.
  2. Now suppose that the red label is false. Then the green label must also be false. So again, the treasure is in the red box.
  3. Since both cases lead to the conclusion that the treasure is in the red box, that must be where it is.

What's wrong with argument A?

Here are some responses people commonly have when I tell them that argument A is fallacious:

"If the treasure is in the green box, the red label is lying."

Not quite, but argument A explicitly considers the possibility that the red label was false, so what's the problem?

"If the treasure is in the green box, the red label is inconsistent."

It could be. Nothing in the puzzle statement ruled this out. But actually it's not inconsistent, it's just irrelevant.

"If the treasure is in the green box, the red label is meaningless."

Nonsense. The meaning is plain: it says “exactly one of these labels is true”, and the meaning is that exactly one of the labels is true. Anyone presenting argument A must have understood the label to mean that, and it is incoherent to understand it that way and then to turn around and say that it is meaningless! (I discussed this point in more detail in 2007.)

"But the treasure could have been in the red box."

True! But it is not, as you can see in the pictures. The puzzle does not give enough information to solve the problem. If you said that there was not enough information, then congratulations, you have the right answer. The answer produced by argument A is incontestably wrong, since it asserts that the treasure is in the red box, when it is not.

"The conditions supplied by the puzzle statement are inconsistent."

They certainly are not. Inconsistent systems do not have models, and in particular cannot exist in the real world. The photographs above demonstrate a real-world model that satisfies every condition posed by the puzzle, and so proves that it is consistent.

"But that's not fair! You could have made up any random garbage at all, and then told me afterwards that you had been lying."

Had I done that, it would have been an unfair puzzle. For example, suppose I opened the boxes at the end to reveal that there was no treasure at all. That would have directly contradicted my assertion that "One [box] contains a treasure". That would have been cheating, and I would deserve a kick in the ass.

But I did not do that. As the photograph shows, the boxes, their colors, their labels, and the disposition of the treasure are all exactly as I said. I did not make up a lie to trick you; I described a real situation, and asked whether people could figure out the location of the treasure.

(Two respondents accused me of making up lies. One said:

There is no treasure. Both labels are lying. Look at those boxes. Do you really think someone put a treasure in one of them just for this logic puzzle?
What can I say? I did put a treasure in a box just for this logic puzzle. Some of us just have higher standards.)

"But what about the labels?"

Indeed! What about the labels?

The labels are worthless

The labels are red herrings; they provide no information. Consider the following version of the puzzle:

There are two boxes on a table, one red and one green. One contains a treasure.

Which box contains the treasure?

Obviously, the problem cannot be solved from the information given.

Now consider this version:

There are two boxes on a table, one red and one green. One contains a treasure. The red box is labelled "gcoadd atniy fnck z fbi c rirpx hrfyrom". The green box is labelled "ofurb rz bzbsgtuuocxl ckddwdfiwzjwe ydtd."

Which box contains the treasure?

One is similarly at a loss here.

(By the way, people who said one label was meaningless: this is what a meaningless label looks like.)

There are two boxes on a table, one red and one green. One contains a treasure. The red box is labelled "exactly one of the labels is true". The green box is labelled "the treasure is in this box."

But then the janitor happens by. "Don't be confused by those labels," he says. "They were stuck on there by the previous owner of the boxes, who was an illiterate shoemaker who only spoke Serbian. I think he cut them out of a magazine because he liked the frilly borders."

Which box contains the treasure?

The point being that in the absence of additional information, there is no reason to believe that the labels give any information about the contents of the boxes, or about labels, or about anything at all. This should not come as a surprise to anyone. It is true not just in annoying puzzles, but in the world in general. A box labeled “fresh figs” might contain fresh figs, or spoiled figs, or angry hornets, or nothing at all.

Why doesn't every logic puzzle fall afoul of this problem?

I said as part of the puzzle conditions that there was a treasure in one box. For a fair puzzle, I am required to tell the truth about the puzzle conditions. Otherwise I'm just being a jerk.

Typically the truth or falsity of the labels is part of the puzzle conditions. Here's a typical example, which I took from Raymond Smullyan's What is the name of this book? (problem 67a):

… She had the following inscriptions put on the caskets:
Gold:   THE PORTRAIT IS IN THIS CASKET
Silver: THE PORTRAIT IS NOT IN THIS CASKET
Lead:   THE PORTRAIT IS NOT IN THE GOLD CASKET
Portia explained to the suitor that of the three statements, at most one was true.

Which casket should the suitor choose [to find the portrait]?

Notice that the problem condition gives the suitor a certification about the truth of the labels, on which he may rely. In the quotation above, the certification is in boldface.

A well-constructed puzzle will always contain such a certification, something like “one label is true and one is false” or “on this island, each person always lies, or always tells the truth”. I went to What is the Name of this Book? to get the example above, and found more than I had bargained for: problem 70 is exactly the annoying boxes problem! Smullyan says:

Good heavens, I can take any number of caskets that I please and put an object in one of them and then write any inscriptions at all on the lids; these sentences won't convey any information whatsoever.
(Page 65)

Had I known ahead of time that Smullyan had treated the exact same topic with the exact same example, I doubt I would have written this post at all.

But why is this so surprising?

I don't know.

Final notes

16 people correctly said that the treasure was in the green box. This has to be counted as a lucky guess, unacceptable as a solution to a logic puzzle.

One respondent referred me to a similar post on lesswrong.

I did warn you all that the puzzle was annoying.

I started writing this post in October 2007, and then it sat on the shelf until I got around to finding and photographing the boxes. A triumph of procrastination!

Addendum 2015091

Steven Mazie has written a blog article about this topic, A Logic Puzzle That Teaches a Life Lesson.

Addendum 20240628

It seems that the Lesswrongers call this The Parable of the Dagger. Here's their version of the payoff:

The jester opened the second box, and found a dagger.

“How?!” cried the jester in horror, as he was dragged away. “It’s logically impossible!”

“It is entirely possible,” replied the king. “I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”


[Other articles in category /math/logic] permanent link

Wed, 01 Jul 2015

The annoying boxes puzzle
Here is a logic puzzle. I will present the solution on Friday.

There are two boxes on a table, one red and one green. One contains a treasure. The red box is labelled "exactly one of the labels is true". The green box is labelled "the treasure is in this box."

Can you figure out which box contains the treasure?

The treasure is in the red box
The treasure is in the green box
There is not enough information to determine the answer
Something else:
Starting on 2015-07-03, the solution will be here.


[Other articles in category /math/logic] permanent link

Thu, 18 Jun 2015

Math SE report 2015-05

A lot of the stuff I've written in the past couple of years has been on math.StackExchange. Some of it is pretty mundane, but some is interesting. My summary of April's interesting posts was well-received, so here are the noteworthy posts I made in May 2015.

  • What matrix transforms !!(1,0)!! into !!(2,6)!! and tranforms !!(0,1)!! into !!(4,8)!!? was a little funny because the answer is $$\begin{pmatrix}2 & 4 \\ 6 & 8 \end{pmatrix}$$ and yeah, it works exactly like it appears to, there's no trick. But if I just told the guy that, he might feel unnecessarily foolish. I gave him a method for solving the problem and figured that when he saw what answer he came up with, he might learn the thing that the exercise was designed to teach him.

  • Is a “network topology'” a topological space? is interesting because several people showed up right away to say no, it is an abuse of terminology, and that network topology really has nothing to do with mathematical topology. Most of those comments have since been deleted. My answer was essentially: it is topological, because just as in mathematical topology you care about which computers are connected to which, and not about where any of the computers actually are.

    Nobody constructing a token ring network thinks that it has to be a geometrically circular ring. No, it only has to be a topologically circular ring. A square is fine; so is a triangle; topologically they are equivalent, both in networking and in mathematics. The wires can cross, as long as they don't connect at the crossings. But if you use something that isn't topologically a ring, like say a line or a star or a tree, the network doesn't work.

    The term “topological” is a little funny. “Topos” means “place” (like in “topography” or “toponym”) but in topology you don't care about places.

  • Is there a standard term for this generalization of the Euler totient function? was asked by me. I don't include all my answers in these posts, but I think maybe I should have a policy of including all my questions. This one concerned a simple concept from number theory which I was surprised had no name: I wanted !!\phi_k(n)!! to be the number of integers !!m!! that are no larger than !!n!! for which !!\gcd(m,n) = k!!. For !!k=1!! this is the famous Euler totient function, written !!\varphi(n)!!.

    But then I realized that the reason it has no name is that it's simply !!\phi_k(n) = \varphi\left(\frac n k\right)!! (when !!k!! divides !!n!!; otherwise it is zero), so there's no need for a name or a special notation.

    As often happens, I found the answer myself shortly after I asked the question. I wonder if the reason for this is that my time to come up with the answer is Poisson-distributed. Then if I set a time threshold for how long I'll work on the problem before asking about it, I am likely to find the answer to almost any question that exceeds the threshold shortly after I exceed the threshold. But if I set the threshold higher, this would still be true, so there is no way to win this particular game. Good feature of this theory: I am off the hook for asking questions I could have answered myself. Bad feature: no real empirical support.

  • how many ways can you divide 24 people into groups of two? displays a few oddities, and I think I didn't understand what was going on at that time. OP has calculated the first few special cases:

    1:1 2:1 3:3 4:3 5:12 6:15

    which I think means that there is one way to divide 2 people into groups of 2, 3 ways to divide 4 people, and 15 ways to divide 6 people. This is all correct! But what could the 1:1, 3:3, 5:12 terms mean? You simply can't divide 5 people into groups of 2. Well, maybe OP was counting the extra odd person left over as a sort of group on their own? Then odd values would be correct; I didn't appreciate this at the time.

    But having calculated 6 special cases correctly, why can't OP calculate the seventh? Perhaps they were using brute force: the next value is 48, hard to brute-force correctly if you don't have enough experience with combinatorics.

    I tried to suggest a general strategy: look at special cases, and not by brute force, but try to analyze them so that you can come up with a method for solving them. The method is unnecessary for the small cases, where brute force enumeration suffices, but you can use the brute force enumeration to check that the method is working. And then for the larger cases, where brute force is impractical, you use your method.

    It seems that OP couldn't understand my method, and when they tried to apply it, got wrong answers. Oh well, you can lead a horse to water, etc.

    The other pathology here is:

    I think I did what you said and I got 1.585times 10 to the 21

    for the !!n=24!! case. The correct answer is $$23\cdot21\cdot19\cdot17\cdot15\cdot13\cdot11\cdot9\cdot7\cdot5\cdot3\cdot1 = 316234143225 \approx 3.16\cdot 10^{11}.$$ OP didn't explain how they got !!1.585\cdot10^{21}!! so there's not much hope of correcting their weird error.

    This is someone who probably could have been helped in person, but on the Internet it's hopeless. Their problems are Internet communication problems.
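
    The correct count above has a one-line recurrence: the first person pairs with any of the other !!n-1!!, and the rest pair up among themselves. A sketch:

        def pairings(n):
            """Ways to divide n people (n even) into unordered pairs."""
            return 1 if n == 0 else (n - 1) * pairings(n - 2)

        print([pairings(n) for n in (2, 4, 6)])   # [1, 3, 15], matching OP's even cases
        print(pairings(24))                       # 316234143225, about 3.16e11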

  • Lambda calculus typing isn't especially noteworthy, but I wrote a fairly detailed explanation of the algorithm that Haskell or SML uses to find the type of an expression, and that might be interesting to someone.

  • I think Special representation of a number is the standout post of the month. OP speculates that, among numbers of the form !!pq+rs!! (where !!p,q,r,s!! are prime), the choice of !!p,q,r,s!! is unique. That is, the mapping !!\langle p,q,r,s\rangle \to pq+rs!! is reversible.

    I was able to guess that this was not the case within a couple of minutes, replied pretty much immediately:

    I would bet money against this representation being unique.

    I was sure that a simple computer search would find counterexamples. In fact, the smallest is !!11\cdot13 + 19\cdot 29 = 11\cdot 43 + 13\cdot 17 = 694!! which is small enough that you could find it without the computer if you are patient.

    The obvious lesson to learn from this is that many elementary conjectures of this type can be easily disproved by a trivial computer search, and I frequently wonder why more amateur mathematicians don't learn enough computer programming to investigate this sort of thing. (I wrote recently on the topic of An ounce of theory is worth a pound of search, and this is an interesting counterpoint to that.)
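
    The search really is trivial. A sketch, assuming (as in the example) that the four primes in each representation are distinct:

        from collections import defaultdict
        from itertools import combinations
        from math import isqrt

        def primes_up_to(n):
            sieve = [True] * (n + 1)
            sieve[0] = sieve[1] = False
            for i in range(2, isqrt(n) + 1):
                if sieve[i]:
                    sieve[i * i :: i] = [False] * len(range(i * i, n + 1, i))
            return [i for i, is_p in enumerate(sieve) if is_p]

        pairs = list(combinations(primes_up_to(50), 2))
        reps = defaultdict(set)
        for (p, q), (r, s) in combinations(pairs, 2):   # two different products
            if len({p, q, r, s}) == 4:
                reps[p * q + r * s].add(((p, q), (r, s)))

        collisions = {n: v for n, v in reps.items() if len(v) > 1}
        print(min(collisions))           # 694, as promised
        print(sorted(collisions[694]))   # includes ((11,13),(19,29)) and ((11,43),(13,17))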

    But the most interesting thing here is how I was able to instantly guess the answer. I explained in some detail in the post. But the basic line of reasoning goes like this.

    Additive properties of the primes are always distributed more or less at random unless there is some obvious reason why they can't be. For example, let !!p!! be prime and consider !!2p+1!!. This must have exactly one of the three forms !!3n-1, 3n,!! or !!3n+1!! for some integer !!n!!. It obviously has the form !!3n+1!! almost never (the only exception is !!p=3!!). But of the other two forms there is no obvious reason to prefer one over the other, and indeed of the primes up to 10,000, 611 are of the type !!3n!! and 616 are of the type !!3n-1!!.

    So we should expect the value !!pq+rs!! to be distributed more or less randomly over the set of outputs, because there's no obvious reason why it couldn't be, except for simple stuff, like that it's obviously almost always even.

    So we are throwing a bunch of balls at random into bins, and the claim is that no bin should contain more than one ball. For that to happen, there must be vastly more bins than balls. But the bins are numbers, and primes are not at all uncommon among numbers, so the number of bins isn't vastly larger, and there ought to be at least some collisions.

    In fact, a more careful analysis, which I wrote up on the site, shows that the number of balls is vastly larger—to have them be roughly the same, you would need primes to be roughly as common as perfect squares, but they are far more abundant than that—so as you take larger and larger primes, the number of collisions increases enormously and it's easy to find twenty or more quadruples of primes that all map to the same result. But I was able to predict this after a couple of minutes of thought, from completely elementary considerations, so I think it's a good example of Lower Mathematics at work.

    This is an example of a fairly common pathology of math.se questions: OP makes a conjecture that !!X!! never occurs or that there are no examples with property !!X!!, when actually !!X!! almost always occurs or every example has property !!X!!.

    I don't know what causes this. Rik Signes speculates that it's just wishful thinking: OP is doing some project where it would be useful to have !!pq+rs!! be unique, so posts in hope that someone will tell them that it is. But there was nothing more to it than baseless hope. Rik might be right.

[ Addendum 20150619: A previous version of this article included the delightful typo “mathemativicians”. ]


[Other articles in category /math/se] permanent link

Sun, 14 Jun 2015

Math SE report 2015-06

[ This page originally held the report for April 2015, which has moved. It now contains the report for June 2015. ]

  • Is “smarter than” a transitive relationship? concerns a hypothetical "is smarter than" relation with the following paradoxical-seeming property:

    most X's are smarter than most Y's, but most Y's are such that it is not the case that most X's are smarter than it.

    That is, if !!\mathsf Mx.\Phi(x)!! means that most !!x!! have property !!\Phi!!, then we want both $$\mathsf Mx.\mathsf My.S(x, y)$$ and also $$\mathsf My.\mathsf Mx.\lnot S(x, y).$$

    “Most” is a little funny here: what does it mean? But we can pin it down by supposing that there are an infinite number of !!x!!es and !!y!!s, and agreeing that most !!x!! have property !!P!! if there are only a finite number of exceptions. For example, everyone should agree that most positive integers are larger than 7 and that most prime numbers are odd. The jargon word here is that we are saying that a subset contains “most of” the elements of a larger set if it is cofinite.

    There is a model of this property, and OP reports that they asked the prof if this was because the "smarter than" relation !!S(x,y)!! could be antitransitive, so that one might have !!S(x,y), S(y,z)!! but also !!S(z,x)!!. The prof said no, it's not because of that, but OP wanted to argue that it was anyway. But no, it's not because of that; there is a model that uses a perfectly simple transitive relation, and the nontransitive thing is nothing but a distraction. (The model maps the !!x!!es and !!y!!s onto numbers, and says !!x!! is smarter than !!y!! if its number is bigger.) Despite this OP couldn't give up the idea that the model exists because of intransitive relations. It's funny how sometimes people get stuck on one idea and can't let go of it.

  • How to generate a random number between 1 and 10 with a six-sided die? was a lot of fun and attracted several very good answers. Top-scoring is Jack D'Aurizio's, which proposes a completely straightforward method: roll once to generate a bit that selects !!N=0!! or !!N=5!!, and then roll again until you get !!M\ne 6!!, and the result is !!N+M!!.
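
    That method is only a few lines of code (a sketch; the names are mine):

        import random

        def d6():
            return random.randint(1, 6)

        def d10():
            n = 0 if d6() <= 3 else 5     # one roll picks the low or high half
            m = d6()
            while m == 6:                 # re-roll sixes
                m = d6()
            return n + m                  # uniform on 1..10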

    But several other answers were suggested, including two by me, one explaining the general technique of arithmetic coding, which I'll probably refer back to in the future when people ask similar questions. Don't miss NovaDenizen's clever simplification of arithmetic coding, which I want to think about more, or D'Aurizio's suggestion that if you threw the die into a V-shaped trough, it would land with one edge pointing up and thus select a random number from 1 to 12 in a single throw.

    Interesting question: Is there an easy-to-remember mapping from edges to numbers from 1–12? Each edge is naturally identified by a pair of distinct integers from 1–6 that do not add to 7.

  • The oddly-phrased Category theory with objects as logical expressions over !!\{\vee,\wedge,\neg\}!! and morphisms as? asks if there is a standard way to turn logical expressions into a category, which there is: you put an arrow from !!A\to B!! for each proof that !!A!! implies !!B!!; composition of arrows is concatenation of proofs, and identity arrows are empty proofs. The categorial product, coproduct, and exponential then correspond to !!\land, \lor,!! and !!\to!!.

    This got me thinking though. Proofs are properly not lists, they are trees, so it's not entirely clear what the concatenation operation is. For example, suppose proof !!X!! concludes !!A!! at its root and proof !!Y!! assumes !!A!! in more than one leaf. When you concatenate !!X!! and !!Y!! do you join all the !!A!!'s, or what? I really need to study this more. Maybe the Lambek and Scott book talks about it, or maybe the Goldblatt Topoi book, which I actually own. I somehow skipped most of the Cartesian closed category stuff, which is an oversight I ought to correct.

  • In Why is the Ramsey`s theorem a generalization of the Pigeonhole principle I gave what I thought was a terrific answer, showing how Ramsey's graph theorem and the pigeonhole principle are both special cases of Ramsey's hypergraph theorem. This might be my favorite answer of the month. It got several upvotes, but OP preferred a different answer, with fewer details.

    There was a thread a while back about theorems which are generalizations of other theorems in non-obvious ways. I pointed out the Yoneda lemma was a generalization of Cayley's theorem from group theory. I see that nobody mentioned the Ramsey hypergraph theorem being a generalization of the pigeonhole principle, but it's closed now, so it's too late to add it.

  • In Why does the Deduction Theorem use Union? I explained that the English word and actually has multiple meanings. I know I've seen this discussed in elementary logic texts but I don't remember where.

  • Finally, Which is the largest power of natural number that can be evaluated by computers? asks if it's possible for a computer to calculate !!7^{120000000000}!!. The answer is yes, but it's nontrivial and you need to use some tricks. You have to use the exponentiation-by-squaring trick, and for the squarings you probably want to do the multiplication with DFT. OP was dissatisfied with the answer, and seemed to have some axe to grind, but I couldn't figure out what it was.
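
    The squaring trick itself is tiny — the hard part of the original question is only the size of the numbers (the result would have about !!1.2\cdot10^{11}\cdot\log_{10}7 \approx 10^{11}!! digits). A sketch:

        def power(b, e):
            """Compute b**e with O(log e) multiplications."""
            result = 1
            while e:
                if e & 1:
                    result *= b
                b *= b
                e >>= 1
            return result

        assert power(7, 120) == 7 ** 120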


[Other articles in category /math/se] permanent link

Fri, 20 Mar 2015

Rectangles with equal area and perimeter

Wednesday while my 10-year-old daughter Katara was doing her math homework, she observed with pleasure that a !!6×3!! rectangle has a perimeter of 18 units and also an area of 18 square units. I mentioned that there was an infinite family of such rectangles, and, after a small amount of tinkering, that the only other such rectangle with integer sides is a !!4×4!! square, so in a sense she had found the single interesting example. She was very interested in how I knew this, and I promised to show her how to figure it out once she finished her homework. She didn't finish before bedtime, so we came back to it the following evening.

This is just one of many examples of how she has way too much homework, and how it interferes with her education.

She had already remarked that she knew how to write an equation expressing the condition she wanted, so I asked her to do that; she wrote $$(L×W) = ([L+W]×2).$$ I remember being her age and using all different shapes of parentheses too. I suggested that she should solve the equation for !!W!!, getting !!W!! on one side and a bunch of stuff involving !!L!! on the other, but she wasn't sure how to do it, so I offered suggestions while she moved the symbols around, eventually obtaining $$W = 2L\div (L-2).$$ I would have written it as a fraction, but getting the right answer is important, and using the same notation I would use is much less so, so I didn't say anything.

I asked her to plug in !!L=3!! and observe that !!W=6!! popped right out, and then similarly that !!L=6!! yields !!W=3!!, and then I asked her to try the other example she knew. Then I suggested that she see what !!L=5!! did: it gives !!W=\frac{10}3!!, This was new, so she checked it by calculating the area and the perimeter, both !!\frac{50}3!!. She was very excited by this time. As I have mentioned earlier, algebra is magical in its ability to mechanically yield answers to all sorts of questions. Even after thirty years I find it astonishing and delightful. You set up the equations, push the symbols around, and all sorts of stuff pops out like magic. Calculus is somehow much less astonishing; the machinery is all explicit. But how does algebra work? I've been thinking about this on and off for a long time and I'm still not sure.

At that point I took over because I didn't think I would be able to guide her through the next part of the problem without a demonstration; I wanted to graph the function !!W=2L\div(L-2)!! and she does not have much experience with that. She put in the five points we already knew, which already lie on a nice little curve, and then she asked an incisive question: does it level off, or does it keep going down, or what? We discussed what happens when !!L!! gets close to 2; then !!W!! shoots up to infinity. And when !!L!! gets big, say a million, you can see from the algebra that !!W!! is a hair more than 2. So I drew in the asymptotes on the hyperbola.

Katara is not yet familiar with hyperbolas. (She has known about parabolas since she was tiny. I have a very fond memory of visiting Portland with her when she was almost two, and we entered Holladay park, which has fountains that squirt out of the ground. Seeing the water arching up before her, she cried delightedly “parabolas!”)


Once you know how the graph behaves, it is a simple matter to see that there are no integer solutions other than !!\langle 3,6\rangle, \langle 4,4\rangle,!! and !!\langle6,3\rangle!!. We know that !!L=5!! does not work. For !!L>6!! the value of !!W!! is always strictly between !!2!! and !!3!!. For !!L=2!! there is no value of !!W!! that works at all. For !!0\lt L\lt 2!! the formula says that !!W!! is negative, on the other branch of the hyperbola, which is a perfectly good numerical solution (for example, !!L=1, W=-2!!) but makes no sense as the width of a rectangle. So it was a good lesson about how mathematical modeling sometimes introduces solutions that are wrong, and how you have to translate the solutions back to the original problem to see if they make sense.
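
For readers who like to check this sort of claim by machine, a few lines of Perl (mine, not part of the original discussion) confirm that the search over integer sides turns up nothing else:

    # look for integer rectangles whose area equals their perimeter
    for my $l (1 .. 1000) {
        for my $w (1 .. 1000) {
            print "$l x $w\n" if $l * $w == 2 * ($l + $w);
        }
    }

This prints only 3 x 6, 4 x 4, and 6 x 3, as the hyperbola argument predicts.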

[ Addendum 20150330: Thanks to Steve Hastings for his plot of the hyperbola, which is in the public domain. ]


[Other articles in category /math] permanent link

Thu, 19 Mar 2015

An ounce of theory is worth a pound of search

The computer is really awesome at doing quick searches for numbers with weird properties, and people with an amateur interest in recreational mathematics would do well to learn some simple programming. People appear on math.stackexchange quite often with questions about tic-tac-toe, but there are only 5,478 total positions, so any question you want to ask can be instantaneously answered by an exhaustive search. An amateur showed up last fall asking “Is it true that no prime larger than 241 can be made by either adding or subtracting 2 coprime numbers made up out of the prime factors 2,3, and 5?” and, once you dig through the jargon, the question is easily answered by the computer, which quickly finds many counterexamples, such as !!162+625=787!! and !!2^{19}+3^4=524369!!.
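
Here is roughly how such a search might go. This is a sketch of mine, not the code actually used, and it checks only sums; differences are analogous. It generates the numbers whose only prime factors are 2, 3, and 5, then tests coprime pairs. Its output includes both counterexamples mentioned above.

    sub is_prime {
        my ($n) = @_;
        return 0 if $n < 2;
        for (my $d = 2; $d * $d <= $n; $d++) { return 0 if $n % $d == 0 }
        return 1;
    }

    sub gcd { my ($a, $b) = @_; ($a, $b) = ($b, $a % $b) while $b; $a }

    my @smooth;                       # numbers of the form 2^i 3^j 5^k
    for my $i (0 .. 19) {
        for my $j (0 .. 12) {
            for my $k (0 .. 8) {
                my $n = 2**$i * 3**$j * 5**$k;
                push @smooth, $n if $n <= 1_000_000;
            }
        }
    }

    for my $a (@smooth) {
        for my $b (@smooth) {
            next unless $a < $b && gcd($a, $b) == 1;
            my $s = $a + $b;
            print "$a + $b = $s\n" if $s > 241 && is_prime($s);
        }
    }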

But sometimes the search appears too large to be practical, and then you need to apply theory. Sometimes you can deploy a lot of theory and solve the problem completely, avoiding the search. But theory is expensive, and not always available. A hybrid approach often works, which uses a tiny amount of theory to restrict the search space to the point where the search is easy.

One of these I wrote up on this blog back in 2006:

A number !!n!! is excellent if it has an even number of digits, and if when you chop it into a front half !!a!! and a back half !!b!!, you have !!b^2 - a^2 = n!!. For example, !!48!! is excellent, because !!8^2 - 4^2 = 48!!, and !!3468!! is excellent, because !!68^2 - 34^2 = 4624 - 1156 = 3468!!.

The programmer who gave me this problem had tried a brute-force search over all numbers, but to find all 10-digit excellent numbers, this required an infeasible search of 9,000,000,000 candidates. With the application of a tiny amount of algebra, one finds that !!a(10^k+a) = b^2-b!! and it's not hard to quickly test candidates for !!a!! to see if !!a(10^k+a)!! has this form and if so to find the corresponding value of !!b!!. (Details are in the other post.) This reduces the search space for 10-digit excellent numbers from 9,000,000,000 candidates to 90,000, which could be done in under a minute even with last-century technology, and is pretty nearly instantaneous on modern equipment.
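
The details are in the other post, but the reduced search looks something like this sketch (mine): for each front half !!a!!, solve !!b^2-b = a(10^k+a)!! for !!b!! and keep the candidates where !!b!! comes out a !!k!!-digit integer.

    use POSIX qw(floor);

    my $k = 5;                          # half the number of digits
    for my $a (10**($k-1) .. 10**$k - 1) {
        my $t = $a * (10**$k + $a);     # an excellent number needs b^2 - b == t
        my $b = floor((1 + sqrt(4 * $t + 1)) / 2);
        for my $c ($b - 1 .. $b + 1) {  # allow for roundoff in sqrt()
            printf "%d%0${k}d\n", $a, $c
                if $c >= 0 && $c < 10**$k && $c * $c - $c == $t;
        }
    }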

But anyway, the real point of this note is to discuss a different problem entirely. A recreational mathematician on stackexchange wanted to find distinct integers !!a,b,c,d!! for which !!a^2+b^2, b^2+c^2, c^2+d^2, !! and !!d^2+a^2!! were all perfect squares. You can search over all possible quadruples of numbers, but this takes a long time. The querent indicated later that he had tried such a search but lost patience before it yielded anything.

Instead, observe that if !!a^2+b^2!! is a perfect square then !!a!! and !!b!! are the legs of a right triangle with integer sides; they are terms in what is known as a Pythagorean triple. The prototypical example is !!3^2 + 4^2 = 5^2!!, and !!\langle 3,4,5\rangle!! is the corresponding Pythagorean triple. (The querent was quite aware that he was asking for Pythagorean triples, and mentioned them specifically.)

Here's the key point: It has been known since ancient times that if !!\langle a,b,c\rangle!! is a primitive Pythagorean triple (one in which !!a, b,!! and !!c!! have no common factor), then there exist integers !!m!! and !!n!! such that: $$\begin{align} \require{align} a & = n^2-m^2 \\ b & = 2mn \\ c & = n^2 + m^2 \end{align}$$

So you don't have to search for Pythagorean triples; you can just generate them with no searching:

    for my $m (1 .. 200) {
      for my $n ($m+1 .. 200) {
        my $a = $n*$n-$m*$m;
        my $b = 2 * $n * $m;
        $trip{$a}{$b} = 1;
        $trip{$b}{$a} = 1;
      }
    }

This builds a hash table, %trip, with two important properties:

  1. $trip{$a} is a sub-table whose keys are all the numbers that the loops found to form a triple with !!a!!. For example, $trip{20} is a hash with three keys: 21, 48, and 99, because !!20^2+21^2 = 29^2, 20^2+48^2= 52^2, !! and !!20^2 + 99^2 = 101^2!!, and the loops recorded no other triples containing 20.

  2. $trip{$a}{$b} is true if the table knows that !!a^2+b^2!! is a perfect square, and false otherwise.

The table has only around 40,000 entries. Having constructed it, we now search it:

    for my $a (keys %trip) {
      for my $b (keys %{$trip{$a}}) {
        for my $c (keys %{$trip{$b}}) {
          next if $c == $a;
          for my $d (keys %{$trip{$c}}) {
            next if $d == $b;
            print "$a $b $c $d\n" if $trip{$d}{$a};
          }
        }
      }
    }

The outer loop runs over each !!a!! that is known to be a member of a Pythagorean triple. (Actually the !!m,n!! formulas show that every number bigger than 2 is a member of some triple, but we may as well skip the ones that are only in triples we didn't tabulate.) Then the next loop runs over every !!b!! that can possibly form a triple with !!a!!; that is, every !!b!! for which !!a^2+b^2!! is a perfect square. We don't have to search for them; we have them tabulated ahead of time. Then for each such !!b!! (and there aren't very many) we run over every !!c!! that forms a triple with !!b!!, and again there is no searching and very few candidates. Then similarly !!d!!, and if the !!d!! we try forms a triple with !!a!!, we have a winner.

The next if $c == $a and next if $d == $b tests are to rule out trivial solutions like !!a=c=3, b=d=4!!, which the querent wasn't interested in anyway. We don't have to test for equality of any of the other pairs because no number can form a Pythagorean triple with itself (because !!\sqrt2!! is irrational).

This runs in less than a second on so-so hardware and produces 11 solutions:

    3472  7296  10400 2175
    4312  23520 12008 465
    6512  9984  800   6375
    12312 666   1288  8415
    14592 6944  4350  20800
    16830 2576  1332  24624
    19968 13024 12750 1600
    25500 26048 39936 3200
    30192 6175  2400  9856
    41600 29184 13888 8700
    47040 8624  930   24016

Only five of these are really different. For example, the last one is the same as the second, with every element multiplied by 2; the third, seventh, and eighth are similarly the same. In general if !!\langle a,b,c,d\rangle!! is a solution, so is !!\langle ka, kb,kc,kd\rangle!! for any !!k!!. A slightly improved version would require that the four numbers not have any common factor greater than 1; there are few enough solutions that the cost of this test would be completely negligible.

The only other thing wrong with the program is that it produces each solution 8 times; if !!\langle a,b,c,d\rangle!! is a solution, then so are !!\langle b,c,d,a\rangle, \langle d,c,b,a\rangle,!! and so on. This is easily fixed with a little post-filtering; pipe the output through

    perl -nle '$k = join " ",  sort { $a <=> $b } split; print unless $seen{$k}++ '

or something of that sort.
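
Combining the duplicate filter with the common-factor test suggested above gives a slightly fancier post-filter. This is a sketch of mine, not from the original discussion:

    sub gcd { my ($a, $b) = @_; ($a, $b) = ($b, $a % $b) while $b; $a }

    my %seen;
    while (<>) {
        my @n = split;
        my $g = 0;
        $g = gcd($g, $_) for @n;      # common factor of all four numbers
        my $key = join " ", sort { $a <=> $b } map { $_ / $g } @n;
        print unless $seen{$key}++;   # print one representative per class
    }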

The corresponding run with !!m!! and !!n!! up to 2,000 instead of only 200 takes 5 minutes and finds 445 solutions, of which 101 are distinct, including !!\langle 3614220, 618192, 2080820, 574461\rangle!!. It would take a very long time to find this with a naïve search.

[ For a much larger and more complex example of the same sort of thing, see When do !!n!! and !!2n!! have the same digits?. I took a seemingly-intractable problem and analyzed it mathematically. I used considerably more than an ounce of theory in this case, and while the theory was not enough to solve the problem, it was enough to reduce the pool of candidates to the point that a computer search was feasible. ]

[ Addendum 20150728: Another example ]


[Other articles in category /math] permanent link

Sat, 22 Nov 2014

Within this instrument, resides the Universe

When opportunity permits, I have been trying to teach my ten-year-old daughter Katara rudiments of algebra and group theory. Last night I posed this problem:

Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, she will be twice as old as Sue. How old are they now?

I have tried to teach Katara that these problems have several phases. In the first phase you translate the problem into algebra, and then in the second phase you manipulate the symbols, almost mechanically, until the answer pops out as if by magic.

There is a third phase, which is pedagogically and practically essential. This is to check that the solution is correct by translating the results back to the context of the original problem. It's surprising how often teachers neglect this step; it is as if a magician who had made a rabbit vanish from behind a screen then forgot to take away the screen to show the audience that the rabbit had vanished.

Katara set up the equations, not as I would have done, but using four unknowns, to represent the two ages today and the two ages in the future:

$$\begin{align} MT & = 3ST \\ MY & = 2SY \\ \end{align} $$

(!!MT!! here is the name of a single variable, not a product of !!M!! and !!T!!; the others should be understood similarly.)

“Good so far,” I said, “but you have four unknowns and only two equations. You need to find two more relationships between the unknowns.” She thought a bit and then wrote down the other two relations:

$$\begin{align} MY & = MT + 2 \\ SY & = ST + 2 \end{align} $$

I would have written two equations in two unknowns:

$$\begin{align} M_T & = 3S_T\\ M_T+2 & = 2(S_T + 2) \end{align} $$

but one of the best things about mathematics is that there are many ways to solve each problem, and no method is privileged above any other except perhaps for reasons of practicality. Katara's translation is different from what I would have done, and it requires more work in phase 2, but it is correct, and I am not going to tell her to do it my way. The method works both ways; this is one of its best features. If the problem can be solved by thinking of it as a problem in two unknowns, then it can also be solved by thinking of it as a problem in four or in eleven unknowns. You need to find more relationships, but they must exist and they can be found.

Katara may eventually want to learn a technically easier way to do it, but to teach that right now would be what programmers call a premature optimization. If her formulation of the problem requires more symbol manipulation than what I would have done, that is all right; she needs practice manipulating the symbols anyway.
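
For the record, here is how my two-equation version resolves. Substituting the first equation into the second: $$\begin{align} 3S_T+2 &= 2(S_T+2) \\ 3S_T+2 &= 2S_T+4 \\ S_T &= 2, \qquad M_T = 3S_T = 6. \end{align}$$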

She went ahead with the manipulations, reducing the system of four equations to three, then two and then one, solving the one equation to find the value of the single remaining unknown, and then substituting that value back to find the other unknowns. One nice thing about these simple problems is that when the solution is correct you can see it at a glance: Mary is six years old and Sue is two, and in two years they will be eight and four. Katara loves picking values for the unknowns ahead of time, writing down a random set of relations among those values, and then working the method and seeing the correct answer pop out. I remember being endlessly delighted by almost the same thing when I was a little older than her. In The Dying Earth Jack Vance writes of a wizard who travels to an alternate universe to learn from the master “the secret of renewed youth, many spells of the ancients, and a strange abstract lore that Pandelume termed ‘Mathematics.’”

“I find herein a wonderful beauty,” he told Pandelume. “This is no science, this is art, where equations fall away to elements like resolving chords, and where always prevails a symmetry either explicit or multiplex, but always of a crystalline serenity.”

After Katara had solved this problem, I asked if she was game for something a little weird, and she said she was, so I asked her:

Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, they will be the same age. How old are they now?

“WHAAAAAT?” she said. She has a good number sense, and immediately saw that this was a strange set of conditions. (If they aren't the same age now, how can they be the same age in two years?) She asked me what would happen. I said (truthfully) that I wasn't sure, and suggested she work through it to find out. So she set up the equations as before and worked out the solution, which is obvious once you see it: Both girls are zero years old today, and zero is three times as old as zero. Katara was thrilled and delighted, and shared her discovery with her mother and her aunt.
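(In my two-unknown formulation the degenerate answer falls right out: !!M_T = 3S_T!! and !!M_T+2 = S_T+2!! give !!3S_T = S_T!!, so !!S_T = 0!! and !!M_T = 0!!.)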

There are some powerful lessons here. One is that the method works even when the conditions seem to make no sense; often the results pop out just the same, and can sometimes make sense of problems that seem ill-posed or impossible. Once you have set up the equations, you can just push the symbols around and the answer will emerge, like a familiar building approached through a fog.

But another lesson, only hinted at so far, is that mathematics has its own way of understanding things, and this is not always the way that humans understand them. Goethe famously said that whatever you say to mathematicians, they immediately translate it into their own language and then it is something different; I think this is exactly what he meant.

In this case it is not too much of a stretch to agree that Mary is three times as old as Sue when they are both zero years old. But in the future I plan to give Katara a problem that requires Mary and Sue to have negative ages—say that Mary is twice as old as Sue today, but in three years Sue will be twice as old—to demonstrate that the answer that pops out may not be a reasonable one, or that the original translation into mathematics can lose essential features of the original problem. The solution that says that !!M_T=-2, S_T=-1 !! is mathematically irreproachable, and if the original problem had been posed as “Find two numbers such that…” it would be perfectly correct. But translated back to the original context of a problem that asks about the ages of two sisters, the solution is unacceptable. This is the point of the joke about the spherical cow.


[Other articles in category /math] permanent link

Wed, 23 Jul 2014

When do n and 2n have the same digits?

[This article was published last month on the math.stackexchange blog, which seems to have died young, despite many earnest-sounding promises beforehand from people who claimed they would contribute material. I am repatriating it here.]

A recent question on math.stackexchange asks for the smallest positive integer !!A!! for which the number !!2A!! has the same decimal digits in some other order.

Math geeks may immediately realize that !!142857!! has this property, because it is the first 6 digits of the decimal expansion of !!\frac 17!!, and the cyclic behavior of the decimal expansion of !!\frac n7!! is well-known. But is this the minimal solution? It is not. Brute-force enumeration of the solutions quickly reveals that there are 12 solutions of 6 digits each, all permutations of !!142857!!, and that larger solutions, such as 1025874 and 1257489, seem to follow a similar pattern. What is happening here?
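
The brute-force enumeration is only a few lines; here is a sketch in Perl (mine):

    sub digits { join "", sort split //, shift }

    for my $A (1 .. 1_000_000) {
        print "$A\n" if digits($A) eq digits(2 * $A);   # first hit is 125874
    }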

Stuck in Dallas-Fort Worth airport one weekend, I did some work on the problem, and although I wasn't able to solve it completely, I made significant progress. I found a method that allows one to hand-calculate that there is no solution with fewer than six digits, and to enumerate all the solutions with 6 digits, including the minimal one. I found an explanation for the surprising behavior that solutions tend to be permutations of one another. The short form of the explanation is that there are fairly strict conditions on which sets of digits can appear in a solution of the problem. But once the set of digits is chosen, the conditions on the order of the digits in the solution are fairly lax.

So one typically sees, not only in base 10 but in other bases, that the solutions to this problem fall into a few classes that are all permutations of one another; this is exactly what happens in base 10 where all the 6-digit solutions are permutations of !!124578!!. As the number of digits is allowed to increase, the strict first set of conditions relaxes a little, and other digit groups appear as solutions.

Notation

The property of interest, !!P_R(A)!!, is that the numbers !!A!! and !!B=2A!! have exactly the same base-!!R!! digits. We would like to find numbers !!A!! having property !!P_R!! for various !!R!!, and we are most interested in !!R=10!!. Suppose !!A!! is an !!n!!-digit numeral having property !!P_R!!; let the (base-!!R!!) digits of !!A!! be !!a_{n-1}\ldots a_1a_0!! and similarly the digits of !!B = 2A!! are !!b_{n-1}\ldots b_1b_0!!. The reader is encouraged to keep in mind the simple example of !!R=8, n=4, A=\mathtt{1042}, B=\mathtt{2104}!! which we will bring up from time to time.

Since the digits of !!B!! and !!A!! are the same, in a different order, we may say that !!b_i = a_{P(i)}!! for some permutation !!P!!. In general !!P!! might have more than one cycle, but we will suppose that !!P!! is a single cycle. All the following discussion of !!P!! will apply to the individual cycles of !!P!! in the case that !!P!! is a product of two or more cycles. For our example of !!a=\mathtt{1042}, b=\mathtt{2104}!!, we have !!P = (0\,1\,2\,3)!! in cycle notation. We won't need to worry about the details of !!P!!, except to note that !!i, P(i), P(P(i)), \ldots, P^{n-1}(i)!! completely exhaust the indices !!0, \ldots, n-1!!, and that !!P^n(i) = i!! because !!P!! is an !!n!!-cycle.

Conditions on the set of digits in a solution

For each !!i!! we have $$a_{P(i)} = b_{i} \equiv 2a_{i} + c_i\pmod R $$ where the ‘carry bit’ !!c_i!! is either 0 or 1 and depends on whether there was a carry when doubling !!a_{i-1}!!. (When !!i=0!! we are in the rightmost position and there is never a carry, so !!c_0= 0!!.) We can then write:

$$\begin{align} a_{P(P(i))} &= 2a_{P(i)} + c_{P(i)} \\ &= 2(2a_{i} + c_i) + c_{P(i)} &&= 4a_i + 2c_i + c_{P(i)}\\ a_{P(P(P(i)))} &= 2(4a_i + 2c_i + c_{P(i)}) + c_{P(P(i))} &&= 8a_i + 4c_i + 2c_{P(i)} + c_{P(P(i))}\\ &&&\vdots\\ a_{P^n(i)} &&&= 2^na_i + v \end{align} $$

all equations taken !!\bmod R!!. But since !!P!! is an !!n!!-cycle, !!P^n(i) = i!!, so we have $$a_i \equiv 2^na_i + v\pmod R$$ or equivalently $$\big(2^n-1\big)a_i + v \equiv 0\pmod R\tag{$\star$}$$ where !!v\in\{0,\ldots 2^n-1\}!! depends only on the values of the carry bits !!c_i!!—the !!c_i!! are precisely the binary digits of !!v!!.

Specifying a particular value of !!a_0!! and !!v!! that satisfy this equation completely determines all the !!a_i!!. For example, !!a_0 = 2, v = \color{darkblue}{0010}_2 = 2!! is a solution when !!R=8, n=4!! because !!\bigl(2^4-1\bigr)\cdot2 + 2\equiv 0\pmod 8!!, and this solution allows us to compute

$$\def\db#1{\color{darkblue}{#1}}\begin{align} a_0&&&=2\\ a_{P(0)} &= 2a_0 &+ \db0 &= 4\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 0 \\ a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 1\\ \hline a_{P^4(0)} &= 2a_{P^3(0)} &+ \db0 &= 2\\ \end{align}$$

where the carry bits !!c_i = \langle 0,0,1,0\rangle!! are visible in the third column, and all the sums are taken !!\pmod 8!!. Note that !!a_{P^n(0)} = a_0!! as promised. This derivation of the entire set of !!a_i!! from a single one plus a choice of !!v!! is crucial, so let's see one more example. Let's consider !!R=10, n=3!!. Then we want to choose !!a_0!! and !!v!! so that !!\left(2^3-1\right)a_0 + v \equiv 0\pmod{10}!! where !!v\in\{0\ldots 7\}!!. One possible solution is !!a_0 = 5, v=\color{darkblue}{101}_2 = 5!!. Then we can derive the other !!a_i!! as follows:

$$\begin{align} a_0&&&=5\\ a_{P(0)} &= 2a_0 &+ \db1 &= 1\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 2 \\\hline a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 5\\ \end{align}$$

And again we have !!a_{P^n(0)}= a_0!! as required.

Since the bits of !!v!! are used cyclically, not every pair of !!\langle a_0, v\rangle!! will yield a different solution. Rotating the bits of !!v!! and pairing them with different choices of !!a_0!! will yield the same cycle of digits starting from a different place. In the first example above, we had !!a_0 = 2, v = 0010_2 = 2!!. If we were to take !!a_0 = 4, v = 0100_2 = 4!! (which also solves !!(\star)!!) we would get the same cycle of values of the !!a_i!! but starting from !!4!! instead of from !!2!!, and similarly if we take !!a_0=0, v = 1000_2 = 8!! or !!a_0 = 1, v = 0001_2 = 1!!. So we can narrow down the solution set of !!(\star)!! by considering only the so-called necklaces of !!v!! rather than all !!2^n!! possible values. Two values of !!v!! are considered equivalent as necklaces if one is a rotation of the other. When a set of !!v!!-values are equivalent as necklaces, we need only consider one of them; the others will give the same cyclic sequence of digits, but starting in a different place. For !!n=4!!, for example, the necklaces are !!0000, 0001, 0011, 0101, 0111, !! and !!1111!!; the sequences !!0110, 1100,!! and !!1001!! being equivalent to !!0011!!, and so on.
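
If you would rather have the computer pick out the necklace representatives, here is a sketch of the filter (my code): keep !!v!! only if no rotation of its bit pattern is numerically smaller.

    sub is_necklace {
        my ($v, $n) = @_;
        my $bits = sprintf "%0${n}b", $v;
        for my $k (1 .. $n - 1) {
            my $rot = substr($bits, $k) . substr($bits, 0, $k);
            return 0 if oct("0b$rot") < $v;   # a smaller rotation exists
        }
        return 1;
    }

    print join(", ", grep { is_necklace($_, 4) } 0 .. 15), "\n";
    # prints 0, 1, 3, 5, 7, 15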

Example

Let us take !!R=9, n=3!!, so we want to find 3-digit numerals with property !!P_9!!. According to !!(\star)!! we need !!7a_i + v \equiv 0\pmod{9}!! where !!v\in\{0\ldots 7\}!!. There are 9 possible values for !!a_i!!; for each one there is at most one possible value of !!v!! that makes the sum zero:

$$\begin{array}{rrr} a_i & 7a_i & v \\ \hline 0 & 0 & 0 \\ 1 & 7 & 2 \\ 2 & 14 & 4 \\ 3 & 21 & 6 \\ 4 & 28 & \\ 5 & 35 & 1 \\ 6 & 42 & 3 \\ 7 & 49 & 5 \\ 8 & 56 & 7 \\ \end{array} $$

(For !!a_i=4!! there is no solution.) We may disregard the non-necklace values of !!v!!, as these will give us solutions that are the same as those given by necklace values of !!v!!. The necklaces are:

$$\begin{array}{rl} 000 & 0 \\ 001 & 1 \\ 011 & 3 \\ 111 & 7 \end{array}$$

so we may disregard the solutions except when !!v=0,1,3,7!!. Calculating the digit sequences from these four values of !!v!! and the corresponding !!a_i!! we find:

$$\begin{array}{ccl} a_0 & v & \text{digits} \\ \hline 0 & 0 & 000 \\ 5 & 1 & 512 \\ 6 & 3 & 637 \\ 8 & 7 & 888 \\ \end{array} $$

(In the second line, for example, we have !!v=1 = 001_2!!, so !!1 = 2\cdot 5 + 0; 2 = 2\cdot 1 + 0;!! and !!5 = 2\cdot 2 + 1!!.)

Any number !!A!! of three digits, for which !!2A!! contains exactly the same three digits, in base 9, must therefore consist of exactly the digits !!000, 125, 367,!! or !!888!!.

A warning

All the foregoing assumes that the permutation !!P!! is a single cycle. In general, it may not be. Suppose we did an analysis like that above for !!R=10, n=5!! and found that there was no possible digit set, other than the trivial set 00000, that satisfied the governing equation !!(2^5-1)a_0 + v\equiv 0\pmod{10}!!. This would not completely rule out a base-10 solution with 5 digits, because the analysis only rules out a cyclic set of digits. There could still be a solution where !!P!! was a product of a !!2!!-cycle and a !!3!!-cycle, or a product of still smaller cycles.

Something like this occurs, for example, in the !!n=4, R=8!! case. Solving the governing equation !!(2^4-1)a_0 + v \equiv 0\pmod 8!! yields only four possible digit cycles, namely !!\{0,1,2,4\}, \{1,3,6,4\}, \{2,5,2,5\}!!, and !!\{3,7,6,5\}!!. But there are several additional solutions: !!2500_8\cdot 2 = 5200_8, 2750_8\cdot 2 = 5720_8, !! and !!2775_8\cdot 2 = 5772_8!!. These correspond to permutations !!P!! with more than one cycle. In the case of !!2750_8!!, for example, !!P!! exchanges the !!5!! and the !!2!!, and leaves the !!0!! and the !!7!! fixed.

For this reason we cannot rule out the possibility of an !!n!!-digit solution without first considering all smaller !!n!!.

The Large Equals Odd rule

When !!R!! is even there is a simple condition we can use to rule out certain sets of digits from being single-cycle solutions. Recall that !!A=a_{n-1}\ldots a_0!! and !!B=b_{n-1}\ldots b_0!!. Let us agree that a digit !!d!! is large if !!d\ge \frac R2!! and small otherwise. That is, !!d!! is large if, upon doubling, it causes a carry into the next column to the left.

Since !!b_i =(2a_i + c_i)\bmod R!!, where the !!c_i!! are carry bits, we see that, except for !!b_0!!, the digit !!b_i!! is odd precisely when there is a carry from the next column to the right, which occurs precisely when !!a_{i-1}!! is large. Thus the number of odd digits among !!b_1,\ldots b_{n-1}!! is equal to the number of large digits among !!a_0,\ldots a_{n-2}!!. This leaves the digits !!b_0!! and !!a_{n-1}!! uncounted. But !!b_0!! is never odd, since there is never a carry in the rightmost position, and !!a_{n-1}!! is always small (since otherwise !!B!! would have !!n+1!! digits, which is not allowed). So the number of large digits in !!A!! is exactly equal to the number of odd digits in !!B!!. And since !!A!! and !!B!! have exactly the same digits, the number of large digits in !!A!! is equal to the number of odd digits in !!A!!. Observe that this is the case for our running example !!1042_8!!: there is one odd digit and one large digit (the 4).

When !!R!! is odd the analogous condition is somewhat more complicated, but since the main case of interest is !!R=10!!, we have the useful rule that:

For !!R!! even, the number of odd digits in any solution !!A!! is equal to the number of large digits in !!A!!.
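
As a quick mechanical filter, the rule is easy to apply. A sketch of mine (grep in scalar context counts its matches):

    sub odd_equals_large {
        my ($R, @digits) = @_;
        my $odd   = grep { $_ % 2 }       @digits;
        my $large = grep { $_ >= $R / 2 } @digits;
        return $odd == $large;
    }

    # the digit set of 142857 passes, as a solution must:
    print odd_equals_large(10, 1, 4, 2, 8, 5, 7) ? "maybe\n" : "ruled out\n";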

Conditions on the order of digits in a solution

We have determined, using the above method, that the digits !!\{5,1,2\}!! might form a base-9 numeral with property !!P_9!!. Now we would like to arrange them into a base-9 numeral that actually does have that property. Again let us write !!A = a_2a_1a_0!! and !!B=b_2b_1b_0!!, with !!B=2A!!. Note that if !!a_i = 1!!, then !!b_i = 3!! (if there was a carry from the next column to the right) or !!b_i=2!! (if there was no carry), but since !!b_i\in\{5,1,2\}!!, we must have !!b_i = 2!! and therefore !!a_{i-1}!! must be small, since there is no carry into position !!i!!. But since !!a_{i-1}!! is also one of !!\{5,1,2\}!!, and it cannot also be !!1!!, it must be !!2!!. This shows that the 1, unless it appears in the rightmost position, must be to the left of the !!2!!; it cannot be to the left of the !!5!!. Similarly, if !!a_i = 2!! then !!b_i = 5!!, because !!4!! is impossible, so the !!2!! must be to the left of a large digit, which must be the !!5!!. Similar reasoning produces no constraint on the position of the !!5!!; it could be to the left of a small digit (in which case it doubles to !!1!!) or a large digit (in which case it doubles to !!2!!). We can summarize these findings as follows:

$$\begin{array}{cl} \text{digit} & \text{to the left of} \\ \hline 1 & 1, 2, \text{end} \\ 2 & 5 \\ 5 & 1,2,5,\text{end} \end{array}$$

Here “end” means that the indicated digit could be the rightmost.

Furthermore, the left digit of !!A!! must be small (or else there would be a carry in the leftmost place and !!2A!! would have 4 digits instead of 3) so it must be either 1 or 2. It is not hard to see from this table that the digits must be in the order !!125!! or !!251!!, and indeed, both of those numbers have the required property: !!125_9\cdot 2 = 251_9!!, and !!251_9\cdot 2 = 512_9!!.

This was a simple example, but in more complicated cases it is helpful to draw the order constraints as a graph. Suppose we draw a graph with one vertex for each digit, and one additional vertex to represent the end of the numeral. The graph has an edge from vertex !!v!! to !!v'!! whenever !!v!! can appear to the left of !!v'!!. Then the graph drawn for the table above looks like this:

Graph for 125 base 9

A 3-digit numeral with property !!P_9!! corresponds to a path in this graph that starts at one of the nonzero small digits (marked in blue), ends at the red node marked ‘end’, and visits each node exactly once. Such a path is called hamiltonian. Obviously, self-loops never occur in a hamiltonian path, so we will omit them from future diagrams.

Now we will consider the digit set !!637!!, again base 9. An analysis similar to the foregoing allows us to construct the following graph:

Graph for 367 base 9

Here it is immediately clear that the only hamiltonian path is !!3-7-6-\text{end}!!, and indeed, !!376_9\cdot 2 = 763_9!!.

In general there might be multiple instances of a digit, and so multiple nodes labeled with that digit. Analysis of the !!0,0,0!! case produces a graph with no legal start nodes and so no solutions, unless leading zeroes are allowed, in which case !!000!! is a perfectly valid solution. Analysis of the !!8,8,8!! case produces a graph with no path to the end node and so no solutions. These two trivial patterns appear for all !!R!! and all !!n!!, and we will ignore them from now on.

Returning to our ongoing example, !!1042!! in base 8, we see that !!1!! and !!2!! must double to !!2!! and !!4!!, so must be to the left of small digits, but !!4!! and !!0!! can double to either !!0!! or !!1!! and so could be to the left of anything. Here the constraints are so lax that the graph doesn't help us narrow them down much:

Graph for 1042 base 8

Observing that the only arrow into the 4 is from 0, so that the 4 must follow the 0, and that the entire number must begin with 1 or 2, we can enumerate the solutions:

      1042
      1204
      2041
      2104

If leading zeroes are allowed we have also:

      0412
      0421

All of these are solutions in base 8.

The case of !!R=10!!

Now we turn to our main problem, solutions in base 10.

To find all the solutions of length 6 requires an enumeration of smaller solutions, which, if they existed, might be concatenated into a solution of length 6. This is because our analysis of the digit sets that can appear in a solution assumes that the digits are permuted cyclically; that is, the permutations !!P!! that we considered had only one cycle each.

There are no smaller solutions, but to prove that the length 6 solutions are minimal, we must analyze the cases for smaller !!n!! and rule them out. We now produce a complete analysis for !!R=10!! and !!n\le 6!!. For !!n=1!! there is only the trivial solution of !!0!!, which we disregard. (The question asked for a positive number anyway.)

!!n=2!!

For !!n=2!!, we want to find solutions of !!3a_i + v \equiv 0\pmod{10}!! where !!v!! is a two-bit necklace number, one of !!00_2, 01_2, !! or !!11_2!!. Tabulating the values of !!a_i!! and !!v\in\{0,1,3\}!! that solve this equation we get:

$$\begin{array}{ccc} v& a_i \\ \hline 0 & 0 \\ 1& 3 \\ 3& 9 \\ \end{array}$$

We can disregard the !!v=0!! and !!v=3!! solutions because the former yields the trivial solution !!00!! and the latter yields the nonsolution !!99!!. So the only possibility we need to investigate further is !!a_i = 3, v = 1!!, which corresponds to the digit sequence !!36!!: Doubling !!3!! gives us !!6!! and doubling !!6!!, plus a carry, gives us !!3!! again.

But tabulating which digits must be left of which informs us that there is no solution with just !!3!! and !!6!!, because the graph we get, once self-loops are eliminated, looks like this:

graph for 36 base 10

which obviously has no hamiltonian path. Thus there is no solution for !!R=10, n=2!!.

!!n=3!!

For !!n=3!! we need to solve the equation !!7a_i + v \equiv 0\pmod{10}!! where !!v!! is a necklace number in !!\{0,\ldots 7\}!!, specifically one of !!0,1,3,!! or !!7!!. Since !!7!! and !!10!! are relatively prime, for each !!v!! there is a single !!a_i!! that solves the equation. Tabulating the possible values of !!a_i!! as before, and this time omitting rows with no solution, we have:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0& 0 & 000\\ 1& 7 & 748 \\ 3& 1 & 125\\ 7&9 & 999\\ \end{array}$$

The digit sequences !!0,0,0!! and !!9,9,9!! yield trivial solutions or nonsolutions as usual, and we will omit them in the future. The other two lines suggest the digit sets !!1,2,5!! and !!4,7,8!!, both of which fail the “odd equals large” rule.

This analysis rules out the possibility of a digit set with !!a_0 \to a_1 \to a_2 \to a_0!!, but it does not completely rule out a 3-digit solution, since one could be obtained by concatenating a one-digit and a two-digit solution, or three one-digit solutions. However, we know by now that no one- or two-digit solutions exist. Therefore there are no 3-digit solutions in base 10.

!!n=4!!

For !!n=4!! the governing equation is !!15a_i + v \equiv 0\pmod{10}!! where !!v!! is a 4-bit necklace number, one of !!\{0,1,3,5,7,15\}!!. This is a little more complicated because !!\gcd(15,10)\ne 1!!. Tabulating the possible digit sets, we get:

$$\begin{array}{crrl} a_i & 15a_i& v & \text{digits}\\ \hline 0 & 0 & 0 & 0000\\ 1 & 5 & 5 & 1250\\ 1 & 5 & 15 & 1375\\ 2 & 0 & 0 & 2486\\ 3 & 5 & 5 & 3636\\ 3 & 5 & 15 & 3751\\ 4 & 0 & 0 & 4862\\ 5 & 5 & 5 & 5012\\ 5 & 5 & 15 & 5137\\ 6 & 0 & 0 & 6248\\ 7 & 5 & 5 & 7498\\ 7 & 5 & 15 & 7513\\ 8 & 0 & 0 & 8624 \\ 9 & 5 & 5 & 9874\\ 9 & 5 & 15 & 9999 \\ \end{array}$$

where the second column has been reduced mod !!10!!. Note that even restricting !!v!! to necklace numbers the table still contains duplicate digit sequences; the 15 entries on the right contain only the seven basic sequences !!0000, 0125, 1375, 2486, 3636, 4987,!! and !!9999!!. Of these, only !!0000, 9999,!! and !!3636!! obey the odd equals large criterion, and we will disregard !!0000!! and !!9999!! as usual, leaving only !!3636!!. We construct the corresponding graph for this digit set as follows: each !!3!! must double to !!6!!, not !!7!!, since there is no !!7!! in the digit set, so a !!3!! must be left of a small digit (the other !!3!!) or the end of the numeral. Each !!6!! must double to !!3!!, not !!2!!, so must be left of a large digit, which must be the other !!6!!. The corresponding graph is:

graph for 3636 base 10

which evidently has no hamiltonian path: the two !!6!!s have edges only to one another, so once a path reaches them it can never leave, and a path that never reaches them is not hamiltonian. So there is no solution with !!R=10!! and !!n=4!!.

!!n=5!!

We leave this case as an exercise. There are 8 solutions to the governing equation, all of which are ruled out by the odd equals large rule.
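
If you don't care to push the carry bits around by hand, the tabulation step is easy to mechanize. This sketch (mine) applies the bits of !!v!!, most significant first, as in the derivation above, and reports the digit cycle if it closes up:

    sub cycle {
        my ($R, $n, $a0, $v) = @_;
        my @bits = split //, sprintf "%0${n}b", $v;   # the carries, MSB first
        my @d = ($a0);
        push @d, (2 * $d[-1] + $bits[$#d]) % $R while @d < $n;
        # the cycle must close up: the last doubling must give back a0
        return () unless (2 * $d[-1] + $bits[-1]) % $R == $a0;
        return @d;
    }

    print join("", cycle(10, 4, 1, 5)), "\n";   # prints 1250, as in the table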

!!n=6!!

For !!n=6!! the possible solutions are given by the governing equation !!63a_i + v \equiv 0\pmod{10}!! where !!v!! is a 6-bit necklace number, one of !!\{0,1,3,5,7,9,11,13,15,21,23,27,31,63\}!!. Tabulating the possible digit sets, we get:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0 & 0 & 000000\\ 1 & 3 & 362486 \\ 3 & 9 & 986249 \\ 5 & 5 & 500012 \\ 7 & 1 & 124875 \\ 9 & 7 & 748748 \\ 11 & 3 & 362501 \\ 13 & 9 & 986374 \\ 15 & 5 & 500137 \\ 21 & 3 & 363636 \\ 23 & 9 & 987499 \\ 27 & 1 & 125125 \\ 31 & 3 & 363751 \\ 63 & 9 & 999999 \\ \end{array}$$

After ignoring !!000000!! and !!999999!! as usual, the large equals odd rule allows us to ignore all the other sequences except !!124875!! and !!363636!!. The latter fails for the same reason that !!36!! did when !!n=2!!. But !!124875!!, the lone survivor, gives us a complicated derived graph containing many hamiltonian paths, every one of which is a solution to the problem:

graph for 124578 base 10

It is not hard to pick out from this graph the minimal solution !!125874!!, for which !!125874\cdot 2 = 251748!!, and also our old friend !!142857!! for which !!142857\cdot 2 = 285714!!.

We see here the reason why all the small numbers with property !!P_{10}!! contain the digits !!124578!!. The constraints on which digits can appear in a solution are quite strict, and rule out all other sequences of six digits and all shorter sequences. But once a set of digits passes these stringent conditions, the constraints on it are much looser, because !!B!! is only required to have the digits of !!A!! in some order, and there are many possible orders, many of which will satisfy the rather loose conditions involving the distribution of the carry bits. This graph is typical: it has a set of small nodes and a set of large nodes, and each node is connected to either all the small nodes or all the large nodes, so that the graph has many edges, and, as in this case, a largish clique of small nodes and a largish clique of large nodes, and as a result many hamiltonian paths.

Onward

This analysis is tedious but is simple enough to perform by hand in under an hour. As !!n!! increases further, enumerating the solutions of the governing equation becomes very time-consuming. I wrote a simple computer program to perform the analysis for given !!R!! and !!n!!, and to emit the possible digit sets that satisfied the large equals odd criterion. I had wondered if every base-10 solution contained equal numbers of the digits !!1,2,4,8,5,!! and !!7!!. This is the case for !!n=7!! (where the only admissible digit set is !!\{1,2,4,5,7,8\}\cup\{9\}!!), for !!n=8!! (where the only admissible sets are !!\{1,2,4,5,7,8\}\cup \{3,6\}!! and !!\{1,2,4,5,7,8\}\cup\{9,9\}!!), and for !!n=9!! (where the only admissible sets are !!\{1,2,4,5,7,8\}\cup\{3,6,9\}!! and !!\{1,2,4,5,7,8\}\cup\{9,9,9\}!!). But when we reach !!n=10!! the increasing number of necklaces has loosened up the requirements a little and there are 5 admissible digit sets. I picked two of the promising-seeming ones and quickly found by hand the solutions !!4225561128!! and !!1577438874!!, both of which wreck any theory that the digits !!1,2,4,5,8,7!! must all appear the same number of times.

Acknowledgments

Thanks to Karl Kronenfeld for corrections and many helpful suggestions.


[Other articles in category /math] permanent link

Fri, 28 Feb 2014

Proof by contradiction

Intuitionistic logic is deeply misunderstood by people who have not studied it closely; such people often seem to think that the intuitionists were just a bunch of lunatics who rejected the law of the excluded middle for no reason. One often hears that intuitionistic logic rejects proof by contradiction. This is only half true. It arises from a typically classical misunderstanding of intuitionistic logic.

Intuitionists are perfectly happy to accept a reductio ad absurdum proof of the following form:

$$(P\to \bot)\to \lnot P$$

Here !!\bot!! means an absurdity or a contradiction; !!P\to \bot!! means that assuming !!P!! leads to absurdity, and !!(P\to \bot)\to \lnot P!! means that if assuming !!P!! leads to absurdity, then you can conclude that !!P!! is false. This is a classic proof by contradiction, and it is intuitionistically valid. In fact, in many formulations of intuitionistic logic, !!\lnot P!! is defined to mean !!P\to \bot!!.

What is rejected by intuitionistic logic is the similar-seeming claim that:

$$(\lnot P\to \bot)\to P$$

This says that if assuming !!\lnot P!! leads to absurdity, you can conclude that !!P!! is true. This is not intuitionistically valid.

This is where people become puzzled if they only know classical logic. “But those are the same thing!” they cry. “You just have to replace !!P!! with !!\lnot P!! in the first one, and you get the second.”

Not quite. If you replace !!P!! with !!\lnot P!! in the first one, you do not get the second one; you get:

$$(\lnot P\to \bot)\to \lnot \lnot P$$

People familiar with classical logic are so used to shuffling the !!\lnot !! signs around and treating !!\lnot \lnot P!! the same as !!P!! that they often don't notice when they are doing it. But in intuitionistic logic, !!P!! and !!\lnot \lnot P!! are not the same. !!\lnot \lnot P!! is weaker than !!P!!, in the sense that from !!P!! one can always conclude !!\lnot \lnot P!!, but not always vice versa. Intuitionistic logic is happy to agree that if !!\lnot P!! leads to absurdity, then !!\lnot \lnot P!!. But it does not agree that this is sufficient to conclude !!P!!.
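
Proof assistants, whose ambient logic is intuitionistic, make the asymmetry concrete. Here is a sketch in Lean (my example, not part of the original discussion):

    -- In Lean, ¬P is by definition P → False, so the valid form of
    -- proof by contradiction is literally the identity function:
    theorem notIntro {P : Prop} (h : P → False) : ¬P := h

    -- P → ¬¬P is intuitionistically provable:
    theorem dni {P : Prop} (p : P) : ¬¬P := fun np => np p

    -- but ¬¬P → P requires an explicitly classical axiom:
    theorem dne {P : Prop} (h : ¬¬P) : P := Classical.byContradiction h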

As is often the case, it may be helpful to try to understand intuitionistic logic as talking about provability instead of truth. In classical logic, !!P!! means that !!P!! is true and !!\lnot P!! means that !!P!! is false. If !!P!! is not false it is true, so !!\lnot \lnot P!! and !!P!! mean the same thing. But in intuitionistic logic !!P!! means that !!P!! is provable, and !!\lnot P!! means that !!P!! is not provable. !!\lnot \lnot P!! means that it is impossible to prove that !!P!! is not provable.

If !!P!! is provable, it is certainly impossible to prove that !!P!! is not provable. So !!P!! implies !!\lnot \lnot P!!. But just because it is impossible to prove that there is no proof of !!P!! does not mean that !!P!! itself is provable, so !!\lnot \lnot P!! does not imply !!P!!.

Similarly,

$$(P\to \bot)\to \lnot P $$

means that if a proof of !!P!! would lead to absurdity, then we may conclude that there cannot be a proof of !!P!!. This is quite valid. But

$$(\lnot P\to \bot)\to P$$

means that if assuming that a proof of !!P!! is impossible leads to absurdity, there must be a proof of !!P!!. But this itself isn't a proof of !!P!!, nor is it enough to prove !!P!!; it only shows that there is no proof that proofs of !!P!! are impossible.

[ Addendum 20141124: This article by Andrej Bauer says much the same thing. ]

[ Addendum 20170508: This article by Robert Harper is another in the same family. ]


[Other articles in category /math] permanent link

Sat, 04 Jan 2014

Cauchy and the continuum

There is a famous mistake of Augustin-Louis Cauchy, in which he is supposed to have "proved" a theorem that is false. I have seen this cited many times, often in very serious scholarly literature, and as often as not Cauchy's purported error is completely misunderstood, and replaced with a different and completely dumbass mistake that nobody could have made.

The claim is often made that Cauchy's Course d'analyse of 1821 contains a "proof" of the following statement: a convergent sequence of continuous functions has a continuous limit. For example, the Wikipedia article on "uniform convergence" claims:

Some historians claim that Augustin Louis Cauchy in 1821 published a false statement, but with a purported proof, that the pointwise limit of a sequence of continuous functions is always continuous…

The nLab article on "Cauchy sum theorem" states:

Non-theorem (attributed to Cauchy, 1821). Let !!f=(f_1,f_2,\ldots)!! be an infinite sequence of continuous functions from the real line to itself. Suppose that, for every real number !!x!!, the sequence !!(f_1(x), f_2(x), \ldots)!! converges to some (necessarily unique) real number !!f_\infty(x)!!, defining a function !!f_\infty!!; in other words, the sequence !!f!! converges pointwise to !!f_\infty!!. Then !!f_\infty!! is also continuous.

Cauchy never claimed to have proved any such thing, and it beggars belief that Cauchy could have made such a claim, because the counterexamples are so many and so easily located. For example, the sequence !!f_n(x) = x^n!! on the interval !![0,1]!! is a sequence of continuous functions that converges everywhere on !![0,1]!! to a discontinuous limit. You would have to be a mathematical ignoramus to miss this, and Cauchy wasn't.

Another simple example, one that converges everywhere in !!\mathbb R!!, is any sequence of functions !!f_n!! that are everywhere zero, except that each has a (continuous) bump of height 1 between !!-\frac1n!! and !!\frac1n!!. As !!n\to\infty!!, the width of the bump narrows to zero, and the limit function !!f_\infty!! is everywhere zero except that !!f_\infty(0)=1!!. Anyone can think of this, and certainly Cauchy could have. A concrete example of this type is $$f_n(x) = e^{-nx^2}$$ which converges to 0 everywhere except at !! x=0 !!, where it converges to 1.

Cauchy's controversial theorem is not what Wikipedia or nLab claim. It is that the sum of a convergent series of continuous functions is always continuous. Cauchy is not claiming that $$f_\infty(x) = \lim_{i\to\infty} f_i(x)$$ must be continuous if the limit exists and the !!f_i!! are continuous. Rather, he claims that $$S(x) = \sum_{i=1}^\infty f_i(x)$$ must be continuous if the sum converges and the !!f_i!! are continuous. This is a completely different claim. Its premise, that the sum converges, is much stronger, and so the claim itself is much weaker, and so much more plausible.

Here the counterexamples are not completely trivial. Probably the best-known counterexample is that a square wave (which has a jump discontinuity where the square part begins and ends) can be represented as a Fourier series.
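
To make the square-wave example concrete (the formula is the standard one, not taken from Cauchy): every partial sum of $$\frac4\pi\left(\sin x + \frac{\sin 3x}3 + \frac{\sin 5x}5 + \cdots\right)$$ is continuous, and the series converges for every !!x!!, but the sum is a square wave that jumps between !!-1!! and !!1!! at the multiples of !!\pi!!.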

(Cauchy was aware of this too, but it was new mathematics in 1821. Lakatos and others have argued that the theorem, understood in the way that continuity was understood in 1821, is not actually erroneous, but that the idea of continuity has changed since then. One piece of evidence strongly pointing to this conclusion is that nobody complained about Cauchy's controversial theorem until 1847. But had Cauchy somehow, against all probability, mistakenly claimed that a sequence of continuous functions converges to a continuous limit, you can be sure that it would not have taken the rest of the mathematical world 26 years to think of the counterexample of !!x^n!!.)

The confusion about Cauchy's controversial theorem arises from a perennially confusing piece of mathematical terminology: a convergent sequence is not at all the same as a convergent series. Cauchy claimed that a convergent series of continuous functions has a continuous limit. He did not ever claim that a convergent sequence of continuous functions had a continuous limit. But I have often encountered claims that he did, even though such claims are extremely implausible.

The claim that Cauchy thought a sequence of continuous functions converges to a continuous limit is not only false but is manifestly so. Anyone making it has at best made a silly and careless error, and perhaps doesn't really understand what they are talking about, or hasn't thought about it.

[ I had originally planned to write about this controversial theorem in my series of articles about major screwups in mathematics, but the longer and more closely I looked at it the less clear it was that Cauchy had actually made a mistake. ]


[Other articles in category /math] permanent link

Sat, 25 Aug 2012

On the consistency of PA
A monk asked Li Fu, "Master, how do we know that the Peano axioms are consistent?"

Li Fu said, "The axioms are consistent because they have a model."


[Other articles in category /math] permanent link

Fri, 24 Aug 2012

More about ZF's asymmetry between union and intersection
In an article earlier this week, I explored some oddities of defining a topology in terms of closed sets rather than open sets, mostly as a result of analogous asymmetry in the ZF set theory axioms.

Let's review those briefly. The relevant axioms concern the operations by which sets can be constructed. There are two that are important. First is the axiom of union, which says that if !!{\mathcal F}!! is a family of sets, then we can form !!\bigcup {\mathcal F}!!, which is the union of all the sets in the family.

The other is actually a family of axioms, the specification axiom schema. It says that for any one-place predicate !!\phi(x)!! and any set !!X!! we can construct the subset of !!X!! for which !!\phi!! holds:

$$\{ x\in X \;|\; \phi(x) \}$$

Both of these are required. The axiom of union is for making bigger sets out of smaller ones, and the specification schema is for extracting smaller sets from bigger ones. (Also important is the axiom of pairing, which says that if !!x!! and !!y!! are sets, then so is the two-element set !!\{x, y\}!!; with pairing and union we can construct all the finite sets. But we won't need it in this article.)

Conspicuously absent is an axiom of intersection. If you have a family !!{\mathcal F}!! of sets, and you want a set of every element that is in some member of !!{\mathcal F}!!, that is easy; it is what the axiom of union gets you. But if you want a set of every element that is in every member of !!{\mathcal F}!!, you have to use specification.

Let's begin by defining this compact notation: $$\bigcap_{(X)} {\mathcal F}$$

for this longer formula: $$\{ x\in X \;|\; \forall f\in {\mathcal F} . x\in f \}$$

This is our intersection of the members of !!{\mathcal F}!!, taken "relative to !!X!!", as we say in the biz. It gives us all the elements of !!X!! that are in every member of !!{\mathcal F}!!. The !!X!! is mandatory in !!\bigcap_{(X)}!!, because ZF makes it mandatory when you construct a set by specification. If you leave it out, you get the Russell paradox.

Most of the time, though, the !!X!! is not very important. When !!{\mathcal F}!! is nonempty, we can choose some element !!f\in {\mathcal F}!!, and consider !!\bigcap_{(f)} {\mathcal F}!!, which is the "normal" intersection of !!{\mathcal F}!!. We can easily show that $$\bigcap_{(X)} {\mathcal F}\subseteq \bigcap_{(f)} {\mathcal F}$$ for any !!X!! whatever, and this immediately implies that $$\bigcap_{(f)} {\mathcal F} = \bigcap_{(f')}{\mathcal F}$$ for any two elements of !!{\mathcal F}!!, so when !!{\mathcal F}!! contains an element !!f!!, we can omit the subscript and just write $$\bigcap {\mathcal F}$$ for the usual intersection of members of !!{\mathcal F}!!.

Even the usually troublesome case of an empty family !!{\mathcal F}!! is no problem. In this case we have no !!f!! to use for !!\bigcap_{(f)} {\mathcal F}!!, but we can still take some other set !!X!! and talk about !!\bigcap_{(X)} \emptyset!!, which is just !!X!!.

Now, let's return to topology. I suggested that we should consider the following definition of a topology, in terms of closed sets, but without an a priori notion of the underlying space:

A co-topology is a family !!{\mathcal F}!! of sets, called "closed" sets, such that:

  1. The union of any two elements of !!{\mathcal F}!! is again in !!{\mathcal F}!!, and
  2. The intersection of any subfamily of !!{\mathcal F}!! is again in !!{\mathcal F}!!.
Item 2 begs the question of which intersection we are talking about here. But now that we have nailed down the concept of intersections, we can say briefly and clearly what we want: It is the intersection relative to !!\bigcup {\mathcal F}!!. This set !!\bigcup {\mathcal F}!! contains anything that is in any of the closed sets, and so !!\bigcup {\mathcal F}!!, which I will henceforth call !!U!!, is effectively a universe of discourse. It is certainly big enough that intersections relative to it will contain everything we want them to; remember that intersections of subfamilies of !!{\mathcal F}!! have a maximum size, so there is no way to make !!U!! too big.

It now immediately follows that !!U!! itself is a closed set, since it is the intersection !!\bigcap_{(U)} \emptyset!! of the empty subfamily of !!{\mathcal F}!!.

If !!{\mathcal F}!! itself is empty, then so is !!U!!, and !!\bigcap_{(U)} {\mathcal F} = \emptyset!!, so that is all right. From here on we will assume that !!{\mathcal F}!! is nonempty, and therefore that !!\bigcap {\mathcal F}!!, with no relativization, is well-defined.

We still cannot prove that the empty set is closed; indeed, it might not be, because even !!M = \bigcap {\mathcal F}!! might not be empty. But as David Turner pointed out to me in email, the elements of !!M!! play a role dual to the extratopological points of a topological space that has been defined in terms of open sets. There might be points that are not in any open set anywhere, but we may as well ignore them, because they are topologically featureless, and just consider the space to be the union of the open sets. Analogously and dually, we can ignore the points of !!M!!, which are topologically featureless in the same way. Rather than considering !!{\mathcal F}!!, we should consider !!{\widehat{\mathcal F}}!!, whose members are the members of !!{\mathcal F}!!, but with !!M!! subtracted from each one:

$${\widehat{\mathcal F}} = \{\hat{f}\in 2^U \;|\; \exists f\in {\mathcal F} . \hat{f} = f\setminus M \}$$

So we may as well assume that this has been done behind the scenes, so that !!\bigcap {\mathcal F}!! is empty. If we have done this, then the empty set is closed.

Now we move on to open sets. An open set is defined to be the complement of a closed set, but we have to be a bit careful, because ZF does not have a global notion of the complement !!S^C!! of a set. Instead, it has only relative complements, or differences. !!X\setminus Y!! is defined as: $$X\setminus Y = \{ x\in X \;|\; x\notin Y\} $$

Here we say that the complement of !!Y!! is taken relative to !!X!!.

For the definition of open sets, we will say that the complement is taken relative to the universe of discourse !!U!!, and a set !!G!! is open if it has the form !!U\setminus f!! for some closed set !!f!!.

Anatoly Karp pointed out on Twitter that we know that the empty set is open, because it is the relative complement of !!U!!, which we already know is closed. And if we ensure that !!\bigcap {\mathcal F}!! is empty, as in the previous paragraph, then since the empty set is closed, !!U!! is open, and we have recovered all the original properties of a topology.

But gosh, what a pain it was; in contrast recovering the missing axioms from the corresponding open-set definition of a topology was painless. (John Armstrong said it was bizarre, and probably several other people were thinking that too. But I did not invent this bizarre idea; I got it from the opening paragraph of John L. Kelley's famous book General Topology, which has been in print since 1955.)

Here Kelley deals with the empty set and the universe in two sentences, and never worries about them again. In contrast, doing the same thing for closed sets was fraught with technical difficulties, mostly arising from ZF. (The exception was the need to repair the nonemptiness of the minimal closed set !!M!!, which was not ZF's fault.)


I don't think I have much of a conclusion here, except that whatever the advantages of ZF as a milieu for doing set theory, it is overrated as an underlying formalism for actually doing mathematics. (Another view on this is laid out by J.H. Conway in the Appendix to Part Zero of On Numbers and Games (Academic Press, 1976).) None of the problems we encountered were technically illuminating, and nothing was clarified by examining them in detail.

On the other hand, perhaps this conclusion is knocking down a straw man. I think working mathematicians probably don't concern themselves much with whether their stuff works in ZF, much less with what silly contortions are required to make it work in ZF. I think day-to-day mathematical work, to the extent that it needs to deal with set theory at all, handles it in a fairly naïve way, depending on a sort of folk theory in which there is some reasonably but not absurdly big universe of discourse in which one can take complements and intersections, and without worrying about this sort of technical detail.

[ MathJax doesn't work in Atom or RSS syndication feeds, and can't be made to work, so if you are reading a syndicated version of this article, such as you would in Google Reader, or on Planet Haskell or PhillyLinux, you are seeing inlined images provided by the Google Charts API. The MathJax looks much better, and if you would like to compare, please visit my blog's home site. ]


[Other articles in category /math] permanent link

Tue, 21 Aug 2012

The non-duality of open and closed sets
I had long thought that it doesn't matter if we define a topology in terms of open sets or in terms of closed sets, because the two definitions are in every way dual and equivalent. This seems not to be the case: the definition in terms of closed sets seems to be slightly weaker than the definition in terms of open sets.

We can define a topology without reference to the underlying space as follows: A family !!{\mathfrak I}!! of sets is a topology if it is closed under pairwise intersections and arbitrary unions, and we call a set "open" if it is an element of !!{\mathfrak I}!!. From this we can recover the omitted axiom that says that !!\emptyset!! is open: it must be in !!{\mathfrak I}!! because it is the empty union !!\bigcup_{g\in\emptyset} g!!. We can also recover the underlying space of the topology, or at least some such space, because it is the unique maximal open set !!X=\bigcup_{g\in{\mathfrak I}} g!!. The space !!X!! might be embedded in some larger space, but we won't ever have to care, because that larger space is topologically featureless. From a topological point of view, !!X!! is our universe of discourse. We can then say that a set !!C!! is "closed" whenever !!X\setminus C!! is open, and prove all the usual theorems.
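
Here is a small sketch of the recovery in Python, with a made-up family of open sets:

    from functools import reduce

    # A made-up topology, given only as a family of open sets.
    opens = [frozenset(s) for s in (set(), {1}, {1, 2}, {1, 2, 3})]

    # The empty set is the empty union...
    assert reduce(frozenset.union, [], frozenset()) == frozenset()

    # ...and the underlying space is the unique maximal open set.
    X = reduce(frozenset.union, opens, frozenset())
    closed = [X - g for g in opens]                 # the closed sets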

If we choose to work with closed sets instead, we run into problems. We can try starting out the same way: A family !!{\mathfrak I}!! of sets is a co-topology if it is closed under pairwise unions and arbitrary intersections, and we call a set "closed" if it is an element of !!{\mathfrak I}!!. But we can no longer prove that !!\emptyset\in{\mathfrak I}!!. We can still recover an underlying space !!X = \bigcup_{c\in{\mathfrak I}} c!!, but we cannot prove that !!X!! is closed, or identify any maximal closed set analogous to the maximal open set of the definition of the previous paragraph. We can construct a minimal closed set !!\bigcap_{c\in{\mathfrak I}} c!!, but we don't know anything useful about it, and in particular we don't know whether it is empty, whereas with the open-sets definition of a topology we can be sure that the empty set is the unique minimal open set.
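
A concrete example, sketched in Python (the family is mine): every member contains 0, so the minimal closed set is not empty. And for the other failure, note that the finite subsets of the natural numbers are closed under pairwise unions and arbitrary intersections, but their union is all of the naturals, which is not finite, so there is no maximal closed set.

    from functools import reduce

    # Closed under pairwise unions and arbitrary intersections,
    # so it satisfies the co-topology axioms...
    co = [frozenset(s) for s in ({0}, {0, 1}, {0, 2}, {0, 1, 2})]

    # ...but the minimal closed set is not empty.
    assert reduce(frozenset.intersection, co) == frozenset({0})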

We can repair part of this asymmetry by changing the "pairwise unions" axiom to "finite unions"; then the empty set is closed because it is a finite union of closed sets. But we still can't recover any maximal closed set. Given a topology, it is easy to identify the unique maximal closed set, but given a co-topology, one can't, and indeed there may not be one. The same thing goes wrong if one tries to define a topology in terms of a Kuratowski closure operator.

We might like to go on and say that complements of closed sets are open, but we can't, because we don't have a universe of discourse in which we can take complements.

None of this may make very much difference in practice, since we usually do have an a priori idea of the universe of discourse, and so we do not care much whether we can define a topology without reference to any underlying space. But it is at least conceivable that we might want to abstract away the underlying space, and if we do, it appears that open and closed sets are not as exactly symmetric as I thought they were.

Having thought about this some more, it seems to me that the ultimate source of the asymmetry here is in our model of set theory. The role of union and intersection in ZF is not as symmetric as one might like. There is an axiom of union, which asserts that the union of the members of some family of sets is again a set, but there is no corresponding axiom of intersection. To get the intersection of a family of sets !!\mathcal S!!, you use a specification axiom. Because of the way specification works, you cannot take an empty intersection, and there is no universal set. If topology were formulated in a set theory with a universal set, such as NF, I imagine the asymmetry would go away.

[ This is my first blog post using MathJax, which I hope will completely replace the ad-hoc patchwork of systems I had been using to insert mathematics. Please email me if you encounter any bugs. ]

[ Addendum 20120823: MathJax depends on executing Javascript, and so it won't render in an RSS or Atom feed or on any page where the blog content is syndicated. So my syndication feed is using the Google Charts service to render formulas for you. If the formulas look funny, try looking at http://blog.plover.com/ directly. ]

[ Addendum 20120824: There is a followup to this article. ]


[Other articles in category /math] permanent link

Tue, 10 Jan 2012

Elaborations of Russell's paradox
When Katara was five or six, I told her about Russell's paradox in the following form: in a certain library, some books are catalogs that contain lists of other books. For example, there is a catalog of all the books on the second floor, and a catalog of all the books about birds. Some catalogs might include themselves. For example, the catalog of all the books in the library certainly includes itself. Such catalogs have red covers; the other catalogs, which do not include themselves, such as the catalog of all the plays of Shakespeare, have blue covers. Now is there a catalog of all the catalogs with blue covers?

I wasn't sure she would get this, but it succeeded much better than I expected. After I prompted her to consider what color cover it would have, she thought it out, first ruling out one color, and then, when she got to the second color, she just started laughing.

A couple of days ago she asked me if I could think of anything that was like that but with three different colors. Put on the spot, I suggested she consider what would happen if there could be green catalogs that might or might not include themselves. This is somewhat interesting, because you now can have a catalog of all the blue catalogs; it can have a green cover. But I soon thought of a much better extension.

I gave it to Katara like this: say you have a catalog, let's call it X. If X mentions a catalog that mentions X, it has a gold stripe on the spine. Otherwise, it has a silver stripe. Now:

  1. Could there be a red catalog with a gold stripe?
  2. Could there be a red catalog with a silver stripe?
  3. Could there be a blue catalog with a gold stripe?
  4. Could there be a blue catalog with a silver stripe?
And more interesting:

  1. Is there a catalog of all the catalogs with gold stripes?
  2. Is there a catalog of all the catalogs with silver stripes?
I knew that early 20th century logicians, trying to repair the Russell paradox, first tried a very small patch: since comprehension over the predicate X∉X causes problems, just forbid that predicate. This unfortunately doesn't solve the problem at all; there are an infinite number of equally problematic predicates. (Whitehead and Russell's theory of types is an attempt to fix this; Quine's New Foundations is a different attempt.) One of these predicates is ¬∃Y.X∈Y and Y∈X. You can't construct the set of all X such that ¬∃Y.X∈Y and Y∈X because there is no such set, for reasons similar to the reason why there's no set of all X such that X∉X, so that's where I got the silver stripe predicate.
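
The silver-stripe impossibility is small enough to confirm by brute force. Here is a sketch in Python (the toy library size and the encoding are mine): in every possible tiny "library", no catalog lists exactly the silver-striped catalogs.

    from itertools import product

    N = 3   # a tiny library of catalogs 0..N-1, purely illustrative
    for bits in product([False, True], repeat=N * N):
        # lists[x][y] means catalog x mentions catalog y
        lists = [bits[i * N:(i + 1) * N] for i in range(N)]
        # x has a silver stripe: no y with x mentioning y and y mentioning x
        silver = tuple(not any(lists[x][y] and lists[y][x] for y in range(N))
                       for x in range(N))
        for g in range(N):
            assert lists[g] != silver   # g never lists exactly the silver ones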

Translating this into barber language is left as an exercise for the reader.

[ Addendum 20231128: More about uncountable sets for seven-year-olds. ]


[Other articles in category /math] permanent link

Sat, 11 Jun 2011

Concerts

At a book sale I recently picked up Terence Tao's little book on problem solving for 50¢. One of the exercises (pp. 85–86) is the following little charmer: There are six musicians who will play a series of concerts. At each concert, some of the musicians will be on stage and some will be in the audience. What is the fewest number of concerts that can be played so that each musician gets to see each of the others play?

Obviously, no more than six concerts are required. (I have a new contribution to the long-debated meaning of the mathematical jargon term "obviously": if my six-year-old daughter Katara could figure out the answer, so can you.) And an easy argument shows that four are necessary: let's say that when a musician views another, that is a "viewing event"; we need to arrange at least 5×6 = 30 viewing events. A concert that has p performers and 6-p in the audience arranges p(6 - p) events, which must be 5, 8, or 9. Three concerts yield no more than 27 events, which is insufficient. So there must be at least 4 concerts, and we may as well suppose that each concert has three musicians in the audience and three onstage, to maximize the number of events at 9·4 = 36. (It turns out that there is no solution otherwise, but that is a digression.)

Each musician must attend at least 2 concerts, or else they would see only 3 other musicians onstage. But 6 musicians attending 2 concerts each takes up all 12 audience spots, so every musician is at exactly 2 concerts. Each musician thus sees exactly six musicians onstage, and since five of them must be different, one is a repeat, and the viewing event is wasted. We knew there would be some waste, since there are 36 viewing events available and only 30 can be useful, but now we know that each spectator wastes exactly one event.

A happy side effect of splitting the musicians evenly between the stage and the audience in every concert is that we can exploit the symmetry: if we have a solution to the problem, then we can obtain a dual solution by exchanging the performers and the audience in each concert. The conclusion of the previous paragraph is that in any solution, each spectator wastes exactly one event; the duality tells us that each performer is the subject of exactly one wasted event.

Now suppose the same two musicians, say A and B, perform together twice. We know that some spectator must see A twice; this spectator sees B twice also, thus wasting two events. But each spectator wastes only one event. So no two musicians can share the stage twice; each two musicians share the stage exactly once. By duality, each two spectators are in the same audience together exactly once.

So we need to find four 3-sets of the elements { A, B, C, D, E, F }, with each element appearing in precisely two sets, and such that each two sets have exactly one element in common.

Or equivalently, we need to find four triangles in K6, no two of which share an edge.
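
The search is small enough to do by machine. Here is a brute-force sketch in Python (the encoding is mine); among relabelings, it prints the solution tabulated below:

    from itertools import combinations

    musicians = 'ABCDEF'
    triples = list(combinations(musicians, 3))      # the 20 possible stages

    for stages in combinations(triples, 4):
        each_twice = all(sum(m in s for s in stages) == 2 for m in musicians)
        pairs_once = all(len(set(a) & set(b)) == 1
                         for a, b in combinations(stages, 2))
        if each_twice and pairs_once:
            print(stages)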

The solution is not hard to find:

             1      2      3      4
On stage     A B C  C D E  E F A  B D F
In audience  D E F  A B F  B C D  A C E

And in fact this solution is essentially unique.

If you generalize these arguments to 2m musicians, you find that there is a lower bound of $$\left\lceil{4m^2 - 2m \over m^2 }\right\rceil$$ concerts, which is 4 for every m ≥ 3. And indeed, even with as few as 4 musicians, you still need four concerts. So it's tempting to wonder if 4 concerts is really sufficient for all even numbers of musicians. Consider 8 musicians, for example. You need 56 viewing events, but a concert with half the musicians onstage and half in the audience provides 16 events, so you might only need as few as 4 concerts to provide the necessary events.

The geometric formulation is that you want to find four edge-disjoint K4s in a K8; or alternatively, you want to find four 4-element subsets of { 1,2,3,4,5,6,7,8 }, such that each element appears in exactly two sets and no two elements appear together in more than one set. There seemed to be no immediately obvious reason that this wouldn't work, and I spent a while tinkering around looking for a way to do it and didn't find one. Eventually I did an exhaustive search and discovered that it was impossible.

But the tinkering and the exhaustive search were a waste of time, because there is an obvious reason why it's impossible. As before, each musician must be in exactly two audiences, and can share audiences with each other musician at most once. But there are only 6 ways to be in two audiences, and there are 8 musicians, so some pair of musicians must be in precisely the same pair of audiences. This wastes too many viewing events, and so there's no solution. Whoops!
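
The exhaustive search is easy to reproduce. A sketch in Python (my own encoding; it checks about 917,000 candidates, which takes a little while):

    from itertools import combinations

    quads = list(combinations(range(8), 4))         # 70 possible audiences

    def works(choice):
        return (all(sum(x in q for q in choice) == 2 for x in range(8))
                and all(len(set(a) & set(b)) <= 1
                        for a, b in combinations(choice, 2)))

    print(any(works(c) for c in combinations(quads, 4)))    # False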

It's easy to find solutions for 8 musicians with 5 concerts, though. There is plenty of room to maneuver and you can just write one down off the top of your head. For example:

             1        2        3          4          5
On stage     E F G H  B C D H  A C D F G  A B D E G  A B C E F H
In audience  A B C D  A E F G  B E H      C F H      D G

Actually I didn't write this one down off the top of my head; I have a method that I'll describe in a future article. But this article has already taken me several weeks to get done, so I'll stop here for now.

[ Addendum: For n = 1…10 musicians, the least number of concerts required is 0, 2, 3, 4, 4, 4, 5, 5, 5, 5; beyond this, I only have bounds. ]


[Other articles in category /math] permanent link

Mon, 15 Nov 2010

A draft of a short introduction to topology
One of my ongoing projects is to figure out how to explain topology briefly. For example, What is Topology?, putatively part 1 of a three-part series of which I have not yet written parts 2 or 3.

CS grad students often have to take classes in category theory. These classes always want to use groups and topological spaces as examples, and my experience is that at this point many of the students shift uncomfortably in their seats since they have not had undergraduate classes in group theory, topology, analysis, or anything else relevant. But you do not have to know much topology to be able to appreciate the example, so I tried to write up the minimal amount necessary. Similarly, if you already understand intuitionistic logic, you do not need to know much topology to understand the way in which topological spaces are natural models for intuitionistic logic—but you do need to know more than zero.

So a couple of years ago I wrote up a short introduction to topology for first-year computer science grad students and other people who similarly might like to know the absolute minimum, and only the absolute minimum, about topology. It came out somewhat longer than I expected, 11 pages, of which 6 are the introduction, and 5 are about typical applications to computer science. But it is a very light, fluffy 11 pages, and I am generally happy with it.

I started writing this shortly after my second daughter was born, and I have not yet had a chance to finish it. It contains many errors. Many, many errors. For example, there is a section at the end about the compactness principle, which can only be taken as a sort of pseudomathematical lorem ipsum. This really is a draft; it is only three-quarters finished.

But I do think it will serve a useful function once it is finished, and that finishing it will not take too long. If you have any interest in this project, I invite you to help.

The current draft is version 0.6 of 2010-11-14. I do not want old erroneous versions wandering around confusing people in my name, so please do not distribute this draft after 2010-12-15. I hope to have an improved draft available here before that.

Please do send me corrections, suggestions, questions, advice, patches, pull requests, or anything else.


[Other articles in category /math] permanent link

Mon, 08 Nov 2010

Semi-boneless ham
The Math Project on Wikipedia is having a discussion about whether or not to have an article about the jargon term "semi-infinite", which I have long considered one of my favorite jargon terms, because it sounds so strange, but makes so much sense. A structure is semi-infinite when it is infinite in one direction but not in the other. For example, the set of positive integers is semi-infinite, since it possesses a least element (1) but no greatest element. Similarly rays in geometry are semi-infinite.

The term is informal, however, and it's not clear just what it should mean in all cases. For example, consider the set S of 1/n for every positive integer n. Is this set semi-infinite? It is bounded in both directions, since it is contained in [0, 1]. But as you move left through the set, you encounter an infinite number of elements, so it ought to be semi-infinite in the same sense that S ∪ { 1-x : x ∈ S } is fully-infinite. Whatever sense that is.

Informal and ill-defined it may be, but the term is widely used; one can easily find mentions in the literature of semi-infinite paths, semi-infinite strips, semi-infinite intervals, semi-infinite cylinders, and even semi-infinite reservoirs and conductors.

The term has spawned an offshoot, the even stranger-sounding "quarter-infinite". This seems to refer to a geometric object that is unbounded in the same way that a quarter-plane is unbounded, where "in the same way" is left rather vague. Consider the set (depicted at left) of all points of the plane for which 0 ≤ |y/x| ≤ √3, for example; is this set quarter-infinite, or only 1/6-infinite? Is the set of points (depicted at right) with xy > 1 and x, y > 0 quarter-infinite? I wouldn't want to say. But the canonical example is simple: the product of two semi-infinite intervals is a quarter-infinite set.

I was going to say that I had never seen an instance of the obvious next step, the eighth-infinite solid, but in researching this article I did run into a few. I can't say it trips off the tongue, however. And if we admit that a half of a quarter-infinite plane segment is also eighth-infinite, we could be getting ourselves into trouble.

(This all reminds me of J.H. Conway's complaint about the increasing use of the term "biunique". Conway sarcastically asked if he should expect to see "triunique" and so forth, culminating in the idiotic "polyunique".)

Sometimes "semi" really does mean exactly one-half, as in "semimajor axis" (the longest segment from the center of an ellipse to its boundary), "semicubic parabola" (determined by an equation with a term kx3/2), or "semiperimeter" (half the perimeter of a triangle). But just as often, "semi" is one of the dazzling supply of mathematical pejoratives. ("Abnormal, irregular, improper, degenerate, inadmissible, and otherwise undesirable", says Kelley's General Topology.) A semigroup, for example, is not half of a group, but rather an algebraic structure that possesses less structure than a group. Similarly, one has semiregular polyhedra and semidirect products.

I was planning to end with a note that mathematics has so far avoided the "demisemi-" prefix. But alas! Google found this 1971 paper on Demi-semi-primal algebras and Mal'cev-type conditions.


[Other articles in category /math] permanent link

Mon, 14 Dec 2009

Another short explanation of Gödel's theorem
In yesterday's article, I said:

A while back I started writing up an article titled "World's shortest explanation of Gödel's theorem". But I didn't finish it...

I went and had a look to see what was wrong with it, and to my surprise, there seemed to be hardly anything wrong with it. Perhaps I just forgot to post it. So if you disliked yesterday's brief explanation of Gödel's theorem—and many people did—you'll probably dislike this one even more. Enjoy!


A reader wrote to question my characterization of Gödel's theorem in the previous article. But I think I characterized it correctly; I said:

The only systems of mathematical axioms strong enough to prove all true statements of arithmetic, are those that are so strong that they also prove all the false statements of arithmetic.
I'm going to explain how this works.

You start by choosing some system of mathematics that has some machinery in it for making statements about things like numbers and for constructing proofs of theorems like 1+1=2. Many such systems exist. Let's call the one we have chosen M, for "mathematics".

Gödel shows that if M has enough mathematical machinery in it to actually do arithmetic, then it is possible to produce a statement S whose meaning is essentially "Statement S cannot be proved in system M."

It is not at all obvious that this is possible, or how it can be done, and I am not going to get into the details here. Gödel's contribution was seeing that it was possible to do this.

So here's S again:

S: Statement S cannot be proved in system M.

Now there are two possibilities. Either S is in fact provable in system M, or it is not. One of these must hold.

If S is provable in system M, then it is false, and so it is a false statement that can be proved in system M. M therefore proves some false statements of arithmetic.

If S is not provable in system M, then it is true, and so it is a true statement that cannot be proved in system M. M therefore fails to prove some true statements of arithmetic.

So something goes wrong with M: either it fails to prove some true statements, or else it succeeds in proving some false statements.

List of topics I deliberately omitted from this article, that mathematicians should not write to me about with corrections: Presburger arithmetic. Dialetheism. Inexhaustibility. ω-incompleteness. Non-RE sets of axioms.


Well, I see now that I left out the step where I go from "M proves a false statement" to "M proves all false statements". Oh well, another topic for another post.

If you liked this post, you may enjoy Torkel Franzén's books Gödel's Theorem: An Incomplete Guide to Its Use and Abuse and Inexhaustibility: A Non-Exhaustive Treatment. If you disliked this post, you are even more likely to enjoy them.



Many thanks to Robert Bond for his contribution.


[Other articles in category /math] permanent link

Sun, 13 Dec 2009

World's shortest explanation of Gödel's theorem
A while back I started writing up an article titled "World's shortest explanation of Gödel's theorem". But I didn't finish it, and later I encountered Raymond Smullyan's version, which is much shorter anyway. So here, shamelessly stolen from Smullyan, is the World's shortest explanation of Gödel's theorem.

We have some sort of machine that prints out statements in some sort of language. It needn't be a statement-printing machine exactly; it could be some sort of technique for taking statements and deciding if they are true. But let's think of it as a machine that prints out statements.

In particular, some of the statements that the machine might (or might not) print look like these:

P*x (which means that the machine will print x)
NP*x (which means that the machine will never print x)
PR*x (which means that the machine will print xx)
NPR*x (which means that the machine will never print xx)

For example, NPR*FOO means that the machine will never print FOOFOO. NP*FOOFOO means the same thing. So far, so good.

Now, let's consider the statement NPR*NPR*. This statement asserts that the machine will never print NPR*NPR*.

Either the machine prints NPR*NPR*, or it never prints NPR*NPR*.

If the machine prints NPR*NPR*, it has printed a false statement. But if the machine never prints NPR*NPR*, then NPR*NPR* is a true statement that the machine never prints.

So either the machine sometimes prints false statements, or there are true statements that it never prints.

So any machine that prints only true statements must fail to print some true statements.

Or conversely, any machine that prints every possible true statement must print some false statements too.
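
The case analysis can even be run mechanically. Here is a toy sketch in Python (the encoding of statements as strings is mine, following Smullyan's four forms):

    def is_true(stmt, printed):
        # truth of a statement, relative to the set of strings the machine prints
        if stmt.startswith('NPR*'): return stmt[4:] * 2 not in printed
        if stmt.startswith('PR*'):  return stmt[3:] * 2 in printed
        if stmt.startswith('NP*'):  return stmt[3:] not in printed
        if stmt.startswith('P*'):   return stmt[2:] in printed

    s = 'NPR*NPR*'   # asserts that the machine never prints NPR*NPR*
    # Whether or not the machine prints s, s is true exactly when it goes unprinted:
    for printed in ({s}, set()):
        assert is_true(s, printed) == (s not in printed)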


The proof of Gödel's theorem shows that there are statements of pure arithmetic that essentially express NPR*NPR*; the trick is to find some way to express NPR*NPR* as a statement about arithmetic, and most of the technical details (and cleverness!) of Gödel's theorem are concerned with this trick. But once the trick is done, the argument can be applied to any machine or other method for producing statements about arithmetic.

The conclusion then translates directly: any machine or method that produces statements about arithmetic either sometimes produces false statements, or else there are true statements about arithmetic that it never produces. Because if it produces something like NPR*NPR* then it is wrong, but if it fails to produce NPR*NPR*, then that is a true statement that it has failed to produce.

So any machine or other method that produces only true statements about arithmetic must fail to produce some true statements.

Hope this helps!

(This explanation appears in Smullyan's book 5000 BC and Other Philosophical Fantasies, chapter 3, section 65, which is where I saw it. He discusses it at considerable length in Chapter 16 of The Lady or the Tiger?, "Machines that Talk About Themselves". It also appears in The Riddle of Scheherazade.)



I gratefully acknowledge Charles Colht for his generous donation to this blog.


[ Addendum 20091214: Another article on the same topic. ]

[ Addendum 20150403: Reddit user cafe_anon has formalized this argument as a Coq proof. ]

[ Addendum 20150406: Reddit user TezlaKoil has formalized this argument as an Agda proof. (Unformatted version) ]


[Other articles in category /math] permanent link

Sun, 21 Jun 2009

Gray code at the pediatrician's office
Last week we took Katara to the pediatrician for a checkup, during which they weighed, measured, and inoculated her. The measuring device, which I later learned is called a stadiometer, had a bracket on a slider that went up and down on a post. Katara stood against the post and the nurse adjusted the bracket to exactly the top of her head. Then she read off Katara's height from an attached display.

How did the bracket know exactly what height to report? This was done in a way I hadn't seen before. It had a photosensor looking at the post, which was printed with this pattern:

(Click to view the other pictures I took of the post.)

The pattern is binary numerals. Each numeral is a certain fraction of a centimeter high, say 1/4 centimeter. If the sensor reads the number 433, that means that the bracket is 433/4 = 108.25 cm off the ground, and so that Katara is 108.25 cm tall.

The patterned strip in the left margin of this article is a straightforward translation of binary numerals to black and white boxes, with black representing 1 and white representing 0:

0000000000
0000000001
0000000010
0000000011
0000000100
0000000101
0000000110
...
1111101000
1111101001
...
1111111111
If you are paying attention, you will notice that although the strip at left is similar to the pattern in the doctor's office, it is not the same. That is because the numbers on the post are Gray-coded.

Gray codes solve the following problem with raw binary numbers. Suppose Katara is close to 416/4 = 104 cm tall, so that the photosensor is in the following region of the post:

...
0110100001 (417)
0110100000 (416)
0110011111 (415)
0110011110 (414)
...
But suppose that the sensor (or the post) is slightly mis-aligned, so that instead of properly reading the (416) row, it reads the first half of the (416) row and last half of the (415) row. That makes 0110111111, which is 447 = 111.75 cm, an error of almost 7.5%. (That's three inches, for my American and Burmese readers.) Or the error could go the other way: if the sensor reads the first half of the (415) and the second half of the (416) row, it will see 0110000000 = 384 = 96 cm.
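
The splice is easy to simulate with a little bit-twiddling; here is a sketch in Python (the five-bit split is just one possible misalignment):

    row_416, row_415 = 0b0110100000, 0b0110011111

    # first half of the 416 row, second half of the 415 row
    misread = (row_416 & 0b1111100000) | (row_415 & 0b0000011111)
    assert misread == 447                           # 111.75 cm, not 104 cm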

Gray code is a method for encoding numbers in binary so that each numeral differs from the adjacent ones in only one position:

0000000000
0000000001
0000000011
0000000010
0000000110
0000000111
0000000101
0000000100
0000001100
...
1000011100
1000011101
...
1000000000
This is the pattern from the post, which you can also see at the right of this article.

Now suppose that the mis-aligned sensor reads part of the (416) line and part of the (417) line. With ordinary binary coding, this could result in an error of up to 7.75 cm. (And worse errors for children of other heights.) But with Gray coding no error results from the misreading:

...
0101110001 (417)
0101110000 (416)
0101010000 (415)
0101010001 (414)
...
No matter what parts of 0101110000 and 0101110001 are stitched together, the result is always either 416 or 417.

Converting from standard binary to Gray code is easy: take the binary expansion, and invert every bit that is immediately to the right of a 1 bit. For example, in 1111101000, each bit immediately to the right of a 1 is inverted, yielding the Gray code 1000011100.

Converting back is also easy: Flip any bit that is anywhere to the right of an odd number of 1 bits, and leave alone the bits that are to the right of an even number of 1 bits. For example, Gray code 1000011100 contains four 1's, and the bits to the right of the first (00001) and third (1), but not the second or fourth, get flipped to 11110 and 0, to give 1111101000.
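
Both conversions are short in code. Here is a sketch in Python that checks the examples above:

    def to_gray(n):
        # invert every bit immediately to the right of a 1 bit
        return n ^ (n >> 1)

    def from_gray(g):
        # flip bits that follow an odd number of 1 bits (a prefix XOR)
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    assert to_gray(0b1111101000) == 0b1000011100    # 1000, as above
    assert from_gray(0b1000011100) == 1000
    assert to_gray(416) == 0b0101110000 and to_gray(417) == 0b0101110001
    # adjacent codes always differ in exactly one bit
    assert all(bin(to_gray(i) ^ to_gray(i + 1)).count('1') == 1
               for i in range(1023))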

[ Addendum 20110525: Every so often someone asks why the stadiometer is so sophisticated. Here is the answer. ]


[Other articles in category /math] permanent link

Fri, 22 May 2009

A child is bitten by a dog every 0.07 seconds...
I read in the newspaper today that letter carriers were bitten by dogs 3,000 times last year. (Curiously, this is not a round number; it is exact.) The article then continued: "children ... are 900 times more likely to be bitten than letter carriers."

This is obviously nonsense, because suppose the post office employs half a million letter carriers. (The actual number is about half that, but we are doing a back-of-the-envelope estimate of plausibility.) Then the bite rate is six bites per thousand letter carriers per year, and if children are 900 times more likely to be bitten, they are getting bitten at a rate of 5,400 bites per thousand children per year, or 5.4 bites per child. Insert your own joke here, or use the prefabricated joke framework in the title of this article.
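
For the skeptical, here is the envelope arithmetic in Python (using the same assumed numbers as above):

    carriers = 500_000                  # generous estimate
    rate = 3_000 / carriers             # 0.006 bites per carrier per year
    print(900 * rate)                   # 5.4 bites per child per year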

I wrote to the reporter, who attributed the claim to the Postal Bulletin 22258 of 7 May 2009. It does indeed appear there. I am trying to track down the ultimate source, but I suspect I will not get any farther. I have discovered that the "900 times" figure appears in the Post Office's annual announcements of Dog Bite Prevention Month as far back as 2004, but not as far back as 2002.

Meantime, what are the correct numbers?

The Centers for Disease Control and Prevention have a superb on-line database of injury data. It immediately delivers the correct numbers for dog bite rate among children:

Age   Number of injuries   Population   Rate per 100,000
0 2,302 4,257,020 54.08
1 7,100 4,182,171 169.77
2 10,049 4,110,458 244.47
3 10,355 4,111,354 251.86
4 9,920 4,063,122 244.15
5 7,915 4,031,709 196.32
6 8,829 4,089,126 215.91
7 6,404 3,935,663 162.72
8 8,464 3,891,755 217.48
9 8,090 3,901,375 207.36
10 7,388 3,927,298 188.11
11 6,501 4,010,171 162.11
12 7,640 4,074,587 187.49
13 5,876 4,108,962 142.99
14 4,720 4,193,291 112.56
15 5,477 4,264,883 128.42
16