The Universe of Discourse
https://blog.plover.com
The Universe of Discourse (Mark Dominus Blog)

Another system software error
https://blog.plover.com/2017/11/16#compiler-error-2
<p>[ Warning: This article is meandering and does not end anywhere in particular ]</p>
<p><a href="https://blog.plover.com/prog/compiler-error.html">My recent article about system software errors</a>
kinda blew up the Reddit / Hacker News space, and even got listed on
Voat, which I understand is the Group W Bench where they send you if
you aren't moral enough to be in Reddit. Many people on these fora
were eager to tell war stories of times that they <em>had</em> found errors
in the compiler or other infrastructural software.</p>
<p>This morning I remembered another example that had happened to me. In
the middle 1990s, I was just testing some network program on one of the Sun
Solaris machines that belonged to the
<a href="http://www.ling.upenn.edu/research/computational.html">Computational Linguistics program</a>,
when the entire machine locked up. I had to go into the machine room
and power-cycle it to get it to come back up.</p>
<p>I returned to my desk to pick up where I had left off, and the machine
locked up again, just as I ran my program. I rebooted the machine
again, and putting two and two together I tried the next run on a
different, less heavily-used machine, maybe my desk workstation or
something.</p>
<p>The problem turned out to be a bug in that version of Solaris: if you
bound a network socket to some address, and then tried to connect it
to the same address, everything got stuck. I wrote a five-line
demonstration program and we reported the bug to Sun. I don't know if
it was fixed.</p>
<p>My boss had an odd immediate response to this, something along the lines
that connecting a socket to itself is not a sanctioned use case, so
the failure is excusable. Channeling Richard Stallman, I argued that
no user-space system call should <em>ever</em> be able to crash the system,
no matter what stupid thing it does. He at once agreed.</p>
<p>I felt I was on safe ground, because I had in mind
<a href="https://gcc.gnu.org/onlinedocs/gcc-4.0.1/gcc/Bug-Criteria.html">the GNU GCC bug reporting instructions</a>
of the time, which contained the following unequivocal statement:</p>
<blockquote>
<p>If the compiler gets a fatal signal, for any input whatever, that is
a compiler bug. Reliable compilers never crash.</p>
</blockquote>
<p>I love this paragraph. So clear, so pithy! And the second sentence!
It could have been left off, but it is there to articulate the
writer's moral stance. It is a rock-firm commitment in a wavering
and uncertain world.</p>
<p>Stallman was a major influence on my writing for a long time. I first
encountered his work in 1985, when I was browsing in a bookstore and
happened to pick up a copy of <em>Dr. Dobb's Journal</em>. That issue
contained the very first publication of the
<a href="https://www.gnu.org/gnu/manifesto.en.html">GNU Manifesto</a>. I had
never heard of Unix before, but I was bowled over by Stallman's
vision, and I read the whole thing then and there, standing up.</p>
<div class="bookbox"><table align=right width="14%" bgcolor="#ffffdd" border=1><tr><td align=center>
<A HREF="http://www.powells.com/partner/29575/biblio/9780806529301"><font size="-2">Order</font><br>
<cite><font>The Crazy Ape</font></cite><br>
<IMG SRC="https://covers.powells.com/9780806529301.jpg" BORDER="0" ALIGN="center" ALT="The Crazy Ape" ><br>
<font size="-2">from Powell's</font></a>
</td></tr></table></div>
<p>(It hit the same spot in my heart as Albert Szent-Györgyi's <em>The Crazy
Ape</em>, which made a similarly big impression on me at about the same
time. I think programmers don't take moral concerns seriously enough,
and this is one reason why so many of them find Stallman annoying.
But this is what I think makes Stallman so important. Perhaps Dan
Bernstein is a similar case.)</p>
<p>I have very vague memories of perhaps finding a bug in <code>gcc</code>, which is
perhaps why I was familiar with that particular section of the <code>gcc</code>
documentation. But more likely I just read it because I read
a lot of stuff. Also Stallman was probably on my “read everything he
writes” list.</p>
<p>Why was I trying to connect a socket to itself, anyway? Oh, it was a
bug. I meant to connect it somewhere else and used the wrong
variable or something. If the operating system crashes when you try,
that is a bug. Reliable operating systems never crash.</p>
<p>[ Final note: I looked for my five-line program that connected a
socket to itself, but I could not find it. But I found something
better instead: an email I sent in April 1993 reporting a program that
caused <code>g++</code> version 2.3.3 to crash with an internal compiler error.
And yes, my report does quote the same passage I quoted above. ]</p>
Down with the negation sign!
https://blog.plover.com/2017/11/15#24-puzzle-bacon
<p>[ Credit where it is due: This was entirely <a href="http://wry.me/~darius/">Darius Bacon's</a> idea. ]</p>
<p>In connection with <a href="https://blog.plover.com/testblog/math/24-puzzle-2.html">“Recognizing when two arithmetic expressions are
essentially the
same”</a>, I had
several conversations with people about ways to normalize numeric
expressions. In that article I observed that while everyone knows the
usual associative law for addition $$ (a + b) + c = a + (b + c)$$
nobody ever seems to mention the corresponding law for subtraction: $$
(a+b)-c = a + (b-c).$$</p>
<p>And while everyone
“knows” that subtraction is not associative because $$(a - b) - c ≠ a
- (b-c)$$ nobody ever seems to observe that there <em>is</em> an associative
law for subtraction: $$\begin{align} (a - b) + c & = a - (b - c)
\\ (a -b) -c & = a-(b+c).\end{align}$$</p>
<p>This asymmetry is kind of a nuisance, and suggests that a more
symmetric notation might be better. Darius Bacon suggested a simple
change that I think is an improvement:</p>
<blockquote>
<p>Write the negation of
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24"> as $$a\star$$ so that one has, for all <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24">, $$a+a\star =
a\star + a = 0.$$</p>
</blockquote>
<p>The <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> operation obeys the following elegant and simple laws:
$$\begin{align}
a\star\star & = a \\
(a+b)\star & = a\star + b\star
\end{align}
$$</p>
<p>Once we adopt <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24">, we get a huge payoff: <em>We can eliminate
subtraction</em>:</p>
<blockquote>
<p>Instead of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2db%24"> we now write <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bb%5cstar%24">.</p>
</blockquote>
<p>The negation of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bb%5cstar%24"> is $$(a+b\star)\star = a\star + b{\star\star} = a\star +b.$$</p>
<p>We no longer have the annoying notational asymmetry between <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2db%24">
and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2db%20%2b%20a%24"> where the plus sign appears from nowhere. Instead, one
is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bb%5cstar%24"> and the other is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24b%5cstar%2ba%24">, which is obviously just the
usual commutativity of addition.</p>
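<p>(A quick numeric check, with numbers of my own choosing rather than from the original discussion: the conventional $$7-3 = 4 \qquad\text{and}\qquad -3+7 = 4$$ become $$7+3\star = 4 \qquad\text{and}\qquad 3\star+7 = 4,$$ visibly the same sum written in two orders.)</p>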
<p>The <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> is of course nothing but a synonym for multiplication by
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d1%24">. But it is a much less clumsy synonym. <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%5cstar%24"> means
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%5ccdot%28%2d1%29%24">, but with less inkjunk.</p>
<p>In conventional notation the parentheses in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%28%2db%29%24"> are essential
and if you lose them the whole thing is ruined. But because <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> is
just a special case of multiplication, it associates with
multiplication and division, so we don't have to worry about
parentheses in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%5cstar%29b%20%3d%20a%28b%5cstar%29%20%3d%20%28ab%29%5cstar%24">. They are all equal to just
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24ab%5cstar%24">, and you can drop the parentheses or include them or write the
terms in any order, just as you like, just as you would with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24abc%24">.</p>
<p>The surprising associativity of subtraction is no longer surprising,
because $$(a + b) - c = a + (b - c)$$ is now written as $$(a + b) + c\star
= a + (b + c\star)$$ so it's just the usual associative law for addition;
it is not even disguised. The same happens for the reverse
associative laws for subtraction that nobody mentions; they
become variations on $$ \begin{align} (a + b\star) + c\star & = a + (b\star + c\star)
\\ & = a + (b+c)\star \end{align} $$ and such like.</p>
<p>The <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> is faster to read and faster to say. Instead of “minus one”
or “negative one” or “times negative one”, you just say “star”. </p>
<p>The <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> is just a number, and it behaves like a number. Its role
in an expression is the same as any other number's. It is just a
special, one-off notation for a single, particularly important number.</p>
<p>Open questions:</p>
<ol>
<li><p>Do we now need to replace the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cpm%24"> sign? If so,
what should we replace it with?</p></li>
<li><p>Maybe the idea is sound, but the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> itself is a bad choice. It
is slow to write. It will inevitably be confused with the <tt>*</tt> that
almost every programming language uses to denote multiplication.</p></li>
<li><p>The real problem here is that the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d%24"> symbol is overloaded.
Instead of changing the negation symbol to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> and eliminating
the subtraction symbol, what if we just eliminated subtraction?
None of the new notation would be incompatible with the old
notation: <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d%28a%2b%2db%29%20%3d%20b%2b%2da%24"> still means exactly what it used to.
But you are no longer allowed to abbreviate it to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d%28a%2db%29%20%3d%20b%2da%24">.</p>
<p>This would fix the problem of the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> taking too long to write:
we would just use <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d%24"> in its place. It would also fix the
concern of point 2: <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%5cpm%20b%24"> now means <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bb%24"> or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2b%2db%24"> which
is not hard to remember or to understand. Another happy result:
notations like <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d1%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d2%24"> do not change at all.</p></li>
</ol>
<!--
One thing you do have to be careful of is mixing <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> with
exponents. <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%5e%7bbc%7d%20%3d%20a%20%5e%7bcb%7d%24">, but <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%5e%7b%5cstar%20b%7d%20%5cne%20a%5e%7bb%5cstar%7d%24">.
For example, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%5e%7b%5cstar%202%7d%20%3d%20a%5e2%24">. This makes me wonder if perhaps
the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cstar%24"> should be a subscript instead of a superscript.
((Making it not a subscript at all made this whole problem go away.))
-->
<p>Curious footnote: While I was writing up the draft of this article,
it had a reminder in it: “How did you and Darius come up with
this?” I went back to our email to look, and I discovered the answer
was:</p>
<ol>
<li>Darius suggested the idea to me.</li>
<li>I said, “Hey, that's a great idea!”</li>
</ol>
<p>I wish I could take more credit, but there it is. Hmm, maybe I will
take credit for inspiring Darius! That should be worth at least
fifty percent, perhaps more.</p>
<p>[ This article had some perinatal problems. It escaped early from the
laboratory, in a not-quite-finished state, so I apologize if you are
seeing it twice. ]</p>
No, it is not a compiler error. It is never a compiler error.
https://blog.plover.com/2017/11/12#compiler-error
<p>When I used to hang out in the <code>comp.lang.c</code> Usenet group, back when
there <em>was</em> a <code>comp.lang.c</code> Usenet group, people would show up fairly
often with some program they had written that didn't work, and ask if
their compiler had a bug. The compiler did not have a bug. The
compiler never had a bug. The bug was always in the programmer's code
and usually in their understanding of the language.</p>
<p>When I worked at the University of Pennsylvania, a grad student posted
to one of the internal bulletin boards looking for help with a program
that didn't work. Another graduate student, a super-annoying
know-it-all, said confidently that it was certainly a compiler bug.
It was not a compiler bug. It was caused by a misunderstanding of the
way arguments to unprototyped functions were automatically promoted.</p>
<p>This is actually a subtle point, obscure and easily misunderstood.
Most examples I have seen of people blaming the compiler are much
sillier. I used to be on the mailing list for discussing the
development of Perl 5, and people would show up from time to time to
ask if Perl's <code>if</code> statement was broken. This is a little
mind-boggling, that someone could think this. Perl was first released
in 1987. (How time flies!) The <code>if</code> statement is not exactly an
obscure or little-used feature. If there had been a bug in <code>if</code> it
would have been discovered and fixed by 1988. Again, the bug was
always in the programmer's code and usually in their understanding of
the language.</p>
<p><a href="https://groups.google.com/d/msg/comp.lang.perl.misc/l95-_OoO8ck/zG3ob9-b5wsJ">Here's something I wrote in October 2000</a>,
which I think makes the case very clearly, this time concerning a
claimed bug in the <code>stat()</code> function, another feature that first
appeared in Perl 1.000:</p>
<blockquote>
<p>On the one hand, there's a chance that the compiler has a broken
<code>stat</code> and is subtracting 6 or something. Maybe that sounds likely to
you but it sounds really weird to me. I cannot imagine how such a
thing could possibly occur. Why 6? It all seems very unlikely.</p>
<p>Well, in the absence of an alternative hypothesis, we have to take what
we can get. But in this case, there is an alternative hypothesis!
The alternative hypothesis is that [this person's] program has a bug.</p>
<p>Now, which seems more likely to you?</p>
<ul>
<li>Weird, inexplicable compiler bug that nobody has ever seen before</li>
</ul>
<p>or</p>
<ul>
<li>Programmer fucked up</li>
</ul>
<p>Hmmm. Let me think.</p>
<p>I'll take Door #2, Monty.</p>
</blockquote>
<p>Presumably I had to learn this myself at some point. A programmer can
waste a lot of time looking for the bug in the compiler instead of
looking for the bug in their program. I have a file of (obnoxious)
Good Advice for Programmers that I wrote about twenty years ago, and
one of these items is:</p>
<blockquote>
<p>Looking for a compiler bug is the strategy of LAST resort. LAST resort.</p>
</blockquote>
<p>Anyway, I will get to the point. As I mentioned a few months ago,
<a href="https://blog.plover.com/math/24-puzzle-3.html#phoneapp">I built a simple phone app</a>
that Toph and I can use to find solutions to
“twenty-four puzzles”. In these puzzles, you are given four
single-digit numbers and you have to combine them arithmetically to
total 24. Pennsylvania license plates have four digits, so as we
drive around we play the game with the license plate numbers we see.
Sometimes we can't solve a puzzle, and then we wonder: is it because
there is no solution, or because we just couldn't find one? Then we
ask the phone app.</p>
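<p>A brute-force solver for such puzzles fits in a few lines. The sketch below is my own illustration, not the code of the actual app: it picks two of the remaining numbers, combines them with one of the four operations (trying both orders of each pair), and recurses until a single number is left.</p>

```javascript
// Brute-force twenty-four solver (an illustrative sketch, not the
// app's actual code). Combine any two numbers, recurse on the rest.
function solvable(nums, target = 24) {
  if (nums.length === 1) return Math.abs(nums[0] - target) < 1e-9;
  for (let i = 0; i < nums.length; i++) {
    for (let j = 0; j < nums.length; j++) {
      if (i === j) continue;                 // both (i,j) orders are tried,
      const rest = nums.filter((_, k) => k !== i && k !== j);
      const results = [nums[i] + nums[j],    // so a-b and b-a are covered
                       nums[i] - nums[j],
                       nums[i] * nums[j]];
      if (nums[j] !== 0) results.push(nums[i] / nums[j]);
      if (results.some(r => solvable([...rest, r], target))) return true;
    }
  }
  return false;
}

console.log(solvable([5, 4, 5, 1])); // true: for example 4×5 + 5 − 1
console.log(solvable([1, 1, 1, 1])); // false: four 1s cannot make 24
```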
<p>The other day we saw the puzzle «5 4 5 1», which is very easy, but I
asked the phone app, to find out if there were any other solutions
that we missed. And it announced “No solutions.” Which is wrong. So
my program had a bug, as my programs often do.</p>
<p>The app has a pre-populated dictionary containing all possible
solutions to all the puzzles that have solutions, which I generated
ahead of time and embedded into the app. My first guess was that the bug
had been in the process that generated this dictionary, and that it
had somehow missed the solutions of «5 4 5 1». These would be indexed
under the key <code>1455</code>, which is the same puzzle, because each list of
solutions is associated with the four input numbers in ascending
order. Happily I still had the original file containing the
dictionary data, but when I looked in it under <code>1455</code> I saw exactly
the two solutions that I expected to see.</p>
<p>So then I looked into the app itself to see where the bug was. Code
Studio's underlying language is Javascript, and Code Studio has a
nice debugger. I ran the app under the debugger, and stopped in the
relevant code, which was:</p>
<pre><code> var x = [getNumber("a"), getNumber("b"), getNumber("c"), getNumber("d")].sort().join("");
</code></pre>
<p>This constructs a hash key (<code>x</code>) that is used to index into the canned
dictionary of solutions. The <code>getNumber()</code> calls were retrieving the
four numbers from the app's menus, and I verified that the four
numbers were «5 4 5 1» as they ought to be. But what I saw next
astounded me: <code>x</code> was not being set to <code>1455</code> as it should have been.
It was set to <code>4155</code>, which was not in the dictionary. And it was set
to <code>4155</code> because</p>
<p>the built-in <code>sort()</code> function</p>
<p>was sorting the numbers</p>
<p>into</p>
<p>the</p>
<p>wrong</p>
<p>order.</p>
<p align="center"><img src="https://pic.blog.plover.com/prog/compiler-error/WTF.jpg"></p>
<p>For a while I could not believe my eyes. But after another fifteen or
thirty minutes of tinkering, I sent off a bug report… no, I did not.
I still didn't believe it. I asked the front-end programmers at my
company what my mistake had been. Nobody had any suggestions.</p>
<p><em>Then</em> I sent off a bug report that began:</p>
<blockquote>
<p>I think that Array.prototype.sort() returned a wrongly-sorted result
when passed a list of four numbers. This seems impossible, but …</p>
</blockquote>
<p>I was about 70% expecting to get a reply back explaining what I had
misunderstood about the behavior of Javascript's <code>sort()</code>.</p>
<p>But to my astonishment, the reply came back only an hour later:</p>
<blockquote>
<p>Wow! You're absolutely right. We'll investigate this right away.</p>
</blockquote>
<p>In case you're curious, the bug was as follows: The <code>sort()</code> function
was using a bubble sort. (This is of course a bad choice, and I think
the maintainers plan to replace it.) The bubble sort makes several
passes through the input, swapping items that are out of order. It
keeps a count of the number of swaps in each pass, and if the number
of swaps is zero, the array is already ordered and the sort can stop
early and skip the remaining passes. The test for this was:</p>
<pre><code> if (changes <= 1) break;
</code></pre>
<p>but it should have been:</p>
<pre><code> if (changes == 0) break;
</code></pre>
<p>Ouch.</p>
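<p>The failure is easy to reproduce. Here is a minimal reconstruction of the sort (my sketch, not Code Studio's actual source), which with the faulty early-exit test stops one pass too soon on «5 4 5 1» and produces exactly the wrong key from the story:</p>

```javascript
// Bubble sort with the faulty early-exit test: it stops when a pass
// makes at most one swap, instead of only when a pass makes none.
function bubbleSort(arr, fixed = false) {
  const a = arr.slice();
  while (true) {
    let changes = 0;
    for (let i = 0; i < a.length - 1; i++) {
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];
        changes++;
      }
    }
    if (fixed ? changes === 0 : changes <= 1) break;  // the bug is here
  }
  return a;
}

console.log(bubbleSort([5, 4, 5, 1]).join(""));       // "4155" (the bad key)
console.log(bubbleSort([5, 4, 5, 1], true).join("")); // "1455" (the right key)
```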
<p>The Code Studio folks handled this very creditably, and did indeed fix it the same day.
(<a href="https://support.code.org/hc/en-us/requests/115548">The support system ticket is available for your perusal</a>,
as is <a href="https://github.com/code-dot-org/JS-Interpreter/pull/23">the Github pull request with the fix</a>,
in case you are interested.)</p>
<p>I still can't quite believe it. I feel as though I have accidentally
spotted the Loch Ness Monster, or Bigfoot, or something like that, a
strange and legendary monster that until now I thought most likely didn't
exist.</p>
<p>A bug in the <code>sort()</code> function. O day and night, but this is wondrous
strange!</p>
<p>[ Addendum 20171113: Thanks to <a href="https://www.reddit.com/user/spotter">Reddit user spotter</a>
for pointing me to a related 2008 blog post of Jeff Atwood's,
<a href="https://blog.codinghorror.com/the-first-rule-of-programming-its-always-your-fault/">“The First Rule of Programming: It's Always Your Fault”</a>.
]</p>
<p>[ Addendum 20171113: Yes, yes, I know <code>sort()</code> is in the library, not in the compiler. I am using “compiler error” as a <a href="https://en.wikipedia.org/wiki/Synecdoche">synecdoche</a>
for “system software error”. ]</p>
<!--
What's that? Yes, Mr. Travatino, you are responsible for _everything_
we cover in class, and synecdoche is fair game for the final exam.
-->
<p>[ Addendum 20171116: I remembered examples of <a href="https://blog.plover.com/prog/compiler-error-2.html">two other fundamental
system software errors I have
discovered</a>, including one
honest-to-goodness compiler bug. ]</p>
Randomized algorithms go fishing
https://blog.plover.com/2017/11/11#randomized-algorithms
<p>A little while back I thought of a perfect metaphor for explaining
what a <a href="https://en.wikipedia.org/wiki/Randomized_algorithm">randomized
algorithm</a> is.
It's so perfect I'm sure it must have been thought of many times before,
but it's new to me.</p>
<p>Suppose you have a lake, and you want to know if there are fish in the
lake. You dig some worms, pick a spot, bait the hook, and wait. At
the end of the day, if you have caught a fish, you have your answer:
there <em>are</em> fish in the lake.<a href="#fn1">[1]</a></p>
<p>But what if you don't catch a fish? Then you still don't know.
Perhaps you used the wrong bait, or fished in the wrong spot. Perhaps
you did everything right and the fish happened not to be biting that
day. Or perhaps you did everything right except there are no fish in
the lake.</p>
<p>But you can try again. Pick a different spot, try a different bait,
and fish for another day. And if you catch a fish, you know the
answer: the lake does contain fish. But if not, you can go fishing
again tomorrow.</p>
<p>Suppose you go fishing every day for a month and you catch nothing.
You still don't know why. But you have a pretty good idea: most
likely, it is because there are no fish to catch. It <em>could</em> be that
you have just been very unlucky, but that much bad luck is unlikely.</p>
<p>But perhaps you're not sure enough. You can keep fishing. If, after
a year, you have not caught any fish, you can be almost certain that
there were no fish in the lake at all. Because a year-long run of bad
luck is extremely unlikely. But if you are still not convinced, you
can keep on fishing. You will never be 100% certain, but if you keep
at it long enough you can become 99.99999% certain with as many nines
as you like.</p>
<p>That is a randomized algorithm, for finding out if there are fish in a
lake! It might tell you definitively that there are, by producing a
fish. Or it might fail, and then you still don't know. But as long
as it keeps failing, the chance that there are any fish rapidly
becomes very small, exponentially so, and can be made as small as you
like.</p>
<p>For not-metaphorical examples, see:</p>
<ul>
<li><p>The <a href="https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test">Miller-Rabin primality
test</a>:
Given an odd number <em>K</em>, is <em>K</em> composite? If it is, the
Miller-Rabin test will tell you so 75% of the time. If not, you can
go fishing again the next day. After <em>n</em> trials, you are either
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24100%5c%25%24"> certain that <em>K</em> is composite, or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24100%5c%25%2d%5cfrac1%7b2%5e%7b2n%7d%7d%24"> certain
that it is prime.</p></li>
<li><p><a href="https://en.wikipedia.org/wiki/Freivalds%27_algorithm">Freivalds’
algorithm</a>:
given three square matrices <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%2c%20B%2c%20%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%24">, is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%24"> the
product <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%c3%97B%24">? Actually multiplying <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%c3%97B%24"> could be slow. But
if <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%c3%97B%24"> is <em>not</em> equal to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%24">, Freivalds’ algorithm will
quickly tell you that it isn't—half the time. If not, you can go
fishing again. After <em>n</em> trials, you are either <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24100%5c%25%24">
certain that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%24"> is not the correct product, or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24100%5c%25%2d%5cfrac1%7b2%5en%7d%24"> certain
that it is.</p></li>
</ul>
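<p>For concreteness, here is a compact sketch of the Miller-Rabin test (my own code, not taken from any particular library). Each base is one day of fishing, and a witness to compositeness is a caught fish. With random bases each trial catches a composite at least 75% of the time; for simplicity this version uses a few fixed small bases, which happens to make it deterministic for numbers as small as the ones tested below.</p>

```javascript
// Modular exponentiation on BigInts: b^e mod m by repeated squaring.
function powMod(b, e, m) {
  let r = 1n;
  b %= m;
  while (e > 0n) {
    if (e & 1n) r = (r * b) % m;
    b = (b * b) % m;
    e >>= 1n;
  }
  return r;
}

// Miller-Rabin with fixed small bases (a sketch; real implementations
// use random bases, one "fishing trip" per round).
function isProbablyPrime(n) {
  n = BigInt(n);
  if (n < 2n) return false;
  for (const p of [2n, 3n, 5n, 7n, 11n, 13n]) {
    if (n === p) return true;
    if (n % p === 0n) return false;
  }
  let d = n - 1n, s = 0n;
  while (d % 2n === 0n) { d /= 2n; s++; }   // n - 1 = 2^s · d, d odd
  for (const a of [2n, 3n, 5n, 7n, 11n, 13n]) {
    let x = powMod(a, d, n);
    if (x === 1n || x === n - 1n) continue; // nothing caught this trip
    let witness = true;
    for (let i = 1n; i < s; i++) {
      x = (x * x) % n;
      if (x === n - 1n) { witness = false; break; }
    }
    if (witness) return false;              // a caught fish: surely composite
  }
  return true;                              // no fish after every trip
}

console.log(isProbablyPrime(2047));   // false: 23 × 89, caught by base 3
console.log(isProbablyPrime(104729)); // true: the 10000th prime
```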
<hr />
<p><a name="fn1">[1]</a> Let us ignore mathematicians’
pettifoggery about lakes that contain exactly one fish. This is just
a metaphor. If you are really concerned, you can
catch-and-release.</p>
A modern translation of the 1+1=2 lemma
https://blog.plover.com/2017/11/07#PM-translation
<p><a href="https://blog.plover.com/math/PM.html">A while back I blogged an explanation of the “<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%241%2b1%3d2%24">” lemma from
Whitehead and Russell's <em>Principia
Mathematica</em></a>:</p>
<p align=center><img src="https://pic.blog.plover.com/math/PM-translation/1+1=2.png"></p>
<p>W. Ethan Duckworth of the Department of Mathematics and Statistics at
Loyola University translated this into modern notation and has kindly
given me permission to publish it here:</p>
<p align=center><img src="https://pic.blog.plover.com/math/PM-translation/1+1=2-translation.png"></p>
<p>I think it is interesting and instructive to compare the two versions.
One thing to notice is that there is no perfect translation. As when
translating between two natural languages (German and English, say),
the meaning cannot be preserved exactly. Whitehead and Russell's
language is different from the modern language not only because the
notation is different but because the underlying concepts are
different. To really get what <em>Principia Mathematica</em> is saying you have to immerse
yourself in the <em>Principia Mathematica</em> model of the world.</p>
<p>The best example of this here is the symbol “1”. In the modern
translation, this means the number 1. But at this point in <em>Principia Mathematica</em>, the
number 1 has not yet been defined, and to use it here would be
circular, because proposition ∗54.43 is an important step on
the way to defining it. In <em>Principia Mathematica</em>, the symbol “1” represents the class of all
sets that contain exactly one element.<a href="#fn1">[1]</a> Following
<a href="https://quod.lib.umich.edu/cache/a/a/t/aat3201.0001.001/00000381.tif.20.pdf#page=5;zoom=75">the definition of ∗52.01</a>, in modern notation we would write
something like:</p>
<p>$$1 \equiv_{\text{def}} \{x \mid \exists y . x = \{ y \} \}$$</p>
<p>But in many modern universes, that of ZF set theory in particular,
there is no such object.<a href="#fn2">[2]</a> The situation in ZF is
even worse: the purported definition is meaningless, because the
comprehension is unrestricted.</p>
<div class="bookbox"><table align=right width="14%" bgcolor="#ffffdd" border=1><tr><td align=center>
<A HREF="http://www.powells.com/partner/29575/biblio/0521626064"><font size="-2">Order</font><br>
<cite><font>Principia Mathematica (through section 56)</font></cite><br>
<IMG SRC="http://covers.librarything.com/devkey/4e7443e59f586e9306d61bb521a11d8e/medium/isbn/0521626064" BORDER="0" ALIGN="center" ALT="Principia Mathematica (through section 56)" ><br>
<font size="-2">from Powell's</font></a>
</td></tr></table></div>
<p>The <em>Principia Mathematica</em> notation for <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%7cA%7c%24">, the cardinality of set <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">, is
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24Nc%5c%2c%e2%80%98A%24">, but again this is only an approximate translation. The
meaning of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24Nc%5c%2c%e2%80%98A%24"> is something close to</p>
<blockquote>
<p>the unique class <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%24"> such
that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%5cin%20C%24"> if and only if there exists a one-to-one relation
between <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%24">.</p>
</blockquote>
<p>(So for example one might assert that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24Nc%5c%2c%e2%80%98%5cLambda%20%3d%200%24">, and in
fact this is precisely what <a href="https://quod.lib.umich.edu/cache/a/a/t/aat3201.0002.001/00000041.tif.20.pdf#page=13;zoom=75">proposition ∗101.1</a> does assert.)
Even this doesn't quite capture the <em>Principia Mathematica</em> meaning, since the modern
conception of a relation is that it is a special kind of set, but in
<em>Principia Mathematica</em> relations and sets are different sorts of things. (We would also
use a one-to-one <em>function</em>, but here there is no <em>additional</em>
mismatch between the modern concept and the <em>Principia Mathematica</em> one.)</p>
<p>It is important, when reading old mathematics, to try to understand in
modern terms what is being talked about. But it is also dangerous to
forget that the ideas themselves are different, not just the
language.<a href="#fn3">[3]</a> I extract a lot of value from
switching back and forth between different historical views, and
comparing them. Some of this value is purely historiological. But
some is directly mathematical: looking at the same concepts from a
different viewpoint sometimes illuminates aspects I didn't fully
appreciate. And the different viewpoint I acquire is one that most
other people won't have.</p>
<p>One of my current low-priority projects is reading
<a href="https://en.wikipedia.org/wiki/William_Burnside">W. Burnside</a>'s
important 1897 book <a href="https://archive.org/details/in.ernet.dli.2015.168629"><em>Theory of Groups of Finite
Order</em></a>. The
value of this, for me, is not so much the group-theoretic content, but
in seeing how ideas about groups have evolved. I hope to write more
about this topic at some point.</p>
<hr />
<p><a name="fn1">[1]</a> Actually the situation in <em>Principia Mathematica</em> is more
complicated. There is a different class 1 defined at each type. But
the point still stands.</p>
<p><a name="fn2">[2]</a> In ZF, if <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%241%24"> were to exist as defined above,
the set <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5c%5c%7b1%5c%5c%7d%24"> would exist also, and we would have <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5c%5c%7b1%5c%5c%7d%20%5cin%0a1%24"> which would contradict the axiom of foundation.</p>
<div class="bookbox"><table align=right width="14%" bgcolor="#ffffdd" border=1><tr><td align=center>
<A HREF="http://www.powells.com/partner/29575/biblio/9781107534056"><font size="-2">Order</font><br>
<cite><font>Proofs and Refutations</font></cite><br>
<IMG SRC="https://covers.powells.com/9781107534056.jpg" BORDER="0" ALIGN="center" ALT="Proofs and Refutations" ><br>
<font size="-2">from Powell's</font></a>
</td></tr></table></div>
<p><a name="fn3">[3]</a> This was a recurring topic of study for Imre
Lakatos, most famously in his little book <a href="https://math.berkeley.edu/~kpmann/Lakatos.pdf"><em>Proofs and
Refutations</em></a>. Also
important is his article “Cauchy and the continuum: the significance
of nonstandard analysis for the history and philosophy of
mathematics.” <cite>Math. Intelligencer</cite> <b>1</b> (1978), #3,
p.151–161, which <a href="https://blog.plover.com/math/Cauchy.html">I discussed here
earlier</a>, and which you can read in its entirety by paying the
excellent people at Elsevier the nominal and reasonable—nay,
trivial—sum of only US$39.95.</p>
I missed an easy solution to a silly problem
https://blog.plover.com/2017/11/02#blosxom-macros-3
<p>A few years back I wrote a couple of articles about the extremely poor
macro plugin I wrote for Blosxom.
(<a href="https://blog.plover.com/prog/blosxom-macros.html">[1]</a>
<a href="https://blog.plover.com/prog/blosxom-macros-2.html">[2]</a>). The feature-poorness of
the macro system is itself the system's principal feature, since it
gives the system simple behavior and simple implementation. Sometimes
this poverty means I have to use odd workarounds to get it to do what
I want, but they are always <em>simple</em> workarounds and it is never hard
to figure out <em>why</em> it didn't do what I wanted.</p>
<p>Yesterday, though, I got stuck. I had defined a macro, <code>-></code>, which
the macro system would replace with →. Fine. But later in the
file I had an HTML comment:</p>
<pre><code> <!-- blah blah -->
</code></pre>
<p>and the macro plugin duly transformed this to</p>
<pre><code> <!-- blah blah -→
</code></pre>
<p>creating an unterminated comment.</p>
<p>Normally the way I would deal with this would be to change the name of
the macro from <code>-></code> to something else that did not conflict with HTML
syntax, but I was curiously resistant to this. I could not think of
anything I wanted to use instead. (After a full night's sleep, it
seems to me that <code>=></code> would have been just fine.)</p>
<p>I then tried replacing <code>--></code> with <code>&#45;&#45;&#62;</code>. I didn't expect
this to work, and indeed it didn't. Those <code>&#</code> escapes only work for
text parts of an HTML document, and cannot be used for HTML syntax.
They <em>remove</em> the special meaning of character sequences, which is the opposite
of what I need here.</p>
<p>Eventually, I just deleted the comment and moved on. That worked,
although it was obviously suboptimal. I was too tired to think, and I
just wanted the problem out of the way. I wish I had been a little
less impulsive, because there are at least two other solutions I
overlooked:</p>
<ul>
<li><p>I could have defined an <code>END-HTML-COMMENT</code> macro that expanded to
<code>--></code> and then used it at the end of the comment. The results of
macro expansion are never re-expanded, so the resulting <code>--></code> would
not have been subject to further replacement. This is ugly, but
would have done the job, and I have used this sort of technique
before.</p></li>
<li><p>Despite its smallness, the macro plugin has had feature creep over
the years, and some of these features I put in and then forgot
about. (Bad programmer! No cookie!) One of these lost features is
the directive <code>#undefall</code>, which instructs the plugin to forget all
the macro definitions. I could have solved the problem by sticking
this in just before the broken comment.</p>
<p>(The plugin does not (yet) have a selective <code>#undef</code>. It would be
easy to add, but the module has too many features already.)</p></li>
</ul>
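<p>The no-re-expansion guarantee is what makes the <code>END-HTML-COMMENT</code> trick safe. Here is a toy single-pass expander that illustrates the idea (a Python sketch for illustration only, not the actual Blosxom plugin): each match is replaced left to right in one pass, and the replacement text is never rescanned, so a macro may safely expand to <code>--></code>.</p>

```python
import re

# Toy single-pass macro expander (a sketch, not the real Blosxom plugin).
# Matches are replaced left to right in a single pass, and replacement
# text is never rescanned -- so a macro may safely expand to "-->".
MACROS = {"->": "→", "END-HTML-COMMENT": "-->"}

def expand(text):
    pattern = re.compile("|".join(re.escape(m) for m in MACROS))
    return pattern.sub(lambda m: MACROS[m.group(0)], text)

print(expand("a -> b"))                           # a → b
print(expand("<!-- blah blah END-HTML-COMMENT"))  # <!-- blah blah -->
```

<p>Note that the <code>--></code> produced by the second call survives, even though it contains <code>-></code>, because <code>re.sub</code> continues scanning the <em>input</em> after each match and never looks inside the replacement.</p>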
<p>Yesterday morning I asked my co-workers if there was an alternative
HTML comment syntax, or some way to modify the comment ending sequence
so that the macro plugin wouldn't spoil it. (I think there isn't, and
a short look at the HTML 5.0 standard didn't suggest any workaround.)</p>
<p>One of the co-workers was Tye McQueen. He said that as far as he knew
there was no fix to the HTML comments that was like what I had asked
for. He asked whether I could define a second macro, <code>--></code>, which
would expand to <code>--></code>.</p>
<p>I carefully explained why this wouldn't work: when two macro
definitions share a prefix, and they both match, the macro system does
not make any guarantee about which substitution it will perform. If
there are two overlapping macros, say:</p>
<pre><code> #define pert curt
#define per bloods
</code></pre>
<p>then the string <code>pertains</code> might turn into <code>curtains</code> (if the first
substitution is performed) or into <code>bloodstains</code> (if the second is).
But you don't know which it will be, and it might be different each
time. I could fix this, but at present I prefer the answer to be
“don't define a macro that is a prefix of another macro.”</p>
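<p>The order-dependence is easy to demonstrate with a toy expander (a Python sketch for illustration; the real plugin differs in detail). Whichever macro is tried first at a given position wins:</p>

```python
import re

MACROS = {"pert": "curt", "per": "bloods"}

def expand(text, order):
    # Try the macros in the given order at each position; first match wins.
    pattern = re.compile("|".join(re.escape(m) for m in order))
    return pattern.sub(lambda m: MACROS[m.group(0)], text)

print(expand("pertains", ["pert", "per"]))  # curtains
print(expand("pertains", ["per", "pert"]))  # bloodstains
```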
<p>Then M. McQueen gently pointed out that <code>-></code> is not a prefix of <code>--></code>.
It is a suffix. My objection does not apply, and his suggestion
solves the problem.</p>
<p>When I write an Oops post I try to think about what lesson I can learn
from the mistake. This time there isn't too much, but I do have a
couple of ideas:</p>
<ol>
<li><p>Before giving up on some piece of software I've written, look to
see if I foresaw the problem and put in a feature to deal with it.
I have a notion that this would work surprisingly often.</p>
<p>There are two ways to deal with the problem of writing software
with too many features, and I have worked on this problem for many
years from only one direction: try to write fewer features. But I
could also come at it from the opposite direction: try to make
better use of the features that I do write.</p>
<p>It feels odd that this should seem to me like a novel idea, but
there it is.</p></li>
<li><p>Try to get more sleep.</p></li>
</ol>
<p>(I haven't published an <a href="https://blog.plover.com/oops/">oops</a> article in far too long,
and it certainly isn't because I haven't been making mistakes. I will
try to keep it in mind.)</p>
The Blind Spot and the cut rule
https://blog.plover.com/2017/10/31#cut-rule
<p>[ The Atom and RSS feeds have done an unusually poor job of preserving
the mathematical symbols in this article. It will be much more
legible if you <a href="https://blog.plover.com/math/logic/cut-rule.html">read it on my
blog</a>. ]</p>
<p>Lately I've been enjoying <em>The Blind Spot</em> by <a href="https://en.wikipedia.org/wiki/Jean-Yves_Girard">Jean-Yves
Girard</a>, a very famous
logician. (It is translated from French; the original title is <em>Le Point
Aveugle</em>.) This is an unusual book. It is solidly full of deep
thought and technical detail about logic, but it is also opinionated,
idiosyncratic and polemical. Chapter 2 (“Incompleteness”) begins:</p>
<blockquote>
<p>It is out of question to enter into the technical arcana of Gödel’s
theorem, this for several reasons:</p>
<p>(i) This result, indeed very easy,
can be perceived, like the late paintings of Claude Monet, but from
a certain distance. A close look only reveals fastidious details
that one perhaps does not want to know.</p>
<p>(ii) There is no need either, since this theorem is a scientific
cul-de-sac : in fact it exposes a way without exit. Since it is
without exit, nothing to seek there, and it is of no use to be
expert in Gödel’s theorem.</p>
<p>(<em>The Blind Spot</em>, p. 29)</p>
</blockquote>
<p>He continues a little later:</p>
<blockquote>
<p>Rather than insisting on those tedious details which «hide the
forest», we shall spend time on objections, from the most ridiculous
to the less stupid (none of them being eventually respectable).</p>
</blockquote>
<p>As you can see, it is not written in the usual dry mathematical-text
style, presenting the material as a perfect and aseptic distillation
of absolute truth. Instead, one sees the history of logic, the rise
and fall of different theories over time, the interaction and relation
of many mathematical and philosophical ideas, and Girard's reflections
about it all. It is a transcription of a lecture series, and reads
like one, including all of the speaker's incidental remarks and
offhand musings, but written down so that each can be weighed and
pondered at length. Instead of wondering in the moment what he meant
by some intriguing remark, then having to abandon the thought to keep
up with the lecture, I can pause and ponder the significance. Girard
is really, really smart, and knows way more about logic than I ever
will, and his offhand remarks reward this pondering. The book is
<em>profound</em> in a way that mathematics books often aren't. I wanted to
provide an illustrative quotation, but to briefly excerpt a profound
thought is to destroy its profundity, so I will have to refrain.<a href="#fn1">[1]</a></p>
<p>The book really gets going with its discussion of Gentzen's
<a href="https://en.wikipedia.org/wiki/Sequent_calculus">sequent calculus</a> in
chapter 3.</p>
<!-- careful, this next definition spoils your ability to close HTML comments! -->
<p>Between around 1890 (when Peano and Frege began to liberate logic from its
medieval encrustations) and 1935 when the sequent calculus was
invented, logical proofs were mainly in the “Hilbert style”.
Typically there were some axioms, and some rules of deduction by which
the axioms could be transformed into other formulas. A typical
example consists of the axioms $$A\to(B\to A)\\
(A \to (B \to C)) \to ((A \to B) \to (A \to C)) $$
(where <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%2c%20B%2c%20C%24"> are understood to be placeholders that can be
replaced by any well-formed formulas) and the deduction rule <em>modus
ponens</em>: having proved <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%5cto%20B%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">, we can deduce <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24B%24">.</p>
<p>In contrast, sequent calculus has few axioms and many deduction
rules. It deals with <em>sequents</em> which are claims of implication.
For example: $$p, q \vdash r, s$$ means that if
we can prove <em>all</em> of the formulas on the left of the ⊢ sign, then we can conclude
<em>some</em> of the formulas on the right. (Perhaps only one, but at least
one.)</p>
<p>A typical deductive rule in
sequent calculus is:</p>
<p>$$
\begin{array}{c}
Γ ⊢ A, Δ \qquad Γ ⊢ B, Δ \\ \hline
Γ ⊢ A ∧ B, Δ
\end{array}
$$</p>
<p>Here <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24"> represent any lists of formulas, possibly empty.
The premises of the rule are:</p>
<ol>
<li><p><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%20%e2%8a%a2%20A%2c%20%ce%94%24">: If we can prove <em>all</em> of the formulas in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24">, then we can
conclude either the formula <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> or <em>some</em> of the formulas in
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24">.</p></li>
<li><p><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%20%e2%8a%a2%20B%2c%20%ce%94%24">: If we can prove <em>all</em> of the formulas in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24">, then we can
conclude either the formula <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24B%24"> or <em>some</em> of the formulas in
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24">.</p></li>
</ol>
<p>From these premises, the rule allows us to deduce:</p>
<blockquote>
<p><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%20%e2%8a%a2%20A%20%e2%88%a7%20B%2c%20%ce%94%24">: If we can prove <em>all</em> of the formulas in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24">, then we can
conclude either the formula <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%20%5cland%20B%24"> or <em>some</em> of the formulas in
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24">.</p>
</blockquote>
<p>The only axioms of sequent calculus are utterly trivial:</p>
<p>$$
\begin{array}{c}
\phantom{A} \\ \hline
A ⊢ A
\end{array}
$$</p>
<p>There are no premises; we get this deduction for free: if we can prove
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">, we can prove <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">. (<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> here is a metavariable that can be
replaced with any well-formed formula.)</p>
<p>One important point that Girard brings up, which I had never realized
despite long familiarity with sequent calculus, is the symmetry
between the left and right sides of the turnstile ⊢. As I mentioned,
the interpretation of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%20%e2%8a%a2%20%ce%94%24"> I had been taught was that it
means that if every formula in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24"> is provable, then some formula in
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24"> is provable. But instead let's focus on just one of the
formulas <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> on the right-hand side, hiding in the list <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24">. The sequent
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%20%e2%8a%a2%20%ce%94%2c%20A%24"> can be understood to mean that to prove
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">, it suffices to prove
all of the formulas in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24">, and to <em>disprove</em> all the formulas in
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24">. And now let's focus on just one of
the formulas on the left side: <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%2c%20A%20%e2%8a%a2%20%ce%94%24"> says that to
disprove <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">, it suffices to prove all the formulas in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24"> and
disprove all the formulas in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24">.</p>
<p>The all-some correspondence, which had previously caused me to wonder
why it was that way and not something else, perhaps the other way
around, has turned into a simple relationship about logical negation:
the formulas on the left are positive, and the ones on the right are
negative.<a href="#fn2">[2]</a> With this insight, the sequent calculus negation laws
become not merely simple but trivial:</p>
<p>$$
\begin{array}{cc}
\begin{array}{c}
Γ, A ⊢ Δ \\ \hline
Γ ⊢ \lnot A, Δ
\end{array}
& \qquad
\begin{array}{c}
Γ ⊢ A, Δ \\ \hline
Γ, \lnot A ⊢ Δ
\end{array}
\end{array}
$$</p>
<p>For example, in the right-hand deduction: what is sufficient to prove
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> is also sufficient to disprove <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%c2%acA%24">.</p>
<p>(Compare also the rule I showed above for ∧: It now says
that if proving everything in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%24"> and disproving everything in
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24"> is sufficient for proving <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">, and likewise
sufficient for proving <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24B%24">, then it is also sufficient for proving
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%5cland%20B%24">.)</p>
<p>But none of that was what I planned to discuss; this article is
(intended to be) about sequent calculus's “cut rule”.</p>
<p>I never really appreciated the cut rule before. Most of the deductive
rules in the sequent calculus are intuitively plausible and so simple
and obvious that it is easy to imagine coming up with them oneself.</p>
<p>But the cut rule is more complicated than the rules I have already
shown. I don't think I would have thought of it easily:</p>
<p>$$
\begin{array}{c}
Γ ⊢ A, Δ \qquad Λ, A ⊢ Π \\ \hline
Γ, Λ ⊢ Δ, Π
\end{array}
$$</p>
<p>(Here <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> is a formula and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%2c%20%ce%94%2c%20%ce%9b%2c%20%ce%a0%24"> are lists of
formulas, possibly empty lists.)</p>
<p>Girard points out that the cut rule is a generalization of modus
ponens: taking <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%2c%20%ce%94%2c%20%ce%9b%24"> to be empty and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%a0%20%3d%20%5c%5c%7bB%5c%5c%7d%24">
we obtain:</p>
<p>$$
\begin{array}{c}
⊢ A \qquad A ⊢ B \\ \hline
⊢ B
\end{array}
$$</p>
<p>The cut rule is also a generalization of the transitivity of implication:</p>
<p>$$
\begin{array}{c}
X ⊢ A \qquad A ⊢ Y \\ \hline
X ⊢ Y
\end{array}
$$</p>
<p>Here we took <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%20%3d%20%5c%5c%7bX%5c%5c%7d%2c%20%ce%a0%20%3d%20%5c%5c%7bY%5c%5c%7d%24">, and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%94%24"> and
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%9b%24"> empty.</p>
<p>This all has given me a much better idea of where the cut rule came
from and why we have it.</p>
<p>In sequent calculus, the deduction rules all come in pairs. There is
a rule about introducing ∧, which I showed before. It allows us
to construct a sequent involving a formula with an ∧, where
perhaps we had no ∧ before. (In fact, it is the only way to do
this.) There is a corresponding rule (actually two rules) for
getting rid of ∧ when we have it and we don't want it:</p>
<p>$$
\begin{array}{cc}
\begin{array}{c}
Γ ⊢ A\land B, Δ \\ \hline
Γ ⊢ A, Δ
\end{array}
& \qquad
\begin{array}{c}
Γ ⊢ A\land B, Δ \\ \hline
Γ ⊢ B, Δ
\end{array}
\end{array}
$$</p>
<p>Similarly there is a rule (actually two rules) about introducing <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5clor%24"> and
a corresponding rule about eliminating it.</p>
<p>The cut rule seems to lie outside this classification. It is not
paired.</p>
<p>But Girard showed me that it <em>is</em> part of a pair. The axiom </p>
<p>$$
\begin{array}{c}
\phantom{A} \\ \hline
A ⊢ A
\end{array}
$$</p>
<p>can be seen as an introduction rule for a pair of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">s, one on each side
of the turnstile. The cut rule is the corresponding rule for
eliminating <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> from both sides. </p>
<p>Sequent calculus proofs are
much easier to construct than
Hilbert-style proofs.
Suppose one wants to prove <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24B%24">.
In a Hilbert system the only deduction rule is modus ponens, which requires that we first
prove <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%5cto%20B%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> for some <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">. But what <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> should we choose? It could be
anything, and we have no idea where to start or how big it could be.
(If you enjoy suffering, try to prove the simple theorem <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%5cto%20A%24"> in
the Hilbert system I described at the beginning of the
article.) (<a href="https://pic.blog.plover.com/math/logic/cut-rule/hilbert-proof.txt">Solution</a>)</p>
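<p>(In case you would rather peek than suffer: one standard five-step derivation, which may differ in detail from the linked solution, instantiates the two axiom schemas above with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24B%24"> replaced by <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%5cto%20A%24"> and by <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">:</p>
<p>$$
\begin{array}{lll}
1. & A\to((A\to A)\to A) & \text{axiom 1}\\
2. & (A\to((A\to A)\to A))\to((A\to(A\to A))\to(A\to A)) & \text{axiom 2}\\
3. & (A\to(A\to A))\to(A\to A) & \text{modus ponens, 1, 2}\\
4. & A\to(A\to A) & \text{axiom 1}\\
5. & A\to A & \text{modus ponens, 4, 3}
\end{array}
$$</p>
<p>Note how line 2 had to be conjured out of thin air.)</p>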
<p>In sequent calculus, there is only one way to prove each kind of
thing, and the premises in each rule are simply related to the
consequent we want. Constructing the proof is mostly a matter of
pushing the symbols around by following the rules to their
conclusions. (Or, if this is impossible, one can conclude that there
is no proof, and why.<a href="#fn3">[3]</a>) Construction of
proofs can now be done entirely mechanically!</p>
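<p>To make “entirely mechanically” concrete, here is a minimal backward proof search for the ¬/∧ fragment (a Python sketch of my own, not anything from the book; it uses the invertible, G3-style variants of the rules, in which decomposing any compound formula preserves provability, so blind decomposition is safe):</p>

```python
def provable(gamma, delta):
    """Backward proof search for sequents Γ ⊢ Δ in the ¬/∧ fragment.

    Formulas: an atom is a string; ('not', A) and ('and', A, B) are
    compound.  gamma and delta are tuples of formulas.
    """
    # Axiom A ⊢ A: some formula appears on both sides.
    if set(gamma) & set(delta):
        return True
    # Decompose the first compound formula on the left...
    for i, f in enumerate(gamma):
        if isinstance(f, tuple):
            rest = gamma[:i] + gamma[i + 1:]
            if f[0] == 'not':   # Γ, ¬A ⊢ Δ  from  Γ ⊢ A, Δ
                return provable(rest, delta + (f[1],))
            else:               # Γ, A∧B ⊢ Δ  from  Γ, A, B ⊢ Δ
                return provable(rest + (f[1], f[2]), delta)
    # ...then on the right.
    for i, f in enumerate(delta):
        if isinstance(f, tuple):
            rest = delta[:i] + delta[i + 1:]
            if f[0] == 'not':   # Γ ⊢ ¬A, Δ  from  Γ, A ⊢ Δ
                return provable(gamma + (f[1],), rest)
            else:               # Γ ⊢ A∧B, Δ  from  Γ ⊢ A, Δ  and  Γ ⊢ B, Δ
                return (provable(gamma, rest + (f[1],))
                        and provable(gamma, rest + (f[2],)))
    return False  # only atoms left, none shared: unprovable

print(provable((), (('not', ('and', 'A', ('not', 'A'))),)))  # True: ⊢ ¬(A ∧ ¬A)
print(provable(('A',), ('B',)))                              # False
```

<p>No guessing anywhere: every rule's premises are built from subformulas of its conclusion, so the search always terminates.</p>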
<p>Except! The cut rule <em>does</em> require one to guess a formula: if one
wants to prove <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%2c%20%ce%9b%20%e2%8a%a2%20%ce%94%2c%20%ce%a0%24">, one must guess what
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24"> should appear in the premises <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%93%20%e2%8a%a2%20A%2c%20%ce%94%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%ce%9b%2c%20A%20%e2%8a%a2%20%ce%a0%24">. And there is no constraint at all on <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24A%24">; it could be
anything, and we have no idea where to start or how big it could be.</p>
<p>The good news is that
<a href="https://en.wikipedia.org/wiki/Gerhard_Gentzen">Gentzen</a>, the inventor
of sequent calculus, showed that one can dispense with the cut rule:
it is unnecessary:</p>
<blockquote>
<p>In Hilbert-style systems, based on common sense, the only rule is
(more or less) the Modus Ponens : one reasons by linking together
lemmas and consequences. We just say that one can get rid of that :
it is like crossing the English Channel with fists and feet bound !</p>
<p>(<em>The Blind Spot</em>, p. 61)</p>
</blockquote>
<p>Gentzen's demonstration of this shows how one can take any proof that
involves the cut rule, and algorithmically eliminate the cut rule from
it to obtain a proof of the same result that does not use cut.
Gentzen called this the “Hauptsatz” (“principal theorem”) and rightly
so, because it reduces construction of logical proofs to an algorithm
and is therefore the ultimate basis for algorithmic proof theory.</p>
<p>The bad news is that the cut-elimination process can
super-exponentially increase the size of the proof, so it does not
lead to a <em>practical</em> algorithm for deciding provability. Girard
analyzed why, and what he discovered amazed me. The only problem is
in the contraction rules, which had seemed so trivial and
innocuous—uninteresting, even—that I had never given them any thought:</p>
<p>$$
\begin{array}{cc}
\begin{array}{c}
Γ, A, A ⊢ Δ \\ \hline
Γ, A ⊢ Δ
\end{array}
& \qquad
\begin{array}{c}
Γ ⊢ A, A, Δ \\ \hline
Γ ⊢ A, Δ
\end{array}
\end{array}
$$</p>
<p>And suddenly Girard's invention of <a href="https://en.wikipedia.org/wiki/Linear_logic">linear
logic</a> made sense to me.
In linear logic, contraction is forbidden; one must use each formula
in one and only one deduction.</p>
<p>Previously it had seemed to me that this was a pointless restriction.
Now I realized that it was no more of a useless hair shirt than the
intuitionistic rejection of proof by contradiction:
<a href="https://blog.plover.com/math/IL-contradiction.html">not a stubborn refusal to use an obvious tool of
reasoning</a>, but a
restriction of proofs to produce <em>better</em> reasoning. With the
rejection of contraction, cut-elimination no longer explodes
proof size, and automated theorem proving becomes practical:</p>
<blockquote>
<p>This is why its study is, implicitly, at the very heart of these
lectures.</p>
<p>(<em>The Blind Spot</em>, p. 63)</p>
</blockquote>
<p>The book is going to get into linear logic in the next chapter.
I have read descriptions of linear logic before, but never understood
what it was up to. (It has two logical “and” operators and two logical
“or” operators; why?) But I am sure Girard will explain it marvelously.</p>
<hr />
<ol>
<li><a name="fn1"></a> In place of a profound excerpt, I will present
the following, which isn't especially profound but struck a chord for
me: “By the way, nothing is more arbitrary than a
modal logic « I am done with this logic, may I have another one ? »
seems to be the <i>motto</i> of these guys.” (p. 24)
<li><a name="fn2"></a> Compare <a
href="https://blog.plover.com/math/24-puzzle-2.html">my alternative representation of
arithmetic expressions</a> that collapses addition and subtraction,
and multiplication and division.
<li><a name="fn3"></a> A typically Girardian remark is that analytic tableaux are “the poor man's sequents”.
</ol>
<hr />
<p><a href="https://news.ycombinator.com/item?id=14249733">A brief but interesting discussion of <em>The Blind Spot</em> on Hacker
News</a>.</p>
<p><a href="https://www.maa.org/press/maa-reviews/the-blind-spot-lectures-on-logic">Review by Felipe
Zaldivar</a>.</p>
Counting increasing sequences with Burnside's lemma
https://blog.plover.com/2017/10/15#increasing-sequences-2
<p>[ I started this article in March and then forgot about it. Ooops! ]</p>
<p>Back in February I posted <a href="https://blog.plover.com/math/increasing-sequences.html">an article about how there are exactly 715
nondecreasing sequences of 4
digits</a>.
I said that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24S%2810%2c%204%29%24"> was the set of such sequences and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%2810%2c%0a4%29%24"> was the number of such sequences, and in general $$C(d,n) =
\binom{n+d-1}{d-1} = \binom{n+d-1}{n}$$ so in particular $$C(10,4) =
\binom{13}{4} = 715.$$</p>
<p>I described more than one method of seeing this, but I didn't mention
the method I had found first, which was to use the
<a href="https://en.wikipedia.org/wiki/Burnside%27s_lemma">Cauchy-Frobenius-Redfield-Pólya-Burnside counting
lemma</a>.
<a href="https://blog.plover.com/math/polya-burnside.html">I explained
the lemma in detail</a> some time ago,
with beautiful illustrated examples, so I won't repeat the explanation
here. The Burnside lemma is a kind of big hammer to use here, but I
like big hammers. And the results of this application of the big
hammer are pretty good, and justify it in the end.</p>
<p>To count the number of distinct sequences of 4 digits, where some
sequences are considered “the same”, we first identify a symmetry group
whose orbits are the equivalence classes of sequences. Here the
symmetry group is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24S_4%24">, the group that permutes the elements of the
sequence: two sequences are considered “the same” if they have
exactly the same digits, possibly in a different order, and the
elements of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24S_4%24"> act on the sequences by permuting their
elements into different orders.</p>
<p>Then you tabulate how many of the 10,000 original sequences are left
fixed by each element <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24p%24"> of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24S_4%24">, which depends only on the number of
cycles of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24p%24">. (<a href="https://blog.plover.com/math/fixpoints.html">I have also discussed cycle classes of permutations
before</a>.) If <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24p%24"> contains <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24">
cycles, then <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24p%24"> leaves exactly <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%2410%5en%24"> of the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%2410%5e4%24"> sequences fixed.</p>
<p><table align=center cellspacing=0 cellpadding=6>
<tr bgcolor=pink><td align=center><span style="font-size: 120%;">Cycle class</span>
<td>Number<br>of cycles<td>How many<br>permutations?<td>Sequences<br>left fixed
<tr bgcolor="white"><td><img src="https://pic.blog.plover.com/math/increasing-sequences-2/fp4-1111.jpg"><td>4<td align=right>1
<td align=right>10,000
<tr bgcolor="#eee"><td bgcolor=""><img
src="https://pic.blog.plover.com/math/increasing-sequences-2/fp4-211.jpg"><td>3
<td align=right>6
<td align=right>1,000
<tr bgcolor="white"><td>
<img src="https://pic.blog.plover.com/math/increasing-sequences-2/fp4-22.jpg"><br>
<img src="https://pic.blog.plover.com/math/increasing-sequences-2/fp4-31.jpg">
<td>2
<td align=right>3 + 8 = 11
<td align=right>100
<tr bgcolor="#eee"><td><img
src="https://pic.blog.plover.com/math/increasing-sequences-2/fp4-4.jpg">
<td>1
<td align=right>6
<td align=right>10
<tr style="border: black medium solid;" bgcolor="pink"><td bgcolor="pink">
<td><td align=right><span style="font-size: 130%;">24</span>
<td align=right><span style="font-size: 130%;">17,160</span>
</table></p>
<p>(Skip this paragraph if you already understand the table. The
four rows above are an abbreviation of the full table, which has 24
rows, one for each of the 24 permutations of 4 elements. The “How many
permutations?” column says how many times each row should be
repeated. So for example the second row abbreviates 6 rows, one for
each of the 6 permutations with three cycles,
which each leave 1,000 sequences fixed, for a total of 6,000 in the
second row, and the total for all 24 rows is 17,160. There are two
different types of permutations that have two cycles, with 3 and 8
permutations respectively, and I have collapsed these into a single
row.)</p>
<p>Then the magic happens: We average the number left fixed by each
permutation and get <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%7b17160%7d%7b24%7d%20%3d%20715%24"> which we already know
is the right answer.</p>
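<p>This is easy to check by brute force. Here is a minimal Python sketch (my check, not part of the original argument) that enumerates the 24 permutations directly:</p>

```python
# Burnside check: a permutation of the 4 positions leaves a sequence
# fixed exactly when the sequence is constant on each cycle, so each
# permutation fixes 10**(number of cycles) of the 10**4 sequences.
from itertools import permutations

def cycle_count(perm):
    """Count the cycles of a permutation given in one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

fixed = [10 ** cycle_count(p) for p in permutations(range(4))]
print(sum(fixed), sum(fixed) // len(fixed))   # 17160 715
```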
<p>Now suppose we knew how many permutations there were with each number
of cycles. Let's write
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cdef%5cst%231%232%7b%5cleft%5c%5b%7b%231%5catop%20%232%7d%5cright%5d%7d%5cst%20nk%24"> for the number of
permutations of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24"> things that have exactly <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24k%24"> cycles. For example, from
the table above we see that
$$\st 4 4 = 1,\quad
\st 4 3 = 6,\quad
\st 4 2 = 11,\quad
\st 4 1 = 6.$$</p>
<p>Then applying Burnside's lemma we can conclude
that $$C(d, n) = \frac1{n!}\sum_i \st ni d^i .\tag{$\spadesuit$}$$ So for example the table above
computes <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%2810%2c4%29%20%3d%20%5cfrac1%7b24%7d%5csum_i%20%5cst%204i%2010%5ei%20%3d%20715%24">.</p>
<p>At some point in looking into this I noticed that
$$\def\rp#1#2{#1^{\overline{#2}}}%
\def\fp#1#2{#1^{\underline{#2}}}%
C(d,n) =
\frac1{n!}\rp dn$$ where <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5crp%20dn%24"> is the so-called “<a href="https://en.wikipedia.org/wiki/Rising_power">rising
power</a>” of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24d%24">: $$\rp dn = d\cdot(d+1)(d+2)\cdots(d+n-1).$$
I don't think I had a proof of this; I just noticed that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%28d%2c%201%29%20%3d%0ad%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%28d%2c%202%29%20%3d%20%5cfrac12%28d%5e2%2bd%29%24"> (both obvious), and the Burnside's
lemma analysis of the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%3d4%24"> case had just given me <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%28d%2c%204%29%20%3d%0a%5cfrac1%7b24%7d%28d%5e4%20%2b6d%5e3%20%2b%2011d%5e2%20%2b%206d%29%24">. Even if one doesn't immediately
recognize this latter polynomial it looks like it ought to factor and
then on factoring it one gets <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24d%28d%2b1%29%28d%2b2%29%28d%2b3%29%24">. So it's easy to
conjecture <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%28d%2c%20n%29%20%3d%20%5cfrac1%7bn%21%7d%5crp%20dn%24"> and indeed, this is easy to
prove from <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28%5cspadesuit%29%24">: The <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cst%20n%20k%24"> obey the recurrence
$$\st{n+1}k = n \st nk + \st n{k-1}\tag{$\color{green}{\star}$}$$ (by an easy combinatorial argument<a href="#fn1"><sup>1</sup></a>)
and it's also easy to show that the
coefficients of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5crp%20nk%24"> obey the same recurrence.<a href="#fn2"><sup>2</sup></a></p>
<p>In general <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5crp%20nk%20%3d%20%5cfp%7b%28n%2bk%2d1%29%7dk%24"> so we have
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%28d%2c%20n%29%20%3d%20%5cfrac1%7bn%21%7d%5crp%20dn%20%3d%20%5cfrac1%7bn%21%7d%5cfp%7b%28n%2bd%2d1%29%7dn%20%3d%20%5cbinom%7bn%2bd%2d1%7dn%20%3d%20%5cbinom%7bn%2bd%2d1%7d%7bd%2d1%7d%24"> which ties the knot with the formula from <a href="https://blog.plover.com/math/increasing-sequences.html">the
previous article</a>. In particular, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24C%2810%2c4%29%20%3d%20%5cbinom%7b13%7d9%24">.</p>
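<p>Both the recurrence and the rising-power identity are easy to confirm numerically. A short Python sketch (mine, not from the original argument) builds the Stirling cycle numbers from the recurrence and compares the two formulas for small cases:</p>

```python
# Build the unsigned Stirling numbers of the first kind from the
# recurrence  st(n+1, k) = n*st(n, k) + st(n, k-1),  then check that
# (1/n!) * sum_k st(n, k) * d**k  equals  d(d+1)...(d+n-1) / n!.
from math import comb, factorial

def st(n, k):
    """Number of permutations of n things with exactly k cycles."""
    if k < 0 or k > n:
        return 0
    if n == 0:
        return 1          # st(0, 0) = 1
    return (n - 1) * st(n - 1, k) + st(n - 1, k - 1)

def C(d, n):
    """Burnside count of increasing length-n sequences over d symbols."""
    return sum(st(n, k) * d ** k for k in range(n + 1)) // factorial(n)

def rising(d, n):
    """Rising power d * (d+1) * ... * (d+n-1)."""
    result = 1
    for i in range(n):
        result *= d + i
    return result

print([st(4, k) for k in (1, 2, 3, 4)])          # [6, 11, 6, 1]
print(C(10, 4), rising(10, 4) // factorial(4))   # 715 715
```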
<!--
f529ff60ac6fe02d4c0b3d6cf368d93a
I forget where this went. I think the RHS turns into a double sum
and then we can exchange the order of summation but I don't remember
where we end up.
-->
<p>I have a bunch more to say about this but this article has already
been in the oven long enough, so I'll cut the scroll here.</p>
<hr>
<p><a name="fn1">[1]</a> The combinatorial argument that justifies
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28%5ccolor%7bgreen%7d%7b%5cstar%7d%29%24"> is as follows:
The Stirling number <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cst%20nk%24"> counts the number of permutations of
order <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24"> with exactly <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24k%24"> cycles.
To get a permutation of order <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%2b1%24"> with exactly <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24k%24"> cycles, we
can take one of the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cst%20nk%24"> permutations of order <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24"> with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24k%24">
cycles and insert the new element into one of the existing cycles
after any of the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24"> elements. Or we can take one of the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cst%0an%7bk%2d1%7d%24"> permutations with only <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24k%2d1%24"> cycles and add the new element
in its own cycle.</p>
<p><a name="fn2">[2]</a> We want to show that the coefficients of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5crp%0ank%24"> obey the same recurrence as
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28%5ccolor%7bgreen%7d%7b%5cstar%7d%29%24">. Let's say that the coefficient of the
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%5ei%24"> term in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5crp%20nk%24"> is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c_i%24">.
We have $$\rp n{k+1} = \rp nk\cdot (n+k) = \rp
nk \cdot n + \rp nk \cdot k $$ so the
coefficient of the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%5ei%24"> term on the left is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c_%7bi%2d1%7d%20%2b%20kc_i%24">.</p>
Gompertz' law for wooden utility poles
https://blog.plover.com/2017/09/20#gompertz
<p><a href="https://en.wikipedia.org/wiki/Gompertz%E2%80%93Makeham_law_of_mortality">Gompertz'
law</a>
says that the human death rate increases exponentially with age. That
is, if your chance of dying during this year is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%24">, then your
chance of dying during next year is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24cx%24"> for some constant <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%3e1%24">.
The death rate doubles every 8 years, so the constant <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%24"> is
empirically around <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%5e%7b1%2f8%7d%20%5capprox%201%2e09%24">. This is of course
mathematically incoherent, since it predicts that sufficiently old
people will have a mortality rate greater than 100%. But a number of
things are both true and mathematically incoherent, and this is one of
them. (<a href="https://en.wikipedia.org/wiki/Zipf%27s_law">Zipf's law</a> is another.)</p>
<p>The Gravity and Levity blog has
<a href="https://gravityandlevity.wordpress.com/2009/07/08/your-body-wasnt-built-to-last-a-lesson-from-human-mortality-rates/">a superb article about this</a>
from 2009 that reasons backwards from Gompertz' law to rule out
certain theories of mortality, such as the theory that death is due to
the random whims of a fickle god. (If death were entirely random, and
if you had a 50% chance of making it to age 70, then you would have a
25% chance of living to 140, and a 12.5% chance of living to 210,
which we know is not the case.)</p>
<p>Gravity and Levity says:</p>
<blockquote>
<p>Surprisingly enough, the Gompertz law holds across a large number of
countries, time periods, and even different species.</p>
</blockquote>
<p>To this list I will add wooden utility poles.</p>
<p>A couple of weeks ago Toph asked me why there were so many old rusty
staples embedded in the utility poles near our house, and this is easy
to explain: people staple up their yard sale posters and lost-cat
flyers, and then the posters and flyers go away and leave behind the
staples. (I once went out with a pliers and extracted a few dozen
staples from one pole; it was very satisfying but ultimately
ineffective.) If a new flyer is stapled up each week, that is 52
staples per year, and 1040 in twenty years. If we agree that 20 years
is the absolute minimum plausible lifetime of a pole, we should not be
surprised if typical poles have hundreds or thousands of staples each.</p>
<p>But this morning I got to wondering what <em>is</em> the expected lifetime of
a wooden utility pole? I guessed it was probably in the range of 40
to 70 years. And happily, because of the Wonders of the Internet, I
could look it up right then and there, on the way to the trolley stop,
and spend my commute time reading about it.</p>
<p>It was not hard to find <a href="http://quanta-technology.com/sites/default/files/doc-files/Wood%20Pole%20Management%20Practices%20-%202012.pdf">an authoritative sounding and widely-cited
2012
study</a>
by electric utility consultants <a href="http://quanta-technology.com/">Quanta
Technology</a>.</p>
<p><strong>Summary</strong>: Most poles die because of fungal rot, so pole lifetime
varies widely depending on the local climate. An unmaintained pole
will last 50–60 years in a cold or dry climate and 30–40 years in a
hot wet climate. Well-maintained poles will last around twice as
long.</p>
<p>Anyway, Gompertz' law holds for wooden utility poles also. According
to the study:</p>
<blockquote>
<p>Failure and breakdown rates for wood poles are thought to increase
exponentially with deterioration and advancing time in service.</p>
</blockquote>
<p>The Quanta study presents this chart, taken from the (then
forthcoming) 2012 book <a href="https://www.crcpress.com/Aging-Power-Delivery-Infrastructures-Second-Edition/Willis-Schrieber/p/book/9781138072985"><em>Aging Power Delivery
Infrastructures</em></a>:</p>
<p align="center"><img src="https://pic.blog.plover.com/tech/gompertz/gompertz-poles.png"></p>
<p>The solid line is the pole failure rate for a particular unnamed
utility company in a median climate. The failure rate with increasing
age clearly increases exponentially, as Gompertz' law dictates,
doubling every 12½ years or so: Around 1 in 200 poles fails at age 50,
around 1 in 100 of the remaining poles fails at age 62.5, and around 1
in 50 of the remaining poles fails at age 75.</p>
<p>(The dashed and dotted lines represent poles that are removed from
service for other reasons.)</p>
<p>From Gompertz' law itself and a minimum of data, we can extrapolate
the maximum human lifespan. The death rate for 65-year-old women is
around 1%, and since it doubles every 8 years or so, we find that 50%
of women are dead by age 88, and all but the most outlying outliers
are dead by age 120. And indeed, the human longevity record is
currently attributed to Jeanne Calment, who died in 1997 at the age of
122½.</p>
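<p>That back-of-the-envelope extrapolation is a five-line loop. A rough Python sketch (mine, using the 1%-at-65 starting point and the 8-year doubling time from above):</p>

```python
# Gompertz extrapolation: start with a 1% annual death rate at age 65,
# double the rate every 8 years, and multiply out the survival
# probability one year at a time.
rate, survivors, age = 0.01, 1.0, 65
median_age = None
while age < 120:
    survivors *= 1 - min(rate, 1.0)   # cap the rate at certain death
    rate *= 2 ** (1 / 8)
    age += 1
    if median_age is None and survivors <= 0.5:
        median_age = age
print(median_age, survivors)   # half are gone by about 88; almost nobody reaches 120
```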
<p>Similarly we can extrapolate the maximum service time for a wooden
utility pole. Half of them make it to 90 years, but if you have a
large installed base of 110-year-old poles you will be replacing about
one-seventh of them every year and it might make more sense to rip
them all out at once and start over. At a rate of one yard sale per
week, a 110-year-old pole will have accumulated 5,720 staples.</p>
<p>The Quanta study does not address deterioration of utility poles due
to the accumulation of rusty staples.</p>
Miscellanea about 24 puzzles
https://blog.plover.com/2017/08/28#24-puzzle-3
<p>This is a collection of leftover miscellanea about twenty-four
puzzles. In case you forgot what that is:</p>
<blockquote>
<p>The puzzle «4 6 7 9 ⇒ 24»
means that one should take the numbers 4, 6, 7, and 9, and combine
them with the usual arithmetic operations of addition, subtraction,
multiplication, and division, to make the number 24. In this case the
unique solution is $$6\times\frac{7 + 9}{4}.$$ When the target number is
24, as it often is, we omit it and just write «4 6 7 9».</p>
<p>Prior articles on this topic:</p>
<ul>
<li><a href="https://blog.plover.com/math/17-puzzle.html">A simple but difficult arithmetic puzzle</a> (July 2016)</li>
<li><a href="https://blog.plover.com/math/24-puzzle.html">Solving twenty-four puzzles</a> (March 2017)</li>
<li><a href="https://blog.plover.com/math/24-puzzle-2.html">Recognizing when two arithmetic expressions are essentially the same</a> (August 2017)</li>
</ul>
</blockquote>
<h2>How many puzzles have solutions?</h2>
<p>For each value of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24T%24">, there are 715 puzzles
«a b c d ⇒ T».
(I discussed this digression in two other articles:
<a href="https://blog.plover.com/math/increasing-sequences.html">[1]</a>
<a href="https://blog.plover.com/math/increasing-sequences-2.html">[2]</a>.)
When the target <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24T%20%3d%2024%24">, 466 of the 715 puzzles have solutions. Is this
typical? Many solutions of
«a b c d»
puzzles end with a multiplication of 6 and 4, or of 8 and 3, or
sometimes of 12 and 2—so many that one quickly learns to look for
these types of solutions right away. When <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24T%3d23%24">, there won't be
any solutions of this type, and we might expect that relatively few
puzzles with prime targets have solutions.</p>
<p>This turns out to be the case:</p>
<p><img src="https://pic.blog.plover.com/math/24-puzzle-3/scatterplot.svg" alt="ALT attribute suggestions welcome!"/></p>
<p>The <em>x</em>-axis is the target number <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24T%24">, with 0 at the left, 300 at right,
and vertical guide lines every 25. The <em>y</em> axis is the number of
solvable puzzles out of the maximum possible of 715, with 0 at the
bottom, 715 at the top, and horizontal guide lines every 100.</p>
<p>Dots representing prime number targets are colored black. Dots for
numbers with two prime factors (4, 6, 9, 10, 14, 15, 21, 22, etc.) are red;
dots with three, four, five, six, and seven prime factors are orange,
yellow, green, blue, and purple respectively.</p>
<p>Two countervailing trends are obvious: Puzzles with smaller targets
have more solutions, and puzzles with highly-composite targets have
more solutions. No target number larger than 24 has as many as 466
solvable puzzles.</p>
<p>These are only trends, not hard rules. For example, there are 156
solvable puzzles with the target 126 (4 prime factors) but only 93
with target 128 (7 prime factors). Why? (I don't know. Maybe because
there is some correlation with the number of <em>different</em> prime
factors? But 72, 144, and 216 have many solutions, and only two
different prime factors.)</p>
<p>The smallest target you can't hit is 417. The following
numbers 418 and 419
are also impossible. But there are 8 sets
of four digits that can be used to make 416 and 23 sets
that can be used to make 420. The largest target that
can be hit is obviously <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%246561%20%3d%209%e2%81%b4%24">; the largest target with two
solutions is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242916%20%3d%204%c2%b79%c2%b79%c2%b79%20%3d%206%c2%b76%c2%b79%c2%b79%24">.</p>
<p><a href="https://pic.blog.plover.com/math/24-puzzle-3/number-of-solvable-puzzles.txt">(The raw data are available
here)</a>.</p>
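<p>These counts are not hard to reproduce. The following brute-force Python sketch (mine; the original search program was separate) computes every value reachable from each of the 715 digit sequences using exact rational arithmetic, then tallies the solvable puzzles for a given target:</p>

```python
# For each increasing sequence of four digits, compute the set of values
# reachable with +, -, x, / (repeatedly combine any two numbers), then
# count how many of the 715 sequences can reach each target.
from fractions import Fraction
from functools import lru_cache
from itertools import combinations_with_replacement

@lru_cache(maxsize=None)
def reachable(nums):
    """All values obtainable from the sorted tuple nums of Fractions."""
    if len(nums) == 1:
        return frozenset(nums)
    out = set()
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            a, b = nums[i], nums[j]
            rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
            results = {a + b, a - b, b - a, a * b}
            if b:
                results.add(a / b)   # division by zero is skipped
            if a:
                results.add(b / a)
            for r in results:
                out |= reachable(tuple(sorted(rest + [r])))
    return frozenset(out)

solvable = {24: 0, 0: 0}
for digits in combinations_with_replacement(range(10), 4):
    values = reachable(tuple(Fraction(d) for d in digits))
    for target in solvable:
        solvable[target] += target in values
print(solvable)   # {24: 466, 0: 705}
```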
<p>There is a lot more to discover here. For example, from looking at
the chart, it seems that the locally-best target numbers often have
the form <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%5en3%5em%24">. What would we see if we colored the dots
according to their largest prime factor instead of according to their
number of prime factors? (I tried doing this, and it didn't look like
much, but maybe it could have been done better.)</p>
<h3>Making zero</h3>
<p>As the chart shows, 705 of the 715 puzzles of the type «a b c d ⇒ 0»
are solvable. This suggests an interesting inverse puzzle that Toph
and I enjoyed: find four digits <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2cb%2cc%2c%20d%24"> that <em>cannot</em> be used to
make zero. (<a href="https://pic.blog.plover.com/math/24-puzzle-3/SOLUTIONS-zeroes.txt">The answers</a>).</p>
<h2>Identifying interesting or difficult problems</h2>
<p><b>(Caution: this section contains spoilers for many of the most
interesting puzzles.)</b></p>
<p>I spent quite a while trying to get the computer to rank puzzles by
difficulty, with indifferent success.</p>
<h3>Fractions</h3>
<p>Seven puzzles require the use of fractions. One of these is the
notorious «3 3 8 8» that I mentioned before. This is probably the
single hardest of this type. The other six are:</p>
<pre><code> «1 3 4 6»
«1 4 5 6»
«1 5 5 5»
«1 6 6 8»
«3 3 7 7»
«4 4 7 7»
</code></pre>
<p>(<a href="https://pic.blog.plover.com/math/24-puzzle-3/solutions-fractions.png">Solutions to these (formatted image)</a>;
<a href="https://pic.blog.plover.com/math/24-puzzle-3/solutions-fractions.txt">solutions to these (plain text)</a>)</p>
<p>«1 5 5 5» is somewhat easier than the others, but they all follow
pretty much the same pattern. The last two are pleasantly
symmetrical.</p>
<!-- Also worth mentioning in this connection is the interesting
«1 6 9 9 ⇒ 18».
Unfortunately it has two solutions, and the other one is easy.
-->
<h3>Negative numbers</h3>
<p>No puzzles require the use of negative intermediate values. This
surprised me at first, but it is not hard to see why.
Subexpressions with negative intermediate values can always
be rewritten to have positive intermediate values instead.</p>
<p>For instance,
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%20%c3%97%20%289%20%2b%20%283%20%2d%204%29%29%24"> can be rewritten as
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%20%c3%97%20%289%20%2d%20%284%20%2d%203%29%29%24">
and
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%285%20%2d%208%29%c3%97%281%20%2d9%29%24">
can be rewritten as
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%288%20%2d%205%29%c3%97%289%20%2d1%29%24">.</p>
<h3>A digression about tree shapes</h3>
<p>In one of the earlier articles I asserted that there are only two
possible shapes for the expression trees of a puzzle solution:</p>
<p align="center"><table><tr>
<td align="center"><img src="https://pic.blog.plover.com/math/24-puzzle-3/tree1.png" alt="(((a # b) # c) # d)">
<td align="center"><img src="https://pic.blog.plover.com/math/24-puzzle-3/tree2.png" alt="((a # b) # (c # d))">
<tr><td align="center">Form <i>A</i> <td align="center">Form <i>B</i>
</table></p>
<p>(Pink square nodes contain operators and green round nodes contain
numbers.)</p>
<p>Lindsey Kuper pointed out that there are five possible shapes, not
two. Of course, I was aware of this (it is a Catalan number), so what
did I mean when I said there were only two? It's because I had the
idea that any tree that wasn't already in one of those two forms could
be put into form <em>A</em> by using transformations like the ones in the
previous section.</p>
<p>For example, the expression <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%284%c3%97%28%281%2b2%29%c3%b73%29%29%24"> isn't in either form,
but we can commute the × to get the equivalent <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28%281%2b2%29%c3%b73%29%c3%974%24">, which
has form <em>A</em>. Sometimes one uses the associative laws, for example to
turn <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%c3%b7%20%28b%20%c3%97%20c%29%24"> into <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%20%c3%b7%20b%29%20%c3%b7%20c%24">.</p>
<p>But I was mistaken; not every expression can be put into either of
these forms. The expression <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%288%c3%97%289%2d%282%c2%b73%29%29%29%24"> is an example.</p>
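<p>Five here is the third Catalan number, and the count is easy to tabulate: a binary expression tree with n leaves is either a single leaf or a root whose two subtrees have k and n − k leaves. A quick Python sketch (mine):</p>

```python
# Count the shapes of binary expression trees with n leaves: a tree is
# a leaf, or a root whose subtrees have k and n-k leaves respectively.
def shapes(n):
    if n == 1:
        return 1
    return sum(shapes(k) * shapes(n - k) for k in range(1, n))

print([shapes(n) for n in range(1, 6)])   # [1, 1, 2, 5, 14] -- the Catalan numbers
```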
<h3>Unusual intermediate values</h3>
<p>The most interesting thing I tried was to look for puzzles whose
solutions require unusual intermediate numbers.</p>
<p>For example, the puzzle «3 4 4 4» looks easy (the other puzzles
with just 3s and 4s are all pretty easy) but it is rather tricky
because its only solution goes through the unusual intermediate number
28: <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%20%283%20%2b%204%29%20%2d%204%24">.</p>
<p>I ranked puzzles as follows: each possible intermediate number appears in a
certain number of puzzle solutions; this is the score for that
intermediate number.
(Lower scores are better, because they represent rarer intermediate numbers.)
The score
for a single expression is the score of its rarest intermediate
value. So for example <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%20%283%20%2b%204%29%20%2d%204%24"> has the intermediate
values 7 and 28. 7 is extremely common, and 28 is quite unusual,
appearing in only 151 solution expressions, so <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%20%283%20%2b%204%29%20%2d%204%24">
receives a fairly low score of 151 because of the intermediate 28.</p>
<p>Then each puzzle received a difficulty
score which was the score of its <em>easiest</em> solution expression. For
example,
«2 2 3 8»
has two solutions, one (<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%288%2b3%29%c3%972%2b2%24">) involving the quite unusual
intermediate value 22, which has a very good score of only 79. But
this puzzle doesn't
count as difficult because it also admits the obvious solution
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%248%c2%b73%c2%b7%5cfrac22%24"> and this is the solution that gives it its extremely
bad score of 1768.</p>
<p>Under this ranking, the best-scoring twenty-four puzzles, and their
scores, were:</p>
<pre><code> «1 2 7 7» 3
* «4 4 7 7» 12
* «1 4 5 6» 13
* «3 3 7 7» 14
* «1 5 5 5» 15
«5 6 6 9» 23
«2 5 7 9» 24
«2 2 5 8» 25
«2 5 8 8» 45
«5 8 8 8» 45
«2 2 2 9» 47
* «1 3 4 6» 59
* «1 6 6 8» 59
«2 4 4 9» 151
«3 4 4 4» 151
* «3 3 8 8» 152
«6 8 8 9» 152
«2 2 2 7» 155
«2 2 5 7» 155
«2 3 7 7» 155
«2 4 7 7» 155
«2 5 5 7» 155
«2 5 7 7» 156
«4 4 8 9» 162
</code></pre>
<p>(Something is not quite right here. I think «2 5 7 7» and «2 5 5 7»
should have the same score, and I don't know why they don't. But I
don't care enough to do it over.)</p>
<p>Most of these are at least a little bit interesting. The seven
puzzles that require the use of fractions appear; I have marked them
with stars. The top item is «1 2 7 7», whose only solution goes
through the extremely rare intermediate number 49. The next items
require fractions, and the one after that is «5 6 6 9», which I found
difficult. So I think there's some
value in this procedure.</p>
<p>But is there enough value? I'm not sure. The last item on the list,
«4 4 8 9», goes through the unusual number 36. Nevertheless I don't
think it is a hard puzzle.</p>
<p>(I can also imagine that someone might see the answer to «5 6 6 9» right
off, but find «4 4 8 9» difficult. The whole exercise is subjective.)</p>
<h3>Solutions with unusual tree shapes</h3>
<p>I thought about looking for solutions that involved unusual sequences
of operations. Division is much less common than the other three
operations.</p>
<p>To get it right, one needs to normalize the form of expressions, so
that the shapes <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%20%2b%20b%29%20%2b%20%28c%20%2b%20d%29%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20%28b%20%2b%20%28c%20%2b%20d%29%29%24"> aren't
counted separately. The Ezpr library can help here. But I didn't go
that far because the preliminary results weren't encouraging.</p>
<p>There are very few expressions totaling 24 that have the form
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%c3%b7b%29%c3%b7%28c%c3%b7d%29%24">. But if someone gives you a puzzle with a solution in
that form, then <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%c3%97d%29%c3%b7%28b%c3%97c%29%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%c3%97d%29%20%c3%b7%20%28b%c3%b7c%29%24"> are also
solutions, and one or another is usually very easy to see. For
example, the puzzle «1 3 8 9» has the solution <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%288%c3%b71%29%c3%b7%283%c3%b79%29%24">,
which has an unusual form. But this is an <em>easy</em> puzzle; someone with
even a little experience will find the solution <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%248%20%c3%97%20%5cfrac93%20%c3%97%201%24">
immediately.</p>
<p>Similarly there are relatively few solutions of the form
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%c3%b7%28%28b%2dc%29%c3%b7d%29%24">, but they can all be transformed into
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%c3%97d%c3%b7%28b%2dc%29%24"> which is not usually hard to find. Consider
$$\frac 8{\left(\frac{6 - 4}6\right)}.$$ This is pretty
weird-looking, but when you're trying to solve it
one of the first things you might notice is the 8, and then you would
try to turn the rest of the digits into a 3 by solving
«4 6 6 ⇒ 3»,
at which point it wouldn't take long to think of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac6%7b6%2d4%7d%24">. Or, coming at
it from the other direction, you might see the sixes and start looking
for a way to make «4 6 8 ⇒ 4», and it wouldn't take long to think of
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac8%7b6%2d4%7d%24">. </p>
<h3>Ezpr shape</h3>
<p>Ezprs (<a href="https://blog.plover.com/math/24-puzzle-2.html">see previous article</a>)
correspond more closely than abstract syntax trees do with our
intuitive notion of how expressions ought to work, so looking at the
shape of the Ezpr version of a solution might give better results than
looking at the shape of the expression tree. For example, one might
look at the number of nodes in the Ezpr or the depth of the Ezpr.</p>
<h3>Ad-hockery</h3>
<p>When trying to solve one of these puzzles, there are a few things I
always try first. After adding up the four numbers, I then look for
ways to make <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%248%c2%b73%2c%206%c2%b74%2c%24"> or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%2412%c2%b72%24">; if that doesn't work I start
branching out looking for something of the type <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24ab%5cpm%20c%24">.</p>
<p>Suppose we take a list of all solvable puzzles, and remove all the very
easy ones: the puzzles where one of the inputs is zero, or where one of
the inputs is 1 and there is a solution of the form <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24E%c3%971%24">.</p>
<p>Then take the remainder and mark them as “easy” if they have solutions of
the form <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bb%2bc%2bd%2c%208%c2%b73%2c%206%c2%b74%2c%24"> or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%2412%c2%b72%24">. Also eliminate puzzles
with solutions of the type <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24E%20%2b%20%28c%20%2d%20c%29%24"> or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24E%c3%97%5cleft%28%5cfrac%0acc%5cright%29%24">.</p>
<p>How many are eliminated in this way? Perhaps most? The remaining
puzzles ought to have at least intermediate difficulty, and perhaps
examining just those will suggest a way to separate them further into
two or three ranks of difficulty.</p>
<h3>I give up</h3>
<p>But by this time I have solved so many twenty-four
puzzles that I am no longer sure which ones are hard and which ones
are easy. I suspect that I have seen and tried to solve most of the
466 solvable puzzles; certainly more than half. So my brain is no
longer a reliable gauge of which puzzles are hard and which are easy.</p>
<p>Perhaps looking at puzzles with five inputs would work better for me
now. These tend to be easy, because you have more to work with. But
there are 2002 puzzles and probably some of them are hard.</p>
<h2>Close, but no cigar</h2>
<p>What's the closest you can get to 24 without hitting it exactly? The
best I could do was <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%245%c2%b75%20%2d%20%5cfrac89%24">. Then I asked the computer,
which confirmed that this is optimal, although I felt foolish when I
saw the simpler solutions that are equally good: <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%246%c2%b74%20%5cpm%5cfrac%2019%24">.</p>
<p>The paired solutions
$$5 × \left(4 + \frac79\right) < 24 < 7 × \left(4 - \frac59\right)$$
are very handsome.</p>
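<p>The optimality claim can be rechecked by brute force: enumerate every value reachable from every four-digit sequence with exact rational arithmetic, and find the nearest miss. A Python sketch (mine, independent of the search program mentioned above):</p>

```python
# Nearest miss to 24: over all four-digit sequences, find the reachable
# value closest to 24 without equaling it (exact rational arithmetic).
from fractions import Fraction
from functools import lru_cache
from itertools import combinations_with_replacement

@lru_cache(maxsize=None)
def reachable(nums):
    """All values obtainable from the sorted tuple nums of Fractions."""
    if len(nums) == 1:
        return frozenset(nums)
    out = set()
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            a, b = nums[i], nums[j]
            rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
            results = {a + b, a - b, b - a, a * b}
            if b:
                results.add(a / b)   # division by zero is skipped
            if a:
                results.add(b / a)
            for r in results:
                out |= reachable(tuple(sorted(rest + [r])))
    return frozenset(out)

best = min(abs(v - 24)
           for digits in combinations_with_replacement(range(10), 4)
           for v in reachable(tuple(Fraction(d) for d in digits))
           if v != 24)
print(best)   # 1/9
```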
<h2><a name="phoneapp">Phone app</a></h2>
<p>The search program that tells us when a puzzle has solutions is only
useful if we can take it with us in the car and ask it about license
plates. A phone app is wanted. <a href="https://studio.code.org/projects/applab/32u24dDEOBMCO1vGVVw1YA">I built one with <em>Code Studio</em></a>.</p>
<p><a href="https://studio.code.org/projects/public">Code Studio is great</a>. It has a nice web interface, and beginners can
write programs by dragging blocks around. It looks very much like <a href="https://scratch.mit.edu/">MIT's
<em>Scratch</em> project</a>, which is much
better-known. But Code Studio is a much better tool than Scratch. In
Scratch, once you reach the limits of what it can do, you are stuck,
and there is no escape. In Code Studio when you drag around those
blocks you are actually
writing JavaScript underneath, and you can click a button and see and
edit the
underlying JavaScript code you have written.</p>
<p>Suppose you need to convert <code>A</code> to 1 and <code>B</code> to 2 and so on. Scratch
does not provide an <code>ord</code> function, so with Scratch you are pretty
much out of luck; your only choice is to write a 26-way if-else tree,
which means dragging around something like 104 stupid blocks. In Code
Studio, you can drop down to the JavaScript level and type in <code>ord</code>
to use the standard <code>ord</code> function. Then if you go back to blocks,
the <code>ord</code> will look like any other built-in function block.</p>
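<p>The <code>ord</code> trick is a one-liner. Here is a sketch of the same idea in Python rather than JavaScript; Python's built-in <code>ord</code> returns a character code, so we shift it to turn A into 1:</p>

```python
def letter_number(ch):
    """Convert 'A' (or 'a') to 1, 'B' to 2, ..., 'Z' to 26."""
    return ord(ch.upper()) - ord('A') + 1
```

<p>With this, the license plate letters <code>FBV</code> become 6, 2, 22.</p>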
<p>In Scratch, if you want to use a data structure other than an array,
you are out of luck, because that is all there is. In Code Studio,
you can drop down to the JavaScript level and use or build any data
structure available in JavaScript.</p>
<p>In Scratch, if you want to
initialize the program with bulk data, say a precomputed table of the
solutions of the 466
twenty-four puzzles, you are out of luck. In Code Studio, you can
upload a CSV file with up to 1,000 records, which then becomes
available to your program as a data structure.</p>
<p>In summary, you spend a lot of your time in Scratch working around the
limitations of Scratch, and what you learn doing that is of very
limited applicability. Code Studio is real programming and if it
doesn't do exactly what you want out of the box, you can get what you
want by learning a little more JavaScript, which is likely to be
useful in other contexts for a long time to come.</p>
<p>Once you finish your Code Studio app, you can click a button to send
the URL to someone via SMS. They can follow the link in their phone's
web browser and then use the app.</p>
<p>Code Studio is what Scratch should have been. <a href="https://studio.code.org/projects/public">Check it out</a>.</p>
<h2>Thanks</h2>
<p>Thanks to everyone who contributed to this article, including:</p>
<ul>
<li>my daughters Toph and Katara</li>
<li>Shreevatsa R.</li>
<li>Dr. Lindsey Kuper</li>
<li>Darius Bacon</li>
<li>everyone else who emailed me</li>
</ul>
Recognizing when two arithmetic expressions are essentially the same
https://blog.plover.com/2017/08/21#24-puzzle-2
<p>[ Warning: The math formatting in the RSS / Atom feed for this article is badly mutilated. I suggest you <a href="https://blog.plover.com/math/24-puzzle-2.html">read the article on my blog</a>. ]</p>
<blockquote>
<p>In this article, I discuss “twenty-four puzzles”. The puzzle «4 6 7 9 ⇒ 24»
means that one should take the numbers 4, 6, 7, and 9, and combine
them with the usual arithmetic operations of addition, subtraction,
multiplication, and division, to make the number 24. In this case the
unique solution is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%246%c2%b7%5cfrac%7b7%20%2b%209%7d%7b4%7d%24">.</p>
<p>When the target number after the <code>⇒</code> is 24, as it often is, we omit
it and just write «4 6 7 9». Every example in this article has
target number 24.</p>
<p>This is a continuation of my previous articles on this
topic:</p>
<ul>
<li><a href="https://blog.plover.com/math/17-puzzle.html">A simple but difficult arithmetic puzzle</a> (July 2016)</li>
<li><a href="https://blog.plover.com/math/24-puzzle.html">Solving twenty-four puzzles</a> (March 2017)</li>
</ul>
</blockquote>
<p>My first cut at writing a solver for twenty-four puzzles was a straightforward
search program. It had a couple of hacks in it to cut down the search
space by recognizing that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bE%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24E%2ba%24"> are the same, but other
than that there was nothing special about it and
<a href="https://blog.plover.com/math/24-puzzle.html">I've discussed it before</a>.</p>
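<p>A straightforward searcher of this kind is quite short. Here is a minimal sketch in Python (not the Perl program from the earlier article): it picks two numbers from the pool, combines them in every possible way, and recurses on the smaller pool until one number is left.</p>

```python
from fractions import Fraction
from itertools import combinations

def solvable(nums, target=24):
    """Decide whether the numbers can be combined with + - × ÷ to make target.
    Exact rational arithmetic avoids floating-point near-misses."""
    def search(pool):
        if len(pool) == 1:
            return pool[0] == target
        for i, j in combinations(range(len(pool)), 2):
            a, b = pool[i], pool[j]
            rest = [pool[k] for k in range(len(pool)) if k not in (i, j)]
            candidates = {a + b, a - b, b - a, a * b}
            if b: candidates.add(a / b)   # skip division by zero
            if a: candidates.add(b / a)
            if any(search(rest + [c]) for c in candidates):
                return True
        return False
    return search([Fraction(n) for n in nums])
```

<p>For example, <code>solvable([2, 2, 5, 9])</code> is true and <code>solvable([1, 1, 1, 1])</code> is false.</p>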
<p>It would quickly and accurately report whether any particular twenty-four
puzzle was solvable, but as it turned out that wasn't quite good
enough. The original motivation for the program was this: Toph and I
play this game in the car. Pennsylvania license plates have three
letters and four digits, and if we see a license plate <code>FBV 2259</code> we
try to solve «2 2 5 9». Sometimes we can't find a solution and
then we wonder: is it because there isn't one, or is it because we
just didn't get it yet? So the searcher turned into a phone app,
which would tell us whether there was a solution, so we'd know whether
to give up or keep searching.</p>
<p>But this wasn't quite good enough either, because after we would find
that first solution, say <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%c2%b7%285%20%2b%209%20%2d%202%29%24">, we would wonder: are there
any more? And here the program was useless: it would
cheerfully report that there were three, so we would rack our brains
to find another, fail, ask the program to tell us the answer, and
discover to our disgust that the three solutions it had in mind were:</p>
<p>$$
2 \cdot (5 + (9 - 2)) \\
2 \cdot (9 + (5 - 2)) \\
2 \cdot ((5 + 9) - 2)
$$</p>
<p>The computer thinks these are different, because it uses different
data structures to represent them. It represents them with an abstract
syntax tree, which means that each expression is either a single
constant, or is a structure comprising an operator and its two operand
expressions—always exactly two. The computer understands the three
expressions above as having these structures:</p>
<table align=center style="margin: auto;"><tr>
<td><img style="max-width: 100%;" src="https://pic.blog.plover.com/math/24-puzzle-2/t2.svg">
<td><img style="max-width: 100%;" src="https://pic.blog.plover.com/math/24-puzzle-2/t3.svg">
<td><img style="max-width: 100%;" src="https://pic.blog.plover.com/math/24-puzzle-2/t1.svg">
</tr></table>
<p>It's not hard to imagine that the computer could be taught to
understand that the first two trees are equivalent. Getting it to
recognize that the third one is also equivalent seems somewhat more
difficult.</p>
<h2>Commutativity and associativity</h2>
<p>I would like the computer to understand that these three expressions
should be considered “the same”. But what does “the same” mean? This
problem is of a kind I particularly like: we want the computer to do
<em>something</em>, but we're not exactly sure what that something is. Some
questions are easy to ask but hard to answer, but this is the
opposite: the real problem is to decide what question we want to ask.
Fun!</p>
<p>Certainly some of the question should involve commutativity and
associativity of addition and multiplication. If the only difference
between two expressions is that one has <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20b%24"> where the other has
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24b%20%2b%20a%24">, they should be considered the same; similarly <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20%28b%20%2b%0ac%29%24"> is the same expression as <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%20%2b%20b%29%20%2b%20c%24"> and as <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28b%20%2b%20a%29%20%2b%20c%24">
and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24b%20%2b%20%28a%20%2b%20c%29%24"> and so forth.</p>
<p>The «2 2 5 9» example above shows that commutativity and
associativity are not limited to addition and multiplication. There
are commutative and associative properties of subtraction also! For example,
$$a+(b-c) = (a+b)-c$$ and $$(a+b)-c = (a-c)+b.$$
There ought to be names for these laws but as far as I know there aren't. (Sure, it's
just commutativity and associativity of addition in disguise, but
nobody explaining these laws to school kids <em>ever</em> seems to point out
that subtraction can enter into it. They just observe that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%2db%29%2dc%0a%e2%89%a0%20a%2d%28b%2dc%29%24">, say “subtraction isn't associative”, and leave it at that.)</p>
<p>Closely related to these identities are operator inversion identities
like <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2d%28b%2bc%29%20%3d%20%28a%2db%29%2dc%24">, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2d%28b%2dc%29%20%3d%20%28a%2db%29%2bc%24">, and their
multiplicative analogues. I don't know names for these algebraic laws
either.</p>
<p>One way to deal with all of this would be to build a complicated
comparison function for abstract syntax trees that tried to transform
one tree into another by applying these identities. A better approach
is to recognize that the data structure is over-specified. If we want
the computer to understand that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%20%2b%20b%29%20%2b%20c%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20%28b%20%2b%20c%29%24">
are the same expression, we are swimming upstream by using a data
structure that was specifically designed to capture the difference
between these expressions.</p>
<p>Instead, I invented a data structure, called an <em>Ezpr</em> (“Ez-pur”), that
can represent expressions, but in a somewhat more natural way than
abstract syntax trees do, and in a way that makes commutativity and
associativity transparent. </p>
<p>An Ezpr has a simplest form, called its “canonical” or “normal” form.
Two Ezprs represent essentially the same mathematical expression if
they have the same canonical form. To decide if two abstract syntax
trees are the same, the computer converts them to Ezprs, simplifies
them, and checks to see if the resulting canonical forms are identical.</p>
<h2>The Ezpr</h2>
<p>Since associativity doesn't matter, we don't want to represent it.
When we (humans) think about adding up a long column of numbers, we
don't think about associativity because we don't add them
pairwise. Instead we use an addition algorithm that adds them all at
once in a big pile. We don't treat addition as a binary operation; we
normally treat it as an operator that adds up the numbers in a list.
The Ezpr makes this explicit: its addition operator is applied to a
list of subexpressions, not to a pair. Both <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20%28b%20%2b%20c%29%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%0a%2b%20b%29%20%2b%20c%24"> are represented as the Ezpr</p>
<pre><code> SUM [ a b c - ]
</code></pre>
<p>which just says that we are adding up <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24">, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24b%24">, and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%24">. (The
<code>-</code> sign is just punctuation; ignore it for now.)</p>
<p>Similarly the Ezpr <code>MUL [ a b c ÷ ]</code> represents the product of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24">,
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24b%24">, and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%24">. (Please ignore the <code>÷</code> sign for the time being.)</p>
<p>To handle commutativity, we want those <code>[ a b c ]</code> lists to be bags.
Perl doesn't have a built-in bag object, so instead I used arrays and
required that the array elements be in sorted order. (Exactly <em>which</em>
sorted order doesn't really matter.)</p>
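<p>A sketch of this trick in Python (the actual solver is Perl): with each bag kept as a sorted tuple, bag equality is just tuple equality, so commutativity comes for free.</p>

```python
def bag(items):
    """Represent a bag (multiset) as a sorted tuple.
    Exactly which order we sort in doesn't matter, as long as it's fixed."""
    return tuple(sorted(items, key=str))
```

<p>Then <code>bag(['a', 'b', 'c']) == bag(['c', 'a', 'b'])</code>, so the Ezprs for <i>a+b+c</i> and <i>c+a+b</i> compare equal.</p>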
<h2>Subtraction and division</h2>
<p>This doesn't yet handle subtraction and division, and the way I chose
to handle them is the only part of this that I think is at all
clever. A <code>SUM</code> object has not one but two bags, one for the positive
and one for the negative part of the expression. An expression like
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2d%20b%20%2b%20c%20%2d%20d%24"> is represented by the Ezpr:</p>
<pre><code>SUM [ a c - b d ]
</code></pre>
<p>and this is <em>also</em> the representation of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20c%20%2d%20b%20%2d%20d%24">, of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%20%2b%20a%0a%2d%20d%20%2d%20b%24">, of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%20%2d%20d%2b%20a%2db%24">, and of any other expression of the
idea that we are adding up <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%24"> and then deducting <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24b%24">
and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24d%24">. The <code>-</code> sign separates the terms that are added from those
that are subtracted. </p>
<p>Either of the two bags may be empty, so for example <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20b%24"> is just
<code>SUM [ a b - ]</code>.</p>
<p>Division is handled similarly. Here conventional mathematical
notation does a little bit better than in the sum case: <code>MUL [ a c ÷ b
d ]</code> is usually written as <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%7bac%7d%7bbd%7d%24">.</p>
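<p>To make the two-bag idea concrete, here is a small Python sketch (with hypothetical names; the author's code is Perl). A node is a triple of an operator tag and two bags: for <code>SUM</code> the bags hold the added and subtracted terms, and for <code>MUL</code> the numerator and denominator factors.</p>

```python
from fractions import Fraction

# A node is ('SUM', added, subtracted) or ('MUL', numerator, denominator);
# a leaf is a plain number.

def ev(e):
    """Evaluate an Ezpr-style node to an exact rational."""
    if not isinstance(e, tuple):
        return Fraction(e)
    op, first, second = e
    if op == 'SUM':
        return sum(map(ev, first), Fraction(0)) - sum(map(ev, second), Fraction(0))
    prod = Fraction(1)
    for x in first:
        prod *= ev(x)
    for x in second:
        prod /= ev(x)
    return prod
```

<p>For example, <code>SUM [ 10 2 - 1 3 ]</code> is <code>('SUM', [10, 2], [1, 3])</code> and evaluates to 8, and <code>MUL [ 6 4 ÷ 3 ]</code> is <code>('MUL', [6, 4], [3])</code>, which also evaluates to 8.</p>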
<p>Ezprs handle the associativity and commutativity of subtraction and
division quite well. I pointed out earlier that subtraction has an
associative law <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%20%2b%20b%29%20%2d%20c%20%3d%20a%20%2b%0a%28b%20%2d%20c%29%24"> even though it's not usually called that.
No code is required to understand that those two expressions are
equal if they are represented as Ezprs, because they are represented
by completely identical structures:</p>
<pre><code> SUM [ a b - c ]
</code></pre>
<p>Similarly there is a commutative law for subtraction: <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20b%20%2d%20c%20%3d%20a%0a%2d%20c%20%2b%20b%24"> and once again that same Ezpr does for both.</p>
<h2>Ezpr laws</h2>
<p>Ezprs are more flexible than binary trees. A binary tree can
represent the expressions <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28a%2bb%29%2bc%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2b%28b%2bc%29%24"> but not the
expression <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bb%2bc%24">. Ezprs can represent all three and it's easy to
transform between them. Just as there are rules for building
expressions out of simpler expressions, there are a few rules for
combining and manipulating Ezprs.</p>
<h3>Lifting and flattening</h3>
<p>The most important transformation is <em>lifting</em>, which is the Ezpr
version of the associative law. In the canonical form of an Ezpr, a
<code>SUM</code> node may not have subexpressions that are also <code>SUM</code> nodes. If
you have</p>
<pre><code> SUM [ a SUM [ b c - ] - … ]
</code></pre>
<p>you should lift the terms from the inner sum into the outer one:</p>
<pre><code> SUM [ a b c - … ]
</code></pre>
<p>effectively transforming <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2b%28b%2bc%29%24"> into <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%2bb%2bc%24">. More generally,
in</p>
<pre><code> SUM [ a SUM [ b - c ]
- d SUM [ e - f ] ]
</code></pre>
<p>we lift the terms from the inner Ezprs into the outer one:</p>
<pre><code> SUM [ a b f - c d e ]
</code></pre>
<p>This effectively transforms <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20%28b%20%2d%20c%29%20%2d%20d%20%2d%20%28e%20%2d%20f%29%24"> to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%20%2b%20b%20%2b%0af%20%2d%20c%20%2d%20d%20%2d%20e%24">.</p>
<p>Similarly, when a <code>MUL</code> node contains another <code>MUL</code>, we can flatten
the structure.</p>
<p>Say we are converting the expression <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%247%20%c3%b7%20%283%20%c3%b7%20%286%20%c3%97%204%29%29%24"> to an Ezpr.
The conversion function is recursive and the naïve version computes
this Ezpr:</p>
<pre><code> MUL [ 7 ÷ MUL [ 3 ÷ MUL [ 6 4 ÷ ] ] ]
</code></pre>
<p>But then at the bottom level we have a <code>MUL</code> inside a <code>MUL</code>, so the
4 and 6 in the innermost <code>MUL</code> are lifted upward:</p>
<pre><code> MUL [ 7 ÷ MUL [ 3 ÷ 6 4 ] ]
</code></pre>
<p>which represents <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac7%7b%5cfrac%7b3%7d%7b6%5ccdot%204%7d%7d%24">.
Then again we have a <code>MUL</code> inside a <code>MUL</code>, and again the
subexpressions of the innermost <code>MUL</code> can be lifted:</p>
<pre><code> MUL [ 7 6 4 ÷ 3 ]
</code></pre>
<p>which we can imagine as <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%7b7%c2%b76%c2%b74%7d3%24">.</p>
<p>The lifting only occurs when the sub-node has the same type as its
parent; we may not lift terms out of a <code>MUL</code> into a <code>SUM</code> or vice
versa.</p>
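<p>In the same sketch representation as before, lifting is a short recursive pass. A child node in its parent's first bag contributes its own first bag to the parent's first bag and its second bag to the parent's second; a child in the second bag contributes them the other way around, since negating a sum (or inverting a quotient) swaps the two bags.</p>

```python
def lift(e):
    """Flatten SUM-inside-SUM and MUL-inside-MUL nodes (a sketch)."""
    if not isinstance(e, tuple):
        return e
    op, first, second = e
    new_first, new_second = [], []
    for bag, same, other in ((first, new_first, new_second),
                             (second, new_second, new_first)):
        for x in bag:
            x = lift(x)
            if isinstance(x, tuple) and x[0] == op:
                same.extend(x[1])    # child's first bag keeps its role
                other.extend(x[2])   # child's second bag is flipped
            else:
                same.append(x)
    return (op, sorted(new_first, key=str), sorted(new_second, key=str))
```

<p>With this, <code>SUM [ a SUM [ b - c ] - d SUM [ e - f ] ]</code> lifts to <code>SUM [ a b f - c d e ]</code>, just as above.</p>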
<h3>Trivial nodes</h3>
<p>The Ezpr <code>SUM [ a - ]</code> says we are adding up just one thing, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24">,
and so it can be eliminated and replaced with just <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24">. Similarly
<code>SUM [ - a ]</code> can be replaced with the constant <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2da%24">, if <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24a%24"> is a
constant. <code>MUL</code> can be handled similarly.</p>
<p>An even simpler case is <code>SUM [ - ]</code> which can be replaced by the
constant 0; <code>MUL [ ÷ ]</code> can be replaced with 1. These sometimes arise
as a result of cancellation.</p>
<h3>Cancellation</h3>
<p>Consider the puzzle «3 3 4 6».
My first solver found 49 solutions to this puzzle. One is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%283%20%2d%203%29%20%2b%20%284%20%c3%97%206%29%24">.
Another is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%284%20%2b%20%283%20%2d%203%29%29%20%c3%97%206%24">. A third is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%20%286%20%2b%20%283%20%2d%203%29%29%24">.</p>
<p>I think these are all the same: the solution is to multiply the 4 by
the 6, and to get rid of the threes by subtracting them to make a zero
term. The zero term can be added onto the rest of the expression or to
any of its subexpressions—there are ten ways to do this—and it doesn't
really matter where.</p>
<p>This is easily explained in terms of Ezprs:
If the same subexpression appears in both of a node's bags,
we can drop it. For example,
the expression <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%284%20%2b%20%283%20%2d3%29%29%20%c3%97%206%24">
starts out as</p>
<pre><code> MUL [ 6 SUM [ 3 4 - 3 ] ÷ ]
</code></pre>
<p>but the duplicate threes in <code>SUM [ 3 4 - 3 ]</code> can be canceled, to
leave</p>
<pre><code> MUL [ 6 SUM [ 4 - ] ÷ ]
</code></pre>
<p>The sum is now trivial, as described in the previous section, so can
be eliminated and replaced with just 4:</p>
<pre><code> MUL [ 6 4 ÷ ]
</code></pre>
<p>This Ezpr records the essential feature of each of the three solutions to
«3 3 4 6» that I mentioned:
they all are multiplying the 6 by the 4, and then doing something else
unimportant to get rid of the threes.</p>
<p>Another solution to the same puzzle is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%286%20%c3%b7%203%29%20%c3%97%20%284%20%c3%97%203%29%24">.
Mathematically we would write this as <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac63%c2%b74%c2%b73%24"> and we can see
this is just <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%246%c3%974%24"> again, with the threes gotten rid of by
multiplication and division, instead of by addition and subtraction.
When converted to an Ezpr, this expression becomes:</p>
<pre><code> MUL [ 6 4 3 ÷ 3 ]
</code></pre>
<p>and the matching threes in the two bags are cancelled, again leaving</p>
<pre><code> MUL [ 6 4 ÷ ]
</code></pre>
<p>In fact there aren't 49 solutions to this puzzle. There is only one,
with <a href="https://pic.blog.plover.com/math/24-puzzle-2/3346-solutions.txt">49 trivial variations</a>.</p>
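<p>In the sketch representation, cancellation is just multiset difference between a node's two bags:</p>

```python
from collections import Counter

def cancel(first, second):
    """Drop subexpressions that appear in both bags of a node (a sketch;
    assumes the bag elements are hashable, e.g. numbers)."""
    f, s = Counter(first), Counter(second)
    common = f & s              # multiset intersection
    f -= common
    s -= common
    return sorted(f.elements(), key=str), sorted(s.elements(), key=str)
```

<p>So the bags of <code>SUM [ 3 4 - 3 ]</code> cancel to those of <code>SUM [ 4 - ]</code>, and the bags of <code>MUL [ 6 4 3 ÷ 3 ]</code> cancel to those of <code>MUL [ 6 4 ÷ ]</code>.</p>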
<h3>Identity elements</h3>
<p>In the preceding example, many of the trivial variations on the
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%c3%976%24"> solution involved multiplying some subexpression by <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%0a33%24">. When one of the input numbers in the puzzle is a 1, one can
similarly obtain a lot of useless variations by choosing where to
multiply the 1.</p>
<p>Consider
«1 3 3 5»:
We can make 24 from <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%20%c3%97%20%283%20%2b%205%29%24">. We then have to get
rid of the 1, but we can do that by multiplying it onto any
of the five subexpressions of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%20%c3%97%20%283%20%2b%205%29%24">:</p>
<p>$$
1 × (3 × (3 + 5)) \\
(1 × 3) × (3 + 5) \\
3 × (1 × (3 + 5)) \\
3 × ((1 × 3) + 5) \\
3 × (3 + (1×5))
$$</p>
<p>These should not be considered different solutions.
Whenever we see any 1's in either of the bags of a <code>MUL</code> node,
we should eliminate them.
The first expression above, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%241%20%c3%97%20%283%20%c3%97%20%283%20%2b%205%29%29%24">, is converted to the Ezpr</p>
<pre><code> MUL [ 1 3 SUM [ 3 5 - ] ÷ ]
</code></pre>
<p>but then the 1 is eliminated from the <code>MUL</code> node leaving</p>
<pre><code> MUL [ 3 SUM [ 3 5 - ] ÷ ]
</code></pre>
<p>The fourth expression,
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%20%c3%97%20%28%281%20%c3%97%203%29%20%2b%205%29%24">,
is initially converted to the Ezpr</p>
<pre><code> MUL [ 3 SUM [ 5 MUL [ 1 3 ÷ ] - ] ÷ ]
</code></pre>
<p>When the 1 is eliminated from the inner <code>MUL</code>, this leaves a trivial
<code>MUL [ 3 ÷ ]</code> which is then replaced with just 3, leaving:</p>
<pre><code> MUL [ 3 SUM [ 3 5 - ] ÷ ]
</code></pre>
<p>which is the same Ezpr as before.</p>
<p>Zero terms in the bags of a <code>SUM</code> node can similarly be dropped.</p>
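<p>Dropping identity elements is equally mechanical in the sketch representation: delete 1's from the bags of a <code>MUL</code> and 0's from the bags of a <code>SUM</code>.</p>

```python
def drop_identities(e):
    """Remove multiplicative 1's from MUL bags and additive 0's from SUM bags."""
    if not isinstance(e, tuple):
        return e
    op, first, second = e
    unit = 1 if op == 'MUL' else 0
    def keep(bag):
        return [drop_identities(x) for x in bag if x != unit]
    return (op, keep(first), keep(second))
```

<p>For example, <code>MUL [ 1 3 8 ÷ ]</code> becomes <code>MUL [ 3 8 ÷ ]</code>. (A node left trivial by this, or by cancellation, is then eliminated as described above.)</p>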
<h3>Multiplication by zero</h3>
<p>One final case is that <code>MUL [ 0 … ÷ … ]</code> can just be simplified to 0.</p>
<p>The question about what to do when there is a zero in the denominator
is a bit of a puzzle.
In the presence of division by zero, some of our simplification rules
are questionable. For example, when we have <code>MUL [ a ÷ MUL [ b ÷ c ]
]</code>, the lifting rule says we can simplify this to <code>MUL [ a c ÷ b
]</code>—that is, that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20a%7b%5cfrac%20bc%7d%20%3d%20%5cfrac%7bac%7db%24">. This is correct,
except that when <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24b%3d0%24"> or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24c%3d0%24"> it may be nonsense, depending on what else is
going on. But since zero denominators never arise in the solution of
these puzzles, there is no issue in this application.</p>
<h2>Results</h2>
<p>The <code>Ezpr</code> module is around 200 lines of Perl code, including
everything: the function that converts abstract syntax trees to Ezprs,
functions to convert Ezprs to various notations (both <code>MUL [ 4 ÷ SUM [
3 - 2 ] ]</code> and <code>4 ÷ (3 - 2)</code>), and the two versions of the
normalization process described in the previous section. The
normalizer itself is about 35 lines.</p>
<p>Associativity is taken care of by the Ezpr structure itself, and
commutativity is not too difficult; as I mentioned, it would have been
trivial if Perl had a built-in bag structure. I find it much easier
to reason about transformations of Ezprs than abstract syntax trees.
Many operations are much simpler; for example the negation of
<code>SUM [ A - B ]</code> is simply <code>SUM [ B - A ]</code>. Pretty-printing is also easier
because the Ezpr better captures the way we write and think about
expressions.</p>
<p>It took me a while to get the normalization tuned properly, but the
results have been quite successful, at least for this problem domain.
The current puzzle-solving program reports the number of distinct
solutions to each puzzle. When it reports two different solutions,
they are really different; when it fails to report the exact solution
that Toph or I found, it reports one that is essentially the same. (There are
some small exceptions, <a href="#arguable">which I will discuss below</a>.)</p>
<p>Since there is no specification for “essentially the same” there is no
hope of automated testing. But we have been using the app for several
months looking for mistakes, and we have not found any.
If the normalizer failed to recognize that two expressions were
essentially similar, we would be very likely to notice: we would be
solving some puzzle, be unable to find the last of the solutions that
the program claimed to exist, and then when we gave up and saw what it was we
would realize that it was essentially the same as one of the solutions we
had found. I am pretty confident that there are no errors of this
type, but see <a href="#arguable">“Arguable points”</a> below. </p>
<p>A harder error to detect is whether the computer has erroneously
conflated two essentially dissimilar expressions. To detect this we
would have to notice that an expression was missing from the computer's solution list. I am
less confident that nothing like this has occurred, but as the months
have gone by I feel better and better about it.</p>
<p>I consider the problem of “how many solutions does this puzzle
<em>really</em> have?” to have been satisfactorily solved. There are some
edge cases, but I think we have identified them.</p>
<p><a href="https://github.com/mjdominus/24-puzzle-solver">Code for my solver is on
Github</a>. The Ezpr code
is in <a href="https://github.com/mjdominus/24-puzzle-solver/blob/master/Expr.pm">the <code>Ezpr</code> package in the <code>Expr.pm</code>
file</a>.
This code is all in the public domain.</p>
<h3>Some examples</h3>
<p>The original program claims to find 35 different solutions to «4 6 6
6». The revised program recognizes that these are of only two types:</p>
<table align=center>
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%206%20%c3%97%206%20%c3%b7%206%24"><td><tt>MUL [ 4 6 ÷ ]</tt>
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%286%20%2d%204%29%20%c3%97%20%286%20%2b%206%29%24"><td><tt>MUL [ SUM [ 6 - 4 ] SUM [ 6 6 - ] ÷ ]</tt>
</table>
<p>Some of the variant forms of the first of those include:</p>
<p>$$
6 × (4 + (6 - 6)) \\
6 + ((4 × 6) - 6) \\
(6 - 6) + (4 × 6) \\
(6 ÷ 6) × (4 × 6) \\
6 ÷ ((6 ÷ 4) ÷ 6) \\
6 ÷ (6 ÷ (4 × 6)) \\
6 × (6 × (4 ÷ 6)) \\
(6 × 6) ÷ (6 ÷ 4) \\
6 ÷ ((6 ÷ 6) ÷ 4) \\
6 × (6 - (6 - 4)) \\
6 × (6 ÷ (6 ÷ 4)) \\
\ldots <br />
$$</p>
<p>In an even more extreme case, the original program finds 80 distinct
expressions that solve
«1 1 4 6»,
all of which are trivial variations on <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%c2%b76%24">.</p>
<p>Of the 715 puzzles, 466 (65%) have solutions; for 175 of these the
solution is unique. There are 3 puzzles with 8 solutions each
(«2 2 4 8»,
«2 3 6 9»,
and «2 4 6 8»), one with 9 solutions («2 3 4 6»), and one with 10 solutions
(«2 4 4 8»).</p>
<p>The 10 solutions for «2 4 4 8»
are as follows:</p>
<table align=center>
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%208%20%2d%202%20%c3%97%204%20%20%20%20%24"><td>SUM [ MUL [ 4 8 ÷ ] - MUL [ 2 4 ÷ ] ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%20%282%20%2b%208%20%2d%204%29%20%20%20%24"><td>MUL [ 4 SUM [ 2 8 - 4 ] ÷ ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%288%20%2d%204%29%20%c3%97%20%282%20%2b%204%29%20%24"><td>MUL [ SUM [ 8 - 4 ] SUM [ 2 4 - ] ÷ ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%20%284%20%2b%208%29%20%c3%b7%202%20%20%24"><td>MUL [ 4 SUM [ 4 8 - ] ÷ 2 ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%284%20%2d%202%29%20%c3%97%20%284%20%2b%208%29%20%24"><td>MUL [ SUM [ 4 - 2 ] SUM [ 4 8 - ] ÷ ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%248%20%c3%97%20%282%20%2b%204%2f4%29%20%20%20%20%20%24"><td>MUL [ 8 SUM [ 1 2 - ] ÷ ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%20%c3%97%204%20%c3%97%204%20%2d%208%20%20%20%20%24"><td>SUM [ MUL [ 2 4 4 ÷ ] - 8 ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%248%20%2b%202%20%c3%97%20%284%20%2b%204%29%20%20%20%24"><td>SUM [ 8 MUL [ 2 SUM [ 4 4 - ] ÷ ] - ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%2b%204%20%2b%202%20%c3%97%208%20%20%20%20%20%24"><td>SUM [ 4 4 MUL [ 2 8 ÷ ] - ]
<tr><td><img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%20%c3%97%20%288%20%2d%204%2f2%29%20%24"><td>MUL [ 4 SUM [ 8 - MUL [ 4 ÷ 2 ] ] ÷ ]
</table>
<p><a href="https://pic.blog.plover.com/math/24-puzzle-2/Solutions-24.txt">A complete listing of every essentially different solution to every
«a b c d» puzzle is available here</a>.
There are 1,063 solutions in all.</p>
<h2>Arguable points <a name="arguable"></a></h2>
<p>There are a few places where we have not completely pinned down what
it means for two solutions to be essentially the same; I think there
is room for genuine disagreement. </p>
<ol>
<li><p>Any solution involving <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%c3%972%24"> can be changed into a slightly different
solution involving <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%2b2%24"> instead. These expressions are
arithmetically different but numerically equal. For example, I
mentioned earlier that
«2 2 4 8»
has 8 solutions. But two of these are <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%208%20%2b%204%20%c3%97%20%282%20%2b%202%29%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%0a%20%20%20%208%20%2b%204%20%c3%97%202%20%c3%97%202%24">. I am willing to accept these as
essentially different. Toph, however, disagrees.</p></li>
<li><p>A similar but more complex situation arises in connection with «1
2 3 7». Consider <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%c3%977%2b3%24">, which equals 24. To get a solution
to «1 2 3 7», we can replace either of the threes in <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%c3%977%2b3%24">
with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%281%2b2%29%24">, obtaining <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%28%281%20%2b%202%29%20%c3%97%207%29%20%2b%203%24"> or <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%20%283%c3%977%29%2b%281%0a%20%20%20%20%2b2%29%24">. My program considers these to be different solutions.
Toph is unsure.</p></li>
</ol>
<p>It would be pretty easy to adjust the normalization process to handle
these the other way if the user wanted that.</p>
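<p>To make the idea concrete, here is a sketch of what such an adjustment might look like. The <code>SUM</code>/<code>MUL</code> tuple representation here is hypothetical, invented for this illustration rather than taken from my actual solver; the point is only that one extra rewrite rule during normalization makes <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%c3%972%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%2b2%24"> compare equal:</p>

```python
# Hypothetical sketch: an optional normalization rule that rewrites
# MUL(2, 2) as SUM(2, 2), so solutions differing only in that
# subexpression collapse into one. Node names are invented here.

def normalize(expr, merge=True):
    """Return a canonical form; with merge=True, 2*2 becomes 2+2."""
    if isinstance(expr, int):
        return expr
    op, args = expr
    # Canonicalize children first, then sort them so argument
    # order does not distinguish otherwise-identical expressions.
    args = tuple(sorted((normalize(a, merge) for a in args), key=repr))
    if merge and op == "MUL" and args == (2, 2):
        op = "SUM"
    return (op, args)
```

<p>With the rule enabled, (2 × 2) × 6 and (2 + 2) × 6 normalize identically and count as one solution; with it disabled, they remain distinct, which is the behavior my program currently has.</p>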
<h2>Some interesting puzzles</h2>
<p><a href="https://pic.blog.plover.com/math/24-puzzle-2/JJZ4631.jpg"><img align=right border=0 src="https://pic.blog.plover.com/math/24-puzzle-2/JJZ4631-sm.png" /></a></p>
<p>«1 2 7 7»
has only one solution, quite unusual. <a href="https://pic.blog.plover.com/math/24-puzzle-2/1277.txt">(Spoiler)</a>
«2 2 6 7»
has two solutions, both somewhat unusual. <a href="https://pic.blog.plover.com/math/24-puzzle-2/2267.txt">(Spoiler)</a></p>
<p>Somewhat similar to
«1 2 7 7»
is
«3 9 9 9»
which also has an unusual solution. But it has two other solutions
that are less surprising. <a href="https://pic.blog.plover.com/math/24-puzzle-2/3999.txt">(Spoiler)</a></p>
<p>«1 3 8 9» has an easy solution but also a quite tricky solution.
<a href="https://pic.blog.plover.com/math/24-puzzle-2/1389.txt">(Spoiler)</a></p>
<p>One of my neighbors has the license plate <code>JJZ 4631</code>.
«4 6 3 1» is one of the more difficult puzzles.</p>
<h2>What took so long?</h2>
<p><a href="https://blog.plover.com/math/24-puzzle.html">Back in March, I wrote</a>:</p>
<blockquote>
<p>I have enough material for at least three or four more articles
about this that I hope to publish here in the coming weeks.</p>
<p>But <a href="https://blog.plover.com/math/17-puzzle.html">the previous article on this
subject</a> ended similarly, saying</p>
<blockquote>
<p>I hope to write a longer article about solvers in the next week or so.</p>
</blockquote>
<p>and that was in July 2016, so don't hold your breath.</p>
</blockquote>
<p>And here we are, five months later!</p>
<p>This article was a <em>huge</em> pain to write. Sometimes I sit down to write
something and all that comes out is dreck. I sat down to write this
one at least three or four times and it never worked. The tortured
Git history bears witness. In the end I had to abandon all my earlier
drafts and start over from scratch, writing a fresh outline in an
empty file.</p>
<p>But perseverance paid off! WOOOOO.</p>
<p>[ Addendum 20170825: I completely forgot that Shreevatsa R. wrote <a href="https://shreevatsa.wordpress.com/2016/07/20/a-simple-puzzle-with-a-foray-into-inequivalent-expressions/">a very interesting article on the same topic as this one</a>,
in July of last year soon after I published my first article in this
series. ]</p>
<p>[ Addendum 20170829: A previous version of this article used the notations <code>SUM [ … # … ]</code>
and <code>MUL [ … # … ]</code>, which I said I didn't like. Zellyn Hunter has
persuaded me to replace these with <code>SUM [ … - … ]</code> and <code>MUL
[ … ÷ … ]</code>. Thank you M. Hunter! ]</p>
<p>[ <a href="https://blog.plover.com/math/24-puzzle-3.html">Yet more on this topic!</a> ]</p>
That time I met Erdős
https://blog.plover.com/2017/08/08#erdos
<p>I should have written about this sooner; by now it has been so long
that I have forgotten most of the details.</p>
<p>I first encountered Paul Erdős in the middle 1980s at a talk by
<a href="https://www.math.nyu.edu/~pach/">János Pach</a> about almost-universal
graphs. Consider graphs with a countably infinite set of vertices.
Is there a “universal” graph <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24G%24"> such that, for any finite or
countable graph <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24H%24">, there is a copy of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24H%24"> inside of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24G%24">?
(Formally, this means that there is an injection from the vertices of
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24H%24"> to the vertices of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24G%24"> that preserves adjacency.) The answer
is yes; <a href="https://en.wikipedia.org/wiki/Rado_graph">it is quite easy to construct such a
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24G%24"></a> and in fact nearly
all random graphs have this property.</p>
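<p>One concrete construction of such a <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24G%24"> is the Rado graph: take the natural numbers as vertices, and join <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24i%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24j%24"> (with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24i%3cj%24">) exactly when bit <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24i%24"> of the binary expansion of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24j%24"> is 1. The following sketch (my own illustration, not from Pach's talk) shows why every finite graph embeds: each new vertex can be mapped to the number whose bits encode exactly the required adjacencies to the vertices already placed:</p>

```python
# Sketch: the Rado graph via the BIT predicate, with a greedy embedding
# of an arbitrary finite graph into it (adjacency is preserved both ways).

def rado_adjacent(i, j):
    """Vertices i and j are adjacent iff bit min(i,j) of max(i,j) is 1."""
    i, j = min(i, j), max(i, j)
    return (j >> i) & 1 == 1

def embed(adj):
    """Map vertices 0..n-1 of a finite graph (dict of neighbor sets)
    to Rado-graph vertices, preserving adjacency and non-adjacency."""
    image = []
    for v in range(len(adj)):
        # Set exactly the bits at the positions of v's earlier neighbors.
        target = sum(1 << image[u] for u in range(v) if u in adj[v])
        # One extra high bit makes the new vertex larger than (hence
        # distinct from) all earlier images without adding adjacencies.
        if image:
            target += 1 << (max(image) + 1)
        image.append(target)
    return image
```

<p>Running <code>embed</code> on a 4-cycle, say, produces four numbers whose BIT-adjacencies reproduce the cycle exactly. The same greedy argument works for any finite (or, one vertex at a time, countable) graph, which is the universality claimed above.</p>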
<p>But then the questions become more interesting. Let <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24K_%5comega%24"> be the
complete graph on a countably infinite set of vertices. Say that
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24G%24"> is “almost universal” if it includes a copy of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24H%24"> for every
finite or countable graph <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24H%24"> <em>except</em> those that contain a copy of
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24K_%5comega%24">. Is there an almost universal graph? Perhaps surprisingly,
no! (<a href="https://math.stackexchange.com/a/449625/25554">Sketch of
proof</a>.)</p>
<p>I enjoyed the talk, and afterward in the lobby I got to meet Ron
Graham and Joel Spencer and talk to them about their Ramsey theory
book, which I had been reading, and about a problem I was working on.
Graham encouraged me to write up my results on the problem and submit
them to <a href="https://en.wikipedia.org/wiki/Mathematics_Magazine"><em>Mathematics
Magazine</em></a>, but I
unfortunately never got around to this.
Graham was there babysitting Erdős, who was one of Pach's
collaborators, but I did not actually talk to Erdős at that time. I
think I didn't recognize him. I don't know why I was able to
recognize Graham.</p>
<p>I find the almost-universal graph thing very interesting.
<a href="https://arxiv.org/abs/1404.5757">It is still an open research area</a>.
But none of this was what I was planning to talk about. I will return
to the point. A couple of years later Erdős was to speak at the
University of Pennsylvania. He had a stock speech for general
audiences that I saw him give more than once. Most of the talk would
be a description of a lot of interesting problems, the bounties he
offered for their solutions, and the progress that had been made on
them so far. He would intersperse the discussions with the sort of
Erdősism that he was noted for: referring to the U.S. and the
U.S.S.R. as “Sam” and “Joe” respectively; his ever-growing series of
styles (Paul Erdős, P.G.O.M., A.D., etc.) and so on.</p>
<p>One remark I remember in particular concerned the $3000 bounty he
offered for proving what is sometimes known as
<a href="https://en.wikipedia.org/wiki/Erd%C5%91s_conjecture_on_arithmetic_progressions">the Erdős–Turán conjecture</a>:
if <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24S%24"> is a subset of the natural numbers, and if <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csum_%7bn%5cin%0aS%7d%5cfrac%201n%24"> diverges, then <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24S%24"> contains arbitrarily long arithmetic
progressions. (A special case of this is that the primes contain
arbitrarily long arithmetic progressions, which was
<a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem">proved in 2004 by Green and Tao</a>,
but which at the time was a long-standing conjecture.) Although the
$3000 was at the time the largest bounty ever offered by Erdős, he
said it was really a bad joke, because to solve the problem would
require so much effort that the per-hour payment would be minuscule.</p>
<p>I made a special trip down to Philadelphia to attend the talk, with
the intention of visiting my girlfriend at Bryn Mawr afterward. I
arrived at the Penn math building early and wandered around the halls
to kill time before the talk. And as I passed by an office with an
open door, I saw Erdős sitting in the antechamber on a small sofa. So I sat down
beside him and started telling him about my favorite graph theory
problem.</p>
<p>Many people, preparing to give a talk to a large roomful of strangers,
would have found this annoying and intrusive. Some people might not want to
talk about graph theory with a passing stranger. But most people are
not Paul Erdős, and I think what I did was probably just the right
thing; what you <em>don't</em> do is sit next to Erdős and then ask how his flight
was and what he thinks of recent politics. We talked about my problem, and to
my great regret I don't remember any of the mathematical details of
what he said. But he did not know the answer offhand, he was not able
to solve it instantly, and he did say it was interesting. So! I had a
conversation with Erdős about graph theory that was not a waste of his
time, and I think I can count that as one of my lifetime
accomplishments.</p>
<p>After a little while it was time to go down to the auditorium for
the talk, and afterward one of the organizers saw me, perhaps
recognized me from the sofa, and invited me to the guest dinner, which
I eagerly accepted. At the dinner, I was thrilled because I secured a
seat next to Erdős! But this was a beginner's mistake: he fell asleep
almost immediately and slept through dinner, which, I learned later,
was completely typical.</p>
<!--
[ From searching of my email archives, I have learned that the first
episode took place on Friday, 5 January 1990, and I now think the two
episodes took place in the reverse order, with the dinner
happening in the late 1980s as I said. ]
-->