Wed, 30 Dec 2020
Benjamin Franklin and the Exercises of Ignatius
Recently I learned of the Spiritual Exercises of St. Ignatius. Wikipedia says (or quotes, it's not clear):
This reminded me very strongly of Chapter 9 of Benjamin Franklin's Autobiography, in which he presents “A Plan for Attaining Moral Perfection”:
So I wondered: was Franklin influenced by the Exercises? I don't know, but it's possible. Wondering about this I consulted the Mighty Internet, and found two items in the Woodstock Letters, a 19th-century Jesuit periodical, wondering the same thing:
(“Woodstock Letters” Volume XXXIV #2 (Sep 1905) p.311–313) I can't guess at the main question, but I can correct one small detail: although this part of the Autobiography was written around 1784, the time of which Franklin was writing, when he actually made his little book, was around 1730, well before the suppression of the Society. The following issue takes up the matter again:
(“Woodstock Letters” Volume XXXIV #3 (Dec 1905) p.459–461) Franklin describes making a decision by listing, on a divided sheet of paper, the reasons for and against the proposed action. And then a variation I hadn't seen: balance arguments for and arguments against, and cross out equally-balanced sets of arguments. Franklin even suggests evaluations as fine as matching two arguments for with three slightly weaker arguments against and crossing out all five together. I don't know what this resembles in the Exercises but it certainly was striking. [Other articles in category /book] permanent link Sat, 26 Dec 2020 This tweet from Raffi Melkonian describes the appetizer plate at his house on Christmas. One item jumped out at me:
I wondered what that was like, and then I realized I do have some idea, because I recognized the word. Basterma is not an originally Armenian word, it's a Turkish loanword, I think canonically spelled pastırma. And from Turkish it made a long journey through Romanian and Yiddish to arrive in English as… pastrami… For which “spicy beef prosciutto” isn't a bad description at all. [Other articles in category /lang/etym] permanent link Tue, 15 Dec 2020 The world is so complicated! It has so many things in it that I could not even have imagined. Yesterday I learned that since 1949 there has been a compact between New Mexico and Texas about how to divide up the water in the Pecos River, which flows from New Mexico to Texas, and then into the Rio Grande. New Mexico is not allowed to use all the water before it gets to Texas. Texas is entitled to receive a certain amount. There have been disputes about this in the past (the Supreme Court case has been active since 1974), so in 1988 the Supreme Court appointed Neil S. Grigg, a hydraulic engineer and water management expert from Colorado, to be “River Master of the Pecos River”, to mediate the disputes and account for the water. The River Master has a rulebook, which you can read online. I don't know how much Dr. Grigg is paid for this. In 2014, Tropical Storm Odile dumped a lot of rain on the U.S. Southwest. The Pecos River was flooding, so Texas asked NM to hold onto the Texas share of the water until later. (The rulebook says they can do this.) New Mexico arranged for the water that was owed to Texas to be stored in the Brantley Reservoir. A few months later Texas wanted their water. “OK,” said New Mexico. “But while we were holding it for you in our reservoir, some of it evaporated. We will give you what is left.” “No,” said Texas, “we are entitled to a certain amount of water from you. We want it all.” But the rule book says that even though the water was in New Mexico's reservoir, it was Texas's water that evaporated. 
(Section C5, “Texas Water Stored in New Mexico Reservoirs”.) [Other articles in category /law] permanent link Thu, 10 Dec 2020
I had some free time yesterday, so that is what I did. The creek is pretty. I did not see anything that appeared to be white clay. Of course I did not investigate extensively, or even closely, because the weather was too cold for wading. But the park was beautiful. There is a walking trail in the park that reaches the tripoint itself. I didn't walk the whole trail. The park entrance is at the other end of the park from the tripoint. After wandering around in the park for a while, I went back to the car, drove to the Maryland end of the park, and left the car on the side of Maryland Route 896 (or maybe Pennsylvania Route 896, it's hard to be sure). Then I cut across private property to the marker. The marker itself looks like this: As you see, the Pennsylvania sides of the monument are marked with ‘P’ and the Maryland side with ‘M’. The other ‘M’ is actually in Delaware. This Newark Post article explains why there is no ‘D’:
This does not explain the whole thing. The point was first marked in 1765 by Mason and Dixon and at that time Delaware was indeed part of Pennsylvania. But as you see the stone marker was placed in 1849, by which time Delaware had been there for some time. Perhaps the people who installed the new marker were trying to pretend that Delaware did not exist. [ Addendum 20201218: Daniel Wagner points out that even if the 1849 people were trying to depict things as they were in 1765, the marker is still wrong; it should have three ‘P’ and one ‘M’, not two of each. I did read that the surveyors who originally placed the 1849 marker put it in the wrong spot, and it had to be moved later, so perhaps they were just not careful people. ] Theron Stanford notes that this point is also the northwestern corner of the Wedge. This sliver of land was east of the Maryland border, but outside the Twelve-Mile Circle and so formed an odd prodtrusion from Pennsylvania. Pennsylvania only relinquished claims to it in 1921 and it is now agreed to be part of Delaware. Were the Wedge still part of Pennsylvania, the tripoint would have been at its southernmost point. Looking at the map now I see that to get to the marker, I must have driven within a hundred yards of the westmost point of the Twelve-Mile Circle itself, and there is a (somewhat more impressive) marker there. Had I realized at the time I probably would have tried to stop off. I have some other pictures of the marker if you are not tired of this yet. [ Addendum 20201211: Tim Heany asks “Is there no sign at the border on 896?” There probably is, and this observation is a strong argument that I parked the car in Maryland. ] [ Addendum 20201211: Yes, ‘prodtrusion’ was a typo, but it is a gift from the Gods of Dada and should be treasured, not thrown in the trash. ] [Other articles in category /misc] permanent link Sat, 21 Nov 2020
Testing for divisibility by 19
[ Previously, Testing for divisibility by 7. ] A couple of nights ago I was keeping Katara company while she revised an essay on The Scarlet Letter (ugh) and to pass the time one of the things I did was tinker with the divisibility tests for 9 and 11. In the course of this I discovered the following method for divisibility by 19:
The result will be a smaller number which is a multiple of 19 if and only if the original number was. For example, let's consider, oh, I don't know, 2337. We calculate:
76 is a multiple of 19, so 2337 was also. But if you're not sure about 76 you can compute 2·6+7 = 19 and if you're not sure about that you need more help than I can provide. I don't claim this is especially practical, but it is fun, not completely unworkable, and I hadn't seen anything like it before. You can save a lot of trouble by reducing the intermediate values mod 19 when needed. In the example above, after the first step you get to 17, which you can reduce mod 19 to -2, and then the next step is -2·2+3 = -1, and the final step is -1·2+2 = 0. Last time I wrote about this Eric Roode sent me a whole compendium of divisibility tests, including one for divisibility by 19. It's a little like mine, but in reverse: group the digits in pairs, left to right; multiply each pair by 5 and then add the next pair. Here's 2337 again:
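Both rules are easy to mechanize, which makes the examples easy to check. Here is a sketch in Python; the code is my own reconstruction of the two procedures as described above, so treat the details with appropriate suspicion:

```python
def double_and_add_19(n):
    # My rule, run right to left: start with the last digit, then
    # repeatedly double and add the next digit to the left.
    digits = [int(d) for d in str(n)]
    x = digits.pop()
    while digits:
        x = 2 * x + digits.pop()
    return x  # a multiple of 19 if and only if n was

def roode_19(n):
    # Eric Roode's rule: pair the digits left to right, then
    # repeatedly multiply by 5 and add the next pair.
    s = str(n)
    if len(s) % 2:
        s = "0" + s  # pad to an even number of digits
    x = int(s[:2])
    for i in range(2, len(s), 2):
        x = 5 * x + int(s[i:i + 2])
    return x

# double_and_add_19(2337) is 76 and roode_19(2337) is 152;
# both are multiples of 19, as promised.
```

Neither routine bothers with the mod-19 reductions that make the methods practical by hand; a computer doesn't mind the big intermediate values.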
Again you can save a lot of trouble by reducing mod 19 before the multiplication. So instead of the first step being 23·5 + 37 you can reduce the 23·5 to 4·5 = 20 and then add the 37 to get 57 right away. [ Addendum: Of course this was discovered long ago, and in fact Wikipedia mentions it. ] [ Addendum 20201123: An earlier version of this article claimed that the double-and-add step I described preserves the mod-19 residue. It does not, of course; the doubling step doubles it. It is, however, true that it is zero afterward if and only if it was zero before. ] [Other articles in category /math] permanent link [Other articles in category /misc] permanent link Mon, 02 Nov 2020
A better way to do remote presentations
A few months ago I wrote an article about a strategy I had tried when giving a talk via videochat. Typically:
I thought I had done it a better way:
This, I thought, had several advantages:
When I brought this up with my co-workers, some of them had a very good objection:
Fair enough! I have done this. If you package your slides with page-turner, one instance becomes the “leader” and the rest are “followers”. Whenever the leader moves from one slide to the next, a very simple backend server is notified. The followers periodically contact the server to find out what slide they are supposed to be showing, and update themselves accordingly. The person watching the show can sit back and let it do all the work. But! If an audience member wants to skip ahead, or go back, that works too. They can use the arrow keys on their keyboard. Their follower instance will stop synchronizing with the leader's slide. Instead, it will display a box in the corner of the page, next to the current slide's page number, that says what slide the leader is looking at. The number in this box updates dynamically, so the audience person always knows how far ahead or behind they are.
At left, the leader is displaying slide 3, and the follower is there also. When the leader moves on to slide 4, the follower instance will switch automatically. At right, the follower is still looking at slide 3, but is detached from the leader, who has moved on to slide 007, as you can see in the gray box. When the audience member has finished their excursion, they can click the gray box and their own instance will immediately resynchronize with the leader and follow along until the next time they want to depart. I used this to give a talk to the Charlotte Perl Mongers last week and it worked. Responses were generally positive even though the UI is a little bit rough-looking. Technical details: The back end is a tiny server, written in Python 3 with Flask. The server is really tiny, only about 60 lines of code. It has only two endpoints: for getting the leader's current page, and for setting it. Setting requires a password.
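The protocol is simple enough to sketch in a few lines. Here is the gist of the server's state, stripped of Flask and the HTTP layer; the class and method names are my own invention for illustration, not page-turner's actual API:

```python
class PageServer:
    # A sketch of the two-endpoint back end: one operation to read the
    # leader's current slide, one (password-protected) to update it.
    def __init__(self, password):
        self.password = password
        self.page = 1

    def get_page(self):
        # Followers poll this to learn which slide the leader is on.
        return self.page

    def set_page(self, page, password):
        # Only the leader, who knows the password, may advance the slide.
        if password != self.password:
            raise PermissionError("bad password")
        self.page = page
```

The real server just exposes these two operations as HTTP endpoints, with each follower polling the getter periodically and comparing the result to its own position.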
The front end runs in the browser. The user downloads the front-end
script,
The On page switching, a tiny amount of information is stored in the
browser window's If you want page-turner to display the leader's slide number when the
follower is desynchronized, as in the example above, you include an
element with class
The password for the
to
or whatever. Many improvements are certainly possible. It could definitely look a lot better, but I leave that to someone who has interest and aptitude in design. I know of one serious bug: at present the server doesn't handle SSL,
so must be run at an Source code download. Patches welcome. License: The software is licensed under the Creative Commons Attribution 4.0 license (CC BY 4.0). You are free to share (copy and redistribute the software in any medium or format) and adapt (remix, transform, and build upon the material) for any purpose, even commercially, so long as you give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. Share and enjoy. [Other articles in category /talk] permanent link Sun, 18 Oct 2020
Newton's Method and its instability
While messing around with Newton's method for last week's article, I built this Desmos thingy: The red point represents the initial guess; grab it and drag it around, and watch how the later iterations change. Or, better, visit the Desmos site and play with the slider yourself. (The curve here is !!y = (x-2.2)(x-3.3)(x-5.5)!!; it has a local maximum at around !!2.7!!, and a local minimum at around !!4.64!!.) Watching the attractor point jump around I realized I was arriving at a much better understanding of the instability of the convergence. Clearly, if your initial guess happens to be near an extremum of !!f!!, the next guess could be arbitrarily far away, rather than a small refinement of the original guess. But even if the original guess is pretty good, the refinement might be near an extremum, and then the following guess will be somewhere random. For example, although !!f!! is quite well-behaved in the interval !![4.3, 4.35]!!, as the initial guess !!g!! increases across this interval, the refined guess !!\hat g!! decreases from !!2.74!! to !!2.52!!, and in between these there is a local maximum that kicks the ball into the weeds. The result is that at !!g=4.3!! the method converges to the largest of the three roots, and at !!g=4.35!!, it converges to the smallest. This is where the Newton basins come from: Here we are considering the function !!f:z\mapsto z^3 -1!! in the complex plane. Zero is at the center, and the obvious root, !!z=1!! is to its right, deep in the large red region. The other two roots are at the corresponding positions in the green and blue regions. Starting at any red point converges to the !!z=1!! root. Usually, if you start near this root, you will converge to it, which is why all the points near it are red. 
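The real-axis behavior described above is easy to confirm numerically. A quick sketch (my code, not part of the Desmos demo), using the same cubic and the two endpoints of that interval:

```python
def newton(f, fprime, g, steps=50):
    # Iterate g -> g - f(g)/f'(g) a fixed number of times.
    for _ in range(steps):
        g = g - f(g) / fprime(g)
    return g

f = lambda x: (x - 2.2) * (x - 3.3) * (x - 5.5)
fprime = lambda x: 3 * x**2 - 22 * x + 37.51  # derivative of the expanded cubic

newton(f, fprime, 4.3)   # converges to 5.5, the largest root
newton(f, fprime, 4.35)  # converges to 2.2, the smallest
```

Two starting guesses only 0.05 apart end up at roots 3.3 apart, because the first refinement lands near the local maximum at about 2.7 and the second step flies off in different directions.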
But some nearish starting points are near an extremum, so that the next guess goes wild, and then the iteration ends up at the green or the blue root instead; these areas are the rows of green and blue leaves along the boundary of the large red region. And some starting points on the boundaries of those leaves kick the ball into one of the other leaves… Here's the corresponding basin diagram for the polynomial !!y = (x-2.2)(x-3.3)(x-5.5)!! from earlier: The real axis is the horizontal hairline along the middle of the diagram. The three large regions are the main basins of attraction to the three roots (!!x=2.2, 3.3!!, and !!5.5!!) that lie within them. But along the boundaries of each region are smaller intrusive bubbles where the iteration converges to a surprising value. A point moving from left to right along the real axis passes through the large pink !!2.2!! region, and then through a very small yellow bubble, corresponding to the values right around the local maximum near !!x=2.7!! where the process unexpectedly converges to the !!5.5!! root. Then things settle down for a while in the blue region, converging to the !!3.3!! root as one would expect, until the value gets close to the local minimum at !!4.64!! where there is a pink bubble because the iteration converges to the !!2.2!! root instead. Then as !!x!! increases from !!4.64!! to !!5.5!!, it leaves the pink bubble and enters the main basin of attraction to !!5.5!! and stays there. If the picture were higher resolution, you would be able to see that the pink bubbles all have tiny yellow bubbles growing out of them (one is 4.39), and the tiny yellow bubbles have even tinier pink bubbles, and so on forever. (This was generated by the Online Fractal Generator at usefuljs.net; the labels were added later by me. The labels’ positions are only approximate.) [ Addendum: Regarding complex points and !!f : z\mapsto z^3-1!! I said “some nearish starting points are near an extremum”. But this isn't right; !!f!! 
has no extrema. It has an inflection point at !!z=0!! but this doesn't explain the instability along the lines !!\theta = \frac{2k+1}{3}\pi!!. So there's something going on here with the complex derivative that I don't understand yet. ] [Other articles in category /math] permanent link
Fixed points and attractors, part 3
Last week I wrote about a super-lightweight variation on Newton's method, in which one takes this function: $$f_n : \frac ab \mapsto \frac{a+nb}{a+b}$$ or equivalently $$f_n : x \mapsto \frac{x+n}{x+1}$$ Iterating !!f_n!! for a suitable initial value (say, !!1!!) converges to !!\sqrt n!!: $$ \begin{array}{rr} x & f_3(x) \\ \hline 1.0 & 2.0 \\ 2.0 & 1.667 \\ 1.667 & 1.75 \\ 1.75 & 1.727 \\ 1.727 & 1.733 \\ 1.733 & 1.732 \\ 1.732 & 1.732 \end{array} $$ Later I remembered that a few months back I wrote a couple of articles about a more general method that includes this as a special case: The general idea was:
We can see that !!\sqrt n!! is a fixed point of !!f_n!!: $$ \begin{align} f_n(\sqrt n) & = \frac{\sqrt n + n}{\sqrt n + 1} \\ & = \frac{\sqrt n(1 + \sqrt n)}{1 + \sqrt n} \\ & = \sqrt n \end{align} $$ And in fact, it is an attracting fixed point, because if !!x = \sqrt n + \epsilon!! then $$\begin{align} f_n(\sqrt n + \epsilon) & = \frac{\sqrt n + \epsilon + n}{\sqrt n + \epsilon + 1} \\ & = \frac{(\sqrt n + \sqrt n\epsilon + n) - (\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1} \\ & = \sqrt n - \frac{(\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1} \end{align}$$ Disregarding the !!\epsilon!! in the denominator we obtain $$f_n(\sqrt n + \epsilon) \approx \sqrt n - \frac{\sqrt n - 1}{\sqrt n + 1} \epsilon $$ The error term !!-\frac{\sqrt n - 1}{\sqrt n + 1} \epsilon!! is strictly smaller in absolute value than the original error !!\epsilon!!, because !!0 < \frac{x-1}{x+1} < 1!! whenever !!x>1!!. This shows that the fixed point !!\sqrt n!! is attractive. In the previous articles I considered several different simple functions that had fixed points at !!\sqrt n!!, but I didn't think to consider this unusually simple one. I said at the time:
but I never did get around to the Möbius transformations, and I have long since forgotten what I planned to say. !!f_n!! is an example of a Möbius transformation, and I wonder if my idea was to systematically find all the Möbius transformations that have !!\sqrt n!! as a fixed point, and see what they look like. It is probably possible to automate the analysis of whether the fixed point is attractive, and if not to apply one of the transformations from the previous article to make it attractive. [Other articles in category /math] permanent link Tue, 13 Oct 2020
Newton's Method but without calculus — or multiplication
Newton's method goes like this: We have a function !!f!! and we want to solve the equation !!f(x) = 0.!! We guess an approximate solution, !!g!!, and it doesn't have to be a very good guess. Then we calculate the line !!T!! tangent to !!f!! through the point !!\langle g, f(g)\rangle!!. This line intersects the !!x!!-axis at some new point !!\langle \hat g, 0\rangle!!, and this new value, !!\hat g!!, is a better approximation to the value we're seeking. Analytically, we have: $$\hat g = g - \frac{f(g)}{f'(g)}$$ where !!f'(g)!! is the derivative of !!f!! at !!g!!. We can repeat the process if we like, getting better and better approximations to the solution. (See detail at left; click to enlarge. Again, the blue line is the tangent, this time at !!\langle \hat g, f(\hat g)\rangle!!. As you can see, it intersects the axis very close to the actual solution.) In general, this requires calculus or something like it, but in any particular case you can avoid the calculus. Suppose we would like to find the square root of 2. This amounts to solving the equation $$x^2-2 = 0.$$ The function !!f!! here is !!x^2-2!!, and !!f'!! is !!2x!!. Once we know (or guess) !!f'!!, no further calculus is needed. The method then becomes: Guess !!g!!, then calculate $$\hat g = g - \frac{g^2-2}{2g}.$$ For example, if our initial guess is !!g = 1.5!!, then the formula above tells us that a better guess is !!\hat g = 1.5 - \frac{2.25 - 2}{3} = 1.4166\ldots!!, and repeating the process with !!\hat g!! produces !!1.41421\mathbf{5686}!!, which is very close to the correct result !!1.41421\mathbf{3562}!!. If we want the square root of a different number !!n!! we just substitute it for the !!2!! in the numerator. This method for extracting square roots works well and requires no calculus. It's called the Babylonian method and while there's no evidence that it was actually known to the Babylonians, it is quite ancient; it was first recorded by Hero of Alexandria about 2000 years ago. 
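In code, the whole method is a two-line loop. A minimal sketch, using the example above (!!n=2!!, with initial guess !!1.5!!):

```python
def babylonian_sqrt(n, g, steps=5):
    # Repeatedly replace the guess g with the average of g and n/g.
    for _ in range(steps):
        g = (g + n / g) / 2
    return g

babylonian_sqrt(2, 1.5, steps=1)  # 1.4166..., the improved guess from above
babylonian_sqrt(2, 1.5)           # 1.41421356..., correct to machine precision
```

The convergence is quadratic: the number of correct digits roughly doubles at each step, so five steps are already more than enough for double-precision floats.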
How might this have been discovered if you didn't have calculus? It's actually quite easy. Here's a picture of the number line. Zero is at one end, !!n!! is at the other, and somewhere in between is !!\sqrt n!!, which we want to find. Also somewhere in between is our guess !!g!!. Say we guessed too low, so !!0 \lt g < \sqrt n!!. Now consider !!\frac ng!!. Since !!g!! is too small to be !!\sqrt n!! exactly, !!\frac ng!! must be too large. (If !!g!! and !!\frac ng!! were both smaller than !!\sqrt n!!, then their product would be smaller than !!n!!, and it isn't.) Similarly, if the guess !!g!! is too large, so that !!\sqrt n < g!!, then !!\frac ng!! must be less than !!\sqrt n!!. The important point is that !!\sqrt n!! is between !!g!! and !!\frac ng!!. We have narrowed down the interval in which !!\sqrt n!! lies, just by guessing. Since !!\sqrt n!! lies in the interval between !!g!! and !!\frac ng!! our next guess should be somewhere in this smaller interval. The most obvious thing we can do is to pick the point midway between !!g!! and !!\frac ng!!. So if we guess the average, $$\frac12\left(g + \frac ng\right),$$ this will probably be much closer to !!\sqrt n!! than !!g!! was: This average is exactly what Newton's method would have calculated, because $$\frac12\left(g + \frac ng\right) = g - \frac{g^2-n}{2g}.$$ But we were able to arrive at the same computation with no calculus at all — which is why this method could have been, and was, discovered 1700 years before Newton's method itself. If we're dealing with rational numbers then we might write !!g=\frac ab!!, and then instead of replacing our guess !!g!! with a better guess !!\frac12\left(g + \frac ng\right)!!, we could think of it as replacing our guess !!\frac ab!! with a better guess !!\frac12\left(\frac ab + \frac n{\frac ab}\right)!!. 
This simplifies to $$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$ so that for example, if we are calculating !!\sqrt 2!!, and we start with the guess !!g=\frac32!!, the next guess is $$\frac{3^2 + 2\cdot2^2}{2\cdot3\cdot 2} = \frac{17}{12} = 1.4166\ldots$$ as we saw before. The approximation after that is !!\frac{289+288}{2\cdot17\cdot12} = \frac{577}{408} = 1.41421568\ldots!!. Used this way, the method requires only integer calculations, and converges very quickly. But the numerators and denominators increase rapidly, which is good in one sense (it means you get to the accurate approximations quickly) but can also be troublesome because the numbers get big and also because you have to multiply, and multiplication is hard. But remember how we figured out to do this calculation in the first place: all we're really trying to do is find a number in between !!g!! and !!\frac ng!!. We did that the first way that came to mind, by averaging. But perhaps there's a simpler operation that we could use instead, something even easier to compute? Indeed there is! We can calculate the mediant. The mediant of !!\frac ab!! and !!\frac cd!! is simply $$\frac{a+c}{b+d}$$ and it is very easy to show that it lies between !!\frac ab!! and !!\frac cd!!, as we want. So instead of the relatively complicated $$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$ operation, we can try the very simple and quick $$\frac ab \Rightarrow \operatorname{mediant}\left(\frac ab, \frac{nb}{a}\right) = \frac{a+nb}{b+a}$$ operation. Taking !!n=2!! as before, and starting with !!\frac 32!!, this produces: $$ \frac 32 \Rightarrow\frac{ 7 }{ 5 } \Rightarrow\frac{ 17 }{ 12 } \Rightarrow\frac{ 41 }{ 29 } \Rightarrow\frac{ 99 }{ 70 } \Rightarrow\frac{ 239 }{ 169 } \Rightarrow\frac{ 577 }{ 408 } \Rightarrow\cdots$$ which you may recognize as the convergents of !!\sqrt2!!. These are actually the rational approximations of !!\sqrt 2!! that are optimally accurate relative to the sizes of their denominators. 
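Since the mediant step needs nothing but integer addition (plus one multiplication by !!n!!), it is pleasant to watch it run. A sketch of the computation above (!!n=2!!, starting from !!\frac32!!); the code is mine:

```python
def mediant_step(a, b, n):
    # a/b  ->  mediant(a/b, n*b/a)  =  (a + n*b) / (a + b)
    return a + n * b, a + b

a, b = 3, 2
sequence = []
for _ in range(6):
    a, b = mediant_step(a, b, 2)
    sequence.append((a, b))
# sequence is now [(7, 5), (17, 12), (41, 29), (99, 70), (239, 169), (577, 408)]
```

For !!n=2!! even the multiplication is just an extra addition, so each step is a couple of integer adds. Compare with the Babylonian update, which needs two multiplications and keeps squaring the size of the numbers.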
Notice that !!\frac{17}{12}!! and !!\frac{577}{408}!! are in there as they were before, although it takes longer to get to them. I think it's cool that you can view it as a highly-simplified version of Newton's method. [ Addendum: An earlier version of the last paragraph claimed:
Simon Tatham pointed out that this was mistaken. It's true when !!n=2!!, but not in general. The sequence of fractions that you get does indeed converge to !!\sqrt n!!, but it's not usually the convergents, or even in lowest terms. When !!n=3!!, for example, the numerators and denominators are all even. ] [ Addendum: Newton's method as I described it, with the crucial !!g → g - \frac{f(g)}{f'(g)}!! transformation, was actually invented in 1740 by Thomas Simpson. Both Isaac Newton and Joseph Raphson had earlier described only special cases, as had several Asian mathematicians, including Seki Kōwa. ] [ Previous discussion of convergents: Archimedes and the square root of 3; 60-degree angles on a lattice. A different variation on the Babylonian method. ] [ Note to self: Take a look at what the AM-GM inequality has to say about the behavior of !!\hat g!!. ] [ Addendum 20201018: A while back I discussed the general method of picking a function !!f!! that has !!\sqrt 2!! as a fixed point, and iterating !!f!!. This is yet another example of such a function. ] [Other articles in category /math] permanent link Wed, 23 Sep 2020
The mystery of the malformed command-line flags
Today a user came to tell me that their command
failed, saying:
This is surprising. The command looks correct. The branch name is
required. The The
But it still didn't work:
I dug in to the script and discovered the problem, which was not actually a programming error. The crappy shell script was behaving correctly! I had written up release notes for the In an earlier draft of the release notes, when they were displayed as an HTML page, there would be bad line breaks:
or:
No problem, I can fix it! I just changed the pair of hyphens ( But then this hapless user clipboard-copied the option string out of
the release notes, including its U+2011 characters. The parser in the
script was (correctly) looking for U+002D characters, and didn't
recognize One lesson learned: people will copy-paste stuff out of documentation, and I should be prepared for that. There are several places to address this. I made the error message more transparent; formerly it would complain only about the first argument, which was confusing because it was the one argument that wasn't superfluous. Now it will say something like
which is more descriptive of what it actually doesn't like. I could change the nonbreaking hyphens in the release notes back to regular hyphens and just accept the bad line breaks. But I don't want to. Typography is important. One idea I'm toying with is to have the shell script silently replace all nonbreaking hyphens with regular ones before any further processing. It's a hack, but it seems like it might be a harmless one. So many weird things can go wrong. This computer stuff is really complicated. I don't know how anyone gets anything done. [ Addendum: A reader suggests that I could have fixed the line breaks with CSS. But the release notes were being presented as a Slack “Post”, which is essentially a WYSIWYG editor for creating shared documents. It presents the document in a canned HTML style, and as far as I know there's no way to change the CSS it uses. Similarly, there's no way to insert raw HTML elements, so no way to change the style per-element. ] [Other articles in category /prog/bug] permanent link Sun, 13 Sep 2020 The front page of NPR.org today has this headline: It contains this annoying phrase:
Someone has really committed to hedging. I would have said that the race would certainly help determine control of the Senate, or that it could determine control of the Senate. The statement as written makes an extremely weak claim. The article itself doesn't include this phrase. This is why reporters hate headline-writers. [Other articles in category /lang] permanent link Fri, 11 Sep 2020
Historical diffusion of words for “eggplant”
In reply to my recent article about the history of words for “eggplant”, a reader, Lydia, sent me this incredible map they had made that depicts the history and the diffusion of the terms: Lydia kindly gave me permission to share their map with you. You can see the early Dravidian term vaḻutanaṅṅa in India, and then the arrows show it travelling westward across Persia and Arabia, from there to East Africa and Europe, and from there to the rest of the world, eventually making its way back to India as brinjal before setting out again on yet more voyages. Thank you very much, Lydia! And Happy Diada Nacional de Catalunya, everyone!
A maxim for conference speakers
The only thing worse than re-writing your talk the night before is writing your talk the night before. [Other articles in category /talk] permanent link Fri, 28 Aug 2020 This morning Katara asked me why we call these vegetables “zucchini” and “eggplant” but the British call them “courgette” and “aubergine”. I have only partial answers, and the more I look, the more complicated they get. Zucchini: The zucchini is a kind of squash, which means that in Europe it is a post-Columbian import from the Americas. “Squash” itself is from Narragansett, and is not related to the verb “to squash”. So I speculate that what happened here was:
The Big Dictionary has citations for “zucchini” only back to 1929, and “courgette” to 1931. What was this vegetable called before that? Why did the Americans start calling it “zucchini” instead of whatever they called it before, and why “zucchini” and not “courgette”? If it was brought in by Italian immigrants, one might expect the word to have appeared earlier; the mass immigration of Italians into the U.S. was over by 1920. Following up on this thought, I found a mention of it in Cuniberti, J. Lovejoy, Herndon, J. B. (1918). Practical Italian recipes for American kitchens, p. 18: “Zucchini are a kind of small squash for sale in groceries and markets of the Italian neighborhoods of our large cities.” Note that Cuniberti explains what a zucchini is, rather than saying something like “the zucchini is sometimes known as a green summer squash” or whatever, which suggests that she thinks it will not already be familiar to the readers. It looks as though the story is: Colonial Europeans in North America stopped eating the zucchini at some point, and forgot about it, until it was re-introduced in the early 20th century by Italian immigrants. When did the French start calling it courgette? When did the Italians start calling it zucchini? Is the Italian term a calque of the French, or vice versa? Or neither? And since courge (and gourd) are evidently descended from Latin cucurbita, where did the Italians get zucca? So many mysteries. Eggplant: Here I was able to get better answers. Unlike squash, the eggplant is native to Eurasia and has been cultivated in western Asia for thousands of years. The puzzling name “eggplant” is because the fruit, in some varieties, is round, white, and egg-sized. The term “eggplant” was then adopted for other varieties of the same plant where the fruit is entirely un-egglike. “Eggplant” in English goes back only to 1767. What was it called before that? Here the OED was more help. It gives this quotation, from 1785:
I inferred that the preceding text described it under a better-known name, so, thanks to the Wonders of the Internet, I looked up the original source:
(Jean-Jacques Rosseau, Letters on the Elements of Botany, tr. Thos. Martyn 1785. Page 202. (Wikipedia)) The most common term I've found that was used before “egg-plant” itself is “mad apple”. The OED has cites from the late 1500s that also refer to it as a “rage apple”, which is a calque of French pomme de rage. I don't know how long it was called that in French. I also found “Malum Insanam” in the 1736 Lexicon technicum of John Harris, entry “Bacciferous Plants”. Melongena was used as a scientific genus name around 1700 and later adopted by Linnaeus in 1753. I can't find any sign that it was used in English colloquial, non-scientific writing. Its etymology is a whirlwind trip across the globe. Here's what the OED says about it:
Wowzers. Okay, now how do we get to “aubergine”? The list above includes Arabic bāḏinjān, and this, like many Arabic words, was borrowed into Spanish, as berengena or alberingena. (The “al-” prefix is Arabic for “the” and is attached to many such borrowings, for example “alcohol” and “alcove”.) From alberingena it's a short step to French aubergine. The OED entry for aubergine doesn't mention this. It claims that aubergine is from “Spanish alberchigo, alverchiga, ‘an apricocke’”. I think it's clear that the OED blew it here, and I think this must be the first time I've ever been confident enough to say that. Even the OED itself supports me on this: the note at the entry for brinjal says: “cognate with the Spanish alberengena is the French aubergine”. Okay then. (Brinjal, of course, is a contraction of berengena, via Portuguese bringella.) Sanskrit vātiṅgaṇa is also the ultimate source of modern Hindi baingan, as in baingan bharta. (Wasn't there a classical Latin word for eggplant? If so, what was it? Didn't the Romans eat eggplant? How do you conquer the world without any eggplants?) [ Addendum: My search for antedatings of “zucchini” turned up some surprises. For example, I found what seemed to be many mentions in an 1896 history of Sicily. These turned out not to be about zucchini at all, but rather the computer's pathetic attempts at recognizing the word Σικελίαν. ] [ Addendum 20200831: Another surprise: Google Books and Hathi Trust report that “zucchini” appears in the 1905 Collier Modern Eclectic Dictionary of the English Language, but it's an incredible OCR failure for the word “acclamation”. ] [ Addendum 20200911: A reader, Lydia, sent me a beautiful map showing the evolution of the many words for ‘eggplant’. Check it out. ] [Other articles in category /lang/etym] permanent link Mon, 24 Aug 2020
Ripta Pasay brought to my attention the English cookbook Liber Cure Cocorum, published sometime between 1420 and 1440. The recipes are conveyed as poems:
(Original plus translation by Cindy Renfrow) “Conyngus” is a rabbit; English has the cognate “coney”. If you have read my article on how to read Middle English you won't have much trouble with this. There are a few obsolete words: sere means “separately”; myed bread is bread crumbs, and amydone is starch. I translate it (very freely) as follows:
Thanks, Ripta! [Other articles in category /food] permanent link Fri, 21 Aug 2020
Mixed-radix fractions in Bengali
[ Previously, Base-4 fractions in Telugu. ] I was really not expecting to revisit this topic, but a couple of weeks ago, looking for something else, I happened upon the following curiously-named Unicode characters:
Oh boy, more base-four fractions! What on earth does “NUMERATOR ONE LESS THAN THE DENOMINATOR” mean and how is it used? An explanation appears in the Unicode proposal to add the related “ganda” sign:
(Anshuman Pandey, “Proposal to Encode the Ganda Currency Mark for Bengali in the BMP of the UCS”, 2007.) Pandey explains: prior to decimalization, the Bengali rupee (rupayā) was divided into sixteen ānā. Standard Bengali numerals were used to write rupee amounts, but there was a special notation for ānā. The sign ৹ always appears, and means sixteenths. Prefixed to it is a numerator symbol, which goes ৴, ৵, ৶, ৷ for 1, 2, 3, 4. So for example, 3 ānā is written ৶৹, which means !!\frac3{16}!!. The larger fractions are made by adding the numerators, grouping by 4's:
except that three fours (৷৷৷) is too many, and is abbreviated by the intriguing NUMERATOR ONE LESS THAN THE DENOMINATOR sign ৸ when more than 11 ānā are being written. Historically, the ānā was divided into 20 gaṇḍā; the gaṇḍā amounts are written with standard (Bengali decimal) numerals instead of the special-purpose base-4 numerals just described. The gaṇḍā sign ৻ precedes the numeral, so 4 gaṇḍā (!!\frac15!! ānā) is written as ৻৪. (The ৪ is not an 8, it is a four.) What if you want to write 17 rupees plus !!9\frac15!! ānā? That is 17 rupees plus 9 ānā plus 4 gaṇḍā. If I am reading this report correctly, you write it this way: ১৭৷৷৴৻৪ This breaks down into three parts as ১৭ ৷৷৴ ৻৪. The ১৭ is a 17, for 17 rupees; the ৷৷৴ means 9 ānā (the denominator ৹ is left implicit) and the ৻৪ means 4 gaṇḍā, as before. There is no separator between the rupees and the ānā. But there doesn't need to be, because different numerals are used! An analogous imaginary system in Latin script would be to write the amount as
where the ‘17’ means 17 rupees, the ‘dda’ means 4+4+1=9 ānā, and the ¢4 means 4 gaṇḍā. There is no trouble seeing where the ‘17’ ends and the ‘dda’ begins. Pandey says there was an even smaller unit, the kaṛi. It was worth ¼ of a gaṇḍā and was again written with the special base-4 numerals, but as if the gaṇḍā had been divided into 16. A complete amount might be written with decimal numerals for the rupees, base-4 numerals for the ānā, decimal numerals again for the gaṇḍā, and base-4 numerals again for the kaṛi. No separators are needed, because each section is written with symbols that are different from the ones in the adjoining sections. [Other articles in category /math] permanent link Thu, 06 Aug 2020
Recommended reading: Matt Levine’s Money Stuff
Lately my favorite read has been Matt Levine’s Money Stuff articles from Bloomberg News. Bloomberg's web site requires a subscription but you can also get the Money Stuff articles as an occasional email. It arrives at most once per day. Almost every issue teaches me something interesting I didn't know, and almost every issue makes me laugh. Example of something interesting: a while back it was all over the news that oil prices were negative. Levine was there to explain what was really going on and why. Some people manage index funds. They are not trying to beat the market, they are trying to match the index. So they buy derivatives that give them the right to buy oil futures contracts at whatever the day's closing price is. But say they already own a bunch of oil contracts. If they can get the close-of-day price to dip, then their buy-at-the-end-of-the-day contracts will all be worth more because the counterparties have contracted to buy at the dip price. How can you get the price to dip by the end of the day? Easy: unload 20% of your contracts at a bizarrely low price, to make the value of the other 80% spike… it makes my head swim. But there are weird second- and third-order effects too. Normally if you invest fifty million dollars in oil futures speculation, there is a worst-case: the price of oil goes to zero and you lose your fifty million dollars. But for these derivative futures, the price could in theory become negative, and for a short time in April, it did:
One article I particularly remember discussed the kerfuffle a while back concerning whether Kelly Loeffler improperly traded stocks on classified coronavirus-related intelligence that she received in her capacity as a U.S. senator. I found Levine's argument persuasive:
He contrasted this case with that of Richard Burr, who, unlike Loeffler, remains under investigation. The discussion was factual and informative, unlike what you would get from, say, Twitter, or even Metafilter, where the response was mostly limited to variations on “string them up” and “eat the rich”. Money Stuff is also very funny. Today’s letter discusses a disclosure filed recently by Nikola Corporation:
A couple of recent articles that cracked me up discussed clueless day-traders pushing up the price of Hertz stock after Hertz had declared bankruptcy, and how Hertz diffidently attempted to get the SEC to approve a new stock issue to cater to these idiots. (The SEC said no.) One recurring theme in the newsletter is “Everything is Securities Fraud”. This week, Levine asks:
Of course you'd expect that the executives would be criminally charged, as they have been. But is there a cause for the company’s shareholders to sue? If you follow the newsletter, you know what the answer will be:
because Everything is Securities Fraud.
I recommend it. Levine also has a Twitter account but it is mostly just links to his newsletter articles. [ Addendum 20200821: Unfortunately, just a few days after I posted this, Matt Levine announced that his newsletter would be on hiatus for a few months, as he would be on paternity leave. Sorry! ] [ Addendum 20210207: Money Stuff is back. ] [ Addendum 20221204: This article about the balance sheet circulated by FTX in the hours before its bankruptcy may be my favorite of all time. ] [Other articles in category /ref] permanent link Wed, 05 Aug 2020
A maybe-interesting number trick?
I'm not sure if this is interesting, trivial, or both. You decide. Let's divide the numbers from 1 to 30 into the following six groups:
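Any grouping with the property described below must consist of the cosets of !!\{1, 2, 4, 8, 16\}!!, the powers of 2 mod 31, which form the unique subgroup of order 5 in the multiplicative group mod 31. If you want to check the claim yourself, here's a quick Python sketch (my code, not part of the original post; the generated rows match the six groups as sets, though not necessarily in the A–F order):

```python
from itertools import product

# The powers of 2 mod 31 form a subgroup of order 5: 2^5 = 32 ≡ 1 (mod 31).
subgroup = {1, 2, 4, 8, 16}

# Partition 1..30 into the six cosets of that subgroup.
rows = []
seen = set()
for g in range(1, 31):
    if g not in seen:
        coset = {g * h % 31 for h in subgroup}
        rows.append(coset)
        seen |= coset

def product_row(i, j):
    """Return the index of the row containing every product a·b mod 31
    with a taken from rows[i] and b from rows[j]."""
    products = {a * b % 31 for a, b in product(rows[i], rows[j])}
    (k,) = [k for k, row in enumerate(rows) if row == products]
    return k

# The claim: for every pair of rows, all 25 products land in one row.
for i in range(6):
    for j in range(6):
        product_row(i, j)  # the unpacking inside raises if the claim fails
```

The reason it works: the rows are cosets of a subgroup !!H!!, and the products of two cosets !!aH!! and !!bH!! fill exactly the coset !!abH!!.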
Choose any two rows. Choose a number from each row, and multiply them mod 31. (That is, multiply them, and if the product is 31 or larger, divide it by 31 and keep the remainder.) Regardless of which two numbers you chose, the result will always be in the same row. For example, any two numbers chosen from rows B and D will multiply to yield a number in row E. If both numbers are chosen from row F, their product will always appear in row A. [Other articles in category /math] permanent link Sun, 02 Aug 2020
Gulliver's Travels (1726), Part III, chapter 2:
When I first told Katara about this, several years ago, instead of “the minds of these people are so taken up with intense speculations” I said they were obsessed with their phones. Now the phones themselves have become the flappers:
Our minds are not even taken up with intense speculations, but with Instagram. Dean Swift would no doubt be disgusted. [Other articles in category /book] permanent link Sat, 01 Aug 2020
How are finite fields constructed?
Here's another recent Math Stack Exchange answer I'm pleased with.
The only “reasonable” answer here is “get an undergraduate abstract algebra text and read the chapter on finite fields”. Because come on, you can't expect some random stranger to appear and write up a detailed but short explanation at your exact level of knowledge. But sometimes Internet Magic Lightning strikes and that's what you do get! And OP set themselves up to be struck by magic lightning, because you can't get a detailed but short explanation at your exact level of knowledge if you don't provide a detailed but short explanation of your exact level of knowledge — and this person did just that. They understand finite fields of prime order, but not how to construct the extension fields. No problem, I can explain that! I had special fun writing this answer because I just love constructing extensions of finite fields. (Previously: [1] [2]) For any given !!n!!, there is at most one field with !!n!! elements: only one, if !!n!! is a power of a prime number (!!2, 3, 2^2, 5, 7, 2^3, 3^2, 11, 13, \ldots!!) and none otherwise (!!6, 10, 12, 14\ldots!!). This field with !!n!! elements is written as !!\Bbb F_n!! or as !!GF(n)!!. Suppose we want to construct !!\Bbb F_n!! where !!n=p^k!!. When !!k=1!!, this is easy-peasy: take the !!n!! elements to be the integers !!0, 1, 2\ldots p-1!!, and the addition and multiplication are done modulo !!n!!. When !!k>1!! it is more interesting. One possible construction goes like this:
Now we will see an example: we will construct !!\Bbb F_{2^2}!!. Here !!k=2!! and !!p=2!!. The elements will be polynomials of degree at most 1, with coefficients in !!\Bbb F_2!!. There are four elements: !!0x+0, 0x+1, 1x+0, !! and !!1x+1!!. As usual we will write these as !!0, 1, x, x+1!!. This will not be misleading. Addition is straightforward: combine like terms, remembering that !!1+1=0!! because the coefficients are in !!\Bbb F_2!!: $$\begin{array}{c|cccc} + & 0 & 1 & x & x+1 \\ \hline 0 & 0 & 1 & x & x+1 \\ 1 & 1 & 0 & x+1 & x \\ x & x & x+1 & 0 & 1 \\ x+1 & x+1 & x & 1 & 0 \end{array} $$ The multiplication as always is more interesting. We need to find an irreducible polynomial !!P!!. It so happens that !!P=x^2+x+1!! is the only one that works. (If you didn't know this, you could find out easily: a reducible polynomial of degree 2 factors into two linear factors. So the reducible polynomials are !!x^2, x·(x+1) = x^2+x!!, and !!(x+1)^2 = x^2+2x+1 = x^2+1!!. That leaves only !!x^2+x+1!!.) To multiply two polynomials, we multiply them normally, then divide by !!x^2+x+1!! and keep the remainder. For example, what is !!(x+1)(x+1)!!? It's !!x^2+2x+1 = x^2 + 1!!. There is a theorem from elementary algebra (the “division theorem”) that we can find a unique quotient !!Q!! and remainder !!R!!, with the degree of !!R!! less than 2, such that !!PQ+R = x^2+1!!. In this case, !!Q=1, R=x!! works. (You should check this.) Since !!R=x!! this is our answer: !!(x+1)(x+1) = x!!. Let's try !!x·x = x^2!!. We want !!PQ+R = x^2!!, and it happens that !!Q=1, R=x+1!! works. So !!x·x = x+1!!. I strongly recommend that you calculate the multiplication table yourself. But here it is if you want to check: $$\begin{array}{c|cccc} · & 0 & 1 & x & x+1 \\ \hline 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & x & x+1 \\ x & 0 & x & x+1 & 1 \\ x+1 & 0 & x+1 & 1 & x \end{array} $$ To calculate the unique field !!\Bbb F_{2^3}!! 
of order 8, you let the elements be the 8 polynomials of degree at most 2: !!0, 1, x, \ldots, x^2+x, x^2+x+1!!, and instead of reducing by !!x^2+x+1!!, you reduce by !!x^3+x+1!!. (Not by !!x^3+x^2+x+1!!, because that factors as !!(x^2+1)(x+1)!!.) To calculate the unique field !!\Bbb F_{3^3}!! of order 27, you start with the 27 polynomials of degree at most 2 with coefficients in !!\{0,1,2\}!!, and you reduce by !!x^3+2x+1!! (I think). The special notation !!\Bbb F_p[x]!! means the ring of all polynomials with coefficients from !!\Bbb F_p!!. !!\langle P \rangle!! means the ring of all multiples of polynomial !!P!!. (A ring is a set with an addition, subtraction, and multiplication defined.) When we write !!\Bbb F_p[x] / \langle P\rangle!! we are constructing a thing called a “quotient” structure. This is a generalization of the process that turns the ordinary integers !!\Bbb Z!! into the modular-arithmetic integers we have been calling !!\Bbb F_p!!. To construct !!\Bbb F_p!!, we start with !!\Bbb Z!! and then agree that two elements of !!\Bbb Z!! will be considered equivalent if they differ by a multiple of !!p!!. To get !!\Bbb F_p[x] / \langle P \rangle!! we start with !!\Bbb F_p[x]!!, and then agree that elements of !!\Bbb F_p[x]!! will be considered equivalent if they differ by a multiple of !!P!!. The division theorem guarantees that of all the equivalent polynomials in a class, exactly one of them will have degree less than that of !!P!!, and that is the one we choose as a representative of its class and write into the multiplication table. This is what we are doing when we “divide by !!P!! and keep the remainder”. A particularly important example of this construction is !!\Bbb R[x] / \langle x^2 + 1\rangle!!. That is, we take the set of polynomials with real coefficients, but we consider two polynomials equivalent if they differ by a multiple of !!x^2 + 1!!. By the division theorem, each polynomial is then equivalent to some first-degree polynomial !!ax+b!!.
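(By the way, the divide-by-!!P!!-and-keep-the-remainder arithmetic is easy to mechanize if you want to experiment. Here is a quick Python sketch — my code, not part of the original answer — that reproduces the products computed above:)

```python
def polymulmod(a, b, P, p):
    """Multiply polynomials a and b (lists of coefficients in F_p,
    constant term first) and reduce modulo the monic polynomial P."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # Repeatedly subtract multiples of P until the degree drops below deg P.
    while len(prod) >= len(P):
        lead, shift = prod[-1], len(prod) - len(P)
        for k, c in enumerate(P):
            prod[shift + k] = (prod[shift + k] - lead * c) % p
        while len(prod) > 1 and prod[-1] == 0:
            prod.pop()
    return prod

P4 = [1, 1, 1]     # x^2 + x + 1, the irreducible polynomial for F_4
P8 = [1, 1, 0, 1]  # x^3 + x + 1, for F_8

assert polymulmod([1, 1], [1, 1], P4, 2) == [0, 1]     # (x+1)(x+1) = x
assert polymulmod([0, 1], [0, 1], P4, 2) == [1, 1]     # x·x = x+1
assert polymulmod([0, 0, 1], [0, 1], P8, 2) == [1, 1]  # x^2·x = x+1 in F_8
```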
Let's multiply $$(ax+b)(cx+d).$$ As usual we obtain $$acx^2 + (ad+bc)x + bd.$$ From this we can subtract !!ac(x^2 + 1)!! to obtain the equivalent first-degree polynomial $$(ad+bc) x + (bd-ac).$$ Now recall that in the complex numbers, !!(b+ai)(d + ci) = (bd-ac) + (ad+bc)i!!. We have just constructed the complex numbers, with the polynomial !!x!! playing the role of !!i!!. [ Note to self: maybe write a separate article about what makes this a good answer, and how it is structured. ] [Other articles in category /math/se] permanent link Fri, 31 Jul 2020
What does it mean to expand a function “in powers of x-1”?
The poster of a recent Math Stack Exchange question was asked to expand the function !!e^{2x}!! in powers of !!(x-1)!! and was confused about what that meant, and what the point of it was. I wrote an answer I liked, which I am reproducing here. You asked:
which is a fair question. I didn't understand this either when I first learned it. But it's important for practical engineering reasons as well as for theoretical mathematical ones. Before we go on, let's see that your proposal is the wrong answer to this question, because it is the correct answer, but to a different question. You suggested: $$e^{2x}\approx1+2\left(x-1\right)+2\left(x-1\right)^2+\frac{4}{3}\left(x-1\right)^3$$ Taking !!x=1!! we get !!e^2 \approx 1!!, which is just wrong, since actually !!e^2\approx 7.39!!. As a comment pointed out, the series you have above is for !!e^{2(x-1)}!!. But we wanted a series that adds up to !!e^{2x}!!. As you know, the Maclaurin series works here: $$e^{2x} \approx 1+2x+2x^2+\frac{4}{3}x^3$$ so why don't we just use it? Let's try !!x=1!!. We get $$e^2\approx 1 + 2 + 2 + \frac43$$ This adds to !!6+\frac13!!, but the correct answer is actually around !!7.39!! as we saw before. That is not a very accurate approximation. Maybe we need more terms? Let's try ten: $$e^{2x} \approx 1+2x+2x^2+\frac{4}{3}x^3 + \ldots + \frac{8}{2835}x^9$$ If we do this we get !!7.3887!!, which isn't too far off. But it was a lot of work! And we find that as !!x!! gets farther away from zero, the series above gets less and less accurate. For example, take !!x=3.1!!, the formula with four terms gives us !!66.14!!, which is dead wrong. Even if we use ten terms, we get !!444.3!!, which is still way off. The right answer is actually !!492.7!!. What do we do about this? Just add more terms? That could be a lot of work and it might not get us where we need to go. (Some Maclaurin series just stop working at all too far from zero, and no amount of terms will make them work.) Instead we use a different technique. Expanding the Taylor series “around !!x=a!!” gets us a different series, one that works best when !!x!! is close to !!a!! instead of when !!x!! is close to zero. 
Your homework is to expand it around !!x=1!!, and I don't want to give away the answer, so I'll do a different example. We'll expand !!e^{2x}!! around !!x=3!!. The general formula is $$e^{2x} \approx \sum \frac{f^{(i)}(3)}{i!} (x-3)^i\tag{$\star$}\ \qquad \text{(when $x$ is close to $3$)}$$ The !!f^{(i)}(x)!! is the !!i!!'th derivative of !!e^{2x}!!, which is !!2^ie^{2x}!!, so the first few terms of the series above are: $$\begin{eqnarray} e^{2x} & \approx& e^6 + \frac{2e^6}1 (x-3) + \frac{4e^6}{2}(x-3)^2 + \frac{8e^6}{6}(x-3)^3\\ & = & e^6\left(1+ 2(x-3) + 2(x-3)^2 + \frac43(x-3)^3\right)\\ & & \qquad \text{(when $x$ is close to $3$)} \end{eqnarray} $$ The first thing to notice here is that when !!x!! is exactly !!3!!, this series is perfectly correct; we get !!e^6 = e^6!! exactly, even when we add up only the first term, and ignore the rest. That's a kind of useless answer because we already knew that !!e^6 = e^6!!. But that's not what this series is for. The whole point of this series is to tell us how different !!e^{2x}!! is from !!e^6!! when !!x!! is close to, but not equal to !!3!!. Let's see what it does at !!x=3.1!!. With only four terms we get $$\begin{eqnarray} e^{6.2} & \approx& e^6(1 + 2(0.1) + 2(0.1)^2 + \frac43(0.1)^3)\\ & = & e^6 \cdot 1.22133 \\ & \approx & 492.72 \end{eqnarray}$$ which is very close to the correct answer, which is !!492.7!!. And that's with only four terms. Even if we didn't know an exact value for !!e^6!!, we could find out that !!e^{6.2}!! is about !!22.13\%!! larger, with hardly any calculation. Why did this work so well? If you look at the expression !!(\star)!! you can see: The terms of the series all have factors of the form !!(x-3)^i!!. When !!x=3.1!!, these are !!(0.1)^i!!, which becomes very small very quickly as !!i!! increases. Because the later terms of the series are very small, they don't affect the final sum, and if we leave them out, we won't mess up the answer too much.
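(If you'd like to see this numerically, here's a little Python sketch — mine, not part of the original answer — that sums the first few terms of the series around any center !!a!!:)

```python
import math

def taylor_e2x(x, a, terms):
    """Partial sum of the Taylor series of e^(2x) around x = a.
    The i-th derivative of e^(2x) is 2^i e^(2x), so the i-th term
    is (2^i e^(2a) / i!) (x - a)^i."""
    return sum(2**i * math.exp(2 * a) / math.factorial(i) * (x - a)**i
               for i in range(terms))

print(math.exp(6.2))          # the true value of e^(2·3.1), about 492.75
print(taylor_e2x(3.1, 3, 4))  # four terms around 3: about 492.72, very close
print(taylor_e2x(3.1, 0, 4))  # four terms around 0: about 66.14, way off
```

Four terms centered at 3 already agree with the true value to within a few hundredths; four terms centered at 0 are wildly wrong.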
So the series works well, producing accurate results from only a few terms, when !!x!! is close to !!3!!. But in the Maclaurin series, which is around !!x=0!!, those !!(x-3)^i!! terms are !!x^i!! terms instead, and when !!x=3.1!!, they are not small, they're very large! They get bigger as !!i!! increases, and very quickly. (The !! i! !! in the denominator wins, eventually, but that doesn't happen for many terms.) If we leave out these many large terms, we get the wrong results. 
The short answer to your question is:
[Other articles in category /math/se] permanent link Wed, 29 Jul 2020
Toph left the cap off one of her fancy art markers and it dried out, so I went to get her a replacement. The marker costs $5.85, plus tax, and the web site wanted a $5.95 shipping fee. Disgusted, I resolved to take my business elsewhere. On Wednesday I drove over to a local art-supply store to get the marker. After taxes the marker was somehow around $8.50, but I also had to pay $1.90 for parking. So if there was a win there, it was a very small one. But also, I messed up the parking payment app, which has maybe the worst UI of any phone app I've ever used. The result was a $36 parking ticket. Lesson learned. I hope. [Other articles in category /oops] permanent link Today I learned:
Here is a video of Dan Aykroyd discussing the name, with Sherman. [Other articles in category /bio] permanent link Wed, 15 Jul 2020
More trivia about megafauna and poisonous plants
A couple of people expressed disappointment with yesterday's article, which asked “were giant ground sloths immune to poison ivy?” but then failed to deliver on the implied promise. I hope today's article will make up for that.
Elephants
I said:
David Formosa points out what should have been obvious: elephants are megafauna, elephants live where mangoes grow (both in Africa and in India), elephants love eating mangoes [1] [2] [3], and, not obvious at all… Elephants are immune to poison ivy!
It's sad that we no longer have megatherium. But we do have elephants, which is pretty awesome.
Idiot fruit
The idiot fruit is just another one of those legendarily awful creatures that seem to infest every corner of Australia (see also: box jellyfish, stonefish, gympie gympie, etc.); Wikipedia says:
At present the seeds are mostly dispersed by gravity. The plant is believed to be an evolutionary anachronism. What Pleistocene megafauna formerly dispersed the poisonous seeds of the idiot fruit? A wombat. A six-foot-tall wombat. I am speechless with delight. [Other articles in category /bio] permanent link Tue, 14 Jul 2020
Were giant ground sloths immune to poison ivy?
The skin of the mango fruit contains urushiol, the same irritating chemical that is found in poison ivy. But why? From the mango's point of view, the whole point of the mango fruit is to get someone to come along and eat it, so that they will leave the seed somewhere else. Poisoning the skin seems counterproductive. An analogous case is the chili pepper, which contains an irritating chemical, capsaicin. I think the answer here is believed to be that while capsaicin irritates mammals, birds are unaffected. The chili's intended target is birds; you can tell from the small seeds, which are the right size to be pooped out by birds. So chilis have a chemical that encourages mammals to leave the fruit in place for birds. What's the intended target for the mango fruit? Who's going to poop out a seed the size of a mango pit? You'd need a very large animal, large enough to swallow a whole mango. There aren't many of these now, but that's because they became extinct at the end of the Pleistocene epoch: woolly mammoths and rhinoceroses, huge crocodiles, giant ground sloths, and so on. We may have eaten the animals themselves, but we seem to have quite a lot of fruits around that evolved to have their seeds dispersed by Pleistocene megafauna that are now extinct. So my first thought was, maybe the mango is expecting to be gobbled up by a giant ground sloth, and have its giant seed pooped out elsewhere. And perhaps its urushiol-laden skin makes it unpalatable to smaller animals that might not disperse the seeds as widely, but the giant ground sloth is immune. (Similarly, I'm told that goats are immune to urushiol, and devour poison ivy as they do everything else.) Well, maybe this theory is partly correct, but even if so, the animal definitely wasn't a giant ground sloth, because those lived only in South America, whereas the mango is native to South Asia. Ground sloths and avocados, yes; mangoes, no.
Still, the theory seems reasonable, except that mangoes are a tropical fruit and I haven't been able to find any examples of Pleistocene megafauna that lived in the tropics. Admittedly, I didn't look very hard. Wikipedia has an article on evolutionary anachronisms that lists a great many plants, but not the mango. [ Addendum: I've eaten many mangoes but never noticed any irritation from the peel. I speculate that cultivated mangoes are varieties that have been bred to contain little or no urushiol, or that there is a post-harvest process that removes or inactivates the urushiol, or both. ] [ Addendum 20200715: I know this article was a little disappointing and that it does not resolve the question in the title. Sorry. But I wrote a followup that you might enjoy anyway. ] [Other articles in category /bio] permanent link Wed, 08 Jul 2020
Ron Graham has died. He had a good run. When I check out I will probably not be as accomplished or as missed as Graham, even if I make it to 84. I met Graham once and he was very nice to me, as he apparently was to everyone. I was planning to write up a reminiscence of the time, but I find I've already done it so you can read that if you care. Graham's little book Rudiments of Ramsey Theory made a big impression on me when I was an undergraduate. Chapter 1, if I remember correctly, is a large collection of examples, which suited me fine. Chapter 2 begins by introducing a certain notation of Erdős and Rado: !!\left[{\Bbb N\atop k}\right]!! is the family of subsets of !!\Bbb N!! of size !!k!!, and $$\left[{\Bbb N\atop k}\right] \to \left[{\Bbb N\atop k}\right]_r$$ is an abbreviation of the statement that for any !!r!!-coloring of members of !!\left[{\Bbb N\atop k}\right]!! there is always an infinite subset !!S\subset \Bbb N!! for which every member of !!\left[{S\atop k}\right]!! is the same color. I still do not find this notation perspicuous, and at the time, with much less experience, I was boggled.
In the midst of my bogglement I was hit with the next sentence, which completely derailed me: After this I could no longer think about the mathematics, but only about the sentence. Outside the mathematical community Graham is probably best-known for juggling, or for Graham's number, which Wikipedia describes:
One of my better Math Stack Exchange posts was in answer to the question Graham's Number : Why so big?. I love the phrasing of this question! And that, even with the strange phrasing, there is an answer! This type of huge number is quite typical in proofs of Ramsey theory, and I answered in detail. The sense of humor that led Graham to write “danger of no confusion” is very much on display in the paper that gave us Graham's number. If you are wondering about Graham's number, check out my post. [Other articles in category /math] permanent link
Addendum to “Weirdos during the Depression”
[ Previously ] Ran Prieur had a take on this that I thought was insightful:
[Other articles in category /addenda] permanent link Mon, 06 Jul 2020
Weird constants in math problems
Michael Lugo recently considered a problem involving the allocation of swimmers to swim lanes at random, ending with:
I love when stuff like this happens. The computer is great at doing a quick random simulation and getting you some weird number, and you have no idea what it really means. But mathematical technique can unmask the weird number and learn its true identity. (“It was Old Man Haskins all along!”) A couple of years back Math Stack Exchange had Expected Number and Size of Contiguously Filled Bins, and although it wasn't exactly what was asked, I ended up looking into this question: We take !!n!! balls and throw them at random into !!n!! bins that are lined up in a row. A maximal contiguous sequence of all-empty or all-nonempty bins is called a “cluster”. For example, here we have 13 balls that I placed randomly into 13 bins: In this example, there are 8 clusters, of sizes 1, 1, 1, 1, 4, 1, 3, 1. Is this typical? What's the expected cluster size? It's easy to use Monte Carlo methods and find that when !!n!! is large, the average cluster size is approximately !!2.15013!!. Do you recognize this number? I didn't. But it's not hard to do the calculation analytically and discover that the reason it's approximately !!2.15013!! is that the actual answer is $$\frac1{2(e^{-1} - e^{-2})}$$ which is approximately !!2.15013!!. Math is awesome and wonderful. (Incidentally, I tried the Inverse Symbolic Calculator just now, but it was no help. It's also not in Plouffe's Miscellaneous Mathematical Constants.) [ Addendum 20200707: WolframAlpha does correctly identify the !!2.15013!! constant. ] [Other articles in category /math] permanent link
Useful and informative article about privately funded border wall
The Philadelphia Inquirer's daily email newsletter referred me to this excellent article, by Jeremy Schwartz and Perla Trevizo. “Wow!” I said. “This is way better than the Inquirer's usual reporting. I wonder what that means?” Turns out it meant that the Inquirer was not responsible for the article. But thanks for the pointer, Inquirer folks! The article is full of legal, political, and engineering details about why it's harder to build a border wall than I would have expected. I learned a lot! I had known about the issue that most of the land is privately owned. But I hadn't considered that there are international water-use treaties that come into play if the wall is built too close to the Rio Grande, or that the wall would be on the river's floodplain. (Or that the Rio Grande even had a floodplain.) He built a privately funded border wall. It's already at risk of falling down if not fixed, courtesy of The Texas Tribune and ProPublica. [Other articles in category /tech] permanent link Wed, 24 Jun 2020
Geeking out over arbitrary boundaries
Reddit today had this delightful map, drawn by Peter Klumpenhower, of “the largest city in each 10-by-10 degree area of latitude-longitude in the world”: Almost every square is a kind of puzzle! Perhaps it is surprising that Philadelphia is there? Clearly New York dominates its square, but Philadelphia is just barely across the border in the next square south: the 40th parallel runs right through North Philadelphia. (See map at right.) Philadelphia City Hall (the black dot on the map) is at 39.9524 north latitude. This reminds me of the time I was visiting Tom Christiansen in Boulder, Colorado. We were driving on Baseline Road and he remarked that it was so named because it runs exactly along the 40th parallel. Then he said “that's rather farther south than where you live”. And I said no, the 40th parallel also runs through Philadelphia! Other noteworthy cities at this latitude include Madrid, Ankara, Yerevan, and Beijing. Anyway speaking of Boulder, the appearance of Fort Collins was the first puzzle I noticed. If you look at the U.S. cities that appear on the map, you see most of the Usual Suspects: New York, Philadelphia, Chicago, Los Angeles, Seattle, Dallas, Houston. And then you have Fort Collins. “Fort Collins?” I said. “Why not Denver? Or even Boulder?” Boulder, it turns out, is smaller than Fort Collins. (I did not know this.) And Denver, being on the other side of Baseline Road, doesn't compete with Fort Collins. Everything south of Baseline Road, including Denver, is shut out by Ciudad Juárez, México (population 1.5 million). There is a Chinese version of this. The Chinese cities on the map include the big Chinese cities: Shanghai, Beijing, Chongqing, Shenzhen, Chengdu. Shenyang. A couple of cities in Inner Mongolia, analogous to the appearance of Boise on the U.S. map. And… Wenchang. What the heck is Wenchang? It's the county seat of Wenchang County, in Hainan, not even as important as Fort Collins. China has 352 cities with populations over 125,000. 
Wenchang isn't one of them. According to the list I found, Wenchang is the 379th-largest city in China. (Fort Collins, by the way, is 159th-largest in the United States. For the 379th, think of Sugar Land, Texas or Cicero, Illinois.) Since we're in China, please notice how close Beijing is to the 40th parallel. Ten kilometers farther north and it would have displaced Boise — sorry, I meant Hohhot — and ceded its box to Tianjin (pop. 15.6 million). Similarly (but in reverse), had Philadelphia been a bit farther north, it would have disappeared into New York's box, and yielded its own box to Baltimore or Washington or some other hamlet.
But what the heck happened to Nairobi? (Nairobi is the ninth-largest city in Africa. Dar Es Salaam is the sixth-largest and is in the same box.) What the heck happened to St. Petersburg? (at 59.938N, 30.309E, it is just barely inside the same box as Moscow. The map is quite distorted in this region.) What the heck happened to Tashkent? (It's right where it should be. I just missed it somehow.)
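The boxing rule itself is mechanical enough to sketch in a few lines of code. Here the city list is a tiny illustrative sample with rough populations of my own choosing, not the data behind the real map:

```python
import math

# A few sample cities: (name, latitude, longitude, approximate population).
# These numbers are rough illustrative values, not the map's actual data.
cities = [
    ("New York",      40.71,  -74.01, 8_400_000),
    ("Philadelphia",  39.95,  -75.17, 1_580_000),
    ("Fort Collins",  40.56, -105.08,   170_000),
    ("Denver",        39.74, -104.99,   715_000),
    ("Ciudad Juárez", 31.69, -106.42, 1_500_000),
]

def box(lat, lon):
    """Southwest corner of the 10-by-10 degree box containing the point."""
    return (math.floor(lat / 10) * 10, math.floor(lon / 10) * 10)

# For each box, keep only the most populous city.
largest = {}
for name, lat, lon, pop in cities:
    b = box(lat, lon)
    if b not in largest or pop > largest[b][1]:
        largest[b] = (name, pop)

print(largest[(30, -110)])  # Juárez beats Denver in the box south of Baseline Road
print(largest[(40, -110)])  # Fort Collins has its box to itself
```

This reproduces the Fort Collins situation described above: Denver, at 39.74°N, lands in the same box as Ciudad Juárez and loses, while Fort Collins, just north of the 40th parallel, wins its box by default.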
[ Addendum: Reddit discussion has pointed out that Clifden (pop. 1,597) , in western Ireland, is not the largest settlement in its box. There are two slivers of Ireland in that box, and Dingle, four hours away in County Kerry, has a population of 2,050. The Reddit discussion has a few other corrections. The most important is probably that Caracas should beat out Santo Domingo. M. Klumpenhower says that they will send me a revised version of the map. ] [ Thanks to Hacker News user oefrha for pointing out that Hohhot and Baotou are in China, not Mongolia as I originally said. ] [ Addendum 20200627: M. Klumpenhower has sent me a revised map, which now appears in place of the old one. It corrects the errors mentioned above. Here's a graphic that shows the differences. But Walt Mankowski pointed out another possible error: The box with Kochi (southern India) should probably be owned by Colombo. ] [Other articles in category /geo] permanent link Thu, 11 Jun 2020
Malicious trojan horse code hidden in large patches
This article isn't going to be fun to write but I'm going to push through it because I think it's genuinely important. How often have you heard me say that? A couple of weeks ago the Insurrection Act of 1807 was in the news. I noticed that the Wikipedia article about it contained this very strange-seeming claim:
“What the heck is a ‘secret amendment’?” I asked myself. “Secret from whom? Sounds like Wikipedia crackpottery.” But there was a citation, so I could look to see what it said. The citation is Hoffmeister, Thaddeus (2010). "An Insurrection Act for the Twenty-First Century". Stetson Law Review. 39: 898. Sometimes Wikipedia claims will be accompanied by an authoritative-seeming citation — often lacking a page number, as this one did at the time — that doesn't actually support the claim. So I checked. But Hoffmeister did indeed make that disturbing claim:
I had sometimes wondered if large, complex acts such as HIPAA or the omnibus budget acts sometimes contained provisions that were smuggled into law without anyone noticing. I hoped that someone somewhere was paying attention, so that it couldn't happen. But apparently the answer is that it does. [Other articles in category /law] permanent link Wed, 10 Jun 2020
Middle English fonts and orthography
In case you're interested, here's what the Caxton “eggys” anecdote looked like originally:
[Facsimile of the passage in Caxton's black-letter type; the text begins “In my dayes happened that…”]
It takes a while to get used to the dense black-letter font, and I think it will help to know the following:
[Other articles in category /IT/typo] permanent link Tue, 09 Jun 2020
The two-bit huckster in medieval Italy
The eighth story on the seventh day of the Decameron concerns a Monna Sismonda, a young gentlewoman who is married to a merchant. She contrives to cheat on him, and then when her husband Arriguccio catches her, she manages to deflect the blame through a cunning series of lies. Arriguccio summons Sismonda's mother and brothers to witness her misbehavior, but when Sismonda seems to refute his claims, they heap abuse on him. Sismonda's mother rants about merchants with noble pretensions who marry above their station. My English translation (by G.H. McWilliam, 1972) included this striking phrase:
“Tuppeny-ha’penny” seemed rather odd in the context of medieval Florentines. It put me in mind of Douglas Hofstadter's complaint about an English translation of Crime and Punishment that rendered “S[toliarny] Pereulok” as “Carpenter’s Lane”:
Intrigued by McWilliam's choice, I went to look at the other translation I had handy, John Payne's of 1886, as adapted by Cormac Ó Cuilleanáin in 2004:
This seemed even more jarring, because Payne was English and Ó Cuilleanáin is Irish, but “two-bit” is 100% American. I wondered what the original had said. Brown University has the Italian text online, so I didn't even have to go into the house to find out the answer:
In the coinage of the time, the denier or denarius was the penny, equal in value (at least notionally) to !!\frac1{240}!! of a pound (lira) of silver. It is the reason that pre-decimal British currency wrote fourpence as “4d.”. I think ‘-uolo’ is a diminutive suffix, so that Sismonda's mother is calling Arriguccio a fourpenny merchantling. McWilliam’s and Ó Cuilleanáin’s translations are looking pretty good! I judged them too hastily. While writing this up I was bothered by something else. I decided it was impossible that John Payne, in England in 1886, had ever written the words “two-bit huckster”. So I hunted up the original Payne translation from which Ó Cuilleanáin had adapted his version. I was only half right:
“Four-farthing” is a quite literal translation of the original Italian, a farthing being an old-style English coin worth one-fourth of a penny. I was surprised to see “huckster”, which I would have guessed was 19th-century American slang. But my guess was completely wrong: “Huckster” is Middle English, going back at least to the 14th century. In the Payne edition, there's a footnote attached to “four-farthing” that explains:
which is what McWilliam had. I don't know if the footnote is Payne's or belongs to the 1925 editor. The Internet Archive's copy of the Payne translation was published in 1925, with naughty illustrations by Clara Tice. Wikipedia says “According to herself and the New York Times, in 1908 Tice was the first woman in Greenwich Village to bob her hair.” [ Addendum 20210331: It took me until now to realize that -uolo is probably akin to the -ole suffix one finds in French words like casserole and profiterole, and derived from the Latin diminutive suffix -ulus that one finds in calculus and annulus. ] [Other articles in category /lang] permanent link Mon, 08 Jun 2020
More about Middle English and related issues
Quite a few people wrote me delightful letters about my recent article about how to read Middle English.
Corrections
Regarding Old English / Anglo-Saxon
Regarding Dutch
Regarding German
Final note
The previous article about weirdos during the Depression hit #1 on Hacker News and was viewed 60,000 times. But I consider the Middle English article much more successful, because I very much prefer receiving interesting and thoughtful messages from six Gentle Readers to any amount of attention from Hacker News. Thanks to everyone who wrote, and also to everyone who read without writing. [Other articles in category /lang] permanent link Fri, 05 Jun 2020
You can learn to read Middle English
In a recent article I quoted this bit of Middle English:
and I said:
Yup! If you can read English, you can learn to read Middle English. It looks like a foreign language, but it's not. Not entirely foreign, anyway. There are tricks you can pick up. The tricks get you maybe 90 or 95% of the way there, at least for later texts, say after 1350 or so. Disclaimer: I have never studied Middle English. This is just stuff I've picked up on my own. Any factual claims in this article might be 100% wrong. Nevertheless I have pretty good success reading Middle English, and this is how I do it.
Some quick historical notes
It helps to understand why Middle English is the way it is. English started out as German. Old English, also called Anglo-Saxon, really is a foreign language, and requires serious study. I don't think an anglophone can learn to read it with mere tricks. Over the centuries Old English diverged from German. In 1066 the Normans invaded England and the English language got a thick layer of French applied on top. Middle English is that mashup of English and French. It's still German underneath, but a lot of the spelling and vocabulary is Frenchified. This is good, because a lot of that Frenchification is still in Modern English, so it will be familiar. For a long time each little bit of England had its own little dialect. The printing press was introduced in the late 15th century, and at that point, because most books were published in or around London, the Midlands dialect used there became the standard, and the other dialects started to disappear. [ Addendum 20200606: The part about Midlands dialect is right. The part about London is wrong. London is not in the Midlands. ] With the introduction of printing, the spelling, which had been fluid and do-as-you-please, became frozen. Unfortunately, during the 15th century, the Midlands dialect had been undergoing a change in pronunciation now called the Great Vowel Shift and many words froze with spelling and pronunciations that didn't match.
This is why English vowel spellings are such a mess. For example, why are “meat” and “meet” spelled differently but pronounced the same? Why are “read” (present tense) and “read” (past tense) pronounced differently but spelled the same? In Old English, it made more sense. Modern English is a snapshot of the moment in the middle of a move when half your stuff is sitting in boxes on the sidewalk. By the end of the 17th century things had settled down to the spelling mess that is Modern English.
The letters are a little funny
Depending on when it was written and by whom, you might see some of these obsolete letters:
Some familiar letters behave a little differently:
The quotation I discussed in the earlier article looks like this:
Daunting, right? But it's not as bad as it looks. Let's get rid of the yoghs and thorns:
The spelling is a little funny
Here's the big secret of reading Middle English: it sounds better than it looks. If you're not sure what a word is, try reading it aloud. For example, what's “alle men”? Oh, it's just “all men”, that was easy. What's “youre dettes”? It turns out it's “your debts”. That's not much of a disguise! It would be a stretch to call this “translation”.
“Yelde” and “trybut” are a little trickier. As languages change, vowels nearly always change faster than consonants. Vowels in Middle English can be rather different from their modern counterparts; consonants less so. So if you can't figure out a word, try mashing on the vowels a little. For example, “much” is usually spelled “moche”. With a little squinting you might be able to turn “trybut” into “tribute”, which is what it is. The first “tribute” is a noun, the second a verb. The construction is analogous to “if you have a drink, drink!” I had to look up “yelde”, but after I had I felt a little silly, because it's “yield”.
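In fact the most mechanical of these tricks could be written down as a little table. This is my own toy illustration, not a serious normalizer; the u/v and i/y swaps, for instance, need judgment that a blind substitution can't supply:

```python
# Toy modernizer for a few Middle English spelling conventions.
# The rule list is my own illustrative sample and is deliberately crude.
RULES = [
    ("ȝ", "gh"),    # yogh usually turned into "gh" (sometimes "y")
    ("þ", "th"),    # thorn is just "th"
    ("sch", "sh"),  # "sch-" is always pronounced "sh-"
]

def modernize(word):
    for old, new in RULES:
        word = word.replace(old, new)
    return word

print(modernize("riȝt"))   # "right"
print(modernize("þe"))     # "the"
print(modernize("schal"))  # "shal", recognizably "shall"
```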
We'll deal with “schuleth” a little later.
The word order is pretty much the same
That's because the basic grammar of English is still mostly the same as German. One thing English now does differently from German is that we no longer put the main verb at the end of the sentence. If a Middle English sentence has a verb hanging at the end, it's probably the main verb. Just interpret it as if you had heard it from Yoda.
The words are a little bit old-fashioned…
But many of them are old-fashioned in a way you might be familiar with. For example, you probably know what “ye” means: it's “you”, like in “hear ye, hear ye!” or “o ye of little faith!”. Verbs in second person singular end in ‘-st’; in third person singular, ‘-th’. So for example:
In particular, the forms of “do” are: I do, thou dost, he doth. Some words that were common in Middle English are just gone. You'll probably need to consult a dictionary at some point. The Oxford English Dictionary is great if you have a subscription. The University of Michigan has a dictionary of Middle English that you can use for free. Here are a couple of common words that come to mind:
Verbs change form to indicate tense
In German (and the proto-language from which German descended), verb tense is indicated by a change in the vowel. Sometimes this persists in modern English. For example, it's why we have “drink, drank, drunk” and “sleep, slept”. In Modern German this is more common than in Modern English, and in Middle English it's also more common than it is now. Past tense usually gets an ‘-ed’ on the end, like in Modern English. The last mystery word here is “schuleth”:
This is the hard word here. The first thing to know is that “sch-” is always pronounced “sh-” as it still is in German, never with a hard sound like “school” or “schedule”. What's “schuleth” then? Maybe something to do with schools? It turns out not. This is a form of “shall, should” but in this context it has its old meaning, now lost, of “owe”. If I hadn't run across this while researching the history of the word “should”, I wouldn't have known what it was, and would have had to look it up. But notice that it does follow a typical Middle English pattern: the consonants ‘sh-’ and ‘-l-’ stayed the same, while the vowels changed. In the modern word “should” we have a version of “schulen” with the past tense indicated by ‘-d’ just like usual. “Schuleth” goes with ‘ye’ so it ought to be ‘schulest’. I don't know what's up with that. [ Addendum 20200608: “ye” is plural, and ‘-st’ only goes on singular verbs. ]
Prose example
Let's try Wycliffe's Bible, which was written around 1380ish. This is Matthew 6:1:
Most of this reads right off:
“Take heed” is a bit archaic but still good English; it means “Be careful”. Reading “riytwisnesse” aloud we can guess that it is actually “righteousness”. (Remember that that ‘y’ started out as a ‘gh’.) “Schulen” we've already seen; here it just means “shall”. I had to look up “meede”, which seems to have disappeared since 1380. It meant “reward”, and that's exactly how the NIV translates it:
That was fun, let's do another:
The same tricks work for most of this. “Whanne” is “when”. We still have the word “almes”, now spelled “alms”: it's the handout you give to beggars. The “sch” in “worschipid” is pronounced like ‘sh’ so it's “worshipped”. “Resseyued” looks hard, but if you remember to try reading the ‘u’ as a ‘v’ and the ‘y’ as an ‘i’, you get “resseived” which is just one letter off of “received”. “Meede” we just learned. So this is:
Now we have the general meaning and some of the other words become clearer. What's “trumpe”? It's “trumpeting”. When you give to the needy, don't you trumpet before you, as the hypocrites do. So even though I don't know what “nyle” is exactly, the context makes it clear that it's something like “do not”. Negative words often began with ‘n’ just as they do now (no, nor, not, never, neither, nothing, etc.). Looking it up, I find that it's more usually spelled “nill”. This word is no longer used; it means the opposite of “will”. (It still appears in the phrase “willy-nilly”, which means “whether you want to or not”.) “Sothely” means “truly”. “Soth” or “sooth” is an archaic word for truth, like in “soothsayer”, a truth-speaker. Here's the NIV translation:
Poetic example
Let's try something a little harder, a random sentence from The Canterbury Tales, written around 1390. Wish me luck!
The main difficulty is that it's poetic language, which might be a bit obscure even in Modern English. But first let's fix the spelling of the obvious parts:
The University of Michigan dictionary can be a bit tricky to use. For example, if you look up “meede” it won't find it; it's listed under “mede”. If you don't find the word you want as a headword, try doing full-text search. Anyway, hoppen is in there. It can mean “hopping”, but in this poetic context it means dancing.
“Pipe” is a verb here; it means (even now) to play the pipes.
You try!
William Caxton is thought to have been the first person to print and sell books in England. This anecdote of his is one of my favorites. He wrote it in the late 1490s, at the very tail end of Middle English:
A “mercer” is a merchant, and “taryed” is now spelled “tarried”; the word is uncommon now and means to stay somewhere temporarily. I think the only other part of this that doesn't succumb to the tricks in this article is the place names:
Caxton is bemoaning the difficulties of translating into “English” in 1490, at a time when English was still a collection of local dialects. He ends the anecdote by asking:
Thanks to Caxton and those that followed him, we can answer: definitely “egges”. [ Addenda 20200608: More about Middle English. ] [ Addendum 20211027: An extended example of “half your stuff is sitting on the sidewalk” ] [ Addendum 20211028: More about “eke” ] [Other articles in category /lang] permanent link Wed, 03 Jun 2020
Lately I've been rereading To Kill a Mockingbird. There's an episode in which the kids meet Mr. Dolphus Raymond, who is drunk all the time, and who lives with his black spouse and their mixed-race kids. The ⸢respectable⸣ white folks won't associate with him. Scout and Jem see him ride into town so drunk he can barely sit on his horse. He is noted for always carrying around a paper bag with a coke bottle filled with moonshine. At one point Mr. Dolphus Raymond offers Dill a drink out of his coke bottle, and Dill is surprised to discover that it actually contains Coke. Mr. Raymond explains he is not actually a drunk, he only pretends to be one so that the ⸢respectable⸣ people will write him off, stay off his back about his black spouse and kids, and leave him alone. If they think it's because he's an alcoholic they can fit it into their worldview and let it go, which they wouldn't do if they suspected the truth, which is that it's his choice. There's a whole chapter in Cannery Row on the same theme! Doc has a beard, and people are always asking him why he has a beard. Doc learned a long time ago that it makes people angry and suspicious if he tells the truth, which is that he has a beard because he likes having a beard. So he's in the habit of explaining that the beard covers up an ugly scar. Then people are okay with it and even sympathetic. (There is no scar.) Doc has a whim to try drinking a beer milkshake, and when he orders one the waitress is suspicious and wary until he explains to her that he has a stomach ulcer, and his doctor has ordered him to drink beer milkshakes daily. Then she is sympathetic.
She says it's a shame about the ulcer, and gets him the milkshake, instead of kicking him out for being a weirdo. Both books are set at the same time. Cannery Row was published in 1945 but is set during the Depression; To Kill a Mockingbird was published in 1960, and its main events take place in 1935. I think it must be a lot easier to be a weird misfit now than it was in 1935. [ Sort of related. ] [ 20200708: Addendum ] [Other articles in category /book] permanent link Sun, 31 May 2020
Reordering git commits (not patches) with interactive rebase
This is the third article in a series. ([1] [2]) You may want to reread the earlier ones, which were in 2015. I'll try to summarize. The original issue considered the implementation of some program feature X. In commit A, the feature had not yet been implemented. In the next commit C it had been implemented, and was enabled. Then there was a third commit, B, that left feature X implemented but disabled it:
but what I wanted was to have the commits in this order:
so that when X first appeared in the history, it was disabled, and then a following commit enabled it. The first article in the series began:
Using interactive rebase here “to reorder B and C” will not work, because interactive rebase reorders and reapplies patches, not commits. My original articles described a way around this, using a plumbing command. Recently, Curtis Dunham wrote to me with a much better idea that uses the interactive rebase UI to accomplish the same thing much more cleanly.

If we had B checked out and we tried an interactive rebase, the instruction sheet would list the two commits as patches to apply in order. As I said before, just switching the order of these two instructions will not work. M. Dunham's suggestion is to use a snapshot instruction instead: that is, “take a snapshot of that commit”. It's simple to implement: make the index match that commit's tree, then commit the index. There needs to be a bit of cleanup afterward to get the working tree back into sync with the new index.
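Here is a sketch of how such a snapshot command could be implemented. This is my reconstruction of the idea, not M. Dunham's actual code; the name git_snap and the exact cleanup step are my own assumptions:

```shell
# git_snap: commit the exact tree of commit $1 onto the current HEAD.
# Hypothetical sketch; "git_snap" is a made-up name.
git_snap() {
    git read-tree "$1" &&                # make the index match $1's tree
    git commit -m "snapshot of $1" &&    # commit that index onto HEAD
    git checkout-index -f -a             # resync the working tree with the index
}
```

Installed as a command on the path, something like this could then be driven from the interactive-rebase instruction sheet with exec lines, so that each new commit reproduces an exact tree rather than reapplying a patch.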
M. Dunham's actual implementation does the cleanup with a git command I hadn't known about before. Thank you again, M. Dunham! [Other articles in category /prog] permanent link Sat, 30 May 2020
Examples of the former being:
and of the latter:
I have often wished that English were clearer about distinguishing these. For example, someone will say to me
and it is not always clear whether they are advising me how to fix the problem, or lamenting that it won't. A similar issue applies to the phrase “ought to”. As far as I can tell the two phrases are completely interchangeable, and both share the same ambiguities. I want to suggest that everyone start using “should” for the deontic version (“you should go now”) and “ought to” for the predictive version (“you ought to be able to see the lighthouse from here”) and never vice versa, but that's obviously a lost cause. I think the original meaning of both forms is more deontic. Both words originally meant monetary debts or obligations. With ought this is obvious, at least once it's pointed out, because it's so similar in spelling and pronunciation to owed. (Compare pass, passed, past with owe, owed, ought.) For shall, the Big Dictionary has several citations, none later than 1425 CE. One is a Middle English version of Romans 13:7:
As often with Middle English, this is easier than it looks at first. In fact this one is so much easier than it looks that it might become my go-to example. The only strange word is schuleþ itself. Here's my almost word-for-word translation:
The NIV translates it like this:
Anyway, this is a digression. I wanted to talk about different kinds of should. The Big Dictionary distinguishes two types but mixes them together in its listing, saying:
The first of these seems to correspond to what I was calling the deontic form (“people should be kinder”) and the second to the predictive form (“you should be able to catch that train”). But their quotations reveal several other shades of meaning that don't seem exactly like either of these:
This is not any of duty, obligation, propriety, expectation, likelihood, or prediction. But it is exactly M. Hoelz’ “state of the world as it currently isn't”. Similarly:
Again, this isn't (necessarily) duty, obligation, or propriety. It's just a wish contrary to fact. The OED does give several other related shades of “should” which are not always easy to distinguish. For example, its definition 18b says
and gives as an example
Compare “We should be able to see Barbados from here”. Its 18c is “you should have seen that fight!” which again is of the wish-contrary-to-fact type; they even gloss it as “I wish you could have…”. Another distinction I would like to have in English is between “should” used for mere suggestion in contrast with the deontic use. For example
(suggestion) versus
(obligation). Say this distinction was marked in English. Then your mom might say to you in the morning
and later, when you still haven't washed the dishes:
meaning that you'll be in trouble if you fail in your dish washing duties. When my kids were small I started to notice how poorly this sort of thing was communicated by many parents. They would regularly say things like
(the second one in particular) when what they really meant, it seemed to me, was “I want you to be quiet now”. [ I didn't mean to end the article there, but after accidentally publishing it I decided it was as good a place as any to stop. ] [Other articles in category /lang] permanent link [ Previously ] John Gleeson points out that my optical illusion (below left) appears to be a variation on the classic Poggendorff illusion (below right): [Other articles in category /brain] permanent link Fri, 29 May 2020
Infinite zeroes with one on the end
I recently complained about people who claim:
When I read something like this, the first thing that usually comes to mind is the ordinal number !!\omega+1!!, which is a canonical representative of this type of ordering. But I think in the future I'll bring up this much more elementary example: $$ S = \biggl\{-1, -\frac12, -\frac13, -\frac14, \ldots, 0\biggr\} $$ Even a middle school student can understand this, and I doubt they'd be able to argue seriously that it doesn't have an infinite sequence of elements that is followed by yet another final element. Then we could define the supposedly problematic !!0, 0, 0, \ldots, 1!! thingy as a map from !!S!!, just as an ordinary sequence is defined as a map from !!\Bbb N!!. (Explicitly: the map sends every negative element of !!S!! to !!0!!, and sends !!0!! itself to !!1!!.) [ Related: A familiar set with an unexpected order type. ] [Other articles in category /math] permanent link
A couple of years ago I wrote an article about a topological counterexample, which included this illustration: Since then, every time I have looked at this illustration, I have noticed that the blue segment is drawn wrong. It is intended to be a portion of a radius, so if I extend it toward the center of the circle it ought to intersect the center point. But clearly, the slope is too high and the extension passes significantly below the center. Or so it appears. I have checked the original SVG, using Inkscape to extend the blue segment: the extension passes through the center of the circle. I have also checked the rendered version of the illustration, by holding a straightedge up to the screen. The segment is pointing in the correct direction. So I know it's right, but still I'm jarred every time I see it, because to me it looks so clearly wrong. [ Addendum 20200530: John Gleeson has pointed out the similarity to the Poggendorff illusion. ] [Other articles in category /brain] permanent link Thu, 21 May 2020
There have been several reports of the theft of catalytic converters in our neighborhood, the thieves going under the car and cutting the whole thing out.
The catalytic converter contains a few grams of some precious metal, typically platinum, and this can be recycled by a sufficiently crooked junk dealer. Why weren't these being stolen before? I have a theory. The catalytic converter contains only a few grams of platinum, worth around $30. Crawling under a car to cut one out is a lot of trouble and risk to go to for $30. I think the stay-at-home order has put a lot of burglars and housebreakers out of work. People aren't leaving their houses and in particular they aren't going on vacations. So thieves have to steal what they can get. [ Addendum 20200522: An initial glance at the official crime statistics suggests that my theory is wrong. I'll try to make a report over the weekend. ] [Other articles in category /misc] permanent link Thu, 07 May 2020
Yesterday I went through the last few months of web server logs and used them to correct some bad links in my blog articles. Today I checked the logs again and all the "page not found" errors are caused by people attacking my WordPress and PHP installations. So, um, yay, I guess? [Other articles in category /meta] permanent link Tue, 05 May 2020
Article explodes at the last moment
I wrote a really great blog post over the last couple of days. Last year I posted about the difference between !!\frac10!! and !!\frac00!! and this was going to be a followup. I had a great example from linear regression, where the answer comes out as !!\frac10!! in situations where the slope of the computed line is infinite (and you can fix it, by going into projective space and doing everything in homogeneous coordinates), and as !!\frac00!! in situations where the line is completely indeterminate, and you can't fix it, but instead you can just pick any slope you want and proceed from there. Except no, it never does come out !!\frac10!!. It always comes out !!\frac00!!, even in the situations I said were !!\frac10!!. I think maybe I can fix it though, I hope, maybe. If so, then I will be able to write a third article. Maybe. It could have been worse. I almost published the thing, and only noticed my huge mistake because I was going to tack on an extra section at the end that it didn't really need. When I ran into difficulties with the extra section, I was tempted to go ahead and publish without it. [Other articles in category /oops] permanent link Fri, 01 May 2020
More about Sir Thomas Urquhart
I don't have much to add at this point, but when I looked into Sir Thomas Urquhart a bit more, I found this amazing article by Denton Fox in London Review of Books. It's a review of a new edition of Urquhart's 1652 book The Jewel (Ekskybalauron), published in 1984. The whole article is worth reading. It begins:
and then oh boy, does it deliver. So much of this article is quotable that I'm not sure what to quote. But let's start with:
Some excerpts will follow. You may enjoy reading the whole thing.
Trissotetras
I spent way more time on this than I expected. Fox says:
Thanks to the Wonders of the Internet, a copy is available, and I have been able to glance at it. Urquhart has invented a microlanguage along the lines of Wilkins’ philosophical language, in which the words are constructed systematically. But the language of Trissotetras has a very limited focus: it is intended only for expressing statements of trigonometry. Urquhart says:
The sentence continues for another 118 words but I think the point is made: the idea is not obviously terrible. Here is an example of Urquhart's trigonometric language in action:
A person skilled in the art might be able to infer the meaning of this axiom from its name:
That is, a side (of a triangle) is proportional to the sine of the opposite angle. This principle is currently known as the law of sines: in modern notation, !!\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}!!. Urquhart's idea of constructing mnemonic nonsense words for basic laws was not a new one. There was a long-established tradition of referring to forms of syllogistic reasoning with constructed mnemonics. For example a syllogism in “Darii” is a deduction of this form: all M are P; some S are M; therefore some S are P.
The ‘A’ in “Darii” is a mnemonic for the “all” clause and the ‘I’s for the “some” clauses. By memorizing a list of 24 names, one could remember which of the 256 possible deductions were valid. Urquhart is following this well-trodden path and borrows some of its terminology. But the way he develops it is rather daunting:
I think that ‘Pubkegdaxesh’ is compounded from the initial syllables of the seven enodandas, with p from upalem, ub from ubamen, k from ekarul, eg from egalem, and so on. I haven't been able to decipher any of these, although I didn't try very hard. There are many difficulties. Sometimes the language is obscure because it's obsolete and sometimes because Urquhart makes up his own words. (What does “enodandas” mean?) Let's just take “Upalem”. Here are Urquhart's glosses:
I believe “a tangent complement” is exactly what we would now call a cotangent; that is, the tangent of the complementary angle. But how these six items relate to one another, I do not know. Here's another difficulty: I'm not sure that ‘al’ is one component or two. It might be one:
Either way I'm not sure what is meant. Wait, there is a helpful diagram, and an explanation of it:
This is unclear but tantalizing. Urquhart is solving a problem of elementary plane trigonometry. Some of the sides and angles are known, and we are to find one of the unknown ones. I think if I read the book from the beginning I might be able to make out better what Urquhart was getting at. Tempting as it is, I am going to abandon it here. Trissotetras is dedicated to Urquhart's mother. In the introduction, he laments that
He must have been an admirer of Alexander Rosse, because the front matter ends with a little poem attributed to Rosse.

Pantochronachanon

Fox again:
This is Pantochronachanon, which Wikipedia says “has been the subject of ridicule since the time of its first publication, though it was likely an elaborate joke”, with no citation given. Fox mentions that Urquhart claims Alcibiades as one of his ancestors. He also claims the Queen of Sheba. According to Pantochronachanon the world was created in 3948 BC (Ussher puts it in 4004), and Sir Thomas belonged to the 153rd generation of mankind.

The Jewel

Denton Fox:
Wowzers. Fox claims that the title Εκσκυβαλαυρον means “from dung, gold” but I do not understand why he says this. λαύρα might be a sewer or privy, and I think the word σκυβα means garden herbs. (Addendum: the explanation.)
Fox says that in spite of the general interest in universal languages, “parts of his prospectus must have seemed absurd even then”, quoting this item:
Urquhart boasts that where other, presumably inferior languages have only five or six cases, his language has ten “besides the nominative”. I think Finnish has fourteen but I am not sure even the Finns would be so sure that more was better. Verbs in Urquhart's language have one of ten tenses, seven moods, and four voices. In addition to singular and plural, his language has dual (like Ancient Greek) and also ‘redual’ numbers. Nouns may have one of eleven genders. It's like a language designed by the Oglaf Dwarves. A later item states:
and another item claims that a word of one syllable is capable of expressing an exact date and time down to the “half quarter of the hour”. Sir Thomas, I believe that Entropia, the goddess of Information Theory, would like a word with you about that.

Wrapping up

One final quote from Fox:
Fox says “Nothing much came of this, either.” I really wish I had made a note of what I had planned to say about Urquhart in 2008. [ Addendum 20200502: Brent Yorgey has explained Εκσκυβαλαυρον for me. Σκύβαλα (‘skubala’) is dung, garbage, or refuse; it appears in the original Greek text of Philippians 3:8:
And while the usual Greek word for gold is χρῡσός (‘chrysos’), the word αύρον (‘auron’, probably akin to Latin aurum) is also gold. The screenshot at right is from the 8th edition of Liddell and Scott. Thank you, M. Yorgey! ] [Other articles in category /book] permanent link Thu, 30 Apr 2020
Geeky boasting about dictionaries
Yesterday Katara and I were talking about words for ‘song’. Where did ‘song’ come from? Obviously from German, because sing, song, sang, sung is maybe the perfect example of ablaut in Germanic languages. (In fact, I looked it up in Wikipedia just now and that's the example they actually used in the lede.) But the German word I'm familiar with is Lied. So what happened there? Do they still have something like Song? I checked the Oxford English Dictionary but it was typically unhelpful. “It just says it's from Old German Sang, meaning ‘song’. To find out what happened, we'd need to look in the Oxford German Dictionary.” Katara considered. “Is that really a thing?” “I think so, except it's written in German, and obviously not published by Oxford.” “What's it called?” I paused and frowned, then said “Deutsches Wörterbuch.” “Did you just happen to know that?” “Well, I might be totally wrong, but yeah.” But I looked. Yeah, it's called Deutsches Wörterbuch:
So, yes, I just happened to know that. Yay me! Deutsches Wörterbuch was begun by Wilhelm and Jakob Grimm (yes, those Brothers Grimm) although the project was much too big to be finished in their lifetimes. Wilhelm did the letter ‘D’. Jakob lived longer, and was able to finish ‘A’, ‘B’, ‘C’, and ‘E’. Wikipedia mentions the detail that he died “while working on the entry for ‘Frucht’ (fruit)”. Wikipedia says “the work … proceeded very slowly”:
(This isn't as ridiculous as it seems; German has a lot of words that begin with ‘ge-’.) The project came to an end in 2016, after 178 years of effort. The revision of the Grimms’ original work on A–F, planned since the 1950s, is complete, and there are no current plans to revise the other letters. [Other articles in category /lang] permanent link Sun, 26 Apr 2020
I ran into this album by Jools Holland: What do you see when you look at this? If you're me, you spend a few minutes wondering why there is a map of Delaware. [Other articles in category /misc] permanent link Fri, 24 Apr 2020
Tiers of answers to half-baked questions
[ This article is itself somewhat half-baked. ] There's this thing that happens on Stack Exchange sometimes. A somewhat-clueless person will show up and ask a half-baked question about something they are thinking about. Their question is groping toward something sensible but won't be all the way there, and then several people will reply saying no, that is not sensible, your idea is silly, without ever admitting that there is anything to the idea at all. I have three examples of this handy, and I'm sure I could find many more.
In a recent blog article I proposed a classification of answers to certain half-baked software questions (“Is it possible to do X?”):
and I said:
These mathematically half-baked questions also deserve better answers. A similar classification of answers to “can we do this” might look like this:
The category theory answer was from tier 4, but should have been from tier 2. People asking about !!0.0000…1!! often receive answers from tier 5, but ought to get answers from tier 4, or even tier 3, if you wanted to get into nonstandard analysis à la Robinson. There is a similar hierarchy for questions of the type “can we model this concept mathematically”, ranging from “yes, all the time” through “nobody has figured that out yet” and “it seems unlikely, because”, to “what would that even mean?”. The topological chirality question was of this type and the answers given were from the “no we can't and we don't” tiers, when they could have been from a much higher tier: “yes, it's more complicated than that but there is an entire subfield devoted to dealing with it.” This is a sort of refinement of the opposition of “yes, and…” versus “no, but…”, with the tiers something like:
When formulating the answer to a question, aiming for the upper tiers usually produces more helpful, more useful, and more interesting results. [ Addendum 20200525: Here's a typical dismissal of the !!0.\bar01!! suggestion: “This is confusing because !!0.\bar01!! seems to indicate a decimal with ‘infinite zeros and then a one at the end.’ Which, of course, is absurd.” ] [Other articles in category /misc] permanent link Wed, 22 Apr 2020
This morning I got spam with this subject:
Now what language is that? The ‘şı’ looks Turkish, but I don't think Turkish has a letter ‘ə’. It took me a little while to find out the answer.
It's Azerbaijani. Azerbaijani has an Arabic script and a Latin script;
this is the Latin script.
Azerbaijani is very similar to Turkish and I suppose they use the ‘ş’
and ‘ı’ for the same things. I speculated that the ‘x’ was analogous to
Turkish ‘ğ’, but it appears not; Azerbaijani also has
‘ğ’ and in former times they used ‘ƣ’ for this.
Bonus trivia: The official Unicode name of ‘ƣ’ is
[ Addendum 20210215: I was pleased to discover today that I have not yet forgotten what Azeri looks like. ] [Other articles in category /lang] permanent link
Dave Turner pointed me to the 1939 Russian-language retelling of The Wizard of Oz, titled The Wizard of the Emerald City. In Russian the original title was Волшебник Изумрудного Города. It's fun to try to figure these things out. Often Russian words are borrowed from English or are at least related to things I know but this one was tricky. I didn't recognize any of the words. But from the word order I'd expect that Волшебник was the wizard. -ого is a possessive ending so maybe Изумрудного is “of emeralds”? But Изумрудного didn't look anything like emeralds… until it did. Изумрудного is pronounced (approximately) “izumrudnogo”. But “emerald” used to have an ‘s’ in it, “esmerald”. (That's where we get the name “Esmeralda”.) So the “izumrud” is not that far off from “esmerald” and there they are! [Other articles in category /lang] permanent link Fri, 17 Apr 2020
In my previous article I claimed
However, this is mistaken. Eric Harley has brought to my attention that the phrase was used as early as 2003 to describe The Texas Chainsaw Massacre. According to this Salt Lake Tribune article:
If Sherwood is affiliated with Oxford Dictionaries, I wonder why this citation hasn't gotten into the Big Dictionary. The Tribune also pointed me to Claire Fallon's 2016 discussion of the phrase. Thank you, M. Harley. [Other articles in category /lang] permanent link Thu, 16 Apr 2020
Today I learned that the oldest known metaphorical use of “dumpster fire” (to mean “a chaotic or disastrously mishandled situation”) is in reference to the movie Shrek the Third. The OED's earliest citation is from
a 2008 Usenet post,
oddly in
I missed the movie, and now that I know it was the original Dumpster Fire, I feel lucky. [ Addendum 20200417: More about this. ] [Other articles in category /lang] permanent link Tue, 07 Apr 2020
Fern motif experts on the Internet
I live near Woodlands Cemetery and by far the largest monument there, a thirty-foot obelisk, belongs to Thomas W. Evans, who is an interesting person. In his life he was a world-famous dentist, whose clients included many crowned heads of Europe. He was born in Philadelphia, and left land to the University of Pennsylvania to found a dental school, which to this day is located at the site of Evans’ former family home at 40th and Spruce Street. A few days ago my family went to visit the cemetery and I insisted on visiting the Evans memorial. The obelisk has this interesting ornament: The thing around the middle is evidently a wreath of pine branches, but what is the thing in the middle? Some sort of leaf, or frond perhaps? Or is it a feather? If Evans had been a writer I would have assumed it was a quill pen, but he was a dentist. Thanks to the Wonders of the Internet, I was able to find out. First I took the question to Reddit's /r/whatisthisthing forum. Reddit didn't have the answer, but Reddit user @hangeryyy had something better: they observed that there was a fad for fern decorations, called pteridomania, in the second half of the 19th century. Maybe the thing was a fern. I was nerdsniped by pteridomania and found out that a book on pteridomania had been written by Dr. Sarah Whittingham, who goes by the encouraging Twitter name of @DrFrond. Dr. Whittingham's opinion is that this is not a fern frond, but a palm frond. The question has been answered to my full and complete satisfaction. My thanks to Dr. Whittingham, @hangeryyy, and the /r/whatisthisthing community. [Other articles in category /art] permanent link Mon, 06 Apr 2020
Anglo-Saxon and Hawai‘ian Wikipedias
Yesterday browsing the list of Wikipedias I learned there is an Anglo-Saxon Wikipedia. This seems really strange to me for several reasons: Who is writing it? And why? And there is a vocabulary problem. Not just because Anglo-Saxon is dead, and one wouldn't expect it to have any words for anything invented in the last 900 years or so. But also, there are very few extant Anglo-Saxon manuscripts, so we don't have a lot of vocabulary, even for things that had been invented before the language died. Helene Hanff said:
I don't read Anglo-Saxon but if you want to investigate, you might look at the Anglo-Saxon article about the Maybach Exelero (a hēahfremmende sportƿægn), Barack Obama, or taekwondo. I am pre-committing to not getting sucked into this, but sportƿægn is evidently intended to mean “sportscar” (the ƿ is an obsolete letter called wynn and is approximately a W, so that ƿægn is “wagon”) and I think that fremmende is “foreign” and hēah is something like "high" or "very". But I'm really not sure. Anyway Wikipedia reports that the Anglo-Saxon Wikipedia has 3,197 articles (although most are very short) and around 30 active users. In contrast, the Hawai‘ian Wikipedia has 3,919 articles and only around 14 active users, and that is a language that people actually speak. [Other articles in category /lang] permanent link
Caricatures of Nazis and the number four in Russian
[ Warning: this article is kinda all over the place. ] I was looking at this awesome poster of D. Moor (Д. Моор), one of Russia's most famous political poster artists: (original source at Artchive.RU) This is interesting for a couple of reasons. First, in Russian, “Himmler”, “Göring”, “Hitler”, and “Goebbels” all begin with the same letter, ‘Г’, which is homologous to ‘G’. (Similarly, Harry Potter in Russian is Га́рри, ‘Garri’.) I also love the pictures, and especially Goebbels. These four men were so ugly, each in his own distinctively loathsome way. The artist has done such a marvelous job of depicting them, highlighting their various hideousness. It's exaggerated, and yet not unfair; these are really good likenesses! It's as if D. Moor had drawn a map of all the ways in which these men were ugly. My all-time favorite depiction of Goebbels is this one, by Boris Yefimov (Бори́с Ефи́мов): For comparison, here's the actual Goebbels: Looking at pictures of Goebbels, I had often thought “That is one ugly guy,” but never been able to put my finger on what specifically was wrong with his face. But since seeing the Yefimov picture, I have never been able to look at a picture of Goebbels without thinking of a rat. D. Moor has also drawn Goebbels as a tiny rat, scurrying around the baseboards of his poster. Anyway, that was not what I had planned to write about. The right-hand side of D. Moor's poster imagines the initial ‘Г’ of the four Nazis’ names as the four bent arms of the swastika. The captions underneath mean “first Г”, “second Г” and so on. [ Addendum: Darrin Edwards explains the meaning here that had escaped me:
Thank you, M. Edwards! ] Looking at the fourth one, четвертое /chetvyertoye/, I had a sudden brainwave. “Aha,” I thought, “I bet this is akin to Greek “tetra”, and the /t/ turned into /ch/ in Russian.” Well, now that I'm writing it down it doesn't seem that exciting. I now remember that all the other Russian number words are clearly derived from PIE just as Greek, Latin, and German are:
In Latin that /t/ turned into a /k/ and we get /quadra/ instead of /tetra/. The Russian Ч /ch/ is more like a /t/ than it is like a /k/. The change from /t/ to /f/ in English and /v/ in German is a bit weird. (The Big Dictionary says it “presents anomalies of which the explanation is still disputed”.) The change from the /p/ of ‘pente’ to the /f/ of ‘five’ is much more typical. (Consider Latin ‘pater’, ‘piscum’, ‘ped’ and the corresponding English ‘father’, ‘fish’, ‘foot’.) This is called Grimm's Law, yeah, after that Grimm. The change from /q/ in quinque to /p/ in pente is also not unusual. (The ancestral form in PIE is believed to have been more like the /q/.) There's a classification of Celtic languages into P-Celtic and Q-Celtic that's similar, exemplified by the change from the Irish patronymic prefix Mac- into the Welsh patronymic map or ap. I could probably write a whole article comparing the numbers from one to ten in these languages. (And Sanskrit. Wouldn't want to leave out Sanskrit.) The line for ‘two’ would be a great place to begin because all those words are basically the same, with only minor and typical variations in the spelling and pronunciation. Maybe someday. [Other articles in category /lang/etym] permanent link Sun, 05 Apr 2020
Screensharing your talk slides is skeuomorphic
Back when the Web was much newer, and people hadn't really figured it out yet, there was an attempt to bring a dictionary to the web. Like a paper dictionary, its text was set in a barely-readable tiny font, and there were page breaks in arbitrary places. That is a skeuomorph: it's an incidental feature of an object that persists even in a new medium where the incidental feature no longer makes sense. Anyway, I was scheduled to give a talk to the local Linux user group last week, and because of current conditions we tried doing it as a videoconference. I thought this went well! We used Jitsi Meet, which I thought worked quite well, and which I recommend. The usual procedure is for the speaker to have some sort of presentation materials, anachronistically called “slides”, which they display one at a time to the audience. In the Victorian age these were glass plates, and the image was projected on a screen with a slide projector. Later developments replaced the glass with celluloid or other transparent plastic, and then with digital projectors. In videoconferences, the slides are presented by displaying them on the speaker's screen, and then sharing the screen image to the audience. This last development is skeuomorphic. When the audience is together in a big room, it might make sense to project the slide images on a shared screen. But when everyone is looking at the talk on their own separate screen anyway, why make them all use the exact same copy? Instead, I published the slides on my website ahead of time, and sent the link to the attendees. They had the option to follow along on the web site, or to download a copy and follow along in their own local copy. This has several advantages:
Some co-workers suggested the drawback that it might be annoying to try to stay synchronized with the speaker. It didn't take me long to get in the habit of saying “Next slide, #18” or whatever as I moved through the talk. If you try this, be sure to put numbers on the slides! (This is a good practice anyway, I have found.) I don't know if my audience found it annoying. The whole idea only works if you can be sure that everyone will have suitable display software for your presentation materials. If you require WalSoft AwesomePresent version 18.3, it will be a problem. But for the past 25 years I have made my presentation materials in HTML, so this wasn't an issue. If you're giving a talk over videoconference, consider trying this technique. [ Addendum: I should write an article about all the many ways in which the HTML has been a good choice. ] [ Addendum 20201102: I implemented a little software system,
[Other articles in category /talk] permanent link Fri, 27 Mar 2020
Last week Pierre-Françoys Brousseau and I invented a nice chess variant that I've never seen before. The main idea is: two pieces can be on the same square. Sometimes when you try to make a drastic change to the rules, what you get fails completely. This one seemed to work okay. We played a game and it was fun. Specifically, our rules say:
Miscellaneous notes

Pierre-Françoys says he wishes that more than two pieces could share a square. I think it could be confusing. (Also, with the chess set we had, more than two did not really fit within the physical confines of the squares.) Similarly, I proposed the castling rule because I thought it would be less confusing. And I did not like the idea that you could castle on the first move of the game. The role of pawns is very different than in standard chess. In this variant, you cannot stop a pawn from advancing by blocking it with another pawn. Usually when you have the chance to capture an enemy piece that is alone on its square you will want to do that, rather than move your own piece into its square to share space. But it is not hard to imagine that in rare circumstances you might want to pick a nonviolent approach, perhaps to avoid a stalemate. Some discussion of similar variants is on Chess Stack Exchange. The name “Pauli Chess” is inspired by the Pauli exclusion principle, which says that no more than two electrons can occupy the same atomic orbital. [Other articles in category /games] permanent link
git log --author=... confused me
Today I was looking for recent commits by co-worker Fred Flooney,
address
but nothing came up. I couldn't remember if
and still nothing came up. “Okay,” I said, “probably I have Fred's address wrong.” Then I did
Then I changed this to
which also prints out the full hash of the matching commits. The first one was 542ab72c92c2692d223bfca4470cf2c0f2339441. Then I had a perplexity. When I did
it told me the author email address was
the address displayed was
The answer is, the repository might have a file in its root named
But my
Also, I learned that
Also, I learned that
Thanks to Cees Hek, Gerald Burns, and Val Kalesnik for helping me get to the bottom of this.
The
[ Addendum: I could also have used
[Other articles in category /prog] permanent link Sun, 16 Feb 2020
Over on the other blog I said “Midichlorians predated The Phantom Menace.” No, the bacterium was named years after the movie was released. Thanks to Eyal Joseph Minsky-Fenick and Shreevatsa R. for (almost simultaneously) pointing out this mistake. [Other articles in category /oops] permanent link Thu, 13 Feb 2020
Gentzen's rules for natural deduction
Here is Gerhard Gentzen's original statement of the rules of Natural Deduction (“ein Kalkül für ‘natürliche’, intuitionistische Herleitungen”): Natural deduction looks pretty much exactly the same as it does today, although the symbols are a little different. But only a little! Gentzen has not yet invented !!\land!! for logical and, and is still using !!\&!!. But he has invented !!\forall!!. The style of the !!\lnot!! symbol is a little different from what we use now, and he has that tent thingy !!⋏!! where we would now use !!\bot!!. I suppose !!⋏!! didn't catch on because it looks too much like !!\land!!. (He similarly used !!⋎!! to mean !!\top!!, but as usual, that doesn't appear in the deduction rules.) We still use Gentzen's system for naming the rules. The notations “UE” and “OB” for example, stand for “und-Einführung” and “oder-Beseitigung”, which mean “and-introduction” and “or-elimination”. Gentzen says (footnote 4, page 178) that he got the !!\lor, \supset, \exists!! signs from Russell, but he didn't want to use Russell's signs !!\cdot, \equiv, \sim, ()!! because they already had other meanings in mathematics. He took the !!\&!! from Hilbert, but Gentzen disliked his other symbols. Gentzen objected especially to the “uncomfortable” overbar that Hilbert used to indicate negation (“[Es] stellt eine Abweichung von der linearen Anordnung der Zeichen dar”). He attributes his symbols for logical equivalence (!!\supset\subset!!) and negation to Heyting, and explains that his new !!\forall!! symbol is analogous to !!\exists!!. I find it remarkable how quickly this caught on. Gentzen also later replaced !!\&!! with !!\land!!. Of the rest, the only one that didn't stick was !!\supset\subset!! in place of !!\equiv!!. But !!\equiv!! is much less important than the others, being merely an abbreviation. Gentzen died at age 35, a casualty of the World War. Source: Gerhard Gentzen, “Untersuchungen über das logische Schließen I”, pp. 176–210 Mathematische Zeitschrift v. 
39, Springer, 1935. The display above appears on page 186. [ Addendum 20200214: Thanks to Andreas Fuchs for correcting my German grammar. ] [Other articles in category /math/logic] permanent link Thu, 06 Feb 2020
Major screwups in mathematics: example 3
[ Previously: “Cases in which some statement S was considered to be proved, and later turned out to be false”. ] In 1905, Henri Lebesgue claimed to have proved that if !!B!! is a subset of !!\Bbb R^2!! with the Borel property, then its projection onto a line (the !!x!!-axis, say) is a Borel subset of the line. This is false. The mistake was apparently noticed some years later by Andrei Souslin. In 1912 Souslin and Luzin defined an analytic set as the projection of a Borel set. All Borel sets are analytic, but, contrary to Lebesgue's claim, the converse is false. These sets are counterexamples to the plausible-seeming conjecture that all measurable sets are Borel. I would like to track down more details about this. This Math Overflow post summarizes Lebesgue's error:
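As I understand it (this gloss is mine, not quoted from the Math Overflow post), the usual account of the mistake is that Lebesgue's argument tacitly assumed that projection commutes with countable intersection, and it does not:

```latex
% Writing \pi for projection onto the x-axis, only one inclusion is automatic:
\pi\left(\bigcap_{n} A_n\right) \subseteq \bigcap_{n} \pi(A_n)
% The reverse inclusion can fail. For example, take A_n = \Bbb R \times [n, \infty):
% then \pi(A_n) = \Bbb R for every n, so the right-hand side is \Bbb R,
% while \bigcap_n A_n = \emptyset, so the left-hand side is \emptyset.
```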
[Other articles in category /math] permanent link Tue, 28 Jan 2020
Today I learned that James Blaine (U.S. Speaker of the House, senator, perennial presidential candidate, and Secretary of State under Presidents Garfield, Arthur, and Harrison; previously) was the namesake of the notorious “Blaine Amendments”. These are still an ongoing legal issue! The Blaine Amendment was a proposed U.S. constitutional amendment rooted in anti-Catholic, anti-immigrant sentiment, at a time when the scary immigrant bogeymen were Irish and Italian Catholics. The amendment would have prevented the U.S. federal government from providing aid to any educational institution with a religious affiliation; the specific intention was to make Catholic parochial schools ineligible for federal education funds. The federal amendment failed, but many states adopted it and still have it in their state constitutions. Here we are 150 years later and this is still an issue! It was the subject of the 2017 Supreme Court case Trinity Lutheran Church of Columbia, Inc. v. Comer. My quick summary is:
It's interesting to me that now that Blaine is someone I recognize, he keeps turning up. He was really important, a major player in national politics for thirty years. But who remembers him now? [Other articles in category /law] permanent link Fri, 17 Jan 2020
As Middle English goes, Pylgremage of the Sowle (unknown author, 1413) is much easier to read than Chaucer:
I initially misread “Enuye” as “ennui”, understanding it as sloth. But when sloth showed up at the end, I realized that it was simpler than I thought, it's just “envy”. [Other articles in category /book] permanent link Thu, 16 Jan 2020
A serious proposal to exploit the loophole in the U.S. Constitution
In 2007 I described an impractical scheme to turn the U.S. into a dictatorship, or to make any other desired change to the Constitution, by having Congress admit a large number of very small states, which could then ratify any constitutional amendments deemed desirable. An anonymous writer (probably a third-year law student) has independently discovered my scheme, and has proposed it as a way to “fix” the problems that they perceive with the current political and electoral structure. The proposal has been published in the Harvard Law Review in an article that does not appear to be an April Fools’ prank. The article points out that admission of new states has sometimes been done as a political hack. It says:
Specifically, the proposal is that the new states should be allocated out of territory currently in the District of Columbia (which will help ensure that they are politically aligned in the way the author prefers), and that a suitable number of new states might be one hundred and twenty-seven. [Other articles in category /law] permanent link Tue, 14 Jan 2020
More about triple border points
[ Previously ] A couple of readers wrote to discuss tripoints, which are places where three states or other regions share a common border point. Doug Orleans told me about the Tri-States Monument near Port Jervis, New York. This marks the approximate location of the Pennsylvania - New Jersey - New York border. (The actual tripoint, as I mentioned, is at the bottom of the river.) I had independently been thinking about taking a drive around the entire border of Pennsylvania, and this is just one more reason to do that. (Also, I would drive through the Delaware Water Gap, which is lovely.) Looking into this I learned about the small town of North East, so-named because it's in the northeast corner of Erie County. It's also the northernmost point in Pennsylvania. (I got onto a tangent about whether it was the northeastmost point in Pennsylvania, and I'm really not sure. It is certainly an extreme northeast point in the sense that you can't travel north, east, or northeast from it without leaving the state. But it would be a very strange choice, since Erie County is at the very western end of the state.) My putative circumnavigation of Pennsylvania would take me as close as possible to Pennsylvania's only international boundary, with Ontario; there are Pennsylvania - Ontario tripoints with New York and with Ohio. Unfortunately, both of them are in Lake Erie. The only really accessible Pennsylvania tripoints are the ones with West Virginia and Maryland (near Morgantown) and with Maryland and Delaware (near Newark). These points do tend to be marked, with surveyors’ markers if nothing else. Loren Spice sent me a picture of themselves standing at the tripoint of Kansas, Missouri, and Oklahoma, not too far from Joplin, Missouri. While looking into this, I discovered the Kentucky Bend, which is an exclave of Kentucky, embedded between Tennessee and Missouri: The “bubble” here is part of Fulton County, Kentucky. North, across the river, is New Madrid County, Missouri.
The land border of the bubble is with Lake County, Tennessee. It appears that what happened here is that the border between Kentucky and Missouri is the river, with Kentucky getting the territory on the left bank, here the south side. And the border between Kentucky and Tennessee is a straight line, following roughly the 36.5 parallel, with Kentucky getting the territory north of the line. The bubble is south of the river but north of the line. So these three states have not one tripoint, but three, all only a few miles apart! Finally, I must mention the Lakes of Wada, which are not real lakes, but rather are three connected subsets of the unit disc which have the property that every point on their boundaries is a tripoint. [Other articles in category /misc] permanent link Thu, 09 Jan 2020
I'm a fan of geographic oddities, and a few years back when I took a road trip to circumnavigate Chesapeake Bay, I planned its official start in New Castle, DE, which is noted for being the center of the only circular state boundary in the U.S.: The red blob is New Castle. Supposedly an early treaty allotted to Delaware all points west of the river that were within twelve miles of the State House in New Castle. I drove to New Castle, made a short visit to the State House, and then began my road trip in earnest. This is a little bit silly, because the border is completely invisible, whether you are up close or twelve miles away, and the State House is just another building, and would be exactly the same even if the border were actually a semicubic parabola with its focus at the second-tallest building in Wilmington. Whatever, I like going places, so I went to New Castle to check it out. Perhaps it was silly, but I enjoyed going out of my way to visit a point of purely geometric significance. The continuing popularity of Four Corners as a tourist destination shows that I'm not the only one.
I don't have any plans to visit Four Corners, because it's far away, kinda in the middle of nowhere, and seems like rather a tourist trap. (Not that I begrudge the Navajo Nation whatever they can get from it.) Four Corners is famously the only point in the U.S. where four state borders coincide. But a couple of weeks ago as I was falling asleep, I had the thought that there are many triple-border points, and it might be fun to visit some. In particular, I live in southeastern Pennsylvania, so the Pennsylvania-New Jersey-Delaware triple point must be somewhere nearby. I sat up and got my phone so I could look at the map, and felt foolish: As you can see, the triple point is in the middle of the Delaware River, as of course it must be; the entire border between Pennsylvania and New Jersey, all the hundreds of miles from its northernmost point (near Port Jervis) to its southernmost (shown above), runs right down the middle of the Delaware. I briefly considered making a trip to get as close as possible, and photographing the point from land. That would not be too inconvenient. Nearby Marcus Hook is served by commuter rail. But Marcus Hook is not very attractive as a destination. Having been to Marcus Hook, it is hard for me to work up much enthusiasm for a return visit. But I may look into this further. I usually like going places and being places, and I like being surprised when I get there, so visiting arbitrarily-chosen places has often worked out well for me. I see that the Pennsylvania-Delaware-Maryland triple border is near White Clay Creek State Park, outside of Newark, DE. That sounds nice, so perhaps I will stop by and take a look, and see if there really is white clay in the creek. Who knows, I may even go back to Marcus Hook one day. [ Addendum 20200114: More about nearby tripoints and related matters. ] [ Addendum 20201209: I visited the tripoint marker in White Clay Creek State Park.
] [ Addendum 20220422: More about how visiting arbitrarily-chosen places has often worked out well for me. I have a superstitious belief in the power of Fate to bring me to where I am supposed to be, but rationally I understand that the true explanation is that random walks are likely to bring me to an interesting destination simply because I am easily interested and so find most destinations interesting. ] [Other articles in category /misc] permanent link Wed, 08 Jan 2020
Unix bc command and its -l flag
In a recent article about Unix utilities, I wrote:
This is wrong, as was kindly pointed out to me by Luke Shumaker. The
behavior of bc here is governed by its scale variable, which controls how many digits are kept after the decimal point.
In bc, most arithmetic is carried out exactly. But for division, the quotient is truncated to scale digits after the decimal point. Unfortunately, if you don't set it — this is the stupid part — scale defaults to zero, so 1/3 comes out as plain 0. The -l flag loads bc's math library, and as a side effect sets scale to 20. Long, long ago I was in the habit of manually entering scale=20 at the start of every bc session; -l would have saved me the typing. Many thanks to Luke Shumaker for pointing this out. M. Shumaker adds:
Yeah, same. [Other articles in category /oops] permanent link Tue, 07 Jan 2020
Social classes identified by letters
Looking up the letter E in the Big Dictionary, I learned that British sociologists were dividing social classes into lettered strata long before Aldous Huxley did it in Brave New World (1932). The OED quoted F. G. D’Aeth, “Present Tendencies of Class Differentiation”, The Sociological Review, vol 3 no 4, October, 1910:
The OED doesn't quote further, but D’Aeth goes on to explain:
Notice that in D’Aeth's classification, the later letters are higher classes. According to the OED this was typical; they also quote a similar classification from 1887 in which A was the lowest class. But the OED labels this sort of classification, with A at the bottom, as “obsolete”. In Brave New World, you will recall, it goes in the other direction, with the Alphas (administrators and specialists) at the top, and the Epsilons (menial workers with artificially-induced fetal alcohol syndrome) at the bottom. The OED's later quotations, from 1950–2014, all follow Huxley in putting class A at the top and E at the bottom. They also follow Huxley in having only five classes instead of seven or eight. (One has six classes, but two of them are C1 and C2.) I wonder how much influence Brave New World had on this sort of classification. Was anyone before Huxley dividing British society into five lettered classes with A at the top? [ By the way, I have been informed that this paper, which I have linked above, is “Copyright © 2020 by The Sociological Review Publication Limited. All rights are reserved.” This is a bald lie. Sociological Review Publication Limited should be ashamed of themselves. ] [Other articles in category /lang] permanent link Fri, 03 Jan 2020
Benchmarking shell pipelines and the Unix “tools” philosophy
Sometimes I look through the HTTP referrer logs to see if anyone is
talking about my blog. I use a little pipeline: the log goes through f 11, which extracts the eleventh whitespace-separated field of each line (in my log format, the referrer); then through grep -v plover, which discards referrals from my own site; then through count, which tallies them up. (I've discussed some of these little utilities here before.) This has obvious defects, but it works well enough. But every time I
used it, I wondered: is it faster to do the grep before extracting the field, or after? After years of idly wondering this, I have finally looked into it. The point of this article is that the investigation produced the following pipeline, which I think is a great example of the Unix “tools” philosophy: for i in $(seq 20); do
TIME="%U+%S" time \
sh -c 'f 11 access.2020-01-0* | grep -v plover | count > /dev/null' \
2>&1 | bc -l ;
done | addup
I typed this on the command line, with no backslashes or newlines, so it actually looked like this: for i in $(seq 20); do TIME="%U+%S" time sh -c 'f 11 access.2020-01-0* | grep -v plover | count > /dev/null' 2>&1 | bc -l ; done | addup
Okay, what's going on here? The pipeline I actually want to analyze is represented by the placeholder ¿SOMETHING?:

    for i in $(seq 20); do
      TIME="%U+%S" time \
        sh -c '¿SOMETHING? > /dev/null' 2>&1 | bc -l ;
    done | addup

Continuing to work from inside to out, we're going to use sh -c to run the entire ¿SOMETHING? pipeline as a single command, so that a single time invocation can measure the whole thing. The time here is the standalone GNU time program, not the shell's built-in version; the leading TIME=… assignment incidentally guarantees this, because the shell only recognizes its built-in time when it is the first word of the command. The default format for the report printed by time includes a lot of information I don't want; setting the TIME environment variable to "%U+%S" tells GNU time to report only the user and system CPU times, joined by a plus sign. The report goes to standard error, so 2>&1 sends it down the pipe to bc -l, which does the addition and prints a single number of seconds. [ Addendum 20200108: We don't actually need the -l flag here; since scale affects only division, bc's addition is exact even without it. ] Collapsing the details I just discussed, we have:

    for i in $(seq 20); do
      (run once and emit the total CPU time)
    done | addup

The seq 20 produces the integers from 1 to 20, one per line:
1
2
3
…
19
20
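The addup at the end is another of my little utilities: it reads numbers, one per line, and prints their sum. If you don't have it, a one-line awk stand-in (my assumption of its behavior, not the real thing) will do:

```shell
# Stand-in for the 'addup' utility: sum one number per line from stdin.
addup() { awk '{ total += $1 } END { print total }'; }

printf '0.47\n0.52\n0.44\n' | addup    # prints 1.43
```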
Here we don't actually care about the output (we never actually use the loop variable i); the loop is just a convenient way to run the measurement 20 times. All together, the command runs the pipeline 20 times and prints a single number: the total CPU time consumed by the 20 runs. (To do this right we also need to test a null command, say sh -c 'true' timed the same way,
because we might learn that 95% of the reported time is spent in running the shell, so the actual difference between the two pipelines is twenty times as large as we thought. I did this; it turns out that the time spent to run the shell is insignificant.) What to learn from all this? On the one hand, Unix wins: it's
supposed to be quick and easy to assemble small tools to do whatever it is
you're trying to do. When I finally decided to measure these pipelines, I was able to assemble the whole apparatus in a few minutes, mostly out of parts I already had lying around. On the other hand, gosh, what a weird mishmash of stuff I had to
remember or look up: the TIME variable and its %U and %S escapes, the fact that time writes its report to standard error, the sh -c wrapping, bc's -l flag, and so on. Was it a win overall? What if Unix had less compositionality but I could use it with less memorized trivia? Would that be an improvement? I don't know. I rather suspect that there's no way to actually reach that hypothetical universe. The bizarre mishmash of weirdness exists because so many different people invented so many tools over such a long period. And they wouldn't have done any of that inventing if the compositionality hadn't been there. I think we don't actually get to make a choice between an incoherent mess of composable paraphernalia and a coherent, well-designed but noncompositional system. Rather, we get a choice between an incoherent but useful mess and an incomplete, limited noncompositional system. [ Addendum: Add this to the list of “weird mishmash of trivia”: There are two time commands, the shell's built-in version and the standalone GNU time program, and they are not the same. ]
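The shell's built-in time and the external GNU time program are easy to tell apart (a sketch assuming bash):

```shell
# Ordinarily bash resolves 'time' as its reserved word:
bash -c 'type -t time'    # prints: keyword

# But with a leading variable assignment, the reserved word is not
# recognized, and the external GNU time program (if installed) runs
# instead -- which is why TIME="%U+%S" time ... works at all.
```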
[ Addenda 20200104: (1) Perl's module ecosystem is another example of a
successful incoherent mess of composable paraphernalia. (2) Of the
seven trivia I included in my “weird mishmash”, five were related to
the time command. ] [ Addendum 20200104: And, of course, this is exactly what Richard Gabriel was thinking about in Worse is Better. Like Gabriel, I'm not sure. ] [Other articles in category /Unix] permanent link Thu, 02 Jan 2020
A sticky problem that evaporated
Back in early 1995, I worked on an incredibly early e-commerce site. One of their clients was Eddie Bauer. They wanted to put up a product catalog with a page for each product, say a sweatshirt, and the page should show color swatches for each possible sweatshirt color. “Sure, I can do that,” I said. “But you have to understand that the user may not see the color swatches exactly as you expect them to.” Nobody would need to have this explained now, but in early 1995 I wasn't sure the catalog folks would understand. When you have a physical catalog you can leaf through a few samples to make sure that the printer didn't mess up the colors. But what if two months down the line the Eddie Bauer people were shocked by how many complaints customers had about things being not quite the right color: “Hey, I ordered mulberry but this is more like maroonish.” Having absolutely no way to solve the problem, I didn't want it to land in my lap; I wanted to be able to say I had warned them ahead of time. So I asked “Will it be okay that there will be variations in how each customer sees the color swatches?” The catalog people were concerned. Why wouldn't the colors be the same? And I struggled to explain: the customer will see the swatches on their monitor, and we have no idea how old or crappy it might be, we have no idea how the monitor settings are adjusted, the colors could be completely off, it might be a monochrome monitor, or maybe the green part of their RGB video cable is badly seated and the monitor is displaying everything in red, blue, and purple, blah blah blah… I completely failed to get the point across in a way that the catalog people could understand. They looked more and more puzzled, but then one of them brightened up suddenly and said “Oh, just like on TV!” “Yes!” I cried in relief. “Just like that!” “Oh sure, that's no problem.” Clearly, that was what I should have said in the first place, but I hadn't thought of it. 
I no longer have any idea who it was that suddenly figured out what Geek Boy's actual point was, but I'm really grateful that they did. [Other articles in category /tech] permanent link |