Wed, 30 Dec 2020
Benjamin Franklin and the Exercises of Ignatius
Recently I learned of the Spiritual Exercises of St. Ignatius. Wikipedia says (or quotes, it's not clear):
This reminded me very strongly of Chapter 9 of Benjamin Franklin's Autobiography, in which he presents “A Plan for Attaining Moral Perfection”:
So I wondered: was Franklin influenced by the Exercises? I don't know, but it's possible. Wondering about this I consulted the Mighty Internet, and found two items in the Woodstock Letters, a 19th-century Jesuit periodical, wondering the same thing:
(“Woodstock Letters” Volume XXXIV #2 (Sep 1905) p.311–313) I can't guess at the main question, but I can correct one small detail: although this part of the Autobiography was written around 1784, the time of which Franklin was writing, when he actually made his little book, was around 1730, well before the suppression of the Society. The following issue takes up the matter again:
(“Woodstock Letters” Volume XXXIV #3 (Dec 1905) p.459–461) Franklin describes making a decision by listing, on a divided sheet of paper, the reasons for and against the proposed action. And then a variation I hadn't seen: balance arguments for and arguments against, and cross out equally-balanced sets of arguments. Franklin even suggests evaluations as fine as matching two arguments for with three slightly weaker arguments against and crossing out all five together. I don't know what this resembles in the Exercises but it certainly was striking. [Other articles in category /book] permanent link

Sat, 26 Dec 2020

This tweet from Raffi Melkonian describes the appetizer plate at his house on Christmas. One item jumped out at me:
I wondered what that was like, and then I realized I do have some idea, because I recognized the word. Basterma is not an originally Armenian word; it's a Turkish loanword, I think canonically spelled pastırma. And from Turkish it made a long journey through Romanian and Yiddish to arrive in English as… pastrami… For which “spicy beef prosciutto” isn't a bad description at all. [Other articles in category /lang/etym] permanent link

Tue, 15 Dec 2020

The world is so complicated! It has so many things in it that I could not even have imagined. Yesterday I learned that since 1949 there has been a compact between New Mexico and Texas about how to divide up the water in the Pecos River, which flows from New Mexico to Texas, and then into the Rio Grande. New Mexico is not allowed to use all the water before it gets to Texas. Texas is entitled to receive a certain amount. There have been disputes about this in the past (the Supreme Court case has been active since 1974), so in 1988 the Supreme Court appointed Neil S. Grigg, a hydraulic engineer and water management expert from Colorado, to be “River Master of the Pecos River”, to mediate the disputes and account for the water. The River Master has a rulebook, which you can read online. I don't know how much Dr. Grigg is paid for this. In 2014, Tropical Storm Odile dumped a lot of rain on the U.S. Southwest. The Pecos River was flooding, so Texas asked New Mexico to hold onto the Texas share of the water until later. (The rulebook says they can do this.) New Mexico arranged for the water that was owed to Texas to be stored in the Brantley Reservoir. A few months later Texas wanted their water. “OK,” said New Mexico. “But while we were holding it for you in our reservoir, some of it evaporated. We will give you what is left.” “No,” said Texas, “we are entitled to a certain amount of water from you. We want it all.” But the rule book says that even though the water was in New Mexico's reservoir, it was Texas's water that evaporated.
(Section C5, “Texas Water Stored in New Mexico Reservoirs”.) [Other articles in category /law] permanent link

Thu, 10 Dec 2020
I had some free time yesterday, so that is what I did. The creek is pretty. I did not see anything that appeared to be white clay. Of course I did not investigate extensively, or even closely, because the weather was too cold for wading. But the park was beautiful. There is a walking trail in the park that reaches the tripoint itself. I didn't walk the whole trail. The park entrance is at the other end of the park from the tripoint. After wandering around in the park for a while, I went back to the car, drove to the Maryland end of the park, and left the car on the side of Maryland Route 896 (or maybe Pennsylvania Route 896, it's hard to be sure). Then I cut across private property to the marker. The marker itself looks like this: As you see, the Pennsylvania sides of the monument are marked with ‘P’ and the Maryland side with ‘M’. The other ‘M’ is actually in Delaware. This Newark Post article explains why there is no ‘D’:
This does not explain the whole thing. The point was first marked in 1765 by Mason and Dixon and at that time Delaware was indeed part of Pennsylvania. But as you see the stone marker was placed in 1849, by which time Delaware had been there for some time. Perhaps the people who installed the new marker were trying to pretend that Delaware did not exist. [ Addendum 20201218: Daniel Wagner points out that even if the 1849 people were trying to depict things as they were in 1765, the marker is still wrong; it should have three ‘P’ and one ‘M’, not two of each. I did read that the surveyors who originally placed the 1849 marker put it in the wrong spot, and it had to be moved later, so perhaps they were just not careful people. ] Theron Stanford notes that this point is also the northwestern corner of the Wedge. This sliver of land was east of the Maryland border, but outside the Twelve-Mile Circle and so formed an odd prodtrusion from Pennsylvania. Pennsylvania only relinquished its claims to it in 1921 and it is now agreed to be part of Delaware. Were the Wedge still part of Pennsylvania, the tripoint would have been at its southernmost point. Looking at the map now I see that to get to the marker, I must have driven within a hundred yards of the westmost point of the Twelve-Mile Circle itself, and there is a (somewhat more impressive) marker there. Had I realized at the time I probably would have tried to stop off. I have some other pictures of the marker if you are not tired of this yet. [ Addendum 20201211: Tim Heany asks “Is there no sign at the border on 896?” There probably is, and this observation is a strong argument that I parked the car in Maryland. ] [ Addendum 20201211: Yes, ‘prodtrusion’ was a typo, but it is a gift from the Gods of Dada and should be treasured, not thrown in the trash. ] [Other articles in category /misc] permanent link

Sat, 21 Nov 2020
Testing for divisibility by 19
[ Previously, Testing for divisibility by 7. ] A couple of nights ago I was keeping Katara company while she revised an essay on The Scarlet Letter (ugh), and to pass the time one of the things I did was tinker with the divisibility tests for 9 and 11. In the course of this I discovered the following method for testing divisibility by 19:
The result will be a smaller number which is a multiple of 19 if and only if the original number was. For example, let's consider, oh, I don't know, 2337. We calculate:
76 is a multiple of 19, so 2337 was also. But if you're not sure about 76 you can compute 2·6+7 = 19, and if you're not sure about that you need more help than I can provide. I don't claim this is especially practical, but it is fun, not completely unworkable, and I hadn't seen anything like it before. You can save a lot of trouble by reducing the intermediate values mod 19 when needed. In the example above, after the first step you get to 17, which you can reduce mod 19 to −2, and then the next step is −2·2+3 = −1, and the final step is −1·2+2 = 0. Last time I wrote about this, Eric Roode sent me a whole compendium of divisibility tests, including one for divisibility by 19. It's a little like mine, but in reverse: group the digits in pairs, left to right; multiply each pair by 5 and then add the next pair. Here's 2337 again:
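Both procedures are easy to sketch in code. The Python below is my own illustration, not part of the original article, run on the same example, 2337:

```python
def div19_double_and_add(n):
    # The double-and-add method described above: start with the last
    # digit, then repeatedly double the accumulator and add the next
    # digit to the left. Doubling doubles the mod-19 residue, so the
    # result is divisible by 19 exactly when n is.
    digits = [int(d) for d in str(n)]
    acc = digits.pop()
    while digits:
        acc = 2 * acc + digits.pop()
    return acc

def div19_pairs(n):
    # Eric Roode's method: group the digits in pairs from the left,
    # multiply the accumulator by 5, and add the next pair. Since
    # 100 ≡ 5 (mod 19), this step preserves the residue exactly.
    s = str(n)
    if len(s) % 2:
        s = "0" + s
    acc = int(s[:2])
    for i in range(2, len(s), 2):
        acc = 5 * acc + int(s[i:i + 2])
    return acc

print(div19_double_and_add(2337))  # 76, a multiple of 19
print(div19_pairs(2337))           # 152 = 8 · 19
```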
Again you can save a lot of trouble by reducing mod 19 before the multiplication. So instead of the first step being 23·5 + 37 you can reduce the 23·5 to 4·5 = 20 and then add the 37 to get 57 right away. [ Addendum: Of course this was discovered long ago, and in fact Wikipedia mentions it. ] [ Addendum 20201123: An earlier version of this article claimed that the double-and-add step I described preserves the mod-19 residue. It does not, of course; the doubling step doubles it. It is, however, true that it is zero afterward if and only if it was zero before. ] [Other articles in category /math] permanent link [Other articles in category /misc] permanent link

Mon, 02 Nov 2020
A better way to do remote presentations
A few months ago I wrote an article about a strategy I had tried when giving a talk via videochat. Typically:
I thought I had done it a better way:
This, I thought, had several advantages:
When I brought this up with my coworkers, some of them had a very good objection:
Fair enough! I have done this. If you package your slides with pageturner, one instance becomes the “leader” and the rest are “followers”. Whenever the leader moves from one slide to the next, a very simple backend server is notified. The followers periodically contact the server to find out what slide they are supposed to be showing, and update themselves accordingly. The person watching the show can sit back and let it do all the work. But! If an audience member wants to skip ahead, or go back, that works too. They can use the arrow keys on their keyboard. Their follower instance will stop synchronizing with the leader's slide. Instead, it will display a box in the corner of the page, next to the current slide's page number, that says what slide the leader is looking at. The number in this box updates dynamically, so the audience person always knows how far ahead or behind they are.
At left, the leader is displaying slide 3, and the follower is there also. When the leader moves on to slide 4, the follower instance will switch automatically. At right, the follower is still looking at slide 3, but is detached from the leader, who has moved on to slide 007, as you can see in the gray box. When the audience member has finished their excursion, they can click the gray box and their own instance will immediately resynchronize with the leader and follow along until the next time they want to depart. I used this to give a talk to the Charlotte Perl Mongers last week and it worked. Responses were generally positive even though the UI is a little bit rough-looking.

Technical details

The back end is a tiny server, written in Python 3 with Flask. The server is really tiny, only about 60 lines of code. It has only two endpoints: one for getting the leader's current page, and one for setting it. Setting requires a password.
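Sketched in miniature, such a server might look like the following. The endpoint names, payload shape, and password handling here are my guesses for illustration; pageturner's actual interface may differ:

```python
# A sketch of a two-endpoint slide-synchronization back end in Flask.
# Endpoint names and the password scheme are assumptions, not
# pageturner's real API.
from flask import Flask, abort, request

app = Flask(__name__)
state = {"page": 1}   # the slide the leader is currently showing
PASSWORD = "secret"   # assumed shared secret known only to the leader

@app.route("/page")
def get_page():
    # Followers poll this endpoint to learn the leader's current slide.
    return {"page": state["page"]}

@app.route("/page", methods=["POST"])
def set_page():
    # Only the leader may update the current slide; setting requires
    # the password, as described above.
    if request.form.get("password") != PASSWORD:
        abort(403)
    state["page"] = int(request.form["page"])
    return {"page": state["page"]}
```

The leader would POST its page number on every slide change, and each follower would poll the GET endpoint periodically, comparing the result with the slide it is showing.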
The front end runs in the browser. The user downloads the frontend
script,
The On page switching, a tiny amount of information is stored in the
browser window's If you want pageturner to display the leader's slide number when the
follower is desynchronized, as in the example above, you include an
element with class
The password for the
to
or whatever. Many improvements are certainly possible. It could definitely look a lot better, but I leave that to someone who has interest and aptitude in design. I know of one serious bug: at present the server doesn't handle SSL,
so must be run at an

Source code download

Patches welcome.

License

The software is licensed under the Creative Commons Attribution 4.0 license (CC BY 4.0). You are free to share (copy and redistribute the software in any medium or format) and adapt (remix, transform, and build upon the material) for any purpose, even commercially, so long as you give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. Share and enjoy. [Other articles in category /talk] permanent link

Sun, 18 Oct 2020
Newton's Method and its instability
While messing around with Newton's method for last week's article, I built this Desmos thingy: The red point represents the initial guess; grab it and drag it around, and watch how the later iterations change. Or, better, visit the Desmos site and play with the slider yourself. (The curve here is !!y = (x-2.2)(x-3.3)(x-5.5)!!; it has a local maximum at around !!2.7!!, and a local minimum at around !!4.64!!.) Watching the attractor point jump around I realized I was arriving at a much better understanding of the instability of the convergence. Clearly, if your initial guess happens to be near an extremum of !!f!!, the next guess could be arbitrarily far away, rather than a small refinement of the original guess. But even if the original guess is pretty good, the refinement might be near an extremum, and then the following guess will be somewhere random. For example, although !!f!! is quite well-behaved in the interval !![4.3, 4.35]!!, as the initial guess !!g!! increases across this interval, the refined guess !!\hat g!! decreases from !!2.74!! to !!2.52!!, and in between these there is a local maximum that kicks the ball into the weeds. The result is that at !!g=4.3!! the method converges to the largest of the three roots, and at !!g=4.35!!, it converges to the smallest. This is where the Newton basins come from: Here we are considering the function !!f:z\mapsto z^3 - 1!! in the complex plane. Zero is at the center, and the obvious root, !!z=1!!, is to its right, deep in the large red region. The other two roots are at the corresponding positions in the green and blue regions. Starting at any red point converges to the !!z=1!! root. Usually, if you start near this root, you will converge to it, which is why all the points near it are red.
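You can find which basin a starting point belongs to just by running the iteration and seeing where it lands. Here is a Python sketch of my own (not how the diagram here was actually generated):

```python
def newton_basin(z, steps=60):
    # Newton iteration for f(z) = z**3 - 1. Returns 0, 1, or 2: the
    # index of the cube root of unity that the starting point z
    # converges to.
    roots = [complex(1, 0),
             complex(-0.5, 3 ** 0.5 / 2),
             complex(-0.5, -(3 ** 0.5) / 2)]
    for _ in range(steps):
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    return min(range(3), key=lambda k: abs(z - roots[k]))

print(newton_basin(complex(2, 0)))  # 0: deep in the large red region
```

Coloring each point of a grid by this index reproduces the basin diagram.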
But some nearish starting points are near an extremum, so that the next guess goes wild, and then the iteration ends up at the green or the blue root instead; these areas are the rows of green and blue leaves along the boundary of the large red region. And some starting points on the boundaries of those leaves kick the ball into one of the other leaves… Here's the corresponding basin diagram for the polynomial !!y = (x-2.2)(x-3.3)(x-5.5)!! from earlier: The real axis is the horizontal hairline along the middle of the diagram. The three large regions are the main basins of attraction to the three roots (!!x=2.2, 3.3!!, and !!5.5!!) that lie within them. But along the boundaries of each region are smaller intrusive bubbles where the iteration converges to a surprising value. A point moving from left to right along the real axis passes through the large pink !!2.2!! region, and then through a very small yellow bubble, corresponding to the values right around the local maximum near !!x=2.7!! where the process unexpectedly converges to the !!5.5!! root. Then things settle down for a while in the blue region, converging to the !!3.3!! root as one would expect, until the value gets close to the local minimum at !!4.64!! where there is a pink bubble because the iteration converges to the !!2.2!! root instead. Then as !!x!! increases from !!4.64!! to !!5.5!!, it leaves the pink bubble and enters the main basin of attraction to !!5.5!! and stays there. If the picture were higher resolution, you would be able to see that the pink bubbles all have tiny yellow bubbles growing out of them (one is at 4.39), and the tiny yellow bubbles have even tinier pink bubbles, and so on forever. (This was generated by the Online Fractal Generator at usefuljs.net; the labels were added later by me. The labels’ positions are only approximate.) [ Addendum: Regarding complex points and !!f : z\mapsto z^3-1!! I said “some nearish starting points are near an extremum”. But this isn't right; !!f!!
has no extrema. It has an inflection point at !!z=0!! but this doesn't explain the instability along the lines !!\theta = \frac{2k+1}{3}\pi!!. So there's something going on here with the complex derivative that I don't understand yet. ] [Other articles in category /math] permanent link
Fixed points and attractors, part 3
Last week I wrote about a super-lightweight variation on Newton's method, in which one takes this function: $$f_n : \frac ab \mapsto \frac{a+nb}{a+b}$$ or equivalently $$f_n : x \mapsto \frac{x+n}{x+1}$$ Iterating !!f_n!! for a suitable initial value (say, !!1!!) converges to !!\sqrt n!!: $$ \begin{array}{rr} x & f_3(x) \\ \hline 1.0 & 2.0 \\ 2.0 & 1.667 \\ 1.667 & 1.75 \\ 1.75 & 1.727 \\ 1.727 & 1.733 \\ 1.733 & 1.732 \\ 1.732 & 1.732 \end{array} $$ Later I remembered that a few months back I wrote a couple of articles about a more general method that includes this as a special case: The general idea was:
We can see that !!\sqrt n!! is a fixed point of !!f_n!!: $$ \begin{align} f_n(\sqrt n) & = \frac{\sqrt n + n}{\sqrt n + 1} \\ & = \frac{\sqrt n(1 + \sqrt n)}{1 + \sqrt n} \\ & = \sqrt n \end{align} $$ And in fact, it is an attracting fixed point, because if !!x = \sqrt n + \epsilon!! then $$\begin{align} f_n(\sqrt n + \epsilon) & = \frac{\sqrt n + \epsilon + n}{\sqrt n + \epsilon + 1} \\ & = \frac{(\sqrt n + \sqrt n\epsilon + n) - (\sqrt n - 1)\epsilon}{\sqrt n + \epsilon + 1} \\ & = \sqrt n - \frac{(\sqrt n - 1)\epsilon}{\sqrt n + \epsilon + 1} \end{align}$$ Disregarding the !!\epsilon!! in the denominator we obtain $$f_n(\sqrt n + \epsilon) \approx \sqrt n - \frac{\sqrt n - 1}{\sqrt n + 1} \epsilon $$ The error term !!\frac{\sqrt n - 1}{\sqrt n + 1} \epsilon!! is strictly smaller than the original error !!\epsilon!!, because !!0 < \frac{x-1}{x+1} < 1!! whenever !!x>1!!. This shows that the fixed point !!\sqrt n!! is attractive. In the previous articles I considered several different simple functions that had fixed points at !!\sqrt n!!, but I didn't think to consider this unusually simple one. I said at the time:
but I never did get around to the Möbius transformations, and I have long since forgotten what I planned to say. !!f_n!! is an example of a Möbius transformation, and I wonder if my idea was to systematically find all the Möbius transformations that have !!\sqrt n!! as a fixed point, and see what they look like. It is probably possible to automate the analysis of whether the fixed point is attractive, and if not, to apply one of the transformations from the previous article to make it attractive. [Other articles in category /math] permanent link

Tue, 13 Oct 2020
Newton's Method but without calculus — or multiplication
Newton's method goes like this: We have a function !!f!! and we want to solve the equation !!f(x) = 0.!! We guess an approximate solution, !!g!!, and it doesn't have to be a very good guess. Then we calculate the line !!T!! tangent to !!f!! through the point !!\langle g, f(g)\rangle!!. This line intersects the !!x!!-axis at some new point !!\langle \hat g, 0\rangle!!, and this new value, !!\hat g!!, is a better approximation to the value we're seeking. Analytically, we have: $$\hat g = g - \frac{f(g)}{f'(g)}$$ where !!f'(g)!! is the derivative of !!f!! at !!g!!. We can repeat the process if we like, getting better and better approximations to the solution. (See detail at left; click to enlarge. Again, the blue line is the tangent, this time at !!\langle \hat g, f(\hat g)\rangle!!. As you can see, it intersects the axis very close to the actual solution.) In general, this requires calculus or something like it, but in any particular case you can avoid the calculus. Suppose we would like to find the square root of 2. This amounts to solving the equation $$x^2-2 = 0.$$ The function !!f!! here is !!x^2-2!!, and !!f'!! is !!2x!!. Once we know (or guess) !!f'!!, no further calculus is needed. The method then becomes: Guess !!g!!, then calculate $$\hat g = g - \frac{g^2-2}{2g}.$$ For example, if our initial guess is !!g = 1.5!!, then the formula above tells us that a better guess is !!\hat g = 1.5 - \frac{2.25 - 2}{3} = 1.4166\ldots!!, and repeating the process with !!\hat g!! produces !!1.41421\mathbf{5686}!!, which is very close to the correct result !!1.41421\mathbf{3562}!!. If we want the square root of a different number !!n!! we just substitute it for the !!2!! in the numerator. This method for extracting square roots works well and requires no calculus. It's called the Babylonian method and while there's no evidence that it was actually known to the Babylonians, it is quite ancient; it was first recorded by Hero of Alexandria about 2000 years ago.
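The update rule is a one-liner in any language. Here is a Python sketch of my own, not from the article:

```python
def newton_sqrt(n, g, steps=5):
    # Newton's method for f(x) = x**2 - n: repeatedly replace the
    # guess g by g - (g*g - n)/(2*g).
    for _ in range(steps):
        g = g - (g * g - n) / (2 * g)
    return g

print(newton_sqrt(2, 1.5, steps=2))  # 1.41421568..., as in the text
```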
How might this have been discovered if you didn't have calculus? It's actually quite easy. Here's a picture of the number line. Zero is at one end, !!n!! is at the other, and somewhere in between is !!\sqrt n!!, which we want to find. Also somewhere in between is our guess !!g!!. Say we guessed too low, so !!0 \lt g \lt \sqrt n!!. Now consider !!\frac ng!!. Since !!g!! is too small to be !!\sqrt n!! exactly, !!\frac ng!! must be too large. (If !!g!! and !!\frac ng!! were both smaller than !!\sqrt n!!, then their product would be smaller than !!n!!, and it isn't.) Similarly, if the guess !!g!! is too large, so that !!\sqrt n < g!!, then !!\frac ng!! must be less than !!\sqrt n!!. The important point is that !!\sqrt n!! is between !!g!! and !!\frac ng!!. We have narrowed down the interval in which !!\sqrt n!! lies, just by guessing. Since !!\sqrt n!! lies in the interval between !!g!! and !!\frac ng!!, our next guess should be somewhere in this smaller interval. The most obvious thing we can do is to pick the point halfway between !!g!! and !!\frac ng!!. So if we guess the average, $$\frac12\left(g + \frac ng\right),$$ this will probably be much closer to !!\sqrt n!! than !!g!! was: This average is exactly what Newton's method would have calculated, because $$\frac12\left(g + \frac ng\right) = g - \frac{g^2-n}{2g}.$$ But we were able to arrive at the same computation with no calculus at all — which is why this method could have been, and was, discovered 1700 years before Newton's method itself. If we're dealing with rational numbers then we might write !!g=\frac ab!!, and then instead of replacing our guess !!g!! with a better guess !!\frac12\left(g + \frac ng\right)!!, we could think of it as replacing our guess !!\frac ab!! with a better guess !!\frac12\left(\frac ab + \frac n{\frac ab}\right)!!.
This simplifies to $$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$ so that for example, if we are calculating !!\sqrt 2!!, and we start with the guess !!g=\frac32!!, the next guess is $$\frac{3^2 + 2\cdot2^2}{2\cdot3\cdot 2} = \frac{17}{12} = 1.4166\ldots$$ as we saw before. The approximation after that is !!\frac{289+288}{2\cdot17\cdot12} = \frac{577}{408} = 1.41421568\ldots!!. Used this way, the method requires only integer calculations, and converges very quickly. But the numerators and denominators increase rapidly, which is good in one sense (it means you get to the accurate approximations quickly) but can also be troublesome because the numbers get big and also because you have to multiply, and multiplication is hard. But remember how we figured out to do this calculation in the first place: all we're really trying to do is find a number in between !!g!! and !!\frac ng!!. We did that the first way that came to mind, by averaging. But perhaps there's a simpler operation that we could use instead, something even easier to compute? Indeed there is! We can calculate the mediant. The mediant of !!\frac ab!! and !!\frac cd!! is simply $$\frac{a+c}{b+d}$$ and it is very easy to show that it lies between !!\frac ab!! and !!\frac cd!!, as we want. So instead of the relatively complicated $$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$ operation, we can try the very simple and quick $$\frac ab \Rightarrow \operatorname{mediant}\left(\frac ab, \frac{nb}{a}\right) = \frac{a+nb}{b+a}$$ operation. Taking !!n=2!! as before, and starting with !!\frac 32!!, this produces: $$ \frac 32 \Rightarrow\frac{ 7 }{ 5 } \Rightarrow\frac{ 17 }{ 12 } \Rightarrow\frac{ 41 }{ 29 } \Rightarrow\frac{ 99 }{ 70 } \Rightarrow\frac{ 239 }{ 169 } \Rightarrow\frac{ 577 }{ 408 } \Rightarrow\cdots$$ which you may recognize as the convergents of !!\sqrt2!!. These are actually the rational approximations of !!\sqrt 2!! that are optimally accurate relative to the sizes of their denominators. 
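In code, each mediant step needs only additions (plus a multiplication by !!n!!, which for !!n=2!! is just a doubling). The following Python sketch is my own illustration, not from the article:

```python
def mediant_steps(n, a, b, count):
    # Repeatedly replace a/b by mediant(a/b, n*b/a) = (a + n*b)/(a + b).
    # Starting from 3/2 with n = 2 this walks down the convergents of
    # sqrt(2).
    out = []
    for _ in range(count):
        a, b = a + n * b, a + b
        out.append((a, b))
    return out

print(mediant_steps(2, 3, 2, 6))
# [(7, 5), (17, 12), (41, 29), (99, 70), (239, 169), (577, 408)]
```

The same loop works for other !!n!!, although, as the addendum below notes, the resulting fractions need not be in lowest terms.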
Notice that !!\frac{17}{12}!! and !!\frac{577}{408}!! are in there as they were before, although it takes longer to get to them. I think it's cool that you can view it as a highly simplified version of Newton's method. [ Addendum: An earlier version of the last paragraph claimed:
Simon Tatham pointed out that this was mistaken. It's true when !!n=2!!, but not in general. The sequence of fractions that you get does indeed converge to !!\sqrt n!!, but it's not usually the convergents, or even in lowest terms. When !!n=3!!, for example, the numerators and denominators are all even. ] [ Addendum: Newton's method as I described it, with the crucial !!g → g - \frac{f(g)}{f'(g)}!! transformation, was actually invented in 1740 by Thomas Simpson. Both Isaac Newton and Joseph Raphson had earlier described only special cases, as had several Asian mathematicians, including Seki Kōwa. ] [ Previous discussion of convergents: Archimedes and the square root of 3; 60-degree angles on a lattice. A different variation on the Babylonian method. ] [ Note to self: Take a look at what the AM-GM inequality has to say about the behavior of !!\hat g!!. ] [ Addendum 20201018: A while back I discussed the general method of picking a function !!f!! that has !!\sqrt 2!! as a fixed point, and iterating !!f!!. This is yet another example of such a function. ] [Other articles in category /math] permanent link

Wed, 23 Sep 2020
The mystery of the malformed commandline flags
Today a user came to tell me that their command
failed, saying:
This is surprising. The command looks correct. The branch name is
required. The The
But it still didn't work:
I dug in to the script and discovered the problem, which was not actually a programming error. The crappy shell script was behaving correctly! I had written up release notes for the In an earlier draft of the release notes, when they were displayed as an HTML page, there would be bad line breaks:
or:
No problem, I can fix it! I just changed the pair of hyphens (But then this hapless user clipboard-copied the option string out of
the release notes, including its U+2011 characters. The parser in the
script was (correctly) looking for U+002D characters, and didn't
recognize One lesson learned: people will copy-paste stuff out of documentation, and I should be prepared for that. There are several places to address this. I made the error message more transparent; formerly it would complain only about the first argument, which was confusing because it was the one argument that wasn't superfluous. Now it will say something like
which is more descriptive of what it actually doesn't like. I could change the non-breaking hyphens in the release notes back to regular hyphens and just accept the bad line breaks. But I don't want to. Typography is important. One idea I'm toying with is to have the shell script silently replace all non-breaking hyphens with regular ones before any further processing. It's a hack, but it seems like it might be a harmless one. So many weird things can go wrong. This computer stuff is really complicated. I don't know how anyone gets anything done. [ Addendum: A reader suggests that I could have fixed the line breaks with CSS. But the release notes were being presented as a Slack “Post”, which is essentially a WYSIWYG editor for creating shared documents. It presents the document in a canned HTML style, and as far as I know there's no way to change the CSS it uses. Similarly, there's no way to insert raw HTML elements, so no way to change the style per-element. ] [Other articles in category /prog/bug] permanent link

Sun, 13 Sep 2020

The front page of NPR.org today has this headline: It contains this annoying phrase:
Someone has really committed to hedging. I would have said that the race would certainly help determine control of the Senate, or that it could determine control of the Senate. The statement as written makes an extremely weak claim. The article itself doesn't include this phrase. This is why reporters hate headline-writers. [Other articles in category /lang] permanent link

Fri, 11 Sep 2020
Historical diffusion of words for “eggplant”
In reply to my recent article about the history of words for “eggplant”, a reader, Lydia, sent me this incredible map they had made that depicts the history and the diffusion of the terms: Lydia kindly gave me permission to share their map with you. You can see the early Dravidian term vaḻutanaṅṅa in India, and then the arrows show it travelling westward across Persia and Arabia, from there to East Africa and Europe, and from there to the rest of the world, eventually making its way back to India as brinjal before setting out again on yet more voyages. Thank you very much, Lydia! And Happy Diada Nacional de Catalunya, everyone! [Other articles in category /lang/etym] permanent link
A maxim for conference speakers
The only thing worse than rewriting your talk the night before is writing your talk the night before. [Other articles in category /talk] permanent link

Fri, 28 Aug 2020

This morning Katara asked me why we call these vegetables “zucchini” and “eggplant” but the British call them “courgette” and “aubergine”. I have only partial answers, and the more I look, the more complicated they get.

Zucchini

The zucchini is a kind of squash, which means that in Europe it is a post-Columbian import from the Americas. “Squash” itself is from Narragansett, and is not related to the verb “to squash”. So I speculate that what happened here was:
The Big Dictionary has citations for “zucchini” only back to 1929, and “courgette” to 1931. What was this vegetable called before that? Why did the Americans start calling it “zucchini” instead of whatever they called it before, and why “zucchini” and not “courgette”? If it was brought in by Italian immigrants, one might expect the word to have appeared earlier; the mass immigration of Italians into the U.S. was over by 1920. Following up on this thought, I found a mention of it in Cuniberti, J. Lovejoy., Herndon, J. B. (1918). Practical Italian recipes for American kitchens, p. 18: “Zucchini are a kind of small squash for sale in groceries and markets of the Italian neighborhoods of our large cities.” Note that Cuniberti explains what a zucchini is, rather than saying something like “the zucchini is sometimes known as a green summer squash” or whatever, which suggests that she thinks it will not already be familiar to the readers. It looks as though the story is: Colonial Europeans in North America stopped eating the zucchini at some point, and forgot about it, until it was reintroduced in the early 20th century by Italian immigrants. When did the French start calling it courgette? When did the Italians start calling it zucchini? Is the Italian term a calque of the French, or vice versa? Or neither? And since courge (and gourd) are evidently descended from Latin cucurbita, where did the Italians get zucca? So many mysteries.

Eggplant

Here I was able to get better answers. Unlike squash, the eggplant is native to Eurasia and has been cultivated in western Asia for thousands of years. The puzzling name “eggplant” is because the fruit, in some varieties, is round, white, and egg-sized. The term “eggplant” was then adopted for other varieties of the same plant where the fruit is entirely un-egg-like. “Eggplant” in English goes back only to 1767. What was it called before that? Here the OED was more help. It gives this quotation, from 1785:
I inferred that the preceding text described it under a better-known name, so, thanks to the Wonders of the Internet, I looked up the original source:
(Jean-Jacques Rousseau, Letters on the Elements of Botany, tr. Thos. Martyn 1785. Page 202. (Wikipedia)) The most common term I've found that was used before “eggplant” itself is “mad apple”. The OED has cites from the late 1500s that also refer to it as a “rage apple”, which is a calque of French pomme de rage. I don't know how long it was called that in French. I also found “Malum Insanam” in the 1736 Lexicon technicum of John Harris, entry “Bacciferous Plants”. Melongena was used as a scientific genus name around 1700 and later adopted by Linnaeus in 1753. I can't find any sign that it was used in English colloquial, nonscientific writing. Its etymology is a whirlwind trip across the globe. Here's what the OED says about it:
Wowzers. Okay, now how do we get to “aubergine”? The list above includes Arabic bāḏinjān, and this, like many Arabic words, was borrowed into Spanish, as berengena or alberingena. (The “al-” prefix is Arabic for “the” and is attached to many such borrowings, for example “alcohol” and “alcove”.) From alberingena it's a short step to French aubergine. The OED entry for aubergine doesn't mention this. It claims that aubergine is from “Spanish alberchigo, alverchiga, ‘an apricocke’”. I think it's clear that the OED blew it here, and I think this must be the first time I've ever been confident enough to say that. Even the OED itself supports me on this: the note at the entry for brinjal says: “cognate with the Spanish alberengena is the French aubergine”. Okay then. (Brinjal, of course, is a contraction of berengena, via Portuguese bringella.) Sanskrit vātiṅgaṇa is also the ultimate source of modern Hindi baingan, as in baingan bharta. (Wasn't there a classical Latin word for eggplant? If so, what was it? Didn't the Romans eat eggplant? How do you conquer the world without any eggplants?) [ Addendum: My search for antedatings of “zucchini” turned up some surprises. For example, I found what seemed to be many mentions in an 1896 history of Sicily. These turned out not to be about zucchini at all, but rather the computer's pathetic attempts at recognizing the word Σικελίαν. ] [ Addendum 20200831: Another surprise: Google Books and Hathi Trust report that “zucchini” appears in the 1905 Collier Modern Eclectic Dictionary of the English Language, but it's an incredible OCR failure for the word “acclamation”. ] [ Addendum 20200911: A reader, Lydia, sent me a beautiful map showing the evolution of the many words for ‘eggplant’. Check it out. ] [Other articles in category /lang/etym] permanent link Mon, 24 Aug 2020

Ripta Pasay brought to my attention the English cookbook Liber Cure Cocorum, published sometime between 1420 and 1440. The recipes are conveyed as poems:
(Original plus translation by Cindy Renfrow) “Conyngus” is a rabbit; English has the cognate “coney”. If you have read my article on how to read Middle English you won't have much trouble with this. There are a few obsolete words: sere means “separately”; myed bread is bread crumbs, and amydone is starch. I translate it (very freely) as follows:
Thanks, Ripta! [Other articles in category /food] permanent link Fri, 21 Aug 2020
Mixed-radix fractions in Bengali
[ Previously, Base-4 fractions in Telugu. ] I was really not expecting to revisit this topic, but a couple of weeks ago, looking for something else, I happened upon the following curiously-named Unicode characters:
Oh boy, more base-four fractions! What on earth does “NUMERATOR ONE LESS THAN THE DENOMINATOR” mean and how is it used? An explanation appears in the Unicode proposal to add the related “ganda” sign:
(Anshuman Pandey, “Proposal to Encode the Ganda Currency Mark for Bengali in the BMP of the UCS”, 2007.) Pandey explains: prior to decimalization, the Bengali rupee (rupayā) was divided into sixteen ānā. Standard Bengali numerals were used to write rupee amounts, but there was a special notation for ānā. The sign ৹ always appears, and means sixteenths. Then. Prefixed to this is a numerator symbol, which goes ৴, ৵, ৶, ৷ for 1, 2, 3, 4. So for example, 3 ānā is written ৶৹, which means !!\frac3{16}!!. The larger fractions are made by adding the numerators, grouping by 4's:
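The table of ānā numerators does not survive in this copy. As a rough sketch, here is how the numerators from 1 to 15 can be composed from the rules described in the text; treating ৸ as standing for exactly twelve is my reading of the rule, not something quoted from the proposal:

```python
# Sketch of the base-4 ānā numerator notation described in the text.
# Numerators 1-3 are written ৴, ৵, ৶; each complete group of four is ৷;
# per the text, three fours (৷৷৷) are abbreviated by ৸ when more than
# 11 ānā are written.  Treating ৸ as exactly twelve is my assumption.

ONES = {0: "", 1: "\u09f4", 2: "\u09f5", 3: "\u09f6"}  # ৴ ৵ ৶
FOUR = "\u09f7"    # ৷
TWELVE = "\u09f8"  # ৸ NUMERATOR ONE LESS THAN THE DENOMINATOR

def ana_numerator(n):
    """Write n ānā (1 <= n <= 15) in the base-4 numerator notation."""
    if n > 11:
        return TWELVE + ONES[n - 12]
    return FOUR * (n // 4) + ONES[n % 4]

# 9 ānā comes out as ৷৷৴ (4 + 4 + 1), matching the example in the text.
print(ana_numerator(9))
```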
except that three fours (৷৷৷) are too many, and are abbreviated by the intriguing NUMERATOR ONE LESS THAN THE DENOMINATOR sign ৸ when more than 11 ānā are being written. Historically, the ānā was divided into 20 gaṇḍā; the gaṇḍā amounts are written with standard (Bengali decimal) numerals instead of the special-purpose base-4 numerals just described. The gaṇḍā sign ৻ precedes the numeral, so 4 gaṇḍā (!!\frac15!! ānā) is written as ৻৪. (The ৪ is not an 8, it is a four.) What if you want to write 17 rupees plus !!9\frac15!! ānā? That is 17 rupees plus 9 ānā plus 4 gaṇḍā. If I am reading this report correctly, you write it this way: ১৭৷৷৴৻৪ This breaks down into three parts as ১৭ ৷৷৴ ৻৪. The ১৭ is a 17, for 17 rupees; the ৷৷৴ means 9 ānā (the denominator ৹ is left implicit) and the ৻৪ means 4 gaṇḍā, as before. There is no separator between the rupees and the ānā. But there doesn't need to be, because different numerals are used! An analogous imaginary system in Latin script would be to write the amount as
17dda¢4

where the ‘17’ means 17 rupees, the ‘dda’ means 4+4+1=9 ānā, and the ¢4 means 4 gaṇḍā. There is no trouble seeing where the ‘17’ ends and the ‘dda’ begins. Pandey says there was an even smaller unit, the kaṛi. It was worth ¼ of a gaṇḍā and was again written with the special base-4 numerals, but as if the gaṇḍā had been divided into 16. A complete amount might be written with decimal numerals for the rupees, base-4 numerals for the ānā, decimal numerals again for the gaṇḍā, and base-4 numerals again for the kaṛi. No separators are needed, because each section is written in symbols that are different from the ones in the adjoining sections. [Other articles in category /math] permanent link Thu, 06 Aug 2020
Recommended reading: Matt Levine’s Money Stuff
Lately my favorite read has been Matt Levine’s Money Stuff articles from Bloomberg News. Bloomberg's web site requires a subscription but you can also get the Money Stuff articles as an occasional email. It arrives at most once per day. Almost every issue teaches me something interesting I didn't know, and almost every issue makes me laugh. Example of something interesting: a while back it was all over the news that oil prices were negative. Levine was there to explain what was really going on and why. Some people manage index funds. They are not trying to beat the market, they are trying to match the index. So they buy derivatives that give them the right to buy oil futures contracts at whatever the day's closing price is. But say they already own a bunch of oil contracts. If they can get the close-of-day price to dip, then their buy-at-the-end-of-the-day contracts will all be worth more because the counterparties have contracted to buy at the dip price. How can you get the price to dip by the end of the day? Easy, unload 20% of your contracts at a bizarrely low price, to make the value of the other 80% spike… it makes my head swim. But there are weird second- and third-order effects too. Normally if you invest fifty million dollars in oil futures speculation, there is a worst case: the price of oil goes to zero and you lose your fifty million dollars. But for these derivative futures, the price could in theory become negative, and for a short time in April, it did:
One article I particularly remember discussed the kerfuffle a while back concerning whether Kelly Loeffler improperly traded stocks on classified coronavirus-related intelligence that she received in her capacity as a U.S. senator. I found Levine's argument persuasive:
He contrasted this case with that of Richard Burr, who, unlike Loeffler, remains under investigation. The discussion was factual and informative, unlike what you would get from, say, Twitter, or even Metafilter, where the response was mostly limited to variations on “string them up” and “eat the rich”. Money Stuff is also very funny. Today’s letter discusses a disclosure filed recently by Nikola Corporation:
A couple of recent articles that cracked me up discussed clueless day-traders pushing up the price of Hertz stock after Hertz had declared bankruptcy, and how Hertz diffidently attempted to get the SEC to approve a new stock issue to cater to these idiots. (The SEC said no.) One recurring theme in the newsletter is “Everything is Securities Fraud”. This week, Levine asks:
Of course you'd expect that the executives would be criminally charged, as they have been. But is there a cause for the company’s shareholders to sue? If you follow the newsletter, you know what the answer will be:
because Everything is Securities Fraud.
I recommend it. Levine also has a Twitter account but it is mostly just links to his newsletter articles. [ Addendum 20200821: Unfortunately, just a few days after I posted this, Matt Levine announced that his newsletter would be on hiatus for a few months, as he would be on paternity leave. Sorry! ] [ Addendum 20210207: Money Stuff is back. ] [Other articles in category /ref] permanent link Wed, 05 Aug 2020
A maybe-interesting number trick?
I'm not sure if this is interesting, trivial, or both. You decide. Let's divide the numbers from 1 to 30 into the following six groups:
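The table of six groups does not survive in this copy, but the closure property is easy to check with a short program. A sketch, assuming (this is my reconstruction, not the original table) that the rows are the six multiplicative cosets of the subgroup !!\{1, 2, 4, 8, 16\}!! mod 31:

```python
# Verify the closure property mod 31.  The six rows are assumed (my
# reconstruction; the original table is not reproduced here) to be the
# cosets of the order-5 subgroup {1, 2, 4, 8, 16} of the multiplicative
# group mod 31.
from itertools import product

subgroup = [1, 2, 4, 8, 16]   # powers of 2 mod 31 (2**5 = 32 = 1 mod 31)
reps = [1, 3, 5, 7, 11, 15]   # one representative per coset
rows = [frozenset(r * s % 31 for s in subgroup) for r in reps]

def row_of(n):
    """Which row the number n (1-30) belongs to."""
    return next(i for i, row in enumerate(rows) if n in row)

# For every pair of rows, all products land in one fixed row.
for i, j in product(range(6), repeat=2):
    targets = {row_of(a * b % 31) for a in rows[i] for b in rows[j]}
    assert len(targets) == 1
print("closure holds for all 36 pairs of rows")
```

Which row a product lands in depends only on which two rows its factors came from, which is exactly the behavior the trick depends on.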
Choose any two rows. Choose a number from each row, and multiply them mod 31. (That is, multiply them, and if the product is 31 or larger, divide it by 31 and keep the remainder.) Regardless of which two numbers you chose, the result will always be in the same row. For example, any two numbers chosen from rows B and D will multiply to yield a number in row E. If both numbers are chosen from row F, their product will always appear in row A. [Other articles in category /math] permanent link Sun, 02 Aug 2020

Gulliver's Travels (1726), Part III, chapter 2:
When I first told Katara about this, several years ago, instead of “the minds of these people are so taken up with intense speculations” I said they were obsessed with their phones. Now the phones themselves have become the flappers:
Our minds are not even taken up with intense speculations, but with Instagram. Dean Swift would no doubt be disgusted. [Other articles in category /book] permanent link Sat, 01 Aug 2020
How are finite fields constructed?
Here's another recent Math Stack Exchange answer I'm pleased with.
The only “reasonable” answer here is “get an undergraduate abstract algebra text and read the chapter on finite fields”. Because come on, you can't expect some random stranger to appear and write up a detailed but short explanation at your exact level of knowledge. But sometimes Internet Magic Lightning strikes and that's what you do get! And OP set themselves up to be struck by magic lightning, because you can't get a detailed but short explanation at your exact level of knowledge if you don't provide a detailed but short explanation of your exact level of knowledge — and this person did just that. They understand finite fields of prime order, but not how to construct the extension fields. No problem, I can explain that! I had special fun writing this answer because I just love constructing extensions of finite fields. (Previously: [1] [2]) For any given !!n!!, there is at most one field with !!n!! elements: only one, if !!n!! is a power of a prime number (!!2, 3, 2^2, 5, 7, 2^3, 3^2, 11, 13, \ldots!!) and none otherwise (!!6, 10, 12, 14\ldots!!). This field with !!n!! elements is written as !!\Bbb F_n!! or as !!GF(n)!!. Suppose we want to construct !!\Bbb F_n!! where !!n=p^k!!. When !!k=1!!, this is easy-peasy: take the !!n!! elements to be the integers !!0, 1, 2, \ldots, p-1!!, and the addition and multiplication are done modulo !!n!!. When !!k>1!! it is more interesting. One possible construction goes like this:
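The list of construction steps does not survive in this copy; in brief, the elements are polynomials of degree less than !!k!! with coefficients in !!\Bbb F_p!!, added coefficientwise mod !!p!! and multiplied modulo a fixed irreducible polynomial !!P!! of degree !!k!!. A minimal Python sketch of that arithmetic (polynomials as coefficient tuples, constant term first):

```python
# Arithmetic in F_p[x] / <P>: multiply two polynomials, then divide by the
# irreducible polynomial P and keep the remainder.  Polynomials are tuples
# of coefficients, constant term first, so (1, 1, 1) is x^2 + x + 1.

def poly_mulmod(a, b, P, p):
    """Product of a and b in F_p[x], reduced mod the monic polynomial P."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    while len(prod) >= len(P):           # long division, keep the remainder
        lead = prod[-1]
        for i in range(len(P)):
            idx = len(prod) - len(P) + i
            prod[idx] = (prod[idx] - lead * P[i]) % p
        prod.pop()
    return tuple(prod)

# F_4: coefficients in F_2, reduce by P = x^2 + x + 1.
P, p = (1, 1, 1), 2
print(poly_mulmod((1, 1), (1, 1), P, p))  # (x+1)(x+1) -> (0, 1), i.e. x
print(poly_mulmod((0, 1), (0, 1), P, p))  # x*x        -> (1, 1), i.e. x+1
```

The two printed products agree with the worked !!\Bbb F_{2^2}!! example that follows.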
Now we will see an example: we will construct !!\Bbb F_{2^2}!!. Here !!k=2!! and !!p=2!!. The elements will be polynomials of degree at most 1, with coefficients in !!\Bbb F_2!!. There are four elements: !!0x+0, 0x+1, 1x+0, !! and !!1x+1!!. As usual we will write these as !!0, 1, x, x+1!!. This will not be misleading. Addition is straightforward: combine like terms, remembering that !!1+1=0!! because the coefficients are in !!\Bbb F_2!!: $$\begin{array}{ccccc} + & 0 & 1 & x & x+1 \\ \hline 0 & 0 & 1 & x & x+1 \\ 1 & 1 & 0 & x+1 & x \\ x & x & x+1 & 0 & 1 \\ x+1 & x+1 & x & 1 & 0 \end{array} $$ The multiplication as always is more interesting. We need to find an irreducible polynomial !!P!!. It so happens that !!P=x^2+x+1!! is the only one that works. (If you didn't know this, you could find out easily: a reducible polynomial of degree 2 factors into two linear factors. So the reducible polynomials are !!x^2, x·(x+1) = x^2+x!!, and !!(x+1)^2 = x^2+2x+1 = x^2+1!!. That leaves only !!x^2+x+1!!.) To multiply two polynomials, we multiply them normally, then divide by !!x^2+x+1!! and keep the remainder. For example, what is !!(x+1)(x+1)!!? It's !!x^2+2x+1 = x^2 + 1!!. There is a theorem from elementary algebra (the “division theorem”) that we can find a unique quotient !!Q!! and remainder !!R!!, with the degree of !!R!! less than 2, such that !!PQ+R = x^2+1!!. In this case, !!Q=1, R=x!! works. (You should check this.) Since !!R=x!! this is our answer: !!(x+1)(x+1) = x!!. Let's try !!x·x = x^2!!. We want !!PQ+R = x^2!!, and it happens that !!Q=1, R=x+1!! works. So !!x·x = x+1!!. I strongly recommend that you calculate the multiplication table yourself. But here it is if you want to check: $$\begin{array}{ccccc} · & 0 & 1 & x & x+1 \\ \hline 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & x & x+1 \\ x & 0 & x & x+1 & 1 \\ x+1 & 0 & x+1 & 1 & x \end{array} $$ To calculate the unique field !!\Bbb F_{2^3}!! 
of order 8, you let the elements be the 8 second-degree polynomials !!0, 1, x, \ldots, x^2+x, x^2+x+1!! and instead of reducing by !!x^2+x+1!!, you reduce by !!x^3+x+1!!. (Not by !!x^3+x^2+x+1!!, because that factors as !!(x^2+1)(x+1)!!.) To calculate the unique field !!\Bbb F_{3^3}!! of order 27, you start with the 27 second-degree polynomials with coefficients in !!\{0,1,2\}!!, and you reduce by !!x^3+2x+1!! (I think). The special notation !!\Bbb F_p[x]!! means the ring of all polynomials with coefficients from !!\Bbb F_p!!. !!\langle P \rangle!! means the ring of all multiples of polynomial !!P!!. (A ring is a set with an addition, subtraction, and multiplication defined.) When we write !!\Bbb F_p[x] / \langle P\rangle!! we are constructing a thing called a “quotient” structure. This is a generalization of the process that turns the ordinary integers !!\Bbb Z!! into the modular-arithmetic integers we have been calling !!\Bbb F_p!!. To construct !!\Bbb F_p!!, we start with !!\Bbb Z!! and then agree that two elements of !!\Bbb Z!! will be considered equivalent if they differ by a multiple of !!p!!. To get !!\Bbb F_p[x] / \langle P \rangle!! we start with !!\Bbb F_p[x]!!, and then agree that elements of !!\Bbb F_p[x]!! will be considered equivalent if they differ by a multiple of !!P!!. The division theorem guarantees that of all the equivalent polynomials in a class, exactly one of them will have degree less than that of !!P!!, and that is the one we choose as a representative of its class and write into the multiplication table. This is what we are doing when we “divide by !!P!! and keep the remainder”. A particularly important example of this construction is !!\Bbb R[x] / \langle x^2 + 1\rangle!!. That is, we take the set of polynomials with real coefficients, but we consider two polynomials equivalent if they differ by a multiple of !!x^2 + 1!!. By the division theorem, each polynomial is then equivalent to some first-degree polynomial !!ax+b!!.
Let's multiply $$(ax+b)(cx+d).$$ As usual we obtain $$acx^2 + (ad+bc)x + bd.$$ From this we can subtract !!ac(x^2 + 1)!! to obtain the equivalent first-degree polynomial $$(ad+bc) x + (bd-ac).$$ Now recall that in the complex numbers, !!(b+ai)(d + ci) = (bd-ac) + (ad+bc)i!!. We have just constructed the complex numbers, with the polynomial !!x!! playing the role of !!i!!. [ Note to self: maybe write a separate article about what makes this a good answer, and how it is structured. ] [Other articles in category /math/se] permanent link Fri, 31 Jul 2020
What does it mean to expand a function “in powers of x-1”?
A recent Math Stack Exchange poster was asked to expand the function !!e^{2x}!! in powers of !!(x-1)!! and was confused about what that meant, and what the point of it was. I wrote an answer I liked, which I am reproducing here. You asked:
which is a fair question. I didn't understand this either when I first learned it. But it's important for practical engineering reasons as well as for theoretical mathematical ones. Before we go on, let's see that your proposal is the wrong answer to this question, because it is the correct answer, but to a different question. You suggested: $$e^{2x}\approx1+2\left(x-1\right)+2\left(x-1\right)^2+\frac{4}{3}\left(x-1\right)^3$$ Taking !!x=1!! we get !!e^2 \approx 1!!, which is just wrong, since actually !!e^2\approx 7.39!!. As a comment pointed out, the series you have above is for !!e^{2(x-1)}!!. But we wanted a series that adds up to !!e^{2x}!!. As you know, the Maclaurin series works here: $$e^{2x} \approx 1+2x+2x^2+\frac{4}{3}x^3$$ so why don't we just use it? Let's try !!x=1!!. We get $$e^2\approx 1 + 2 + 2 + \frac43$$ This adds to !!6+\frac13!!, but the correct answer is actually around !!7.39!! as we saw before. That is not a very accurate approximation. Maybe we need more terms? Let's try ten: $$e^{2x} \approx 1+2x+2x^2+\frac{4}{3}x^3 + \ldots + \frac{4}{2835}x^9$$ If we do this we get !!7.3887!!, which isn't too far off. But it was a lot of work! And we find that as !!x!! gets farther away from zero, the series above gets less and less accurate. For example, take !!x=3.1!!: the formula with four terms gives us !!66.14!!, which is dead wrong. Even if we use ten terms, we get !!444.3!!, which is still way off. The right answer is actually !!492.7!!. What do we do about this? Just add more terms? That could be a lot of work and it might not get us where we need to go. (Some Maclaurin series just stop working at all too far from zero, and no amount of terms will make them work.) Instead we use a different technique. Expanding the Taylor series “around !!x=a!!” gets us a different series, one that works best when !!x!! is close to !!a!! instead of when !!x!! is close to zero.
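These partial sums are quick to check by computer; a Python sketch reproducing the numbers above:

```python
# Partial sums of the Maclaurin series for e^(2x), checked against the
# numbers quoted above.
from math import exp, factorial

def maclaurin_e2x(x, terms):
    """First `terms` terms of sum_i (2x)^i / i!."""
    return sum((2 * x) ** i / factorial(i) for i in range(terms))

print(round(maclaurin_e2x(1, 4), 2))     # 6.33   (e^2 is about 7.39)
print(round(maclaurin_e2x(1, 10), 4))    # 7.3887
print(round(maclaurin_e2x(3.1, 4), 2))   # 66.14  (e^6.2 is about 492.75)
print(round(maclaurin_e2x(3.1, 10), 1))  # 444.3
```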
Your homework is to expand it around !!x=1!!, and I don't want to give away the answer, so I'll do a different example. We'll expand !!e^{2x}!! around !!x=3!!. The general formula is $$e^{2x} \approx \sum \frac{f^{(i)}(3)}{i!} (x-3)^i\tag{$\star$}\ \qquad \text{(when $x$ is close to $3$)}$$ The !!f^{(i)}(x)!! is the !!i!!'th derivative of !! e^{2x}!! , which is !!2^ie^{2x}!!, so the first few terms of the series above are: $$\begin{eqnarray} e^{2x} & \approx& e^6 + \frac{2e^6}1 (x-3) + \frac{4e^6}{2}(x-3)^2 + \frac{8e^6}{6}(x-3)^3\\ & = & e^6\left(1+ 2(x-3) + 2(x-3)^2 + \frac43(x-3)^3\right)\\ & & \qquad \text{(when $x$ is close to $3$)} \end{eqnarray} $$ The first thing to notice here is that when !!x!! is exactly !!3!!, this series is perfectly correct; we get !!e^6 = e^6!! exactly, even when we add up only the first term, and ignore the rest. That's a kind of useless answer because we already knew that !!e^6 = e^6!!. But that's not what this series is for. The whole point of this series is to tell us how different !!e^{2x}!! is from !!e^6!! when !!x!! is close to, but not equal to !!3!!. Let's see what it does at !!x=3.1!!. With only four terms we get $$\begin{eqnarray} e^{6.2} & \approx& e^6(1 + 2(0.1) + 2(0.1)^2 + \frac43(0.1)^3)\\ & = & e^6 \cdot 1.22133 \\ & \approx & 492.72 \end{eqnarray}$$ which is very close to the correct answer, which is !!492.75!!. And that's with only four terms. Even if we didn't know an exact value for !!e^6!!, we could find out that !!e^{6.2}!! is about !!22.133\%!! larger, with hardly any calculation. Why did this work so well? If you look at the expression !!(\star)!! you can see: The terms of the series all have factors of the form !!(x-3)^i!!. When !!x=3.1!!, these are !!(0.1)^i!!, which become very small very quickly as !!i!! increases. Because the later terms of the series are very small, they don't affect the final sum, and if we leave them out, we won't mess up the answer too much.
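A quick numerical check of the four-term computation above (a Python sketch, using !!f^{(i)}(3) = 2^i e^6!!):

```python
# Four terms of the Taylor series of e^(2x) around x = 3, using
# f^(i)(3) = 2^i e^6, evaluated at x = 3.1.
from math import exp, factorial

def taylor_around_3(x, terms):
    return sum(2 ** i * exp(6) / factorial(i) * (x - 3) ** i
               for i in range(terms))

print(round(taylor_around_3(3.1, 4), 2))  # 492.72
print(round(exp(6.2), 2))                 # 492.75, the exact value
```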
So the series works well, producing accurate results from only a few terms, when !!x!! is close to !!3!!. But in the Maclaurin series, which is around !!x=0!!, those !!(x-3)^i!! terms are !!x^i!! terms instead, and when !!x=3.1!!, they are not small, they're very large! They get bigger as !!i!! increases, and very quickly. (The !! i! !! in the denominator wins, eventually, but that doesn't happen for many terms.) If we leave out these many large terms, we get the wrong results. The short answer to your question is:
[Other articles in category /math/se] permanent link Wed, 29 Jul 2020

Toph left the cap off one of her fancy art markers and it dried out, so I went to get her a replacement. The marker costs $5.85, plus tax, and the web site wanted a $5.95 shipping fee. Disgusted, I resolved to take my business elsewhere. On Wednesday I drove over to a local art-supply store to get the marker. After taxes the marker was somehow around $8.50, but I also had to pay $1.90 for parking. So if there was a win there, it was a very small one. But also, I messed up the parking payment app, which has maybe the worst UI of any phone app I've ever used. The result was a $36 parking ticket. Lesson learned. I hope. [Other articles in category /oops] permanent link Today I learned:
Here is a video of Dan Aykroyd discussing the name, with Sherman. [Other articles in category /bio] permanent link Wed, 15 Jul 2020
More trivia about megafauna and poisonous plants
A couple of people expressed disappointment with yesterday's article, which asked “were giant ground sloths immune to poison ivy?”, but then failed to deliver on the implied promise. I hope today's article will make up for that.

Elephants

I said:
David Formosa points out what should have been obvious: elephants are megafauna, elephants live where mangoes grow (both in Africa and in India), elephants love eating mangoes [1] [2] [3], and, not obvious at all… Elephants are immune to poison ivy!
It's sad that we no longer have megatherium. But we do have elephants, which is pretty awesome.

Idiot fruit

The idiot fruit is just another one of those legendarily awful creatures that seem to infest every corner of Australia (see also: box jellyfish, stonefish, gympie-gympie, etc.); Wikipedia says:
At present the seeds are mostly dispersed by gravity. The plant is believed to be an evolutionary anachronism. What Pleistocene megafauna formerly dispersed the poisonous seeds of the idiot fruit? A wombat. A six-foot-tall wombat. I am speechless with delight. [Other articles in category /bio] permanent link Tue, 14 Jul 2020
Were giant ground sloths immune to poison ivy?
The skin of the mango fruit contains urushiol, the same irritating chemical that is found in poison ivy. But why? From the mango's point of view, the whole point of the mango fruit is to get someone to come along and eat it, so that they will leave the seed somewhere else. Poisoning the skin seems counterproductive. An analogous case is the chili pepper, which contains an irritating chemical, capsaicin. I think the answer here is believed to be that while capsaicin irritates mammals, birds are unaffected. The chili's intended target is birds; you can tell from the small seeds, which are the right size to be pooped out by birds. So chilis have a chemical that encourages mammals to leave the fruit in place for birds. What's the intended target for the mango fruit? Who's going to poop out a seed the size of a mango pit? You'd need a very large animal, large enough to swallow a whole mango. There aren't many of these now, but that's because they became extinct at the end of the Pleistocene epoch: woolly mammoths and rhinoceroses, huge crocodiles, giant ground sloths, and so on. We may have eaten the animals themselves, but we seem to have quite a lot of fruits around that evolved to have their seeds dispersed by Pleistocene megafauna that are now extinct. So my first thought was, maybe the mango is expecting to be gobbled up by a giant ground sloth, and have its giant seed pooped out elsewhere. And perhaps its urushiol-laden skin makes it unpalatable to smaller animals that might not disperse the seeds as widely, but the giant ground sloth is immune. (Similarly, I'm told that goats are immune to urushiol, and devour poison ivy as they do everything else.) Well, maybe this theory is partly correct, but even if so, the animal definitely wasn't a giant ground sloth, because those lived only in South America, whereas the mango is native to South Asia. Ground sloths and avocados, yes; mangoes, no.
Still the theory seems reasonable, except that mangoes are tropical fruit and I haven't been able to find any examples of Pleistocene megafauna that lived in the tropics. Still I didn't look very hard. Wikipedia has an article on evolutionary anachronisms that lists a great many plants, but not the mango. [ Addendum: I've eaten many mangoes but never noticed any irritation from the peel. I speculate that cultivated mangoes are varieties that have been bred to contain little or no urushiol, or that there is a postharvest process that removes or inactivates the urushiol, or both. ] [ Addendum 20200715: I know this article was a little disappointing and that it does not resolve the question in the title. Sorry. But I wrote a followup that you might enjoy anyway. ] [Other articles in category /bio] permanent link Wed, 08 Jul 2020

Ron Graham has died. He had a good run. When I check out I will probably not be as accomplished or as missed as Graham, even if I make it to 84. I met Graham once and he was very nice to me, as he apparently was to everyone. I was planning to write up a reminiscence of the time, but I find I've already done it so you can read that if you care. Graham's little book Rudiments of Ramsey Theory made a big impression on me when I was an undergraduate. Chapter 1, if I remember correctly, is a large collection of examples, which suited me fine. Chapter 2 begins by introducing a certain notation of Erdős and Rado: !!\left[{\Bbb N\atop k}\right]!! is the family of subsets of !!\Bbb N!! of size !!k!!, and $$\left[{\Bbb N\atop k}\right] \to \left[{\Bbb N\atop k}\right]_r$$ is an abbreviation of the statement that for any !!r!!-coloring of members of !!\left[{\Bbb N\atop k}\right]!! there is always an infinite subset !!S\subset \Bbb N!! for which every member of !!\left[{S\atop k}\right]!! is the same color. I still do not find this notation perspicuous, and at the time, with much less experience, I was boggled.
In the midst of my bogglement I was hit with the next sentence, which completely derailed me: After this I could no longer think about the mathematics, but only about the sentence. Outside the mathematical community Graham is probably best-known for juggling, or for Graham's number, which Wikipedia describes:
One of my better Math Stack Exchange posts was in answer to the question Graham's Number : Why so big?. I love the phrasing of this question! And that, even with the strange phrasing, there is an answer! This type of huge number is quite typical in proofs of Ramsey theory, and I answered in detail. The sense of humor that led Graham to write “danger of no confusion” is very much on display in the paper that gave us Graham's number. If you are wondering about Graham's number, check out my post. [Other articles in category /math] permanent link
Addendum to “Weirdos during the Depression”
[ Previously ] Ran Prieur had a take on this that I thought was insightful:
[Other articles in category /addenda] permanent link Mon, 06 Jul 2020
Weird constants in math problems
Michael Lugo recently considered a problem involving the allocation of swimmers to swim lanes at random, ending with:
I love when stuff like this happens. The computer is great at doing a quick random simulation and getting you some weird number, and you have no idea what it really means. But mathematical technique can unmask the weird number and learn its true identity. (“It was Old Man Haskins all along!”) A couple of years back Math Stack Exchange had Expected Number and Size of Contiguously Filled Bins, and although it wasn't exactly what was asked, I ended up looking into this question: We take !!n!! balls and throw them at random into !!n!! bins that are lined up in a row. A maximal contiguous sequence of all-empty or all-nonempty bins is called a “cluster”. For example, here we have 13 balls that I placed randomly into 13 bins: In this example, there are 8 clusters, of sizes 1, 1, 1, 1, 4, 1, 3, 1. Is this typical? What's the expected cluster size? It's easy to use Monte Carlo methods and find that when !!n!! is large, the average cluster size is approximately !!2.15013!!. Do you recognize this number? I didn't. But it's not hard to do the calculation analytically and discover that the reason it's approximately !!2.15013!! is that the actual answer is $$\frac1{2(e^{-1} - e^{-2})}$$ which is approximately !!2.15013!!. Math is awesome and wonderful. (Incidentally, I tried the Inverse Symbolic Calculator just now, but it was no help. It's also not in Plouffe's Miscellaneous Mathematical Constants.) [ Addendum 20200707: WolframAlpha does correctly identify the !!2.15013!! constant. ] [Other articles in category /math] permanent link
Useful and informative article about privately funded border wall
The Philadelphia Inquirer's daily email newsletter referred me to this excellent article, by Jeremy Schwartz and Perla Trevizo. “Wow!” I said. “This is way better than the Inquirer's usual reporting. I wonder what that means?” Turns out it meant that the Inquirer was not responsible for the article. But thanks for the pointer, Inquirer folks! The article is full of legal, political, and engineering details about why it's harder to build a border wall than I would have expected. I learned a lot! I had known about the issue that most of the land is privately owned. But I hadn't considered that there are international water-use treaties that come into play if the wall is built too close to the Rio Grande, or that the wall would be on the river's floodplain. (Or that the Rio Grande even had a floodplain.) The article: “He built a privately funded border wall. It's already at risk of falling down if not fixed”, courtesy of The Texas Tribune and ProPublica. [Other articles in category /tech] permanent link Wed, 24 Jun 2020
Geeking out over arbitrary boundaries
Reddit today had this delightful map, drawn by Peter Klumpenhower, of “the largest city in each 10-by-10 degree area of latitude-longitude in the world”: Almost every square is a kind of puzzle! Perhaps it is surprising that Philadelphia is there? Clearly New York dominates its square, but Philadelphia is just barely across the border in the next square south: the 40th parallel runs right through North Philadelphia. (See map at right.) Philadelphia City Hall (the black dot on the map) is at 39.9524 north latitude. This reminds me of the time I was visiting Tom Christiansen in Boulder, Colorado. We were driving on Baseline Road and he remarked that it was so named because it runs exactly along the 40th parallel. Then he said “that's rather farther south than where you live”. And I said no, the 40th parallel also runs through Philadelphia! Other noteworthy cities at this latitude include Madrid, Ankara, Yerevan, and Beijing. Anyway speaking of Boulder, the appearance of Fort Collins was the first puzzle I noticed. If you look at the U.S. cities that appear on the map, you see most of the Usual Suspects: New York, Philadelphia, Chicago, Los Angeles, Seattle, Dallas, Houston. And then you have Fort Collins. “Fort Collins?” I said. “Why not Denver? Or even Boulder?” Boulder, it turns out, is smaller than Fort Collins. (I did not know this.) And Denver, being on the other side of Baseline Road, doesn't compete with Fort Collins. Everything south of Baseline Road, including Denver, is shut out by Ciudad Juárez, México (population 1.5 million). There is a Chinese version of this. The Chinese cities on the map include the big Chinese cities: Shanghai, Beijing, Chongqing, Shenzhen, Chengdu, Shenyang. A couple of cities in Inner Mongolia, analogous to the appearance of Boise on the U.S. map. And… Wenchang. What the heck is Wenchang? It's the county seat of Wenchang County, in Hainan, not even as important as Fort Collins. China has 352 cities with populations over 125,000.
Wenchang isn't one of them. According to the list I found, Wenchang is the 379th-largest city in China. (Fort Collins, by the way, is 159th-largest in the United States. For the 379th, think of Sugar Land, Texas or Cicero, Illinois.)

Since we're in China, please notice how close Beijing is to the 40th parallel. Ten kilometers farther north and it would have displaced Boise — sorry, I meant Hohhot — and ceded its box to Tianjin (pop. 15.6 million). Similarly (but in reverse), had Philadelphia been a bit farther north, it would have disappeared into New York's box, and yielded its own box to Baltimore or Washington or some other hamlet.
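For concreteness, here is how the 10-degree boxes work out in code. This is a sketch of my own, not part of the map project: the `box` function and its southwest-corner labeling are illustrative assumptions, and the coordinates are approximate.

```python
from math import floor

def box(lat, lon):
    """Return the 10-by-10-degree box containing a point, identified
    here by the latitude and longitude of its southwest corner."""
    return (10 * floor(lat / 10), 10 * floor(lon / 10))

# Philadelphia City Hall, at about 39.95 N, 75.16 W, is just south of the
# 40th parallel, so it lands in a different box from New York City
# (about 40.71 N, 74.01 W), even though the two cities are only about
# eighty miles apart.
print(box(39.9524, -75.1636))   # Philadelphia's box: (30, -80)
print(box(40.7128, -74.0060))   # New York's box:     (40, -80)
```

The same arithmetic shows why ten kilometers matter so much to Beijing: a city's box flips the instant its latitude crosses a multiple of ten.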
But what the heck happened to Nairobi? (Nairobi is the ninth-largest city in Africa. Dar es Salaam is the sixth-largest and is in the same box.) What the heck happened to St. Petersburg? (At 59.938N, 30.309E, it is just barely inside the same box as Moscow. The map is quite distorted in this region.) What the heck happened to Tashkent? (It's right where it should be. I just missed it somehow.)
[ Addendum: Reddit discussion has pointed out that Clifden (pop. 1,597), in western Ireland, is not the largest settlement in its box. There are two slivers of Ireland in that box, and Dingle, four hours away in County Kerry, has a population of 2,050. The Reddit discussion has a few other corrections. The most important is probably that Caracas should beat out Santo Domingo. M. Klumpenhower says that they will send me a revised version of the map. ] [ Thanks to Hacker News user oefrha for pointing out that Hohhot and Baotou are in China, not Mongolia as I originally said. ] [ Addendum 20200627: M. Klumpenhower has sent me a revised map, which now appears in place of the old one. It corrects the errors mentioned above. Here's a graphic that shows the differences. But Walt Mankowski pointed out another possible error: The box with Kochi (southern India) should probably be owned by Colombo. ] [Other articles in category /geo] permanent link Thu, 11 Jun 2020
Malicious trojan horse code hidden in large patches
This article isn't going to be fun to write but I'm going to push through it because I think it's genuinely important. How often have you heard me say that? A couple of weeks ago the Insurrection Act of 1807 was in the news. I noticed that the Wikipedia article about it contained this very strange-seeming claim:
“What the heck is a ‘secret amendment’?” I asked myself. “Secret from whom? Sounds like Wikipedia crackpottery.” But there was a citation, so I could look to see what it said. The citation is Hoffmeister, Thaddeus (2010). "An Insurrection Act for the Twenty-First Century". Stetson Law Review. 39: 898. Sometimes Wikipedia claims will be accompanied by an authoritative-seeming citation — often lacking a page number, as this one did at the time — that doesn't actually support the claim. So I checked. But Hoffmeister did indeed make that disturbing claim:
I had sometimes wondered if large, complex acts such as HIPAA or the omnibus budget acts sometimes contained provisions that were smuggled into law without anyone noticing. I hoped that someone somewhere was paying attention, so that it couldn't happen. But apparently the answer is that it does. [Other articles in category /law] permanent link Wed, 10 Jun 2020
Middle English fonts and orthography
In case you're interested, here's what the Caxton “eggys” anecdote looked like originally:
[Transcription of the blackletter passage, twelve numbered lines beginning “In my dayes happened that…”]

It takes a while to get used to the dense blackletter font, and I think it will help to know the following:
[Other articles in category /IT/typo] permanent link Tue, 09 Jun 2020
The two-bit huckster in medieval Italy
The eighth story on the seventh day of the Decameron concerns a Monna Sismonda, a young gentlewoman who is married to a merchant. She contrives to cheat on him, and then when her husband Arriguccio catches her, she manages to deflect the blame through a cunning series of lies. Arriguccio summons Sismonda's mother and brothers to witness her misbehavior, but when Sismonda seems to refute his claims, they heap abuse on him. Sismonda's mother rants about merchants with noble pretensions who marry above their station. My English translation (by G.H. McWilliam, 1972) included this striking phrase:
“Tuppeny-ha’penny” seemed rather odd in the context of medieval Florentines. It put me in mind of Douglas Hofstadter's complaint about an English translation of Crime and Punishment that rendered “S[toliarny] Pereulok” as “Carpenter’s Lane”:
Intrigued by McWilliam's choice, I went to look at the other translation I had handy, John Payne's of 1886, as adapted by Cormac Ó Cuilleanáin in 2004:
This seemed even more jarring, because Payne was English and Ó Cuilleanáin is Irish, but “two-bit” is 100% American. I wondered what the original had said. Brown University has the Italian text online, so I didn't even have to go into the house to find out the answer:
In the coinage of the time, the denier or denarius was the penny, equal in value (at least notionally) to !!\frac1{240}!! of a pound (lira) of silver. It is the reason that pre-decimal British currency wrote fourpence as “4d.”. I think ‘-uolo’ is a diminutive suffix, so that Sismonda's mother is calling Arriguccio a fourpenny merchantling. McWilliam’s and Ó Cuilleanáin’s translations are looking pretty good! I judged them too hastily.

While writing this up I was bothered by something else. I decided it was impossible that John Payne, in England in 1886, had ever written the words “two-bit huckster”. So I hunted up the original Payne translation from which Ó Cuilleanáin had adapted his version. I was only half right:
“Four-farthing” is a quite literal translation of the original Italian, a farthing being an old-style English coin worth one-fourth of a penny. I was surprised to see “huckster”, which I would have guessed was 19th-century American slang. But my guess was completely wrong: “huckster” is Middle English, going back at least to the 14th century. In the Payne edition, there's a footnote attached to “four-farthing” that explains:
which is what McWilliam had. I don't know if the footnote is Payne's or belongs to the 1925 editor. The Internet Archive's copy of the Payne translation was published in 1925, with naughty illustrations by Clara Tice. Wikipedia says “According to herself and the New York Times, in 1908 Tice was the first woman in Greenwich Village to bob her hair.” [ Addendum 20210331: It took me until now to realize that ‘-uolo’ is probably akin to the ‘-ole’ suffix one finds in French words like casserole and profiterole, and derived from the Latin diminutive suffix ‘-ulus’ that one finds in calculus and annulus. ] [Other articles in category /lang] permanent link Mon, 08 Jun 2020
More about Middle English and related issues
Quite a few people wrote me delightful letters about my recent article about how to read Middle English.

Corrections
Regarding Old English / Anglo-Saxon
Regarding Dutch
Regarding German
Final note

The previous article about weirdos during the Depression hit #1 on Hacker News and was viewed 60,000 times. But I consider the Middle English article much more successful, because I very much prefer receiving interesting and thoughtful messages from six Gentle Readers to any amount of attention from Hacker News. Thanks to everyone who wrote, and also to everyone who read without writing. [Other articles in category /lang] permanent link Fri, 05 Jun 2020
You can learn to read Middle English
In a recent article I quoted this bit of Middle English:
and I said:
Yup! If you can read English, you can learn to read Middle English. It looks like a foreign language, but it's not. Not entirely foreign, anyway. There are tricks you can pick up. The tricks get you maybe 90 or 95% of the way there, at least for later texts, say after 1350 or so.

Disclaimer: I have never studied Middle English. This is just stuff I've picked up on my own. Any factual claims in this article might be 100% wrong. Nevertheless I have pretty good success reading Middle English, and this is how I do it.

Some quick historical notes

It helps to understand why Middle English is the way it is. English started out as German. Old English, also called Anglo-Saxon, really is a foreign language, and requires serious study. I don't think an anglophone can learn to read it with mere tricks. Over the centuries Old English diverged from German. In 1066 the Normans invaded England and the English language got a thick layer of French applied on top. Middle English is that mashup of English and French. It's still German underneath, but a lot of the spelling and vocabulary is Frenchified. This is good, because a lot of that Frenchification is still in Modern English, so it will be familiar.

For a long time each little bit of England had its own little dialect. The printing press was introduced in the late 15th century, and at that point, because most books were published in or around London, the Midlands dialect used there became the standard, and the other dialects started to disappear. [ Addendum 20200606: The part about Midlands dialect is right. The part about London is wrong. London is not in the Midlands. ] With the introduction of printing, the spelling, which had been fluid and do-as-you-please, became frozen. Unfortunately, during the 15th century, the Midlands dialect had been undergoing a change in pronunciation now called the Great Vowel Shift, and many words froze with spellings and pronunciations that didn't match.
This is why English vowel spellings are such a mess. For example, why are “meat” and “meet” spelled differently but pronounced the same? Why are “read” (present tense) and “read” (past tense) pronounced differently but spelled the same? In Old English, it made more sense. Modern English is a snapshot of the moment in the middle of a move when half your stuff is sitting in boxes on the sidewalk. By the end of the 17th century things had settled down to the spelling mess that is Modern English.

The letters are a little funny

Depending on when it was written and by whom, you might see some of these obsolete letters:
Some familiar letters behave a little differently:
The quotation I discussed in the earlier article looks like this:
Daunting, right? But it's not as bad as it looks. Let's get rid of the yoghs and thorns:
The spelling is a little funny

Here's the big secret of reading Middle English: it sounds better than it looks. If you're not sure what a word is, try reading it aloud. For example, what's “alle men”? Oh, it's just “all men”, that was easy. What's “youre dettes”? It turns out it's “your debts”. That's not much of a disguise! It would be a stretch to call this “translation”.
“Yelde” and “trybut” are a little trickier. As languages change, vowels nearly always change faster than consonants. Vowels in Middle English can be rather different from their modern counterparts; consonants less so. So if you can't figure out a word, try mashing on the vowels a little. For example, “much” is usually spelled “moche”. With a little squinting you might be able to turn “trybut” into “tribute”, which is what it is. The first “tribute” is a noun, the second a verb. The construction is analogous to “if you have a drink, drink!” I had to look up “yelde”, but after I had I felt a little silly, because it's “yield”.
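These letter tricks are mechanical enough to automate. Here is a toy sketch of my own, not a real transliterator: the substitution table is a rough assumption, since a real yogh can stand for ‘y’ or ‘gh’ depending on the word, and nothing here touches the shifted vowels.

```python
def modernize(text):
    """A crude first pass at Middle English text: replace the obsolete
    letters with their usual modern equivalents.  This is only a
    heuristic; yogh is rendered as 'y' but sometimes stands for 'gh'."""
    substitutions = [
        ("þ", "th"), ("Þ", "Th"),   # thorn
        ("ȝ", "y"),  ("Ȝ", "Y"),    # yogh
    ]
    for old, new in substitutions:
        text = text.replace(old, new)
    return text

print(modernize("ȝelde ȝe to alle men ȝoure dettes"))
# → yelde ye to alle men youre dettes
```

Even this much already turns an alien-looking line into something you can sound out; the vowel-mashing still has to happen in your head.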
We'll deal with “schuleth” a little later.

The word order is pretty much the same

That's because the basic grammar of English is still mostly the same as German. One thing English now does differently from German is that we no longer put the main verb at the end of the sentence. If a Middle English sentence has a verb hanging at the end, it's probably the main verb. Just interpret it as if you had heard it from Yoda.

The words are a little bit old-fashioned

…but many of them are old-fashioned in a way you might be familiar with. For example, you probably know what “ye” means: it's “you”, like in “hear ye, hear ye!” or “o ye of little faith!”. Verbs in second person singular end in ‘st’; in third person singular, ‘th’. So for example:
In particular, the forms of “do” are: I do, thou dost, he doth. Some words that were common in Middle English are just gone. You'll probably need to consult a dictionary at some point. The Oxford English Dictionary is great if you have a subscription. The University of Michigan has a dictionary of Middle English that you can use for free. Here are a couple of common words that come to mind:
Verbs change form to indicate tense

In German (and the proto-language from which German descended), verb tense is indicated by a change in the vowel. Sometimes this persists in modern English. For example, it's why we have “drink, drank, drunk” and “sleep, slept”. In Modern German this is more common than in Modern English, and in Middle English it's also more common than it is now. Past tense usually gets an ‘ed’ on the end, like in Modern English. The last mystery word here is “schuleth”:
This is the hard word here. The first thing to know is that “sch” is always pronounced “sh” as it still is in German, never with a hard sound like “school” or “schedule”. What's “schuleth” then? Maybe something to do with schools? It turns out not. This is a form of “shall, should” but in this context it has its old meaning, now lost, of “owe”. If I hadn't run across this while researching the history of the word “should”, I wouldn't have known what it was, and would have had to look it up. But notice that it does follow a typical Middle English pattern: the consonants ‘sh’ and ‘l’ stayed the same, while the vowels changed. In the modern word “should” we have a version of “schulen” with the past tense indicated by ‘d’ just like usual. “Schuleth” goes with ‘ye’ so it ought to be ‘schulest’. I don't know what's up with that. [ Addendum 20200608: “ye” is plural, and ‘st’ only goes on singular verbs. ]

Prose example

Let's try Wycliffe's Bible, which was written around 1380ish. This is Matthew 6:1:
Most of this reads right off:
“Take heed” is a bit archaic but still good English; it means “Be careful”. Reading “riytwisnesse” aloud we can guess that it is actually “righteousness”. (Remember that that ‘y’ started out as a ‘gh’.) “Schulen“ we've already seen; here it just means “shall”. I had to look up “meede”, which seems to have disappeared since 1380. It meant “reward”, and that's exactly how the NIV translates it:
That was fun, let's do another:
The same tricks work for most of this. “Whanne” is “when”. We still have the word “almes”, now spelled “alms”: it's the handout you give to beggars. The “sch” in “worschipid” is pronounced like ‘sh’ so it's “worshipped”. “Resseyued” looks hard, but if you remember to try reading the ‘u’ as a ‘v’ and the ‘y’ as an ‘i’, you get “resseived” which is just one letter off of “received”. “Meede” we just learned. So this is:
Now we have the general meaning and some of the other words become clearer. What's “trumpe”? It's “trumpeting”. When you give to the needy, don't you trumpet before you, as the hypocrites do. So even though I don't know what “nyle” is exactly, the context makes it clear that it's something like “do not”. Negative words often began with ‘n’ just as they do now (no, nor, not, never, neither, nothing, etc.). Looking it up, I find that it's more usually spelled “nill”. This word is no longer used; it means the opposite of “will”. (It still appears in the phrase “willy-nilly”, which means “whether you want to or not”.) “Sothely” means “truly”. “Soth” or “sooth” is an archaic word for truth, like in “soothsayer”, a truth-speaker. Here's the NIV translation:
Poetic example

Let's try something a little harder, a random sentence from The Canterbury Tales, written around 1390. Wish me luck!
The main difficulty is that it's poetic language, which might be a bit obscure even in Modern English. But first let's fix the spelling of the obvious parts:
The University of Michigan dictionary can be a bit tricky to use. For example, if you look up “meede” it won't find it; it's listed under “mede”. If you don't find the word you want as a headword, try doing full-text search. Anyway, hoppen is in there. It can mean “hopping”, but in this poetic context it means dancing.
“Pipe” is a verb here, it means (even now) to play the pipes.

You try!

William Caxton is thought to have been the first person to print and sell books in England. This anecdote of his is one of my favorites. He wrote it in the late 1490s, at the very tail end of Middle English:
A “mercer” is a merchant, and “taryed” is now spelled “tarried”, which is now uncommon and means to stay somewhere temporarily. I think the only other part of this that doesn't succumb to the tricks in this article is the place names:
Caxton is bemoaning the difficulties of translating into “English” in 1490, at a time when English was still a collection of local dialects. He ends the anecdote by asking:
Thanks to Caxton and those that followed him, we can answer: definitely “egges”. [ Addenda 20200608: More about Middle English. ] [Other articles in category /lang] permanent link Wed, 03 Jun 2020Lately I've been rereading To Kill a Mockingbird. There's an episode in which the kids meet Mr. Dolphus Raymond, who is drunk all the time, and who lives with his black spouse and their mixedrace kids. The ⸢respectable⸣ white folks won't associate with him. Scout and Jem see him ride into town so drunk he can barely sit on his horse. He is noted for always carrying around a paper bag with a coke bottle filled with moonshine. At one point Mr. Dolphus Raymond offers Dill a drink out of his coke bottle, and Dill is surprised to discover that it actually contains Coke. Mr. Raymond explains he is not actually a drunk, he only pretends to be one so that the ⸢respectable⸣ people will write him off, stay off his back about his black spouse and kids, and leave him alone. If they think it's because he's an alcoholic they can fit it into their worldview and let it go, which they wouldn't do if they suspected the truth, which is that it's his choice. There's a whole chapter in Cannery Row on the same theme! Doc has a beard, and people are always asking him why he has a beard. Doc learned a long time ago that it makes people angry and suspicious if he tells the truth, which is he has a beard because he likes having a beard. So he's in the habit of explaining that the beard covers up an ugly scar. Then people are okay with it and even sympathetic. (There is no scar.) Doc has a whim to try drinking a beer milkshake, and when he orders one the waitress is suspicious and wary until he explains to her that he has a stomach ulcer, and his doctor has ordered him to drink beer milkshakes daily. Then she is sympathetic. She says it's a shame about the ulcer, and gets him the milkshake, instead of kicking him out for being a weirdo. Both books are set at the same time. 
Cannery Row was published in 1945 but is set during the Depression; To Kill a Mockingbird was published in 1960, and its main events take place in 1935. I think it must be a lot easier to be a weird misfit now than it was in 1935. [ Sort of related. ] [ 20200708: Addendum ] [Other articles in category /book] permanent link Sun, 31 May 2020
Reordering git commits (not patches) with interactive rebase
This is the third article in a series. ([1] [2]) You may want to reread the earlier ones, which were in 2015. I'll try to summarize. The original issue considered the implementation of some program feature X. In commit A, the feature had not yet been implemented. In the next commit C it had been implemented, and was enabled. Then there was a third commit, B, that left feature X implemented but disabled it:
but what I wanted was to have the commits in this order:
so that when X first appeared in the history, it was disabled, and then a following commit enabled it. The first article in the series began:
Using interactive rebase here “to reorder B and C” will not work, because it would reorder the patches, not the commits themselves. My original articles described a way around this, using plumbing commands. Recently, Curtis Dunham wrote to me with a much better idea that uses the interactive rebase UI to accomplish the same thing much more cleanly.

As I said before, just switching the order of the two pick commands will not work. M. Dunham's suggestion is, instead, to “take a snapshot of that commit”. It's simple to implement, although there needs to be a bit of cleanup afterward to get the working tree back into sync with the new index.
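The snapshot idea can be sketched in a few lines. This is my own reconstruction of the general technique, not M. Dunham's actual code; the snapshot helper and its name are hypothetical.

```python
import subprocess

def git(*args):
    """Run a git command, raising an exception if it exits nonzero."""
    subprocess.run(("git",) + args, check=True)

def snapshot(commit):
    """Create a new commit whose tree is identical to `commit`'s tree,
    regardless of what patches lie in between.  (A sketch of the
    snapshot idea; not M. Dunham's actual implementation.)"""
    git("read-tree", commit)     # load commit's tree into the index
    git("commit", "-C", commit)  # commit the index, reusing commit's message
    git("reset", "--hard")       # cleanup: re-sync the working tree
```

One could invoke such a helper at the appropriate point during an interactive rebase, for example from an exec line in the instruction list, so that the new history records the snapshots in the desired order rather than replaying the conflicting patches.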
M. Dunham's actual implementation also takes care of that cleanup. I hadn't known about this approach before. Thank you again, M. Dunham! [Other articles in category /prog] permanent link Sat, 30 May 2020

Rob Hoelz mentioned that (one of?) the Nenets languages has different verb moods for concepts that in English are both indicated by “should” and “ought”:
Examples of the former being:
and of the latter:
I have often wished that English were clearer about distinguishing these. For example, someone will say to me
and it is not always clear whether they are advising me how to fix the problem, or lamenting that it won't. A similar issue applies to the phrase “ought to”. As far as I can tell the two phrases are completely interchangeable, and both share the same ambiguities. I want to suggest that everyone start using “should” for the deontic version (“you should go now”) and “ought to” for the predictive version (“you ought to be able to see the lighthouse from here”) and never vice versa, but that's obviously a lost cause. I think the original meaning of both forms is more deontic. Both words originally meant monetary debts or obligations. With ought this is obvious, at least once it's pointed out, because it's so similar in spelling and pronunciation to owed. (Compare pass, passed, past with owe, owed, ought.) For shall, the Big Dictionary has several citations, none later than 1425 CE. One is a Middle English version of Romans 13:7:
As often with Middle English, this is easier than it looks at first. In fact this one is so much easier than it looks that it might become my go-to example. The only strange word is schuleþ itself. Here's my almost word-for-word translation:
The NIV translates it like this:
Anyway, this is a digression. I wanted to talk about different kinds of should. The Big Dictionary distinguishes two types but mixes them together in its listing, saying:
The first of these seems to correspond to what I was calling the deontic form (“people should be kinder”) and the second to the predictive form (“you should be able to catch that train”). But their quotations reveal several other shades of meaning that don't seem exactly like either of these:
This is not any of duty, obligation, propriety, expectation, likelihood, or prediction. But it is exactly M. Hoelz’ “state of the world as it currently isn't”. Similarly:
Again, this isn't (necessarily) duty, obligation, or propriety. It's just a wish contrary to fact. The OED does give several other related shades of “should” which are not always easy to distinguish. For example, its definition 18b says
and gives as an example
Compare “We should be able to see Barbados from here”. Its 18c is “you should have seen that fight!” which again is of the wish-contrary-to-fact type; they even gloss it as “I wish you could have…”. Another distinction I would like to have in English is between “should” used for mere suggestion in contrast with the deontic use. For example
(suggestion) versus
(obligation). Say this distinction was marked in English. Then your mom might say to you in the morning
and later, when you still haven't washed the dishes:
meaning that you'll be in trouble if you fail in your dish washing duties. When my kids were small I started to notice how poorly this sort of thing was communicated by many parents. They would regularly say things like
(the second one in particular) when what they really meant, it seemed to me, was “I want you to be quiet now”. [ I didn't mean to end the article there, but after accidentally publishing it I decided it was as good a place as any to stop. ] [Other articles in category /lang] permanent link [ Previously ] John Gleeson points out that my optical illusion (below left) appears to be a variation on the classic Poggendorff illusion (below right): [Other articles in category /brain] permanent link Fri, 29 May 2020
Infinite zeroes with one on the end
I recently complained about people who claim:
When I read something like this, the first thing that usually comes to mind is the ordinal number !!\omega+1!!, which is a canonical representative of this type of ordering. But I think in the future I'll bring up this much more elementary example: $$ S = \biggl\{1, \frac12, \frac13, \frac14, \ldots, 0\biggr\} $$ Even a middle school student can understand this, and I doubt they'd be able to argue seriously that it doesn't have an infinite sequence of elements that is followed by yet another final element. Then we could define the supposedly problematic !!0, 0, 0, \ldots, 1!! thingy as a map from !!S!!, just as an ordinary sequence is defined as a map from !!\Bbb N!!. [ Related: A familiar set with an unexpected order type. ] [Other articles in category /math] permanent link A couple of years ago I wrote an article about a topological counterexample, which included this illustration: Since then, every time I have looked at this illustration, I have noticed that the blue segment is drawn wrong. It is intended to be a portion of a radius, so if I extend it toward the center of the circle it ought to intersect the center point. But clearly, the slope is too high and the extension passes significantly below the center. Or so it appears. I have checked the original SVG, using Inkscape to extend the blue segment: the extension passes through the center of the circle. I have also checked the rendered version of the illustration, by holding a straightedge up to the screen. The segment is pointing in the correct direction. So I know it's right, but still I'm jarred every time I see it, because to me it looks so clearly wrong. [ Addendum 20200530: John Gleeson has pointed out the similarity to the Poggendorff illusion. ] [Other articles in category /brain] permanent link Thu, 21 May 2020There have been several reports of the theft of catalytic converters in our neighborhood, the thieves going under the car and cutting the whole thing out. 
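Here is a tiny illustration of that last idea, a sketch of my own making (the names seq and positions are invented for the example):

```python
from fractions import Fraction

# The index set S = {1, 1/2, 1/3, ...} together with 0.  Read from
# largest to smallest, S has infinitely many elements followed by one
# final element, 0: the order type omega + 1.
def seq(x):
    """The "sequence" 0, 0, 0, ..., 1 as a map from S: it is 0 at every
    1/n and 1 at the final position, 0."""
    return 1 if x == 0 else 0

# The first few positions, in order, and then the last one:
positions = [Fraction(1, n) for n in range(1, 6)] + [Fraction(0)]
print([seq(x) for x in positions])   # → [0, 0, 0, 0, 0, 1]
```

Nothing mysterious is happening: a sequence indexed by the natural numbers is a map from !!\Bbb N!!, and this is the same thing with a slightly different index set.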
The catalytic converter contains a few grams of some precious metal, typically platinum, and this can be recycled by a sufficiently crooked junk dealer. Why weren't these being stolen before? I have a theory. The catalytic converter contains only a few grams of platinum, worth around $30. Crawling under a car to cut one out is a lot of trouble and risk to go to for $30. I think the stay-at-home order has put a lot of burglars and housebreakers out of work. People aren't leaving their houses and in particular they aren't going on vacations. So thieves have to steal what they can get. [ Addendum 20200522: An initial glance at the official crime statistics suggests that my theory is wrong. I'll try to make a report over the weekend. ] [Other articles in category /misc] permanent link Thu, 07 May 2020

Yesterday I went through the last few months of web server logs and used them to correct some bad links in my blog articles. Today I checked the logs again and all the "page not found" errors are caused by people attacking my WordPress and PHP installations. So, um, yay, I guess? [Other articles in category /meta] permanent link Tue, 05 May 2020
Article explodes at the last moment
I wrote a really great blog post over the last couple of days. Last year I posted about the difference between !!\frac10!! and !!\frac00!! and this was going to be a follow-up. I had a great example from linear regression, where the answer comes out as !!\frac10!! in situations where the slope of the computed line is infinite (and you can fix it, by going into projective space and doing everything in homogeneous coordinates), and as !!\frac00!! in situations where the line is completely indeterminate, and you can't fix it, but instead you can just pick any slope you want and proceed from there. Except no, it never does come out !!\frac10!!. It always comes out !!\frac00!!, even in the situations I said were !!\frac10!!. I think maybe I can fix it though, I hope, maybe. If so, then I will be able to write a third article. Maybe. It could have been worse. I almost published the thing, and only noticed my huge mistake because I was going to tack on an extra section at the end that it didn't really need. When I ran into difficulties with the extra section, I was tempted to go ahead and publish without it.
More about Sir Thomas Urquhart
I don't have much to add at this point, but when I looked into Sir Thomas Urquhart a bit more, I found this amazing article by Denton Fox in London Review of Books. It's a review of a new edition of Urquhart's 1652 book The Jewel (Ekskybalauron), published in 1984. The whole article is worth reading. It begins:
and then oh boy, does it deliver. So much of this article is quotable that I'm not sure what to quote. But let's start with:
Some excerpts will follow. You may enjoy reading the whole thing.

Trissotetras

I spent way more time on this than I expected. Fox says:
Thanks to the Wonders of the Internet, a copy is available, and I have been able to glance at it. Urquhart has invented a microlanguage along the lines of Wilkins’ philosophical language, in which the words are constructed systematically. But the language of Trissotetras has a very limited focus: it is intended only for expressing statements of trigonometry. Urquhart says:
The sentence continues for another 118 words but I think the point is made: the idea is not obviously terrible. Here is an example of Urquhart's trigonometric language in action:
A person skilled in the art might be able to infer the meaning of this axiom from its name:
That is, a side (of a triangle) is proportional to the sine of the opposite angle. This principle is currently known as the law of sines. Urquhart's idea of constructing mnemonic nonsense words for basic laws was not a new one. There was a long-established tradition of referring to forms of syllogistic reasoning with constructed mnemonics. For example a syllogism in “Darii” is a deduction of this form:
The ‘A’ in “Darii” is a mnemonic for the “all” clause and the ‘I’s for the “some” clauses. By memorizing a list of 24 names, one could remember which of the 256 possible deductions were valid. Urquhart is following this well-trodden path and borrows some of its terminology. But the way he develops it is rather daunting:
I think that ‘Pubkegdaxesh’ is compounded from the initial syllables of the seven enodandas, with p from upalem, ub from ubamen, k from ekarul, eg from egalem, and so on. I haven't been able to decipher any of these, although I didn't try very hard. There are many difficulties. Sometimes the language is obscure because it's obsolete and sometimes because Urquhart makes up his own words. (What does “enodandas” mean?) Let's just take “Upalem”. Here are Urquhart's glosses:
I believe “a tangent complement” is exactly what we would now call a cotangent; that is, the tangent of the complementary angle. But how these six items relate to one another, I do not know. Here's another difficulty: I'm not sure that ‘al’ is one component or two. It might be one:
Either way I'm not sure what is meant. Wait, there is a helpful diagram, and an explanation of it:
This is unclear but tantalizing. Urquhart is solving a problem of elementary plane trigonometry. Some of the sides and angles are known, and we are to find one of the unknown ones. I think if I read the book from the beginning I might be able to make out better what Urquhart was getting at. Tempting as it is, I am going to abandon it here. Trissotetras is dedicated to Urquhart's mother. In the introduction, he laments that
He must have been an admirer of Alexander Rosse, because the front matter ends with a little poem attributed to Rosse.

Pantochronachanon

Fox again:
This is Pantochronachanon, which Wikipedia says “has been the subject of ridicule since the time of its first publication, though it was likely an elaborate joke”, with no citation given. Fox mentions that Urquhart claims Alcibiades as one of his ancestors. He also claims the Queen of Sheba. According to Pantochronachanon the world was created in 3948 BC (Ussher puts it in 4004), and Sir Thomas belonged to the 153rd generation of mankind.

The Jewel

Denton Fox:
Wowzers. Fox claims that the title Εκσκυβαλαυρον means “from dung, gold” but I do not understand why he says this. λαύρα might be a sewer or privy, and I think the word σκυβα means garden herbs. (Addendum: the explanation.)
Fox says that in spite of the general interest in universal languages, “parts of his prospectus must have seemed absurd even then”, quoting this item:
Urquhart boasts that where other, presumably inferior languages have only five or six cases, his language has ten “besides the nominative”. I think Finnish has fourteen, but I am not sure even the Finns would claim that more is better. Verbs in Urquhart's language have one of ten tenses, seven moods, and four voices. In addition to singular and plural, his language has dual (like Ancient Greek) and also ‘redual’ numbers. Nouns may have one of eleven genders. It's like a language designed by the Oglaf Dwarves. A later item states:
and another item claims that a word of one syllable is capable of expressing an exact date and time down to the “half quarter of the hour”. A half quarter of an hour is 7.5 minutes, so a single day contains 192 such instants, and a year contains about 70,000 of them; that is around 16 bits of information, far more than any one syllable can plausibly carry. Sir Thomas, I believe that Entropia, the goddess of Information Theory, would like a word with you about that.

Wrapping up

One final quote from Fox:
Fox says “Nothing much came of this, either.” I really wish I had made a note of what I had planned to say about Urquhart in 2008. [ Addendum 20200502: Brent Yorgey has explained Εκσκυβαλαυρον for me. Σκύβαλα (‘skubala’) is dung, garbage, or refuse; it appears in the original Greek text of Philippians 3:8:
And while the usual Greek word for gold is χρῡσός (‘chrysos’), the word αύρον (‘auron’, probably akin to Latin aurum) is also gold. The screenshot at right is from the 8th edition of Liddell and Scott. Thank you, M. Yorgey! ] [Other articles in category /book] permanent link Thu, 30 Apr 2020
Geeky boasting about dictionaries
Yesterday Katara and I were talking about words for ‘song’. Where did ‘song’ come from? Obviously from German, because sing, song, sang, sung is maybe the perfect example of ablaut in Germanic languages. (In fact, I looked it up in Wikipedia just now and that's the example they actually used in the lede.) But the German word I'm familiar with is Lied. So what happened there? Do they still have something like Song? I checked the Oxford English Dictionary but it was typically unhelpful.

“It just says it's from Old German Sang, meaning ‘song’. To find out what happened, we'd need to look in the Oxford German Dictionary.”

Katara considered. “Is that really a thing?”

“I think so, except it's written in German, and obviously not published by Oxford.”

“What's it called?”

I paused and frowned, then said “Deutsches Wörterbuch.”

“Did you just happen to know that?”

“Well, I might be totally wrong, but yeah.”

But I looked. Yeah, it's called Deutsches Wörterbuch:
So, yes, I just happened to know that. Yay me! Deutsches Wörterbuch was begun by Wilhelm and Jakob Grimm (yes, those Brothers Grimm) although the project was much too big to be finished in their lifetimes. Wilhelm did the letter ‘D’. Jakob lived longer, and was able to finish ‘A’, ‘B’, ‘C’, and ‘E’. Wikipedia mentions the detail that he died “while working on the entry for ‘Frucht’ (fruit)”. Wikipedia says “the work … proceeded very slowly”:
(This isn't as ridiculous as it seems; German has a lot of words that begin with ‘ge’.) The project came to an end in 2016, after 178 years of effort. The revision of the Grimms’ original work on A–F, planned since the 1950s, is complete, and there are no current plans to revise the other letters. [Other articles in category /lang] permanent link Tue, 28 Apr 2020
Urquhart, Rosse, and Browne
A couple of years ago, not long before I started this blog, I read some of the works of Sir Thomas Browne. I forget exactly why: there was some footnote I read somewhere that said that something in one of Jorge Luis Borges' stories had been inspired by something he read in Browne's book The Urn Burial, which was a favorite of Borges'. I wish I could remember the details! I don't think I even remembered them at the time. But Thomas Browne turned out to be wonderful. He is witty, and learned, and wise, and humane, and to read his books is to feel that you are in the company of this witty, learned, wise, humane man, one of the best men that the English Renaissance has to offer, and that you are profiting thereby. The book of Browne's that made the biggest impression on me was Pseudodoxia Epidemica (1646), which is a compendium of erroneous beliefs that people held at the time, with discussion. For example, is it true that chameleons eat nothing but air? ("Thus much is in plain terms affirmed by Solinus, Pliny, and others...") Browne thinks not. He cites various evidence against this hypothesis: contemporary reports of the consumption of various insects by chameleons; the presence of teeth, tongues, stomachs and guts in dissected chameleons; the presence of semi-digested insects in the stomachs of dissected chameleons. There's more; he attacks the whole idea that an animal could be nourished by air. Maybe all this seems obvious, but in 1672 it was still a matter for discussion. And Browne's discussion is so insightful, so pithy, so clear, that it is a delight to read. Browne's list of topics is fascinating in itself. Some of the many issues he deals with are:
Well, I digress. To return to that list of topics I quoted, you might see "of the blacknesse of Negroes", and feel your heart sink a little. What racist jackass thing is the 1646 Englishman going to say about the blackness of negroes? Actually, though, Browne comes out of it extremely well, not only much better than one would fear, but quite well even by modern standards. It is one of the more extensive discourses in Pseudodoxia Epidemica, occupying several chapters. He starts by rebutting two popular explanations: that they are burnt black by the heat of the sun, and that they are marked black because of the curse of Ham as described in Genesis 9:20–26. Regarding the latter, Browne begins by addressing the Biblical issue directly, and on its own terms, and finds against it. But then he takes up the larger question of whether black skin can be considered to be a curse at all. Browne thinks not. He spends some time rejecting this notion: "to inferr this as a curse, or to reason it as a deformity, is no way reasonable". He points out that the people who have it don't seem to mind, and that "Beauty is determined by opinion, and seems to have no essence that holds one notion with all; that seeming beauteous unto one, which hath no favour with another; and that unto every one, according as custome hath made it natural, or sympathy and conformity of minds shall make it seem agreeable." Finally, he ends by complaining that "It is a very injurious method unto Philosophy, and a perpetual promotion of Ignorance, in points of obscurity, ... to fall upon a present refuge unto Miracles; or recurr unto immediate contrivance, from the insearchable hands of God." I wish more of my contemporaries agreed. Another reason I love this book is that Browne is nearly always right. 
If you were having doubts that one could arrive at correct notions by thoughtful examination of theory and evidence, Pseudodoxia Epidemica might help dispel them, because Browne's record of coming to the correct conclusions is marvelous. Some time afterward, I learned that there was a rebuttal to Pseudodoxia Epidemica, written by a Dr. Alexander Rosse. (Arcana Microcosmi, ... with A Refutation of Doctor Brown's VULGAR ERRORS... (1652).) And holy cow, Rosse is an incredible knucklehead. Watching him try to argue with Browne reminded me of watching an argument on Usenet (a sort of pre-Internet distributed BBS) where one person is right about everything, and is being flamed point by point by some jackass who is wrong about everything, and everyone but the jackass knows it. I have seen this many, many times on Usenet, but never as far back as 1652.
This is the point at which I stopped writing the article in 2008. I had mentioned the blockheaded Mr. Rosse in an earlier article. But I have no idea what else I had planned to say about him here.
Additional notes (April 2020)
