The Universe of Discourse


Sat, 21 Nov 2020

Testing for divisibility by 19

[ Previously, Testing for divisibility by 7. ]

A couple of nights ago I was keeping Katara company while she revised an essay on The Scarlet Letter (ugh), and to pass the time one of the things I did was tinker with the divisibility tests for 9 and 11. In the course of this I discovered the following method for testing divisibility by 19:

Double the last digit and add the next-to-last.
Double that and add the next digit over.
Repeat until you've added the leftmost digit.

The result will be a smaller number which is a multiple of 19 if and only if the original number was.

For example, let's consider, oh, I don't know, 2337. We calculate:

  • 7·2+3 = 17
  • 17·2 + 3 = 37
  • 37·2 + 2 = 76

76 is a multiple of 19, so 2337 was also. But if you're not sure about 76 you can compute 2·6+7 = 19 and if you're not sure about that you need more help than I can provide.

I don't claim this is especially practical, but it is fun, not completely unworkable, and I hadn't seen anything like it before. You can save a lot of trouble by reducing the intermediate values mod 19 when needed. In the example above, after the first step you get to 17, which you can reduce mod 19 to -2, and then the next step is -2·2+3 = -1, and the final step is -1·2+2 = 0.
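
Here is a short Python sketch of the rule (the function name and the digit handling are mine, not part of the original method):

    def divisible_by_19(n):
        """Double-and-add test for divisibility by 19."""
        digits = [int(d) for d in str(n)]
        acc = digits[-1]                  # start with the last digit
        for d in reversed(digits[:-1]):   # then work leftward
            acc = 2 * acc + d
            # acc %= 19   # optionally reduce mod 19 to keep the numbers small
        return acc % 19 == 0

    assert divisible_by_19(2337)          # 7 → 17 → 37 → 76 = 4·19
    assert not divisible_by_19(2338)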

Last time I wrote about this, Eric Roode sent me a whole compendium of divisibility tests, including one for divisibility by 19. It's a little like mine, but in reverse: group the digits in pairs, left to right; multiply each pair by 5 and then add the next pair. Here's 2337 again:

  • 23·5 + 37 = 152
  • 1·5 + 52 = 57

Again you can save a lot of trouble by reducing mod 19 before the multiplication. So instead of the first step being 23·5 + 37 you can reduce the 23·5 to 4·5 = 20 and then add the 37 to get 57 right away.
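
In Python the pair rule looks something like this; the pair-splitting details are my own reading of it (a leftover leading digit forms a group by itself, as in the 1·5 + 52 step):

    def reduce_by_pairs(n):
        """One pass of the pairs-times-5 rule.  It preserves the value
        mod 19, because 100 ≡ 5 (mod 19)."""
        s = str(n)
        if len(s) % 2:                 # odd number of digits: the leftmost
            s = '0' + s                # group is a single digit
        acc = 0
        for i in range(0, len(s), 2):
            acc = 5 * acc + int(s[i:i+2])
        return acc

    assert reduce_by_pairs(2337) == 152
    assert reduce_by_pairs(152) == 57     # 57 = 3·19, so 2337 is a multiple of 19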

[ Addendum: Of course this was discovered long ago, and in fact Wikipedia mentions it. ]

[ Addendum 20201123: An earlier version of this article claimed that the double-and-add step I described preserves the mod-19 residue. It does not, of course; the doubling step doubles it. It is, however, true that it is zero afterward if and only if it was zero before. ]


[Other articles in category /math] permanent link

(Untitled)

I am smiling in front of the main entrance of Four Seasons Total
Landscaping in the Tacony neighborhood of Philadelphia.  I am wearing
a gray cap and sweater, jeans, and a blue denim jacket.  I have a red
paper Wawa coffee cup in one hand, and am giving a thumbs up with the
other.


[Other articles in category /misc] permanent link

Mon, 02 Nov 2020

A better way to do remote presentations

A few months ago I wrote an article about a strategy I had tried when giving a talk via videochat. Typically:

The slides are presented by displaying them on the speaker's screen, and then sharing the screen image to the audience.

I thought I had done it a better way:

I published the slides on my website ahead of time, and sent the link to the attendees. They had the option to follow along on the web site, or to download a copy and follow along in their own local copy.

This, I thought, had several advantages:

  1. Each audience person can adjust the monitor size, font size, and colors to suit their own viewing preferences.

  2. The audience can see the speaker. Instead of using my outgoing video feed to share the slides, I could share my face as I spoke.

  3. With the slides under their control, audience members can go back to refer to earlier material, or skip ahead if they want.

When I brought this up with my co-workers, some of them had a very good objection:

I am too lazy to keep clicking through slides as the talk progresses. I just want to sit back and have it do all the work.

Fair enough! I have done this.

If you package your slides with page-turner, one instance becomes the “leader” and the rest are “followers”. Whenever the leader moves from one slide to the next, a very simple backend server is notified. The followers periodically contact the server to find out what slide they are supposed to be showing, and update themselves accordingly. The person watching the show can sit back and let it do all the work.

But! If an audience member wants to skip ahead, or go back, that works too. They can use the arrow keys on their keyboard. Their follower instance will stop synchronizing with the leader's slide. Instead, it will display a box in the corner of the page, next to the current slide's page number, that says what slide the leader is looking at. The number in this box updates dynamically, so the audience person always knows how far ahead or behind they are.

Two screenshots, captioned “Synchronized” (left) and “Unsynchronized” (right).

At left, the leader is displaying slide 3, and the follower is there also. When the leader moves on to slide 4, the follower instance will switch automatically.

At right, the follower is still looking at slide 3, but is detached from the leader, who has moved on to slide 007, as you can see in the gray box.

When the audience member has finished their excursion, they can click the gray box and their own instance will immediately resynchronize with the leader and follow along until the next time they want to depart.

I used this to give a talk to the Charlotte Perl Mongers last week and it worked. Responses were generally positive even though the UI is a little bit rough-looking.

Technical details

The back end is a tiny server, written in Python 3 with Flask; it is only about 60 lines of code. It has just two endpoints: one for getting the leader's current page, and one for setting it. Setting requires a password.

    @app.route('/get-page')
    def get_page():
        return { "page": app.server.get_pageName() }

    @app.route('/set-page', methods=['POST'])
    def set_page():
        …
        password = request.data["password"]
        page = request.data["page"]
        try:
            app.server.update_pageName(page, password)
        except WrongPassword:
            return failure("Incorrect password"), status.HTTP_401_UNAUTHORIZED

        return { "success": True }

The front end runs in the browser. The user downloads the front-end script, pageturner.js, from the same place they are getting the slides. Each slide contains, in its head element:

    <LINK REL='next'     HREF='slide003.html' TYPE='text/html; charset=utf-8'>
    <LINK REL='previous' HREF='slide001.html' TYPE='text/html; charset=utf-8'>
    <LINK REL='this'     HREF='slide002.html' TYPE='text/html; charset=utf-8'>

    <script language="javascript" src="pageturner.js" >
    </script>

The link elements tell page-turner where to go when someone uses the arrow keys. (This could, of course, be a simple counter, if your slides are simply numbered, but my slide decks often have slide002a.html and the like.) Most of page-turner's code is in pageturner.js, which is a couple hundred lines of JavaScript.

On page switching, a tiny amount of information is stored in the browser window's sessionStorage object. This is so that after the new page is loaded, the program can remember whether it is supposed to be synchronized.

If you want page-turner to display the leader's slide number when the follower is desynchronized, as in the example above, you include an element with class phantom_number. The phantom_click handler resynchronizes the follower:

    <b onclick="phantom_click()"  class="phantom_number"></b>

The password for the set-page endpoint is embedded in the pageturner.js file. Normally, this is null, which means that the instance is a follower. If the password is null, page-turner won't try to post updates to set-page. If you want to be the leader, you change

   "password": null,

to

   "password": "swordfish",

or whatever.

Many improvements are certainly possible. It could definitely look a lot better, but I leave that to someone who has interest and aptitude in design.

I know of one serious bug: at present the server doesn't handle SSL, so must be run at an http://… address; if the slides reside at an https://… location, the browser will refuse to make the AJAX requests. This shouldn't be hard to fix.

Source code download

page-turner

Patches welcome.

License

The software is licensed under the Creative Commons Attribution 4.0 license (CC BY 4.0).

You are free to share (copy and redistribute the software in any medium or format) and adapt (remix, transform, and build upon the material) for any purpose, even commercially, so long as you give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

Share and enjoy.


[Other articles in category /talk] permanent link

Sun, 18 Oct 2020

Newton's Method and its instability

While messing around with Newton's method for last week's article, I built this Desmos thingy:

The red point represents the initial guess; grab it and drag it around, and watch how the later iterations change. Or, better, visit the Desmos site and play with the slider yourself.

(The curve here is !!y = (x-2.2)(x-3.3)(x-5.5)!!; it has a local maximum at around !!2.7!!, and a local minimum at around !!4.64!!.)

Watching the attractor point jump around I realized I was arriving at a much better understanding of the instability of the convergence. Clearly, if your initial guess happens to be near an extremum of !!f!!, the next guess could be arbitrarily far away, rather than a small refinement of the original guess. But even if the original guess is pretty good, the refinement might be near an extremum, and then the following guess will be somewhere random. For example, although !!f!! is quite well-behaved in the interval !![4.3, 4.35]!!, as the initial guess !!g!! increases across this interval, the refined guess !!\hat g!! decreases from !!2.74!! to !!2.52!!, and in between these there is a local maximum that kicks the ball into the weeds. The result is that at !!g=4.3!! the method converges to the largest of the three roots, and at !!g=4.35!!, it converges to the smallest.
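
You can reproduce this numerically with a few lines of Python; the polynomial and the two starting guesses are the ones above, and the rest of the code is mine:

    def newton(f, fprime, g, steps=25):
        """Iterate Newton's method starting from the guess g."""
        for _ in range(steps):
            g = g - f(g) / fprime(g)
        return g

    f      = lambda x: (x - 2.2) * (x - 3.3) * (x - 5.5)
    fprime = lambda x: 3*x**2 - 22*x + 37.51    # derivative of the expanded cubic

    print(newton(f, fprime, 4.30))   # converges to 5.5, the largest root
    print(newton(f, fprime, 4.35))   # converges to 2.2, the smallest root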

This is where the Newton basins come from:

Julia set for the rational function
associated to Newton's method for ƒ:z→z³−1.  The plane is divided
symmetrically into three regions, colored red, green, and blue.  The
boundary of each two regions (red and green, say) is inhabited by a
series of leaf shapes of the third color (blue), and the boundaries
between the main regions (green, say) and the (blue) leaves are
inhabited by smaller leaves again of the other color (red), and so on
ad infinitum.  The boundaries are therefore an infinitely detailed
filigree of smaller and smaller leaves of all three colors.

Here we are considering the function !!f:z\mapsto z^3 -1!! in the complex plane. Zero is at the center, and the obvious root, !!z=1!!, is to its right, deep in the large red region. The other two roots are at the corresponding positions in the green and blue regions.

Starting at any red point converges to the !!z=1!! root. Usually, if you start near this root, you will converge to it, which is why all the points near it are red. But some nearish starting points are near an extremum, so that the next guess goes wild, and then the iteration ends up at the green or the blue root instead; these areas are the rows of green and blue leaves along the boundary of the large red region. And some starting points on the boundaries of those leaves kick the ball into one of the other leaves…

Here's the corresponding basin diagram for the polynomial !!y = (x-2.2)(x-3.3)(x-5.5)!! from earlier:

Julia set for
the function (x-2.2)(x-3.3)(x-5.5).  There is a large pink region on
the left, a large yellow region on the right, and in between a large
blue hourglass-shaped region.  The boundary of the yellow region is decorated
with pink bobbles that protrude into the blue region, and similarly
the boundary of the pink region is decorated with yellow
bobbles. Further details are in the text below.

The real axis is the horizontal hairline along the middle of the diagram. The three large regions are the main basins of attraction to the three roots (!!x=2.2, 3.3!!, and !!5.5!!) that lie within them.

But along the boundaries of each region are smaller intrusive bubbles where the iteration converges to a surprising value. A point moving from left to right along the real axis passes through the large pink !!2.2!! region, and then through a very small yellow bubble, corresponding to the values right around the local maximum near !!x=2.7!! where the process unexpectedly converges to the !!5.5!! root. Then things settle down for a while in the blue region, converging to the !!3.3!! root as one would expect, until the value gets close to the local minimum at !!4.64!! where there is a pink bubble because the iteration converges to the !!2.2!! root instead. Then as !!x!! increases from !!4.64!! to !!5.5!!, it leaves the pink bubble and enters the main basin of attraction to !!5.5!! and stays there.

If the picture were higher resolution, you would be able to see that the pink bubbles all have tiny yellow bubbles growing out of them (one is near 4.39), and the tiny yellow bubbles have even tinier pink bubbles, and so on forever.

(This was generated by the Online Fractal Generator at usefuljs.net; the labels were added later by me. The labels’ positions are only approximate.)

[ Addendum: Regarding complex points and !!f : z\mapsto z^3-1!! I said “some nearish starting points are near an extremum”. But this isn't right; !!f!! has no extrema. It has an inflection point at !!z=0!! but this doesn't explain the instability along the lines !!\theta = \frac{2k+1}{3}\pi!!. So there's something going on here with the complex derivative that I don't understand yet. ]


[Other articles in category /math] permanent link

Fixed points and attractors, part 3

Last week I wrote about a super-lightweight variation on Newton's method, in which one takes this function: $$f_n : \frac ab \mapsto \frac{a+nb}{a+b}$$

or equivalently

$$f_n : x \mapsto \frac{x+n}{x+1}$$

Iterating !!f_n!!, starting from a suitable initial value (say, !!1!!), converges to !!\sqrt n!!:

$$ \begin{array}{rr} x & f_3(x) \\ \hline 1.0 & 2.0 \\ 2.0 & 1.667 \\ 1.667 & 1.75 \\ 1.75 & 1.727 \\ 1.727 & 1.733 \\ 1.733 & 1.732 \\ 1.732 & 1.732 \end{array} $$
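
Here's a tiny Python sketch (mine) that reproduces the table:

    def f(x, n=3):
        return (x + n) / (x + 1)

    x = 1.0
    for _ in range(7):
        print(round(x, 3), "→", round(f(x), 3))
        x = f(x)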

Later I remembered that a few months back I wrote a couple of articles about a more general method that includes this as a special case:

The general idea was:

Suppose we were to pick a function !!f!! that had !!\sqrt 2!! as a fixed point. Then !!\sqrt 2!! might be an attractor, in which case iterating !!f!! will get us increasingly accurate approximations to !!\sqrt 2!!.

We can see that !!\sqrt n!! is a fixed point of !!f_n!!:

$$ \begin{align} f_n(\sqrt n) & = \frac{\sqrt n + n}{\sqrt n + 1} \\ & = \frac{\sqrt n(1 + \sqrt n)}{1 + \sqrt n} \\ & = \sqrt n \end{align} $$

And in fact, it is an attracting fixed point, because if !!x = \sqrt n + \epsilon!! then

$$\begin{align} f_n(\sqrt n + \epsilon) & = \frac{\sqrt n + \epsilon + n}{\sqrt n + \epsilon + 1} \\ & = \frac{(\sqrt n + \sqrt n\epsilon + n) - (\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1} \\ & = \sqrt n - \frac{(\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1} \end{align}$$

Disregarding the !!\epsilon!! in the denominator we obtain $$f_n(\sqrt n + \epsilon) \approx \sqrt n - \frac{\sqrt n - 1}{\sqrt n + 1} \epsilon $$

The error term !!-\frac{\sqrt n - 1}{\sqrt n + 1} \epsilon!! is strictly smaller in absolute value than the original error !!\epsilon!!, because !!0 < \frac{x-1}{x+1} < 1!! whenever !!x>1!!. This shows that the fixed point !!\sqrt n!! is attractive.

In the previous articles I considered several different simple functions that had fixed points at !!\sqrt n!!, but I didn't think to consider this unusually simple one. I said at the time:

I had meant to write about Möbius transformations, but that will have to wait until next week, I think.

but I never did get around to the Möbius transformations, and I have long since forgotten what I planned to say. !!f_n!! is an example of a Möbius transformation, and I wonder if my idea was to systematically find all the Möbius transformations that have !!\sqrt n!! as a fixed point, and see what they look like. It is probably possible to automate the analysis of whether the fixed point is attractive, and if not to apply one of the transformations from the previous article to make it attractive.


[Other articles in category /math] permanent link

Tue, 13 Oct 2020

Newton's Method but without calculus — or multiplication

Newton's method goes like this: We have a function !!f!! and we want to solve the equation !!f(x) = 0.!! We guess an approximate solution, !!g!!, and it doesn't have to be a very good guess.

The graph of a wiggly
polynomial curve with roots between 2 and 2.5, between 3 and 3.5, and
between 5 and 6.  The middle root is labeled with a question mark.
One point of the curve, not too different from the root, is marked in
blue and labeled “⟨g,f(g)⟩”.

Then we calculate the line !!T!! tangent to !!f!! through the point !!\langle g, f(g)\rangle!!. This line intersects the !!x!!-axis at some new point !!\langle \hat g, 0\rangle!!, and this new value, !!\hat g!!, is a better approximation to the value we're seeking.

The same graph as
before, but with a tangent to the curve drawn through the blue point.
It intersects the x-axis quite close to the root.  The intersection
point is labeled “ĝ”.

In the left
margin, a close-up detail of the same curve as before, this time
showing the tangent line at ⟨ĝ,f(ĝ)⟩.  The diagram is a close-up
because the line intersects the x-axis extremely close to the actual
root.

Analytically, we have:

$$\hat g = g - \frac{f(g)}{f'(g)}$$

where !!f'(g)!! is the derivative of !!f!! at !!g!!.

We can repeat the process if we like, getting better and better approximations to the solution. (See detail at left; click to enlarge. Again, the blue line is the tangent, this time at !!\langle \hat g, f(\hat g)\rangle!!. As you can see, it intersects the axis very close to the actual solution.)


In general, this requires calculus or something like it, but in any particular case you can avoid the calculus. Suppose we would like to find the square root of 2. This amounts to solving the equation $$x^2-2 = 0.$$ The function !!f!! here is !!x^2-2!!, and !!f'!! is !!2x!!. Once we know (or guess) !!f'!!, no further calculus is needed. The method then becomes: Guess !!g!!, then calculate $$\hat g = g - \frac{g^2-2}{2g}.$$ For example, if our initial guess is !!g = 1.5!!, then the formula above tells us that a better guess is !!\hat g = 1.5 - \frac{2.25 - 2}{3} = 1.4166\ldots!!, and repeating the process with !!\hat g!! produces !!1.41421\mathbf{5686}!!, which is very close to the correct result !!1.41421\mathbf{3562}!!. If we want the square root of a different number !!n!! we just substitute it for the !!2!! in the numerator.
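
In code this is just a couple of lines. Here's a quick Python sketch (the function name and iteration counts are mine) that reproduces the numbers above:

    def newton_sqrt(n, g, steps):
        """Approximate √n starting from the guess g, via ĝ = g - (g² - n)/(2g)."""
        for _ in range(steps):
            g = g - (g*g - n) / (2*g)
        return g

    print(newton_sqrt(2, 1.5, steps=1))   # 1.4166666…
    print(newton_sqrt(2, 1.5, steps=2))   # 1.4142156…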

This method for extracting square roots works well and requires no calculus. It's called the Babylonian method and while there's no evidence that it was actually known to the Babylonians, it is quite ancient; it was first recorded by Hero of Alexandria about 2000 years ago.

How might this have been discovered if you didn't have calculus? It's actually quite easy. Here's a picture of the number line. Zero is at one end, !!n!! is at the other, and somewhere in between is !!\sqrt n!!, which we want to find.

A line with
the left endpoint marked “0”, the right endpoint marked “n”, and a
point in between marked “square root of n”.

Also somewhere in between is our guess !!g!!. Say we guessed too low, so !!0 \lt g < \sqrt n!!. Now consider !!\frac ng!!. Since !!g!! is too small to be !!\sqrt n!! exactly, !!\frac ng!! must be too large. (If !!g!! and !!\frac ng!! were both smaller than !!\sqrt n!!, then their product would be smaller than !!n!!, and it isn't.)

The previous
illustration, with green points marked “g” and “\frac{n}{g}”. The
first of these is to the left of the square root of n, the second to
its right.

Similarly, if the guess !!g!! is too large, so that !!\sqrt n < g!!, then !!\frac ng!! must be less than !!\sqrt n!!. The important point is that !!\sqrt n!! is between !!g!! and !!\frac ng!!. We have narrowed down the interval in which !!\sqrt n!! lies, just by guessing.

Since !!\sqrt n!! lies in the interval between !!g!! and !!\frac ng!!, our next guess should be somewhere in this smaller interval. The most obvious thing we can do is to pick the point midway between !!g!! and !!\frac ng!!. So if we guess the average, $$\frac12\left(g + \frac ng\right),$$ this will probably be much closer to !!\sqrt n!! than !!g!! was:

The previous
illustration, but the point exactly midway between g and \frac{n}{g}
is marked in blue.  It is quite close to the point marked “square root
of n”.

This average is exactly what Newton's method would have calculated, because $$\frac12\left(g + \frac ng\right) = g - \frac{g^2-n}{2g}.$$

But we were able to arrive at the same computation with no calculus at all — which is why this method could have been, and was, discovered 1700 years before Newton's method itself.

If we're dealing with rational numbers then we might write !!g=\frac ab!!, and then instead of replacing our guess !!g!! with a better guess !!\frac12\left(g + \frac ng\right)!!, we could think of it as replacing our guess !!\frac ab!! with a better guess !!\frac12\left(\frac ab + \frac n{\frac ab}\right)!!. This simplifies to

$$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$

so that for example, if we are calculating !!\sqrt 2!!, and we start with the guess !!g=\frac32!!, the next guess is $$\frac{3^2 + 2\cdot2^2}{2\cdot3\cdot 2} = \frac{17}{12} = 1.4166\ldots$$ as we saw before. The approximation after that is !!\frac{289+288}{2\cdot17\cdot12} = \frac{577}{408} = 1.41421568\ldots!!. Used this way, the method requires only integer calculations, and converges very quickly.
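
Here's that integer-only version as a Python sketch (mine); it reproduces the !!\frac{17}{12}!! and !!\frac{577}{408}!! approximations:

    def improve(a, b, n=2):
        """One step of a/b → (a² + n·b²) / (2ab), using only integer arithmetic."""
        return a*a + n*b*b, 2*a*b

    a, b = 3, 2
    for _ in range(3):
        a, b = improve(a, b)
        print(a, "/", b)      # 17 / 12, then 577 / 408, then 665857 / 470832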

But the numerators and denominators increase rapidly, which is good in one sense (it means you get to the accurate approximations quickly) but can also be troublesome because the numbers get big and also because you have to multiply, and multiplication is hard.

But remember how we figured out to do this calculation in the first place: all we're really trying to do is find a number in between !!g!! and !!\frac ng!!. We did that the first way that came to mind, by averaging. But perhaps there's a simpler operation that we could use instead, something even easier to compute?

Indeed there is! We can calculate the mediant. The mediant of !!\frac ab!! and !!\frac cd!! is simply $$\frac{a+c}{b+d}$$ and it is very easy to show that it lies between !!\frac ab!! and !!\frac cd!!, as we want.

So instead of the relatively complicated $$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$ operation, we can try the very simple and quick $$\frac ab \Rightarrow \operatorname{mediant}\left(\frac ab, \frac{nb}{a}\right) = \frac{a+nb}{b+a}$$ operation.

Taking !!n=2!! as before, and starting with !!\frac 32!!, this produces:

$$ \frac 32 \Rightarrow\frac{ 7 }{ 5 } \Rightarrow\frac{ 17 }{ 12 } \Rightarrow\frac{ 41 }{ 29 } \Rightarrow\frac{ 99 }{ 70 } \Rightarrow\frac{ 239 }{ 169 } \Rightarrow\frac{ 577 }{ 408 } \Rightarrow\cdots$$

which you may recognize as the convergents of !!\sqrt2!!. These are actually the rational approximations of !!\sqrt 2!! that are optimally accurate relative to the sizes of their denominators. Notice that !!\frac{17}{12}!! and !!\frac{577}{408}!! are in there as they were before, although it takes longer to get to them.
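
And here is the mediant version, again my own sketch; it uses no multiplication except by !!n!!, and it reproduces the sequence above:

    def mediant_step(a, b, n=2):
        """Replace a/b with mediant(a/b, nb/a) = (a + n·b)/(a + b)."""
        return a + n*b, a + b

    a, b = 3, 2
    for _ in range(6):
        a, b = mediant_step(a, b)
        print(a, "/", b)      # 7/5, 17/12, 41/29, 99/70, 239/169, 577/408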

I think it's cool that you can view it as a highly-simplified version of Newton's method.


[ Addendum: An earlier version of the last paragraph claimed:

None of this is a big surprise, because it's well-known that you can get the convergents of !!\sqrt n!! by applying the transformation !!\frac ab\Rightarrow \frac{a+nb}{a+b}!!, starting with !!\frac11!!.

Simon Tatham pointed out that this was mistaken. It's true when !!n=2!!, but not in general. The sequence of fractions that you get does indeed converge to !!\sqrt n!!, but it's not usually the convergents, or even in lowest terms. When !!n=3!!, for example, the numerators and denominators are all even. ]

[ Addendum: Newton's method as I described it, with the crucial !!g → g - \frac{f(g)}{f'(g)}!! transformation, was actually invented in 1740 by Thomas Simpson. Both Isaac Newton and Joseph Raphson had earlier described only special cases, as had several Asian mathematicians, including Seki Kōwa. ]

[ Previous discussion of convergents: Archimedes and the square root of 3; 60-degree angles on a lattice. A different variation on the Babylonian method. ]

[ Note to self: Take a look at what the AM-GM inequality has to say about the behavior of !!\hat g!!. ]

[ Addendum 20201018: A while back I discussed the general method of picking a function !!f!! that has !!\sqrt 2!! as a fixed point, and iterating !!f!!. This is yet another example of such a function. ]


[Other articles in category /math] permanent link

Wed, 23 Sep 2020

The mystery of the malformed command-line flags

Today a user came to tell me that their command

  greenlight submit branch-name --require-review-by skordokott

failed, saying:

    ** 
    ** unexpected extra argument 'branch-name' to 'submit' command
    **

This is surprising. The command looks correct. The branch name is required. The --require-review-by option can be supplied any number of times (including none), and each use must have a value provided. Here it is given once and the provided value appears to be skordokott.

The greenlight command is a crappy shell script that pre-validates the arguments before sending them over the network to the real server. I guessed that the crappy shell script parser wanted the branch name last, even though the server itself would have been happy to take the arguments in either order. I suggested that the user try:

  greenlight submit --require-review-by skordokott branch-name 

But it still didn't work:

    ** 
    ** unexpected extra argument '--require-review-by' to 'submit' command
    **

I dug in to the script and discovered the problem, which was not actually a programming error. The crappy shell script was behaving correctly!

I had written up release notes for the --require-review-by feature. The user had clipboard-copied the option string out of the release notes and pasted it into the shell. So why didn't it work?

In an earlier draft of the release notes, when they were displayed as an HTML page, there would be bad line breaks:

blah blah blah be sure to use the -
-require-review-by option…

or:

blah blah blah the new --
require-review-by feature is…

No problem, I can fix it! I just changed the pair of hyphens (- U+002D) at the beginning of --require-review-by to Unicode nonbreaking hyphens (‑ U+2011). Bad line breaks begone!

But then this hapless user clipboard-copied the option string out of the release notes, including its U+2011 characters. The parser in the script was (correctly) looking for U+002D characters, and didn't recognize --require-review-by as an option flag.

One lesson learned: people will copy-paste stuff out of documentation, and I should be prepared for that.

There are several places to address this. I made the error message more transparent; formerly it would complain only about the first argument, which was confusing because it was the one argument that wasn't superfluous. Now it will say something like

    ** 
    ** extra branch name '--require-review-by' in 'submit' command
    ** 
    ** 
    ** extra branch name 'skordokott' in 'submit' command
    ** 

which is more descriptive of what it actually doesn't like.

I could change the nonbreaking hyphens in the release notes back to regular hyphens and just accept the bad line breaks. But I don't want to. Typography is important.

One idea I'm toying with is to have the shell script silently replace all nonbreaking hyphens with regular ones before any further processing. It's a hack, but it seems like it might be a harmless one.
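
The actual tool is a shell script, so this isn't its real code, but the idea is easy to sketch in Python: normalize the arguments before any option parsing happens.

    import sys

    def normalize_args(argv):
        """Replace Unicode nonbreaking hyphens (U+2011) with ASCII
        hyphen-minus (U+002D) so option parsing sees ordinary flags."""
        return [arg.replace("\u2011", "-") for arg in argv]

    args = normalize_args(sys.argv[1:])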

So many weird things can go wrong. This computer stuff is really complicated. I don't know how anyone gets anything done.

[ Addendum: A reader suggests that I could have fixed the line breaks with CSS. But the release notes were being presented as a Slack “Post”, which is essentially a WYSIWYG editor for creating shared documents. It presents the document in a canned HTML style, and as far as I know there's no way to change the CSS it uses. Similarly, there's no way to insert raw HTML elements, so no way to change the style per-element. ]


[Other articles in category /prog/bug] permanent link

Sun, 13 Sep 2020

Weasel words in headlines

The front page of NPR.org today has this headline:

Screenshot of part
of web page.  The main headline is “‘So Skeptical’: As Election Nears, Iowa Senator Under Pressure For
COVID-19 Remarks”.  There is a longer subheadline underneath, which I
discussed below.

It contains this annoying phrase:

The race for Joni Ernst's seat could help determine control of the Senate.

Someone has really committed to hedging.

I would have said that the race would certainly help determine control of the Senate, or that it could determine control of the Senate. The statement as written makes an extremely weak claim.

The article itself doesn't include this phrase. This is why reporters hate headline-writers.

(Previously)


[Other articles in category /lang] permanent link

Fri, 11 Sep 2020

Historical diffusion of words for “eggplant”

In reply to my recent article about the history of words for “eggplant”, a reader, Lydia, sent me this incredible map they had made that depicts the history and the diffusion of the terms:

A map of the world, with arrows depicting the sequential adoption
of different terms for eggplant, as the words mutated from language to
language.  For details, see the previous post.  The map is an
oval-shaped projection.  The ocean parts of the
map are a dark eggplant-purple color, and an eggplant stem has been
added at the eastern edge, in the Pacific Ocean.

Lydia kindly gave me permission to share their map with you. You can see the early Dravidian term vaḻutanaṅṅa in India, and then the arrows show it travelling westward across Persia and Arabia, from there to East Africa and Europe, and from there to the rest of the world, eventually making its way back to India as brinjal before setting out again on yet more voyages.

Thank you very much, Lydia! And Happy Diada Nacional de Catalunya, everyone!


[Other articles in category /lang/etym] permanent link

A maxim for conference speakers

The only thing worse than re-writing your talk the night before is writing your talk the night before.


[Other articles in category /talk] permanent link

Fri, 28 Aug 2020

Zucchinis and Eggplants

This morning Katara asked me why we call these vegetables “zucchini” and “eggplant” but the British call them “courgette” and “aubergine”.

I have only partial answers, and the more I look, the more complicated they get.

Zucchini

The zucchini is a kind of squash, which means that in Europe it is a post-Columbian import from the Americas.

“Squash” itself is from Narragansett, and is not related to the verb “to squash”. So I speculate that what happened here was:

  • American colonists had some name for the zucchini, perhaps derived from Narragansett or another Algonquian language, or perhaps just “green squash” or “little gourd” or something like that. A squash is not exactly a gourd, but it's not exactly not a gourd either, and the Europeans seem to have accepted it as a gourd (see below).

  • When the vegetable arrived in France, the French named it courgette, which means “little gourd”. (Courge = “gourd”.) Then the Brits borrowed “courgette” from the French.

  • Sometime much later, the Americans changed the name to “zucchini”, which also means “little gourd”, this time in Italian. (Zucca = “gourd”.)

The Big Dictionary has citations for “zucchini” only back to 1929, and “courgette” to 1931. What was this vegetable called before that? Why did the Americans start calling it “zucchini” instead of whatever they called it before, and why “zucchini” and not “courgette”? If it was brought in by Italian immigrants, one might expect the word to have appeared earlier; the mass immigration of Italians into the U.S. was over by 1920.

Following up on this thought, I found a mention of it in Cuniberti, J. Lovejoy., Herndon, J. B. (1918). Practical Italian recipes for American kitchens, p. 18: “Zucchini are a kind of small squash for sale in groceries and markets of the Italian neighborhoods of our large cities.” Note that Cuniberti explains what a zucchini is, rather than saying something like “the zucchini is sometimes known as a green summer squash” or whatever, which suggests that she thinks it will not already be familiar to the readers. It looks as though the story is: Colonial Europeans in North America stopped eating the zucchini at some point, and forgot about it, until it was re-introduced in the early 20th century by Italian immigrants.

When did the French start calling it courgette? When did the Italians start calling it zucchini? Is the Italian term a calque of the French, or vice versa? Or neither? And since courge (and gourd) are evidently descended from Latin cucurbita, where did the Italians get zucca?

So many mysteries.

Eggplant

Here I was able to get better answers. Unlike squash, the eggplant is native to Eurasia and has been cultivated in western Asia for thousands of years.

The puzzling name “eggplant” is because the fruit, in some varieties, is round, white, and egg-sized.

closeup of
an eggplant with several of its  round, white, egg-sized  fruits that
do indeed look just like eggs

The term “eggplant” was then adopted for other varieties of the same plant where the fruit is entirely un-egglike.

“Eggplant” in English goes back only to 1767. What was it called before that? Here the OED was more help. It gives this quotation, from 1785:

When this [sc. its fruit] is white, it has the name of Egg-Plant.

I inferred that the preceding text described it under a better-known name, so, thanks to the Wonders of the Internet, I looked up the original source:

Melongena or Mad Apple is also of this genus [solanum]; it is cultivated as a curiosity for the largeness and shape of its fruit; and when this is white, it has the name of Egg Plant; and indeed it then perfectly resembles a hen's egg in size, shape, and colour.

(Jean-Jacques Rousseau, Letters on the Elements of Botany, tr. Thos. Martyn 1785. Page 202. (Wikipedia))

The most common term I've found that was used before “egg-plant” itself is “mad apple”. The OED has cites from the late 1500s that also refer to it as a “rage apple”, which is a calque of French pomme de rage. I don't know how long it was called that in French. I also found “Malum Insanam” in the 1736 Lexicon technicum of John Harris, entry “Bacciferous Plants”.

Melongena was used as a scientific genus name around 1700 and later adopted by Linnaeus in 1753. I can't find any sign that it was used in English colloquial, non-scientific writing. Its etymology is a whirlwind trip across the globe. Here's what the OED says about it:

  • The neo-Latin scientific term is from medieval Latin melongena

  • Latin melongena is from medieval Greek μελιντζάνα (/melintzána/), a variant of Byzantine Greek ματιζάνιον (/matizánion/) probably inspired by the common Greek prefix μελανο- (/melano-/) “dark-colored”. (Akin to “melanin” for example.)

  • Greek ματιζάνιον is from Arabic bāḏinjān (بَاذِنْجَان). (The -ιον suffix is a diminutive.)

  • Arabic bāḏinjān is from Persian bādingān (بادنگان)

  • Persian bādingān is from Sanskrit and Pali vātiṅgaṇa (भण्टाकी)

  • Sanskrit vātiṅgaṇa is from Dravidian (for example, Malayalam is vaḻutana (വഴുതന); the OED says “compare… Tamil vaṟutuṇai”, which I could not verify.)

Wowzers.

Okay, now how do we get to “aubergine”? The list above includes Arabic bāḏinjān, and this, like many Arabic words, was borrowed into Spanish, as berengena or alberingena. (The “al-” prefix is Arabic for “the” and is attached to many such borrowings, for example “alcohol” and “alcove”.)

From alberingena it's a short step to French aubergine. The OED entry for aubergine doesn't mention this. It claims that aubergine is from “Spanish alberchigo, alverchiga, ‘an apricocke’”. I think it's clear that the OED blew it here, and I think this must be the first time I've ever been confident enough to say that. Even the OED itself supports me on this: the note at the entry for brinjal says: “cognate with the Spanish alberengena is the French aubergine”. Okay then. (Brinjal, of course, is a contraction of berengena, via Portuguese bringella.)

Sanskrit vātiṅgaṇa is also the ultimate source of modern Hindi baingan, as in baingan bharta.

(Wasn't there a classical Latin word for eggplant? If so, what was it? Didn't the Romans eat eggplant? How do you conquer the world without any eggplants?)

[ Addendum: My search for antedatings of “zucchini” turned up some surprises. For example, I found what seemed to be many mentions in an 1896 history of Sicily. These turned out not to be about zucchini at all, but rather the computer's pathetic attempts at recognizing the word Σικελίαν. ]

[ Addendum 20200831: Another surprise: Google Books and Hathi Trust report that “zucchini” appears in the 1905 Collier Modern Eclectic Dictionary of the English Language, but it's an incredible OCR failure for the word “acclamation”. ]

[ Addendum 20200911: A reader, Lydia, sent me a beautiful map showing the evolution of the many words for ‘eggplant’. Check it out. ]


[Other articles in category /lang/etym] permanent link

Mon, 24 Aug 2020

Conyngus in gravé

Ripta Pasay brought to my attention the English cookbook Liber Cure Cocorum, published sometime between 1420 and 1440. The recipes are conveyed as poems:

Conyngus in gravé.

Sethe welle þy conyngus in water clere,
  After, in water colde þou wasshe hom sere,
Take mylke of almondes, lay hit anone
  With myed bred or amydone;
Fors hit with cloves or gode gyngere;
  Boyle hit over þo fyre,
Hew þo conyngus, do hom þer to,
  Seson hit with wyn or sugur þo.

(Original plus translation by Cindy Renfrow)

“Conyngus” is a rabbit; English has the cognate “coney”.

If you have read my article on how to read Middle English you won't have much trouble with this. There are a few obsolete words: sere means “separately”; myed bread is bread crumbs, and amydone is starch.

I translate it (very freely) as follows:

Rabbit in gravy.

Boil well your rabbits in clear water,
  then wash them separately in cold water.
Take almond milk, put it on them
  with grated bread or starch;
stuff them with cloves or good ginger;
  boil them over the fire,
cut them up,
  and season with wine or sugar.

Thanks, Ripta!


[Other articles in category /food] permanent link