The Universe of Discourse
https://blog.plover.com
The Universe of Discourse (Mark Dominus Blog)

Newton's Method and its instability
https://blog.plover.com/2020/10/18#newton-instability
<p>While messing around with Newton's method for
<a href="https://blog.plover.com/math/newton-mediants.html">last week's article</a>, I built this
Desmos thingy:</p>
<iframe src="https://www.desmos.com/calculator/mqkfdoyno9?embed"
width="100%" height="500px" style="border: 1px solid #ccc"
frameborder=0></iframe>
<p>The red point represents the initial guess; grab it and drag it
around, and watch how the later iterations change. Or, better, visit
the Desmos site and
<a href="https://www.desmos.com/calculator/mqkfdoyno9">play with the slider yourself</a>.</p>
<p>(The curve here is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24y%20%3d%20%28x%2d2%2e2%29%28x%2d3%2e3%29%28x%2d5%2e5%29%24">; it has a local maximum at
around <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%2e7%24">, and a local minimum at around <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%2e64%24">.)</p>
<p>Watching the attractor point jump around I realized I was arriving at
a much better understanding of the instability of the convergence.
Clearly, if your initial guess happens to be near an extremum of
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24">, the next guess could be arbitrarily far away, rather than a
small refinement of the original guess. But even if the original
guess is pretty good, the <em>refinement</em> might be near an extremum, and then the
following guess will be somewhere random. For example, although <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24">
is quite well-behaved in the interval <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5b4%2e3%2c%204%2e35%5d%24">, as the
initial guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> increases across this interval, the refined guess
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5chat%20g%24"> decreases from <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%2e74%24"> to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%2e52%24">, and in between these
there is a local maximum that kicks the ball into the weeds. The
result is that at <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%3d4%2e3%24"> the method converges to the largest of the
three roots, and at <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%3d4%2e35%24">, it converges to the smallest.</p>
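<p>This sensitivity is easy to reproduce numerically. Here is a minimal Python sketch (my own code, not the Desmos worksheet; the function names are my choices) that iterates Newton's method on this cubic from the two starting guesses above:</p>

```python
def f(x):
    return (x - 2.2) * (x - 3.3) * (x - 5.5)

def fprime(x):
    # Derivative of the product, by the product rule.
    return ((x - 3.3) * (x - 5.5)
            + (x - 2.2) * (x - 5.5)
            + (x - 2.2) * (x - 3.3))

def newton(g, steps=50):
    """Iterate g -> g - f(g)/f'(g) a fixed number of times."""
    for _ in range(steps):
        g = g - f(g) / fprime(g)
    return g

print(newton(4.30))  # converges to the largest root, 5.5
print(newton(4.35))  # converges to the smallest root, 2.2
```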
<p>This is where the <a href="https://en.wikipedia.org/wiki/Newton_fractal">Newton
basins</a> come from:</p>
<p><a href="https://pic.blog.plover.com/math/newton-instability/basins.png"><img class="center" border=0
src="https://pic.blog.plover.com/math/newton-instability/basins.png" alt="Julia set for the rational function
associated to Newton's method for ƒ:z→z³−1. The plane is divided
symmetrically into three regions, colored red, green, and blue. The
boundary of each two regions (red and green, say) is inhabited by a
series of leaf shapes of the third color (blue), and the boundaries
between the main regions (green, say) and the (blue) leaves are
inhabited by smaller leaves again of the other color (red), and so on
ad infinitum. The boundaries are therefore an infinitely detailed
filigree of smaller and smaller leaves of all three colors."/></a></p>
<p>Here we are considering the function <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%3az%5cmapsto%20z%5e3%20%2d1%24"> in the
complex plane. Zero is at the center, and the obvious root, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24z%3d1%24">
is to its right, deep in the large red region. The other two roots
are at the corresponding positions in the green and blue regions.</p>
<p>Starting at any red point, the iteration converges to the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24z%3d1%24"> root. Usually, if
you start near this root, you will converge to it, which is why all
the points near it are red. But some nearish starting points are near
an extremum, so that the next guess goes wild, and then the iteration
ends up at the green or the blue root instead; these areas are the
rows of green and blue leaves along the boundary of the large red
region. And some starting points on the boundaries of those leaves
kick the ball into one of the <em>other</em> leaves…</p>
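<p>The same experiment works in the complex plane. A small Python sketch (mine, not the program that drew the picture) applies Newton's method to <i>z</i>³−1 and reports which cube root of unity the iteration lands on:</p>

```python
import cmath

# The three roots of z^3 - 1: index 0 is z = 1.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, steps=60):
    """Iterate Newton's method for z^3 - 1, then return the
    index of the nearest root."""
    for _ in range(steps):
        z = z - (z**3 - 1) / (3 * z**2)
    return min(range(3), key=lambda k: abs(z - ROOTS[k]))

print(newton_basin(2 + 0j))       # 0: deep in the red region, goes to z = 1
print(newton_basin(-0.5 + 0.9j))  # 1: near the root at exp(2πi/3)
```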
<p>Here's the corresponding basin diagram for the polynomial <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24y%20%3d%0a%28x%2d2%2e2%29%28x%2d3%2e3%29%28x%2d5%2e5%29%24"> from earlier:</p>
<p><a href="https://pic.blog.plover.com/math/newton-instability/myfractal.png"><img class="center" border=0
src="https://pic.blog.plover.com/math/newton-instability/myfractal.png" style="width: 100%" alt="Julia set for
the function (x-2.2)(x-3.3)(x-5.5). There is a large pink region on
the left, a large yellow region on the right, and in between a large
blue hourglass-shaped region. The boundary of the yellow region is decorated
with pink bobbles that protrude into the blue region, and similarly
the boundary of the pink region is decorated with yellow
bobbles. Further details are in the text below." /></a></p>
<p>The real axis is the horizontal hairline along the middle of the
diagram. The three large regions are the main basins of attraction to
the three roots (<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%3d2%2e2%2c%203%2e3%24">, and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%245%2e5%24">) that lie within them.</p>
<p>But along the boundaries of each region are smaller intrusive bubbles
where the iteration converges to a surprising value. A point moving
from left to right along the real axis passes through the large pink
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%2e2%24"> region, and then through a very small yellow bubble,
corresponding to the values right around the local maximum near
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%3d2%2e7%24"> where the process unexpectedly converges to the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%245%2e5%24">
root. Then things settle down for a while in the blue region,
converging to the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%243%2e3%24"> root as one would expect, until the value
gets close to the local minimum at <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%2e64%24"> where there is a pink
bubble because the iteration converges to the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%2e2%24"> root instead.
Then as <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%24"> increases from <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%244%2e64%24"> to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%245%2e5%24">, it leaves the pink
bubble and enters the main basin of attraction to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%245%2e5%24"> and stays
there.</p>
<p>If the picture were higher resolution, you would be able to see that
the pink bubbles all have tiny yellow bubbles growing out of them (one
is near 4.39), and the tiny yellow bubbles have even tinier pink bubbles,
and so on forever.</p>
<p>(This was generated by the
<a href="http://usefuljs.net/fractals/">Online Fractal Generator at usefuljs.net</a>;
the labels were added later by me. The labels’ positions are only approximate.)</p>
<p>[ Addendum: Regarding complex points and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%20%3a%20z%5cmapsto%20z%5e3%2d1%24"> I said
“some nearish starting points are near an extremum”. But this isn't
right; <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> has no extrema. It has an inflection point at <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24z%3d0%24">
but this doesn't explain the instability along the lines
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5ctheta%20%3d%20%5cfrac%7b2k%2b1%7d%7b3%7d%5cpi%24">. So there's something going on
here with the complex derivative that I don't understand yet. ]</p>
Fixed points and attractors, part 3
https://blog.plover.com/2020/10/18#attractors-3
<p>Last week I wrote about <a href="https://blog.plover.com/math/newton-mediants.html">a super-lightweight variation on Newton's
method</a>, in which one takes this function:
$$f_n : \frac ab \mapsto \frac{a+nb}{a+b}$$</p>
<p>or equivalently</p>
<p>$$f_n : x \mapsto \frac{x+n}{x+1}$$</p>
<p>Iterating <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f_n%24"> for a suitable initial value (say, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%241%24">) converges
to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24">:</p>
<p>$$
\begin{array}{rr}
x & f_3(x) \\ \hline
1.0 & 2.0 \\
2.0 & 1.667 \\
1.667 & 1.75 \\
1.75 & 1.727 \\
1.727 & 1.733 \\
1.733 & 1.732 \\
1.732 & 1.732
\end{array}
$$</p>
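<p>The table can be reproduced in a few lines of Python (a sketch of my own; the article gives only the numbers):</p>

```python
def f(x, n=3):
    # One step of the iteration x -> (x + n)/(x + 1).
    return (x + n) / (x + 1)

x = 1.0
for _ in range(7):
    print(round(x, 3), "->", round(f(x), 3))
    x = f(x)
# x is now a good approximation to sqrt(3) = 1.7320508...
```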
<p>Later I remembered that
a few months back I wrote a couple of articles about a more general method
that includes this as a special case:</p>
<ul>
<li><a href="https://blog.plover.com/math/attractors.html">How to calculate the square root of 2</a></li>
<li><a href="https://blog.plover.com/math/attractors-2.html">More about fixed points and attractors</a></li>
</ul>
<p>The general idea was:</p>
<blockquote>
<p>Suppose we were to pick a function <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> that had <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%202%24"> as a
fixed point. Then <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%202%24"> might be an attractor, in which case
iterating <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> will get us increasingly accurate approximations to
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%202%24">.</p>
</blockquote>
<p>We can see that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> is a fixed point of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f_n%24">:</p>
<p>$$
\begin{align}
f_n(\sqrt n) & = \frac{\sqrt n + n}{\sqrt n + 1} \\
& = \frac{\sqrt n(1 + \sqrt n)}{1 + \sqrt n} \\
& = \sqrt n
\end{align}
$$</p>
<p>And in fact, it is an
attracting fixed point, because if <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%20%3d%20%5csqrt%20n%20%2b%20%5cepsilon%24"> then</p>
<p>$$\begin{align}
f_n(\sqrt n + \epsilon)
& = \frac{\sqrt n + \epsilon + n}{\sqrt n + \epsilon + 1} \\
& = \frac{(\sqrt n + \sqrt n\epsilon + n) - (\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1} \\
& = \sqrt n - \frac{(\sqrt n -1)\epsilon}{\sqrt n + \epsilon + 1}
\end{align}$$</p>
<p>Disregarding the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cepsilon%24"> in the denominator we obtain
$$f_n(\sqrt n + \epsilon) \approx
\sqrt n - \frac{\sqrt n - 1}{\sqrt n + 1} \epsilon
$$</p>
<p>The error term <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%2d%5cfrac%7b%5csqrt%20n%20%2d%201%7d%7b%5csqrt%20n%20%2b%201%7d%20%5cepsilon%24"> is
strictly smaller than the original error <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cepsilon%24">, because <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%240%20%3c%0a%5cfrac%7bx%2d1%7d%7bx%2b1%7d%20%3c%201%24"> whenever <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%3e1%24">. This shows that the fixed point
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> is attractive.</p>
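<p>The contraction is easy to check numerically. In this sketch (mine, taking <i>n</i> = 3) the observed error ratio agrees with the predicted factor −(√<i>n</i>−1)/(√<i>n</i>+1) ≈ −0.268:</p>

```python
import math

n = 3
root = math.sqrt(n)
eps = 0.01                       # a small initial error

new_err = (root + eps + n) / (root + eps + 1) - root
predicted_ratio = -(root - 1) / (root + 1)

print(new_err / eps)     # about -0.267: the error shrinks and flips sign
print(predicted_ratio)   # about -0.268
```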
<p>In the previous articles I considered several different simple
functions that had fixed points at <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24">, but I didn't think to
consider this unusually simple one. I said at the time:
<blockquote>
<p>I had meant to write about Möbius transformations, but that
will have to wait until next week, I think.</p>
</blockquote>
<p>but I never did get around to the Möbius transformations, and I have long since forgotten
what I planned to say. <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f_n%24"> is an example of a Möbius transformation, and I wonder if
my idea was to systematically find all the Möbius transformations that have <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24">
as a fixed point, and see what they look like. It is probably
possible to automate the analysis of whether the fixed point is
attractive, and if not to apply one of the transformations from the
previous article to make it attractive.</p>
Newton's Method but without calculus — or multiplication
https://blog.plover.com/2020/10/13#newton-mediants
<p>Newton's method goes like this: We have a function <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> and we want
to solve the equation <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%28x%29%20%3d%200%2e%24"> We guess an approximate solution,
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24">, and it doesn't have to be a very good guess.</p>
<p><img class="center" src="https://pic.blog.plover.com/math/newton-mediants/graph1.png" alt="The graph of a wiggly
polynomial curve with roots between 2 and 2.5, between 3 and 3.5, and
between 5 and 6. The middle root is labeled with a question mark.
One point of the curve, not too different from the root, is marked in
blue and labeled “⟨g,f(g)⟩”." /></p>
<p>Then we calculate
the line <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24T%24"> tangent to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> through the point <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5clangle%20g%2c%0af%28g%29%5crangle%24">. This line intersects the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%24">-axis at some new point
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5clangle%20%5chat%20g%2c%200%5crangle%24">, and this new value, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5chat%20g%24">, is a better
approximation to the value we're seeking.</p>
<p><img class="center" src="https://pic.blog.plover.com/math/newton-mediants/graph2.png" alt="The same graph as
before, but with a tangent to the curve drawn through the blue point.
It intersects the x-axis quite close to the root. The intersection
point is labeled “ĝ”." /></p>
<p><a href="https://pic.blog.plover.com/math/newton-mediants/graph3.png"><img style="float: left; width: 15%;
margin-right: 1em;" src="https://pic.blog.plover.com/math/newton-mediants/graph3.png" alt="In the left
margin, a close-up detail of the same curve as before, this time
showing the tangent line at ⟨ĝ,f(ĝ)⟩. The diagram is a close-up
because the line intersects the x-axis extremely close to the actual
root." /></a></p>
<p>Analytically, we have:</p>
<p>$$\hat g = g - \frac{f(g)}{f'(g)}$$</p>
<p>where <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%27%28g%29%24"> is the derivative of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> at <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24">.</p>
<p>We can repeat the process if we like, getting better and better
approximations to the solution. (See detail at left; click to
enlarge. Again, the blue line is the tangent, this time at <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5clangle%0a%5chat%20g%2c%20f%28%5chat%20g%29%5crangle%24">. As you can see, it intersects the axis very close
to the actual solution.)</p>
<p><br clear="right" /></p>
<p>In general, this requires calculus or something like it, but in any
particular case you can avoid the calculus. Suppose we would like to
find the square root of 2. This amounts to solving the equation
$$x^2-2 = 0.$$ The function <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> here is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24x%5e2%2d2%24">, and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%27%24"> is
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242x%24">. Once we know (or guess) <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%27%24">, no further calculus is
needed. The method then becomes: Guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24">, then calculate $$\hat g
= g - \frac{g^2-2}{2g}.$$ For example, if our initial guess is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%20%3d%0a1%2e5%24">, then the formula above tells us that a better guess is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5chat%20g%0a%3d%201%2e5%20%2d%20%5cfrac%7b2%2e25%20%2d%202%7d%7b3%7d%20%3d%201%2e4166%5cldots%24">, and repeating the process
with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5chat%20g%24"> produces <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%241%2e41421%5cmathbf%7b5686%7d%24">, which is very close
to the correct result <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%241%2e41421%5cmathbf%7b3562%7d%24">. If we want the square
root of a different number <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24"> we just substitute it for the <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%242%24">
in the numerator.</p>
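<p>As a Python sketch (my own code, following the formula above):</p>

```python
def sqrt_newton(n, g, steps):
    """Refine a guess g for sqrt(n) with the step g -> g - (g^2 - n)/(2g)."""
    for _ in range(steps):
        g = g - (g * g - n) / (2 * g)
    return g

print(sqrt_newton(2, 1.5, 1))  # 1.4166...
print(sqrt_newton(2, 1.5, 2))  # 1.4142156...
```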
<p>This method for extracting square roots works well and requires no
calculus. It's called the <a href="https://en.wikipedia.org/wiki/Babylonian_method">Babylonian
method</a> and while there's no evidence that it
was actually known to the Babylonians, it is quite ancient; it was
first recorded by Hero of Alexandria about 2000 years ago.</p>
<p>How might this have been discovered if you didn't have calculus? It's
actually quite easy. Here's a picture of the number line. Zero is at
one end, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24"> is at the other, and somewhere in between is <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24">,
which we want to find.</p>
<p><img class="center" src="https://pic.blog.plover.com/math/newton-mediants/numberline1.svg" alt="A line with
the left endpoint marked “0”, the right endpoint marked “n”, and a
point in between marked “square root of n”." /></p>
<p>Also somewhere in between is our guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24">. Say we guessed too low,
so <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%240%20%5clt%20g%20%3c%20%5csqrt%20n%24">. Now consider <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ng%24">. Since <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> is
too small to be <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> exactly, <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ng%24"> must be too large. (If
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ng%24"> were both smaller than <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24">, then their
product would be smaller than <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%24">, and it isn't.)</p>
<p><img class="center" src="https://pic.blog.plover.com/math/newton-mediants/numberline2.svg" alt="The previous
illustration, with green points marked “g” and “\frac{n}{g}”. The
first of these is to the left of the square root of n, the second to
its right." /></p>
<p>Similarly, if the guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> is too large, so that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%20%3c%20g%24">, then <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ng%24"> must be less than <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24">. The
important point is that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> is <em>between</em> <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%0ang%24">. We have narrowed down the interval in which <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> lies,
just by guessing.</p>
<p>Since <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> lies in the interval between <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ng%24">
our next guess should be somewhere in this smaller interval.
The most obvious thing we
can do is to pick the point midway between <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%0ang%24">.
So if we guess the average, $$\frac12\left(g + \frac ng\right),$$ this will probably be
much closer to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> than <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> was:</p>
<p><img class="center" src="https://pic.blog.plover.com/math/newton-mediants/numberline3.svg" alt="The previous
illustration, but the point exactly midway between g and \frac{n}{g}
is marked in blue. It is quite close to the point marked “square root
of n”." /></p>
<p>This average is exactly what Newton's method would have calculated,
because
$$\frac12\left(g + \frac ng\right) =
g - \frac{g^2-n}{2g}.$$</p>
<p>But we were able to arrive at the same computation with no calculus at
all — which is why this method could have been, and was, discovered 1700 years
before Newton's method itself.</p>
<p>If we're dealing with rational numbers then we might write <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%3d%5cfrac%0aab%24">, and then instead of replacing our guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> with a better
guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac12%5cleft%28g%20%2b%20%5cfrac%20ng%5cright%29%24">, we could think of it as
replacing our guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ab%24"> with a better guess
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac12%5cleft%28%5cfrac%20ab%20%2b%20%5cfrac%20n%7b%5cfrac%20ab%7d%5cright%29%24">. This simplifies
to</p>
<p>$$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$</p>
<p>so that for example, if we are calculating <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%202%24">, and we start
with the guess <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%3d%5cfrac32%24">, the next guess is
$$\frac{3^2 + 2\cdot2^2}{2\cdot3\cdot 2} = \frac{17}{12} = 1.4166\ldots$$ as we saw
before. The approximation after that is
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%7b289%2b288%7d%7b2%5ccdot17%5ccdot12%7d%20%3d%20%5cfrac%7b577%7d%7b408%7d%20%3d%0a1%2e41421568%5cldots%24">. Used this way, the method requires only integer
calculations, and converges very quickly.</p>
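<p>Carried out on numerator and denominator directly (a sketch of my own, using Python's arbitrary-precision integers):</p>

```python
def step(a, b, n=2):
    """One step on the fraction a/b: a/b -> (a^2 + n*b^2)/(2ab)."""
    return a * a + n * b * b, 2 * a * b

a, b = 3, 2
a, b = step(a, b)
print(a, b)   # 17 12
a, b = step(a, b)
print(a, b)   # 577 408
```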
<p>But the numerators and denominators increase rapidly, which is good in
one sense (it means you get to the accurate approximations quickly)
but can also be troublesome because the numbers get big and also
because you have to multiply, and multiplication is hard.</p>
<p>But remember how we figured out to do this calculation in the first
place: all we're really trying to
do is find a number in between <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ng%24">. We did that
the first way that came to mind, by
averaging. But perhaps there's a simpler operation that we could use
instead, something even easier to compute?</p>
<p>Indeed there is! We can calculate the
<a href="https://en.wikipedia.org/wiki/mediant_%28mathematics%29"><em>mediant</em></a>. The mediant of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ab%24">
and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20cd%24"> is simply $$\frac{a+c}{b+d}$$ and it is very easy to
show that it lies between <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ab%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20cd%24">, as we want.</p>
<p>So instead of the relatively complicated $$\frac ab \Rightarrow \frac{a^2 + nb^2}{2ab}$$
operation, we can try the very simple and quick
$$\frac ab \Rightarrow \operatorname{mediant}\left(\frac ab,
\frac{nb}{a}\right) = \frac{a+nb}{a+b}$$ operation.</p>
<p>Taking <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%3d2%24"> as before, and starting with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%2032%24">, this produces:</p>
<p>$$ \frac 32
\Rightarrow\frac{ 7 }{ 5 }
\Rightarrow\frac{ 17 }{ 12 }
\Rightarrow\frac{ 41 }{ 29 }
\Rightarrow\frac{ 99 }{ 70 }
\Rightarrow\frac{ 239 }{ 169 }
\Rightarrow\frac{ 577 }{ 408 }
\Rightarrow\cdots$$</p>
<p>which you may recognize as the
<a href="https://en.wikipedia.org/wiki/Continued_fraction%23Infinite_continued_fractions_and_convergents">convergents</a>
of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt2%24">. These are actually the rational
approximations of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%202%24"> that are optimally accurate relative to the sizes of their
denominators. Notice that <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%7b17%7d%7b12%7d%24"> and <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%7b577%7d%7b408%7d%24">
are in there as they were before, although it takes longer to get to them.</p>
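<p>In code, each step of the mediant iteration needs only additions and one multiplication by the fixed <i>n</i> (a Python sketch of my own):</p>

```python
def mediant_step(a, b, n=2):
    """a/b -> mediant(a/b, nb/a) = (a + n*b)/(a + b)."""
    return a + n * b, a + b

a, b = 3, 2
seq = []
for _ in range(6):
    a, b = mediant_step(a, b)
    seq.append((a, b))
print(seq)  # [(7, 5), (17, 12), (41, 29), (99, 70), (239, 169), (577, 408)]
```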
<p>I think it's cool that you can view it as a highly-simplified version
of Newton's method.</p>
<hr />
<p>[ Addendum: An earlier version of the last paragraph claimed:</p>
<blockquote>
<p>None of this is a big surprise, because it's well-known that you
can get the convergents of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24"> by applying the transformation
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac%20ab%5cRightarrow%20%5cfrac%7ba%2bnb%7d%7ba%2bb%7d%24">, starting with <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac11%24">.</p>
</blockquote>
<p>Simon Tatham pointed out that this was mistaken. It's true when
<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%3d2%24">, but not in general. The sequence of fractions that you get
does indeed converge to <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%20n%24">, but it's not usually the
convergents, or even in lowest terms. When <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24n%3d3%24">, for example, the numerators and denominators are all even. ]</p>
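<p>Simon Tatham's observation is easy to verify in a few lines (my own sketch). With <i>n</i> = 3, the starting fraction has odd numerator and denominator, so the next term (1+3)/(1+1) is even over even, and evenness then persists forever:</p>

```python
def mediant_step(a, b, n=3):
    return a + n * b, a + b

a, b = 1, 1
for _ in range(5):
    a, b = mediant_step(a, b)
    print(a, b, "both even:", a % 2 == 0 and b % 2 == 0)
# The fractions still converge to sqrt(3), but are never in lowest terms.
```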
<p>[ Addendum: Newton's method as I described it, with the crucial <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24g%20%e2%86%92%20g%20%2d%0a%5cfrac%7bf%28g%29%7d%7bf%27%28g%29%7d%24"> transformation, was actually invented in 1740
by <a href="https://en.wikipedia.org/wiki/Thomas_Simpson">Thomas Simpson</a>. Both Isaac Newton and Joseph
Raphson had earlier described only special cases, as had several Asian
mathematicians, including <a href="https://en.wikipedia.org/wiki/Seki_K%c5%8dwa">Seki Kōwa</a>. ]</p>
<p>[ Previous discussion of convergents: <a href="https://blog.plover.com/math/sqrt-3.html">Archimedes and the square root
of 3</a>; <a href="https://blog.plover.com/math/60-degree-angles.html">60-degree angles on a
lattice</a>. A different <a href="https://blog.plover.com/math/attractors-2.html">variation
on the Babylonian method</a>. ]</p>
<p>[ Note to self: Take a look at what the AM-GM inequality has to say
about the behavior of <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5chat%20g%24">. ]</p>
<p>[ Addendum 20201018: A while back I discussed the general method of
picking a function <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24"> that has <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5csqrt%202%24"> as a fixed point, and
iterating <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24f%24">. <a href="https://blog.plover.com/math/attractors.html">This is yet another example of such a function</a>. ]</p>
The mystery of the malformed command-line flags
https://blog.plover.com/2020/09/23#hyphens
<p>Today a user came to tell me that their command</p>
<pre><code> greenlight submit branch-name --require-review-by skordokott
</code></pre>
<p>failed, saying:</p>
<pre><code> **
** unexpected extra argument 'branch-name' to 'submit' command
**
</code></pre>
<p>This is surprising. The command <em>looks</em> correct. The branch name is
required. The <span style="white-space: nowrap;"><code>--require-review-by</code></span> option can be supplied any
number of times (including none) and each must have a value provided.
Here it is given once and the provided value appears to be
<code>skordokott</code>.</p>
<p>The <code>greenlight</code> command is a crappy shell script that pre-validates
the arguments before sending them over the network to the real server.
I guessed that the crappy shell script parser wanted the branch name
last, even though the server itself would have been happy to take the
arguments in either order. I suggested that the user try:</p>
<pre><code> greenlight submit --require-review-by skordokott branch-name
</code></pre>
<p>But it still didn't work:</p>
<pre><code> **
** unexpected extra argument '--require-review-by' to 'submit' command
**
</code></pre>
<p>I dug into the script and discovered the problem, which was not
actually a programming error. The crappy shell script was behaving
correctly!</p>
<p>I had written up release notes for the <span style="white-space: nowrap;"><code>--require-review-by</code></span> feature.
The user had clipboard-copied the option string out of
the release notes and pasted it into the shell. So why didn't it work?</p>
<p>In an earlier draft of the release notes, when they were displayed as
an HTML page, there would be bad line breaks:</p>
<blockquote>
<p>blah blah blah be sure to use the <code>-</code> <br />
<code>-require-review-by</code> option…</p>
</blockquote>
<p>or:</p>
<blockquote>
<p>blah blah blah the new <code>--</code> <br />
<code>require-review-by</code> feature is…</p>
</blockquote>
<p>No problem, I can fix it! I just changed the pair of hyphens (<code>-</code> U+002D)
at the beginning of <span style="white-space: nowrap;"><code>--require-review-by</code></span> to Unicode nonbreaking
hyphens (<code>‑</code> U+2011). Bad line breaks begone!</p>
<p>But then this hapless user clipboard-copied the option string out of
the release notes, including its U+2011 characters. The parser in the
script was (correctly) looking for U+002D characters, and didn't
recognize <span style="white-space: nowrap;"><code>--require-review-by</code></span> as an option flag.</p>
<p>One lesson learned: people will copy-paste stuff out of documentation,
and I should be prepared for that.</p>
<p>There are several places to address this. I made the error message
more transparent; formerly it would complain <em>only</em> about the first
argument, which was confusing because it was the one argument that
<em>wasn't</em> superfluous. Now it will say something like</p>
<pre><code> **
 ** extra branch name '--require-review-by' in 'submit' command
 **
</code></pre>
<p>or:</p>
<pre><code> **
 ** extra branch name 'skordokott' in 'submit' command
 **
</code></pre>
<p>which is more descriptive of what it actually doesn't like.</p>
<p>I could change the nonbreaking hyphens in the release notes back to
regular hyphens and just accept the bad line breaks. But I don't want
to. Typography is important.</p>
<p>One idea I'm toying with is to have the shell script silently replace
all nonbreaking hyphens with regular ones before any further
processing. It's a hack, but it seems like it might be a harmless
one.</p>
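<p>A minimal sketch of that hack in bash (this is not the actual
<code>greenlight</code> script, and the function name is made up; the
<code>$'\u2011'</code> escape needs bash 4.2 or later):</p>

```shell
#!/bin/bash
# Rewrite any Unicode non-breaking hyphen (U+2011) in the arguments
# to a plain ASCII hyphen-minus (U+002D), one argument per output line.
normalize_hyphens() {
    local arg
    for arg in "$@"; do
        printf '%s\n' "${arg//$'\u2011'/-}"
    done
}
```

<p>The script could rebuild its argument list from this output (for
example with <code>mapfile</code>) before doing any option parsing.</p>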
<p>So many weird things can go wrong. This computer stuff is really
complicated. I don't know how anyone gets anything done.</p>
<p>[ Addendum: A reader suggests that I could have fixed the line breaks with CSS. But the release notes were being presented as a <a href="https://slack.com/help/articles/203950418-Use-posts-in-Slack">Slack “Post”</a>,
which is essentially a WYSIWYG editor for creating shared documents.
It presents the document in a canned HTML style, and as far as I know
there's no way to change the CSS it uses. Similarly, there's no way to insert raw
HTML elements, so no way to change the style per-element. ]</p>
Weasel words in headlines
https://blog.plover.com/2020/09/13#weasel-words
<p><a href="https://www.npr.org/">The front page of NPR.org today has this headline</a>:</p>
<p><img class="center" src="https://pic.blog.plover.com/lang/weasel-words/npr.png" alt="Screenshot of part
of web page. The main headline is “‘So Skeptical’: As Election Nears, Iowa Senator Under Pressure For
COVID-19 Remarks”. There is a longer subheadline underneath, which I
discussed below."/></p>
<p>It contains this annoying phrase:</p>
<blockquote>
<p>The race for Joni Ernst's seat could help determine control of the
Senate.</p>
</blockquote>
<p>Someone has really committed to hedging.</p>
<p>I would have said that the race <em>would certainly help</em> determine
control of the Senate, or that it <em>could determine</em> control
of the Senate. The statement as written makes an extremely weak
claim.</p>
<p><a href="https://www.npr.org/2020/09/13/912055824/so-skeptical-as-election-nears-iowa-senator-under-pressure-for-covid-19-remarks">The article itself</a>
doesn't include this phrase. This is why reporters hate
headline-writers.</p>
<p>(<a href="https://blog.plover.com/lang/times-headline.html">Previously</a>)</p>
Historical diffusion of words for “eggplant”
https://blog.plover.com/2020/09/11#eggplant-map
<p>In reply to my recent <a href="https://blog.plover.com/lang/etym/zucchini.html">article about the history of words for “eggplant”</a>,
a reader, Lydia, sent me this incredible map they had made that depicts the history and the diffusion of the terms:</p>
<p><a href="https://pic.blog.plover.com/lang/etym/eggplant-map/eggplant-map.png"><img class="center"
style="width: 100%;" border=0 src="https://pic.blog.plover.com/lang/etym/eggplant-map/eggplant-map.png"
alt="A map of the world, with arrows depicting the sequential adoption
of different terms for eggplant, as the words mutated from language to
language. For details, see the previous post. The map is an
oval-shaped projection. The ocean parts of the
map are a dark eggplant-purple color, and an eggplant stem has been
added at the eastern edge, in the Pacific Ocean." /></a></p>
<p>Lydia kindly gave me permission to share their map with you. You can
see the early Dravidian term <em>vaḻutanaṅṅa</em> in India, and then the arrows
show it travelling westward across Persia and Arabia, from there to
East Africa and Europe, and from there to the rest of the world,
eventually making its way <em>back</em> to India as <em>brinjal</em> before setting
out again on yet more voyages.</p>
<p>Thank you very much, Lydia! And Happy Diada Nacional de Catalunya, everyone!</p>
<!-- note to self: do not criticize Lydia's choice to make the
arrows in gradient colors, instead of the same color at each
end. -->
A maxim for conference speakers
https://blog.plover.com/2020/09/11#aphorism
<p>The only thing worse than re-writing your talk the night before is writing your talk the night before.</p>
Zucchinis and Eggplants
https://blog.plover.com/2020/08/28#zucchini
<p>This morning Katara asked me why we call these vegetables “zucchini”
and “eggplant” but the British call them “courgette” and “aubergine”.</p>
<p>I have only partial answers, and the more I look, the more complicated
they get.</p>
<h3>Zucchini</h3>
<p>The zucchini is a kind of squash, which means that in Europe it is a
post-Columbian import from the Americas.</p>
<p>“Squash” itself is from <a href="https://en.wikipedia.org/wiki/Narragansett_language">Narragansett</a>, and is
not related to the verb “to squash”. So I speculate that what
happened here was:</p>
<ul>
<li><p>American colonists had some name for the zucchini, perhaps derived
from Narragansett or another Algonquian language, or perhaps just
“green squash” or “little gourd” or something like that. A squash
is not exactly a gourd, but it's not exactly not a gourd either, and
the Europeans seem to have accepted it as a gourd (see below).</p></li>
<li><p>When the vegetable arrived in France, the French named it
<em>courgette</em>, which means “little gourd”. (<em>Courge</em> = “gourd”.)
Then the Brits borrowed “courgette” from the French.</p></li>
<li><p>Sometime much later, the Americans changed the name to “zucchini”,
which also means “little gourd”, this time in Italian. (<em>Zucca</em> =
“gourd”.)</p></li>
</ul>
<p>The Big Dictionary has citations for “zucchini” only back to 1929, and
“courgette” to 1931. What was this vegetable called before that? Why
did the Americans start calling it “zucchini” instead of whatever they
called it before, and why “zucchini” and not “courgette”? If it was
brought in by Italian immigrants, one might expect the word to have
appeared earlier; the mass immigration of Italians into the U.S. was
over by 1920.</p>
<p>Following up on this thought, I found a mention of it in Cuniberti,
J. Lovejoy, and Herndon,
J. B. (1918). <a href="https://catalog.hathitrust.org/Record/006772230/Home"><em>Practical Italian recipes for American kitchens</em></a>,
p. 18: “Zucchini are a kind of small squash for sale in groceries and
markets of the Italian neighborhoods of our large cities.” Note that
Cuniberti explains what a zucchini is, rather than saying something
like “the zucchini is sometimes known as a green summer squash” or
whatever, which suggests that she thinks it will not already be
familiar to the readers. It looks as though the story is: Colonial
Europeans in North America stopped eating the zucchini at some point,
and forgot about it, until it was re-introduced in the early 20th
century by Italian immigrants.</p>
<p>When did the French start calling it <em>courgette</em>? When did the
Italians start calling it <em>zucchini</em>? Is the Italian term a calque of
the French, or vice versa? Or neither? And since <em>courge</em> (and <em>gourd</em>) are evidently descended from
Latin <em>cucurbita</em>, where did the Italians
get <em>zucca</em>?</p>
<p>So many mysteries.</p>
<h3>Eggplant</h3>
<p>Here I was able to get better answers. Unlike squash, the eggplant is
native to Eurasia and has been cultivated in western Asia for
thousands of years.</p>
<p>The puzzling name “eggplant” is because the fruit, in some varieties,
is round, white, and egg-sized.</p>
<p><img src="https://pic.blog.plover.com/lang/etym/zucchini/eggplant.jpg" class="center" alt="closeup of
an eggplant with several of its round, white, egg-sized fruits that
do indeed look just like eggs" /></p>
<p>The term “eggplant” was then adopted for other varieties of the same
plant where the fruit is entirely un-egglike.</p>
<p>“Eggplant” in English goes back only to 1767. What was it called
before that? Here the OED was more help. It gives this quotation,
from 1785:</p>
<blockquote>
<p>When this [sc. its fruit] is white, it has the name of Egg-Plant.</p>
</blockquote>
<p>I inferred that the preceding text described it under a better-known
name, so, thanks to the Wonders of the Internet, I looked up the
original source:</p>
<blockquote>
<p><em>Melongena</em> or <em>Mad Apple</em> is also of this genus [solanum]; it is
cultivated as a curiosity for the largeness and shape of its fruit;
and when this is white, it has the name of <em>Egg Plant</em>; and indeed it
then perfectly resembles a hen's egg in size, shape, and colour.</p>
</blockquote>
<p>(<a href="https://en.wikipedia.org/wiki/Jean%2dJacques_Rousseau">Jean-Jacques Rosseau</a>, <a href="https://archive.org/details/lettersonelemen00unkngoog/page/n232/mode/2up?q=egg-plant"><em>Letters on the
Elements of
Botany</em></a>,
tr. <a href="https://en.wikipedia.org/wiki/Thomas_Martyn">Thos. Martyn</a> 1785. Page 202. (<a href="https://en.wikipedia.org/wiki/Letters_on_the_Elements_of_Botany">Wikipedia</a>))</p>
<p>The most common term I've found that was used before “egg-plant”
itself is “mad apple”. The OED has cites from the late 1500s that
also refer to it as a “rage apple”, which is a calque of French <em>pomme
de rage</em>. I don't know how long it was called that in French. I also
found “<em>Malum Insanam</em>” in the 1736 <a href="https://archive.org/details/b30457257_0001/page/n176/mode/1up?q=+malum"><em>Lexicon technicum</em> of John
Harris</a>,
entry “Bacciferous Plants”.</p>
<p><em>Melongena</em> was used as a scientific genus name around 1700 and later
adopted by Linnaeus in 1753. I can't find any sign that it was used
in English colloquial, non-scientific writing. Its etymology is a
whirlwind trip across the globe. Here's what the OED says about it:</p>
<ul>
<li><p>The neo-Latin scientific term is from medieval Latin <em>melongena</em></p></li>
<li><p>Latin <em>melongena</em> is from medieval Greek <em>μελιντζάνα</em> (/melintzána/), a variant of
Byzantine Greek <em>ματιζάνιον</em> (/matizánion/) probably inspired by the
common Greek prefix <em>μελανο-</em> (/melano-/) “dark-colored”. (Akin to
“melanin” for example.)</p></li>
<li><p>Greek <em>ματιζάνιον</em> is from Arabic <em>bāḏinjān</em> (بَاذِنْجَان).
(The <em>-ιον</em> suffix is a diminutive.)</p></li>
<li><p>Arabic <em>bāḏinjān</em> is from Persian <em>bādingān</em> (بادنگان)</p></li>
<li><p>Persian <em>bādingān</em> is from Sanskrit and Pali <em>vātiṅgaṇa</em> (भण्टाकी)</p></li>
<li><p>Sanskrit <em>vātiṅgaṇa</em> is from Dravidian (for example, Malayalam is
<em>vaḻutana</em> (വഴുതന); the OED says “compare… Tamil
<em>vaṟutuṇai</em>”, which I could not verify.)</p></li>
</ul>
<p>Wowzers.</p>
<p>Okay, now how do we get to “aubergine”? The list above includes
Arabic <em>bāḏinjān</em>, and this, like many Arabic words, was borrowed into
Spanish, as <em>berengena</em> or <em>alberingena</em>. (The “al-” prefix is Arabic
for “the” and is attached to <a href="https://en.wikipedia.org/wiki/Arabic_language_influence_on_the_Spanish_language#A_(Ababol_to_Alguaza)">many such
borrowings</a>,
for example “alcohol” and “alcove”.)</p>
<p>From <em>alberingena</em> it's a short step to French <em>aubergine</em>. The OED
entry for <em>aubergine</em> doesn't mention this. It claims that <em>aubergine</em>
is from “Spanish <em>alberchigo</em>, <em>alverchiga</em>, ‘an apricocke’”. I think
it's clear that the OED blew it here, and I think this must be the
first time I've ever been confident enough to say that. Even the
OED itself supports me on this: the note at the entry for <em>brinjal</em>
says: “cognate with the Spanish <em>alberengena</em> is the French
<em>aubergine</em>”. Okay then. (<em>Brinjal</em>, of course, is a contraction of
<em>berengena</em>, via Portuguese <em>bringella</em>.)</p>
<p>Sanskrit <em>vātiṅgaṇa</em> is also the ultimate source of modern Hindi
<em>baingan</em>, as in <a href="https://www.cookwithmanali.com/baingan-bharta/"><em>baingan
bharta</em></a>.</p>
<p>(Wasn't there a classical Latin word for eggplant? If so, what was it? Didn't the
Romans eat eggplant? How do you conquer the world without any eggplants?)</p>
<p>[ Addendum: My search for antedatings of “zucchini” turned up some
surprises. For example, I found what seemed to be many mentions in an
1896 history of Sicily. These turned out not to be about zucchini at
all, but rather the computer's pathetic attempts at recognizing the
word <em>Σικελίαν</em>. ]</p>
<p>[ Addendum 20200831: Another surprise: Google Books and Hathi Trust
report that “zucchini” appears in the 1905 Collier <em>Modern Eclectic
Dictionary of the English Language</em>, but it's an incredible OCR
failure for the word “acclamation”. ]</p>
<p>[ Addendum 20200911: A reader, Lydia, sent me a beautiful map showing the evolution of the many words for ‘eggplant’. <a href="https://blog.plover.com/lang/etym/eggplant-map.html">Check it out</a>. ]</p>
Conyngus in gravé
https://blog.plover.com/2020/08/24#conyngus
<p>Ripta Pasay brought to my attention the English cookbook
<a href="https://www.pbm.com/~lindahl/lcc/"><em>Liber Cure Cocorum</em></a>,
published sometime between 1420 and 1440. The recipes are conveyed as
poems:</p>
<blockquote>
<p><em>Conyngus in gravé.</em></p>
<p>Sethe welle þy conyngus in water clere, <br />
After, in water colde þou wasshe hom sere, <br />
Take mylke of almondes, lay hit anone <br />
With myed bred or amydone; <br />
Fors hit with cloves or gode gyngere; <br />
Boyle hit over þo fyre, <br />
Hew þo conyngus, do hom þer to, <br />
Seson hit with wyn or sugur þo.</p>
</blockquote>
<p>(<a href="https://www.pbm.com/~lindahl/lcc/parallel.html#q03">Original plus translation by Cindy Renfrow</a>)</p>
<p>“Conyngus” is a rabbit; English has the cognate “coney”.</p>
<p>If you have read
<a href="https://blog.plover.com/lang/middle-english.html">my article on how to read Middle English</a>
you won't have much trouble with this. There are a few obsolete
words: <em>sere</em> means “separately”; <em>myed bread</em> is bread crumbs, and
<em>amydone</em> is starch.</p>
<p>I translate it (very freely) as follows:</p>
<blockquote>
<p><em>Rabbit in gravy.</em></p>
<p>Boil well your rabbits in clear water, <br />
then wash them separately in cold water. <br />
Take almond milk, put it on them <br />
with grated bread or starch; <br />
stuff them with cloves or good ginger; <br />
boil them over the fire, <br />
cut them up, <br />
and season with wine or sugar.</p>
</blockquote>
<p>Thanks, Ripta!</p>
Mixed-radix fractions in Bengali
https://blog.plover.com/2020/08/21#bengali
<p>[ Previously, <a href="https://blog.plover.com/math/telugu.html">Base-4 fractions in Telugu</a>. ]</p>
<p>I was really not expecting to revisit this topic, but a couple of
weeks ago, looking for something else, I happened upon the following
curiously-named Unicode characters:</p>
<pre><code> U+09F4 (e0 a7 b4): BENGALI CURRENCY NUMERATOR ONE [৴]
U+09F5 (e0 a7 b5): BENGALI CURRENCY NUMERATOR TWO [৵]
U+09F6 (e0 a7 b6): BENGALI CURRENCY NUMERATOR THREE [৶]
U+09F7 (e0 a7 b7): BENGALI CURRENCY NUMERATOR FOUR [৷]
U+09F8 (e0 a7 b8): BENGALI CURRENCY NUMERATOR ONE LESS THAN THE DENOMINATOR [৸]
U+09F9 (e0 a7 b9): BENGALI CURRENCY DENOMINATOR SIXTEEN [৹]
</code></pre>
<p>Oh boy, more base-four fractions! What on earth does “NUMERATOR ONE
LESS THAN THE DENOMINATOR” mean and how is it used?</p>
<p>An explanation appears in <a href="http://www.unicode.org/L2/L2007/07192-bengali-ganda.pdf">the Unicode proposal to add the related
“ganda”
sign</a>:</p>
<pre><code> U+09FB (e0 a7 bb): BENGALI GANDA MARK [৻]
</code></pre>
<p>(Anshuman Pandey, “Proposal to Encode the Ganda Currency Mark for
Bengali in the BMP of the UCS”, 2007.)</p>
<p>Pandey explains: prior to decimalization, the Bengali rupee (<em>rupayā</em>)
was divided into sixteen <i>ānā</i>. Standard Bengali numerals were used to
write rupee amounts, but there was a special notation for <i>ānā</i>. The
sign ৹ always appears, and means sixteenths. Prefixed to this
is a numerator symbol, which goes ৴, ৵, ৶, ৷ for 1, 2, 3, 4. So for
example, 3 <i>ānā</i> is written ৶৹, which means <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac3%7b16%7d%24">.</p>
<p>The larger fractions are made by adding the numerators, grouping by 4's:</p>
<table align="center" cellpadding="3" cellspacing="0" style="border-collapse: collapse;">
<tr style="background-color: white;"><td> <td align="right">৴ <td align="right">৵ <td align="right">৶ <td align="right" style="">1, 2, 3
<tr style="background-color: pink;"><td align="right">৷<td align="right">৷৴ <td align="right">৷৵ <td align="right">৷৶ <td align="right" style="">4, 5, 6, 7
<tr style="background-color: white;"><td align="right">৷৷<td align="right">৷৷৴ <td align="right">৷৷৵ <td align="right">৷৷৶ <td align="right" style=" padding-left: 1em;">8, 9, 10, 11
<tr style="background-color: pink;"><td align="right">৸ <td align="right">৸৴ <td align="right">৸৵ <td align="right">৸৶ <td align="right" style=" padding-left: 1em;">12, 13, 14, 15
</table>
<p>except that three fours (৷৷৷) is too many, and is abbreviated by the
intriguing NUMERATOR ONE LESS THAN THE DENOMINATOR sign ৸ when more
than 11 <i>ānā</i> are being written.</p>
<p>Historically, the <i>ānā</i> was divided into 20 <i>gaṇḍā</i>; the <i>gaṇḍā</i> amounts are
written with standard (Bengali decimal) numerals instead of the
special-purpose base-4 numerals just described. The <i>gaṇḍā</i> sign ৻
<em>precedes</em> the numeral, so 4 <i>gaṇḍā</i> (<img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%24%5cfrac15%24"> <i>ānā</i>) is written as ৻৪.
(The ৪ is not an 8, it is a four.)</p>
<p>What if you want to write 17 rupees plus <img src="https://chart.apis.google.com/chart?chf=bg,s,00000000&cht=tx&chl=%249%5cfrac15%24"> <i>ānā</i>? That is 17
rupees plus 9 <i>ānā</i> plus 4 <i>gaṇḍā</i>. If I am reading this report correctly, you
write it this way:</p>
<blockquote style="margin: auto; text-align: center">১৭৷৷৴৻৪</blockquote>
<p>This breaks down into three parts as
<span style="background-color: white;">১৭</span>
<span style="background-color: silver;">৷৷৴</span>
<span style="background-color: white;">৻৪</span>. The ১৭ is a 17, for 17
rupees; the ৷৷৴ means 9 <i>ānā</i> (the denominator ৹ is left implicit) and the
৻৪ means 4 <i>gaṇḍā</i>, as before. There is no separator between the rupees
and the <i>ānā</i>. But there doesn't need to be, because different numerals
are used! An analogous imaginary system in Latin script would be to write the amount as</p>
<blockquote>
<p>17dda¢4</p>
</blockquote>
<p>where the ‘17’ means 17 rupees, the ‘dda’ means 4+4+1=9 <i>ānā</i>, and the ¢4
means 4 <i>gaṇḍā</i>. There is no trouble seeing where the ‘17’ ends and the
‘dda’ begins.</p>
<p>Pandey says there was an even smaller unit, the <i>kaṛi</i>. It was worth ¼
of a <i>gaṇḍā</i> and was again written with the special base-4 numerals, but
as if the <i>gaṇḍā</i> had been divided into 16. A complete amount might be
written with decimal numerals for the rupees, base-4 numerals for the
<i>ānā</i>, decimal numerals again for the <i>gaṇḍā</i>, and base-4 numerals again
for the <i>kaṛi</i>. No separators are needed, because each section is written with symbols that are different from the ones in the adjoining sections.</p>
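<p>As a toy sketch of the notation (my own code, not from Pandey's
proposal; the function names are invented), here is how a
rupee-plus-<i>ānā</i>-plus-<i>gaṇḍā</i> amount could be rendered:</p>

```python
# Toy renderer for pre-decimal Bengali currency amounts, following the
# scheme described above: Bengali decimal digits for rupees, base-4
# numerator signs for ana (with the denominator sign ৹ left implicit,
# as in the full-amount example), then the ganda mark and Bengali
# decimal digits for ganda.
BENGALI_DIGITS = "০১২৩৪৫৬৭৮৯"          # U+09E6 .. U+09EF

def bengali_number(n):
    """Write n with standard Bengali decimal digits."""
    return "".join(BENGALI_DIGITS[int(d)] for d in str(n))

def ana_numeral(n):
    """Write 0 <= n < 16 ana with the base-4 numerator signs."""
    ones = ["", "\u09F4", "\u09F5", "\u09F6"]            # ৴ ৵ ৶ = 1, 2, 3
    fours, rest = divmod(n, 4)
    head = "\u09F8" if fours == 3 else "\u09F7" * fours  # ৸ = 12, ৷ = 4
    return head + ones[rest]

def currency(rupees, ana, ganda):
    s = bengali_number(rupees) + ana_numeral(ana)
    if ganda:
        s += "\u09FB" + bengali_number(ganda)            # ৻ ganda mark
    return s
```

<p>With these, <code>currency(17, 9, 4)</code> produces ১৭৷৷৴৻৪,
matching the example above.</p>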
Recommended reading: Matt Levine’s Money Stuff
https://blog.plover.com/2020/08/06#money-stuff
<p>Lately my favorite read has been Matt Levine’s <em>Money Stuff</em> articles
from Bloomberg News. Bloomberg's web site requires a subscription but
you can also get the <em>Money Stuff</em> articles as an
<a href="http://link.mail.bloombergbusiness.com/join/4wm/moneystuff-signup">occasional email</a>.
It arrives at most once per day.</p>
<p>Almost every issue teaches me something interesting I didn't know, and
almost every issue makes me laugh.</p>
<p>Example of something interesting: a while back it was all over the
news that oil prices were negative. Levine was there to explain what
was really going on and why. Some people manage index funds. They
are not trying to beat the market, they are trying to match the index.
So they buy derivatives that give them the right to buy oil futures
contracts at whatever the day's closing price is. But say they
already own a bunch of oil contracts. If they can get the close-of-day
price to dip, then their buy-at-the-end-of-the-day contracts will all
be worth more because the counterparties have contracted to buy at the
dip price. How can you get the price to dip by the end of the day?
Easy, unload 20% of your contracts at a bizarre low price, to make the
value of the other 80% spike… it makes my head swim.</p>
<p>But there are weird second- and third-order effects too. Normally if
you invest fifty million dollars in oil futures speculation, there is
a worst-case: the price of oil goes to zero and you lose your fifty
million dollars. But for these derivative futures, the price could in
theory become negative, and for a short time in April, it did:
<blockquote>
<p>If the ETF’s oil futures go to -$37.63 a barrel, as some futures did
recently, the ETF investors lose $20—their entire investment—and, uh,
oops? The ETF runs out of money when the futures hit zero; someone
else has to come up with the other $37.63 per barrel.</p>
</blockquote>
<p>One article I particularly remember discussed the kerfuffle a while
back concerning whether Kelly Loeffler improperly traded stocks on
classified coronavirus-related intelligence that she received in her
capacity as a U.S. senator. I found
<a href="https://www.bloomberg.com/opinion/articles/2020-05-14/senator-s-stock-trades-make-trouble">Levine's argument</a>
persuasive:</p>
<blockquote>
<p>“I didn’t dump stocks, I am a well-advised rich person, someone
else manages my stocks, and they dumped stocks without any input
from me” … is a good defense! It’s not insider trading if you don’t
trade; if your investment manager sold your stocks without input
from you then you’re fine. Of course they could be lying, but in
context the defense seems pretty plausible. (Kelly Loeffler, for
instance, controversially dumped about 0.6% of her portfolio at
around the same time, which sure seems like the sort of thing an
investment adviser would do without any input from her? You could
call your adviser and say “a disaster is coming, sell everything!,”
but calling them to say “a disaster is coming, sell a tiny bit!”
seems pointless.)</p>
</blockquote>
<p>He contrasted this case with that of Richard Burr, who, unlike
Loeffler, remains under investigation. The discussion was factual and
informative, unlike what you would get from, say, Twitter, or even
Metafilter, where the response was mostly limited to variations on
“string them up” and “eat the rich”.</p>
<p><em>Money Stuff</em> is also very funny. Today’s letter discusses a
disclosure filed recently by
<a href="https://en.wikipedia.org/wiki/Nikola_Corporation">Nikola Corporation</a>:</p>
<blockquote>
<p>More impressive is that Nikola’s revenue for the second quarter was
very small, just
$36,000. Most impressive, though, is how they earned that revenue:</p>
<blockquote>
<p>During the three months ended June 30, 2020 and 2019 the Company
recorded solar revenues of $0.03 million and $0.04 million,
respectively, for the provision of solar installation services to
the Executive Chairman, which are billed on time and materials
basis. … </p>
</blockquote>
<p>“Solar installation projects are not related to our primary operations
and are expected to be discontinued,” says Nikola, but I guess they
are doing one last job, specifically installing solar panels at
founder and executive chairman Trevor Milton’s house? It is a $13
billion company whose only business so far is doing odd jobs around
its founder’s house. </p>
</blockquote>
<p>A couple of recent articles that cracked me up discussed clueless
day-traders pushing up the price of Hertz stock <em>after</em> Hertz had
declared bankruptcy, and how Hertz diffidently attempted to get the
SEC to approve a new stock issue to cater to these idiots. (The SEC
said no.)</p>
<p>One recurring theme in the newsletter is “Everything is Securities
Fraud”. This week, Levine asks:</p>
<blockquote>
<p>Is it securities fraud for a public company to pay bribes to public
officials in exchange for lucrative public benefits?</p>
</blockquote>
<p>Of course you'd expect that the executives would be criminally
charged, as they have been. But is there a cause for the company’s
shareholders to sue? If you follow the newsletter, you know what the
answer will be:</p>
<blockquote>
<p>Oh absolutely…</p>
</blockquote>
<p>because Everything is Securities Fraud.</p>
<blockquote>
<p>Still it is a little weird. Paying bribes to get public benefits is,
you might think, the sort of activity that benefits
shareholders. Sure they were deceived, and sure the stock price was
too high because investors thought the company’s good performance
was more legitimate and sustainable than it was, etc., but the
shareholders are strange victims. In effect, executives broke the
law in order to steal money for the shareholders, and when the
shareholders found out they sued? It seems a little ungrateful?</p>
</blockquote>
<p>I recommend it.</p>
<ul>
<li><a href="https://www.bloomberg.com/authors/ARbTQlRLRjE/matthew-s-levine">Matt Levine on Bloomberg.com</a> </li>
<li><a href="http://link.mail.bloombergbusiness.com/join/4wm/moneystuff-signup">Free email newsletter signup</a></li>
</ul>
<p><a href="https://twitter.com/matt_levine">Levine also has a Twitter account</a> but it
is mostly just links to his newsletter articles.</p>
<p>[ Addendum 20200821: Unfortunately, just a few days after I posted
this, Matt Levine announced that his newsletter would be on hiatus for a
few months, as he would be on paternity leave. Sorry! ]</p>
A maybe-interesting number trick?
https://blog.plover.com/2020/08/05#GF32
<p>I'm not sure if this is interesting, trivial, or both. You decide.</p>
<p>Let's divide the numbers from 1 to 30 into the following six groups:</p>
<table cellpadding="20em" cellspacing=0 align="center">
<tr bgcolor='white'><td><b>A
<td align='right'>1
<td align='right'>2
<td align='right'>4
<td align='right'>8
<td align='right'>16
<tr bgcolor='pink'><td><b>B
<td align='right'>3
<td align='right'>6
<td align='right'>12
<td align='right'>17
<td align='right'>24
<tr bgcolor='white'><td><b>C
<td align='right'>5
<td align='right'>9
<td align='right'>10
<td align='right'>18
<td align='right'>20
<tr bgcolor='pink'><td><b>D
<td align='right'>7
<td align='right'>14
<td align='right'>19
<td align='right'>25
<td align='right'>28
<tr bgcolor='white'><td><b>E
<td align='right'>11
<td align='right'>13
<td align='right'>21
<td align='right'>22
<td align='right'>26
<tr bgcolor='pink'><td><b>F
<td align='right'>15
<td align='right'>23
<td align='right'>27
<td align='right'>29
<td align='right'>30
</table>
<p>Choose any two rows. Choose a number from each row, and multiply
them mod 31. (That is, multiply them, and if the product is 31 or
larger, divide it by 31 and keep the remainder.)</p>
<p>Regardless of which two numbers you chose, the result will always be
in the same row. For example, any two numbers chosen from rows <em>B</em>
and <em>D</em> will multiply to yield a number in row <em>E</em>. If both numbers
are chosen from row <em>F</em>, their product will always appear in row <em>A</em>.</p>
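<p>Here's a quick sketch (my code, not part of the trick itself) that
checks the claim exhaustively:</p>

```python
# Verify: for any two rows, every cross-product mod 31 lands in a
# single row determined only by the pair of rows chosen.
ROWS = {
    "A": [1, 2, 4, 8, 16],
    "B": [3, 6, 12, 17, 24],
    "C": [5, 9, 10, 18, 20],
    "D": [7, 14, 19, 25, 28],
    "E": [11, 13, 21, 22, 26],
    "F": [15, 23, 27, 29, 30],
}
row_of = {n: r for r, members in ROWS.items() for n in members}

def product_row(x, y):
    """The unique row containing a*b mod 31 for every a in row x, b in row y."""
    results = {row_of[(a * b) % 31] for a in ROWS[x] for b in ROWS[y]}
    assert len(results) == 1, (x, y, results)
    return results.pop()
```

<p><code>product_row("B", "D")</code> is <code>"E"</code> and
<code>product_row("F", "F")</code> is <code>"A"</code>, as claimed, and
looping over all 36 pairs of rows raises no assertion.</p>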