The Universe of Discourse


Mon, 20 Aug 2018

Miscellaneous addenda about Catholic priest sex abuse

I hope and expect this will be my last post on this loathsome subject.

Diocesan vs. religious priests

In the previous article, I said:

let us say that there are currently around 1620 priests total in the other six dioceses. (This seems on the high side, since my hand-count of Pittsburgh priests contains only about 200. I don't know what to make of this.)

Later I added a guess about why:

I wonder if it is because my hand counts, taken from diocesan directory pages, include only diocesan priests, where the total of 2500 also includes the religious priests. I should look into this.

I think this is probably correct. The list of Pittsburgh priests has a section at the bottom headed “Religious Priests Serving in the Diocese”, but it is empty.

Kennedy's Catholic Directory

While attempting to get better estimates for the total number of active priests, I located part of The Official Catholic Directory of P.J. Kennedy & Sons for the year 1980, available on the Internet Archive. It appears that this is only one volume of many, and unfortunately IA does not seem to have the others. Luckily, though, this happens to be the volume that includes information for Philadelphia, Pittsburgh, and Scranton. It reports:

                                                     Philadelphia  Pittsburgh  Scranton
                                                         (p. 695)    (p. 715)  (p. 892)
Priests: diocesan, active in diocese                          831         547       374
Priests: active outside diocese                                33          29        13
Priests: retired, sick, or absent                             121          60        52
Diocesan priests in foreign missions                            2           2        26
Religious priests resident in diocese                         553         243       107
Religious priests from diocese in foreign missions             93          78        26
Total priests in diocese                                     1631         957       572

I have no idea how authoritative this is, or what is the precise meaning of “official” in the title. The front matter would probably explain, but it does not appear in the one volume I have. The cover also advises “Important: see explanatory notes, pp. vi–viii”, which I have not seen.

The Directory also includes information for the Ukrainian Catholic Archeparchy of Philadelphia, which, being part of a separate (but still Catholic) church, is distinct from the Roman Catholic Archdiocese of Philadelphia. (It reports 127 total priests.) It's not clear whether to absorb this into my estimate, because I'm not sure whether it was part of the total number I got from the Pennsylvania Catholic Conference.

But I am not going back to check because I feel there is no point in trying to push on in this way. An authoritative and accurate answer is available from the official census, and my next step, should I care to take one, should be to go to the library and look at it, rather than continuing to pile up inaccurate guesses based on incomplete information.

Sipe's earlier estimate

Jonathan Segal points out that A.W. Richard Sipe, a famous expert on clergy sexual abuse, had estimated in 1990 that about 6% of U.S. priests had sexually abused children. This is close to my own estimate of 6.1% for the six Pennsylvania dioceses. Most of this agreement should be ascribed to luck.

Final remark

In 1992, Magda Davitt, the Irish singer formerly known as Sinéad O'Connor, became infamous for protesting systematic Catholic sex abuse by tearing up a photograph of John Paul II on live American television. She was almost universally condemned. (The Wikipedia article has a few of the details.)

This week, America: The Jesuit Review, which claims to be “the leading Catholic journal of opinion in the United States”, reported:

Pope Francis issued a letter to Catholics around the world Monday condemning the "crime" of priestly sexual abuse and its cover-up and demanding accountability…

The Vatican issued the three-page letter ahead of Francis' trip this weekend to Ireland, a once staunchly Roman Catholic country where the church's credibility has been damaged by years of revelations that priests raped and molested children with impunity and their superiors covered up for them.

(“Pope Francis issues new letter on sex abuse: ‘We showed no care for the little ones’”.)

A lot of people owe Sinéad O'Connor a humble apology.


[Other articles in category /religion] permanent link

Fri, 17 Aug 2018

What fraction of Pennsylvania Catholic priests were child molesters?

The grand jury report on Catholic clergy sexual abuse has been released and I have been poring over it. The great majority of it is details about the church's handling of the 301 “predator priests” that the grand jury identified.

I have seen in several places the suggestion that this is one-third of a total of 900. This is certainly not the case. There may be 900 priests there now, but the report covers all the abuse that the grand jury found in examining official records from the past seventy years or so. Taking a random example, pages 367–368 of the report concern the Reverend J. Pascal Sabas, who abused a 14-year-old boy starting in 1964. Sabas was ordained in 1954 and died in 1996.

I tried to get a good estimate of the total number of priests over the period covered by the report. Information was rather sketchy. The Vatican does do an annual census of priests, the Annuarium Statisticum Ecclesiae, but I could not find it online and hardcopies sell for around €48. Summary information by continent is reported elsewhere [2], but the census unfortunately aggregates North and South America as a single continent. I did not think it reasonable to try to extrapolate from the aggregate to the number of priests in the U.S. alone, much less to Pennsylvania.

Still we might get a very rough estimate as follows. The Pennsylvania Catholic Conference says that there were a total of just about 2500 priests in Pennsylvania in 2017 or 2016. The Philadelphia Archdiocese and the Altoona-Johnstown Diocese are not discussed in the grand jury report, having been the subject of previous investigations. The official websites of these two dioceses contain lists of priests and I counted 792 in the Philadelphia directory and 80 in the Altoona-Johnstown directory, so let us say that there are currently around 1620 priests total in the other six dioceses. (This seems on the high side, since my hand-count of Pittsburgh priests contains only about 200. I don't know what to make of this.)

[ Addendum 20180820: I wonder if it is because my hand counts, taken from diocesan directory pages, include only diocesan priests, where the total of 2500 also includes the religious priests. I should look into this. ]

Suppose that in 1950 there were somewhat more, say 2160. The average age of ordination is around 35 years; say that a typical priestly career lasts around 40 years further. So say that each decade, around one-quarter of the priesthood retires. If around 84% of the retirees are replaced, the replacement brings the total number back up to 96% of its previous level, so that after 70 years about 75% remain. Then the annual populations might be approximately:

$$\begin{array}{rrrr} \text{year} & \text{total population} & \text{retirements} & \text{new arrivals} \\ 1950 & 2160 & 540 & 453 \\ 1960 & 2073 & 518 & 434 \\ 1970 & 1989 & 497 & 417 \\ 1980 & 1909 & 477 & 400 \\ 1990 & 1832 & 458 & 384 \\ 2000 & 1758 & 439 & 368 \\ 2010 & 1687 & 421 & 354 \\ \hline \text{(total)} & & 3350 & 2810 \\ \end{array}$$

From the guesses above we might estimate a total number of individual priests serving between 1950 and 2018 as !!2160 + 2810 - 70 = 4900!!. (That's 2160 priests who were active in 1950, plus 2810 new arrivals since then, except minus !!354\cdot20\% \approx 70!! because it's only 2018 and 20% of the new arrivals for 2010–2020 haven't happened yet.)
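Here is a sketch of that arithmetic in Haskell, a reconstruction of the back-of-the-envelope model above rather than anyone's official figures:

    -- Model: each decade one quarter of the priesthood retires, and 84%
    -- of the retirees are replaced by new arrivals.
    retirees, arrivals, step :: Double -> Double
    retirees p = p / 4
    arrivals p = 0.84 * retirees p
    step     p = p - retirees p + arrivals p

    populations :: [Double]          -- 1950, 1960, ..., 2010
    populations = take 7 (iterate step 2160)

    -- Individuals serving 1950-2018: the 1950 cohort, plus all the new
    -- arrivals, minus the 20% of the 2010-2020 arrivals still to come.
    totalServed :: Double
    totalServed = head populations
                + sum (map arrivals populations)
                - 0.2 * arrivals (last populations)

    main :: IO ()
    main = print totalServed         -- roughly 4900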

So the 301 predator priests don't represent one-third of the population, they probably represent “only” around !!\frac{301}{4900} \approx 6.1\%!!.

The church's official response is available.

[ Addenda: An earlier version of this article estimated around 900 current priests instead of 1620; I believe that this was substantially too low. Also, that earlier version incorrectly assumed that ordinations equalled new priests, which is certainly untrue, since ordained priests can and do arrive in Pennsylvania from elsewhere. ]

[ Addendum 20180820: Some followup notes. ]


[Other articles in category /religion] permanent link

Tue, 14 Aug 2018

What causes traffic jams

A few years ago Katara was very puzzled by traffic jams, and any time we were in a traffic slowdown she would ask questions about it. For example: why is traffic still moving? Why does your car eventually get through, even though the traffic jam is still there? Why do jams persist even after the original cause is repaired? But she seemed unsatisfied with my answers. Eventually, in a flash of inspiration, I improvised the following example, which seemed to satisfy her, and I still think it's a good explanation.


Suppose you have a four-lane highway where each lane can carry up to 25 cars per minute. Right now only 80 cars per minute are going by so the road is well under capacity.

But then there is a collision or some other problem that blocks one of the lanes. Now only three lanes are open, with a total capacity of 75 cars per minute. 80 cars per minute are arriving, but only 75 per minute can get past the blockage. What happens now? Five more cars arrive each minute than can leave, and they accumulate at the blocked point. After two hours, 600 cars have piled up.

But it's not always the same 600 cars! 75 cars per minute are still leaving from the front of the traffic jam, and each minute 80 more cars arrive at the back to replace them. If you are at the back, there are 600 cars in front of you waiting to leave. After a minute, the 75 at the front have left and there are only 525 cars in front of you; meanwhile 80 more cars have joined the line behind you. After 8 minutes all the cars that were in front of you have left and you are at the front and can move on. Meanwhile, the traffic jam has gotten bigger.

Suppose that after two hours the blockage is cleared. The road again has a capacity of 100 cars per minute. But cars are still arriving at 80 per minute, so each minute 20 more cars can leave than arrive. There are 600 waiting, so it takes another 30 minutes for the entire traffic jam to disperse.
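The whole story can be checked with a toy simulation. Here is a sketch in Haskell, using the numbers from the example above:

    -- Arrivals at 80 cars/min; capacity 75 while one lane is blocked
    -- (the first 120 minutes), then 100 once it is cleared.
    queue :: [Int]                   -- cars waiting at each minute
    queue = scanl step 0 [1 .. 150 :: Int]
      where
        step q minute = max 0 (q + 80 - capacity minute)
        capacity minute
          | minute <= 120 = 75
          | otherwise     = 100

    main :: IO ()
    main = do
      print (queue !! 120)   -- 600: the pile-up when the blockage clears
      print (queue !! 150)   -- 0: thirty minutes later the jam is gone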


This leaves out some important second-order (and third-order) effects. For example, traffic travels more slowly on congested roads; maximum safe speed decreases with following distance. But as a first explanation I think it really nails the phenomenon.


[Other articles in category /tech] permanent link

Wed, 08 Aug 2018

Fake keyword origins

[ Previously: [1] [2] ]

In my original article, I said:

I was fairly confident I had seen something like this somewhere before, and that it was not original to me.

Jeremy Yallop brought up an example that I had definitely seen before.

In 2008 Conor McBride and Ross Paterson wrote an influential paper, “Idioms: applicative programming with effects” that introduced the idea of an applicative functor, a sort of intermediate point between functors and monads. It has since made its way into standard Haskell and was deemed sufficiently important to be worth breaking backward compatibility.

McBride and Paterson used several notations for operations in an applicative functor. Their primary notation was !!\iota!! for what is now known as pure and !!\circledast!! for what has since come to be written as <*>. But the construction

$$\iota f \circledast is_1 \circledast \ldots \circledast is_n$$

came up so often they wanted a less cluttered notation for it:

We therefore find it convenient, at least within this paper, to write this form using a special notation

$$ [\![ f is_1 \ldots is_n ]\!] $$

The brackets indicate a shift into an idiom where a pure function is applied to a sequence of computations. Our intention is to provide a sufficient indication that effects are present without compromising the readability of the code.

On page 5, they suggested an exercise:

… show how to replace !![\![!! and !!]\!]!! by identifiers iI and Ii whose computational behaviour delivers the above expansion.

They give a hint, intended to lead the reader to the solution, which involves a function named iI that does some legerdemain on the front end and then a singleton type data Ii = Ii that terminates the legerdemain on the back end. The upshot is that one can write

iI f x y Ii

and have it mean

(pure f) <*> x <*> y
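The machinery looks something like this (a sketch along the lines of the wiki solution mentioned below; details vary):

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances, UndecidableInstances #-}

    class Applicative i => Idiomatic i f g | g -> f i where
      idiomatic :: i f -> g

    -- iI opens the bracket: lift the pure function, then keep
    -- absorbing arguments...
    iI :: Idiomatic i f g => f -> g
    iI = idiomatic . pure

    -- ...until the Ii terminator ends the legerdemain.
    data Ii = Ii

    instance Applicative i => Idiomatic i x (Ii -> i x) where
      idiomatic xi Ii = xi

    instance Idiomatic i f g => Idiomatic i (s -> f) (i s -> g) where
      idiomatic sfi si = idiomatic (sfi <*> si)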

The haskell wiki has details, written by Don Stewart when the McBride-Paterson paper was still in preprint. The wiki goes somewhat further, also defining

 data J = J

so that

iI f x y J z Ii

now does a join on the result of f x y before applying the result to z.

I have certainly read this paper more than once, and I was groping for this example while I was writing the original article, but I couldn't quite put my finger on it. Thank you, M. Yallop!

[ By the way, I am a little bit disappointed that the haskell wiki is not called “Hicki”. ]


[Other articles in category /prog/haskell] permanent link

A fake keyword example

In the previous article I described a rather odd abuse of the Haskell type system to use a singleton type as a sort of pseudo-keyword, and asked if anyone had seen this done elsewhere.

Joachim Breitner reported having seen this before, most recently in LiquidHaskell, which defines a QED singleton type:

 data QED = QED
 infixl 2 ***

 (***) :: a -> QED -> Proof
 _ *** _ = ()

so that they can end every proof with *** QED:

singletonP x
      =   reverse [x]
      ==. reverse [] ++ [x]
      ==. [] ++ [x]
      ==. [x]
      *** QED

This example is from Vazou et al., Functional Pearl: Theorem Proving for All, p. 3. The authors explain: “The QED argument serves a purely aesthetic purpose, allowing us to conclude proofs with *** QED.”

Or see the examples from the bottom of the LH splash page, proving the associative law for ++.

I looked in the rest of the LiquidHaskell distribution but did not find any other uses of the singleton-type trick. I would still be interested to see more examples.

[ Addendum: Another example. ]


[Other articles in category /prog/haskell] permanent link

Is this weird Haskell technique something I made up?

A friend asked me the other day about techniques in Haskell to pretend to make up keywords. For example, suppose we want something like a (monadic) while loop, say like this:

      while cond act =
          cond >>= \b -> if b then act >> while cond act
                              else return ()   

This uses a condition cond (which might be stateful or exception-throwing or whatever, but which must yield a boolean value) and an action act (likewise, but its value is ignored) and it repeats the action over and over until the condition is false.

Now suppose for whatever reason we don't like writing it as while condition action and we want instead to write while condition do action or something of that sort. (This is a maximally simple example, but the point should be clear even though it is silly.) My first suggestion was somewhat gross:

      while c _ a = ...

Now we can write

      while condition "do" action

and the "do" will be ignored. Unfortunately we can also write while condition "wombat" action and you know how programmers are when you give them enough rope.

But then I had a surprising idea. We can define it this way:

      data Do = Do
      while c Do a = ...

Now we write

      while condition 
        Do action

and if we omit or misspell the Do we get a compile-time type error that is not even too obscure.
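Here is a complete version one might actually run (the IORef counter is just my own example harness):

    import Data.IORef

    data Do = Do

    while :: Monad m => m Bool -> Do -> m a -> m ()
    while c Do a = c >>= \b -> if b then a >> while c Do a
                                    else return ()

    -- Prints the numbers 1 through 5.
    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      while (fmap (< 5) (readIORef counter))
        Do (modifyIORef counter (+ 1) >> readIORef counter >>= print)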

For a less trivial (but perhaps sillier) example, consider:

    data Exception a = OK a | Exception String
    instance Monad Exception where ...

    data Catch = Catch
    data OnSuccess = OnSuccess
    data AndThen = AndThen

    try computation Catch handler OnSuccess success AndThen continuation =
      case computation of OK a        -> success >> (OK a) >>= continuation
                          Exception e ->            (handler e) >>= continuation

The idea here is that we want to try a computation, and do one thing if it succeeds and another if it throws an exception. The point is not the usefulness of this particular and somewhat contrived exception handling construct, it's the syntactic sugar of the Catch, OnSuccess, and AndThen:

    try (evaluate some_expression)
      Catch (\error -> case error of "Division by zero" -> ... 
                                      ... )
      OnSuccess ...
      AndThen ...

I was fairly confident I had seen something like this somewhere before, and that it was not original to me. But I've asked several Haskell experts and nobody has said it was familiar. I thought perhaps I had seen it somewhere in Brent Yorgey's code, but he vehemently denied it.

So my question is, did I make up this technique of using a one-element type as a pretend keyword?

[ Addendum: At least one example of this trick appears in LiquidHaskell. I would be interested to hear about other places it has been used. ]

[ Addendum: Jeremy Yallop points out that a similar trick was hinted at in McBride and Paterson “Idioms: applicative programming with effects” (2008), with which I am familiar, although their trick is both more useful and more complex. So this might have been what I was thinking of. ]


[Other articles in category /prog/haskell] permanent link

Thu, 02 Aug 2018

How to explain infinity to kids

A professor of mine once said to me that all teaching was a process of lying, and then of replacing the lies with successively better approximations of the truth. “I say it's like this,” he said, “and then later I say, well, it's not actually like that, it's more like this, because the real story is too complicated to explain all at once.” I wouldn't have phrased it like this, but I agree with him in principle. One of the most important issues in pedagogical practice is deciding what to leave out, and for how long.

Kids inevitably want to ask about numerical infinity, and many adults will fumble the question, mumbling out some vague or mystical blather. Mathematics is prepared to offer a coherent and carefully-considered answer. Unfortunately, many mathematically-trained people also fumble the question, because mathematics is prepared to offer too many answers. So the mathematical adult will often say something like “well, it's a lot of things…” which for this purpose is exactly not what is wanted. When explaining the concept for the very first time, it is better to give a clear and accurate partial explanation than a vague and imprecise overview. This article suggests an answer that is short, to the point, and also technically correct.

In mathematics “infinity” names a whole collection of not always closely related concepts from analysis, geometry, and set theory. Some of the concepts that come under this heading are:

  • The !!+\infty!! and !!-\infty!! one meets in real analysis, which can be seen either as a convenient fiction (with “as !!x!! goes to infinity” being only a conventional rephrasing of “as !!x!! increases without bound”) or as the endpoints in the two-point compactification of !!\Bbb R!!.

  • The !!\infty!! one meets in complex analysis, which is a single point one adds to compactify the complex plane into the Riemann sphere.

  • The real version of the preceding, the “point at infinity” on the real projective line.

  • The entire “line at infinity” that bounds the real projective plane.

  • The vast family of set-theoretic infinite cardinals !!\aleph_0, \aleph_1, \ldots!!.

  • The vast family of set-theoretic infinite ordinals, !!\omega, \omega+1, \ldots, \epsilon_0, \ldots, \Omega, \ldots!!.

I made a decision ahead of time that when my kids first asked what infinity was, I would at first adopt the stance that “infinity” referred specifically to the set-theoretic ordinal !!\omega!!, and that the two terms were interchangeable. I could provide more details later on. But my first answer to “what is infinity” was:

It's the smallest number you can't count to.

As an explanation of !!\omega!! for kids, I think this is flawless. It's brief and it's understandable. It phrases the idea in familiar terms: counting. And it is technically unimpeachable. !!\omega!! is, in fact, precisely the unique smallest number you can't count to.

How can there be a number that you can't count to? Kids who ask about infinity are already quite familiar with the idea that the sequence of natural numbers is unending, and that they can count on and on without bound. “Imagine taking all the numbers that you could reach by counting,” I said. “Then add one more, after all of them. That is infinity.” This is a bit mind-boggling, but again it is technically unimpeachable, and the mind-bogglyness of it is nothing more than the intrinsic mind-bogglyness of the concept of infinity itself. None has been added by vagueness or imprecise metaphor. When you grapple with this notion, you are grappling with the essence of the problem of the completed infinity.

In my experience all kids make the same move at this point, which is to ask “what comes after infinity?” By taking “infinity” to mean !!\omega!!, we set ourselves up for an answer that is much better than the perplexing usual one “nothing comes after infinity”, which, if infinity is to be considered a number, is simply false. Instead we can decisively say that there is another number after infinity, which is called “infinity plus one”. This suggests further questions. “What comes after infinity plus one?” is obvious, but a bright kid will infer the existence of !!\omega\cdot2!!. A different bright kid might ask about !!\omega-1!!, which opens a different but fruitful line of discussion: !!\omega!! is not a successor ordinal, it is a limit ordinal.

Or the kid might ask if infinity plus one isn't equal to infinity, in which case you can discuss the non-commutativity of ordinal addition: if you add the “plus one” at the beginning, it is the same (except for !!\omega!! itself, the picture has just been shifted over on the page):

But if you add the new item at the other end, it is a different picture:

Before there was only one extra item on the right, and now there are two. The first picture exemplifies the Dedekind property, which is an essential feature of infinity. But the existence of an injection from !!\omega+1!! to !!\omega!! does not mean the two are the same ordinal: no such injection can preserve the order.

Just use !!\omega!!. Later on the kid may ask questions that will need to be answered with “Earlier, I did not tell you the whole story.” That is all right. At that point you can reveal the next thing.


[Other articles in category /math] permanent link

Mon, 30 Jul 2018

Lost and found authors: The Poppy Seed Cakes

As a very small child, I loved The Poppy Seed Cakes (1924), by Margery Clark. Later I read it to my own kids. Toph never cared for it, but it was by far Katara's favorite book when she was three. The illustrations are by Maud and Miska Petersham, who later won the Caldecott Medal.

[Two illustrations: at left, Andrewshek, a small boy in an orange cap, leans over an unlatched green gate, conversing with a white goat; the caption reads “‘How do you do, Andrewshek?’” At right, Auntie Katushka, in a yellow dress with large orange flowers and a polka-dotted kerchief, manages an overflowing valise, a green handbag, and an umbrella with a duck-shaped handle; a steamship is in the background.]

(At left, the protagonist, Andrewshek, meeting Auntie Katushka's new white goat, who has run home from the market without her. He is swinging on the gate, which his Auntie Katushka has specifically told him not to do, or the chickens will run out into the road. At right, Auntie Katushka arriving from the Old Country. Depicted are the fine feather bed, the umbrella with the crooked handle, and probably the bag of poppy seeds, all of which are plot points.)

As an adult I wanted to know if Margery Clark had written anything else. And the answer was, yes! Margery Clark, I learned, was a pseudonym for the two authors, Margery Closey Quigley and Mary E. Clark, who were librarians in Endicott, New York. The town was full of Russian and Polish immigrants who came there to make shoes, and Andrewshek and his Auntie Katushka were no doubt inspired by real library patrons. I looked up maps of Endicott in 1924 and found the little park with the stream where Andrewshek's picnic was nearly stolen by a swan. I spent way too much time trying to find out if Andrewshek was more likely to be Russian or Polish by poring over census data.

And Quigley had written another book! After Endicott she became a librarian in Montclair, New Jersey and wrote Portrait of a Library (1936) about how the Montclair Public Library was run. I found it fascinating and compelling. One detail I remember is that when automobiles became common, they started to offer a pickup and delivery service, which the citizens loved:

Delivery of books has been added to the borrowing system. Delivery is made by parcel-post, by Postal Telegraph, by the library messenger, and by the A. & P. delivery man. A messenger who spends one day a week following up overdue books was added in the fall of 1930.

She mentions that in 1934, over 3,000 books were delivered via Western Union messenger, with the borrower paying the 10¢ delivery fee. I think it's pretty awesome that they were able to press the A&P guy into library service.

My favorite detail, though, concerned a major renovation to the library building. They took the opportunity to reorganize the layout of the sections and shelves:

… the confusion and the press of work upon assistants and the criticisms from the public about overcrowding at the library came much too thick and fast. At this point an offer of professional services came from one of Montclair's citizens who is an efficiency engineer. Simultaneously government-subsidized workers were assigned to the library.

The findings made by the library staff under the direction of the efficiency engineer became the beginning of a complete physical rearrangement of the main library and of a redistribution of duties among the staff.

“Oh, wow!” I said. “I know who that was!” That was a fun moment.

(But I see now that I was wrong! Montclair was home to two world-famous efficiency engineers, but one of them had died in 1924, and the library renovation was done in 1931–1932. Quigley's efficiency engineer was surely Lillian Moller Gilbreth. Ooops!)

Regrettably, the Internet Archive has only an abridged version of The Poppy Seed Cakes, with only three of the eight stories. I may have it professionally scanned.


[Other articles in category /book] permanent link

Sat, 28 Jul 2018

George Washington's Breakfast

When I was a kid I enjoyed a story called George Washington's Breakfast, which I have since learned was written in 1969 by Jean Fritz. The protagonist is a boy named George Washington Allen who is fascinated by all things related to his namesake. One morning at breakfast he wonders what George Washington ate for his own breakfast. He has read all the books in his school library about George Washington, but does not know this important detail. His grandmother promises him that when he finds out what George Washington had for breakfast, she will cook it for him, whatever it is.

This launches the whole family on an odyssey that takes them as far as Mount Vernon, but even the people at Mount Vernon don't know. In the end George gets lucky: he finds an old book in the attic that authoritatively states that George Washington

breakfasts about seven o'clock on three small Indian hoecakes, and as many dishes of tea.

George is thrilled, and, having looked up hoecake in the dictionary to find out what it is (a corn cake cooked in the fireplace, on a hoe), jumps up from the table. His grandmother asks him where he is going, and of course he is going to the basement to get the hoe. Grandma refuses to cook on a hoe; George objects.

“When you were in George Washington's kitchen in Mount Vernon, did you see any hoes?”

“Well no, but…”

“Did you see any black iron griddles?”

“Yes.”

“Then that's what I'll use.”

This stuck with me for many years, and thinking back on it one day as an adult, I was suddenly certain that George Allen and his family were black, notwithstanding the plump white kid in the illustrations. At some point I even looked up the original publication to see if the kid was black in the 1969 illustrations. Nope, they are by Paul Galdone and he has made George white. A new edition was published in 1998 with new illustrations by Tomie dePaola, again with a white kid. Ms. Fritz died last year at the age of 102, so it is too late to ask her what she had in mind. But it doesn't matter to me; I am sure George is black. There was a trend in the 1960s for white authors of children’s books to make an effort to depict black kids. (For example: The Snowy Day (1962) and its sequels (through 1968); Corduroy (1968); and many others less well-known.)

Anyway, chalk this up as another story that could not happen in 2018. George's family would search on the Internet, and immediately find out about the hoecakes. I don't recall exactly which book it was that George found in the attic, but in a minute’s searching I was able to find out that it was probably Paul Leicester Ford’s George Washington (1896), and the authoritative statement about Washington's hoecake-and-tea breakfast, on page 193, is quoted from Samuel Stearns, a contemporary of Washington's.

The Stearns quotation definitely appeared in Fritz's story; I remember the wording, and even Samuel Stearns rings a bell now that I see his name again.

For oddballs like me and George Washington Allen, who become obsessed by trivial questions, the Internet is a magnificent and glorious boon. I often think that for me, one of the best results of the rise of the Internet is that I can now track down all the books from my childhood that I liked but only half-remember, find out who wrote them and read them again.


[Other articles in category /book] permanent link

Thu, 26 Jul 2018

Testing for divisibility by 7

There are well-known tests for whether a number (represented as a base-10 numeral) is divisible by 2, 3, 5, 9, or 11. What about 7?

Let's look at where the divisibility-by-9 test comes from. We add up the digits of our number !!n!!. The sum !!s(n)!! is divisible by !!9!! if and only if !!n!! is. Why is that?

Say that !!d_nd_{n-1}\ldots d_0!! are the digits of our number !!n!!. Then

$$n = \sum 10^id_i.$$

The sum of the digits is

$$s(n) = \sum d_i$$

which differs from !!n!! by $$\sum (10^i-1)d_i.$$ Since !!10^i-1!! is a multiple of !!9!! for every !!i!!, every term in the last sum is a multiple of !!9!!. So by passing from !!n!! to its digit sum, we have subtracted some multiple of !!9!!, and the residue mod 9 is unchanged. Put another way:

$$\begin{align} n &= \sum 10^id_i \\ &\equiv \sum 1^id_i \pmod 9 \qquad\text{(because $10 \equiv 1\pmod 9$)} \\ &= \sum d_i \end{align} $$

The same argument works for the divisibility-by-3 test.

For !!11!! the analysis is similar. We add up the digits !!d_0+d_2+\ldots!! and !!d_1+d_3+\ldots!! and check if the sums are equal mod 11. Why alternating digits? It's because !!10\equiv -1\pmod{11}!!, so $$n\equiv \sum (-1)^id_i \pmod{11}$$

and the sum is zero only if the sum of the positive terms is equal to the sum of the negative terms.

The same type of analysis works for !!2, 4, 5,!! and !!8!!. For !!4!! we observe that !!10^i\equiv 0\pmod 4!! for all !!i>1!!, so all but two terms of the sum vanish, leaving us with the rule that !!n!! is a multiple of !!4!! if and only if !!10d_1+d_0!! is. We could simplify this a bit: !!10\equiv 2\pmod 4!! so !!10d_1+d_0 \equiv 2d_1+d_0\pmod 4!!, but we don't usually bother. Say we are investigating !!571496!!; the rule tells us to just consider !!96!!. The "simplified" rule says to consider !!2\cdot9+6 = 24!! instead. It's not clear that that is actually easier.

This approach works badly for divisibility by 7, because !!10^i\bmod 7!! is not simple. It repeats with period 6.

$$\begin{array}{c|cccccc|cccccc} i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \ldots \\ \hline 10^i\bmod 7 & 1 & 3 & 2 & 6 & 4 & 5 & 1 & 3 & 2 & 6 & 4 & \ldots \\ \end{array} $$

The rule we get from this is:

Take the units digit. Add three times the tens digit, twice the hundreds digit, six times the thousands digit, four times the ten-thousands digit, five times the hundred-thousands digit… (and then the weights repeat, starting again with the millions digit) and the original number is a multiple of !!7!! if and only if the sum is also.

For example, considering !!12345678!! we must calculate $$\begin{align} 12345678 & \Rightarrow & 3\cdot1 + 1\cdot 2 + 5\cdot 3 + 4\cdot 4 + 6\cdot 5 + 2\cdot6 + 3\cdot 7 + 1\cdot 8 & = & 107 \\ 107 & \Rightarrow & 2\cdot1 + 3\cdot 0 + 1\cdot7 & = & 9 \end{align} $$ and indeed !!12345678\equiv 107\equiv 9\pmod 7!!.
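The rule mechanizes nicely. A sketch in Haskell:

    -- Weight the digits, units first, by the repeating pattern
    -- 1, 3, 2, 6, 4, 5; the weighted sum is congruent to n mod 7.
    digitSum7 :: Integer -> Integer
    digitSum7 n = sum (zipWith (*) weights (digits n))
      where
        weights  = cycle [1, 3, 2, 6, 4, 5]
        digits 0 = []
        digits m = m `mod` 10 : digits (m `div` 10)

    -- digitSum7 12345678 == 107, digitSum7 107 == 9, and 12345678,
    -- 107, and 9 are all congruent to 2 (mod 7).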

My kids were taught the practical divisibility tests in school, or perhaps learned them from YouTube or something like that. Katara was impressed by my ability to test large numbers for divisibility by 7 and asked how I did it. At first I didn't think about my answer enough, and just said “Oh, it's not hard, just divide by 7 and look at the remainder.” (“Just count the legs and divide by 4.”) But I realized later that there are several tricks I was using that are not obvious. First, she had never learned short division. When I was in school I had been tormented extensively with long division, which looks like this:

This was all Katara had been shown, so when I said “just divide by 7” this is what she was thinking of. But you only need long division for large divisors. For simple divisors like !!7!!, I was taught short division, an easier technique:

Yeah, I wrote 4 when I meant 3. It doesn't matter, we don't care about the quotient anyway.

But that's one of the tricks I was using that wasn't obvious to Katara: we don't care about the quotient anyway, only the remainder. So when I did this in my head, I discarded the parts of the calculation that were about the quotient, and only kept the steps that pertained to the remainder. The way I was actually doing this sounded like this in my mind:

7 into 12 leaves 5. 7 into 53 leaves 4. 7 into 44 leaves 2. 7 into 25 leaves 4. 7 into 46 leaves 4. 7 into 47 leaves 5. 7 into 58 leaves 2. The answer is 2.

At each step, we consider only the leftmost part of the number, starting with !!12!!. !!12\div 7 !! has a remainder of 5, and to this 5 we append the next digit of the dividend, 3, giving 53. Then we continue in the same way: !!53\div 7!! has a remainder of 4, and to this 4 we append the next digit, giving 44. We never calculate the quotient at all.
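This remainder-only short division is just a left fold over the digits. A sketch:

    -- Carry only the remainder from left to right; the quotient is
    -- never computed at all.
    remainderOf :: Integer -> [Integer] -> Integer
    remainderOf divisor = foldl step 0
      where step r digit = (10 * r + digit) `mod` divisor

    -- remainderOf 7 [1,2,3,4,5,6,7,8] == 2, matching the monologue above.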

I explained the idea with a smaller example, like this:

Suppose you want to see if 1234 is divisible by 7. It's 1200-something, so take away 700, which leaves 500-something. 500-what? 530-something. So take away 490, leaving 40-something. 40-what? 44. Now take away 42, leaving 2. That's not 0, so 1234 is not divisible by 7.

This is how I actually do it. For me this works reasonably well up to 13, and after that it gets progressively more difficult until by 37 I can't effectively do it at all. A crucial element is having the multiples of the divisor memorized. If you're thinking about the mod-13 residue of 680-something, it is a big help to know immediately that you can subtract 650.

A year or two ago I discovered a different method, which I'm sure must be ancient, but is interesting because it's quite different from the other methods I described.

Suppose that the final digit of !!n!! is !!b!!, so that !!n=10a+b!!. Then !!-2n = -20a-2b!!, and this is a multiple of !!7!! if and only if !!n!! is. But !!-20a\equiv a\pmod7 !!, so !!a-2b!! is a multiple of !!7!! if and only if !!n!! is. This gives us the rule:

To check if !!n!! is a multiple of 7, chop off the last digit, double it, and subtract it from the rest of the number. Repeat until the answer becomes obvious.

For !!1234!! we first chop off the !!4!! and subtract !!2\cdot4!! from !!123!! leaving !!115!!. Then we chop off the !!5!! and subtract !!2\cdot5!! from !!11!!, leaving !!1!!. This is not a multiple of !!7!!, so neither is !!1234!!. But with !!1239!!, which is a multiple of !!7!!, we get !!123-2\cdot 9 = 105!! and then !!10-2\cdot5 = 0!!, and we win.
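In code, with a small guard for the negative numbers the subtraction can produce, the rule might look like this sketch:

    divisibleBy7 :: Integer -> Bool
    divisibleBy7 n
      | m < 10    = m == 0 || m == 7
      | otherwise = divisibleBy7 (a - 2 * b)  -- chop off b, double, subtract
      where
        m      = abs n                        -- a - 2b may go negative
        (a, b) = m `divMod` 10

    -- divisibleBy7 1234 == False, divisibleBy7 1239 == True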

In contrast to the other methods in this article, this method does not tell you the remainder on dividing because it does not preserve the residue mod 7. When we started with !!1234!! we ended with !!1!!. But !!1234\not\equiv 1\pmod 7!!; rather !!1234\equiv 2!!. Each step in this method multiplies the residue by -2, or, if you prefer, by 5. The original residue was 2, so the final residue is !!2\cdot-2\cdot-2 = 8 \equiv 1\pmod 7!!. (Or, if you prefer, !!2\cdot 5\cdot 5= 50 \equiv 1\pmod 7!!.) But if we only care about whether the residue is zero, multiplying it by !!-2!! doesn't matter.

There are some shortcuts in this method too. If the final digit is !!7!!, then rather than doubling it and subtracting 14 you can just chop it off and throw it away, going directly from !!10a+7!! to !!a!!. If your number is !!10a+8!! you can subtract !!7!! from it to make it easier to work with, getting !!10a+1!! and then going to !!a-2!! instead of to !!a-16!!. Similarly when your number ends in !!9!! you can go to !!a-4!! instead of to !!a-18!!. And on the other side, if it ends in !!4!! it is easier to go to !!a-1!! instead of to !!a-8!!.

But even with these tricks it's not clear that this is faster or easier than just doing the short division. It's the same number of steps, and it seems like each step is about the same amount of work.

Finally, I once wowed Katara on an airplane ride by showing her this:

To check !!1429!! using this device, you start at ⓪. The first digit is !!1!!, so you follow one black arrow, to ①, and then a blue arrow, to ③. The next digit is !!4!!, so you follow four black arrows, back to ⓪, and then a blue arrow which loops around to ⓪ again. The next digit is !!2!!, so you follow two black arrows to ② and then a blue arrow to ⑥. And the last digit is 9 so you then follow 9 black arrows to ① and then stop. If you end where you started, at ⓪, the number is divisible by 7. This time we ended at ①, so !!1429!! is not divisible by 7. But if the last digit had been !!1!! instead, then in the last step we would have followed only one black arrow from ⑥ to ⓪, before we stopped, so !!1421!! is a multiple of 7.

This probably isn't useful for mental calculations, but I can imagine that if you were stuck on a long plane ride with no calculator and you needed to compute a lot of mod-7 residues for some reason, it could be quicker than the short division method. The chart is easy to construct and need not be memorized. The black arrows obviously point from !!n!! to !!n+1!!, and the blue arrows all point from !!n!! to !!10n!!.
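In code the whole chart boils down to those two moves. For instance, a sketch of the blue-arrow table:

    -- The blue arrow from each state n leads to 10n mod 7.
    blueArrows :: [(Int, Int)]
    blueArrows = [ (n, (10 * n) `mod` 7) | n <- [0 .. 6] ]
    -- [(0,0),(1,3),(2,6),(3,2),(4,5),(5,1),(6,4)]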

I made up a whole set of these diagrams and I think it's fun to see how the conventional divisibility rules turn up in them. For example, the rule for divisibility by 3 that says just add up the digits:

Or the rule for divisibility by 5 that says to ignore everything but the last digit:


[Other articles in category /math] permanent link

Wed, 25 Jul 2018

Self-warming sake bottle

In my email today I found a note I sent to myself on 24 June 2015 that says only:

Self-warming sake bottle

which would be useful, if you don't drink your sake so quickly that it is gone before it has cooled.

As far as I know, these are not common and perhaps do not exist at all. Why not? Is this a billion-dollar idea?

A few problems come to mind. If the bottle has a cord, it will be hard to pour and will be easily upset. Maybe the best choice here would be a special power supply with relatively high voltage and a thin cord such as is used for earbuds. But this might be dangerous, or impractical for other reasons I am not thinking of.

I think battery power is also probably impractical. Heating requires a lot of energy and batteries don't supply enough. They would need to be frequently replaced, which gets expensive.

Also temperature control might be troublesome. You would need some sort of thermostat in the bottle. Typical inexpensive heating containers and immersion heaters are for bringing water to a boil, which is much simpler than warming sake to the right temperature: just dump in as much heat as possible, as quickly as possible, and perhaps also arrange to have the heat shut off if the heating element gets up to 100°C. This is much too hot for sake.

I think a more practical solution would be a tabletop hot water bath with a bottle-shaped depression in it. The hot water bath could have the thermostat and be permanently attached to a wall outlet. You insert the bottle in its bath to keep it warm when you are not pouring.

This seems practical enough that I imagine it already exists. In fact it occurs to me that I owned a similar sort of warmer at one point, for warming baby bottles. But a quick perusal of available sake warmers suggests that this approach is not common. What gives?


[Other articles in category /food] permanent link

Mon, 23 Jul 2018

Operations that are not quite associative

The paper “Verification of Identities” (1997) of Rajagopalan and Schulman discusses fast ways to test whether a binary operation on a finite set is associative. In general, there is no method that is faster than the naïve algorithm of simply checking whether $$a\ast(b\ast c) = (a\ast b)\ast c$$ for all triples !!\langle a, b, c\rangle!!. This is because:

For every !!n≥3!!, there exists an operation with just one nonassociative triple.

(Page 3.)

But R&S do not give an example. I now have a very nice example, and I think the process that led me to it is a good example of Lower Mathematics at work.

Let's say that an operation !!\ast!! is “good” if it is associative except in exactly one case. We want to find a “good” operation. My first idea was that if we could find a primitive good operation on a set of 3 elements, we could probably extend it to give a good operation on a larger set. Say the set is !!\{a_0, a_1, a_2, b_0, b_1, \ldots, b_{n-4}\}!!. We just need to define the extended operation so that if either of the operands is !!b_i!!, it is associative, and if both operands are !!a_i!! then it is the same as the primitive good operation we already found. The former part is easy: just make it constant, say !!x\ast y = b_0!! except when !!x,y\in\{a_0, a_1, a_2\}!!.

So now all we need to do is find a single good primitive operation, and I did not expend any thought on this at all. There are only !!3^9=19683!! binary operations, and we could quite easily write a program to check them all. In fact, we can do better: generate a binary operation at random and check it. If it's not the primitive good operation we want, just throw it away and try again. This could take longer to run than the exhaustive search, if there happen to be very few good operations and if the program is unlucky, but the program is easier to write, and the run time will be utterly insignificant either way.

I wrote the program, which instantly produced:

$$ \begin{array}{c|ccc} \ast & 0 & 1 & 2 \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 2 & 0 & 1 & 2 \end{array} $$

This is associative except for the case !!1\ast(2\ast 1) \ne (1\ast 2)\ast 1!!. This solves the problem. But confirming that this is a good operation requires manually checking 27 cases, or a perhaps not-immediately-obvious case analysis.

But on a later run of the program, I got lucky and it found a simpler operation, which I can explain without a table:

!!a\ast b = 0!!, except !!2\ast 1=2!!.

Now we don't have to check 27 cases. The operation is simpler, so the proof is too: We know !!b\ast c\ne 1!!, so !!a\ast(b\ast c) !! must be !!0!!. And the only way !!(a \ast b)\ast c \ne 0!! can occur is when !!a\ast b=2!! and !!c=1!!, so !!a=2, b=1, c=1!!.

Now we can even dispense with the construction that extended the operation from !!3!! to !!n!! elements, because the description of the extended operation is the same. We wanted to extend it to be constant whenever !!a!! or !!b!! was larger than !!2!!, and that's what the description already says!

So that reduces the whole thing to about two sentences, which explain the example and why it works. But when reduced in that way, you see how the example works but not how you might go about finding it. I think the interesting part is to see how to get there, and quite a lot of mathematical education tends to over-emphasize analysis (“how can we show this is a good operation?”) at the expense of synthesis (“how can we find a good operation?”).

The exhaustive search would probably have produced the simple operation early on in its run, so there is something to be said for that approach too.
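For concreteness, an exhaustive version of the search can be written in a few lines of Haskell. This is a sketch, not the program described above:

    import Control.Monad (replicateM)

    type Op = Int -> Int -> Int

    -- Each list of 9 entries is the table of one operation on {0,1,2}.
    ops :: [[Int]]
    ops = replicateM 9 [0, 1, 2]

    toOp :: [Int] -> Op
    toOp table x y = table !! (3 * x + y)

    nonassociative :: Op -> [(Int, Int, Int)]
    nonassociative op =
      [ (a, b, c) | a <- [0 .. 2], b <- [0 .. 2], c <- [0 .. 2]
                  , op a (op b c) /= op (op a b) c ]

    -- Print the tables of all "good" operations: exactly one bad triple.
    main :: IO ()
    main = mapM_ print [ t | t <- ops, length (nonassociative (toOp t)) == 1 ]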


[Other articles in category /math] permanent link