The Universe of Discourse


Mon, 31 Oct 2022

Emoji for U.S. presidents

Content warning: something here to offend almost everyone

A while back I complained that there were no emoji portraits of U.S. presidents. Not that a Chester A. Arthur portrait would see a lot of use. But some of the others might come in handy.

I couldn't figure them all out. I have no idea what a Chester Arthur emoji would look like. And I assigned 🧔🏻 to all three of Garfield, Harrison, and Hayes, which I guess is ambiguous but do you really need to be able to tell the difference between Garfield, Harrison, and Hayes? I don't think you do. But I'm pretty happy with most of the rest.

George Washington 💵
John Adams
Thomas Jefferson 📜
James Madison
James Monroe
John Quincy Adams 🍐
Andrew Jackson
Martin Van Buren 🌷
William Henry Harrison 🪦
John Tyler
James K. Polk
Zachary Taylor
Millard Fillmore
Franklin Pierce
James Buchanan
Abraham Lincoln 🎭
Andrew Johnson 💩
Ulysses S. Grant 🍸
Rutherford B. Hayes 🧔🏻
James Garfield 🧔🏻
Chester A. Arthur
Grover Cleveland 🔂
Benjamin Harrison 🧔🏻
Grover Cleveland 🔂
William McKinley
Theodore Roosevelt 🧸
William Howard Taft 🛁
Woodrow Wilson 🎓
Warren G. Harding 🫖
Calvin Coolidge 🙊
Herbert Hoover
Franklin D. Roosevelt 👨‍🦽
Harry S. Truman 🍄
Dwight D. Eisenhower 🪖
John F. Kennedy 🍆
Lyndon B. Johnson 🗳️
Richard M. Nixon 🐛
Gerald R. Ford 🏈
Jimmy Carter 🥜
Ronald Reagan 💸
George H. W. Bush 👻
William J. Clinton 🎷
George W. Bush 👞
Barack Obama 🇰🇪
Donald J. Trump 🍊
Joseph R. Biden 🕶️

Honorable mention: Benjamin Franklin 🪁

Dishonorable mention: J. Edgar Hoover 👚

If anyone has better suggestions I'm glad to hear them. Note that I considered, and rejected, 🎩 for Lincoln because it doesn't look like his actual hat. And I thought maybe McKinley should be 🏔️ but since they changed the name of the mountain back I decided to save it in case we ever elect a President Denali.

(Thanks to Liam Damewood for suggesting Harding, and to Colton Jang for Clinton's saxophone.)

[ Addendum 20221106: Twitter user Simon suggests emoji for UK prime ministers. ]

[ Addendum 20221108: Rasmus Villemoes makes a good suggestion of 😼 for Garfield. I had considered this angle, but abandoned it because there was no way to be sure that the cat would be orange, overweight, or grouchy. Also the 🧔🏻 thing is funnier the more it is used. But I had been unaware that there is CAT FACE WITH WRY SMILE until M. Villemoes brought it to my attention, so maybe. (Had there been an emoji resembling a lasagna I would have chosen it instantly.) ]

[ Addendum 20221108: January First-of-May has suggested 🌷 for Maarten van Buren, a Dutch-American whose first language was not English but Dutch. Let it be so! ]

[ Addendum 20221218: Lorrie Kim reminded me that an old friend of ours, a former inhabitant of New Orleans, used to pass through Jackson Square, see the big statue of Andrew Jackson, and reflect gleefully on how he was covered in pigeon droppings. Lorrie suggested that I add a pigeon to the list above.

However, while there is a generic "bird" emoji, it is not specifically a pigeon. There are also three kinds of baby chicks; chickens both male and female; doves, ducks, eagles, flamingos, owls, parrots, peacocks, penguins, swans and turkeys; also single eggs and nests with multiple eggs. But no pigeons. ]


[Other articles in category /history] permanent link

Sun, 30 Oct 2022

Trollopes

A while back, discussing Vladimir Putin (not putain) I said

In English we don't seem to be so quivery. Plenty of people are named “Hoare”. If someone makes a joke about the homophone, people will just conclude that they're a boor.

Today I remembered Frances Trollope and her son Anthony Trollope. Where does the name come from? Surely it's not occupational?

Happily no, just another coincidence. According to Wikipedia it is a toponym, referring to a place called Troughburn in Northumberland, which was originally known as Trolhop, “troll valley”. Sir Andrew Trollope is known to have had the name as long ago as 1461.

According to the Times of London, Joanna Trollope, a 6th-generation descendant of Frances, once recalled

a night out with a “very prim and proper” friend who had the surname Hoare. The friend was dismayed by the amusement she caused in the taxi office when she phoned to book a car for Hoare and Trollope.

I guess the common name "Hooker" is occupational, perhaps originally referring to a fisherman.

[ Frances Trollope previously on this blog: [1] [2] ]

[ Addendum: Wiktionary says that Hooker is occupational, a person who makes hooks. I find it surprising that this would be a separate occupation. And what kind of hooks? I will try to look into this later. ]


[Other articles in category /lang] permanent link

Sun, 23 Oct 2022

This search algorithm is usually called "group testing"

Yesterday I described an algorithm that locates the ‘bad’ items among a set of items, and asked:

does this technique have a name? If I wanted to tell someone to use it, what would I say?

The answer is: this is group testing, or, more exactly, the “binary splitting” version of adaptive group testing, in which we are allowed to adjust the testing strategy as we go along. There is also non-adaptive group testing in which we come up with a plan ahead of time for which tests we will perform.

I felt kinda dumb when this was pointed out, because:

  • A typical application (and indeed the historically first application) is for disease testing
  • My previous job was working for a company doing high-throughput disease testing
  • I found out about the job when one of the senior engineers there happened to overhear me musing about group testing
  • Not only did I not remember any of this when I wrote the blog post, I even forgot about the disease testing application while I was writing the post!

Oh well. Thanks to everyone who wrote in to help me! Let's see, that's Drew Samnick, Shreevatsa R., Matt Post, Matt Heilige, Eric Harley, Renan Gross, and David Eppstein. (Apologies if I left out your name, it was entirely unintentional.)

I also asked:

Is the history of this algorithm lost in time, or do we know who first invented it, or at least wrote it down?

Wikipedia is quite confident about this:

The concept of group testing was first introduced by Robert Dorfman in 1943 in a short report published in the Notes section of Annals of Mathematical Statistics. Dorfman's report – as with all the early work on group testing – focused on the probabilistic problem, and aimed to use the novel idea of group testing to reduce the expected number of tests needed to weed out all syphilitic men in a given pool of soldiers.

Eric Harley said:

[It] doesn't date back as far as you might think, which then makes me wonder about the history of those coin weighing puzzles.

Yeah, now I wonder too. Surely there must be some coin-weighing puzzles in Sam Loyd or H.E. Dudeney that predate Dorfman?

Dorfman's original algorithm is not the one I described. He divides the items into fixed-size groups of n each, and if a group of n contains a bad item, he tests the n items individually. My proposal was to always split the group in half. Dorfman's two-pass approach is much more practical than mine for disease testing, where the test material is a body fluid sample that may involve a blood draw or sticking a swab in someone's nose, where the amount of material may be limited, and where each test offers a chance to contaminate the sample.
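For concreteness, here is a sketch of Dorfman's scheme in Haskell. (The names are mine; test plays the role of the group test, returning True when a group contains at least one bad item.)

    -- A sketch of Dorfman's two-pass scheme: chop the items into
    -- fixed-size groups, test each group once, and retest individually
    -- only the members of groups that fail.
    dorfman :: Int -> ([a] -> Bool) -> [a] -> [a]
    dorfman n test = concatMap sift . chunksOf n
      where
        sift grp
          | test grp  = filter (\x -> test [x]) grp  -- group failed: test each member
          | otherwise = []                           -- group passed: all members good
        chunksOf _ [] = []
        chunksOf k xs = take k xs : chunksOf k (drop k xs)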

Wikipedia has an article about a more sophisticated version of the binary-splitting algorithm I described. The theory is really interesting, and there are many ingenious methods.

Thanks to everyone who wrote in. Also to everyone who did not. You're all winners.

[ Addendum 20221108: January First-of-May has brought to my attention section 5c of David Singmaster's Sources in Recreational Mathematics, which has notes on the known history of coin-weighing puzzles. To my surprise, there is nothing there from Dudeney or Loyd; the earliest references are from the American Mathematical Monthly in 1945. I am sure that many people would be interested in further news about this. ]


[Other articles in category /prog] permanent link

Fri, 21 Oct 2022

More notes on deriving Applicative from Monad

A year or two ago I wrote about what you do if you already have a Monad and you need to define an Applicative instance for it. This comes up in converting old code that predates the incorporation of Applicative into the language: it has these monad instance declarations, and newer compilers will refuse to compile them because you are no longer allowed to define a Monad instance for something that is not an Applicative. I complained that the compiler should be able to infer this automatically, but it does not.

My current job involves Haskell programming and I ran into this issue again in August, because I understood monads but at that point I was still shaky about applicatives. This is a rough edit of the notes I made at the time about how to define the Applicative instance if you already understand the Monad instance.

pure is easy: it is identical to return.

Now suppose we have >>=: how can we get <*>? As I eventually figured out last time this came up, there is a simple solution:

    fc <*> vc = do
      f <- fc
      v <- vc
      return $ f v

or equivalently:

    fc <*> vc = fc >>= \f -> vc >>= \v -> return $ f v
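This is exactly the library function ap from Control.Monad, so once the Monad instance exists, the Applicative boilerplate can be a one-liner: pure = return and (<*>) = ap. A minimal self-contained illustration, using a toy Box monad I made up for the purpose:

    import Control.Monad (ap)

    newtype Box a = Box a            -- a stand-in for your existing monad

    instance Functor Box where
      fmap f (Box a) = Box (f a)

    instance Applicative Box where
      pure  = Box
      (<*>) = ap                     -- the whole job, done by the Monad instance

    instance Monad Box where
      Box a >>= f = f a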

And in fact there is at least one other way to define it that is just as good:

    fc <*> vc = do
      v <- vc
      f <- fc
      return $ f v

(Control.Applicative.Backwards provides a Backwards constructor that reverses the order of the effects in <*>.)
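The trick behind Backwards is roughly the following; this is my paraphrase, not the verbatim library source:

    newtype Backwards f a = Backwards { forwards :: f a }

    instance Functor f => Functor (Backwards f) where
      fmap g (Backwards x) = Backwards (fmap g x)

    instance Applicative f => Applicative (Backwards f) where
      pure = Backwards . pure
      -- perform the right-hand effect first, then apply the function
      Backwards f <*> Backwards a = Backwards (flip ($) <$> a <*> f)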

I had run into this previously and written a blog post about it. At that time I had wanted the second <*>, not the first.

The issue came up again in August because, as an exercise, I was trying to implement the StateT state transformer monad constructor from scratch. (I found this very educational. I had written State before, but StateT was an order of magnitude harder.)
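For reference, the scaffolding I was working with looks something like this. (A sketch: note that I call the accessor run and put the state first in the pair, whereas the standard library version in Control.Monad.Trans.State calls it runStateT and puts the value first.)

    newtype StateT s f a = StateT { run :: s -> f (s, a) }

    instance Functor f => Functor (StateT s f) where
      fmap g (StateT h) = StateT $ \s -> fmap (\(s1, a) -> (s1, g a)) (h s)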

I had written this weird piece of code:

    instance Applicative f => Applicative (StateT s f) where
         pure a = StateT $ \s -> pure (s, a)
         stf <*> stv = StateT $
             \s -> let apf   = run stf s
                       apv   = run stv s
                   in liftA2 comb apf apv where
                       comb = \(s1, f) (s2, v)  -> (s1, f v)  -- s1? s2?

It may not be obvious why this is weird. Normally the definition of <*> would look something like this:

  stf <*> stv = StateT $
    \s0 -> let (s1, f) = run stf s0
               (s2, v) = run stv s1
           in (s2, f v)

This runs stf on the initial state, yielding f and a new state s1, then runs stv on the new state, yielding v and a final state s2. The end result is f v and the final state s2.

Or one could just as well run the two state-changing computations in the opposite order:

  stf <*> stv = StateT $
    \s0 -> let (s1, v) = run stv s0
               (s2, f) = run stf s1
           in (s2, f v)

which lets stv mutate the state first and gives stf the result from that.

I had been unsure of whether I wanted to run stf or stv first. I was familiar with monads, in which the question does not come up. In v >>= f you must run v first because you will pass its value to the function f. In an Applicative there is no such dependency, so I wasn't sure what I needed to do. I tried to avoid the question by running the two computations ⸢simultaneously⸣ on the initial state s0:

    stf <*> stv = StateT $
          \s0 -> let (sf, f) = run stf s0
                     (sv, v) = run stv s0
                 in (sf, f v)

Trying to sneak around the problem, I was caught immediately, like a small child hoping to exit a room unseen but only getting to the doorway. I could run the computations ⸢simultaneously⸣ but on the very next line I still had to say what the final state was in the end: the one resulting from computation stf or the one resulting from computation stv. And whichever I chose, I would be discarding the effect of the other computation.

My co-worker Brandon Chinn opined that this must violate one of the applicative functor laws. I wasn't sure, but he was correct. This implementation of <*> violates the applicative “interchange” law that requires:

    f <*> pure x  ==  pure ($ x) <*> f

Suppose f updates the state from !!s_0!! to !!s_f!!. pure x and pure ($ x), being pure, leave it unchanged.

My proposed implementation of <*> above runs the two computations and then updates the state to sf, the state resulting from the left-hand operand, discarding any updates performed by the right-hand one. In the case of f <*> pure x the update from f is accepted and the final state is !!s_f!!. But in the case of pure ($ x) <*> f the left-hand operand doesn't do an update, and the update from f is discarded, so the final state is !!s_0!!, not !!s_f!!. The interchange law is violated by this implementation.

(Of course we can't rescue this by yielding (sv, f v) in place of (sf, f v); the problem is the same. The final state is now the state resulting from the right-hand operand alone, !!s_0!! on the left side of the law and !!s_f!! on the right-hand side.)
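To see the violation concretely, here is a small self-contained sketch using a plain (non-transformer) state type; St, runSt, and u are my names, nothing standard:

    newtype St s a = St { runSt :: s -> (s, a) }

    instance Functor (St s) where
      fmap g (St h) = St $ \s -> let (s1, a) = h s in (s1, g a)

    instance Applicative (St s) where
      pure a = St $ \s -> (s, a)
      -- the broken ⸢simultaneous⸣ definition from above
      stf <*> stv = St $
        \s0 -> let (sf, f) = runSt stf s0
                   (sv, v) = runSt stv s0
               in (sf, f v)

    -- u updates the state and yields a function
    u :: St Int (Int -> Int)
    u = St $ \s -> (s + 1, (+ 1))

    -- runSt (u <*> pure 7) 0      yields (1, 8)
    -- runSt (pure ($ 7) <*> u) 0  yields (0, 8); u's state update is lost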

Stack Overflow discussion

I worked for a while to compose a question about this for Stack Overflow, but it has been discussed there at length, so I didn't need to post anything:

That first thread contains this enlightening comment:

  • Functors are generalized loops

    [ f x | x <- xs];

  • Applicatives are generalized nested loops

    [ (x,y) | x <- xs, y <- ys];

  • Monads are generalized dynamically created nested loops

    [ (x,y) | x <- xs, y <- k x].

That middle dictum provides another way to understand why my idea of running the effects ⸢simultaneously⸣ was doomed: one of the loops has to be innermost.

The second thread above (“How arbitrary is the ap implementation for monads?”) is close to what I was aiming for in my question, and includes a wonderful answer by Conor McBride (one of the inventors of Applicative). Among other things, McBride points out that there are at least four reasonable Applicative instances consistent with the monad definition for nonempty lists. (There is a hint in his answer here.)

Another answer there sketches a proof that if the applicative “interchange” law holds for some applicative functor f, it holds for the corresponding functor which is the same except that its <*> sequences effects in the reverse order.


[Other articles in category /prog/haskell] permanent link

Thu, 20 Oct 2022

A linguistic oddity

Last week I was in the kitchen and Katara tried to tell Toph a secret she didn't want me to hear. I said this was bad opsec, told them that if they wanted to exchange secrets they should do it away from me, and without premeditating it, I uttered the following:

You shouldn't talk about things you shouldn't talk about while I'm in the room while I'm in the room.

I suppose this is tautological. But it's not any sillier than Tarski's observation that "snow is white" is true exactly if snow is white, and Tarski is famous.

I've been trying to think of more examples that really work. The best I've been able to come up with is:

You shouldn't eat things you shouldn't eat because they might make you sick, because they might make you sick.

I'm trying to decide if the nesting can be repeated. Is this grammatical?

You shouldn't talk about things you shouldn't talk about things you shouldn't talk about while I'm in the room while I'm in the room while I'm in the room.

I think it isn't. But if it is, what does it mean?

[ Previously, sort of. ]


[Other articles in category /lang] permanent link

Wed, 19 Oct 2022

What's this search algorithm usually called?

Consider this problem:

Input: A set !!S!! of items, of which an unknown subset, !!S_{\text{bad}}!!, are ‘bad’, and a function, !!\mathcal B!!, which takes a subset !!S'!! of the items and returns true if !!S'!! contains at least one bad item:

$$ \mathcal B(S') = \begin{cases} \mathbf{false}, & \text{if $S'\cap S_{\text{bad}} = \emptyset$} \\ \mathbf{true}, & \text{otherwise} \\ \end{cases} $$

Output: The set !!S_{\text{bad}}!! of all the bad items.

Think of a boxful of electronic components, some of which are defective. You can test any subset of components simultaneously, and if the test succeeds you know that each of those components is good. But if the test fails all you know is that at least one of the components was bad, not how many or which ones.

The obvious method is simply to test the components one at a time:

$$ S_{\text{bad}} = \{ x\in S \mid \mathcal B(\{x\}) \} $$

This requires exactly !!|S|!! calls to !!\mathcal B!!.

But if we expect there to be relatively few bad items, we may be able to do better:

  • Call !!\mathcal B(S)!!. That is, test all the components at once. If none is bad, we are done.
  • Otherwise, partition !!S!! into roughly equal halves !!S_1!! and !!S_2!!, and apply the same procedure recursively to each half.

In the worst case this takes (nearly) twice as many calls as just calling !!\mathcal B!! on the singletons. But if !!k!! items are bad it requires only !!O(k\log |S|)!! calls to !!\mathcal B!!, a big win if !!k!! is small compared with !!|S|!!.
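A sketch of the whole procedure in Haskell (findBad is my name for it; test plays the role of !!\mathcal B!!, and lists stand in for sets):

    -- test grp returns True if grp contains at least one bad item
    findBad :: ([a] -> Bool) -> [a] -> [a]
    findBad _ [] = []
    findBad test items
      | not (test items) = []   -- the whole group passes: nothing bad in it
      | [x] <- items     = [x]  -- a failing singleton is a bad item
      | otherwise        = findBad test s1 ++ findBad test s2
      where
        (s1, s2) = splitAt (length items `div` 2) items

For example, findBad (elem 13) [1..16] finds the one bad item with 9 calls to the test instead of 16.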

My question is: does this technique have a name? If I wanted to tell someone to use it, what would I say?

It's tempting to say "binary search" but it's not very much like binary search. Binary search finds a target value in a sorted array. If !!S!! were an array sorted by badness we could use something like binary search to locate the first bad item, which would solve this problem. But !!S!! is not a sorted array, and we are not really looking for a target value.

Is the history of this algorithm lost in time, or do we know who first invented it, or at least wrote it down? I think it sometimes pops up in connection with coin-weighing puzzles.

[ Addendum 20221023: this is the pure binary-splitting variation of adaptive group testing. I wrote a followup. ]


[Other articles in category /prog] permanent link

Tue, 18 Oct 2022

Tree search in Haskell

In Perl I would often write a generic tree search function:

    # Perl
    sub search {
      my ($is_good, $children_of, $root) = @_;
      my @queue = ($root);
      return sub {
        while (1) {
          return unless @queue;
          my $node = shift @queue;
          push @queue, $children_of->($node);
          return $node if $is_good->($node);
        }
      }
    }

For example, see Higher-Order Perl, section 5.3.

To use this, we provide two callback functions. $is_good checks whether the current item has the properties we were searching for. $children_of takes an item and returns its children in the tree. The search function returns an iterator object, which, each time it is called, returns a single item satisfying the $is_good predicate, or undef if none remains. For example, this searches the space of all strings over abc for palindromic strings:

    # Perl
    my $it = search(sub { $_[0] eq reverse $_[0] },
                    sub { return map "$_[0]$_" => ("a", "b", "c") },
                    "");

    while (my $pal = $it->()) {
      print $pal, "\n";
    }

Many variations of this are possible. For example, replacing push with unshift changes the search from breadth-first to depth-first. Higher-Order Perl shows how to modify it to do heuristically-guided search.

I wanted to do this in Haskell, and my first try didn’t work at all:

    -- Haskell
    search1 :: (n -> Bool) -> (n -> [n]) -> n -> [n]
    search1 isGood childrenOf root =
      s [root]
        where
          s nodes = do
            n <- nodes
            filter isGood (s $ childrenOf n)

There are two problems with this. First, the filter is in the wrong place. It says that the search should proceed downward only from the good nodes, and stop when it reaches a not-good node. To see what's wrong with this, consider a search for palindromes. The string ab isn't a palindrome, so the search would be cut off at ab, and never proceed downward to find aba or abccbccba. It should be up to childrenOf to decide how to continue the search. If the search should be pruned at a particular node, childrenOf should return an empty list of children. The isGood callback has no role here.

But the larger problem is that in most cases this function will compute forever without producing any output at all, because the call to s recurses before it returns even one list element.

Here’s the palindrome example in Haskell:

    palindromes = search1 isPalindrome extend ""
      where
        isPalindrome s = (s == reverse s)
        extend s = map (s ++) ["a", "b", "c"]

This yields a big fat !!\huge \bot!!: it does nothing, until memory is exhausted, and then it crashes.

My next attempt looked something like this:

        search2 :: (n -> Bool) -> (n -> [n]) -> n -> [n]
        search2 isGood childrenOf root = filter isGood $ s [root]
            where
              s nodes = do
                n <- nodes
                n : (s $ childrenOf n)

The filter has moved outward, into a single final pass over the generated tree. And s now returns a list that at least has the node n on the front, before it recurses. If one doesn’t look at the nodes after n, the program doesn’t make the recursive call.

The palindromes program still isn’t right though. take 20 palindromes produces:

    ["","a","aa","aaa","aaaa","aaaaa","aaaaaa","aaaaaaa","aaaaaaaa",
     "aaaaaaaaa", "aaaaaaaaaa","aaaaaaaaaaa","aaaaaaaaaaaa",
     "aaaaaaaaaaaaa","aaaaaaaaaaaaaa", "aaaaaaaaaaaaaaa",
     "aaaaaaaaaaaaaaaa","aaaaaaaaaaaaaaaaa", "aaaaaaaaaaaaaaaaaa",
     "aaaaaaaaaaaaaaaaaaa"]

It’s doing a depth-first search, charging down the leftmost branch to infinity. That’s because the list returned from s (a:b:rest) starts with a, then has the descendants of a, before continuing with b and b's descendants. So we get all the palindromes beginning with “a” before any of the ones beginning with "b", and similarly all the ones beginning with "aa" before any of the ones beginning with "ab", and so on.

I needed to convert the search to breadth-first, which is memory-expensive but at least visits all the nodes, even when the tree is infinite:

        search3 :: (n -> Bool) -> (n -> [n]) -> n -> [n]
        search3 isGood childrenOf root = filter isGood $ s [root]
            where
              s nodes = nodes ++ (s $ concat (map childrenOf nodes))

This worked. I got a little lucky here, in that I had already had the idea to make s :: [n] -> [n] rather than the more obvious s :: n -> [n]. I had done that because I wanted to do the n <- nodes thing, which is no longer present in this version. But it’s just what we need, because we want s to return a list that has all the nodes at the current level (nodes) before it recurses to compute the nodes farther down. Now take 20 palindromes produces the answer I wanted:

    ["","a","b","c","aa","bb","cc","aaa","aba","aca","bab","bbb","bcb",
     "cac", "cbc","ccc","aaaa","abba","acca","baab"]

While I was writing this version I vaguely wondered if there was something that combines concat and map, but I didn’t follow up on it until just now. It turns out there is and it’s called concatMap. 😛

        search3' :: (n -> Bool) -> (n -> [n]) -> n -> [n]
        search3' isGood childrenOf root = filter isGood $ s [root]
            where
              s nodes = nodes ++ (s $ concatMap childrenOf nodes)

So this worked, and I was going to move on. But then a brainwave hit me: Haskell is a lazy language. I don’t have to generate and filter the tree at the same time. I can generate the entire (infinite) tree and filter it later:

    -- breadth-first tree search
    bfsTree :: (n -> [n]) -> [n] -> [n]
    bfsTree childrenOf nodes =
        nodes ++ bfsTree childrenOf (concatMap childrenOf nodes)

    search4 isGood childrenOf root =
        filter isGood $ bfsTree childrenOf [root]

This is much better because it breaks the generation and filtering into independent components, and also makes clear that searching is nothing more than filtering the list of nodes. The interesting part of this program is the breadth-first tree traversal, and the tree traversal part now has only two arguments instead of three; the filter operation afterwards is trivial. Tree search in Haskell is mostly tree, and hardly any search!
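The factoring also makes other traversal orders easy to swap in. Echoing the earlier remark that replacing push with unshift in the Perl version gives depth-first search, here is a depth-first analogue (a sketch; like search2, it will chase the leftmost branch of an infinite tree forever):

    -- depth-first (preorder) tree traversal
    dfsTree :: (n -> [n]) -> [n] -> [n]
    dfsTree childrenOf = concatMap visit
      where visit n = n : dfsTree childrenOf (childrenOf n)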

With this refactoring we might well decide to get rid of search entirely:

    palindromes4 = filter isPalindrome allStrings
      where
        isPalindrome s = (s == reverse s)
        allStrings = bfsTree (\s -> map (s ++) ["a", "b", "c"]) [""]

And then I remembered something I hadn’t thought about in a long, long time:

[Lazy evaluation] makes it practical to modularize a program as a generator that constructs a large number of possible answers, and a selector that chooses the appropriate one.

That's exactly what I was doing and what I should have been doing all along. And it ends:

Lazy evaluation is perhaps the most powerful tool for modularization … the most powerful glue functional programmers possess.

(“Why Functional Programming Matters”, John Hughes, 1990.)

I felt a little bit silly, because I wrote a book about lazy functional programming and yet somehow, it’s not the glue I reach for first when I need glue.

[ Addendum 20221023: somewhere along the way I dropped the idea of using the list monad for the list construction, instead using explicit map and concat. But it could be put back. For example:

        s nodes = (nodes ++) . s $ do
            n <- nodes
            childrenOf n

I don't think this is an improvement on just using concatMap. ]


[Other articles in category /prog/haskell] permanent link

Thu, 13 Oct 2022

Stethoscope

Today I realized I'm annoyed by the word "stethoscope". "Scope" is Greek for "look at". The telescope is for looking at far things (τῆλε). The microscope is for looking at small things (μικρός). The endoscope is for looking inside things (ἔνδον). The periscope is for looking around things (περί). The stethoscope is for looking at chests (στῆθος).

Excuse me? The hell it is! Have you ever tried looking through a stethoscope? You can't see for shit.

It should obviously have been called the stethophone.

(It turns out that “stethophone” was adopted as the name for a later elaboration of the stethoscope, shown at right, that can listen to two parts of the chest at the same time, and deliver the sounds to different ears.)

Stethophone illustration is in the public domain, via Wikipedia.


[Other articles in category /lang/etym] permanent link