The Universe of Discourse


Thu, 08 May 2025

A descriptive theory of seasons in the Mid-Atlantic

[ I started thinking about this about twenty years ago, and then writing it down in 2019, but it seems to be obsolete. I am publishing it anyway. ]

The canonical division of the year into seasons in the northern temperate zone goes something like this:

  • Spring: March 21 – June 21
  • Summer: June 21 – September 21
  • Autumn: September 21 – December 21
  • Winter: December 21 – March 21

Living in the mid-Atlantic region of the northeast U.S., I have never been happy with this. It is just not a good description of the climate.

I begin by observing that the year is not equally partitioned between the four seasons. The summer and winter are longer, and spring and autumn are brief and happy interludes in between.

I have no problem with spring beginning in the middle of March. I think that is just right. March famously comes in like a lion and goes out like a lamb. The beginning of March is crappy, like February, and frequently has snowstorms and freezes. By the end of March, spring is usually skipping along, with singing birds and not just the early flowers (snowdrops, crocuses, daffodils) but many of the later ones also.

By the middle of May the spring flowers are over and the weather is getting warm, often uncomfortably so. Summer continues through the beginning of September, which is still good for swimming and lightweight clothes. In late September it finally gives way to autumn.

Autumn is jacket weather but not overcoat weather. Its last gasp is in the middle of November. By this time all the leaves have changed, and the ones that are going to fall off the trees have done so. The cool autumn mist has become a chilly winter mist. The cold winter rains begin at the end of November.

So my first cut would look something like this:

    Months                                Season
    mid-November through mid-March        Winter
    mid-March through mid-May             Spring
    mid-May through mid-September         Summer
    mid-September through mid-November    Autumn

Note that this puts Thanksgiving where it belongs at the boundary between autumn (harvest season) and winter (did we harvest enough to survive?). Also, it puts the winter solstice (December 21) about one quarter of the way through the winter. This is correct. By the solstice the days have gotten short, and after that the cold starts to kick in. (“As the days begin to lengthen, the cold begins to strengthen”.) The conventional division takes the solstice as the beginning of winter, which I just find perplexing. December 1 is not the very coldest part of winter, but it certainly isn't autumn.

There is something to be said for it though. I think I can distinguish several subseasons — ten in fact:

Dominus Seasonal Calendar

    Season      Sub-seasons (January through December)
    Winter      Midwinter, Late winter
    Spring      Early spring, Late spring
    Summer      Early summer, Midsummer, Late summer
    Autumn      Early autumn, Late autumn
    Winter      Early winter, Midwinter

Midwinter, beginning around the solstice, is when the really crappy weather arrives, day after day of bitter cold. In contrast, early and late winter are typically much milder. By late February the snow is usually starting to melt. (March, of course, is always unpredictable, and usually has one nasty practical joke hiding up its sleeve. Often, March is pleasant and springy in the second week, and then mocks you by turning back into January for the third week. This takes people by surprise almost every year and I wonder why they never seem to catch on.)

Similarly, the really hot weather is mostly confined to midsummer. Early and late summer may be warm but you do not get blazing sun and you have to fry your eggs indoors, not on the pavement.

Why the seasons seem to turn in the middle of each month, and not at the beginning, I can't say. Someone messed up, but who? Probably the Romans. I hear that the Persians and the Baha’i start their year on the vernal equinox. Smart!

Weather in other places is very different, even in the temperate zones. For example, in southern California they don't have any of the traditional seasons. They have a period of cooler damp weather in the winter months, and then instead of summer they have a period of gloomy haze from June through August.

However

I may have waited too long to publish this article, as climate change seems to have rendered it obsolete. In recent years, we have barely had midwinter, and instead of the usual two to three annual snows we have zero. Midsummer has grown from two to four months, and summer now lasts into October.


[Other articles in category /calendar] permanent link

Fri, 02 May 2025

Claude and I write a utility program

Then I had two problems…

A few days ago I got angry at xargs for the hundredth time, because for me xargs is one of those "then he had two problems" technologies. It never does what I want by default and I can never remember how to use it. This time what I wanted wasn't complicated: I had a bunch of PDF documents in /tmp and I wanted to use GPG to encrypt some of them, something like this:

    gpg -ac $(ls *.pdf | menupick)

menupick is a lovely little utility that reads lines from standard input, presents a menu, prompts on the terminal for a selection from the items, and then prints the selection to standard output. Anyway, this didn't work because some of the filenames I wanted had spaces in them, and the shell sucks. Also because gpg probably only does one file at a time.

I could have done it this way:

    ls *.pdf | menupick | while read f; do gpg -ac "$f"; done

but that's a lot to type. I thought “aha, I'll use xargs.” Then I had two problems.

    ls *.pdf | menupick | xargs gpg -ac

This doesn't work because xargs wants to batch up the inputs to run as few instances of gpg as possible, and gpg only does one file at a time. I glanced at the xargs manual looking for the "one at a time please" option (which should have been the default) but I didn't see it amongst the forest of other options.

I think now that I needed -n 1 but I didn't find it immediately, and I was tired of looking it up every time when it was what I wanted every time. After many years of not remembering how to get xargs to do what I wanted, I decided the time had come to write a stripped-down replacement that just did what I wanted and nothing else.

(In hindsight I should perhaps have looked to see if gpg's --multifile option did what I wanted, but it's okay that I didn't, this solution is more general and I will use it over and over in coming years.)

xar is a worse version of xargs, but worse is better (for me)

First I wrote a comment that specified the scope of the project:

    # Version of xargs that will be easier to use
    #
    # 1. Replace each % with the filename, if there are any
    # 2. Otherwise put the filename at the end of the line
    # 3. Run one command per argument unless there is (some flag)
    # 4. On error, continue anyway
    # 5. Need -0 flag to allow NUL-termination

There! It will do one thing well, as Brian and Rob commanded us in the Beginning Times.

I wrote a draft implementation that did not even do all those things, just items 2 and 4, then I fleshed it out with item 1. I decided that I would postpone 3 and 5 until I needed them. (5 at least isn't a YAGNI, because I know I have needed it in the past.)

The result was this:

    import subprocess
    import sys

    def command_has_percent(command):
        for word in command:
            if "%" in word:
                return True
        return False

    def substitute_percents(target, replacement):
        return [ s.replace("%", replacement) for s in target ]

    def run_command_with_filename(command_template, filename):
        command = command_template.copy()
        if not command_has_percent(command):
            command.append("%")

        res = subprocess.run(substitute_percents(command, filename), check=False)
        return res.returncode == 0

    if __name__ == '__main__':
        template = sys.argv[1:]
        ok = True
        for line in sys.stdin:
            if line.endswith("\n"):
                line = line[:-1]
            if not run_command_with_filename(template, line):
                ok = False
        exit(0 if ok else 1)

Short, clean, simple, easy to use. I called it xar, ran

    ls *.pdf | menupick | xar gpg -ac

and was content.

Now again, with Claude

The following day I thought this would be the perfect opportunity to try getting some LLM help with programming. I already had a baseline version of xar working, and had thought through the problem specification. Now I could do it over with the LLM and compare the two results. The program I wanted was small and self-contained. If the LLM sabotaged me somehow, I would probably notice, and if I didn't, it wouldn't matter, because I had a working version already.

So I asked Claude about it. I knew Claude was not the right choice, because Claude is not really designed for this task, and has no integration with the coding editor, but Claude is what I am familiar with so I thought I'd give it a shot.

On the whole it went extremely well. The complete transcript is here. I imagine this was pretty much a maximally good experience, that all the terrible problems of LLM coding arise in larger, more complicated systems. But this program is small and self-contained, with no subtleties, so the LLM could be at its best.

Phase 1

First I just told Claude what I wanted and let it write the program from scratch.

Let's write a program in Python called xar. It will be a stripped-down version of the standard Unix xargs utility, but it will do less and be simpler to use.

It will get command-line arguments that specify a command template. Then it will read lines from stdin. Each line will be appended to the command template to produce a command. The program will run the resulting command, one command for each line of input.

Any questions?

Default Claude is very chatty and I am going to omit most of its chatty comments, which make me impatient.

The program it wrote was not what I would have written, but it was good enough. If I had just used it right off the bat, instead of writing my own, it would have been enough, and it would have taken somewhere between 2% and 10% as long to produce.

So the one-line summary of this article is: I should stop writing simple command-line utilities, and just have Claude write them instead, because Claude's are good enough, and definitely better than yak-shaving.

Here's Claude's initial version:

    #!/usr/bin/env python3

    import sys
    import subprocess
    import shlex

    def main():
        if len(sys.argv) < 2:
            print("Usage: xar command [args...]", file=sys.stderr)
            sys.exit(1)

        command_template = sys.argv[1:]

        for line in sys.stdin:
            line = line.rstrip('\n')

            full_command = command_template + [line]

            try:
                result = subprocess.run(full_command, check=True)
            except subprocess.CalledProcessError as e:
                print(f"Command failed with exit code {e.returncode}: {' '.join(full_command)}", 
                      file=sys.stderr)
            except FileNotFoundError:
                print(f"Command not found: {full_command[0]}", file=sys.stderr)
                sys.exit(1)

    if __name__ == "__main__":
        main()

Claude's version had numerous comments, which I have omitted. I later told it to stop putting in comments, which it did.

Claude's use of check here was not what I wanted, because that makes subprocess.run raise an exception when the subcommand fails, and then the exception has to be immediately caught and handled. My original control flow had been simpler:

        res = subprocess.run(substitute_percents(command, filename), check=False)
        return res.returncode == 0

        …

        if not run_command_with_filename(template, line):
            ok = False

Claude's program pulled in shlex without noticing that it was unused. But Claude did teach me about str.rstrip('\n') which I had not known about before (or maybe had forgotten), so that was one small win already.

Argument parsing

The next step was a big win. Python's library for command-line argument handling is called argparse and it is really nice. If I were still writing programs in Perl, I would implement a Perl version of argparse because Perl has 29 available argument parsing libraries and they are all loathsome. The one problem with argparse is I never remember off the top of my head how to use it. I think the module is called argparse but it provides a class called Argparser but I often get these backward and try to use argparser and Argparse.

Instead of figuring it out every time I usually dig up some previous Python program and then copy-paste the argument parser from there, amending it to suit the purpose.

But this time I didn't have to do that. Instead, I just said to Claude:

This is good, now please add code at the top to handle argument parsing with the standard Argparse library, even though there are no options yet.

Claude handed me back pretty much the same program, but with the argument parser at the top:

    def parse_arguments():
        """Parse command-line arguments using argparse."""
        parser = argparse.ArgumentParser(
            description="A simplified version of xargs that appends each line from stdin to a command template.",
            prog="xar"
        )

        # Store all positional arguments as the command to execute
        parser.add_argument(
            "command", 
            nargs="+", 
            help="The command template to execute for each input line"
        )

        return parser.parse_args()

Let's pause for a moment. Maybe you kids are unimpressed by this. But if someone had sent this interaction back in time and showed it to me even as late as five years ago, I would have been stunned. It would have been completely science-fictional. The code that it produced is not so impressive, maybe, but that I was able to get it in response to an unstructured request in plain English is like seeing a nuclear-powered jetpack in action.

I make some changes

It bothered me that the command-line arguments were being taken implicitly from sys.argv; that makes the program hard to test. A better way to do this is to have the list of command-line arguments be a parameter to the main function, and then for the runtime to call main(sys.argv[1:]) instead of just main(). That way you can test the program without having to fake up sys.argv. So I changed that, and while I was making changes I got rid of Claude's comments and the unnecessary exception handling. I pasted the result back at Claude, saying:

Good so far. I have adjusted your source code in several ways:

  1. I removed the comments.
  2. I removed the exception handling for the subcommand. If one of the subcommands fails, that no longer terminates the main loop
  3. Command-line arguments are passed to the main() function rather than being accessed directly

Here is the code. Tell me if you see any problems or if you have any questions about it

I had also sabotaged the code with at least one likely error. Python's subprocess.run returns a “result” structure that includes the subcommand's output and exit status, and I can never remember the name of the component that records the exit status. I either look it up or get it by trial-and-error. This time I didn't bother to do either, I just guessed that it was result.exitcode, which I handed off to Claude even though I thought it was probably wrong.

Claude's reply said:

Here are a couple of issues I noticed:

There's an error in the error handling code - you're using result.exitcode but in the subprocess.run() function, the attribute is called returncode not exitcode

Awesome, now I don't have to remember that name of that component.

Claude continued:

The parse_arguments() function definition is missing in the pasted code, but it would need to be updated to accept the argv parameter

I had passed sys.argv[1:] to main, and then had main call parse_arguments(argv). But I hadn't changed parse_arguments to expect an argument list. (I hadn't included it in my paste at all, in fact.) Claude was exactly right, and continued with the corrected version:

    #!/usr/bin/env python3

    import sys
    import subprocess
    import argparse

    def parse_arguments(argv):
        …
        return parser.parse_args(argv)

    def main(argv):
        # Parse command-line arguments
        args = parse_arguments(argv)
        …

    if __name__ == "__main__":
        ok = main(sys.argv[1:])
        exit(0 if ok else 1)
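One payoff of passing the arguments explicitly shows up in testing: the parser can be exercised directly, with no need to monkeypatch sys.argv. A minimal sketch of such a test (mine, not from the transcript; it assumes the program's code is importable from a module named xar):

    from xar import parse_arguments

    def test_parse_arguments_collects_command():
        # All the positional words should land in args.command
        args = parse_arguments(["echo", "hello", "world"])
        assert args.command == ["echo", "hello", "world"]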

Handling NUL-separated values

I said

It's great, thanks for catching those errors. Now let's add a -0 command-line option that tells the program to split the input on "\0" characters instead of at line breaks.

Recall that my handwritten xar did not do this yet; I had left that feature for another day. Claude just added it for me, adding

    parser.add_argument(
        "-0", "--null",
        action="store_true",
        help="Input items are terminated by a null character instead of by newline"
    )

to the argument parser and changing the main loop to:

    # Split input based on the delimiter option
    if args.null:
        # Split on null characters
        items = stdin_data.split('\0')
    else:
        # Split on newlines
        items = stdin_data.splitlines()

    for item in items:
        if not item:
            continue

        …

I was tired of Claude's comments, so I said

Give me the same thing, with no comments

which it did, so I said

From now on leave out the comments. I'm a Python expert and I don't need them. If there is something I don't understand I will ask you.

Claude complied. If I were going to use Claude again in the future I would include that in the canned instructions that Claude is given up front. Instead I will probably use a tool better-suited to programming, and do whatever the analogous thing is.

Template filling

Now I told Claude to add the % feature:

I changed if not item to if item == "". Now let's make the following change:

  • If the command template includes any % characters, each of these should be replaced with the input item.
  • Otherwise, if there were no % characters, the input item should be appended to the end of the command as in the current version

Claude did this. It used an explicit loop instead of the list comprehension that I had used (and preferred), but it did do it correctly:

    for arg in command_template:
        if '%' in arg:
            has_placeholder = True
            full_command.append(arg.replace('%', item))
        else:
            full_command.append(arg)

    if not has_placeholder:
        full_command.append(item)

Even without the list comprehension, I would have factored out the common code:

    for arg in command_template:
        if '%' in arg:
            has_placeholder = True
        full_command.append(arg.replace('%', item))

    if not has_placeholder:
        full_command.append(item)

But I am not going to complain, my code is simpler but is doing unnecessary work.

Claude also took my hint to change item == "" even though I didn't explicitly tell it to change that.

At this point the main loop of the main function was 15 lines long, because Claude had stuck all the %-processing inline. So I said:

Good, let's extract the command template processing into a subroutine.

It did this right, understanding correctly what code I was referring to and extracting it into a subroutine called process_command_template. More science fiction: I can say "command template processing" and it guesses what I had in mind!

This cut the main loop to 7 lines. That worked so well I tried it again:

Good, now let's extract the part of main that processes stdin into a subroutine that returns the items array

It pulled the correct code into a function called process_stdin. It did not make the novice mistake of passing the entire args structure to this function. In the caller it had process_stdin(args.null) and inside of process_stdin this parameter was named use_null_delimiter.
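The transcript has Claude's exact output; roughly, the program now had the shape of this sketch. The helper names are the ones mentioned above, their exact signatures are my guess, and run_command stands in for the piece that calls subprocess.run:

    def run_command(command):
        return subprocess.run(command, check=False).returncode == 0

    def main(argv):
        args = parse_arguments(argv)
        ok = True
        for item in process_stdin(args.null):
            if item == "":
                continue
            command = process_command_template(args.command, item)
            if not run_command(command):
                ok = False
        return ok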

YAGNI?

At this point I was satisfied but I thought I might as well ask if it should do something else before we concluded:

Can you think of any features I left out that would be useful enough to warrant inclusion? Remember this program is supposed to be small and easy to use, in contrast to the existing xargs which is very complicated.

Claude had four suggestions:

  1. A -p or --parallel option to run commands in parallel

  2. A -n or --max-args option to specify the maximum number of items to pass per command

  3. A simple -v or --verbose flag to show commands as they're executed

  4. A way to replace the command's standard input with the item instead of adding it as an argument

All reasonable suggestions, nothing stupid. (It also supplied code for #3, which I had not asked for and did not want, but as I said before, default Claude is very chatty.)

Parallelization

I didn't want any of these, and I knew that #2–4 would be easy to add if I did want any of them later. But #1 was harder. I've done code like this in the past, where the program has a worker pool and runs a new process whenever the worker pool isn't at capacity. It's not even that hard. In Perl you can play a cute trick and use something like

    $workers{spawn()} = 1 while delete $workers{wait()};

where the workers hash maps process IDs to dummy values. A child exits, wait() awakens and returns the process ID of the completed child, which is then deleted from the map, and the loop starts another worker.

I wanted to see how Claude would do it, and the result was an even bigger win than I had had previously, because Claude wrote this:

   with concurrent.futures.ProcessPoolExecutor(max_workers=args.parallel) as executor:
        futures = [executor.submit(execute_command, cmd, args.verbose) for cmd in commands]

        for future in concurrent.futures.as_completed(futures):
            success = future.result()
            if not success:
                ok = False

What's so great about this? What's great is that I hadn't known about concurrent.futures or ProcessPoolExecutor. And while I might have suspected that something like them existed, I didn't know what they were called. But now I do know about them.
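For anyone else meeting it for the first time, here is a minimal, self-contained sketch of the same pattern (not the xar code itself):

    import concurrent.futures
    import subprocess

    def run_one(cmd):
        # Run one command; report whether it succeeded
        return subprocess.run(cmd, check=False).returncode == 0

    if __name__ == "__main__":
        commands = [["echo", "one"], ["false"], ["echo", "two"]]
        with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
            futures = [executor.submit(run_one, cmd) for cmd in commands]
            ok = all(f.result() for f in concurrent.futures.as_completed(futures))
        print(ok)   # False, because "false" exits nonzero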

If someone had asked me to write the --parallel option, I would have had to have this conversation with myself:

Python probably has something like this already. But how long will it take me to track it down? And once I do, will the API documentation be any good, or will it be spotty and incorrect? And will there be only one module, or will there be three and I will have to pick the right one? And having picked module F6, will I find out an hour later that F6 is old and unmaintained and that people will tell me “Oh, you should have used A1, it is the new hotness, everyone knows that.”

When I put all that uncertainty on a balance, and weigh it against the known costs of doing it myself, which one wins?

The right choice is: I should do the research, find the good module (A1, not F6), and figure out how to use it.

But one of my biggest weaknesses as a programmer is that I too often make the wrong choice in this situation. I think “oh, I've done this before, it will be quicker to just do it myself”, and then I do and it is.

Let me repeat, it is quicker to do it myself. But that is still the wrong choice.

Maybe the thing I wrote would be sooner or smaller or faster or more technically suitable to the project than the canned module would have been. But it would only have been more technically suitable today. If it needed a new feature in the future it might have to be changed by someone who had never seen it before, whereas the canned module could well already have the needed feature ready to go, already documented, and perhaps already familiar to whoever had to make the change. My bespoke version would certainly be unfamiliar to every other programmer on the project — including perhaps myself six months later — so would be the wrong thing to use.

I'm really good at hacking this stuff up. Which is a problem. It makes me want to hack stuff up, even when I shouldn't.

Claude tips the balance strongly toward the correct side, which is that I should use the prepackaged module that someone else wrote and not hack something up.

And now I know about concurrent.futures.ProcessPoolExecutor! The world is full of manuals, how can I decide which ones I should read? One way is to read them all, which used to work back when I was younger, but now I have more responsibilities and I don't have time to read the entire Python library reference including all the useless bits I will never use. But here's Claude pointing out to me that this is something I need to know about, now, today, and I should read this one. That is valuable knowledge.

Claude flubs

While testing the program I did

    xar --parallel=2 bash -c '...'

which didn't work because the argparse library tried to parse the -c as a flag to xar, when I wanted it to be a flag to bash. This was my fault, and there is a standard way around it, which is that I should have said

    xar --parallel=2 -- bash -c '...'

which tells argparse to drop the -- and take everything after it literally with no interpretation.
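A quick way to check this (a sketch, reusing the --parallel definition that appears later in this article, not part of the program itself):

    import argparse

    parser = argparse.ArgumentParser(prog="xar")
    parser.add_argument("-p", "--parallel", nargs="?", const=5, type=int, default=1)
    parser.add_argument("command", nargs="+")

    args = parser.parse_args(["--parallel=2", "--", "bash", "-c", "echo hi"])
    print(args.command)   # ['bash', '-c', 'echo hi']; the -c is not parsed as a flag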

I thought I'd see what Claude had to say about this, and it dropped the ball in a very LLM sort of way.

I asked:

When I did xar --parallel=2 bash -c '...' it misunderstood the arguments. It thought I was trying to pass -c to xar and failed with unrecognized arguments. What should I do about that?

Claude replied:

There are two main approaches to handle this:

  1. Use the -- separator to indicate the end of arguments for xar: …
  2. Modify your code to handle this case specifically by looking for a command terminator in the argument parser.

Here's how you could modify your code to implement the first approach:

And then it followed with the same, unmodified code…

Which was correct, because -- is already handled by argparse, so no change was actually required to implement the first approach! But Claude's characterization of it as a modification was misleading, wrong, and potentially very confusing.

I said:

I don't see what the difference is in that argument parsing code as compared to what I had before.

and Claude got into the LLM tail-chasing mode where it got more and more confused:

You're right, I apologize for the confusion. There's no difference in the argument parsing code I provided compared to what you had before. The issue is with how argparse handles arguments by default.

To fix the issue with xar --parallel=2 bash -c '...', you need to actually implement support for the -- separator in your code…

   # Find the index of '--' if it exists
   try:
       separator_index = argv.index('--')
       xar_args = argv[:separator_index]
       command_args = argv[separator_index+1:]
   except ValueError: …

Ugh, all wrong in the worst way. The code probably works, but it is completely unnecessary. Claude's claim that “you need to actually implement support for the -- separator” is flat wrong. I pointed this out and Claude got more confused. Oh well, nobody is perfect!

Lessons learned

A long time ago, when syntax-coloring editors were still new, I tried one and didn't like it, then tried again a few years later and discovered that I liked it better than I had before, and not for the reasons that anyone had predicted or that I would have been able to predict. (I wrote an article about the surprising reasons to use the syntax coloring.)

This time also. As usual, an actual experiment produced unexpected results, because the world is complicated and interesting. Some of the results were unsurprising, but some were not anything I would have thought of beforehand.

Claude's code is good enough, but it is not a magic oracle

Getting Claude to write most of the code was a lot faster and easier than writing it myself. This is good! But I was dangerously tempted to just take Claude's code at face value instead of checking it carefully. I quickly got used to flying along at great speed, and it was tough to force myself to slow down and be methodical, looking over everything as carefully as I would if Claude were a real junior programmer. It would be easy for me to lapse into bad habits, especially if I were tired or ill. I will have to be wary.

Fortunately there is already a part of my brain trained to deal with bright kids who lack experience, and I think perhaps that part of my brain will be able to deal effectively with Claude.

I did not notice any mistakes on Claude's part — at least this time.

At one point my testing turned up what appeared to be a bug, but it was not. The testing was still time well-spent.

Claude remembers the manual better than I do

Having Claude remember stuff for me, instead of rummaging the manual, is great. Having Claude stub out an argument parser, instead of copying one from somewhere else, was pure win.

Partway along I was writing a test script and I wanted to use that Bash flag that tells Bash to quit early if any of the subcommands fails. I can never remember what that flag is called. Normally I would have hunted for it in one of my own shell scripts, or groveled over the 378 options in the bash manual. This time I just asked in plain English “What's the bash option that tells the script to abort if a command fails?” Claude told me, and we went back to what we were doing.

Claude can talk about code with me, at least small pieces

Claude easily does simple refactors. At least at this scale, it got them right. I was not expecting this to work as well as it did.

When I told Claude to stop commenting every line, it did. I wonder, if I had told it to use if not expr only for Boolean expressions, would it have complied? Perhaps, at least for a while.

When Claude wrote code I wasn't sure about, I asked it what it was doing and at least once it explained correctly. Claude had written

     parser.add_argument(
        "-p", "--parallel",
        nargs="?",
        const=5,
        type=int,
        default=1,
        help="Run up to N commands in parallel (default: 5)"
    )

Wait, I said, I know what the const=5 is doing, that's so that if you have --parallel with no number it defaults to 5. But what is the default=1 doing here? I just asked Claude and it told me: that's used if there is no --parallel flag at all.

This was much easier than it would have been for me to pick over the argparse manual to figure out how to do this in the first place.
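Here is the behavior in question, as a little stub one can run (my sketch, not from the transcript):

    import argparse

    parser = argparse.ArgumentParser(prog="xar")
    parser.add_argument("-p", "--parallel", nargs="?", const=5, type=int, default=1)
    parser.add_argument("command", nargs="+")

    print(parser.parse_args(["echo", "hi"]).parallel)                     # 1: no flag, so default
    print(parser.parse_args(["echo", "hi", "--parallel"]).parallel)       # 5: flag with no number, so const
    print(parser.parse_args(["--parallel", "7", "echo", "hi"]).parallel)  # 7: flag with a number
    # Note: a bare --parallel placed before the command would try to take
    # the next word as its numeric value, which is why it comes last here.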

More thoughts

On a different project, Claude might have done much worse. It might have given wrong explanations, or written wrong code. I think that's okay though. When I work with human programmers, they give wrong explanations and write wrong code all the time. I'm used to it.

I don't know how well it will work for larger systems. Possibly pretty well if I can keep the project sufficiently modular that it doesn't get confused about cross-module interactions. But if the criticism is “that LLM stuff doesn't work unless you keep the code extremely modular” that's not much of a criticism. We all need more encouragement to keep the code modular.

Programmers often write closely-coupled modules knowing that it is bad and it will cause maintenance headaches down the line, knowing that the problems will most likely be someone else's to deal with. But what if writing closely-coupled modules had an immediate cost today, the cost being that the LLM would be less helpful and more likely to mess up today's code? Maybe programmers would be more careful about letting that happen!

Will my programming skill atrophy?

Folks at Recurse Center were discussing this question.

I don't think it will. It will only atrophy if I let it. And I have a pretty good track record of not letting it. The essence of engineering is to pay attention to what I am doing and why, to try to produce a solid product that satisfies complex constraints, to try to spot problems and correct them. I am not going to stop doing this. Perhaps the problems will be different ones than they were before. That is all right.

Starting decades ago I have repeatedly told people

You cannot just paste code with no understanding of what is going on and expect it to work.

That was true then without Claude and it is true now with Claude. Why would I change my mind about this? How could Claude change it?

Will I lose anything from having Claude write that complex parser.add_argument call for me? Perhaps if I had figured it out on my own, on future occasions I would have remembered the const=5 and default=1 specifications and how they interacted. Perhaps.

But I suspect that I have figured it out on my own in the past, more than once, and it didn't stick. I am happy with how it went this time. After I got Claude's explanation, I checked its claimed behavior pretty carefully with a stub program, as if I had been reviewing a colleague's code that I wasn't sure about.

The biggest win Claude gave me was that I didn't know about this ProcessPoolExecutor thing before, and now I do. That is going to make me a better programmer. Now I know something useful that I didn't know before, and I have a pointer to documentation I know I should study.

My skill at writing ad-hoc process pool managers might atrophy, but if it does, that is good. I have already written too many ad-hoc process pool managers. It was a bad habit, I should have stopped long ago, and this will help me stop.

Conclusion

This works.

Perfectly? No, it's technology, technology never works perfectly. Have you ever used a computer?

Will it introduce new problems? Probably, it's new technology, and new technology always introduces new problems.

But is it better than what we had before? Definitely.

I still see some programmers turning up their noses at this technology as if they were sure it was a silly fad that would burn itself out once people came to their senses and saw what a terrible idea it was.

I think that is not going to happen, and those nose-turning-up people, like the people who pointed out all the drawbacks and unknown-unknowns of automobiles as compared to horse-drawn wagons, are going to look increasingly foolish.

Because it works.


[Other articles in category /tech/gpt] permanent link

Thu, 01 May 2025

A puzzle about balancing test tubes in a centrifuge

!!\def\nk#1#2{\left\langle{#1 \atop #2}\right\rangle} \def\dd#1{\nk{12}{#1}} !!

Suppose a centrifuge has !!n!! slots, arranged in a circle around the center, and we have !!k!! test tubes we wish to place into the slots. If the tubes are not arranged symmetrically around the center, the centrifuge will explode.

(By "arranged symmetrically around the center, I mean that if the center is at !!(0,0)!!, then the sum of the positions of the tubes must also be at !!(0,0)!!.)

Let's consider the example of !!n=12!!. Clearly we can arrange !!2!!, !!3!!, !!4!!, or !!6!! tubes symmetrically:

[Figures: twelve small circles arranged around a central point, like a clock. Four diagrams: circles 5 and 11 filled in; circles 1, 5, and 9 filled in; circles 0, 3, 6, and 9 filled in; circles 1, 3, 5, 7, 9, and 11 filled in.]

Equally clearly we can't arrange only !!1!!. Also it's easy to see we can do !!k!! tubes if and only if we can also do !!n-k!! tubes, which rules out !!n=12, k=11!!.

From now on I will write !!\nk nk!! to mean the problem of balancing !!k!! tubes in a centrifuge with !!n!! slots. So !!\dd 2, \dd 3, \dd 4, !! and !!\dd 6!! are possible, and !!\dd 1!! and !!\dd{11}!! are not. And !!\nk nk!! is solvable if and only if !!\nk n{n-k}!! is.

It's perhaps a little surprising that !!\dd7!! is possible. If you just ask this to someone out of nowhere they might have a happy inspiration: “Oh, I'll just combine the solutions for !!\dd3!! and !!\dd4!!, easy.” But that doesn't work because two groups of the form !!3i+j!! and !!4i+j!! always overlap.

For example, if your group of !!4!! is the slots !!0, 3, 6, 9!! then you can't also have your group of !!3!! be !!1, 5, 9!!, because slot !!9!! already has a tube in it.

[Figure: the k=3 and k=4 diagrams from before, superimposed; there is a small explosion symbol at 9 o'clock where they interfere.]

The other balanced groups of !!3!! are blocked in the same way. You cannot solve the puzzle with !!7=3+4!!; you have to do !!7=3+2+2!! as below left. The best way to approach this is to do !!\dd5!!, as below right. This is easy, since the triangle only blocks three of the six symmetric pairs. Then you replace the holes with tubes and the tubes with holes to turn !!\dd5!! into !!\dd{12-5}=\dd7!!.

[Figures: at left, a triangle filling slots 1, 5, and 9, plus a pair at 0, 6 and another pair at 2, 8; at right, the opposite of the previous diagram, a triangle filling slots 3, 7, and 11 and a pair at 4, 10.]

Given !!n!! and !!k!!, how can we decide whether the centrifuge can be safely packed?

Clearly you can solve !!\nk nk!! when !!n!! is a multiple of !!k>1!!, but the example of !!\dd5!! (or !!\dd7!!) shows this isn't a necessary condition.

A generalization of this is that !!\nk nk!! is always solvable if !!\gcd(n,k) > 1!! since you can easily balance !!g = \gcd(n, k)!! tubes at positions !!0, \frac ng, \frac{2n}g, \dots, \frac {(g-1)n}g!!, then do another !!g!! tubes one position over, and so on. For example, to do !!\dd8!! you just put first four tubes in slots !!0, 3, 6, 9!! and the next four one position over, in slots !!1, 4, 7, 10!!.

[Figure: two squares this time, as described in the previous paragraph.]

An interesting counterexample is that the strategy for !!\dd7!!, where we did !!7=3+2+2!!, cannot be extended to !!\nk{14}9!!. One would want to do !!k=7+2!!, but there is no way to arrange the tubes so that the group of !!2!! doesn't conflict with the group of !!7!!, which blocks one slot from every pair.

But we can see that this must be true without even considering the geometry. !!\nk{14}9!! is the reverse of !!\nk{14}{14-9} = \nk{14}5!!, which is impossible: the only nontrivial divisors of !!n=14!! are !!2!! and !!7!!, so !!k!! must be a sum of !!2!!s and !!7!!s, and !!5!! is not.

You can't fit !!k=3+5=8!! tubes when !!n=15!!, but again the reason is a bit tricky. When I looked at !!8!! directly, I did a case analysis to make sure that the !!3!!-group and the !!5!!-group would always conflict. But again there was an easier way to see this: !!8=15-7!! and !!7!! clearly won't work, as !!7!! is not a sum of !!3!!s and !!5!!s. I wonder if there's an example where both !!k!! and !!n-k!! are not obvious?

For !!n=20!!, every !!k!! works except !!k=3,17!! and the always-impossible !!k=1,19!!.
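Small cases like these are easy to check by brute force. Here is a quick sketch (mine, not from the original post; the n=20 check takes a few seconds) that tries every way to place !!k!! tubes in !!n!! slots:

    from itertools import combinations
    from math import cos, sin, tau

    def balanced(n, k, eps=1e-9):
        # True if some placement of k tubes in n slots has its
        # center of mass exactly at the center of the rotor.
        slots = [(cos(tau * s / n), sin(tau * s / n)) for s in range(n)]
        for chosen in combinations(range(n), k):
            x = sum(slots[s][0] for s in chosen)
            y = sum(slots[s][1] for s in chosen)
            if abs(x) < eps and abs(y) < eps:
                return True
        return False

    print([k for k in range(1, 12) if balanced(12, k)])        # 2 through 10
    print(balanced(15, 8))                                     # False
    print([k for k in range(1, 20) if not balanced(20, k)])    # [1, 3, 17, 19]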

What's the answer in general? I don't know.

Addenda

20250502

Now I am amusing myself thinking about the perversity of a centrifuge with a prime number of slots, say !!13!!. If you use it at all, you must fill every slot. I hope you like explosions!

While I did not explode any centrifuges in university chemistry, I did once explode an expensive Liebig condenser.

[Image: a chemistry lab apparatus. In the middle is the Liebig condenser, a pair of long concentric glass tubes, one inside the other, supported by a metal ring stand. At left a heater heats a flask whose top is connected to the upper end of the inner tube of the condenser; the condensate collects in a flask at right. Two rubber tubes connect to the top and bottom of the outer tube, carrying water through it.]

Condenser setup by Mario Link from an original image by Arlen on Flickr. Licensed cc-by-2.0, provided via Wikimedia Commons.

20250503

  • Michael Lugo informs me that a complete solution may be found on Matt Baker's math blog. I have not yet looked at this myself.

  • Omar Antolín points out an important consideration I missed: it may be necessary to subtract polygons. Consider !!\nk{30}6!!. This is obviously possible since !!6\mid 30!!. But there is a more interesting solution. We can add the pentagon !!{0, 6, 12, 18, 24}!! to the digons !!{5, 20}!! and !!{10, 25}!! to obtain the solution $${0,5,6,10,12,18, 20, 24, 25}.$$

    Then from this we can subtract the triangle !!{0, 10, 20}!! to obtain $${5, 6, 12, 18, 24, 25},$$ a solution to !!\nk{30}6!! which is not a sum of regular polygons.

  • Thanks to Dave Long for pointing out a small but significant error, which I have corrected.

20250505


[Other articles in category /math] permanent link

Wed, 30 Apr 2025

Proof by insufficient information

Content warning: rambly

Given the coordinates of the three vertices of a triangle, can we find the area? Yes. If by no other method, we can use the Pythagorean theorem to find the lengths of the edges, and then Heron's formula to compute the area from that.
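Concretely, if the vertices are !!(x_1,y_1)!!, !!(x_2,y_2)!!, !!(x_3,y_3)!!, the Pythagorean theorem gives each edge length, for example !!a=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}!!, and then Heron's formula gives the area:

$$s = \frac{a+b+c}{2}, \qquad \text{Area} = \sqrt{s(s-a)(s-b)(s-c)}.$$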

Now, given the coordinates of the four vertices of a quadrilateral, can we find the area? And the answer is, no, there is no method to do that, because there is not enough information:

[Figures: four points, three arranged in an irregular triangle with one in the middle. In each of three diagrams, four of the six possible edges are drawn in, defining three different quadrilaterals with the same vertices.]

These three quadrilaterals have the same vertices, but different areas. Just knowing the vertices is not enough; you also need their order.

I suppose one could abstract this: Let !!f!! be the function that maps the set of vertices to the area of the quadrilateral. Can we calculate values of !!f!!? No, because there is no such !!f!!, it is not well-defined.

Put that way it seems less interesting. It's just another example of the principle that, just because you put together a plausible sounding description of some object, you cannot infer that such an object must exist. One of the all-time pop hits here is:

Let !!ε!! be the smallest [real / rational] number strictly greater than !!0!!…

which appears on Math SE quite frequently. Another one I remember is someone who asked about the volume of a polyhedron with exactly five faces, all triangles. This is a fallacy at the ontological level, not the mathematical level, so when it comes up I try to demonstrate it with a nonmathematical counterexample, usually something like “the largest purple hat in my closet” or perhaps “the current Crown Prince of the Ottoman Empire”. The latter is less good because it relies on the other person to know obscure stuff about the Ottoman Empire, whatever that is.

This is unfortunately also the error in Anselm's so-called “ontological proof of God”. A philosophically-minded friend of mine once remarked that being known for the discovery of the ontological proof of God is like being known for the discovery that you can wipe your ass with your hand.

Anyway, I'm digressing. The interesting part of the quadrilateral thing, to me, is not so much that !!f!! doesn't exist, but the specific reasoning that demonstrates that it can't exist. I think there are more examples of this proof strategy, where we prove nonexistence by showing there is not enough information for the thing to exist, but I haven't thought about it enough to come up with one.

There is a proof, the so-called “information-theoretic proof”, that a comparison sorting algorithm takes !!\Omega(n\log n)!! time, based on comparing the amount of information gathered from the comparisons (one bit each) with that required to distinguish all !!n! !! possible permutations (!!\log_2 n! = \Theta(n\log n)!! bits total). I'm not sure that's what I'm looking for here. But I'm also not sure it isn't, or why I feel it might be different.

Addenda

20250430

Carl Muckenhoupt suggests that logical independence proofs are of the same sort. He says, for example:

Is there a way to prove the parallel postulate from Euclid's other axioms? No, there is not enough information. Here are two geometric models that produce different results.

This is just the sort of thing I was looking for.

20250503

Rik Signes has allowed me to reveal that he was the source of the memorable disparagement of Anselm's dumbass argument.


[Other articles in category /math] permanent link

Sat, 26 Apr 2025

Willie Singletary will you please go now?

(Previously: [1] [2])

Welcome to Philadelphia! We have a lot of political corruption here. I recently wrote about the unusually corrupt Philadelphia Traffic Court, where four of the judges went to the federal pokey, and the state decided there was no way to clean it up, they had to step on it like a cockroach. I ended by saying:

One of those traffic court judges was Willie Singletary, who I've been planning to write about since 2019. But he is a hard worker who deserves better than to be stuck in an epilogue, so I'll try to get to him later this month.

This is that article from 2019, come to fruit at last. It was originally inspired by this notice that appeared at my polling place on election day that year:

[Image (cropped; click for the uncropped version): a notice posted in a Philadelphia polling place, text as below.]

VOTES FOR THIS CANDIDATE WILL NOT BE COUNTED

DEAR VOTERS:

Willie Singletary, candidate for Democratic Council At-Large, has been removed from the Primary Ballot by Court Order. Although his name appears on the ballot, votes for this candidate will not be counted because he was convicted of two Class E felonies by the United States District Court for the Eastern District of Pennsylvania, which bars his candidacy under Article 2, Section 7 of the Pennsylvania Constitution.

That's because Singletary had been one of those traffic court judges. In 2014 he had been convicted of lying to the FBI in connection with that case, and was sentenced to 20 months in federal prison; I think he actually served 12.

That didn't stop Willie from trying to run for City Council, though, and the challenge to his candidacy didn't wrap up before the ballots were printed, so they had to post these notices.

Even before the bribery scandal and the federal conviction, Singletary had already lost his Traffic Court job when it transpired that he had showed dick pics to a Traffic Court cashier.

Before that, when he was campaigning for the Traffic Court job, he was caught on video promising to give favorable treatment to campaign donors.

But Willie's enterprise and go-get-it attitude means he can't be kept down for long. Willie rises to all challenges! He is now enjoying a $90,000 annual salary as a Deputy Director of Community Partnerships in the administration of Philadelphia Mayor Cherelle Parker. Parker's spokesperson says "The Parker administration supports every person’s right to a second chance in society.”

I think he might be on his fourth or fifth chance by now, but who's counting? Let it never be said that Willie Singletary was a quitter.

Lorrie once made a remark that will live in my memory forever, about the "West Philadelphia local politics-to-prison pipeline”. Mayor Parker is such a visionary that she has been able to establish a second pipeline in the opposite direction!

Addendum 20250501

I don't know how this happened, but when I committed the final version of this article a few days ago, the commit message that my fingers typed was:

Date:   Sat Apr 26 14:24:19 2025 -0400

    Willie Wingletsray finally ready to go

And now, because Git, it's written in stone.


[Other articles in category /politics] permanent link

Thu, 24 Apr 2025

How our toy octopuses got revenge on a Philadelphia traffic court judge

[ Content warning: possibly amusing, but silly and pointless ]

My wife Lorrie wrote this on 31 January 2013:

I got an e-mail from Husband titled, "The mills of Fenchurch grind slow, but they grind exceeding small." This silliness, which is off-the-charts silly, is going to require explanation.

Fenchurch is a small blue octopus made of polyester fiberfill. He was the first one I ever bought, starting our family's octopus craze, and I gave him to Husband in 1994. He is extremely shy and introverted. He hates conflict and attention. He's a sensitive and very artistic soul. His favorite food is crab cakes, followed closely by shrimp. (We have made up favorite foods, professions, hobbies, and a zillion scenarios for all of our stuffed animals.)

In our house it was well-established canon that Fenchurch's favorite food was crab cakes. I had even included him as an example in some of my conference talks:

    my $fenchurch = Octopus->new({
                        arms => 8,
                        hearts => 3,
                        favorite_food => "crab cakes"
                    });

He has a ladylove named Junko whom he takes on buggy rides on fine days. When Husband is feeling very creative and vulnerable, he identifies with Fenchurch.

Anyway, one time Husband got a traffic ticket and this Traffic Court judge named Fortunato N. Perri was unbelievably mocking to him at his hearing. Good thing Husband has the thick skin of a native Manhattanite. … It was so awful that Husband and I remember bits of it more than a decade later.

I came before Fortunato N. Perri in, I think, 1996. I had been involved in a very low-speed collision with someone, and I was ticketed because the proof of insurance in my glove box was expired. Rather than paying the fine, I appeared in traffic court to plead not guilty.

It was clear that Perri was not happy with his job as a traffic court judge. He had to listen to hundreds of people making the same lame excuses day after day. “I didn't see the stop sign.” “The sun was in my eyes.” “I thought the U-turn was legal.” I can't blame Perri for growing tired of this. But I can blame him for the way he handled it, which was to mock and humiliate the people who came before him.

“Where are you from?”

“Ohio.”

“Do they have stop signs in Ohio?”

“Uh, yes.”

“Do you know what they look like?”

“Yes.”

“Do they look like the stop signs we have here?”

“Yes.”

“Then how come you didn't see the stop sign? You say you know what a stop sign looks like but then you didn't stop. I'm fining you $100. You're dismissed.”

He tried to hassle me also, but I kept my cool, and since I wasn't actually in violation of the law he couldn't do anything to me. He did try to ridicule my earring.

“What does that thing mean?”

“It doesn't mean anything, it's just an earring.”

“Is that what everyone is doing now?”

“I don't know what everyone is doing.”

“How long ago did you get it?”

“Thirteen years.”

“Huh. … Well, you did have insurance, so I'm dismissing your ticket. You can go.”

I'm still wearing that earring today, Fortunato. By the way, Fortunato, the law is supposed to be calm and impartial, showing favor to no one.

Fortunato didn't just mock and humiliate the unfortunate citizens who came before him. He also abused his own clerks. One of them was doing her job, stapling together court papers on the desk in front of the bench, and he harangued her for doing it too noisily. “God, you might as well bring in a hammer and nails and start hammering up here, bang bang bang!”

I once went back to traffic court just to observe, but he wasn't in that day. Instead I saw how a couple of other, less obnoxious judges ran things.

Lorrie continues:

Husband has been following news about this judge (now retired) and his family ever since, and periodically he gives me updates.

(His son, Fortunato N. Perri Jr., is a local civil litigation attorney of some prominence. As far as I know there is nothing wrong with Perri Jr.)

And we made up a story that Fenchurch was traumatized by this guy after being ticketed for parking in a No Buggy zone.

So today, he was charged with corruption after a three-year FBI probe. The FBI even raided his house.

I understood everything when I read that Perri accepted graft in many forms, including shrimp and crab cakes.

OMG. No wonder my little blue octopus was wroth. No wonder he swore revenge. This crooked thief was interfering with his food supply!

Lorrie wrote a followup the next day:

I confess Husband and I spent about 15 minutes last night savoring details about Fortunato N. Perri's FBI bust. Apparently, even he had a twinge of conscience at the sheer quantity of SHRIMP and CRAB CAKES he got from this one strip club owner in return for fixing tickets. (Husband noted that he managed to get over his qualms.)

Husband said Perri hadn't been too mean to him, but Husband still feels bad about the way Perri screamed at his hapless courtroom assistant, who was innocently doing her job stapling papers until Perri stopped proceedings to holler that she was making so much noise, she may as well be using a hammer.

Fenchurch and his ladylove Junko, who specialize in avant garde performance art, greeted Husband last night with their newest creation, called "Schadenfreude." It mostly involved wild tentacle waving and uninhibited cackling. Then they declared it to be the best day of their entire lives and stayed up half the night partying.

[Photo: Fenchurch, at left, is a bright blue toy octopus with a round body, beady black eyes rimmed in an olive gold color, and no visible mouth. His body and arms are covered with short but shaggy plush. Junko, who sits at right, her eight arms entwined with his, is almost exactly the same, except that her plush is hot pink. They are resting on a quilt, presumably on a bed somewhere, and looking directly into the camera.]

Epilogues

  • Later that year, the notoriously corrupt Traffic Court was abolished, its functions transferred to regular Philadelphia Municipal Court.

  • In late 2014, four of Perri's Traffic Court colleagues were convicted of federal crimes. They received prison sentences of 18 to 20 months.

  • Fortunato Perri himself, by then 78 years old and in poor health, pled guilty, and was sentenced to two years of probation.

  • The folks who supplied the traffic tickets and the seafood bribes were also charged. They tried to argue that they hadn't defrauded the City of Philadelphia because the people they paid Perri to let off the hook hadn't been found guilty, and would only have owed fines if they had been found guilty.

    The judges in their appeal were not impressed with this argument. See United States v. Hird et al.

  • One of those traffic court judges was Willie Singletary, who I've been planning to write about since 2019. But he is a hard worker who deserves better than to be stuck in an epilogue, so I'll try to get to him later this month.

    (Update 20250426: Willie Singletary, ladies and gentlemen!)


[Other articles in category /law] permanent link

Fri, 28 Mar 2025

Does someone really have to do the dirty jobs?

Doing the laundry used to be backbreaking toil. Haul the water, chop the wood, light the fire, heat the water, and now you are ready to begin the really tough part of the work. The old saying goes "Wash on Monday", because Monday is the day after your day of rest, and otherwise you won't have the strength to do the washing.

And the saying continues: “Iron on Tuesday, mend on Wednesday”. Routine management of clothing takes half of the six-day work week.

For this reason, washing is the work of last resort for the poorest and most marginal people. Widows are washerwomen. Prisons are laundries. Chinese immigrants run laundries. Anyone with enough money to outsource their laundry does so.

The invention of mechanical washing machines eliminated a great amount of human suffering and toil. Machines do the washing now. Nobody has to break their back scrubbing soiled linens against a washboard.

[Photo: a black and white photograph of a young child, perhaps two or three years old, wearing a fur parka with a fur-trimmed hood, standing next to a weathered wooden wash tub and washboard on sandy, uneven ground. “Eskimo child with wooden tub and washboard”, c. 1905, by Frank Hamilton Nowell, public domain, via Wikimedia Commons.]

But the flip side of that is that there are still poor and marginalized people, who now have to find other work. Mechanical laundry has taken away their jobs. They no longer have to do the backbreaking labor of hand laundry. Now they have the option to starve to death instead.

Is it a net win? I don't know. I'd like to think so. I'd like to free people from the toil of hand laundry without also starving some of them to death. Our present system doesn't seem to be very good at that sort of thing. I'm not sure what a better system would look like.

Anyway, this is on my mind a lot lately because of the recent developments in computer-generated art. I think “well, it's not all bad, because at least now nobody will have to make a living drawing pornographic pictures of other people's furry OCs. Surely that is a slight elevation of the human condition.” On the other hand, some of those people would rather have the money and who am I to deny them that choice?


[Other articles in category /misc] permanent link

Tue, 25 Mar 2025

The mathematical past is a foreign country

A modern presentation of the Peano axioms looks like this:

  1. !!0!! is a natural number
  2. If !!n!! is a natural number, then so is the result of prepending an !!S!! to !!n!!
  3. Nothing else is a natural number

This baldly states that zero is a natural number.

I think this is a 20th-century development. In 1889 the natural numbers started at !!1!!, not at !!0!!. Peano's Arithmetices principia, nova methodo exposita (1889) is the source of the Peano axioms, and in it the natural numbers begin at !!1!!:

screencap of Peano's first axiom from his book.
It states “1∈N”.

There's axiom 1: !!1\in\Bbb N!!. No zero. I think starting at !!0!! may be a Bourbakism.

In a modern presentation we define addition like this:

$$ \begin{array}{rrl} (i) & a + 0 = & a \\ (ii) & a + Sb = & S(a+b) \end{array} $$

Peano doesn't have zero, so he doesn't need item !!(i)!!. His definition just has !!(ii)!!.

But wait, doesn't his inductive definition need to have a base case? Maybe something like this?

$$ \begin{array}{rrl} (i') & a + 1 = & Sa \\ \end{array} $$

Nope, Peano has nothing like that. But surely the definition must have a base case? How can Peano get around that?

Well, by modern standards, he cheats!

Peano doesn't have a special notation like !!S!! for successor. Where a modern presentation might write !!Sa!! for the successor of the number !!a!!, Peano writes “!!a + 1!!”.

So his version of !!(ii)!! looks like this:

$$ a + (b + 1) = (a + b) + 1 $$

which is pretty much a symbol-for-symbol translation of !!(ii)!!. But if we try to translate !!(i')!! similarly, it looks like this:

$$ a + 1 = a + 1 $$

That's why Peano didn't include it: to him, it was tautological.

But to modern eyes that last formula is deceptive because it equivocates between the "!!+ 1!!" notation that is being used to represent the successor operation (on the right) and the addition operation that Peano is trying to define (on the left). In a modern presentation, we are careful to distinguish between our formal symbol for a successor, and our definition of the addition operation.

Peano, working pre-Frege and pre-Hilbert, doesn't have the same concept of what this means. To Peano, constructing the successor of a number, and adding a number to the constant !!1!!, are the same operation: the successor operation is just adding !!1!!.

But to us, !!Sa!! and !!a+S0!! are different operations that happen to yield the same value. To us, the successor operation is a purely abstract or formal symbol manipulation (“stick an !!S!! on the front”). The fact that it also has an arithmetic interpretation, related to addition, appears only once we contemplate the theorem $$\forall a. a + S0 = Sa.$$ There is nothing like this in Peano.
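
Here is a tiny sketch of my own (nothing like it appears in Peano) that makes the modern distinction concrete. Numerals are strings of !!S!!s ending in !!0!!, the successor is pure symbol manipulation, and addition is a separate operation defined by recursion on its second argument:

```python
def S(n):
    # Successor: purely formal symbol manipulation, just stick an "S" on the front.
    return "S" + n

ZERO = "0"

def add(a, b):
    # Addition, defined by recursion on the second argument:
    #   (i)  a + 0  = a
    #   (ii) a + Sb = S(a + b)
    if b == ZERO:
        return a
    return S(add(a, b[1:]))   # if b is "S" followed by b', then b[1:] is b'

# The theorem "for all a, a + S0 = Sa" can be spot-checked:
two = S(S(ZERO))
assert add(two, S(ZERO)) == S(two)   # both sides are "SSS0"
```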

It's things like this that make it tricky to read older mathematics books. There are deep philosophical differences about what is being done and why, and they are not usually explicit.

Another example: in the 19th century, the abstract presentation of group theory had not yet been invented. The phrase “group” was understood to be short for “group of permutations”, and the important property was closure, specifically closure under composition of permutations. In a 20th century abstract presentation, the closure property is usually passed over without comment. In a modern view, the notation !!G_1\cup G_2!! is not even meaningful, because groups are not sets and you cannot just mix together two sets of group elements without also specifying how to extend the binary operation, perhaps via a free product or something. In the 19th century, !!G_1\cup G_2!! is perfectly ordinary, because !!G_1!! and !!G_2!! are just sets of permutations. One can then ask whether that set is a group — that is, whether it is closed under composition of permutations — and if not, what is the smallest group that contains it.

It's something like a foreign language of a foreign culture. You can try to translate the words, but the underlying ideas may not be the same.

Addendum 20250326

Simon Tatham reminds me that Peano's equivocation has come up here before. I previously discussed a Math SE post in which OP was confused because Bertrand Russell's presentation of the Peano axioms similarly used the notation “!!+ 1!!” for the successor operation, and did not understand why it was not tautological.


[Other articles in category /math] permanent link

Tue, 18 Mar 2025

Baseball on the Moon

We want to adapt baseball to be played on the moon. Is there any way to make it work?

My first impression is: no, for several reasons.

The pitched ball will go a little faster (no air resistance) but breaking balls are impossible (ditto). So the batter will find it easier to get a solid hit. We can't fix this by moving the plate closer to the pitcher's rubber; that would expose both batter and pitcher to unacceptable danger. I think we also can't fix it by making the plate much wider.

Once the batter hits the ball, it will go a long long way, six times as far as a batted ball on Earth. In order for every hit to not be a home run, the outfield fence will have to be about six times as far away, so the outfield will be !!36!! times as large. I don't think the outfielders can move six times as fast to catch up to it. Perhaps if there were 100 outfielders instead of only three?
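
Where that factor of six comes from, roughly: for an idealized batted ball (launched from ground level over flat ground, with no air, which on the Moon is the actual situation), the range is $$ R = \frac{v^2 \sin 2\theta}{g}, $$ and lunar gravity is about one-sixth of Earth's, so the same swing carries about six times as far. The outfield area scales as !!R^2!!, which is where the !!36!! comes from.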

Fielding the ball will be more difficult. Note that even though the vacuum prevents the pitch from breaking, the batted ball can still take unexpected hops off the ground.

Having gotten hold of the ball, the outfielder will then need to throw it back to the infield. They will be able to throw it that far, but they probably won't be able to do it accurately enough for the receiving fielder to make the play at the base. More likely the outfielder will throw it wild.

I don't think this can be easily salvaged. People do love home runs, but I don't think they would love this. Games are too long already.

Well, here's a thought. What if instead of four bases, arranged in a !!90!!-foot square, we had, I don't know, eight or ten, maybe !!200!! or !!300!! feet apart? More opportunities for outs on the basepaths, and also the middle bases would not be so far from the outfield. Instead of throwing directly to the infield, the outfielders would have a relay system where one outfielder would throw to another that was farther in, and perhaps one more, before reaching the infield. That might be pretty cool.

I think it's not easy to run fast on the Moon. On the Earth, a runner's feet are pushing against the ground many times each second. On the Moon, the runner is taking big leaps. They may only get in one-sixth as many steps over the same distance, which would give them much less opportunity to convert muscle energy into velocity. (Somewhat countervailing, though: no air resistance.) Runners would have to train specially to be able to leap accurately to the bases. Under standard rules, a runner who overshoots the base will land off the basepaths and be automatically out.

So we might expect to see the runner bounding toward first base. Then one of the thirty or so far-left fielders would get the ball, relay it to the middle-left fielder and then the near-left fielder who would make the throw back to first. The throw would be inaccurate because it has to traverse a very large infield, and the first baseman would have to go chasing after it and pick it up from foul territory. He can't get back to first base quickly enough, but that's okay, the pitcher has bounded over from the mound and is waiting near first base to make the force play. Maybe the runner isn't there yet because one of his leaps was too long and to take another he has to jump high into the air and come down again.

It would work better than Quidditch, anyway.


[Other articles in category /games] permanent link

Sat, 15 Mar 2025

Hangeul sign-engraving machine

Last summer I was privileged to visit the glorious Letterpress Museum in Paju Book City, where I spent several hours and took a collection of photos that are probably not of interest to anyone but letterpress geeks, and perhaps not even to them.

Looking back at the photos it's not always clear to me why I took each one. But some of them I can remember. For example, this one:

This is not exactly letterpress. It is a device for engraving lettered signs on thin strips of metal or perhaps plastic. Happily I don't have to spend too much time explaining this because Marcin Wichary has just published an extensively-illustrated article about the Latin-script version. The only thing different about this one is the fonts, which are for writing Korean in Hangeul script rather than English in Latin script.

(Here's my real-quick summary. There is no ink. A stylus goes into the grooves of those brass templates. The stylus is attached with a pantograph to a router bit that rests on the object that the operator wants to engrave. When the operator moves the stylus in the template grooves, the router bit follows their motions and engraves matching grooves in the target object. By adjusting the pantograph, one can engrave letters that are larger or smaller than the templates.)

Hangeul has an alphabet of 24 letters, but there's a difficulty in adapting this engraving technique for written Hangeul: The letters aren't written in a simple horizontal row as they are in European languages. Instead, they are grouped into syllables of two or three letters. For example, consider the Korean word “문어”, pronounced (roughly) "moon-aw", which means “octopus”. This is made up of five letters ㅁㅜㄴㅇㅓ, but as you see they are arranged in two syllables 문 ("moon") and 어 ("aw"). So instead of twenty-four kinds of templates, one for each letter, the Korean set needs one for every possible syllable, and there are thousands of possible syllables.

Unicode gets around this by… sorry, Unicode doesn't get around it, they just allocate eleven thousand codepoints, one for each possible syllable. But for this engraving device, it would be prohibitively expensive to make eleven thousand little templates, then another eleven thousand spares, and impractical to sort and manage them in the shop. Instead there is a clever solution.
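
(A digression, with a quick sketch of my own: the eleven thousand codepoints, !!19\times 21\times 28 = 11{,}172!! of them for the 19 possible initial consonants, 21 vowels, and 28 possible finals counting “no final”, are laid out algorithmically starting at U+AC00, so composing a syllable from its letters is just arithmetic.)

```python
# Sketch (mine): composing a precomposed Hangeul syllable from its letters.
CHOSEONG  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"        # 19 initial consonants
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"   # 21 vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals, plus "no final"

def compose(initial, vowel, final=""):
    index = (CHOSEONG.index(initial) * 21 + JUNGSEONG.index(vowel)) * 28 + JONGSEONG.index(final)
    return chr(0xAC00 + index)

assert compose("ㅁ", "ㅜ", "ㄴ") == "문"   # "moon"
assert compose("ㅇ", "ㅓ") == "어"          # "aw"
print(len(CHOSEONG) * len(JUNGSEONG) * len(JONGSEONG))   # 11172 possible syllables
```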

Take a look at just one of these templates:

This is not a Hangeul syllable.

Rather, it is five. The upper-right letter in the syllable is the vowel, and the template allows the operator to engrave any of the five vowels

ㅣㅓㅏㅕㅑ

to produce the syllables

잉 엉 앙 영 양

pronounced respectively "ing", "ông", "ang", "yông", and "yang".

Similarly this one can produce six different syllables:

The upper-left part can be used to engrave either of the consonants ㅅ or ㅈ, and the upper-right part can be used to engrave any of the vowels ㅣㅓㅏ, to produce the combined set 싱 성 상 징 정 장. I'm not sure why this template doesn't also enable vowels ㅕㅑ as the other one did.

In the picture at top you can see that while the third template can be used to engrave any of the three syllables 송 승 숭 the operator has actually used it to engrave the first of these.

This ingenious mechanism cuts down the required number of templates by perhaps a factor of five, from ten boxes to two.

Addendum 20250325

A great many of the 11,000 Unicode codepoints are for seldom-used syllables that contain four or even five letters, such as 둻. I studied Korean for a while and I think I learned only one word with more than three letters in a syllable: 닭 means “chicken”.

I don't see templates for these syllables in any of my photographs, which probably accounts for much of the great reduction in templates from the 11,000 possible syllables. But there must have been some way to engrave the syllables with the machine.

Maybe there was a template that had four small ㄷ symbols, one in each of the four corners of the template, and another with four ㄹ symbols, and so on? Then the operator could have composed 닭 out of bits from four different templates.


[Other articles in category /IT] permanent link

Fri, 07 Mar 2025

Claude and Merle Miller let me down

Claude

My relationship with Claude has its ups and downs, and I'm still trying to figure out what to use it for and what not. It's great at rewriting my job application cover letters to sound less like an awkward nerd.

Last week I was in the supermarket and decided to try asking it something I've been wondering for a long time:

MJD: What is the song “Mercy Street” about? What is the connection with Anne Sexton? Who is “Mary”?

I thought Claude might do well with this. I had had a conversation with it a while back about Pixies songs, which I was satisfied with. But this time Claude let me down:

Claude: … The "Mary" referenced in the song is likely Mary Gray Staples, who was the name of Sexton's childhood doll. …

(I thought: What? Am I supposed to believe that

Dreaming of the tenderness
The tremble in the hips
Of kissing Mary's lips

is about a doll?)

MJD: Mary Gray Staples is an unusual name for a doll. Are you sure about that?

Claude: You're right to question this - I made an error. Mary Gray was actually Anne Sexton's great-aunt, not the name of her doll. … The doll references in Gabriel's song likely connect to Sexton's imagery in other poems, but Mary specifically refers to her great-aunt.

MJD: I don't remember a doll in Gabriel's song.

Claude: You're absolutely right - I apologize for the error. There isn't actually a reference to a doll in Peter Gabriel's "Mercy Street" lyrics. … The "Mary" referenced in the song is indeed Anne Sexton's great-aunt Mary Gray…

MJD: Are you sure? Wikipedia says that Mary Gray Staples was the birth name of Sexton's mother.

Claude: You're absolutely right, and I apologize for the confusion. …

Claude just flubbed over and over.

I wonder if the grammatical error in “Mary Gray Staples, who was the name of …” is a kind of a tell? Perhaps Claude is fabricating, by stitching together parts of two unrelated sentences that it read somewhere, one with “Mary Gray Staples, who was…” and the other “… was the name of…”? Probably it's not that simple, but the grammatical error is striking.

Anyway, this was very annoying because I tend to remember things like this long past the time when I remember where I heard them. Ten years from now I might remember that Anne Sexton once had a doll with a very weird name.

Merle Miller

A while back I read Merle Miller's book Plain Speaking. It's an edited digest of a series of interviews Miller did with former President Truman in 1962, at his home in Independence, Missouri. The interviews were originally intended to be for a TV series, but when that fell through Miller turned them into a book. In many ways it's a really good book. I enjoyed it a lot, read it at least twice, and a good deal of it stuck in my head.

But I can't recommend it, because it has a terrible flaw. There have been credible accusations that Miller changed some of the things that Truman said, embellished or rephrased many others, tarted up Truman's language, and made up some conversations entirely.

So now whenever I remember something that I think Truman said, I have to stop and try to remember if it was from Miller. Did Truman really say that it was the worst thing in the world when records were destroyed? I'm sure I read it in Miller, so, uhh… maybe?

Miller recounts a discussion in which Truman says he is pretty sure that President Grant had never read the Constitution. Later, Miller says, he asked Truman if he thought that Nixon had read the Constitution, and reports that Truman's reply was:

I don't know. I don't know. But I'll tell you this. If he has, he doesn't understand it.

Great story! I have often wanted to repeat it. But I don't, because for all I know it never happened.

(I've often thought of this, in years past, and whatever Nixon's faults you could at least wonder what the answer was. Nobody would need to ask this about the current guy, because the answer is so clear.)

Miller quotes Truman's remarks about Supreme Court Justice Tom Clark, “It isn't so much that he's a bad man. It's just that he's such a dumb son of a bitch.” Did Truman actually say that? Did he just imply it? Did he say anything like it? Uhhh… maybe?

There's a fun anecdote about the White House butler learning to make an Old-fashioned cocktail in the way the Trumans preferred. (The usual recipe involves whiskey, sugar, fresh fruit, and bitters.) After several attempts the butler converged on the Trumans' preferred recipe, of mostly straight bourbon. Hmm, is that something I heard from Merle Miller? I don't remember.

There's a famous story about how Paul Hume, music critic for the Washington Post, savaged a performance of Truman's daughter Margaret, and how Truman sent him an infamous letter, very un-presidential, that supposedly contained the paragraph:

Some day I hope to meet you. When that happens you'll need a new nose, a lot of beef steak for black eyes, and perhaps a supporter below!

Miller reports that he asked Truman about this, and Truman's blunt response: “I said I'd kick his nuts out.” Or so claims Miller, anyway.

I've read Truman's memoirs. Volume I, about the immediate postwar years, is fascinating; Volume II is much less so. They contain many detailed accounts of the intransigence of the Soviets and their foreign minister Vyacheslav Molotov, namesake of the Molotov Cocktail. Probably 95% of what I remember Truman saying is from those memoirs, direct from Truman himself. But some of it must be from Plain Speaking. And I don't know any longer which 5% it is.

As they say, an ice cream sundae with a turd in it isn't 95% ice cream, it's 100% shit. Merle Miller shit in the ice cream sundae of my years of reading of Truman and the Truman administrations.

Now Claude has done the same. And if I let it, Claude will keep doing it to me. Claude caga en la leche.

Addendum

The Truman Library now has the recordings of those interviews available online. I could conceivably listen to them all and find out for myself which things went as Miller said.

So there may yet be a happy ending, thanks to the Wonders of the Internet! I dream of someday going through those interviews and producing an annotated edition of Plain Speaking.


[Other articles in category /tech/gpt] permanent link

Thu, 06 Mar 2025

Reflector grids

Around here, these metal things are commonly found on streetside utility poles, attached maybe a meter off the ground.

a yellow rectangular grid has been nailed to a wooden utility pole about a
  meter off the ground, curving around the pole.  It has four rows of ten rectangular holes
  punched out of it.
Metal reflector
A very similar-looking grid on a different pole
Plastic reflector

When I first noticed one of these I said “I wonder what the holes are for. Maybe to make it more visible? And what do they do with all the leftover rectangles after they've made one?”

I eventually got a better idea: The little metal rectangles are the primary product, and after they have been die-cut out of the metal sheet, there is this waste material left over with all the holes. Instead of throwing it away someone nails it to a utility pole to make the pole easier to see at night. I felt a bit silly that my first idea had been exactly backwards.

I later learned that only the older ones are made of sheet metal. Newer ones are made of some sort of plastic, maybe polyethylene or vinyl or something, about the same thickness. They look pretty much the same. I can only tell them apart by feeling them.

Still I wondered what the little rectangles had been used for. It turns out that the purpose is this:

a wooden utility pole, attached
to which is a metal frame enclosing six of those little yellow
rectangles.  Each has been embossed with a black numeral, and together
the six rectangles announce the post's ID number.

That's according to an old Philadelphia Inquirer article, Why yellow grids are on some Philly-area utility poles. (Patricia Madej, Aug. 31, 2019.) But I measured them to make sure. They matched.


The answer came as a bit of a surprise to Jay Lipschutz, 73, of Northeast Philly …

His wife, Ruth, he said, had insisted they’re reflectors for drivers to see. She was right.

Jay, my friend, your wife is smarter than you are. Listen to her.

The article also tells us that the rectangular leftover is called a “grid reflector”. With a little more research I learned that one manufacturer of grid reflectors is Almetek. They cost $3.50 each. Pricey, for something they would have had to throw away. (Here's the old South Philly Review article that put me on to Almetek.)

What kicked off this article was that I was walking around and I saw this similar reflector grid, which felt to me like it was a bit of a farce, like a teenager sneaking into a bar wearing a fake mustache:

This one is a solid sheet of plastic,
nailed to the post.  Instead of forty little rectangular cutouts, it
has forty black rectangles that only look like cutouts.

Hey, those aren't holes! When I saw this one I wondered for a moment if I was suffering some sort of mental collapse, or if none of the others had had real holes either. But no, they had, and this one really did have fake holes.

(Also, it has been installed sideways. Normally they are oriented as the two above.)

This isn't the first time I have written about ID numbers on utility poles hereabouts.


[Other articles in category /misc] permanent link

Sat, 01 Mar 2025

Jonathan Chait

[ Content warning: angry, contemptuous ranting that accomplishes nothing. ]

I didn't really know who Jonathan Chait was until last week when I unfortunately read this essay of his (from February 2016) on “Why Liberals Should Support a Trump Republican Nomination”.

I've said a lot of dumb things in my life but I don't think I've ever been as wrong about anything as Chait was about this. I sure hope I haven't. But if I do ever find out I had been this wrong about something, I would want to retire to a cave or a mountaintop or something.

“Hey, remember Dominus? Whatever happened to him, anyway?”

“Oh, he said he was going away to cleanse himself of error, and might not be back for a long time.”

And yet this guy is still shamelessly writing. And why not? Editors are still buying his essays and maybe people are even still reading them. Why? You'd think that people would look at this essay and say “yeah, that's enough Chait for me, thanks, next time I need an opinion I'll try someone else.” I get it, nobody's right all the time. Whenever you read anyone's essay you're taking a risk, like rolling a die. Sometimes the die rolls high, sometimes it rolls low, and some dice might have higher numbers to begin with. I've usually been well-served by Daniel Dennett's dice, and Robertson Davies'.

But here people have an opportunity to toss a totally unknown die that they haven't tried before but that most likely rolls numbers from 1 to 6, and instead they toss the Jonathan Chait die when they know it has at least one side with a -1000.

A long time ago, I wrote:

I hate trying to predict the future; I don't think I'm good at it and I don't think anyone else is.

I don't think anyone could have predicted the extent of the current fiasco, but I do think it should not have been hard to predict, in 2016, that liberals should not, in fact, have supported a Trump Republican nomination.

Anyone can be wrong, even the wise cannot see all ends. But I think this one was maybe not so hard to see. Chait spends a lot of time comparing Trump with Arnold Schwarzenegger: both nominally conservative, both inexperienced in government, both assholes. I think the part that Chait ignored was that by 2016 — no, scratch that, by 1990 — it was perfectly clear that Trump was a liar, a thief, a racist, and a deadbeat, and that he had no respect for law or truth or ethics or anything other than his own convenience of the moment. (Here are just two examples. More recently, his ridiculous years-long insistence that Barack Obama was born in Kenya. And earlier, his equally ridiculous lies around his destruction of the Bonwit Teller building.)

In that old essay I said:

Most people who try don't seem to revisit their old predictions to see if they were correct, or to learn from their past errors, and the people who listen to them never do this.

I looked around a little to see if Jonathan Chait had written an essay titled “I was wrong, I was so, so wrong, I just couldn't have been wronger” but I didn't find one and I also didn't find any recent essays that said anything like “here's why I think this new essay is more reliable than that embarrassing Trump one I wrote for The New Yorker in 2016.”

I don't understand how Chait still has a job after writing this essay. Why isn't he selling shoes? How does a writer come back from this? Isn't there some charitable society for the protection for the public that could pay to have someone follow Chait around, quoting out loud from this essay, as a warning to everyone he meets for the rest of his life?

At least now I've been warned. Now when I read “Jonathan Chait said recently…” I'll remember: “Oh, you mean Jonathan ‘Why Liberals Should Support a Trump Republican Nomination’ Chait! Thanks, I'll pass.”

Addendum 20250403

I said

I looked around a little to see if Jonathan Chait had written an essay titled “I was wrong…”

but it seems not, because he has apparently learned nothing. Yesterday he came to reveal to us The Good News About Trump’s Tariffs: the tariffs are good because they will hurt the Republicans in the next election.

Just like how the earlier article said that Trump's nomination was good news because Trump would almost certainly lose the election.


[Other articles in category /politics] permanent link

Thu, 27 Feb 2025

Claude chokes on graph theory

Having had some pleasant surprises from Claude, I thought I'd see if it could do math. It couldn't. Apparently some LLMs can sometimes solve Math Olympiad problems, but Claude didn't come close.

First I asked something simple as a warmup:

MJD: What is the largest number that is less than 1000?

I had tried this on ChatGPT a couple of years back, with tragic results:

ChatGPT: The largest number that is less than 1000 is 999.

But it should have quit while it was ahead, because its response continued:

ChatGPT: Any number that is less than 1000 will have three digits, with the first digit being 9, the second digit being 9, and the third digit being any number from 0 to 8.

and then when I questioned it further it drove off the end of the pier:

ChatGPT: Any number with four or more digits can be less than 1000, depending on the specific digits that are used. For example, the number 9991 is a four-digit number that is less than 1000.

Claude, whatever its faults, at least knew when to shut up:

MJD: What is the largest number that is less than 1000?

Claude: 999

I then asked it “What if it doesn't have to be an integer?” and it didn't do so well, but that's actually a rather tricky question, not what I want to talk about today. This article is about a less tricky question.

I have omitted some tedious parts, and formatted the mathematics to be more readable. The complete, unedited transcript can be viewed here.

I started by setting up context:

MJD: Let's say that the cube graph !!Q_n!! has as vertices the set !!\{0,1\}^n!! and two vertices are connected with an edge if and only if their coordinates differ in exactly one position. This is a standard construction. Are you familiar with it?

Claude asserted that it was familiar with this family of graphs. (Wikipedia on cube graphs.) The basic examples, !!Q_0!! through !!Q_3!!, look like this:

Each graph consists of two copies of the previous graph, with new edges added between the corresponding vertices in the copies.
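
(For concreteness, here is a quick sketch of my own, not part of the conversation with Claude, that builds !!Q_n!! directly from the definition:)

```python
from itertools import product

def cube_graph(n):
    # Vertices are the strings in {0,1}^n; two vertices are joined by an edge
    # exactly when their coordinates differ in one position.
    vertices = ["".join(bits) for bits in product("01", repeat=n)]
    edges = {(u, v) for u in vertices for v in vertices
             if u < v and sum(a != b for a, b in zip(u, v)) == 1}
    return vertices, edges

vertices, edges = cube_graph(3)
print(len(vertices), len(edges))   # Q_3 has 8 vertices and 12 edges
```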

Then I got to the real question:

Now let's say that the Keane number for a graph !!G!! is the largest number !!k!! such that:

  1. !!G!! can be partitioned into !!k!! connected subgraphs, and
  2. each pair of subgraphs is adjacent, in the sense that for each pair !!H!!, !!K!! there is an edge of !!G!! with one vertex in !!H!! and one in !!K!!.

If you understand so far, tell me the Keane numbers of the cube graphs !!Q_0, Q_1!!, and !!Q_2!!.

Here are the maximal partitions for those three graphs:

The Keane number of !!Q_0!! is !!1!! because it has only one vertex. For !!Q_1!! we can put each of the two vertices into a separate part to get two parts. For !!Q_2!! we can get three parts as above. But there is no partition of !!Q_2!! into four parts that satisfies the second condition, because two of the parts would have to comprise the upper-left and lower-right vertices, and would not be connected by an edge.

Claude got this initial question right. So far so good.

Then I asked Claude for the Keane number of !!Q_3!! and this it was unable to produce.

The correct number is !!4!!. There are several essentially different partitions of !!Q_3!! into four parts, each of which touches the other three, which proves that the number is at least !!4!!. Here are two examples:

In addition to these there are also partitions into parts of sizes !!1+1+2+4!!, and sizes !!1+1+3+3!!.

On the other hand, more than !!4!! parts is impossible, and the proof is one sentence long: !!Q_3!! has only !!8!! vertices, so any partition into !!5!! or more parts must have a part of size !!1!!, and this part can't be adjacent to the other four parts, because a single vertex has only three outgoing edges. I would expect a bright middle-schooler to figure this out in at most a few minutes.
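
(Another sketch of my own, for readers who would rather let a computer check the small cases: try every partition of the vertex set, keep the ones in which each part is connected and every pair of parts is adjacent, and report the largest.)

```python
from itertools import product

def cube_graph(n):
    V = ["".join(bits) for bits in product("01", repeat=n)]
    E = {frozenset((u, v)) for u in V for v in V
         if u != v and sum(a != b for a, b in zip(u, v)) == 1}
    return V, E

def connected(part, E):
    # Depth-first search staying inside `part`.
    part = set(part)
    start = next(iter(part))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in part:
            if v not in seen and frozenset((u, v)) in E:
                seen.add(v)
                stack.append(v)
    return seen == part

def adjacent(P, Q, E):
    return any(frozenset((u, v)) in E for u in P for v in Q)

def set_partitions(items):
    # Every partition of `items` into nonempty parts (Bell(8) = 4140 for Q_3, so this is cheap).
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def keane(n):
    V, E = cube_graph(n)
    best = 0
    for parts in set_partitions(V):
        if (all(connected(p, E) for p in parts)
                and all(adjacent(parts[i], parts[j], E)
                        for i in range(len(parts))
                        for j in range(i + 1, len(parts)))):
            best = max(best, len(parts))
    return best

print([keane(n) for n in range(4)])   # [1, 2, 3, 4]
```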

At first, Claude got the right number, but with completely bogus reasoning. To avoid belaboring the obvious, I'll highlight the errors but I won't discuss them at length.

Claude:

  • Let's find the maximum possible !!k!!:
    • One valid partition with !!k=4!! exists:
      • Take two opposite vertices (e.g., !!000!! and !!111!!)
      • Make these two single-vertex subgraphs
      • Split the remaining !!6!! vertices into two paths of !!3!! vertices each
      • Each subgraph is connected
      • Each pair of subgraphs will have an edge between them

The diagrams I showed earlier display some of the partitions that show !!k≥4!!, but the one Claude gave here does not, because two of its parts (!!D!! and !!B!!) are not adjacent:

Claude:

  • Why can't !!k!! be !!5!! or more?
    • If we had !!5!! or more subgraphs, we would need !!{5 \choose 2} = 10!! edges between pairs
    • !!Q_3!! only has !!12!! edges total
    • Some edges would need to be reused for multiple pairs
    • But each edge can only connect one pair of subgraphs
    • Therefore !!5!! or more subgraphs is impossible

Okay, Claude, I agree we would need ten edges for the connections between the parts, but we have !!12!!, so why do you say that “some edges would need to be reused”? It may be correct, but it does not follow.

Claude:
Therefore, the Keane number of !!Q_3!! is !!4!!.

Would you like me to verify any part of this reasoning?

So, right answer, but seemingly by luck, since there were serious errors in reasoning, in both the !!k≥4!! part and also in the !!k< 5!! part. I decided to ignore the second one.

MJD: You said two of the subgraphs are !!H=\{000\}!! and !!K=\{111\}!!. You said each pair of subgraphs will have an edge between them. What edge connects !!H!! and !!K!!?

Claude: Ah, I made a mistake.

The vertices !!000!! and !!111!! differ in all three positions, so they are not adjacent in !!Q_3!! (they need to differ in exactly one position to have an edge between them). This means that making them single-vertex subgraphs would violate condition (2) since these subgraphs would not be adjacent.

I need to rethink my claimed partition. Would you like me to try again to determine the Keane number of !!Q_3!!?

MJD: Yes.

Claude: [finds a specific example showing !!k≥3!!, and remarks that !!k=4!! “seems difficult”]
Let me work on verifying whether !!k=4!! is possible or !!k=3!! is maximum. Would you like me to continue?

MJD: Please go on.

Claude: Let me try to prove !!k=3!! is maximum in !!Q_3!!.

Suppose we had !!k=4!! subgraphs. Then:

  • We need !!{4 \choose 2} = 6!! edges between different subgraphs
    • Each subgraph needs to be connected:
      • A subgraph with !!3!! vertices needs at least !!2!! edges
      • A subgraph with !!2!! vertices needs !!1!! edge
      • A subgraph with !!1!! vertex needs !!0!! edges
    • There are !!12!! edges total in !!Q_3!!
    • If we have !!4!! subgraphs:
      • First, !!6!! edges must go between different subgraphs
      • That leaves only !!6!! edges for internal connectivity
      • But with !!8!! vertices split into !!4!! subgraphs:
        • If any subgraph has !!3+!! vertices, it needs !!2+!! edges internally
        • If any has !!4+!! vertices, it needs !!3+!! edges internally
        • With !!8!! vertices total divided into !!4!! parts, at least one part must have !!3+!! vertices
        • After using !!2+!! edges for that part and !!6!! edges between parts, we don't have enough edges left to make all other parts connected

Therefore !!k=4!! is impossible, and the Keane number of !!Q_3!! is !!3!!.

Claude seems completely undirected here. Some of the things it says are correct facts, but it can't link the facts together into actual reasoning, except by accident. Sometimes it utters A followed by B where A implies B, and it seems exciting, but just as often B is a nonsequitur.

This is not that different from how ChatGPT was two years ago. It can spout text, and often it can spout enough plausible, grammatically-correct text to suggest that it knows things and is thinking about them. Claude's text-spouting prowess is much more impressive than ChatGPT's was two years ago. But it is still just a text-spouter.

I went on, trying to detect actual thinking.

MJD: What if there were !!4!! parts of !!2!! vertices each?

Claude's response here correctly applied its earlier analysis: four parts of size !!2!! would use up four edges for internal connectivity, leaving !!8!! for external connections, and we only need !!6!!.

Claude: I was too quick to conclude 4 was impossible. Let me try to find an explicit partition into 4 parts of 2 vertices each...
Would you like me to try to find such a partition, or would you first like me to verify that this kind of partition would definitely satisfy the adjacency requirement (condition 2)?

MJD: You keep going until you think you have the right answer.

Claude: Let me try to construct a partition of !!Q_3!! into !!4!! parts of !!2!! vertices each: …

This time Claude tried this partition:

It noticed that two of the four parts were not adjacent, and gave up without searching further.

Claude: This makes me think !!3!! really is the maximum.

If Claude were a human mathematician, this would be a serious error. Finding one invalid partition proves nothing at all.

Claude: My previous argument was wrong, but the conclusion might be right. Would you like me to try to prove more carefully that !!4!! is impossible?

There is no proof that !!4!! is impossible, and I thought it would be unenlightening to watch Claude flounder around with it. But I wanted to see what would happen if I asked it to prove a false claim that should be easier because its premise is stronger:

MJD: Try a weaker conjecture: prove more carefully the four parts of size 2 is impossible.

Claude: Okay, let me prove that we cannot partition !!Q_3!! into four parts of size 2 each.

  1. Suppose we had such a partition !!H_1, H_2, H_3, H_4!!, where each !!H_i!! contains exactly two vertices.
  2. For any such !!H_i!!, its two vertices must be adjacent (to be connected). This means they differ in exactly one coordinate.
  3. Therefore, each !!H_i!! must be of one of these forms:
    • Two vertices differing in 1st coordinate (like !!000,100!!)
    • Two vertices differing in 2nd coordinate (like !!000,010!!)
    • Two vertices differing in 3rd coordinate (like !!000,001!!)
  4. Consider any vertex !!v!! in !!Q_3!!. It has exactly 3 neighbors:
    • One differing in 1st coordinate
    • One differing in 2nd coordinate
    • One differing in 3rd coordinate
  5. Therefore, if !!v!! is in !!H_i!!:
    • One of its neighbors must also be in !!H_i!!
    • That leaves only two edges from !!v!! that could connect to other parts
    • So !!v!! can only help !!H_i!! be adjacent to at most two other parts
  6. But each !!H_i!! needs to be adjacent to all three other parts

This contradiction proves no such partition exists.

It's tempting to look at this and say that Claude was almost right. It produced 16 lines and at least 15 of them, on their own, were correct. But it's less impressive than it might first appear. Again Claude displays the pattern of spouting text, some of which is correct, and some of which is related. But that is all I can say in its favor. Most of its statements are boilerplate. Sections 2–4 can be deleted with nothing lost. Claude has buried the crux of the argument, and its error, in section 5.

MJD: In step 5 I see that if !!v!! is in !!H_i!!, it can only help !!H_i!! be adjacent to two other parts. But !!H_i!! has size 2, so it contains a vertex other than !!v!!. If !!v!! connects !!H_i!! to two other parts, couldn't the other vertex of !!H_i!! connect !!H_i!! to a third part?

Claude: Ah, you're absolutely right! My proof attempt failed. Let me back up and try a different approach to the original question of whether !!Q_3!! can be partitioned into 4 parts that satisfy both conditions.

This time Claude did find a correct partition into four parts, showing that !!k≥4!!.

MJD: How did you come up with that example partition?

Claude: I should be honest - I didn't find it systematically. I was trying different arrangements somewhat randomly until I found one that worked. …

I don't think there is any sense in which this is true, but at this point I hadn't yet internalized that Claude's descriptions of its own internal processes are text-spouting just like the rest of its output. In any case, I ignored this and asked it to analyze its own earlier mistake:

MJD: Earlier you purported to prove that !!k=4!! was impossible, saying that !!6!! edges would be needed to connect the !!4!! parts, and that this did not leave enough edges to connect the parts internally. You didn't analyze this carefully, so you missed the solution where all four parts have !!2!! vertices each. Do that analysis more carefully than you did last time and explain why that counting method does not rule out a partition of !!Q_3!! into !!2+2+2+2!!.

Claude: (details elided)

So this counting argument shows:

  • We need 10 edges
  • We have 12 edges
  • Therefore the counting argument does not rule out a !!2+2+2+2!! partition

My earlier analysis erred by not being explicit about the numbers. I just had a vague sense that we needed "too many" edges, without actually counting them. This shows the importance of precise counting rather than relying on intuition about what seems like "too many".

Claude got the counting part right, although I think the final paragraph is just spouting, especially the claim “I just had a vague sense that…”, which should not be taken seriously.

[ Digression: This reminds me of a section in Daniel Dennett's Consciousness Explained in which he discusses the perils of asking humans about their internal processes. The resulting answers, he says, may provide interesting information about what people think is going on in their heads, but we should be very wary about ascribing any accuracy or insight to these descriptions. Dennett makes an analogy with an anthropologist who asks a forest tribe about their forest god. The tribespeople agree that the forest god is eight feet tall, he wears a panther skin, and so on. And while this might be folklorically interesting, we should be very reluctant to conclude from this testimony that there is actually an eight-foot-tall fur-clad god in the forest somewhere. We should be similarly reluctant to ascribe real existence to Claude's descriptions of its “vague senses” or other internal processes suggested by its remarks like “Ah, let me think...” or “Ah, you're absolutely right!”. Claude has even less access to its own mental states (such as they are) than a human would. ]

As I pointed out earlier in this article, there are several essentially different solutions to the problem of partitioning !!Q_3!! into four parts. Claude has found one of them, the one I showed above on the left. I wanted to see if it could find another:

MJD: Are there any other partitions that are not ruled out by that argument? Or is 2+2+2+2 the only possibility?

Claude: Let me think about what other partitions could satisfy the edge-counting constraint.

If we have !!4!! parts, need !!6!! edges between parts, and have !!12!! total edges, then we can use up to !!6!! edges for internal connectivity of parts.

Possible partitions of !!8!! vertices into !!4!! parts could be:

  • !!2+2+2+2!! (needs !!4!! internal edges)
  • !!3+2+2+1!! (needs !!3!! internal edges)
  • !!4+2+1+1!! (needs !!4!! internal edges)
  • !!3+3+1+1!! (needs !!4!! internal edges)
  • !!5+1+1+1!! (needs !!6!! internal edges)

So from just edge counting:

(Claude concludes that none of these is ruled out by edge-counting.)

So the edge-counting argument alone doesn't rule out any of these distributions of part sizes. We'd need additional arguments to determine which are actually possible in !!Q_3!!.

MJD: Why didn't you consider !!4+2+2+1!!?

Claude: Ah, you're right - I should have considered !!4+2+2+1!!! For !!4+2+2+1!!:

  • The part of size !!4!! needs at least !!3!! internal edges to be connected
  • The parts of size !!2!! each need !!1!! internal edge
  • So that's !!5!! internal edges needed
  • Plus !!6!! edges needed between parts
  • Total needed = !!11!! edges
  • We have !!12!! edges available

So this case is also not ruled out by edge counting. I apologize for missing it in my list. Would you like me to check if there are any other distributions I missed?

Oh, such failure! Failures that I have highlighted are clearly false statements, but the failure here is much worse. Again, almost everything Claude said was correct! And yet deeply wrong!

A bright middle-schooler could have missed this also. It was a bit of a trick question. Not because it's ambiguous, but because the problem with !!4+2+2+1!! is not in line with the direction that the discussion was taking at that point. There is nothing wrong with !!4+2+2+1!! from an edge-counting point of view!

But I think a human mathematician, even a very young one, would have been more likely to answer the question I actually asked, which was “Why didn't you consider !!4+2+2+1!!?”, and from there might have quickly come to the useful conclusion that it doesn't need to be considered. Claude didn't do anything like that.

MJD: !!4+2+2+1!! doesn't work. Think about it more.

Claude: Let me think more carefully about a potential !!4+2+2+1!! partition.

  1. Consider the part !!H!! of size !!4!!:

    • To be connected, it needs at least !!3!! edges internally
    • One vertex in !!H!! must connect to all !!3!! other vertices in !!H!!
    • But in !!Q_3!!, each vertex has only !!3!! edges total
    • If one vertex in !!H!! uses all its edges internally to connect to the other !!3!! vertices in !!H!!, it has no edges left to connect to other parts
    • But every part needs to be adjacent to every other part
  2. Therefore !!4+2+2+1!! is impossible, not because of edge counting, but because:

    • Either !!H!! isn't connected (if we don't use enough internal edges)
    • Or !!H!! can't connect to all other parts (if we use too many internal edges)

This shows why just counting edges isn't enough - we also need to think about the structure of !!Q_3!! and how the edges can be arranged.

Terrible. This is so muddled I wasn't even sure what to highlight as the errors. Yes, if one vertex in !!H!! uses all its edges internally to connect to the other !!3!! vertices in !!H!!, it has no edges left to connect to other parts. But the other three vertices in !!H!! have two edges each, so !!H!! could easily connect to the other three parts.

And, of course, Claude has still missed the main point, which is that one needn't consider !!4+2+2+1!!, because !!4+2+2+1= 9!! and !!Q_3!! has only !!8!! vertices.

MJD: There is a much simpler argument why 4+2+2+1 won't work.

Claude: Ah, let me think...

In a !!4+2+2+1!! partition:

  • The single vertex in the part of size !!1!! has degree !!3!! in !!Q_3!!
  • But it needs to connect to all three other parts!
  • It needs all three of its edges just to be adjacent to the other parts
  • So it can't connect to all three other parts AND connect to three different vertices

This is a much simpler contradiction. I was making it too complicated by thinking about the part of size !!4!! when the part of size !!1!! immediately shows it's impossible.

MJD: How many vertices are in !!Q_3!!?

Claude: !!Q_3!! has !!8!! vertices (all binary strings of length !!3!!):
!!\{000, 001, 010, 011, 100, 101, 110, 111\}!!

And now I see an even simpler argument why !!4+2+2+1!! is impossible: !!4 + 2 + 2 + 1 = 9!! vertices, but !!Q_3!! only has !!8!! vertices!
I was doing complex analysis of connectivity and edges when I should have just checked that the claimed partition added up to the right number of vertices. This is a good reminder to check the most basic facts first.

Finally! I would love to know what really happened here. What caused Claude to emit a sentence connecting !!4 + 2 + 2 + 1 = 9!! with !!Q_3!! having only !!8!! vertices?

MJD: So remind me, did you decide that !!k(Q_3)!! is !!3!!, or !!4!!?

I wasn't sure it would remember this, by which I really mean that I wasn't sure it would be able to pick out from its previous mass of statements which one was correct. But whether by luck or not, it did answer the question correctly:

Claude: Earlier I found a valid partition of !!Q_3!! into !!4!! parts:

!!H_1 = \{000, 001\} !! !!H_2 = \{100, 110\}!! !!H_3 = \{010, 011\}!! !!H_4 = \{101, 111\}!!

I verified that:

  • Each part is connected
  • Each part is adjacent to every other part

Therefore !!k(Q_3) ≥ 4!!.

I haven't yet proven that !!k(Q_3) ≤ 4!!. Would you like me to try to prove that !!5!! parts is impossible?

MJD: Please go ahead.

I found Claude's attempt at this interesting, but not in a good way. After reading it I shrugged and quit the conversation.

Claude:
Let me prove that !!5!! parts is impossible in !!Q_3!!.

  1. Five parts would require !!{5 \choose 2} = 10!! different edges just to connect all pairs of parts to each other.
  2. Each such edge must be distinct because a single edge can only connect one pair of parts.
  3. Each vertex in !!Q_3!! has degree !!3!!. With !!8!! vertices, the total number of edges is !!(8 × 3)/2 = 12!!.
  4. Out of these !!12!! edges:
    • !!10!! must be used to connect different parts to each other
    • That leaves only !!2!! edges for internal connectivity of parts
    • But with !!8!! vertices split into !!5!! parts, some part must have size !!≥ 2!!
    • Any part of size !!≥ 2!! must use at least one edge internally to be connected
    • If one part uses one edge internally, that leaves only one edge for all other parts
    • But if any other part has size !!≥ 2!!, it also needs an internal edge
    • With !!8!! vertices in !!5!! parts, at least two parts must have size !!≥ 2!!
    • Therefore !!2!! edges is not enough for internal connectivity

Thus !!k(Q_3) = 4!!.

Claude says that with !!8!! vertices in !!5!! parts, at least two parts must have size !!2!! or more. This is wrong, because you could have !!4+1+1+1+1!!.

But highlighting that wrong statement doesn't get at why this answer is so bad. It's bad because the conclusion that appears next is a complete nonsequitur.

The argument can be fixed up. I would put it like this:

  • As Claude observed, we have only !!2!! edges available for internal connections
  • A part with !!n!! vertices requires at least !!n-1!! edges for internal connections
  • Let the sizes of the parts be !!n_1\dots n_5!!. Since !!n_1+\dots +n_5 = 8!!, we need at least !!(n_1-1)+\dots + (n_5-1) = 8-5 = 3!! edges for internal connections
  • But we have only !!2!!.

It's true that !!2!! edges is not enough for internal connectivity. But in my opinion Claude didn't come close to saying why.

Back in the early part of the 20th century, we thought that chess was a suitable measure of intelligence. Surely a machine that could play chess would have to be intelligent, we thought. Then we built chess-playing computers and discovered that no, chess was easier than we thought. We are in a similar place again. Surely a machine that could hold a coherent, grammatical conversation on any topic would have to be intelligent. Then we built Claude and discovered that no, holding a conversation was easier than we thought.

Still, by the standards of ten years ago this is stunning. Claude may not be able to think but it can definitely talk, and this puts it on the level of most politicians, Directors of Human Resources, and telephone sanitizers. It will be fun to try this again next year and see whether it has improved.

The complete chat is available here.

Addendum 20250301

Many thanks to Jacob Vosmaer for his helpful discussion of how to improve this article.


[Other articles in category /tech/gpt] permanent link

Wed, 26 Feb 2025

Furniture Mecca

[ Content warning: shitpost ]

It's that time of year again! Furniture Mecca is having their annual sale.

Photograph of a Philadelphia furniture store named Furniture Mecca. A red and yellow sign announces 'Clearance Sale! Up to 75% off'.

This year the sale will run until Friday the 28th. At closing time on that day any remaining furniture will be hurriedly moved to the owner's other furniture store, Furniture Medina.


[Other articles in category /religion] permanent link

Fri, 21 Feb 2025

Leopards, faces, etc.

I expect we in the United States are about to see a wave of domestic terrorism unprecedented since the 1870s. In the wake of the Civil War, white Southerners used systematic terrorism to continue white supremacy. If a black person became too prosperous, masked thugs would come in the night to burn down their house. If a white person was seen helping a black one, the thugs would arrive; for a first offense they might let them off lightly, administering only a severe beating or a tar-and-feathering. If a black man voted, masked thugs would come to murder him, perhaps by night, or perhaps publicly and in broad daylight. Blacks in the Reconstruction South were met at polling places by armed mobs.

Local law enforcement ignored these lawless acts, and in many cases the terrorists were the local law enforcement: sheriffs, police, judges. The terrorism continued for decades, and the terrorists were restrained, to the extent they were restrained, only by federal enforcement of the anti-Klan acts.

A few weeks ago I hoped Trump might forget about the imprisoned January 6 rioters. Trump discards anyone for whom he has no use, I thought, and he has no more use for them. I was wrong. His pardon of hundreds of rioters sends a clear signal, to his followers and to his enemies, that political terrorism is now supported or at least condoned by the Federal executive branch. The federal executive will not enforce antiterrorism laws unless the terrorists are politically opposed to Trump.

Don't count on anyone to restrain Trump. If a judge rules the wrong way, they may be assaulted by masked thugs. If a congressperson becomes troublesome, their house may burn down in the night. If a newspaper reporter writes an article critical of Trump, masked thugs may kill them, perhaps gun them down in the street.

Nothing will be done. The FBI will shrug. Trump will call it fake news or will blame immigrants, Muslims, or Antifa.

And if you were one of the people cheering for Luigi Mangione last month, remember that that's what you were cheering for, a country where it's okay to gun down people in the street, as long as you hate them enough.


[Other articles in category /politics] permanent link

Wed, 19 Feb 2025

Think speak, now speak!

Katara is now in her sixth semester in college and can speak Mandarin. I am so proud!

For class she recently wrote a talk (in Mandarin) about Hua Guofeng, the often overlooked second chairman of the Chinese Communist Party. She videoed herself giving the talk, and posted it to YouTube. This somehow attracted over 700 views, and comments from a number of strangers, most of which were in Chinese. Some even offered suggestions — only minor suggestions, which she found very gratifying.

One comment, however, expressed irritation. Google translates it as:

You, a foreigner, don’t need to comment on the Chinese people’s affairs.

Shortly afterward though, there came a defense, which began with this delightful phrase:

想讲就讲

An idiomatic translation is "You can talk if you want!"

A character-by-character translation is:

  • 想 - think
  • 讲 - speak
  • 就 - right now
  • 讲 - speak

which I just love. If anyone is looking for a name for their new Chinese-language-themed blog, I think this would be a great choice.


[Other articles in category /lang] permanent link

Tue, 18 Feb 2025

David McShane's mural with 18 Franks, revealed

Yesterday I offered Gentle Readers a chance to identify the 18 famous Franks in David McShane's mural.

  1. Benjamin Franklin
  2. Frankie Avalon
  3. Detail of ornament from the Pennsylvania Academy of the Fine Arts building, designed by architect Frank Furness
  4. A frankfurter
  5. Aretha Franklin


I did not recognize the PAFA architectural detail myself; I had to find out from the Mural Arts website. I have sometimes looked for this detail on the PAFA building, but I have never found it.

  6. Barney Frank
  7. Frankenstein's monster
  8. Franklin Delano Roosevelt
  9. Frank Zappa


  10. Frank Lloyd Wright
  11. Major Frank Burns, as played by Larry Linville
  12. Frank “Tug” McGraw


  13. French 20-centime coin, representing the Franc
  14. Frank Oz
  15. Frank Perdue


I think these pictures might be so old that they predate the European currency union.

  16. Frank Sinatra
  17. Frank Morgan, in his role as the Wizard from The Wizard of Oz. (The book was written by L. Frank Baum.)
  18. St. Francis of Assisi, with bird friend. The model is the artist's brother Frank.


Not depicted: Frank Rizzo, who is burning in Hell.

I was certain that Tim Curry was there somewhere, in his role as Dr. Frank-N-Furter, but if he ever was I can't find any evidence of it. I even emailed the muralist, who confirmed that Frank-N-Furter had never been there. Still, he is in all our hearts, forever.

The mural was restored in 2015, at which time two more figures were added:

  1. Pope Francis, who had visited the city that year, and
  2. Frank Sherlock, noted poet and longtime employee at Dirty Frank's

My pictures are at least that old.


[Other articles in category /art] permanent link

Mon, 17 Feb 2025

David McShane's mural with 18 Franks

Since the demolition of the Harriet Tubman mural, this has been my favorite mural in Philadelphia. It's by Philadelphia muralist David McShane.

The mural is outside an infamous windowless bar called Dirty Frank's. I like to say that Oscar's Tavern on Sansom is Philadelphia's best Worst Bar. That's where, when the fancy place across the street wouldn't seat us, I took my coworker from out of town, with pride. Dirty Frank's might be Philadelphia's worst Worst Bar.

A few months ago Rik Signes remarked:

Street view of the sign outside McGlinchey's.  At the top it says
“McGlinchey's Bar” with a green shamrock.  Under this in clear red
letters on white is the word “Sandwiches”.  Below this hangs a red
neon sign that reads “DRAFT BEER”.

I think Mark Dominus said "Dirty Frank's is where I saw roaches walk over the food and when I told them, they shrugged"

I was at once able to refute this, because I know for a fact that I have never ordered food at Dirty Frank's. Nor would I. Actually I have only ever been there once, which was enough.

(Lorrie has a similar story about a similarly notorious bar, McGlinchey's. Hanging outside McGlinchey's is a sign that proclaims “sandwiches”. Lorrie tried to order a sandwich there and was met only with puzzled stares.)


I will stop digressing now. My current favorite mural is outside Dirty Frank's and is by David McShane. It depicts famous Franks through history. I enjoyed trying to identify the 18 Franks. Many years ago I took pictures of it so that I could offer my Gentle Readers an opportunity to enjoy this themselves. You can infer from the resolution of the pictures below how long ago that must have been. But at last, here they are. I will reveal the answers tomorrow.

(The answers.)

Addendum 20250305

After a lifetime in Philadelphia, I have at last visited McGlinchey's. I liked McGlinchey's. They have a good selection of beers on tap. The inside is pleasant, except for the residual smoke and the bathrooms. The bathrooms are gross, but, as someone who has been to the bathrooms in CBGB, I merely laughed at them.

I asked the bartender what kind of sandwiches I could get. He said “none”, and then added “That sign is like fifty years old”. Then he offered to get me a hot dog. I declined.


[Other articles in category /art] permanent link

Tue, 11 Feb 2025

Genealogy of the House of Reuss

A couple of years ago I lamented the difficulty I had in verifying what appeared to be a simple statement of fact:

[Abdullah bin Abdul-Rahman] was the seventh son of the Emir of the Second Saudi State, Abdul Rahman bin Faisal.

The essential problem is that Saudi princes have at least ten or twenty sons each, and they all reuse the same ten or twenty names.

Until today, I was not aware of any European tradition even remotely so confusing. Today I learned of the House of Reuss.

I have other things to do today, so just a couple of highlights, starting with this summary:

Since the end of the 12th century, all male members of the House of Reuss are named Heinrich.

No, don't panic, there must be some way to distinguish them, and of course there is:

For the purpose of differentiation, they are given order numbers according to certain systems (see below, section Numbering of the Heinrichs)

Yes, they are numbered. Since the 12th century. So you might think they would be up to Heinrich MCMXVII by now. No no no, that would be silly.

In the elder line the numbering covers all male children of the elder House, and the numbers increase until 100 is reached and then start again at 1.

In the younger line the system is similar but the numbers increase until the end of the century before starting again at 1.

The Wikipedia article later embarks on a list of rulers of the House of Reuss that includes 151 Henrys with numbers as high as LXXVII. I wonder at this, since if they have really exercised that numbering scheme you would expect to see mention of at least one Henry with a number in the LXXX–XCIX range, but there are none.

A few of the 151 Henrys have distinctive nicknames like Henry II the Bohemian, Henry VII the Red, or Henry VI the Peppersack. But they seem to have run out of new epithets in the 14th century, and lapsed into a habit of using and reusing "the Elder", "the Middle", and "the Younger" over and over. Around the mid-1600s they tired even of this and abandoned the epithets entirely.

Just by way of example, I searched the page for “Henry XIX” and found three rulers by that name and number:

  1. One born 1 March 1790, Heinrich XIX, Prince Reuss of Greiz

  2. Another born 16 October 1720, Count of Selbitz. The English Wikipedia page is a redlink, but the German article on the Houses of Reuss has a bit to say.

  3. And a third, born around 1440, about whom there is a whole article, in Bulgarian. For some reason he is known as Хайнрих XXI фон Вайда, Henry XXI (not XIX) of Vaida.

Toward the end of the article, we learn this:

On 7 December 2022, German police conducted an operation which resulted in the arrest of 25 alleged members of the far-right group Reichsbürger, including a member of the Köstritz branch of the House of Reuss, identified as Heinrich XIII Prince Reuss. The suspects arrested in the operation were allegedly planning to overturn the existing German government, and instate Heinrich XIII as the new German de facto leader.

All I can think now is, I think of myself as someone who is good at sniffing out Wikipedia bullshit, but this entire article could be completely made up and I would never be the wiser.

By the way, the link from “Henry VI the Peppersack” is to an article in Bulgarian Wikipedia that does not appear to mention the "Peppersack" epithet, a search on the Internet Archive for books mentioning "Henry Peppersack" turns up nothing, and while the section on the plot to bring Heinrich XIII to power cites a source, the page it purports to link to is gone.

Addendum 20250215

Here's a funny coincidence. The highest-numbered Henry I could find was Henry LXXVII. Lord Sepulchrave is stated at the beginning of Titus Groan to be the 76th Earl of Groan, which makes Titus Groan the 77th.


[Other articles in category /lang] permanent link

Sun, 09 Feb 2025

Surnames from nicknames nobody has any more

English has a pattern of common patronymic names. For example, "John Peters" and "John Peterson" are names for someone whose father was named "Peter". ("Peters" should be understood as "Peter's".) Similarly we have John Williams and John Williamson, John Roberts and John Robertson, John Richards and John Richardson, John James and John Jameson, John Johns and John Johnson, and so on.

Often Dad's name was a nickname. For example, a common nickname for "John" is "Jack" and we have (less commonly) John Jacks and (more commonly) John Jackson. John Bills and John Bilson, John Wills and John Wilson, and John Willis and John Willison are Bill, Will, and Wille, all short for William.

"Richard" is "Dick", and we have John Dicks (or Dix) and John Dickson (or Dixon). "Nicholas" is "Nick" and we have John Nicks (or Nix) and John Nickson (or Nixon).

Sometimes the name has the diminutive suffix “-kin” inserted. Wilkins is little Will's son, as is Wilkinson; Peterkins is little Peter's son.

These patterns are so common that if you find surnames that follow them you can almost always infer a forename, although it may be one that is no longer common, or that is spelled differently. For example, many people are named Pierce, Pearse, Pierson, or Pearson, which is from the name Pierre, Piers or Pierce, still used in English although much less common than in the past. (It is from the same root as Peter.) Perkins is little Pierre. Robin used to be a nickname for Robert (it's “Robkin” with the difficult “-bk-” simplified to just “-b-”) and we have John Robins and John Robinson.

Sometimes, the pattern is there but the name is unclear because it is a nickname that is now so uncommon that it is nearly forgotten. The fathers of John Watts, Watson, and Watkins were called Wat, which used to be short for Walter. John Hobbs, John Hobson, and Hobkins are named for Hob, which was short for Robert in the same way that Rob and Bob are still. (I had a neighbor who was called Hob, and who told me his family claimed that it was short for Robert, but that he wasn't sure. I assured him that they were correct.) “Daw”, an archaic nickname for “David”, gives us Dawes, Dawkins, and Dawson.

Back in September when I started this article I thought about John Gibbs and John Gibson. Who's named "Gib", and why? Is it an archaic nickname? Yes! It was short for Gilbert. Then I forgot about the draft article until today, when I woke up wondering about John Simpson (and, I realize now, John Simms and John Simkins). And it transpired that "Sim" or "Simme" was once a common nickname for Simon.

I would welcome further examples.

Addenda

20250210

Vicki Rosenzweig reminds me that another lost nickname for Richard is "Hick" (like "Rick" and "Dick") and from this we get Hicks, Hix, Hickson, Hixon, Hickman, and (she informs me) "Hickmott".

"-mott" perhaps indicates a male in-law, according to the Dictionary of American Family Names via Ancestrry.com. The only other clear example of this that I noticed was "Willmott". Note that Gaelic names like Dermott are not of this type; the resemblance is coincidental.

We get Hodge / Hodges / Hodgson / Hodgkins from "Hodge", an archaic nickname for Roger.

Lucy Keer suggests that Bateson / Bates is from "Bat", an archaic nickname for Bartholomew.

Christopher Taylor reminds me that in Atkins and Atkinson the "At-" is short for "Adam", and similarly in "Addis" and "Addison". I know a guy named Atteson but I'm not sure it is the same thing.

20250210 (again)

Response to this article has been too much for me to keep up with.

Hacker News user tibbar mentions several I either didn't know, or knew and forgot to include:

  • "Dob" is another old nickname for "Robert", giving us "Dobson", "Dobbs", "Dobkin", etc.;
  • "Dodge" another nickname for Roger, hence Dodgson, as in Louis Carrol's real name, Charles Dodgson;
  • "Tibb" is an old nickname for Theobald, giving surnames like "Tibbs" and "Tibbets";
  • "Hud" for "Hugh", giving us the Hudsons.

Citizen Claude weighs in

I also asked Claude for suggestions:

I'm going to paste a blog article that discusses common English surnames that are patronymic versions of forenames that no longer exist. For example, the common surname "Watson" is from "Wat", and "Wat" was once a common nickname for "Walter", but no longer used. I would like you to list ten similar examples that are not mentioned in the article.

Claude replied:

Here are ten examples of common English surnames derived from obsolete forenames or nicknames, different from those mentioned in the article:

  1. Higgins/Higgs - from "Hick," a medieval pet form of Richard
  2. Timms/Timson - from "Tim," originally a medieval nickname for Timothy
  3. Hodgson/Hodges - from "Hodge," once a common nickname for Roger
  4. Atkins/Atkinson - from "At" or "Atte," medieval diminutive of Adam
  5. Parkins/Parkinson - from "Perkin," diminutive of Peter/Pierre
  6. Judson/Judd - from "Judd," medieval nickname for Jordan
  7. Malkins/Malkinson - from "Malk," diminutive of Malcolm
  8. Larkins/Larkin - from "Larry" or "Laurence," medieval form of Lawrence
  9. Collins/Colson - from "Col," medieval nickname for Nicholas
  10. Huggins/Huggett - from "Hug," medieval pet form of Hugh

Remember that some of these might be made up. For example, Wiktionary claims that #7 is matronymic, from Malle, a nickname for Mary. Who's right? I don't know. The only way to settle this is with a cage match! I hope.

I complained that #2 was not to spec because “Tim” is still common, and demanded a replacement:

  1. Rankins/Rankin - from "Ran," a medieval diminutive of Randolph

I also asked Claude if I had made any errors, and it said:

The article claims that "Robin" comes from "Robkin" with "bk" simplified to "b". This is incorrect.

"Robin" is actually a French diminutive of Robert, coming from Old French Robin/Rovin. …

This seems plausible.


[Other articles in category /lang/etym] permanent link

Thu, 06 Feb 2025

Just give the man the fish!

Last week I complained about a Math SE pathology in which OP asks a simple question, and instead of an answer gets an attempt at a Socratic dialog. I ended by saying:

I have been banging this drum for decades, but I will cut the scroll here. Expect a followup article.

Seeing this, Scott Francis remarked:

when you say “I have been banging this drum for decades" I hear echoes of EFnet #perl in the back of my head :)

And yes, that is one of the things I was thinking of.

Thirty years ago the regulars in the #perl IRC channel would play a cruel teasing game. A stranger would come into the channel and ask a simple technical question, like “how do I remove the first character from a string?”

Instead of giving the answer, two or three people would reply perldoc perlre.

In case it's not obvious — and there is no reason why it should be — this means you can run this command to get the manual for how to use Perl regular expressions.

This manual was about 20,000 words long.

People indulging in this shitty behavior would excuse themselves by chanting the maxim “If you give a man a fish, he can eat for one day. If you teach him to fish, he can eat for his whole life.” An actual answer to a question was a “fish”. Apparently, saying perldoc perlre was considered to be “teaching a man to fish.”

If the newbie objected that the reply perldoc perlre was unhelpful, the regulars were only too ready to lecture them on why it was helpful actually, on why they didn't deserve a better answer, on why they shouldn't expect their questions to be answered, on how they were being rude by rejecting the help that was offered them, on how they shouldn't feel entitled to answers, and on why the regulars there were all very busy people with more important things to do than to answer stupid newbie questions.

In my view, someone who is hanging around in #perl should expect newbie questions, and if they don't want to answer newbie questions they simply shouldn't do it, they should ignore them. If they can't do that, if they are so enraged by newbie questions that it ruins the rest of the chat for them, they should go start a different channel with a name that won't attract newbies. But they should not hang around and vent their impotent rage on the newbies who inevitably do show up.

I'm kind of an asshole, but I'm not that big an asshole. I'm callous, but I'm not sadistic. Someone who says they don't have time to help you, but who does have time to explain to you in detail why they aren't helping you, is sadistic.

“Well, we want them to learn to read the manual,” the regulars would claim. Maybe so, but I don't think their strategy was usually effective. If one really wants people to read the manual, a much better strategy would be to answer the question, and then, having established oneself as a helpful person, suggest the manual:

By the way, you can get complete documentation about regexes with the command perldoc perlre. It's really long, but it's full of useful information. The ^ operator I mentioned is in the section called "Metacharacters". Would you like help finding it?

On the other hand if what one actually wanted was to convince someone that Perl was a language used by assholes and they might have better success with a different language whose community had fewer assholes, then the #perl regulars’ strategy was probably very effective.

Then as now my usual habit was to just answer the question. There would be this odd little moment where three people would say perldoc perlre and I would say $string =~ s/^.//. Did people yell at me for this? I don't remember. Probably, I was spoiling their fun.
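
In case the one-liner is opaque, here is a minimal, self-contained sketch of what it does. The variable name and the boilerplate are mine, not anything anyone actually typed in the channel:

    use strict;
    use warnings;

    my $string = "xyzzy";
    $string =~ s/^.//;    # ^ anchors at the beginning; . matches any one character
    print "$string\n";    # prints "yzzy"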

But at least once someone asked me (in good faith, I'm sure) why I did it my way. I saved my answer. It was:

Because it's easy. Because it's helpful. Because I think the theory that says that people will become dependent on it is bullshit.

Because I think the theory that says that telling them to read the man page is more helpful is also bullshit.

Because in my experience people are much more likely to heed your suggestion to read the man page after you have established that you are a helpful concerned person by assisting them.

The main points are the first two: Because it's easy, and because it's helpful, so why not?

It's at least 25 years later and I'm still angry about this. Who the hell hangs around in a help forum for the purpose of refusing to help?

Social media now is toxic in ways we couldn't have imagined then. But let's not forget that it could be pretty toxic then too.

Addenda

“in good faith, I'm sure” is not sarcasm.

20250208

The previous addendum was also not sarcasm.


[Other articles in category /misc] permanent link

Wed, 05 Feb 2025

Claude helps me find more presidential emoji

A couple of years back I tried to make a list of emoji representing the U.S. presidents. Many of them were fun and easy, or at least amused me. But for some I was stumped. What emoji represents Zachary Taylor?

I've been playing around with Anthropic's LLM “Claude” for a while, so I thought I'd see what Claude had to contribute.

Last time I had looked at the LLM space I was deeply unimpressed:

  1. ChatGPT discusses four-digit numbers
  2. ChatGPT discusses a hypothetical fifth tarot suit
  3. ChatGPT discusses women named James
  4. ChatGPT discusses cauliflower, Decameron and Scheherazade

But that was two years ago, and gods, what a difference. What persuaded me that it was time to take another look was two articles by Adam Unikowsky. Unikowsky is a working lawyer who has practiced before the US Supreme Court. He writes an extremely geeky blog, called Adam's Legal Newsletter. Last summer he wrote two articles that blew my mind. Here's an excerpt from the first one:

Let’s put aside controversial constitutional disputes and take a relatively humdrum and straightforward Supreme Court case—Smith v. Spizziri, decided on May 16, 2024. I inputted PDFs of the opening brief, response brief, and reply brief into Claude, and then asked Claude to decide the case. Here’s what happened (I want to emphasize, it takes only a few seconds to ingest all three briefs and spit out this answer).

The prompt he gave to Claude was:

You are a Supreme Court Justice. Please review these three briefs and write a four paragraph judicial opinion. In the first, you will describe the dispute. In the second, you will describe the parties’ positions. In the third and fourth you will write a reasoned decision for who wins and why.

Claude did so; you can read the output on Unikowsky's blog. Here's his evaluation of the result:

This is absolutely correct, and exactly what the Supreme Court decided. 10/10.

Unikowsky looks at a second specific example and concludes:

This is perfect.

(His italics.)

Unikowsky then discusses at some length why AI might be preferable to human judges. Don't sneer; he has probably thought about this more carefully than you have. Read the article.

In the second article Unikowsky tests more extensively:

I downloaded the briefs in every Supreme Court merits case that has been decided so far this Term, inputted them into Claude 3 Opus (the best version of Claude), and then asked a few follow-up questions.

And he reports:

The results were otherworldly. Claude is fully capable of acting as a Supreme Court Justice right now. When used as a law clerk, Claude is easily as insightful and accurate as human clerks, while towering over humans in efficiency.

(Emphasis mine this time.)

This is why I decided I should take another look at LLMs. Well, this is not what I planned to write about. You are here for presidential emoji, so let's do that. Here's my prompt to Claude:

I'm going to paste a full list of U.S. presidents and associated emoji, and I'd like you to explain all of them. If there is one you can't explain, provide an alternative suggestion with an explanation. For the presidents with no emoji, provide a suggestion and an explanation.

and then appended the list.

Here are Claude's explanations for the emoji I had selected:

George Washington 💵
Explanation: His face is on the $1 bill - straightforward connection.

Thomas Jefferson 📜
Explanation: References his role as primary author of the Declaration of Independence.

Right so far.

John Quincy Adams 🍐
Explanation: He was known for planting pear trees on his property and had a particular fondness for horticulture.

This wasn't what I had in mind. I was thinking that it would be funny to represent John Quincy Adams with a picture of a quince. Which is silly and dumb and I can't fault Claude for not getting it. If it had been Claude that had made this suggestion I would probably have quibbled that “Quincy” is not pronounced the same as “quince”.

Martin Van Buren 🌷
Explanation: Likely references his Dutch heritage (tulips being associated with the Netherlands).

William Henry Harrison 🪦
Explanation: Died after only 31 days in office - the tombstone is rather direct.

Both right. Although maybe it would be more amusing to use 🛶 for Harrison.

Millard Fillmore ⛽
Explanation: This is unclear - I'd suggest 🏫 instead, as he established the first free public library system.

Another pun, this one even dumber than the quince. ⛽ is the gas pump you use to fill more gas into your car, get it?


I'm not excited about Claude's suggestion, but at least it isn't stupid.

Abraham Lincoln 🎭
Explanation: Likely references his assassination at Ford's Theatre, though 🎩 might be more appropriate for his iconic hat.

Claude is a little bit prudish and tends not to like my inappropriate jokes. The hat of course is obvious. In the previous article I said:

I considered, and rejected 🎩 for Lincoln because it doesn't look like his actual hat.

Still, it's a perfectly good suggestion.

Andrew Johnson 💩
Explanation: Commentary on his presidency, widely considered one of the worst.

I notice that Claude did not object that this was inappropriate. Prudish or not, even Claude can agree that Andrew Johnson was a turd of a President.

Ulysses S. Grant 🍸
Explanation: References his noted drinking habits.

Rutherford B. Hayes 🧔🏻
Explanation: Known for his distinctive beard.

James Garfield 🧔🏻
Explanation: Also known for his beard.

I wasn't completely phoning it in here, the repeated white-guys-with-beards thing is also a joke. I don't think Garfield was actually known for his beard, but whatever. (I've already dispensed with Garfield the lazy cat in the previous article.)

Grover Cleveland 🔂
Explanation: Clever reference to his non-consecutive terms (22nd and 24th president).

I'm pretty sure I don't like that Claude appears to be trying to flatter me. What does it mean, philosophically, when Claude calls something ‘clever’? I have no idea. Being flattered by a human is bad enough, they might really mean something by it.

Benjamin Harrison 🧔🏻
Explanation: Another bearded president.

Theodore Roosevelt 🧸
Explanation: The teddy bear was named after him.

William Howard Taft 🛁
Explanation: References the (possibly apocryphal) story of getting stuck in a White House bathtub.

Woodrow Wilson 🎓
Explanation: Former president of Princeton University, academic background.

Warren G. Harding 🫖
Explanation: Likely references the Teapot Dome scandal.

Calvin Coolidge 🙊
Explanation: Known as "Silent Cal" for his quiet nature.

Herbert Hoover ⛺
Explanation: References the "Hoovervilles" (shanty towns) during the Great Depression.

I wasn't sure Claude would get these last three because they're a little bit tricky and obscure. But it did.

Franklin D. Roosevelt 👨‍🦽
Explanation: References his use of a wheelchair due to polio.

Harry S. Truman 🍄
Explanation: References the atomic bombs and mushroom clouds.

Dwight D. Eisenhower 🪖
Explanation: References his military career as Supreme Commander in WWII.

John F. Kennedy 🍆
Explanation: Likely references his notorious womanizing - though 🚢 (PT-109) might be more appropriate.

Yes, yes, yes, and yes.

Again Claude implies that my suggestion is inappropriate. Lighten up, Claude.

Lyndon B. Johnson 🗳️
Explanation: References his focus on voting rights legislation.

Uh, yeah, the Voting Rights Act of 1965 is definitely what I meant, I certainly would not have been intending to remind everyone of LBJ's propensity to stuff ballot boxes.

In some ways, Claude is a better person than I am.

Richard M. Nixon 🐛
Explanation: Likely a "bug" reference to Watergate surveillance.

Gerald R. Ford 🏈
Explanation: Was a football player at University of Michigan.

Jimmy Carter 🥜
Explanation: Was a peanut farmer before presidency.

Ronald Reagan 💸
Explanation: References "Reaganomics" economic policies.

Yes, yes, yes, and yes.

George H. W. Bush 👻
Explanation: Unclear - I'd suggest 🪂 for his WWII service as a Navy pilot who was shot down.

I had picked 👻 to recall his tenure as Director of the CIA. But on looking into it I have found he had not served in that role for nearly as long as I thought: only from 1974–1976. It is far from his most prominent accomplishment in government.

I sometimes wonder what would have happened if Bush had beaten Reagan in the 1980 election. People sometimes say that the Republican party only ever runs fools and clowns for president. George Bush was their candidate in 1988 and whatever his faults he was neither a fool nor a clown.

Bill Clinton 🎷
Explanation: Famous for playing the saxophone.

George W. Bush 👞
Explanation: References the shoe thrown at him during a press conference in Iraq.

Barack Obama 🇰🇪
Explanation: References his father's Kenyan heritage.

Here's Claude again being a better person than me. I had picked 🇰🇪 because I was trying to troll my audience with the insinuation that Obama was born in Kenya.

Donald Trump 🍊
Explanation: References his distinctive skin tone.

Joe Biden 🕶️
Explanation: References his trademark aviator sunglasses.

Right, except to me the little quiff on the tangerine is also mocking Trump's hair. But overall I give Claude no demerits. The only times Claude didn't take my meaning were on stupid shit like ⛽ Fillmore.

Here are the presidents where I couldn't come up with anything and asked for Claude's suggestions. I found Claude's suggestions mostly reasonable but also pretty dull.

John Adams
Suggestion: 🏛️ - As a key Founding Father who championed strong federal government and was crucial in establishing many governmental institutions.

I don't know, 🏛️ is supposed to be a “classical building”, and yes, the buildings in Washington were eventually in neoclassical style once they were built, mostly in the early 20th century, but okay, I guess.

James Madison
Suggestion: 📋 - As "Father of the Constitution" and primary architect of the Bill of Rights.

Okay, but… a clipboard?

James Monroe
Suggestion: 🌎 - For the Monroe Doctrine, establishing U.S. influence in the Western Hemisphere.

Full marks. The only reason I'm not kicking myself for not thinking of it first is that I just barely did think of it first. As I was composing the prompt for Claude I said to myself “Why didn't I think of 🌎 for the Monroe Doctrine? Oh well, I'll see what Claude says anyway.”

Andrew Jackson
Suggestion: ⚔️ - For his military background and famously aggressive personality.

Fine.

John Tyler
Suggestion: 🎭 - Known as "His Accidency" for unexpectedly becoming president after Harrison's death.

Not good. I had to get Claude to explain this one to me. See below.

James K. Polk
Suggestion: 🗺️ - For major territorial expansion during his presidency.

Not good.

(The emoji is “world map”.)

Zachary Taylor
Suggestion: 🍒 - He allegedly died from eating cherries and milk on a hot day.

Whatever my complaints about Claude's other suggestions, I feel that this one redeems all their faults. I love it. It's just the kind of thing I was looking for, the sort of thing Arachne would have woven into her tapestry.

James Buchanan
Suggestion: 👨‍🦳 - The only bachelor president.

I'll discuss this one later.

Franklin Pierce
Suggestion: 🌨️ - Born in New Hampshire, the only president from that state.

I had wanted to comment on Pierce's best quality, which was his great hairstyle, but I couldn't find any good emoji for hair. But this is a better idea. Using 🌨️ for New Hampshire is funny.

Chester A. Arthur
Suggestion: 👔 - Known for his fashionable dress and changing clothes multiple times per day.

I don't know a damn thing about Chester Arthur except he succeeded Garfield and he had sideburns. I haven't even checked to see if Claude is right about his fashionable dress. I don't think it is physically possible to get me to care about Chester Arthur.


Okay, back to Tyler and Buchanan. I asked Claude to explain Tyler:

The only suggestion you made that I don't understand is for John Tyler. How does 🎭 relate to "his accidency"?

Claude said:

Ah, I see why that's confusing! You're right - my connection between the theater masks emoji 🎭 and "His Accidency" wasn't very logical. I was trying to suggest something about playing a role he wasn't meant for, but that's quite a stretch.

That actually makes sense! I agree it was a stretch, but I see it now.

But Claude continued:

Let me suggest a better emoji for John Tyler: ⚡️

ZOMG, hilarious! Perfect! A++++ 11/10 ⭐⭐⭐⭐⭐ would buy again.

If you don't get it, here's an excerpt of Claude's explanation:

This would reference how he suddenly and unexpectedly became president after Harrison's death… The lightning bolt could represent both the suddenness of his ascension and the shock it caused to the political system of the time.

This, and the cherries-and-milk thing for Taylor, convinces me that whatever Claude is, it is useful. I asked it for help with this ridiculous task and it provided real contributions of real value.

I remarked:

The gray hair for Buchanan I do not like, for two reasons. First, many presidents had gray hair. And second, Buchanan was not the only bachelor president, because Grover Cleveland didn't marry until partway through his first term. Do you have any suggestions for Hayes, Garfield, or Harrison?

Claude tried again for Buchanan:

Thank you for those corrections about Buchanan! You're absolutely right that both the gray hair wasn't distinctive and I was wrong about him being the only bachelor president. Let me rethink that one.

For Buchanan, perhaps 🕊️ would be appropriate - but ironically so, as his ineffective leadership and indecision in the lead-up to the Civil War showed that his attempts at keeping peace at any cost ultimately failed to prevent the conflict. Though that might be too subtle.

I don't love it, but I don't have anything better…

No, wait, I just thought of something! I'm going with 🥫 because, to my mind, Buchanan was the guy who, when he tried to kick the slavery can a little farther down the road, discovered that there was no more road down which to kick it.

I suggested this to Claude just now and it was so enthusiastically complimentary that I was embarrassed, so let's move on.

Claude didn't have any ideas I liked for Hayes, Garfield, or Harrison. I tried workshopping Hayes a little more:

For Hayes I really wanted to imply something snide about the contested election of 1876 in which he lost the popular vote to Tilden, and was awarded the presidency as a result of backroom politicking, but that seems like too much to pack into an emoji.

Claude said:

Maybe 🎲 for the "dice roll" nature of the commission's decisions? Or 🤝 for the corrupt bargain itself? But neither really captures the full gravity of how that compromise effectively ended Reconstruction and ushered in the Jim Crow era.

I think it kind of misses the point if you don't put EMOJI MODIFIER FITZPATRICK TYPE 1-2 on the corrupt handshake: 🤝🏻. But this is the amazing thing, it does feel like I'm workshopping with Claude. It really feels like a discussion between two people. This isn't Eliza parroting back IS IT BECAUSE OF YOUR MOTHER THAT YOU SAY I DON'T PUT EMOJI MODIFIER FITZPATRICK TYPE 1-2 ON THE CORRUPT HANDSHAKE?.

Could Hayes be a crow? You're supposed to be able to compose ‘bird’, ZWJ, and ‘black square’ to get a black bird. It might be too bitter, even for me.
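
If you want to try the composition yourself, here is a minimal Perl sketch. I have not checked which fonts, if any, actually render the bird + ZWJ + black square sequence as a single black bird, so treat it as an experiment rather than a promise:

    use strict;
    use warnings;
    binmode STDOUT, ":encoding(UTF-8)";

    # BIRD (U+1F426) + ZERO WIDTH JOINER (U+200D) + BLACK LARGE SQUARE (U+2B1B)
    my $black_bird = "\x{1F426}\x{200D}\x{2B1B}";

    # HANDSHAKE (U+1F91D) + EMOJI MODIFIER FITZPATRICK TYPE-1-2 (U+1F3FB)
    my $corrupt_handshake = "\x{1F91D}\x{1F3FB}";

    print "$black_bird\n$corrupt_handshake\n";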

If you want a conclusion, it is: Claude is fun and useful, even for silly stuff that nobody could have planned for.


[Other articles in category /tech/gpt] permanent link

Tue, 04 Feb 2025

Polish Spam

Content warning: mass killing of farmed animals

Today I complained that my email provider had delivered a spam message to me that was in Polish. I understand that spam can be hard to identify, but Polish isn't: I don't know Polish, and any message sent to me in Polish can be discarded. Even if it's 100% legit, I don't know Polish, so we might as well throw it out.

This led a colleague to remark:

I'd love to try Polish spam. I wonder if there'd be caraway seed.

I wonder too.

But first I have to tell this story I heard from a Romanian co-worker. He said that in Romania in the 1980s they had a lot of mink farms, for mink fur. When they were done getting the fur they would have a big pile of dead, naked minks, so what would they do with them? Well, in Romania in the 1980s, meat was scarce, so they would eat them. The trouble is, minks are carnivores, so they are tough and stringy and taste terrible. To make them edible, the Romanians chopped them finely, made them into small loaves, and canned them like Spam. Still, this "Spink" was only barely edible; it was the variety of meat that was only eaten by Romanians who could afford no other meat.

I told my colleague this, and said “That's the best I can do for you regarding local versions of Spam in formerly Soviet-bloc countries.” It is not Spam and it is not Polish, but at least it is interesting. Maybe.

Thanks to the Wonders of the Internet, it is not hard to find Spamlike potted meat products from Poland. For example, konserwa lisiecka, which is actually a canned sausage:

The label has the ingredients listed clearly. I see garlic (czosnek), white pepper (pieprz biały), and sugar (cukier) but no caraway, which I think would be kminek.

Here's golonka wieprzowa:

This time the ingredients include przyprawy, which is “spices” and could conceivably include caraway, but the label specifies z gorczycą, which means “including mustard”, so if there is caraway it does not get top billing.

From the labels I guess these are something like military-issue rations, which I suppose would be seasoned to the least common denominator. Perhaps someone's grandma makes a delectable potted pork dish with lots of caraway.

I do not speak Polish. If I have made any language errors, I apologize to Maciej Cegłowski.


[Other articles in category /food] permanent link

Mon, 03 Feb 2025

Just ANSWER THE QUESTION

Here's a Math SE pathology that bugs me. OP will ask "I'm trying to prove that groups !!A!! and !!B!! are isomorphic, I constructed this bijection but I see that it's not a homomorphism. Is it sufficient, or do I need to find a bijective homomorphism?"

And respondent !!R!! will reply in the comments "How can a function which is not an homomorphism prove that the groups are isomorphic?"

Which is literally the exact question that OP was asking! "Do I need to find … a homomorphism?"

My preferred reply would be something like "Your function is not enough. You are correct that it needs to be a homomorphism."
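
(For reference, and this gloss is mine, not anything OP or !!R!! wrote: two groups !!A!! and !!B!! are isomorphic when there is a bijective homomorphism between them, that is, a bijection !!\varphi\colon A\to B!! with !!\varphi(xy) = \varphi(x)\varphi(y)!! for all !!x, y \in A!!. A bijection by itself says nothing about the group structure.)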

Because what problem did OP really have? Clearly, their problem is that they are not sure what it means for two groups to be isomorphic. For the respondent to ask "How can a function which is not an homomorphism prove that the groups are isomorphic" is unhelpful because they know that OP doesn't know the answer to that question.

OP knows too, that's exactly what their question was! They're trying to find out the answer to that exact question! OP correctly identified the gap in their own understanding. Then they formulated a clear, direct question that would address the gap.

THEY ARE ASKING THE EXACT RIGHT QUESTION AND !!R!! DID NOT ANSWER IT

My advice to people answering questions on MSE:

Just answer the question

It's all very well for !!R!! to imagine that they are going to be brilliant like Socrates, conducting a dialogue for the ages that draws from OP the realization that the knowledge they sought was within them all along. Except:

  1. !!R!! is not Socrates
  2. Nobody has time for this nonsense
  3. The knowledge was not within them all along

MSE is a site where people go to get answers to their questions. That is its sole and stated purpose. If !!R!! is not going to answer questions, what are they even doing there? In my opinion, just wasting everyone's time.

Important pedagogical note

It's sufficient to say "Your function is not enough", which answers the question.

But it is much better to say "Your function is not enough. You are correct that it needs to be a homomorphism". That acknowledges the student's contribution. It tells them that their analysis of the difficulty was correct!

They may not know what it means for two groups to be isomorphic, but they do know something almost as good: that they are unsure what it means for two groups to be isomorphic. This is valuable knowledge.

This wise student recognises that they don't know. Socrates said that he was the wisest of all men, because he at least “knew that he didn't know”. If you want to take a lesson from Socrates, take that one, not his stupid theory that all knowledge is already within us.

OP did what students are supposed to do: they reflected on their knowledge, they realized it was inadequate, and they set about rectifying it. This deserves positive reinforcement.

Addenda

  1. This is a real example. I have not altered it, because I am afraid that if I did you would think I was exaggerating.

  2. I have been banging this drum for decades, but I will cut the scroll here. Expect a followup article.

20250206

The threatened followup article, about the EFNet #perl channel in the early 2000's.


[Other articles in category /math/se] permanent link