[ I started thinking about this about twenty years ago, and then writing it down in 2019, but it seems to be obsolete. I am publishing it anyway. ]
The canonical division of the year into seasons in the northern
temperate zone goes something like this:
Spring: March 21 – June 21
Summer: June 21 – September 21
Autumn: September 21 – December 21
Winter: December 21 – March 21
Living in the mid-Atlantic region of the northeast U.S., I have never
been happy with this. It is just not a good description of the
climate.
I begin by observing that the year is not equally partitioned between
the four seasons. The summer and winter are longer, and spring and
autumn are brief and happy interludes in between.
I have no problem with spring beginning in the middle of March. I
think that is just right. March famously comes in like a lion and
goes out like a lamb. The beginning of March is crappy, like
February, and frequently has snowstorms and freezes. By the end of
March, spring is usually skipping along, with singing birds and not just the early
flowers (snowdrops, crocuses, daffodils) but many of the later ones also.
By the middle of May the spring flowers are over and the weather is
getting warm, often uncomfortably so. Summer continues through the
beginning of September, which is still good for swimming and
lightweight clothes. In late September it finally gives way to
autumn.
Autumn is jacket weather but not overcoat weather. Its last gasp is
in the middle of November. By this time all the leaves have changed,
and the ones that are going to fall off the trees have done so. The
cool autumn mist has become a chilly winter mist. The cold winter
rains begin at the end of November.
So my first cut would look something like this:
Months       Seasons
January      Winter
February     Winter
March        Winter, then Spring
April        Spring
May          Spring, then Summer
June         Summer
July         Summer
August       Summer
September    Summer, then Autumn
October      Autumn
November     Autumn, then Winter
December     Winter
Note that this puts Thanksgiving where it belongs, at the boundary
between autumn (harvest season) and winter (did we harvest enough to
survive?). Also, it puts the winter solstice (December 21) about one
quarter of the way through the winter. This is correct. By the
solstice the days have gotten short, and after that the cold starts to
kick in. (“As the days begin to lengthen, the cold begins to
strengthen”.) The conventional division takes the solstice as the
beginning of winter, which I just find perplexing. December 1 is
not the very coldest part of winter, but it certainly isn't autumn.
There is something to be said for it though. I think I can
distinguish several subseasons — ten in fact:
Dominus Seasonal Calendar

Seasons                                  Sub-seasons
Winter (mid-November – mid-March)        Early winter, Midwinter, Late winter
Spring (mid-March – mid-May)             Early spring, Late spring
Summer (mid-May – mid-September)         Early summer, Midsummer, Late summer
Autumn (mid-September – mid-November)    Early autumn, Late autumn
Midwinter, beginning around the solstice, is when the really crappy
weather arrives, day after day of bitter cold. In contrast, early and
late winter are typically much milder. By late February the snow is
usually starting to melt. (March, of course, is always unpredictable,
and usually has one nasty practical joke hiding up its sleeve. Often,
March is pleasant and springy in the second week, and then mocks you
by turning back into January for the third week. This takes people by
surprise almost every year and I wonder why they never seem to catch
on.)
Similarly, the really hot weather is mostly confined to
midsummer. Early and late summer may be warm, but you do not get
blazing sun, and you have to fry your eggs indoors, not on the
pavement.
Why the seasons seem to turn in the middle of each month, and not at
the beginning, I can't say. Someone messed up, but who? Probably the
Romans. I hear that the Persians and the Baha’i start their year on
the vernal equinox. Smart!
Weather in other places is very different, even in the temperate
zones. For example, in southern California they don't have any of the
traditional seasons. They have a period of cooler damp weather in the
winter months, and then instead of summer they have a period of gloomy
haze from June through August.
However
I may have waited too long to publish this article, as climate change
seems to have rendered it obsolete. In recent years we have barely
had midwinter, and instead of the usual two or three annual snows we
have had zero. Midsummer has grown from two months to four, and
summer now lasts into October.
A few days ago I got angry at xargs for the hundredth time, because
for me xargs is one of those "then he had two problems" technologies.
It never does what I want by default and I can never remember how to
use it. This time what I wanted wasn't complicated: I had a bunch of
PDF documents in /tmp and I wanted to use GPG to encrypt some of
them, something like this:
gpg -ac $(ls *.pdf | menupick)
menupick
is a lovely little utility that reads lines from standard input,
presents a menu, prompts on the terminal for a selection from the
items, and then prints the selection to standard output. Anyway, this
didn't work because some of the filenames I wanted had spaces in them,
and the shell sucks. Also because
gpg probably only does one file at a time.
I could have done it this way:
ls *.pdf | menupick | while read f; do gpg -ac "$f"; done
but that's a lot to type. I thought “aha, I'll use xargs.” Then I
had two problems.
ls *.pdf | menupick | xargs gpg -ac
This doesn't work because xargs wants to batch up the inputs to run
as few instances of gpg as possible, and gpg only does one file at
a time. I glanced at the xargs manual looking for the "one at a
time please" option (which should have been the default) but I didn't
see it amongst the forest of other options.
I think now that I needed -n 1 but I didn't find it immediately, and
I was tired of looking it up every time when it was what I wanted
every time. After many years of not remembering how to get xargs to
do what I wanted, I decided the time had come to write a stripped-down
replacement that just did what I wanted and nothing else.
(In hindsight I should perhaps have looked to see if gpg's
--multifile option did what I wanted, but it's okay that I didn't,
this solution is more general and I will use it over and over in
coming years.)
xar is a worse version of xargs, but worse is better (for me)
First I wrote a comment that specified the scope of the project:
# Version of xargs that will be easier to use
#
# 1. Replace each % with the filename, if there are any
# 2. Otherwise put the filename at the end of the line
# 3. Run one command per argument unless there is (some flag)
# 4. On error, continue anyway
# 5. Need -0 flag to allow NUL-termination
There! It will do one thing well, as Brian and Rob commanded us in
the Beginning Times.
I wrote a draft implementation that did not even do all those things,
just items 2 and 4, then I fleshed it out with item 1. I decided that
I would postpone 3 and 5 until I needed them. (5 at least isn't a
YAGNI, because I know I have needed it in the past.)
The result was this:
import subprocess
import sys

def command_has_percent(command):
    for word in command:
        if "%" in word:
            return True
    return False

def substitute_percents(target, replacement):
    return [ s.replace("%", replacement) for s in target ]

def run_command_with_filename(command_template, filename):
    command = command_template.copy()
    if not command_has_percent(command):
        command.append("%")
    res = subprocess.run(substitute_percents(command, filename), check=False)
    return res.returncode == 0

if __name__ == '__main__':
    template = sys.argv[1:]
    ok = True
    for line in sys.stdin:
        if line.endswith("\n"):
            line = line[:-1]
        if not run_command_with_filename(template, line):
            ok = False
    exit(0 if ok else 1)
Short, clean, simple, easy to use. I called it xar, ran
ls *.pdf | menupick | xar gpg -ac
and was content.
Now again, with Claude
The following day I thought this would be the perfect opportunity to
try getting some LLM help with programming. I already had a baseline
version of xar working, and had thought through the problem
specification. Now I could do it over with the LLM and compare the
two results. The program I wanted was small and self-contained. If
the LLM sabotaged me somehow, I would probably notice, and if I
didn't, it wouldn't matter, because I had a working version
already.
So I asked Claude about it. I knew Claude was not the right choice,
because Claude is not really designed for this task, and has no
integration with the coding editor, but Claude is what I am familiar
with so I thought I'd give it a shot.
On the whole it went extremely well.
The complete transcript is here. I
imagine this was pretty much a maximally good experience, that all the
terrible problems of LLM coding arise in larger, more complicated
systems. But this program is small and self-contained, with no
subtleties, so the LLM could be at its best.
Phase 1
First I just told Claude what I wanted and let it write the program
from scratch.
Let's write a program in Python called xar. It will be a
stripped-down version of the standard Unix xargs utility, but it
will do less and be simpler to use.
It will get command-line
arguments that specify a command template. Then it will read lines
from stdin. Each line will be appended to the command template to
produce a command. The program will run the resulting command, one
command for each line of input.
Any questions?
Default Claude is very chatty and I am going to omit most of its
chatty comments, which make me impatient.
The program it wrote was not what I would have written, but it was
good enough. If I had just used it right off the bat, instead of
writing my own, it would have been enough, and it would have taken
somewhere between 2% and 10% as long to produce.
So the one-line summary of this article is: I should stop writing
simple command-line utilities, and just have Claude write them
instead, because Claude's are good enough, and definitely better than
yak-shaving.
Here's Claude's initial version:
#!/usr/bin/env python3
import sys
import subprocess
import shlex

def main():
    if len(sys.argv) < 2:
        print("Usage: xar command [args...]", file=sys.stderr)
        sys.exit(1)

    command_template = sys.argv[1:]

    for line in sys.stdin:
        line = line.rstrip('\n')
        full_command = command_template + [line]
        try:
            result = subprocess.run(full_command, check=True)
        except subprocess.CalledProcessError as e:
            print(f"Command failed with exit code {e.returncode}: {' '.join(full_command)}",
                  file=sys.stderr)
        except FileNotFoundError:
            print(f"Command not found: {full_command[0]}", file=sys.stderr)
            sys.exit(1)

if __name__ == "__main__":
    main()
Claude's version had numerous comments, which I have omitted. I later
told it to stop putting in comments, which it did.
Claude's use of check here was not what I wanted, because that makes
subprocess.run raise an exception when the subcommand fails, and
then the exception has to be immediately caught and handled. My
original control flow had been simpler:
res = subprocess.run(substitute_percents(command, filename), check=False)
return res.returncode == 0
…
if not run_command_with_filename(template, line):
    ok = False
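The distinction matters: with check=False, subprocess.run always returns a CompletedProcess object whose returncode you can inspect at your leisure; with check=True, a nonzero exit raises CalledProcessError instead. A two-line experiment (assuming the standard Unix false command) makes the difference visible:

import subprocess

res = subprocess.run(["false"], check=False)
print(res.returncode)                   # 1, and no exception was raised

subprocess.run(["false"], check=True)   # raises CalledProcessError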
Claude's program pulled in shlex without noticing that it was
unused. But Claude did teach me about str.rstrip('\n') which I had
not known about before (or maybe had forgotten), so that was one small
win already.
Argument parsing
The next step was a big win. Python's library for command-line
argument handling is called argparse and it is really nice. If I
were still writing programs in Perl, I would implement a Perl version
of argparse because Perl has 29 available argument parsing libraries
and they are all loathsome. The one problem with argparse is I never
remember off the top of my head how to use it. I think the module is
called argparse but it provides a class called Argparser but I
often get these backward and try to use argparser and Argparse.
Instead of figuring it out every time I usually dig up some previous
Python program and then copy-paste the argument parser from there,
amending it to suit the purpose.
But this time I didn't have to do that. Instead, I just said to
Claude:
This is good, now please add code at the top to handle argument
parsing with the standard Argparse library, even though there are
no options yet.
Claude handed me back pretty much the same program, but with the
argument parser at the top:
def parse_arguments():
    """Parse command-line arguments using argparse."""
    parser = argparse.ArgumentParser(
        description="A simplified version of xargs that appends each line from stdin to a command template.",
        prog="xar"
    )

    # Store all positional arguments as the command to execute
    parser.add_argument(
        "command",
        nargs="+",
        help="The command template to execute for each input line"
    )

    return parser.parse_args()
Let's pause for a moment. Maybe you kids are unimpressed by this.
But if someone had sent this interaction back in time and showed it to
me even as late as five years ago, I would have been stunned. It
would have been completely science-fictional. The code that it
produced is not so impressive, maybe, but that I was able to get it in
response to an unstructured request in plain English is like seeing a
nuclear-powered jetpack in action.
I make some changes
It bothered me that the command-line arguments were being taken
implicitly from sys.argv; that makes the program hard to test. A
better way to do this is to have the list of command-line arguments be
an argument to the main function, and then for the runtime to call
main(sys.argv[1:]) instead of just main(). That way you can test
the program without having to fake up sys.argv. So I changed that,
and while I was making changes I got rid of Claude's comments and the
unnecessary exception handling. I pasted the result back at Claude,
saying:
Good so far. I have adjusted your source code in several ways:
I removed the comments.
I removed the exception handling for the subcommand. If one of the subcommands fails, that no longer terminates the main loop.
Command-line arguments are passed to the main() function rather than being accessed directly.
Here is the code. Tell me if you see any problems or if you have any questions about it.
I had also sabotaged the code with at least one likely error.
Python's subprocess.run returns a “result” structure that includes
the subcommand's output and exit status, and I can never remember the
name of the component that records the exit status. I either look it
up or get it by trial-and-error. This time I didn't bother to either,
I just guessed that it was result.exitcode, which I handed off to
Claude even though I thought it was probably wrong.
Claude's reply said:
Here are a couple of issues I noticed:
There's an error in the error handling code - you're using result.exitcode but in the subprocess.run() function, the attribute is called returncode not exitcode
Awesome, now I don't have to remember the name of that component.
Claude continued:
The parse_arguments() function definition is missing in the pasted code, but it would need to be updated to accept the argv parameter
I had passed sys.argv[1:] to main, and then had main call parse_arguments(argv).
But I hadn't changed parse_arguments to expect an argument list. (I
hadn't included it in my paste at all, in fact.) Claude was exactly
right, and continued with the corrected version:
#!/usr/bin/env python3
import sys
import subprocess
import argparse

def parse_arguments(argv):
    …
    return parser.parse_args(argv)

def main(argv):
    # Parse command-line arguments
    args = parse_arguments(argv)
    …

if __name__ == "__main__":
    ok = main(sys.argv[1:])
    exit(0 if ok else 1)
Handling NUL-separated values
I said
It's great, thanks for catching those errors. Now let's add a -0 command-line option that tells the program to split the input on "\0" characters instead of at line breaks.
Recall that my handwritten xar did not do this yet; I had left that
feature for another day. Claude just added it for me, adding
parser.add_argument(
    "-0", "--null",
    action="store_true",
    help="Input items are terminated by a null character instead of by newline"
)
to the argument parser and changing the main loop to:
# Split input based on the delimiter option
if args.null:
    # Split on null characters
    items = stdin_data.split('\0')
else:
    # Split on newlines
    items = stdin_data.splitlines()

for item in items:
    if not item:
        continue
    …
I was tired of Claude's comments, so I said
Give me the same thing, with no comments
which it did, so I said
From now on leave out the comments. I'm a Python expert and I don't need them. If there is something I don't understand I will ask you.
Claude complied. If I were going to use Claude again in the future
I would include that in the canned instructions that Claude is given
up front. Instead I will probably use a tool better suited to
programming, and do whatever the analogous thing is.
Template filling
Now I told Claude to add the % feature:
I changed if not item to if item == "". Now let's make the following change:
If the command template includes any % characters, each of these should be replaced with the input item.
Otherwise, if there were no % characters, the input item should be appended to the end of the command as in the current version
Claude did this. It used an explicit loop instead of the list
comprehension that I had used (and preferred), but it did do it correctly:
for arg in command_template:
    if '%' in arg:
        has_placeholder = True
        full_command.append(arg.replace('%', item))
    else:
        full_command.append(arg)

if not has_placeholder:
    full_command.append(item)
Even without the list comprehension, I would have factored
out the common code:
for arg in command_template:
    if '%' in arg:
        has_placeholder = True
    full_command.append(arg.replace('%', item))

if not has_placeholder:
    full_command.append(item)
But I am not going to complain: my code is simpler, but it does
unnecessary work.
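(For the record, here is roughly what I mean by the list-comprehension version, recast from the command_has_percent and substitute_percents code in my handwritten draft:

full_command = [ arg.replace('%', item) for arg in command_template ]
if not command_has_percent(command_template):
    full_command.append(item)

The replace is a no-op on arguments that contain no %, which is exactly the unnecessary work I mentioned.)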
Claude also took my hint to change item == "" even though I didn't
explicitly tell it to change that.
At this point the main loop of the main function was 15 lines long,
because Claude had stuck all the %-processing inline. So I said:
Good, let's extract the command template processing into a subroutine.
It did this right, understanding correctly what code I was referring
to and extracting it into a subroutine called
process_command_template. More science fiction: I can say "command
template processing" and it guesses what I had in mind!
This cut the main loop to 7 lines. That worked so well I tried it
again:
Good, now let's extract the part of main that processes stdin into a subroutine that returns the items array
It pulled the correct code into a function called process_stdin. It
did not make the novice mistake of passing the entire args structure
to this function. In the caller it had process_stdin(args.null), and
inside of process_stdin this parameter was named
use_null_delimiter.
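Presumably the extracted function came out something like this sketch (my reconstruction, using the use_null_delimiter name just mentioned, not a verbatim paste):

def process_stdin(use_null_delimiter):
    stdin_data = sys.stdin.read()
    if use_null_delimiter:
        items = stdin_data.split('\0')
    else:
        items = stdin_data.splitlines()
    return items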
YAGNI?
At this point I was satisfied but I thought I might as well ask if it
should do something else before we concluded:
Can you think of any features I left out that would be useful enough
to warrant inclusion? Remember this program is supposed to be small
and easy to use, in contrast to the existing xargs which is very
complicated.
Claude had four suggestions:
1. A -p or --parallel option to run commands in parallel
2. A -n or --max-args option to specify the maximum number of items to pass per command
3. A simple -v or --verbose flag to show commands as they're executed
4. A way to replace the command's standard input with the item instead of adding it as an argument
All reasonable suggestions, nothing stupid. (It also supplied code for #3,
which I had not asked for and did not want, but as I said before,
default Claude is very chatty.)
Parallelization
I didn't want any of these, and I knew that #2–4 would be easy to add if I
did want any of them later. But #1 was harder. I've done code like
this in the past, where the program has a worker pool and runs a new
process whenever the worker pool isn't at capacity. It's not even that
hard. In Perl you can play a cute trick and use something like
$workers{spawn()} = 1 while delete $workers{wait()};
where the workers hash maps process IDs to dummy values. A child
exits, wait() awakens and returns the process ID of the completed
child, which is then deleted from the map, and the loop starts another
worker.
I wanted to see how Claude would do it, and the result was an even
bigger win than I had had previously, because Claude wrote this:
with concurrent.futures.ProcessPoolExecutor(max_workers=args.parallel) as executor:
    futures = [executor.submit(execute_command, cmd, args.verbose)
               for cmd in commands]
    for future in concurrent.futures.as_completed(futures):
        success = future.result()
        if not success:
            ok = False
What's so great about this? What's great is that I hadn't known about
concurrent.futures or ProcessPoolExecutor. And while I might have
suspected that something like them existed, I didn't know what they
were called. But now I do know about them.
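For anyone else meeting these for the first time, here is a minimal self-contained sketch of the same pattern, with a stand-in for xar's real execute_command:

import concurrent.futures
import subprocess

def execute_command(cmd):
    # Stand-in for the real worker: run the command, report success.
    return subprocess.run(cmd).returncode == 0

if __name__ == '__main__':
    commands = [["echo", "one"], ["echo", "two"], ["echo", "three"]]
    ok = True
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(execute_command, cmd) for cmd in commands]
        for future in concurrent.futures.as_completed(futures):
            if not future.result():
                ok = False
    print("all succeeded" if ok else "some command failed")

The executor hands each command to a pool of worker processes, and as_completed yields results as they finish, in whatever order that happens.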
If someone had asked me to write the --parallel option, I would have
had to have this conversation with myself:
Python probably has something like this already. But how long will
it take me to track it down? And once I do, will the API
documentation be any good, or will it be spotty and incorrect? And
will there be only one module, or will there be three and I will
have to pick the right one? And having picked module F6, will I
find out an hour later that F6 is old and unmaintained and that
people will tell me “Oh, you should have used A1, it is the new
hotness, everyone knows that.”
When I put all that uncertainty on a balance, and weigh it
against the known costs of doing it myself, which one wins?
The right choice is: I should do the research, find the good module (A1, not
F6), and figure out how to use it.
But one of my biggest weaknesses as a programmer is that I too often
make the wrong choice in this situation. I think “oh, I've done this
before, it will be quicker to just do it myself”, and then I do and it
is.
Let me repeat, it is quicker to do it myself. But that is still
the wrong choice.
Maybe the thing I wrote would be done sooner, or would be smaller,
faster, or more technically suitable to the project than the canned
module would have been. But it would only have been more technically
suitable today. If it
needed a new feature in the future it might have to be changed by
someone who had never seen it before, whereas the canned module could well
already have the needed feature ready to go, already documented, and
perhaps already familiar to whoever had to make the change. My
bespoke version would certainly be unfamiliar to every other
programmer on the project — including perhaps myself six months later — so would be the wrong thing to use.
I'm really good at hacking this stuff up. Which is a problem. It
makes me want to hack stuff up, even when I shouldn't.
Claude tips the balance strongly toward the correct side, which is
that I should use the prepackaged module that someone else wrote and
not hack something up.
And now I know about concurrent.futures.ProcessPoolExecutor! The
world is full of manuals, how can I decide which ones I should read?
One way is to read them all, which used to work back when I was
younger, but now I have more responsibilities and I don't have time to
read the entire Python library reference including all the useless
bits I will never use. But here's Claude pointing out to me that this
is something I need to know about, now, today, and I should read this one. That is
valuable knowledge.
Claude flubs
While testing the program I did
xar --parallel=2 bash -c '...'
which didn't work because the argparse library tried to parse the
-c as a flag to xar, when I wanted it to be a flag to bash.
This was my fault, and there is a standard way around it, which is
that I should have said
xar --parallel=2 -- bash -c '...'
which tells argparse to drop the -- and take everything after it
literally with no interpretation.
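A stub program is enough to convince yourself that argparse really does handle -- with no extra code (this demo is mine, not from the transcript):

import argparse

parser = argparse.ArgumentParser(prog="xar")
parser.add_argument("-p", "--parallel", nargs="?", const=5, type=int, default=1)
parser.add_argument("command", nargs="+")

args = parser.parse_args(["--parallel=2", "--", "bash", "-c", "exit 0"])
print(args.command)    # ['bash', '-c', 'exit 0']

Without the --, the same parse fails with “unrecognized arguments”, which is just what happened to me.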
I thought I'd see what Claude had to say about this, and it dropped
the ball in a very LLM sort of way.
I asked:
When I did xar --parallel=2 bash -c '...' it misunderstood the
arguments. It thought I was trying to pass -c to xar and failed
with unrecognized arguments. What should I do about that?
Claude replied:
There are two main approaches to handle this:
1. Use the -- separator to indicate the end of arguments for xar: …
2. Modify your code to handle this case specifically by looking for a command terminator in the argument parser.
Here's how you could modify your code to implement the first
approach:
And then it followed with the same, unmodified code…
Which was correct, because -- is already handled by argparse, so
no change was actually required to implement the first approach! But
Claude's characterization of it as a modification was misleading,
wrong, and potentially very confusing.
I said:
I don't see what the difference is in that argument parsing code as compared to what I had before.
and Claude got into the LLM tail-chasing mode where it got more and
more confused:
You're right, I apologize for the confusion. There's no difference
in the argument parsing code I provided compared to what you had
before. The issue is with how argparse handles arguments by
default.
To fix the issue with xar --parallel=2 bash -c '...', you need to
actually implement support for the -- separator in your code…
# Find the index of '--' if it exists
try:
    separator_index = argv.index('--')
    xar_args = argv[:separator_index]
    command_args = argv[separator_index+1:]
except ValueError: …
Ugh, all wrong in the worst way. The code probably works, but it is
completely unnecessary. Claude's claim that “you need to actually
implement support for the -- separator” is flat wrong. I pointed
this out and Claude got more confused. Oh well, nobody is perfect!
Lessons learned
A long time ago, when syntax-coloring editors were still new, I tried
one and didn't like it, then tried again a few years later and
discovered that I liked it better than I had before, and not for the
reasons that anyone had predicted or that I would have been able to
predict.
(I wrote an article about the surprising reasons to use the syntax coloring.)
This time also. As usual, an actual experiment produced unexpected
results, because the world is complicated and interesting. Some of
the results were unsurprising, but some were not anything I would have
thought of beforehand.
Claude's code is good enough, but it is not a magic oracle
Getting Claude to write most of the code was a lot faster and easier
than writing it myself. This is good! But I was dangerously tempted
to just take Claude's code at face value instead of checking it
carefully. I quickly got used to flying along at great speed, and it
was tough to force myself to slow down and be methodical, looking over
everything as carefully as I would if Claude were a real junior
programmer. It would be easy for me to lapse into bad habits,
especially if I were tired or ill. I will have to be wary.
Fortunately there is already a part of my brain trained to deal with
bright kids who lack experience, and I think perhaps that part of my brain
will be able to deal effectively with Claude.
I did not notice any mistakes on Claude's part — at least this time.
At one point my testing turned up what appeared to be a bug, but it
was not. The testing was still time well-spent.
Claude remembers the manual better than I do
Having Claude remember stuff for me, instead of rummaging the
manual, is great. Having Claude stub out an argument parser,
instead of copying one from somewhere else, was pure win.
Partway along I was writing a test script and I wanted to use that
Bash flag that tells Bash to quit early if any of the subcommands
fails. I can never remember what that flag is called. Normally I
would have hunted for it in one of my own shell scripts, or groveled
over the 378 options in the bash manual. This time I just asked in
plain English “What's the bash option that tells the script to abort
if a command fails?” Claude told me, and we went back to what we were
doing.
Claude can talk about code with me, at least small pieces
Claude easily does simple refactors. At least at this scale, it got
them right. I was not expecting this to work as well as it did.
When I told Claude to stop commenting every line, it did. I
wonder, if I had told it to use if not expr only for Boolean
expressions, would it have complied? Perhaps, at least for a
while.
When Claude wrote code I wasn't sure about, I asked it what it was
doing and at least once it explained correctly. Claude had written
parser.add_argument(
    "-p", "--parallel",
    nargs="?",
    const=5,
    type=int,
    default=1,
    help="Run up to N commands in parallel (default: 5)"
)
Wait, I said, I know what the const=5 is doing, that's so that if
you have --parallel with no number it defaults to 5. But what is
the default=1 doing here? I just asked Claude and it told me:
that's used if there is no --parallel flag at all.
This was much easier than it would have been for me to pick over
the argparse manual to figure out how to do this in the first
place.
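Later (as I describe below) I checked the claimed behavior with a stub program; it amounted to something like this:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--parallel", nargs="?", const=5, type=int, default=1)

print(parser.parse_args([]).parallel)            # 1  (no flag at all: default)
print(parser.parse_args(["-p"]).parallel)        # 5  (flag with no value: const)
print(parser.parse_args(["-p", "8"]).parallel)   # 8  (flag with a value)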
More thoughts
On a different project, Claude might have done much worse. It might
have given wrong explanations, or written wrong code. I think that's
okay though. When I work with human programmers, they give wrong
explanations and write wrong code all the time. I'm used to it.
I don't know how well it will work for larger systems. Possibly pretty
well if I can keep the project sufficiently modular that it doesn't get
confused about cross-module interactions. But if the criticism is
“that LLM stuff doesn't work unless you keep the code extremely
modular” that's not much of a criticism. We all need more
encouragement to keep the code modular.
Programmers often write closely-coupled modules knowing that it is bad
and it will cause maintenance headaches down the line, knowing that the
problems will most likely be someone else's to deal with. But what if
writing closely-coupled modules had an immediate cost today, the cost
being that the LLM would be less helpful and more likely to mess up
today's code? Maybe programmers would be more careful about letting
that happen!
Will my programming skill atrophy?
Folks at Recurse Center were discussing this question.
I don't think it will. It will only atrophy if I let it. And I have a
pretty good track record of not letting it. The essence of
engineering is to pay attention to what I am doing and why, to try to
produce a solid product that satisfies complex constraints, to try
to spot problems and correct them. I am not going to stop doing
this. Perhaps the problems will be different ones than they were
before. That is all right.
Starting decades ago I have repeatedly told people
You cannot just paste code with no understanding of
what is going on and expect it to work.
That was true then without Claude and it is true now with Claude. Why
would I change my mind about this? How could Claude change it?
Will I lose anything from having Claude write that complex
parser.add_argument call for me? Perhaps if I had figured it out
on my own, on future occasions I would have remembered the const=5 and default=1
specifications and how they interacted. Perhaps.
But I suspect that I have figured it out on my own in the past, more
than once, and it didn't stick. I am happy with how it went this time.
After I got Claude's explanation, I checked its claimed behavior pretty
carefully with a stub program, as if I had been reviewing a
colleague's code that I wasn't sure about.
The biggest win Claude gave me was that I didn't know about this
ProcessPoolExecutor thing before, and now I do. That is going to
make me a better programmer. Now I know something useful that
I didn't know before, and I have a pointer to documentation I know I
should study.
My skill at writing ad-hoc process pool managers might atrophy, but if
it does, that is good. I have already written too many ad-hoc
process pool managers. It was a bad habit, I should have stopped long
ago, and this will help me stop.
Conclusion
This works.
Perfectly? No, it's technology, technology never works perfectly.
Have you ever used a computer?
Will it introduce new problems? Probably, it's new technology, and
new technology always introduces new problems.
But is it better than what we had before? Definitely.
I still see some programmers turning up their noses at this technology
as if they were sure it was a silly fad that would burn itself out
once people came to their senses and saw what a terrible idea it was.
I think that is not going to happen, and those nose-turning-up people,
like the people who pointed out all the drawbacks and unknown-unknowns
of automobiles as compared to horse-drawn wagons, are going to look
increasingly foolish.
Suppose a centrifuge has !!n!! slots, arranged in a circle around the
center, and we have !!k!! test tubes we wish to place into the slots.
If the tubes are not arranged symmetrically around the center, the
centrifuge will explode.
(By "arranged symmetrically around the center, I mean that if the
center is at !!(0,0)!!, then the sum of the positions of the tubes
must also be at !!(0,0)!!.)
Let's consider the example of !!n=12!!. Clearly we can arrange !!2!!,
!!3!!, !!4!!, or !!6!! tubes symmetrically:
Equally clearly
we can't arrange only !!1!!. Also it's easy to see we can do !!k!! tubes if
and only if we can also do !!n-k!! tubes, which rules out !!n=12,
k=11!!.
From now on I will write !!\nk nk!! to mean the problem of balancing
!!k!! tubes in a centrifuge with !!n!! slots. So !!\dd 2, \dd 3, \dd
4, !! and !!\dd 6!! are possible, and !!\dd 1!! and !!\dd{11}!! are
not. And !!\nk nk!! is solvable if and only if !!\nk n{n-k}!! is.
It's perhaps a little surprising that !!\dd7!! is possible.
If you just ask this to someone out of nowhere they might
have a happy inspiration: “Oh, I'll just combine the solutions for
!!\dd3!! and !!\dd4!!, easy.” But that doesn't work because two groups
of the form !!3i+j!! and !!4i+j!! always overlap.
For example, if your group of !!4!! is the
slots !!0, 3, 6, 9!! then you can't also have your group of !!3!! be
!!1, 5, 9!!, because slot !!9!! already has a tube in it.
The
other balanced groups of !!3!! are blocked in the same way. You
cannot solve the puzzle with !!7=3+4!!; you have to do !!7=3+2+2!! as
below left.
The best way to approach this is to do !!\dd5!!, as below right.
This is easy,
since the triangle only blocks three of the six symmetric pairs.
Then you replace the holes with tubes and the tubes with holes to
turn !!\dd5!! into !!\dd{12-5}=\dd7!!.
Given !!n!! and !!k!!, how can we decide whether the centrifuge can be
safely packed?
Clearly you can solve !!\nk nk!! when !!n!! is a multiple of !!k>1!!, but the example
of !!\dd5!! (or !!\dd7!!) shows this isn't a necessary condition.
A generalization of this is that !!\nk nk!! is always solvable
if !!\gcd(n,k) > 1!! since you can easily
balance !!g = \gcd(n, k)!! tubes at positions !!0, \frac ng, \frac{2n}g, \dots,
\frac {(g-1)n}g!!, then do another !!g!! tubes one position over, and
so on. For example, to do !!\dd8!! you just put the first four tubes
in slots !!0, 3, 6, 9!! and the next four one position over, in slots
!!1, 4, 7, 10!!.
An interesting counterexample is that the strategy for !!\dd7!!,
where we did !!7=3+2+2!!, cannot be extended to !!\nk{14}9!!. One
would want to do !!k=7+2!!, but there is no way to arrange the tubes
so that the group of !!2!! doesn't conflict with the group of !!7!!,
which blocks one slot from every pair.
But we can see that this must be true without even considering the
geometry. !!\nk{14}9!! is the reverse of !!\nk{14}{14-9} = \nk{14}5!!, which
is impossible: the only nontrivial divisors of !!n=14!! are !!2!! and
!!7!!, so !!k!! must be a sum of !!2!!s and !!7!!s, and !!5!! is not.
You can't fit !!k=3+5=8!! tubes when !!n=15!!, but again the reason is
a bit tricky. When I looked at !!8!! directly, I did a case analysis
to make sure that the !!3!!-group and the !!5!!-group would always
conflict. But again there was an easier way to see this: !!8=15-7!! and
!!7!! clearly won't work, as !!7!! is not a sum of !!3!!s and !!5!!s.
I wonder if there's an example where both !!k!! and !!n-k!! are not obvious?
For !!n=20!!, every !!k!! works except !!k=3,17!! and the always-impossible !!k=1,19!!.
What's the answer in general? I don't know.
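Small cases are easy to settle by brute force, though. Here is a quick exhaustive sketch (mine; nothing clever) that reproduces the !!n=20!! claim above:

from itertools import combinations
from math import cos, sin, tau

def balanceable(n, k):
    # Is there a size-k subset of the n slots whose unit position
    # vectors sum to (numerically) zero?
    for tubes in combinations(range(n), k):
        x = sum(cos(tau * t / n) for t in tubes)
        y = sum(sin(tau * t / n) for t in tubes)
        if abs(x) < 1e-9 and abs(y) < 1e-9:
            return True
    return False

print([k for k in range(1, 20) if balanceable(20, k)])
# every k from 2 to 18 except 3 and 17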
Addenda
20250502
Now I am amusing myself thinking about the perversity of a centrifuge
with a prime number of slots, say !!13!!. If you use it at all, you must
fill every slot. I hope you like explosions!
While I did not explode any centrifuges in university chemistry, I did
once explode an expensive Liebig condenser.
Omar Antolín points out an important consideration I missed:
it may be necessary
to subtract polygons. Consider !!\nk{30}6!!. This is obviously
possible since !!6\mid 30!!. But there is a more interesting
solution. We can add the pentagon !!\{0, 6, 12, 18, 24\}!! to the
digons !!\{5, 20\}!! and !!\{10, 25\}!! to obtain the solution
$$\{0, 5, 6, 10, 12, 18, 20, 24, 25\}.$$
Then from this we can subtract the triangle !!\{0, 10, 20\}!! to
obtain $$\{5, 6, 12, 18, 24, 25\},$$ a solution to
!!\nk{30}6!! which is not a sum of regular polygons:
Thanks to Dave Long for pointing out a small but significant error,
which I have corrected.
Given the coordinates of the three vertices of a triangle, can we find
the area? Yes. If by no other method, we can use the Pythagorean
theorem to find the lengths of the edges, and then
Heron's formula to compute the area from
that.
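That recipe is mechanical enough to write straight down; a minimal sketch:

from math import dist, sqrt

def triangle_area(p, q, r):
    # Edge lengths from the coordinates (math.dist is just the
    # Pythagorean theorem), then Heron's formula.
    a, b, c = dist(p, q), dist(q, r), dist(r, p)
    s = (a + b + c) / 2
    return sqrt(s * (s - a) * (s - b) * (s - c))

print(triangle_area((0, 0), (4, 0), (0, 3)))   # 6.0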
Now, given the coordinates of the four vertices of a quadrilateral,
can we find the area? And the answer is, no, there is no method to do
that, because there is not enough information:
These three quadrilaterals have the same vertices, but different
areas. Just knowing the vertices is not enough; you also need their order.
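One way to make the dependence on order vivid: the shoelace formula computes an area from a vertex list in a given cyclic order, and permuting the same four points changes the answer:

def shoelace_area(pts):
    # Standard shoelace formula; the input is an ordered vertex list.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
bowtie = [(0, 0), (1, 1), (1, 0), (0, 1)]
print(shoelace_area(square))   # 1.0
print(shoelace_area(bowtie))   # 0.0: same vertices, different order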
I suppose one could abstract this: Let !!f!! be the function that maps
the set of vertices to the area of the quadrilateral. Can we
calculate values of !!f!!? No, because there is no such !!f!!, it is
not well-defined.
Put that way it seems less interesting. It's just another example of
the principle that, just because you put together a plausible sounding
description of some object, you cannot infer that such an object must
exist. One of the all-time pop hits here is:
Let !!ε!! be the smallest [real / rational] number strictly greater than !!0!!…
which appears on Math SE quite frequently. Another one I remember is
someone who asked about
the volume of a polyhedron with exactly five faces, all triangles. This
is a fallacy at the ontological level, not the mathematical
level, so when it comes up I try to demonstrate it with a
nonmathematical counterexample, usually something like “the largest
purple hat in my closet” or perhaps “the current Crown Prince of the
Ottoman Empire”. The latter is less good because it relies on the
other person to know obscure stuff about the Ottoman Empire, whatever
that is.
This, unfortunately, is also the error in Anselm's so-called
“ontological proof of God”. A philosophically-minded friend of mine
once remarked that being known for the discovery of the ontological
proof of God is like being known for the discovery that you can wipe
your ass with your hand.
Anyway, I'm digressing. The interesting part of the quadrilateral
thing, to me, is not so much that !!f!! doesn't exist, but the specific
reasoning that demonstrates that it can't exist. I think there are
more examples of this proof strategy, where we prove nonexistence
by showing there is not enough information for the thing to exist, but
I haven't thought about it enough to come up with one.
There is a proof, the so-called
“information-theoretic proof”,
that a comparison sorting algorithm takes at least !!O(n\log n)!! time, based
on comparing the amount of information gathered from the comparisons
(one bit each) with that required to distinguish all !!n! !! possible
permutations (!!\log_2 n! \approx n\log_2 n!! bits total). I'm not sure
that's what I'm looking for here. But I'm also not sure it isn't, or
why I feel it might be different.
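(For the record, the !!n\log_2 n!! figure is Stirling's approximation: $$\log_2 n! = n\log_2 n - n\log_2 e + O(\log n).$$)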
Addenda
20250430
Carl Muckenhoupt suggests that logical independence proofs are of the
same sort. He says, for example:
Is there a way to prove the parallel postulate from Euclid's other
axioms? No, there is not enough information. Here are two geometric
models that produce different results.
This is just the sort of thing I was looking for.
20250503
Rik Signes has allowed me to reveal that he was the source of the
memorable disparagement of Anselm's dumbass argument.
Welcome to Philadelphia! We have a lot of political corruption here.
I recently wrote about the unusually corrupt Philadelphia Traffic Court,
where four of the judges went to the federal pokey, and the state
decided there was no way to clean it up, they had to step on it like a
cockroach. I ended by saying:
One of those traffic court judges was Willie Singletary, who I've
been planning to write about since 2019. But he is a hard worker who
deserves better than to be stuck in an epilogue, so I'll try to get
to him later this month.
This is that article from 2019, come to fruit at last. It was
originally inspired by this notice that appeared at my polling place on
election day that year:
Willie Singletary, candidate for Democratic Council At-Large,
has been removed from the Primary Ballot by Court Order.
Although his name appears on the ballot, votes for this
candidate will not be counted because he was convicted of two
Class E felonies by the United States District Court for the
Eastern District of Pennsylvania, which bars his candidacy under
Article 2, Section 7 of the Pennsylvania Constitution.
That's because Singletary had been one of those traffic court judges.
In 2014 he had been convicted of lying to the FBI in connection with
that case, and was sentenced to 20 months in federal prison; I think
he actually served 12.
That didn't stop Willie from trying to run for City Council, though,
and the challenge to his candidacy didn't wrap up before the ballots
were printed, so they had to post these notices.
Even before the bribery scandal and the federal conviction,
Singletary had already lost his Traffic Court job when it transpired
that he had showed dick pics to a Traffic Court cashier.
Before that, when he was campaigning for the Traffic Court job, he was
caught on video promising to give favorable treatment to campaign donors.
But Willie's enterprise and go-get-it attitude means he can't be kept
down for long. Willie rises to all challenges! He is now enjoying a
$90,000 annual salary as a Deputy Director of Community Partnerships
in the administration of Philadelphia Mayor Cherelle Parker. Parker's
spokesperson says
"The Parker administration supports every person’s right to a second chance in society.”
I think he might be on his fourth or fifth chance by now, but who's
counting? Let it never be said that Willie Singletary was a quitter.
Lorrie once made a remark that will live in my memory forever, about
the "West Philadelphia local politics-to-prison pipeline”. Mayor
Parker is such a visionary that she has been able to establish a
second pipeline in the opposite direction!
Addendum 20250501
I don't know how this happened, but when I committed the final version
of this article a few days ago, the commit message that my fingers
typed was:
Date: Sat Apr 26 14:24:19 2025 -0400
Willie Wingletsray finally ready to go
[ Content warning: possibly amusing, but silly and pointless ]
My wife Lorrie wrote this on 31 January 2013:
I got an e-mail from Husband titled, "The mills of Fenchurch grind slow,
but they grind exceeding small." This silliness, which is off-the-charts
silly, is going to require explanation.
Fenchurch is a small blue octopus made of polyester fiberfill. He was the
first one I ever bought, starting our family's octopus craze, and I gave
him to Husband in 1994. He is extremely shy and introverted. He hates
conflict and attention. He's a sensitive and very artistic soul. His
favorite food is crab cakes, followed closely by shrimp. (We have made up
favorite foods, professions, hobbies, and a zillion scenarios for all of
our stuffed animals.)
In our house it was well-established canon that Fenchurch's
favorite food was crab cakes. I had even included him as an example
in some of my conference talks:
He has a ladylove named Junko whom he takes on buggy
rides on fine days. When Husband is feeling very creative and vulnerable,
he identifies with Fenchurch.
Anyway, one time Husband got a traffic ticket and this Traffic Court
judge named Fortunato N. Perri was unbelievably mocking to him at
his hearing. Good thing Husband has the thick skin of a native
Manhattanite. … It was so awful that Husband and I remember bits
of it more than a decade later.
I came before Fortunato N. Perri in, I think, 1996. I had been
involved in a very low-speed collision with someone, and I was
ticketed because the proof of insurance in my glove box was expired.
Rather than paying the fine, I appeared in traffic court to plead not
guilty.
It was clear that Perri was not happy with his job as a traffic
court judge. He had to listen to hundreds of people making the same
lame excuses day after day. “I didn't see the stop sign.” “The sun
was in my eyes.” “I thought the U-turn was legal.” I can't blame
Perri for growing tired of this. But I can blame him for the way
he handled it, which was to mock and humiliate the people who came
before him.
“Where are you from?”
“Ohio.”
“Do they have stop signs in Ohio?”
“Uh, yes.”
“Do you know what they look like?”
“Yes.”
“Do they look like the stop signs we have here?”
“Yes.”
“Then how come you didn't see the stop sign? You say you know what a
stop sign looks like but then you didn't stop. I'm fining you $100.
You're dismissed.”
He tried to hassle me also, but I kept my cool, and since I wasn't
actually in violation of the law he couldn't do anything to me. He
did try to ridicule my earring.
“What does that thing mean?”
“It doesn't mean anything, it's just an earring.”
“Is that what everyone is doing now?”
“I don't know what everyone is doing.”
“How long ago did you get it?”
“Thirteen years.”
“Huh. … Well, you did have insurance, so I'm dismissing your ticket.
You can go.”
I'm still wearing that earring today, Fortunato. By the way,
Fortunato, the law is supposed to be calm and impartial, showing favor
to no one.
Fortunato didn't just mock and humiliate the unfortunate citizens who
came before him. He also abused his own clerks. One of them was doing
her job, stapling together court papers on the desk in front of the
bench, and he harangued her for doing it too noisily. “God, you might
as well bring in a hammer and nails and start hammering up here, bang
bang bang!”
I once went back to traffic court just to observe, but he wasn't in
that day. Instead I saw how a couple of other, less obnoxious judges
ran things.
Lorrie continues:
Husband has been following news about this judge (now
retired) and his family ever since, and periodically he gives me updates.
(His son, Fortunato N. Perri Jr., is a local civil litigation attorney
of some prominence. As far as I know there is nothing wrong with
Perri Jr.)
And we made up a story that Fenchurch was traumatized by this guy after
being ticketed for parking in a No Buggy zone.
I understood everything when I read that Perri accepted graft in many
forms, including shrimp and crab cakes.
OMG. No wonder my little blue octopus was wroth. No wonder he swore
revenge. This crooked thief was interfering with his food supply!
Lorrie wrote a followup the next day:
I confess Husband and I spent about 15 minutes last night savoring details
about Fortunato N. Perri's FBI bust. Apparently, even he had a twinge of
conscience at the sheer quantity of SHRIMP and CRAB CAKES he got from
this one strip club owner in return for fixing tickets. (Husband noted
that he managed to get over his qualms.)
Husband said Perri hadn't been too mean to him, but Husband still
feels bad about the way Perri screamed at his hapless courtroom
assistant, who was innocently doing her job stapling papers until
Perri stopped proceedings to holler that she was making so much
noise, she may as well be using a hammer.
Fenchurch and his ladylove Junko, who specialize in avant garde performance
art, greeted Husband last night with their newest creation, called
"Schadenfreude." It mostly involved wild tentacle waving and uninhibited
cackling. Then they declared it to be the best day of their entire lives
and stayed up half the night partying.
Epilogues
Later that year, the notoriously corrupt Traffic Court was
abolished, its functions transferred to regular Philadelphia
Municipal Court.
The folks who supplied the traffic tickets and the seafood bribes
were also charged. They tried to argue that they hadn't defrauded
the City of Philadelphia because the people they paid Perri to let
off the hook hadn't been found guilty, and would only have owed
fines if they had been found guilty.
One of those traffic court judges was Willie Singletary, who I've
been planning to write about since 2019. But he is a hard worker
who deserves better than to be stuck in an epilogue, so I'll try to
get to him later this month.
Doing the laundry used to be backbreaking toil. Haul the water, chop
the wood, light the fire, heat the water, and now you are ready to
begin the really tough part of the work. The old saying goes "Wash on
Monday", because Monday is the day after your day of rest, and
otherwise you won't have the strength to do the washing.
And the saying continues: “Iron on Tuesday, mend on Wednesday”.
Routine management of clothing takes half of the six-day work week.
For this reason, washing is the work of last resort for the poorest
and most marginal people. Widows are washerwomen.
Prisons are laundries.
Chinese immigrants run laundries. Anyone
with enough money to outsource their laundry does so.
The invention of mechanical washing machines eliminated a great amount
of human suffering and toil. Machines do the washing now. Nobody has
to break their back scrubbing soiled linens against a washboard.
But the flip side of that is that there are still poor and
marginalized people, who now have to find other work. Mechanical
laundry has taken away their jobs. They no longer have to do the
backbreaking labor of hand laundry. Now they have the option to
starve to death instead.
Is it a net win? I don't know. I'd like to think so. I'd like to
free people from the toil of hand laundry without also starving some
of them to death. Our present system doesn't seem to be very good at
that sort of thing. I'm not sure what a better system would look
like.
Anyway, this is on my mind a lot lately because of the recent
developments in computer-generated art. I think “well, it's not all
bad, because at least now nobody will have to make a living drawing
pornographic pictures of other people's furry OCs. Surely that is a
slight elevation of the human condition.” On the other hand, some of
those people would rather have the money and who am I to deny them
that choice?
A modern presentation of the Peano axioms looks like
this:
!!0!! is a natural number
If !!n!! is a natural number, then so is the result of appending an
!!S!! to the beginning of !!n!!
Nothing else is a natural number
This baldly states that zero is a natural number.
I think this is a 20th-century development. Peano's
Arithmetices principia, nova methodo exposita
(1889) is the source of the Peano axioms, and in it Peano starts the
natural numbers at !!1!!, not at !!0!!:
There's axiom 1: !!1\in\Bbb N!!. No zero. I think starting at !!0!!
may be a Bourbakism.
In a modern presentation we define addition like this:
$$
\begin{array}{rrl}
(i) & a + 0 = & a \\
(ii) & a + Sb = & S(a+b)
\end{array}
$$
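To see the modern definition in action, here is a toy model in Python (my sketch; nothing like it appears in Peano), representing a number as a string of !!S!!s in front of a !!0!!:

def succ(n):
    return "S" + n                 # append an S to the beginning

def add(a, b):
    if b == "0":                   # (i)   a + 0 = a
        return a
    return succ(add(a, b[1:]))     # (ii)  a + Sb = S(a + b)

print(add("SS0", "SSS0"))          # SSSSS0, that is, 2 + 3 = 5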
Peano doesn't have zero, so he doesn't need item !!(i)!!. His definition
just has !!(ii)!!.
But wait, doesn't his inductive definition need to have a base case? Maybe something like this?
$$
\begin{array}{rrl}
(i') & a + 1 = & Sa
\end{array}
$$
Nope, Peano has nothing like that. But surely the definition must
have a base case? How can Peano get around that?
Well, by modern standards, he cheats!
Peano doesn't have a special notation like !!S!! for successor. Where
a modern presentation might write !!Sa!! for the successor of the
number !!a!!, Peano writes “!!a + 1!!”.
So his version of !!(ii)!! looks like this:
$$
a + (b + 1) = (a + b) + 1
$$
which is pretty much a symbol-for-symbol translation of !!(ii)!!. But
if we try to translate !!(i')!! similarly, it looks like this:
$$
a + 1 = a + 1
$$
That's why Peano didn't include it: to him, it was tautological.
But to modern eyes that last formula is deceptive because it
equivocates between the "!!+ 1!!" notation that is being used to
represent the successor operation (on the right) and the addition
operation that Peano is trying to define (on the left). In a modern
presentation, we are careful to distinguish between our formal symbol
for a successor, and our definition of the addition operation.
Peano, working pre-Frege and pre-Hilbert, doesn't have the same
concept of what this means. To Peano, constructing the successor of a
number, and adding a number to the constant !!1!!, are the same
operation: the successor operation is just adding !!1!!.
But to us, !!Sa!! and !!a+S0!! are different operations that happen to
yield the same value. To us, the successor operation is a purely
abstract or formal symbol manipulation (“stick an !!S!! on the
front”). The fact that it also has an arithmetic interpretation,
related to addition, appears only once we contemplate the theorem
$$\forall a. a + S0 = Sa.$$ There is nothing like this in Peano.
It's things like this that make it tricky to read older mathematics
books. There are deep philosophical differences about what is being
done and why, and they are not usually explicit.
Another example: in the 19th century, the abstract presentation of
group theory had not yet been invented. The phrase “group” was
understood to be short for “group of permutations”, and the important
property was closure, specifically closure under composition of
permutations. In a 20th century abstract presentation, the closure
property is usually passed over without comment. In a modern view, the
notation !!G_1\cup G_2!! is not even meaningful, because groups are
not sets and you cannot just mix together two sets of group elements
without also specifying how to extend the binary operation, perhaps
via a free product or something. In the 19th century, !!G_1\cup G_2!!
is perfectly ordinary, because !!G_1!! and !!G_2!! are just sets of
permutations. One can then ask whether that set is a group — that is,
whether it is closed under composition of permutations — and if not,
what is the smallest group that contains it.
It's something like a foreign language of a foreign
culture. You can try to translate the words, but the underlying ideas
may not be the same.
Addendum 20250326
Simon Tatham reminds me that Peano's equivocation has come up here
before.
I previously discussed
a Math SE post
in which OP was confused
because Bertrand Russell's presentation of the Peano axioms similarly
used the notation “!!+ 1!!” for the successor operation, and did not
understand why it was not tautological.
We want to adapt baseball to be played on the moon. Is there any way
to make it work?
My first impression is: no, for several reasons.
The pitched ball will go a little faster (no air resistance) but
breaking balls are impossible (ditto). So the batter will find it
easier to get a solid hit. We can't fix this by moving the plate
closer to the pitcher's rubber; that would expose both batter and
pitcher to unacceptable danger. I think we also can't fix it by making
the plate much wider.
Once the batter hits the ball, it will go a long long way, six times
as far as a batted ball on Earth. In order for every hit to not be a
home run, the outfield fence will have to be about six times as far
away, so the outfield will be !!36!! times as large. I don't think the
outfielders can move six times as fast to catch up to it. Perhaps if
there were 100 outfielders instead of only three?
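The factor of six is just the vacuum projectile-range formula with the Moon's surface gravity, about one-sixth of Earth's, plugged in:

$$R = \frac{v^2 \sin 2\theta}{g}, \qquad g_{\text{moon}} \approx \tfrac16\, g_{\text{earth}} \implies R_{\text{moon}} \approx 6\, R_{\text{earth}}.$$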
Fielding the ball will be more difficult. Note that even though the
vacuum prevents the pitch from breaking, the batted ball can still
take unexpected hops off the ground.
Having gotten hold of the ball, the outfielder will then need to throw
it back to the infield. They will be able to throw it that far, but
they probably won't be able to do it accurately enough for the receiving
fielder to make the play at the base. More likely the outfielder will
throw it wild.
I don't think this can be easily salvaged. People do love home runs,
but I don't think they would love this. Games are too long already.
Well, here's a thought. What if instead of four bases, arranged in a
!!90!!-foot square, we had, I don't know, eight or ten, maybe !!200!!
or !!300!! feet apart? More opportunities for outs on the basepaths,
and also the middle bases would not be so far from the outfield.
Instead of throwing directly to the infield, the outfielders would
have a relay system where one outfielder would throw to another that
was farther in, and perhaps one more, before reaching the infield.
That might be pretty cool.
I think it's not easy to run fast on the Moon. On the Earth, a
runner's feet are pushing against the ground many times each second.
On the Moon, the runner is taking big leaps. They may only get in
one-sixth as many steps over the same distance, which would give them
much less opportunity to convert muscle energy into velocity.
(Somewhat countervailing, though: no air resistance.) Runners would
have to train specially to be able to leap accurately to the bases.
Under standard rules, a runner who overshoots the base will land off
the basepaths and be automatically out.
So we might expect to see the runner bounding toward first base. Then
one of the thirty or so far-left fielders would get the ball, relay it
to the middle-left fielder and then the near-left fielder who would
make the throw back to first. The throw would be inaccurate because
it has to traverse a very large infield, and the first baseman would
have to go chasing after it and pick it up from foul territory. He
can't get back to first base quickly enough, but that's okay, the
pitcher has bounded over from the mound and is waiting near first base
to make the force play. Maybe the runner isn't there yet because one
of his leaps was too long and to take another he has to jump high
into the air and come down again.
Last summer I was privileged to visit the glorious Letterpress Museum
in Paju Book City, where I spent several hours
and took
a collection of photos
that are probably not of interest to anyone but letterpress geeks, and
perhaps not even to them.
Looking back at the photos it's not always clear to me why I took each
one. But some of them I can remember. For example, this one:
This is not exactly letterpress. It is a device for engraving
lettered signs on thin strips of metal or perhaps plastic. Happily I
don't have to spend too much time explaining this because Marcin
Wichary has just published
an extensively-illustrated article about the Latin-script version. The
only thing different about this one is the fonts, which are for
writing Korean in Hangeul script rather than English in Latin script.
(Here's my real-quick summary.
There is no ink. A stylus goes into the grooves of those brass
templates. The stylus is attached with a
pantograph to a
router bit that rests on the object
that the operator wants to engrave. When the operator moves the stylus in the template grooves, the
router bit follows their motions and engraves matching grooves in the
target object. By adjusting the pantograph, one can engrave letters that are
larger or smaller than the templates.)
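Mechanically, a pantograph is a linear scaler: the router bit's
position is the stylus's position scaled by a fixed ratio about a
pivot point. A toy model, my sketch rather than a description of this
particular machine:

    # Toy pantograph model: the bit traces the stylus path, scaled
    # by a fixed ratio about the pivot point.
    def pantograph(stylus_xy, pivot_xy=(0.0, 0.0), ratio=0.5):
        px, py = pivot_xy
        sx, sy = stylus_xy
        return (px + ratio * (sx - px), py + ratio * (sy - py))

    # Tracing a 40 mm-wide template groove at ratio 0.5 engraves
    # a 20 mm-wide letter on the target object.
    print(pantograph((40.0, 0.0)))   # (20.0, 0.0)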
Hangeul has an alphabet of 24 letters, but there's a difficulty in
adapting this engraving technique for written Hangeul: The letters
aren't written in a simple horizontal row as European languages are.
Instead, they are grouped into syllables of two or three letters. For
example, consider the Korean word “문어”, pronounced
(roughly) "moon-aw", which means “octopus”. This is made up of five
letters ㅁㅜㄴㅇㅓ, but as you see they are arranged in two syllables
문 ("moon") and 어 ("aw"). So instead of twenty-four kinds of
templates, one for each letter, the Korean set needs one for every
possible syllable, and there are thousands of possible syllables.
Unicode gets around this by… sorry, Unicode doesn't get around it,
they just allocate eleven thousand codepoints, one for each possible
syllable.
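The allocation is pure arithmetic. Here is a short Python sketch of
the standard Unicode composition formula (my illustration; it has
nothing to do with the engraving machine). It packs a lead-consonant
index, a vowel index, and a tail-consonant index into a single
codepoint starting at U+AC00:

    # Standard Unicode Hangeul composition: 19 lead consonants,
    # 21 vowels, and 28 tails (including "no tail") give
    # 19 * 21 * 28 = 11,172 precomposed syllables.
    LEADS  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
    VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
    TAILS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

    def compose(lead, vowel, tail=""):
        """Return the precomposed syllable for the given letters."""
        l, v, t = LEADS.index(lead), VOWELS.index(vowel), TAILS.index(tail)
        return chr(0xAC00 + (l * 21 + v) * 28 + t)

    print(compose("ㅁ", "ㅜ", "ㄴ"), compose("ㅇ", "ㅓ"))  # 문 어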
But for this engraving device, it would be prohibitively expensive
to make eleven thousand little templates, then another eleven
thousand spares, and impractical to sort and manage them in the shop.
Instead there is a clever solution.
Take a look at just one of these templates:
This is not a Hangeul syllable.
Rather, it is five. The upper-right letter in the syllable is the
vowel, and the template allows the operator to engrave any of the five
vowels
ㅣㅓㅏㅕㅑ
to produce the syllables
잉 엉 앙 영 양
pronounced respectively "ing", "ông", "ang", "yông", and "yang".
Similarly this one can produce six different syllables:
The upper-left part can be used to engrave either of the consonants
ㅅ or ㅈ and the upper-right part can be used to
engrave any of the vowels ㅣㅓㅏ, to produce the combined
set 싱 성 상 징 정 장. I'm not sure why this template
doesn't also enable vowels ㅕㅑ as the other one did.
In the picture at top you can see that while the third template can
be used to engrave any of the three syllables 송 승 숭, the operator
has actually used it to engrave the first of these.
This ingenious mechanism cuts down the required number of templates by
perhaps a factor of five, from ten boxes to two.
Addendum 20250325
A great many of the 11,000 Unicode codepoints are for seldom-used
syllables that contain four or even five letters, such as 둻. I
studied Korean for a while and I think I learned only one with
more than three letters in a syllable: 닭 means “chicken”.
I don't see templates for these syllables in any of my photographs,
which probably accounts for much of the great reduction in templates
from the 11,000 possible syllables. But there must have been some
way to engrave the syllables with the machine.
Maybe there was a template that had four small ㄷ symbols, one
in each of the four corners of the template, and another with four ㄹ
symbols, and so on? Then the operator could have composed 닭 out of
bits from four different templates.
My relationship with Claude has its ups and downs, and I'm still
trying to figure out what to use it for and what not. It's great at
rewriting my job application cover letters to sound less like an
awkward nerd.
Last week I was in the supermarket and decided to try asking it
something I've been wondering for a long time:
MJD: What is the song "Mercy Street” about?
What is the connection with Anne Sexton? Who is "Mary"?
I thought Claude might do well with this. I had had a conversation with
it a while back about Pixies songs, which I was satisfied with. But
this time Claude let me down:
Claude: … The "Mary" referenced in the song is likely Mary Gray Staples, who was the name of Sexton's childhood doll. …
(I thought: What? Am I supposed to believe that
Dreaming of the tenderness
The tremble in the hips
Of kissing Mary's lips
is about a doll?)
MJD: Mary Gray Staples is an unusual name for a doll. Are you sure about that?
Claude: You're right to question this - I made an error. Mary
Gray was actually Anne Sexton's great-aunt, not the name of her
doll. … The doll references in Gabriel's song likely connect to
Sexton's imagery in other poems, but Mary specifically refers to her
great-aunt.
MJD: I don't remember a doll in Gabriel's song.
Claude: You're absolutely right - I apologize for the
error. There isn't actually a reference to a doll in Peter Gabriel's
"Mercy Street" lyrics. … The "Mary" referenced in the song is indeed
Anne Sexton's great-aunt Mary Gray…
MJD: Are you sure? Wikipedia says that Mary Gray Staples was the birth name of Sexton's mother.
Claude: You're absolutely right, and I apologize for the confusion. …
Claude just flubbed over and over.
I wonder if the grammatical error in “Mary Gray Staples, who was the
name of …” is a kind of a tell? Perhaps Claude is fabricating, by
stitching together parts of two unrelated sentences that it read
somewhere, one with “Mary Gray Staples, who was…” and the other “… was
the name of…”? Probably it's not that simple, but the grammatical
error is striking.
Anyway, this was very annoying because I tend to remember things like
this long past the time when I remember where I heard them. Ten years
from now I might remember that Anne Sexton once had a doll with a very
weird name.
Merle Miller
A while back I read Merle Miller's book
Plain Speaking. It's
an edited digest of a series of interviews Miller did with former
President Truman in 1962, at his home in Independence, Missouri. The
interviews were originally intended to be for a TV series, but when
that fell through Miller turned them into a book. In many ways it's a
really good book. I enjoyed it a lot, read it at least twice, and a
good deal of it stuck in my head.
But I can't recommend it, because it has a terrible flaw.
There have been credible accusations
that Miller changed some of the things that Truman said, embellished
or rephrased many others, tarted up Truman's language, and
made up some conversations entirely.
So now whenever I remember something that I think Truman said, I have
to stop and try to remember if it was from Miller. Did Truman really
say that it was the worst thing in the world when records were
destroyed? I'm sure I read it in Miller, so, uhh… maybe?
Miller recounts a discussion in
which Truman says he is pretty sure that President Grant had never
read the Constitution. Later, Miller says, he asked Truman if he
thought that Nixon had read the Constitution, and reports that
Truman's reply was:
I don't know. I don't know. But I'll tell you this. If he has, he
doesn't understand it.
Great story! I have often wanted to repeat it. But I don't, because
for all I know it never happened.
(I've often thought of this, in years past, and whatever Nixon's
faults you could at least wonder what the answer was. Nobody would
need to ask this about the current guy, because the answer is so
clear.)
Miller quotes Truman's remarks about
Supreme Court Justice Tom Clark, “It isn't so
much that he's a bad man. It's just that he's such a dumb son of a
bitch.” Did Truman actually say that? Did he just imply it? Did he
say anything like it? Uhhh… maybe?
There's a fun anecdote about the White House butler learning to make an
Old-fashioned cocktail in the way
the Trumans preferred. (The usual recipe involves whiskey, sugar,
fresh fruit, and bitters.) After several attempts the butler converged
on the Trumans' preferred recipe: mostly straight bourbon. Hmm, is
that something I heard from Merle Miller? I don't remember.
There's a famous story about how Paul Hume, music critic for the
Washington Post, savaged a performance of Truman's daughter
Margaret, and how Truman sent him an infamous letter, very
un-presidential, that supposedly contained the paragraph:
Some day I hope to meet you. When that happens you'll need a new
nose, a lot of beef steak for black eyes, and perhaps a supporter
below!
Miller reports that he asked Truman about this, and Truman's blunt
response: “I said I'd kick his nuts out.” Or so claims Miller,
anyway.
I've read Truman's memoirs. Volume I, about the immediate postwar
years, is fascinating; Volume II is much less so. They contain many
detailed accounts of the intransigence of the Soviets and their
foreign minister Vyacheslav Molotov, namesake of the Molotov
Cocktail. Probably 95% of what I remember Truman saying is from those
memoirs, direct from Truman himself. But some of it must be from
Plain Speaking. And I don't know any longer which 5% it is.
As they say, an ice cream sundae with a turd in it isn't 95% ice
cream, it's 100% shit. Merle Miller shit in the ice cream sundae of
my years of reading of Truman and the Truman administrations.
Now Claude has done the same. And if I let it, Claude will keep doing
it to me. Claude caga en la leche: Claude shits in the milk.
So there may yet be a happy ending, thanks to the Wonders of the
Internet! I dream of someday going through those interviews and
producing an annotated edition of Plain Speaking.
Around here, these metal things are commonly found on streetside
utility poles, attached maybe a meter off the ground.
Metal reflector
Plastic reflector
When I first noticed one of these I said “I wonder what the holes are
for. Maybe to make it more visible? And what do they do with all the
leftover rectangles after they've made one?”
I eventually got a better idea: The little metal rectangles are the
primary product, and after they have been die-cut out of the metal
sheet, there is this waste material left over with all the holes.
Instead of throwing it away someone nails it to a utility pole to make
the pole easier to see at night. I felt a bit silly that my first
idea had been exactly backwards.
I later learned that only the older ones are made of sheet metal.
Newer ones are made of some sort of plastic, maybe polyethylene or
vinyl or something, about the same thickness. They look pretty much
the same. I can only tell them apart by feeling them.
Still I wondered what the little rectangles had been used for. It
turns out that the purpose is this:
The answer came as a bit of a surprise to Jay Lipschutz, 73, of
Northeast Philly …
His wife, Ruth, he said, had insisted they’re reflectors for drivers to see. She was right.
Jay, my friend, your wife is smarter than you are. Listen to her.
The article also tells us that the rectangular leftover is called a
“grid reflector”.
With a little more research I learned that one manufacturer of grid
reflectors is Almetek.
They cost $3.50 each. Pricey, for something they would have had to
throw away. (Here's
the old South Philly Review article
that put me on to Almetek.)
What kicked off this article was that I was walking around and
I saw this similar reflector grid, which felt to me like it was a bit
of a farce, like a teenager sneaking into a bar wearing a fake mustache:
Hey, those aren't holes! When I saw this one I wondered for a moment
if I was suffering some sort of mental collapse, or if none of the
others had had real holes either. But no, they had, and this one really did
have fake holes.
(Also, it has been installed sideways. Normally they are oriented as
the two above.)