The Universe of Discourse


Thu, 30 Jan 2014

Twingler, a generic data structure munger

(Like everything else in this section, these are notes for a project that was never completed.)

Introduction

These incomplete notes from 1997-2001 are grappling with the problem of transforming data structures in a language like Perl, Python, Java, Javascript, or even Haskell. A typical problem is to take an input of this type:

  [
    [Chi, Ill],
    [NY, NY],
    [Alb, NY],
    [Spr, Ill],
    [Tr, NJ],
    [Ev, Ill],
  ]

and to transform it to an output of this type:

  { Ill => [Chi, Ev, Spr],
    NY  => [Alb, NY],
    NJ  => [Tr],
  }

One frequently writes code of this sort, and it should be possible to specify the transformation with some sort of high-level declarative syntax that is easier to read and write than the following gibberish:

   my $out;
   for my $pair (@$in) {
       push @{$out->{$pair->[0]}}, $pair->[1];
   }
   for my $k (keys %$out) {
       @{$out->{$k}} = sort @{$out->{$k}};
   }

This is especially horrible in Perl, but it is bad in any language. Here it is in a hypothetical language with a much less crusty syntax:

 for pair (in.items) :
   out[pair[0]].append(pair[1])
 for list (out.values) :
   list.sort

You still can't see what is really going on without executing the code in your head. It is hard for a beginner to write, and hard for anyone to understand.

Original undated notes from around 1997–1998

Consider this data structure DS1:

  [
    [Chi, Ill],
    [NY, NY],
    [Alb, NY],
    [Spr, Ill],                DS1
    [Tr, NJ],
    [Ev, Ill],
  ]

This could be transformed several ways:

  {
    Chi => Ill,
    NY => NY, 
    Alb => NY,
    Spr => Ill,                DS2
    Tr => NJ,
    Ev => Ill,
  }

  { Ill => [Chi, Spr, Ev],
    NY  => [NY, Alb],        DS3
    NJ  => Tr,
  }
  { Ill => 3,
    NY  => 2,
    NJ  => 1,
  }

  [ Chi, Ill, NY, NY, Alb, NY, Spr, Ill, Tr, NJ, Ev, Ill]    DS4

Basic idea: Transform original structure of nesting depth N into an N-dimensional table. If Nth nest is a hash, index table ranks by hash keys; if an array, index by numbers. So for example, DS1 becomes

        1        2
    1   Chi Ill
    2   NY  NY
    3   Alb NY
    4   Spr Ill
    5   Tr  NJ
    6   Ev  Ill

Or maybe hashes should be handled a little differently? The original basic idea was more about DS2 and transformed it into

            Ill        NY        NJ
    Chi     X
    NY              X
    Alb             X
    Spr     X
    Tr                      X
    Ev      X

Maybe the rule is: For hashes, use a boolean table indexed by keys and values; for arrays, use a string table indexed by integers.
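
A rough Perl sketch of this tabling idea (just an illustration, using the boolean-table rule for hashes):

    use strict;
    use warnings;

    my @ds1 = ( [qw(Chi Ill)], [qw(NY NY)],  [qw(Alb NY)],
                [qw(Spr Ill)], [qw(Tr NJ)],  [qw(Ev Ill)] );

    # DS1 is an array of arrays, so it becomes a table of strings
    # indexed by integers: $table1{row}{column}.
    my %table1;
    for my $i (0 .. $#ds1) {
        for my $j (0 .. $#{ $ds1[$i] }) {
            $table1{$i}{$j} = $ds1[$i][$j];
        }
    }

    my %ds2 = ( Chi => 'Ill', NY => 'NY',  Alb => 'NY',
                Spr => 'Ill', Tr  => 'NJ', Ev  => 'Ill' );

    # DS2 is a hash, so it becomes a boolean table indexed by key and
    # value; the 1s are the X marks in the grid above.
    my %table2;
    while (my ($k, $v) = each %ds2) {
        $table2{$k}{$v} = 1;
    }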

Notation idea: Assign names to the dimensions of the table, say X and Y. Then denote transformations by:

    [([X, Y])]        (DS1)
    {(X => Y)}        (DS2)
    {X => [Y]}        (DS3)
    [(X, Y)]        (DS4)

The (...) are supposed to indicate a chaining of elements within the larger structure. But maybe this isn't right.

At the bottom: How do we say whether

    X=>Y, X=>Z

turns into

    [ X => Y, X => Z ]                (chaining)

or

    [ X => [Y, Z] ]                (accumulation)

Consider

      A B C
    D . . .
    E . .
    F .   .

<...> means ITERATE over the thing inside and make a list of the results. It's basically `map'.

Note that:

    <X,Y>     |=   (D,A,D,B,D,C,E,A,E,B,F,A,F,C)
    <X,[<Y>]> |=   (D,[A,B,C],E,[A,B],F,[A,C])

Brackets and braces just mean brackets and braces. Variables at the same level of nesting imply a loop over the cartesian join. Variables subnested imply a nested loop. So:

    <X,Y> means

    for x in X
      for y in Y
        push @result, (x,y) if present(x,y);

But

    <X,<Y>> means
    for x in X
      for y in Y
        push @yresult, (y) if present(x,y);
      push @result, @yresult 

Hmmm. Maybe there's a better syntax for this.
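
In the meantime, here is roughly what the two patterns above do, written out in plain Perl over the example table (a sketch only; `present' is the boolean table):

    use strict;
    use warnings;

    # The boolean table from the example: $present{x}{y} is true when
    # there is a mark at row x, column y.
    my %present = (
        D => { A => 1, B => 1, C => 1 },
        E => { A => 1, B => 1 },
        F => { A => 1, C => 1 },
    );
    my @X = qw(D E F);
    my @Y = qw(A B C);

    # <X,Y>: one flat list of x,y pairs
    my @flat;
    for my $x (@X) {
        for my $y (@Y) {
            push @flat, $x, $y if $present{$x}{$y};
        }
    }
    # @flat is (D,A,D,B,D,C,E,A,E,B,F,A,F,C)

    # <X,[<Y>]>: each x followed by an array of its y's
    my @nested;
    for my $x (@X) {
        my @ys = grep { $present{$x}{$_} } @Y;
        push @nested, $x, [ @ys ];
    }
    # @nested is (D,[A,B,C],E,[A,B],F,[A,C])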

Well, with this plan:

    DS1: [ <[X,Y]> ]
    DS2: { <X=>Y>  }
    DS3: { <X => [<Y>]> }
    DS4: [ <X, Y> ]

It seems pretty flexible. You could just as easily write

    { <X => max(<Y>)> }

and you'd get

    { D => C, E => B, F => C }

If there's a `count' function, you can get

    { D => 3, E => 2, F => 2 }

or maybe we'll just overload `scalar' to mean `count'.

Question: How to invert this process? That's important so that you can ask it to convert one data structure to another. Also, then you could write something like

    [ <city, state> ]  |= { <state => [<city>] > }

and omit the X's and Y's.

Real example: From proddir. Given

    ID / NAME / SHADE / PALETTE / DESC

For example:

    A / AAA / red     / pink   / Aaa
    B / BBB / yellow  / tawny  / Bbb
    A / AAA / green   / nude   / Aaa
    B / BBB / blue    / violet / Bbb
    C / CCC / black   / nude   / Ccc

Turn this into

    { A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
      B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
      C => [ CCC, [ [black, nude] ], CCC]
    }


    { < ID => [
                 name,
                 [ <[shade, palette]> ]
                 desc
              ]>
    }
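
Written out by hand, that transformation is roughly this (a sketch; it assumes each row is an [ID, NAME, SHADE, PALETTE, DESC] quintuple and that rows with the same ID share NAME and DESC):

    use strict;
    use warnings;

    my @rows = (
        [qw(A AAA red    pink   Aaa)],
        [qw(B BBB yellow tawny  Bbb)],
        [qw(A AAA green  nude   Aaa)],
        [qw(B BBB blue   violet Bbb)],
        [qw(C CCC black  nude   Ccc)],
    );

    my %out;
    for my $row (@rows) {
        my ($id, $name, $shade, $palette, $desc) = @$row;
        $out{$id} ||= [ $name, [], $desc ];        # first time we see this ID
        push @{ $out{$id}[1] }, [ $shade, $palette ];
    }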

Something interesting happened here. Suppose we have

    [ [A, B]
      [A, B]
    ]

And we ask for <A, B>. Do we get (A, B, A, B), or just (A, B)? Does it remove duplicate items for us or not? We might want either.

In the example above, why didn't we get

    { A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
      A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
      B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
      B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
      C => [ CCC, [ [black, nude] ], CCC]
    }

if the outer iteration was supposed to be over all id-name-desc triples? Maybe we need

    <...>       all triples
    <!...!>     unique triples only

Then you could say

    <X>  |= <!X!>

to indicate that you want to uniq a list.

But maybe the old notation already allowed this:

    <X> |= keys %{< X => 1 >}

It's still unclear how to write the example above, which has unique key-triples. But it's in a hash, so it gets uniqed on ID anyway; maybe that's all we need.

1999-10-23

Rather than defining some bizarre metalanguage to describe the transformation, it might be easier all around if the user just enters a sample input, a sample desired output, and lets the twingler figure out what to do. Certainly the parser and internal representation will be simpler.

For example:

    [ [ A, B ],
      [ C, B ],
      [ D, E ] ]
    --------------
    { B => [A, C],
      E => [D],
    }

should be enough for it to figure out that the code is:

    for my $a1 (@$input) {
      my ($e1, $e2) = @$a1;
      push @{$output{$e2}}, $e1;
    }

Advantage: After generating the code, it can run it on the sample input to make sure that the output is correct; otherwise it has a bug.
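
A sketch of that self-check (the deep comparison via Data::Dumper is just a cheap trick, not a serious proposal):

    use strict;
    use warnings;
    use Data::Dumper;

    sub same_structure {
        my ($x, $y) = @_;
        local $Data::Dumper::Sortkeys = 1;
        local $Data::Dumper::Indent   = 0;
        return Dumper($x) eq Dumper($y);
    }

    my $input  = [ [qw(A B)], [qw(C B)], [qw(D E)] ];
    my $wanted = { B => [qw(A C)], E => ['D'] };

    # the generated code from above
    my %output;
    for my $a1 (@$input) {
        my ($e1, $e2) = @$a1;
        push @{ $output{$e2} }, $e1;
    }

    print same_structure(\%output, $wanted) ? "code looks right\n"
                                            : "generated code has a bug\n";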

Input grammar:

    %token ELEMENT
    expr: array | hash ;
    array: '[' csl ']' ;
    csl: ELEMENT | ELEMENT ',' csl | /* empty */ ;
    hash: '{' cspl '}' ;
    cspl: pair | pair ',' cspl | /* empty */ ;
    pair: ELEMENT '=>' ELEMENT;

Simple enough. Note that (...) lists are not allowed. They are only useful at the top level. A later version can allow them. It can replace the outer (...) with [...] or {...} as appropriate when it sees the first top-level separator. (If there is a => at the top level, it is a hash, otherwise an array.)
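
For concreteness, a toy recursive-descent parser for roughly this grammar (a sketch; it lets an ELEMENT be a nested expr too, since the sample inputs need that):

    use strict;
    use warnings;

    my @tok;
    sub peek { $tok[0] // '' }
    sub take { shift @tok }

    sub expr { peek() eq '[' ? parse_array() : parse_hash() }

    sub parse_array {
        take();                                  # '['
        my @items;
        until (peek() eq ']') {
            push @items, peek() =~ /\A[\[{]\z/ ? expr() : take();
            take() if peek() eq ',';
        }
        take();                                  # ']'
        return \@items;
    }

    sub parse_hash {
        take();                                  # '{'
        my %pairs;
        until (peek() eq '}') {
            my $key = take();
            take();                              # '=>'
            $pairs{$key} = peek() =~ /\A[\[{]\z/ ? expr() : take();
            take() if peek() eq ',';
        }
        take();                                  # '}'
        return \%pairs;
    }

    sub parse {
        my ($text) = @_;
        @tok = $text =~ /(=>|[\[\]{},]|\w+)/g;
        return expr();
    }

    my $sample = parse('[ [A, B], [C, B], [D, E] ]');
    # $sample is [ ['A','B'], ['C','B'], ['D','E'] ]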

Idea for code generation: Generate pseudocode first. Then translate to Perl. Then you can insert a peephole optimizer later. For example

    foreachkey k (somehash) {
      push somearray, $somehash{k}
    }

could be optimized to

    somearray = values somehash;

add into hash: as key, add into value, replace value
add into array: at end only

How do we analyze something like:

    [ [ A, B ],
      [ C, B ],
      [ D, E ] ]
    --------------
    { B => [A, C],
      E => [D],
    }

Idea: Analyze structure of input. Analyze structure of output and figure out an expression to deposit each kind of output item. Iterate over input items. Collect all input items into variables. Deposit items into output in appropriate places.

For an input array, tag the items with index numbers. See where the indices go in the output. Try to discern a pattern. The above example:

    Try #1:
    A: 1
    B: 2
    C: 1
    B: 2  -- consistent with B above
    D: 1
    E: 2

    Output:  2 => [1, 1]
             2 => [1]

OK—2s are keys, 1s are array elements.

A different try fails:

    A: 1
    B: 1
    C: 2
    B: 2 -- inconsistent, give up on this.

Now consider:

    [ [ A, B ],
      [ C, B ],
      [ D, E ] ]
    --------------
    { A => B,
      C => B,
      D => E,
    }

A,C,D get 1; B,E get 2. This works again. 1s are keys, 2s are values.

I need a way of describing an element of a nested data structure as a simple descriptor so that I can figure out the mappings between descriptors. For arrays and nested arrays, it's pretty easy: Use the sequence of numeric indices. What about hashes? Just K/V? Or does V need to be qualified with the key perhaps?

Example above:

    IN:     A:11  B:12 22  C:21  D:31  E:32
    OUT:    A:K   B:V      C:K   D:K   E:V

Now try to find a mapping from the top set of labels to the bottom. x1 => K, x2 => V works.
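
Here is a sketch of just the labelling step, using 0-based index paths (as in the bigger example below) rather than the 1-based ones above:

    use strict;
    use warnings;

    # Tag every scalar in an input array-of-arrays with its index path.
    sub label_input {
        my ($node, $path, $labels) = @_;
        $path   //= '';
        $labels //= {};
        if (ref $node eq 'ARRAY') {
            label_input($node->[$_], $path . $_, $labels) for 0 .. $#$node;
        } else {
            push @{ $labels->{$node} }, $path;
        }
        return $labels;
    }

    # Tag every scalar in an output hash with a K/V descriptor.
    sub label_output {
        my ($hash) = @_;
        my %labels;
        for my $k (keys %$hash) {
            push @{ $labels{$k} }, 'K';
            my $v = $hash->{$k};
            if (ref $v eq 'ARRAY') {
                push @{ $labels{ $v->[$_] } }, "V$_" for 0 .. $#$v;
            } else {
                push @{ $labels{$v} }, 'V';
            }
        }
        return \%labels;
    }

    my $in  = label_input([ [qw(A B)], [qw(C B)], [qw(D E)] ]);
    my $out = label_output({ B => [qw(A C)], E => ['D'] });
    # $in  is { A=>['00'], B=>['01','11'], C=>['10'], D=>['20'], E=>['21'] }
    # $out is { B=>['K'],  A=>['V0'],      C=>['V1'], E=>['K'],  D=>['V0'] }
    # The inference step would then look for a consistent map from the
    # last digit of each input path to the output descriptor.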

Problem with this:

    [ [ A, B ],
      [ B, C ],
    ]
    ------------
    { A => B,
      B => C,
    }

is unresolvable. Still, maybe this works well enough in most common cases.

Let's consider:

    [[ A , AAA , red     , pink   , Aaa], 
     [ B , BBB , yellow  , tawny  , Bbb],
     [ A , AAA , green   , nude   , Aaa],
     [ B , BBB , blue    , violet , Bbb],
     [ C , CCC , black   , nude   , Ccc],
    ]
    -------------------------------------------------------------
    { A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
      B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
      C => [ CCC, [ [black, nude] ], CCC]
    }

    A:      00,20 => K
    AAA:    01,21 => V0
    red:    02    => V100
    pink:   03    => V101
    Aaa:    04    => V2
    B:      10,30 => K
    C:      40    => K

etc.

Conclusion: x0 => K; x1 => V0; x2 => V100; x3 => V101; x4 => V2

How to reverse?

Simpler reverse example:

    { A => [ B, C ],
      E => [ D ],
    }
    ---------------------
    [ [ A, B ],
      [ A, C ],
      [ E, D ],
    ]

    A: K    => 00, 10
    B: V0   => 01
    C: V1   => 11
    D: V0   => 21
    E: K    => 20

Conclusion: K => x0; V => x1. This isn't enough information. Really, V => k1, where k is whatever the key was!

What if V items have the associated key too?

    A: K     => 00, 10
    B: V{A}0 => 01
    C: V{A}1 => 11
    D: V{E}0 => 21
    E: K     => 20

Now there's enough information to realize that B and C stay with the A, if we're smart enough to figure out how to use it.

2001-07-28

Sent to Nyk Cowham

2001-08-24

Sent to Timur Shtatland

2001-10-28

Here's a great example. The output from HTML::LinkExtor is a list like

    ([img, src, URL1, losrc, URL2],
     [a, href, URL3],
     ...
    )

we want to transform this into

    (URL1 => undef, 
     URL2 => undef, 
     URL3 => undef, 
     ...
    )
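
The conventional hand-written version is something like this (a sketch; HTML::LinkExtor hands back [tag, attr => url, ...] tuples, and all we keep is the URLs):

    use strict;
    use warnings;

    my @links = (
        [ img => src  => 'URL1', losrc => 'URL2' ],
        [ a   => href => 'URL3' ],
    );

    my %is_url;
    for my $link (@links) {
        my ($tag, %attr) = @$link;
        $is_url{$_} = undef for values %attr;
    }
    # %is_url is (URL1 => undef, URL2 => undef, URL3 => undef)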


[Other articles in category /notes] permanent link

Sun, 19 Jan 2014

Notes on a system for abbreviating SQL queries

(This post inaugurates a new section on my blog, for incomplete notes. It often happens that I have some idea, usually for software, and I write up a bunch of miscellaneous notes about it, and then never work on it. I'll use this section to post some of those notes, mainly just because I think they might be interesting, but also in the faint hope that someone might get interested and do something with it.)

Why are simple SQL queries so verbose?

For example:

    UPDATE batches b
      join products p using (product_id)
      join clients c using (client_id)
    SET b.scheduled_date = NOW()
    WHERE b.scheduled_date > NOW()
      and b.batch_processor = 'batchco'
      and c.login_name = 'mjd' ;

(This is 208 characters.)

I guess about two-thirds of this is unavoidable, but those join-using clauses ought to be omittable, or inferrable, or abbreviatable, or something.

b.batch_processor should be abbreviated to at least batch_processor, since that's the only field in those three tables with that name. In fact it could probably be inferred from b_p. Similarly c.login_name -> login_name -> log or l_n.

   update batches set sch_d = NOW()
   where sch_d > NOW()
   and bp = 'batchco'
   and cl.ln = 'mjd'

(Only 94 characters.)

cl.ln is inferrable: Only two tables begin with cl. None of the field names in the client_transaction_view table look like ln. So cl.ln unambiguously means client.login_name.

Then the question arises of how to join the batches to the clients. This is the only really interesting part of this project, and the basic rule is that it shouldn't do anything really clever. There is a graph, which the program can figure out from looking at the foreign key constraints. And the graph should clearly have a short path from batches through products to clients.
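
A sketch of the join-path search (the table names are from the example; the graph would really come from the foreign-key constraints):

    use strict;
    use warnings;

    my %fk_graph = (
        batches  => ['products'],
        products => ['batches', 'clients'],
        clients  => ['products'],
    );

    # Breadth-first search, so the shortest join path wins.
    sub join_path {
        my ($graph, $from, $to) = @_;
        my %prev;
        my %seen  = ($from => 1);
        my @queue = ($from);
        while (@queue) {
            my $table = shift @queue;
            if ($table eq $to) {
                my @path = ($table);
                unshift @path, $table = $prev{$table} while exists $prev{$table};
                return @path;
            }
            for my $next (@{ $graph->{$table} || [] }) {
                next if $seen{$next}++;
                $prev{$next} = $table;
                push @queue, $next;
            }
        }
        return;                                  # no path at all
    }

    my @path = join_path(\%fk_graph, 'batches', 'clients');
    # @path is (batches, products, clients)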

bp might be globally ambiguous, but it can be disambiguated by assuming it's in one of the three tables involved.

If something is truly ambiguous, we can issue an intelligent request for clarification:

"bp" is ambiguous. Did you mean:
  1. batches.batch_processor
  2. batches.bun_predictor
  0. None of the above
which? _

Overview

  1. Debreviate table names
  2. Figure out required joins and join fields
  3. Debreviate field names

Can 1 and 2 really be separated? They can in the example above, but maybe not in general.

I think separating 3 and putting it at the end is a good idea: don't try to use field name abbreviations to disambiguate and debreviate table names. Only go the other way. But this means that we can't debreviate cl, since it might be client_transaction_view.

What if something like cl were left as ambiguous after stage 1, and disambiguated only in stage 3? Then information would be unavailable to the join resolution, which is the part that I really want to work.

About abbreviations

Abbreviations for batch_processor:

          bp
          b_p
          ba_pr
          batch_p

There is a tradeoff here: the more different kinds of abbreviations you accept, the more likely there are to be ambiguities.
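
One possible matching rule, just to make the tradeoff concrete (this is a guess, not a spec): underscore-separated pieces of the abbreviation must be prefixes of the corresponding words of the column name; with no underscores, accept either a plain prefix or one letter per word.

    use strict;
    use warnings;

    sub abbrev_matches {
        my ($abbrev, $column) = @_;
        my @words = split /_/, $column;

        if ($abbrev =~ /_/) {
            my @pieces = split /_/, $abbrev;
            return 0 unless @pieces == @words;
            return 0 if grep { index($words[$_], $pieces[$_]) != 0 } 0 .. $#pieces;
            return 1;                            # "b_p", "ba_pr", "batch_p"
        }
        return 1 if index($column, $abbrev) == 0;    # plain prefix: "log"
        my @letters = split //, $abbrev;
        return 0 unless @letters == @words;
        return 0 if grep { index($words[$_], $letters[$_]) != 0 } 0 .. $#letters;
        return 1;                                # initials: "bp"
    }

    # abbrev_matches('bp',      'batch_processor')   # 1
    # abbrev_matches('batch_p', 'batch_processor')   # 1
    # abbrev_matches('log',     'login_name')        # 1
    # abbrev_matches('bp',      'bun_predictor')     # 1 -- hence the ambiguity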

About table inference

There could also be a preferences file that lists precedences for tables and fields: if it lists clients, then anything that could debreviate to clients or to client_transaction_view automatically debreviates to clients. The first iteration could just be a list of table names.

About join inference

Short join paths are preferred to long join paths.

If it takes a long time to generate the join graph, cache it. Build it automatically on the first run, and then rebuild it on request later on.

More examples

(this section blank)

Implementation notes

Maybe convert the input to a SQL::Abstract first, then walk the resulting structure, first debreviating names, then inserting joins, then debreviating the rest of the names. Then you can output the text version of the result if you want to.

Note that this requires that the input be valid SQL. Your original idea for the abbreviated SQL began with

   update set batches.sch_d = NOW()

rather than

   update batches set sch_d = NOW()

but the original version would probably be ruled out by this implementation. In this case that is not a big deal, but this choice of implementation might rule out more desirable abbreviations in the future.

Correcting dumb mistakes in the SQL language design might be in Quirky's purview. For example, suppose you do

     select * from table where (something)

Application notes

RJBS said he would be reluctant to use the abbreviated version of a query in a program. I agree: it would be very foolish to do so, because adding a table or a field might change the meaning of an abbreviated SQL query that was written into a program ten years ago and has worked ever since. This project was never intended to abbreviate queries in program source code.

Quirky is mainly intended for one-off queries. I picture it going into an improved replacement for the MySQL command-line client. It might also find use in throwaway programs. I also picture a command-line utility that reads your abbreviated query and prints the debreviated version for inserting into your program.

Miscellaneous notes

(In the original document this section was blank. I have added here some notes I made in pen on a printout of the foregoing, on an unknown date.)

Maybe also abbreviate update => u, where => w, and => &. This cuts the abbreviated query from 94 to 75 characters.

Since debreviation is easier [than join inference] do it first!

Another idea: "id" always means the main table's primary key field.


[Other articles in category /notes] permanent link

Fri, 10 Jan 2014

DateTime::Moonpig, a saner interface to DateTime

(This article was previously published at the Perl Advent Calendar on 2013-12-23.)

The DateTime suite is an impressive tour de force, but I hate its interface. The methods it provides are usually not the ones you want, and the things it makes easy are often things that are not useful.

Mutators

The most obvious example is that it has too many mutators. I believe that date-time values are a kind of number, and should be treated like numbers. In particular they should be immutable. Rik Signes has a hair-raising story about an accidental mutation that caused a hard to diagnose bug, because the add_duration method modifies the object on which it is called, instead of returning a new object.

DateTime::Duration

But the most severe example, the one that drives me into a rage, is that the subtract_datetime method returns a DateTime::Duration object, and this object is never what you want, because it is impossible to use it usefully.

For example, suppose you would like to know how much time elapses between 1969-04-02 02:38:17 EST and 2013-12-25 21:00:00 EST. You can set up the two DateTime objects for the time, and subtract them using the overloaded minus operator:

    #!perl
    my ($a) = DateTime->new( year => 1969, month => 04, day => 02,
                             hour => 2, minute => 38, second => 17,
                             time_zone => "America/New_York" ) ;

    my ($b) = DateTime->new( year => 2013, month => 12, day => 25,
                             hour => 21, minute => 0, second => 0,
                             time_zone => "America/New_York" ) ;

    my $diff = $b - $a;

Internally this invokes subtract_datetime to yield a DateTime::Duration object for the difference. The DateTime::Duration object $diff will contain the information that this is a difference of 536 months, 23 days, 1101 minutes, and 43 seconds, a fact which seems to me to be of very limited usefulness.

You might want to know how long this interval is, so you can compare it to similar intervals. So you might want to know how many seconds this is. It happens that the two times are exactly 1,411,669,328 seconds apart, but there's no way to get the $diff object to tell you this.

It seems like there are methods that will get you the actual elapsed time in seconds, but none of them will do it. For example, $diff->in_units('seconds') looks promising, but will return 43, which is the 43 seconds left over after you've thrown away the 536 months, 23 days, and 1101 minutes. I don't know what the use case for this is supposed to be.

And indeed, no method can tell you how long the duration really is, because the subtraction has thrown away all the information about how long the days and months and years were—days, months and years vary in length—so it simply doesn't know how much time this object actually represents.

Similarly if you want to know how many days there are between the two dates, the DateTime::Duration object won't tell you because it can't tell you. If you had the elapsed seconds difference, you could convert it to the correct number of days simply by dividing by 86400 and rounding off. This works because, even though days vary in length, they don't vary by much, and the variations cancel out over the course of a year. If you do this you find that the elapsed number of days is approximately 16338.7653, which rounds off to 16338 or 16339 depending on how you want to treat the 18-hour time-of-day difference. This result is not quite exact, but the error is on the order of 0.000002%. So the elapsed seconds are useful, and you can compute other useful values with them, and get useful answers. In contrast, DateTime::Duration's answer of "536 months and 23 days" is completely useless because months vary in length by nearly 10% and DateTime has thrown away the information about how long the months were. The best you can do to guess the number of days from this is to multiply the 536 months by 30.4375, which is the average number of days in a month, and add 23. This is clumsy, and gets you 16337.5 days—which is close, but wrong.

To get what I consider a useful answer out of the DateTime objects you must not use the overloaded subtraction operator; instead you must do this:

    #!perl
    $b->subtract_datetime_absolute($a)->in_units('seconds')
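
Putting that together with the arithmetic from earlier (nothing here beyond the line above plus a division):

    #!perl
    my $seconds = $b->subtract_datetime_absolute($a)->in_units('seconds');
    # $seconds is 1411669328
    my $days    = $seconds / 86400;   # about 16338.77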

What's DateTime::Moonpig for?

DateTime::Moonpig attempts to get rid of the part of DateTime I don't like and keep the part I do like, by changing the interface and leaving the internals alone. I developed it for the Moonpig billing system that Rik Signes and I did; hence the name.

DateTime::Moonpig introduces five main changes to the interface of DateTime:

  1. Most of the mutators are gone. They throw fatal exceptions if you try to call them.

  2. The overridden addition and subtraction operators have been changed to eliminate DateTime::Duration entirely. Subtracting two DateTime::Moonpig objects yields the difference in seconds, as an ordinary Perl number. This means that instead of

      #!perl
      $x = $b->subtract_datetime_absolute($a)->in_units('seconds')
    

    one can write

      #!perl
      $x = $b - $a
    

    From here it's easy to get the approximate number of days difference: just divide by 86400. Similarly, dividing this by 3600 gets the number of hours difference.

    An integer number of seconds can be added to or subtracted from a DateTime::Moonpig object; this yields a new object representing a time that is that many seconds later or earlier. Writing $date + 2 is much more convenient than writing $date->clone->add( seconds => 2 ).

    If you are not concerned with perfect exactness, you can write

       #!perl
       sub days { $_[0] * 86400 }
    
    
       my $tomorrow = $now + days(1);
    

    This might be off by an hour if there is an intervening DST change, or by a second if there is an intervening leap second, but in many cases one simply doesn't care.

    There is nothing wrong with the way DateTime overloads < and >, so DateTime::Moonpig leaves those alone.

  3. The constructor is extended to accept an epoch time such as is returned by Perl's built-in time() or stat() functions. This means that one can abbreviate this:

      #!perl
      DateTime->from_epoch( epoch => $epoch )
    

    to this:

      #!perl
      DateTime::Moonpig->new( $epoch )
    
  4. The default time zone has been changed from DateTime's "floating" time zone to UTC. I think the "floating" time zone is a mistake, and best avoided. It has bad interactions with set_time_zone, which DateTime::Moonpig does not disable, because it is not actually a mutator—unless you use the "floating" time zone. An earlier blog article discusses this.

  5. I added a few additional methods I found convenient. For example there is a $date->st that returns the date and time in the format YYYY-MM-DD HH:MM:SS, which is sometimes handy for quick debugging. (The st is for "string".)

Under the covers, it is all just DateTime objects, which seem to do what one needs. Other than the mutators, all the many DateTime methods work just the same; you are even free to use ->subtract_datetime to obtain a DateTime::Duration object if you enjoy being trapped in an absurdist theatre production.

When I first started this module, I thought it was likely to be a failed experiment. I expected that the Moonpig::DateTime objects would break once in a while, or that some operation on them would return a DateTime instead of a Moonpig::DateTime, which would cause some later method call to fail. But to my surprise, it worked well. It has been in regular use in Moonpig for several years.

I recently split it out of Moonpig, and released it to CPAN. I will be interested to find out if it works well in other contexts. I am worried that disabling the mutators has left a gap in functionality that needs to be filled by something else. I will be interested to hear reports from people who try.

DateTime::Moonpig on CPAN.


[Other articles in category /prog/perl] permanent link

Sat, 04 Jan 2014

Cauchy and the continuum

There is a famous mistake of Augustin-Louis Cauchy, in which he is supposed to have "proved" a theorem that is false. I have seen this cited many times, often in very serious scholarly literature, and as often as not Cauchy's purported error is completely misunderstood, and replaced with a different and completely dumbass mistake that nobody could have made.

The claim is often made that Cauchy's Cours d'analyse of 1821 contains a "proof" of the following statement: a convergent sequence of continuous functions has a continuous limit. For example, the Wikipedia article on "uniform convergence" claims:

Some historians claim that Augustin Louis Cauchy in 1821 published a false statement, but with a purported proof, that the pointwise limit of a sequence of continuous functions is always continuous…

The nLab article on "Cauchy sum theorem" states:

Non-theorem (attributed to Cauchy, 1821). Let !!f=(f_1,f_2,\ldots)!! be an infinite sequence of continuous functions from the real line to itself. Suppose that, for every real number !!x!!, the sequence !!(f_1(x), f_2(x), \ldots)!! converges to some (necessarily unique) real number !!f_\infty(x)!!, defining a function !!f_\infty!!; in other words, the sequence !!f!! converges pointwise to !!f_\infty!!. Then !!f_\infty!! is also continuous.

Cauchy never claimed to have proved any such thing, and it beggars belief that Cauchy could have made such a claim, because the counterexamples are so many and so easily located. For example, the sequence !! f_n(x) = x^n!! on the interval !![0,1]!! is a sequence of continuous functions that converges everywhere on !![0,1]!! to a discontinuous limit. You would have to be a mathematical ignoramus to miss this, and Cauchy wasn't.

Another simple example, one that converges everywhere in !!\mathbb R!!, is any sequence of functions !!f_n!! that are everywhere zero, except that each has a (continuous) bump of height 1 between !!-\frac1n!! and !!\frac1n!!. As !!n\to\infty!!, the width of the bump narrows to zero, and the limit function !!f_\infty!! is everywhere zero except that !!f_\infty(0)=1!!. Anyone can think of this, and certainly Cauchy could have. A concrete example of this type is $$f_n(x) = e^{-nx^{2}}$$ which converges to 0 everywhere except at !! x=0 !!, where it converges to 1.

Cauchy's controversial theorem is not what Wikipedia or nLab claim. It is that the pointwise limit of a convergent series of continuous functions is always continuous. Cauchy is not claiming that $$f_\infty(x) = \lim_{i\to\infty} f_i(x)$$ must be continuous if the limit exists and the !!f_i!! are continuous. Rather, he claims that $$S(x) = \sum_{i=1}^\infty f_i(x)$$ must be continuous if the sum converges and the !!f_i!! are continuous. This is a completely different claim. Its premise, that the sum converges, is much stronger, and so the claim itself is much weaker, and so much more plausible.

Here the counterexamples are not completely trivial. Probably the best-known counterexample is that a square wave (which has a jump discontinuity where the square part begins and ends) can be represented as a Fourier series.
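
To make that concrete: the classic square-wave series $$\frac4\pi\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots\right)$$ converges for every !!x!!, and every partial sum is continuous, but the sum is !!1!! for !!0 \lt x \lt \pi!!, !!-1!! for !!-\pi \lt x \lt 0!!, and jumps discontinuously at every multiple of !!\pi!!.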

(Cauchy was aware of this too, but it was new mathematics in 1821. Lakatos and others have argued that the theorem, understood in the way that continuity was understood in 1821, is not actually erroneous, but that the idea of continuity has changed since then. One piece of evidence strongly pointing to this conclusion is that nobody complained about Cauchy's controversial theorem until 1847. But had Cauchy somehow, against all probability, mistakenly claimed that a sequence of continuous functions converges to a continuous limit, you can be sure that it would not have taken the rest of the mathematical world 26 years to think of the counterexample of !!x^n!!.)

The confusion about Cauchy's controversial theorem arises from a perennially confusing piece of mathematical terminology: a convergent sequence is not at all the same as a convergent series. Cauchy claimed that a convergent series of continuous functions has a continuous limit. He did not ever claim that a convergent sequence of continuous functions had a continuous limit. But I have often encountered claims that he did, even though such claims are extremely implausible.

The claim that Cauchy thought a sequence of continuous functions converges to a continuous limit is not only false but is manifestly so. Anyone making it has at best made a silly and careless error, and perhaps doesn't really understand what they are talking about, or hasn't thought about it.

[ I had originally planned to write about this controversial theorem in my series of articles about major screwups in mathematics, but the longer and more closely I looked at it the less clear it was that Cauchy had actually made a mistake. ]


[Other articles in category /math] permanent link