Thursday, January 02, 2014

Todd's 1-question test for all new "advanced" dynamic scripting/programming languages.

Whenever a new language comes out, I have a simple test to see if it is worth looking at.  I came up with this test back in the 1990s, after being frustrated by the state of "advanced programming languages".

In the 1980s, as a lowly FORTH programmer twiddling 8-bit bytes, I was blown away by my first experience with Lisp. It ran on a DEC2060 (under TOPS-20) and was called "Standard Lisp".

As a student, back in 1984, I coded up a small Lisp function to compute the factorial of 120.

The terminal presented me with:

6689502913449127057588118054090372586752746333138029810295671352301633557244962989366874165271984981308157637893214090552534408589408121859898481114389650005964960521256960000000000000000000000000000

My mind was sufficiently blown.
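
The original Standard Lisp source is long gone, but in modern Common Lisp the function would have looked something like this (a reconstruction, not the original code):

    ;; Naive recursive factorial. Lisp integers promote to bignums
    ;; automatically, so (fact 120) "just works" with no overflow.
    (defun fact (n)
      (if (<= n 1)
          1
          (* n (fact (- n 1)))))

    (fact 120)  ; => the huge integer shown above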

Of course, the largest factorial you can compute in a language whose number/integer type is matched to the machine word size is much smaller: 13! already overflows a signed 32-bit integer, and 21! overflows a signed 64-bit one.  I had already programmed in Pascal, FORTH, a bit of C, and BASIC.  But this was the first language implementation I had seen that wasn't bound by that machine word limitation.

(I quickly followed that exercise by computing the factorials of ever larger numbers until, by the time I was in the hundreds of thousands, the terminal responded with a message saying the job was taking too many resources and that the process was being "spooled" -- whatever that meant.  I went home, and the next morning I was greeted with an email from the sysadmin requesting that I come get a "print job" from the ops center.  I rang the ops center door buzzer; the admin came to the door and asked what I wanted. I told him who I was, and he frowned, told me to wait, and closed the door. A few minutes later he showed up with a hand truck loaded with a big box of green bar printer paper.  Apparently, "spooled" meant the result was being submitted as a print job.)

A couple of years later, I discovered that Smalltalk, too, had this "bignum" (arbitrary-precision) feature.  Why wouldn't every language have that? (Yeah, I know... performance... but, still...)

Now, when I am presented with a programming language that is supposed to be the "next step", I look to see if it supports bignums. And when I say "support", I don't mean surrounding the number with quotes and submitting it to a bignum library. That's cheating. I want to be able to say something like:

X = 6689502913449127057588118054090372586752746333138029810295671352301633557244962989366874165271984981308157637893214090552534408589408121859898481114389650005964960521256960000000000000000000000000000 / 19;

I don't want to type it as a string. That would be saying there is something "special" or "hard" about large numbers.  Why, in 2014, should I be concerned about whether a number fits into 32 or 64 bits (or, in the case of Lua and JavaScript, a 52-bit mantissa)?  I want native/natural support.
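
For what it's worth, Common Lisp still passes this test verbatim; a minimal sketch, assuming a stock REPL (the variable name *x* is mine, standing in for the X above):

    ;; The big literal is read natively -- no quotes, no library call.
    ;; Since 19 <= 120, 19 divides 120! evenly, so *x* is an exact
    ;; integer; dividing by a non-divisor would yield an exact rational.
    (defparameter *x*
      (/ 6689502913449127057588118054090372586752746333138029810295671352301633557244962989366874165271984981308157637893214090552534408589408121859898481114389650005964960521256960000000000000000000000000000
         19))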

So, what other programming languages pass this bignum test?

  • Erlang does.
  • Haskell does... sort of... you have to choose the right type (the unbounded Integer, not the word-sized Int).
  • Perl does (and has for a while: just type "use bignum;" and all numbers that follow are no longer bound by the machine word size).

Now, don't get me going about native/natural support for rational types.
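
(Common Lisp, for the record, has had them natively for decades; a quick taste, assuming the same REPL as above:)

    (+ 1/3 1/6)  ; => 1/2  -- exact rational arithmetic, built in
    (/ 10 4)     ; => 5/2  -- stays exact; no silent coercion to 2.5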

/todd
