axiom-developer

RE: [Axiom-developer] CCL maintenance.


From: Arthur Norman
Subject: RE: [Axiom-developer] CCL maintenance.
Date: Tue, 12 Jun 2007 17:54:51 +0100 (GMT Standard Time)

As term comes to an end I re-scanned the Axiom list and observe the discussion about CCL. Those who view the parenthesised abstract machine with bignum arithmetic and a garbage collector as separate from Axiom itself are liable to want to fall in with whatever standards Common/ANSI Lisp purveys. If they are careful they will limit themselves to using JUST the standardised facilities so that they have multiple "vendors" of that support substrate.

An alternative view (which is closer to the one I have) is that code at that level can serve more like a kernel for an algebra system, and that almost nobody writes "(de foo (...))" directly. As little as possible of that is written to allow a bootstrap process to lift the bulk of coding to a higher level. That sort of stuff may be the output of a translator from a higher level notation, but the output from translators is essentially guaranteed to be stylised and readily controllable. If the "lisp" becomes a dedicated kernel for an algebra system one loses a big vibrant world-wide community of Lisp developers maintaining it for the benefit of other projects. Hmmm - I do not know how many of such there actually are! What one gains is the chance to have something significantly smaller and thus cheaper to support than FULL Lisp, and a chance to embed system interfaces within it where that helps flexibility or performance.

If the Axiom Community takes the first of these views then CCL is not of great interest to it, regardless of the use it had in the NAG days. If that is the case I would view it as sensible to remove the archaic snapshot of my code from the Axiom servers, since if it just sits there it causes confusion.

For what it is worth these are some of the characteristics of CCL that may have caused NAG to view it as a plausible route...

(1) Way back, machines did not have as much memory as they do now and use of Axiom REQUIRED machines of a scale that limited potential uptake. CCL provided a modest footprint compared with alternatives and that made a big difference to real-world performance. The world has probably changed since then! Enthusiasts in well funded labs had the big machines and did not need to worry, but everybody who was not a specially funded specialist did.

(2) Geography and time-zones meant that from Cambridge in England it was easy to talk to NAG in Oxford, and I had previously worked with Griesmer, Jenks and Blair back in the Scratchpad days.

(3) CCL is designed not for developers but for delivery.
  (a) Most code is converted into compact bytecodes. But when one has
      built a system that way you can profile it to identify hot-spot
      functions. Those can then be off-line compiled into C that is
      statically linked into the CCL kernel. The scheme there is MUCH less
      flexible than the usual compile-via-C Lisp, but then it is MUCH
      easier! With say 10% of the whole code compiled into C one hopes for
      80% of the performance of a fully native-compiled system, but with
      much less bulk of compiled code. And the customer who then got a CD,
      or the plain mathematician who would now just download a pre-built
      binary, just sees a smaller system and does not care about how or
      what gets compiled when.
  (b) CCL keeps all its loadable modules within a single file along
      with the initial heap image that it will reload when started.
      So Axiom (eg) could ship as a native executable file plus this
      image file (plus documentation directories etc). This keeps
      everything together in one place and reduces risks of muddle if
      one sub-file gets lost or mangled.
  (c) The image files for CCL contain bytecoded definitions (plus
      references to stuff compiled into C and linked into the kernel)
      and are machine independent. Well strictly you need to make one
      image for 32-bit and another for 64-bit platforms, but with more
      work I could fix that too. So to make a release you compile a
      simple fairly flat directory of cautiously portable C to make
      an executable. You use that to make an image file, and while you
      need to build executables for each platform your one image file
      can be shipped for Linux, Solaris, Windows, Mac, SGI, HP, ...
      and you are confident of delivering a compatible product on all.
  (d) The CCL files in C have at various stages built with essentially
      no pain on Linux (32 & 64), Windows (32-bit; it builds on 64-bit
      too, but until there is a mingw64 the build process is a bit odd,
      though the result is OK), Solaris x86, Solaris sparc, Mac OS X,
      SGI, HP, a Linksys
      router, my ipaq PDA, and basically anything it is thrown at. Oh
      older Macs as well and other historic stuff. It has built with
      a range of vendor-supplied C compilers as well as gcc. Its Makefiles
      roughly just need to say "compile all the *.c files in this
      directory and link what you get", with some grungy #ifdef messing to
      provide me with portability in code to traverse directory trees etc.

(4) CCL tried tolerably hard to be safe, so it checks each CAR and CDR to ensure you are working on a cons object (or nil), and it polices array bounds etc etc. It checks that functions are called with the expected number of arguments. Depending on your point of view this is either jolly good for finding places where a typo in the non-strongly-typed Lisp was about to bite you, or a cause of existing Lisp code that used to just car/cdr through fixnums with gay abandon now failing.


There are a number of things that at a technical level may mean that CCL would cause some people pain:

(1) because loadable modules are within one file their date-stamps are something I maintain within that file, and "make" can not readily find them. That tends to be a slight blight if you want to use Make to autocompile just the modules whose source has altered. But to do that properly you would have to get dependency tracking working really well, and since CCL compiles things tolerably fast (because it is all within itself) I view a full clean rebuild from scratch as safest anyway; since I do not need to repeat that on a per-platform basis I do not mind. Or I make my own smart-rebuild code live as Lisp code within the system, where it obviously has easy access to all it needs. But the difference between this and other models may make it a pain to have build-systems for both CCL and a different Lisp?

(2) the "static optimisation" scheme is to my mind a good compromise for a system where you are looking at users who fetch and use it. My expectation would always be that open source or not MOST users of any successful package will be in that category. But for those involved in rapid development the effect is that when they change or redefine functions then the things they alter end up running as bytecoded (I use checksums to avoid messing up when a user redefines code that has been compiled into C in the kernel - the C code is only activated if a checksum match says it is the version wanted...). So over time such a user sees slowly degrading performance and gets uptight. And running a proper profile job to re-decide where hot-spots are ought to be time-consuming since it ought to be comprehensive. Equally if a user runs applications that do not match the profile scripts at all they will hurt a bit.

(3) I do not provide amazing Lisp-level debugging tools. I duck out with the view that (trace '(foo)) is good, but that anybody who feels they need a big interactive lisp-level visualisation workbench had better go elsewhere. I do not know about debuggers in the Lisps currently in use, but Harlequin used to try harder on that front, and until Common Lisp came along Interlisp's DWIM was a dream for some if a nightmare for others!

(4) If somebody is doing a lot of coding at the direct Lisp level and they are used to exploiting all the features of Common Lisp then the fact that CCL has just that subset of Common Lisp capabilities that Axiom needed will annoy them to distraction. I of course think they should not be coding in an aggressive manner at such a low level.



If I try to give a really short summary: CCL sees itself as an "OEM product", not a "retail product", and is thus complementary to the other Lisps used by Axiom. If Axiom is mostly targeted at hackers it is irrelevant and should be purged from the Axiom tree. If Axiom wanted to stress effortless portability and a neat deliverable package it may be of some use to you.


          Arthur Norman




