guile-devel

Re: What happened to the ex-Guile VM?


From: Keisuke Nishida
Subject: Re: What happened to the ex-Guile VM?
Date: Wed, 21 Mar 2001 09:31:49 -0500
User-agent: Wanderlust/2.4.0 (Rio) SEMI/1.13.7 (Awazu) FLIM/1.13.2 (Kasanui) Emacs/21.0.99 (i686-pc-linux-gnu) MULE/5.0 (SAKAKI)

At 20 Mar 2001 22:52:34 +0100,
Marius Vollmer wrote:
> 
> I'm quite in favor of replacing the current evaluator with a `better'
> one.  `Better' is a complicated requirement here, since so many
> factors are involved.  Speed, of course, but not only speed.
> Maintainability, speed of loading code, debuggability of the executed
> program, portability, ease of hacking it (to implement new
> optimizations, say), interactive responsiveness, standards
> conformance, and probably more.

Maybe also ease of development.  I want to have a new one before long.

> In my opinion, the current evaluator fares like this:

Regarding my VM:

- Speed.  Fairly good.  Since the VM itself rarely conses, GC
  pressure is reduced significantly compared to the current evaluator.

- Maintainability.  Good enough, I think.  The VM, the assembler,
  and the compiler are separated into modules.  All instructions are
  referred to by name, not by opcode, so adding or removing
  instructions is quite easy.  If you want to abbreviate (local-ref 0)
  into a new instruction (local-ref:0), just add it; the assembler
  detects it and optimizes automatically (see the sketch below).
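
  Roughly, such an abbreviation rule could be a simple peephole
  rewrite like this (the procedure names are made up for illustration
  and are not the actual assembler code):

    ;; Hypothetical peephole rewrite: turn (local-ref 0) into the
    ;; zero-operand abbreviation (local-ref:0).
    (define (abbreviate-instruction insn)
      (if (equal? insn '(local-ref 0))
          '(local-ref:0)
          insn))

    (define (abbreviate-code code)
      (map abbreviate-instruction code))

    ;; (abbreviate-code '((local-ref 0) (local-ref 1) (add)))
    ;;   => ((local-ref:0) (local-ref 1) (add))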

- Speed of loading code.  Loading a compiled file should be very
  fast (I haven't benchmarked it yet).  Since the size of bytecode
  is likely to be less than one tenth that of equivalent machine code,
  I/O is reduced significantly when you load a huge module.

  Loading a source file is relatively slow, because it must first be
  compiled.  But given a fast evaluator, the compiler should run
  sufficiently fast if you turn optimization off.  The result of
  compilation can be saved in a file, so the slowdown on the first
  load shouldn't be a big problem (see the sketch below).
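
  To make that concrete, the idea is just "compile once, read the
  result back later".  A minimal sketch (compile-to-bytecode is a
  dummy stand-in for the real compiler, and the .go extension is made
  up for this example):

    ;; Compile a source file and save the result next to it.
    (define (compile-to-bytecode exp)
      (list 'bytecode exp))          ; the real compiler emits VM code

    (define (compile-and-save file)
      (call-with-output-file (string-append file ".go")
        (lambda (port)
          (write (compile-to-bytecode
                  ;; only the first form, to keep the sketch short
                  (call-with-input-file file read))
                 port))))

    ;; Later loads only read the saved form back in.
    (define (load-compiled file)
      (call-with-input-file file read))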

- Debuggability.  To be improved.  Currently, the VM has seven hooks
  (boot-hook, halt-hook, next-hook, enter-hook, apply-hook, exit-hook,
  and return-hook) which can be used for various tasks, including a
  tracer, a profiler, and a debugger.  Backtraces can easily be
  produced by displaying the stack frames.  Source-level debugging
  facilities (e.g., displaying error locations) need to be supported
  soon.  (A toy tracer built on the hooks is sketched below.)

  (I should probably drop the fast engine and keep only this debug
   engine.  Given a JIT compiler like lightning, a 30% speed
   improvement is not very attractive.)
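
  Just to show the kind of thing the hooks allow, here is a toy tracer
  written with Guile's generic hook procedures (the hooks below are
  created locally for the example; they are not the VM's own hook
  objects, and the frame representation is made up):

    ;; Toy tracer: print something on every "apply" and "return".
    (define trace-apply-hook (make-hook 1))
    (define trace-return-hook (make-hook 1))

    (add-hook! trace-apply-hook
               (lambda (frame)
                 (display "enter: ") (write frame) (newline)))
    (add-hook! trace-return-hook
               (lambda (frame)
                 (display "leave: ") (write frame) (newline)))

    ;; The VM would run these around every call; we can fake one call:
    (run-hook trace-apply-hook '(fib 10))
    (run-hook trace-return-hook '(fib 10))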

- Portability.  Good.  Compiled code (bytecode) is also portable.
  The new loader has no endianness problems (see the sketch below).
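
  The trick is simply to read multi-byte values one byte at a time in
  a fixed order, so the host byte order never matters.  A rough sketch
  (the big-endian layout here is just an assumed convention for the
  example, not necessarily what the loader uses):

    ;; Read a 32-bit unsigned integer stored big-endian (by
    ;; assumption), byte by byte, so the result is identical on any
    ;; host.
    (define (read-u32 port)
      (let* ((b0 (char->integer (read-char port)))
             (b1 (char->integer (read-char port)))
             (b2 (char->integer (read-char port)))
             (b3 (char->integer (read-char port))))
        (+ (* b0 #x1000000) (* b1 #x10000) (* b2 #x100) b3)))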

- Ease of hacking.  I don't know.  Try it ;)

- Interactive responsiveness.  Not bad.  So far, I don't see any
  delay caused by compilation, except in the first evaluation, when
  the current evaluator is compiling the VM compiler.  (It seems the
  `match' macro takes a while.)

- Standards conformance.  Tail-call elimination and continuations are
  supported (see the example below).  Unfortunately, my VM does not
  conform to the JVM.
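
  As a quick illustration of what proper tail calls buy you, a loop
  written as a self tail call runs in constant stack space on such a
  VM:

    ;; With tail-call elimination the recursive call compiles to a
    ;; jump, so even huge counts do not grow the stack.
    (define (count-down n)
      (if (zero? n)
          'done
          (count-down (- n 1))))

    ;; (count-down 10000000) => done, without stack overflow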

I think a VM like this is a well-balanced solution for daily
programming.  Many systems, such as RScheme and OCaml, use a
hybrid of bytecode and machine code; I think we should follow them.

I'll commit my code to the CVS repository before long, though it is
not working well right now. (I changed part of the design recently.)

Kei


