
Re: [Gnu-arch-users] Re: darcs vs tla


From: Stephen J. Turnbull
Subject: Re: [Gnu-arch-users] Re: darcs vs tla
Date: Wed, 10 Nov 2004 17:43:18 +0900
User-agent: Gnus/5.1006 (Gnus v5.10.6) XEmacs/21.5 (chayote, linux)

>>>>> "Kannan" == Kannan Goundan <address@hidden> writes:

    Kannan> Since Haskell is, in a way, more strict and more
    Kannan> restrictive, wouldn't it be easier for a compiler to
    Kannan> optimize Haskell programs (since the code is bounded by
    Kannan> stronger guarantees)?  Dealing with side-effects and
    Kannan> aliasing is a major pain for C compilers.

That depends on what the guarantees are.  But I'm sure that if you
use C in a certain stylized way, e.g., not passing around random
pointers and so on, compilers will do quite a good job of optimizing
it.
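
A minimal sketch of that "stylized" C (the function name is just for
illustration): the C99 restrict qualifier is one way the programmer
can promise the compiler that pointers don't alias, which is exactly
the guarantee it needs to optimize a loop aggressively.

    /* C99: "restrict" promises that dst and src never overlap, so
       the compiler may vectorize and reorder the loop freely.  */
    void scale(double *restrict dst, const double *restrict src,
               double k, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = k * src[i];
    }

Without the qualifier, the compiler must assume a store through dst
might change *src, and reload it on every iteration.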

    Kannan> C has an advantage because it's easier for a programmer to
    Kannan> manually perform low-level optimizations

I don't think this is true any more.  It is true that as the
optimizations that compilers can do become more complex, they become
less and less optimal and more and more heuristic---that's what people
are good at doing.  But this is just a special case of "choosing a
better algorithm", really.  You would expect that (with a good
optimizing compiler) the high-level language implementors would choose
excellent algorithms for the operations the language provides.  In
general, compiler implementors can afford to spend much more effort on
algorithm selection and tuning than application programmers can, so
you'd expect something of an edge for the HLL here, too.
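
A hedged illustration of that edge (the function names are
hypothetical): a hand-rolled byte loop never gets the tuning that a
platform's memcpy has accumulated, so leaning on the operation the
implementors provide usually wins.

    #include <string.h>

    /* Correct, but untuned: one byte per iteration.  */
    void copy_naive(char *dst, const char *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }

    /* The library implementors' version: word-at-a-time copies,
       SIMD, alignment handling; effort an application rarely
       spends on one loop.  */
    void copy_tuned(void *dst, const void *src, size_t n)
    {
        memcpy(dst, src, n);
    }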

The problems that you'll run into with the high-level language are
situations where the operations are sufficiently new that good
algorithms aren't known yet.  An egregious example ("egregious"
because we've known how to fix it for a decade) is tail recursion in
XEmacs Lisp.  At the time that Emacs Lisp was implemented, the concept
of tail call elimination was pretty well known, but how to implement
it in compilers wasn't so well understood.  (There may also be
technical problems because of dynamic binding and the like.)  Anyway,
we still don't do tail call elimination.  :-þ
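
For anyone unfamiliar with the term: tail call elimination turns a
call in tail position into a jump, so the recursion runs in constant
stack space.  A minimal sketch in C (gcc and clang typically do this
at -O2 as "sibling call" optimization):

    /* The recursive call is the last thing the function does, so an
       optimizing compiler can reuse the current stack frame instead
       of pushing a new one; in effect the call becomes a loop.  */
    unsigned long fact(unsigned long n, unsigned long acc)
    {
        if (n <= 1)
            return acc;
        return fact(n - 1, acc * n);   /* tail position */
    }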

    Kannan>  (stuff that compilers aren't yet smart enough to perform
    Kannan> by themselves).  With Haskell, you're working at such a
    Kannan> high level that it's difficult to predict exactly how your
    Kannan> code will translate into ASM,

But most of the time "exactly" really isn't needed.  What you need
to know is the worst-case behavior of those operations that have bad
worst cases (n^2, say), and what inputs trigger them.
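
The classic C instance of "know what triggers the worst case" (the
function names are just for illustration): building a string with
repeated strcat is O(n^2), because every call rescans the growing
prefix to find its end.  Keeping a tail pointer makes the same loop
O(n).

    #include <string.h>

    /* O(n^2): strcat walks the whole accumulated string each time. */
    void join_slow(char *out, const char **parts, int nparts)
    {
        out[0] = '\0';
        for (int i = 0; i < nparts; i++)
            strcat(out, parts[i]);
    }

    /* O(n): remember where the end is and append there directly.  */
    void join_fast(char *out, const char **parts, int nparts)
    {
        char *tail = out;
        for (int i = 0; i < nparts; i++) {
            size_t len = strlen(parts[i]);
            memcpy(tail, parts[i], len);
            tail += len;
        }
        *tail = '\0';
    }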

There are a couple of other things worth knowing about on modern
architectures that do a lot of caching at the hardware level; memory
usage patterns are the big one.  For example, if you do something
that involves lots of nonlocal accesses, you're going to page fault a
lot.  It might even be worth doing a deep copy to get high locality.
If you get those things right, you usually aren't going to notice the
effects of a few nano-optimizations.
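
A sketch of the "deep copy for locality" idea (the types and names
are hypothetical): if a hot computation repeatedly walks a linked
structure whose nodes are scattered across the heap, packing the
payloads into one contiguous array first can pay for itself, since
every later pass is then a sequential scan.

    #include <stdlib.h>

    struct node { double value; struct node *next; };

    /* Pointer chasing: successive nodes may sit on different cache
       lines or pages, so each step risks a nonlocal access.  */
    double sum_list(const struct node *p)
    {
        double s = 0.0;
        for (; p; p = p->next)
            s += p->value;
        return s;
    }

    /* Deep-copy the values once; repeated scans over the returned
       buffer are sequential and cache-friendly.  Caller frees.  */
    double *pack_values(const struct node *p, size_t n)
    {
        double *buf = malloc(n * sizeof *buf);
        if (!buf)
            return NULL;
        for (size_t i = 0; p && i < n; p = p->next, i++)
            buf[i] = p->value;
        return buf;
    }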


-- 
Institute of Policy and Planning Sciences     http://turnbull.sk.tsukuba.ac.jp
University of Tsukuba                    Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
               Ask not how you can "do" free software business;
              ask what your business can "do for" free software.



