octave-maintainers

Interpreter performance (was: Re: moving toward a 3.0 release)


From: John W. Eaton
Subject: Interpreter performance (was: Re: moving toward a 3.0 release)
Date: Fri, 29 Sep 2006 12:36:46 -0400

On 29-Sep-2006, David Grohmann wrote:

| Barring any speed improvements, documenting the .oct APIs would be the
| most useful way to allow some of us to put Octave to work on a cluster;
| we could just write the critical sections in C++.  I attempted to do
| some .oct work and had a lot of trouble due to the lack of
| documentation; I could never get a feel for whether I should be using
| an octave_value or an octave_value_list here and so on.

I agree that the documentation needs to be improved, but for now there
is some documentation on the net, the Octave sources provide lots of
examples, and the help list is pretty good for providing answers to
specific questions.
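
For what it's worth, the basic shape of a .oct file is fairly small.
The following is only a sketch (the function name is made up; check the
manual's appendix on dynamically linked functions and the sources for
the exact conventions): in a DEFUN_DLD function the arguments arrive as
an octave_value_list, you extract individual octave_value objects from
it and convert them to concrete types, and you return an
octave_value_list as well.

    // addtwo.cc -- a hypothetical example; build with `mkoctfile addtwo.cc'.
    #include <octave/oct.h>

    DEFUN_DLD (addtwo, args, nargout,
               "addtwo (A, B): return the matrix sum A + B")
    {
      octave_value_list retval;

      if (args.length () != 2)
        {
          error ("addtwo: expecting exactly two arguments");
          return retval;
        }

      // Each element of the incoming octave_value_list is an octave_value;
      // ask it for a concrete type before doing any real work.
      Matrix a = args(0).matrix_value ();
      Matrix b = args(1).matrix_value ();

      if (! error_state)
        retval(0) = octave_value (a + b);

      return retval;
    }

After compiling, you call it from the interpreter as `c = addtwo (a, b)'.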

| Does Octave use that API internally when it's parsing Octave syntax?

This question doesn't really make sense to me.  When Octave parses
input, it just converts the text of a script or function to an
internal representation.

| So has anyone taken a stab at a static Octave-to-C++ compiler?  I'm
| not sure how efficient the generated code would be, though.

Yes.  It doesn't provide a dramatic increase in performance because you
still have the overhead of dispatching on types.  To make things fast,
you have to do some type inference.  Some work has been done on that
too, but it's a lot harder.
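
To make the dispatch point concrete, a straightforward translator ends
up emitting the same kind of calls the tree evaluator makes, so every
scalar operation still goes through the run-time dispatch machinery.
Roughly (illustrative only, assuming the do_binary_op routine declared
in ov.h; the function names here are made up):

    #include <octave/oct.h>

    // The flavor of code a naive Octave-to-C++ translator would emit for
    // `c = a + b': the operands are still octave_value objects, so the
    // addition dispatches on their run-time types, just as the
    // interpreter does.
    static octave_value
    translated_add (const octave_value& a, const octave_value& b)
    {
      return do_binary_op (octave_value::op_add, a, b);
    }

    // What type inference would allow instead: once a and b are known to
    // be doubles, the addition compiles to a plain machine add.
    static double
    inferred_add (double a, double b)
    {
      return a + b;
    }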

| I don't think the entire interpreter is slow, because non-looped code
| ran in about the same time as the Matlab code, but when you have (lots
| of) nested looping and function calls, it gets pretty bad.

Looping, conditionals, and function calls are pretty much what the
interpreter does.  When you do core operations on large matrices, you
are largely out of the realm of the interpreter: you have one function
call that does a lot of work.  The function call is still slow, but
that is less noticeable because your (large) operation takes much more
time and the interpreter is doing correspondingly less.
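
Put another way, when you hand a large matrix to a built-in (or to a
.oct function), the per-call interpreter overhead is paid once and the
loop itself runs in compiled code.  A hypothetical sketch of that shape
(names made up):

    #include <octave/oct.h>

    DEFUN_DLD (sumall_demo, args, nargout,
               "sumall_demo (M): sum of all elements of M")
    {
      Matrix m = args(0).matrix_value ();

      // The interpreter paid for exactly one function call to get here;
      // the loop over the (possibly large) matrix runs at compiled speed.
      double s = 0.0;
      for (octave_idx_type j = 0; j < m.columns (); j++)
        for (octave_idx_type i = 0; i < m.rows (); i++)
          s += m(i,j);

      return octave_value (s);
    }

Doing the same element-by-element loop in the scripting language pays
the interpreter's dispatch cost on every iteration, which is where the
time goes in your nested-loop test.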

jwe


