octave-maintainers

Re: Slowup in 2.1.54


From: Paul Thomas
Subject: Re: Slowup in 2.1.54
Date: Wed, 18 Feb 2004 21:07:11 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030225

Yes, that's why I said that the comment was a bit at right angles.

By the way, John, I have been slowly but surely following up those Mickey Mouse timing tests by tracing what the parser and then the tree-walker do with them. In essence I have been trying to understand why up to a few thousand CPU operations are needed to do a=b+c; I have come to appreciate what an extraordinary job you have done with the interpreter. So perhaps it is timely to congratulate you and say, "Well done", as the 12th birthday of octave comes up?
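
For concreteness, here is a minimal sketch of the kind of test I mean; the constants and the repetition count are just arbitrary choices:

  b = 2.5; c = 3.5;
  tic;
  for k = 1:1e5
    a = b + c;        # one scalar binary op and one assignment per pass
  endfor
  toc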

One thing that I have been trying to analyse is what proportion of the time is devoted to each kind of task in, say, a simple binary operation and assignment. Apart from the obvious overheads, such as array bound checking, it strikes me that leaving the variable types open until the last moment is quite an expensive (if not the dominant) use of time. I wonder if the parser could have a stab at identifying the types and, when it is possible, substituting a specific tree_expression for the general one? Thus Paul Kienzle's tot=0; x=[1:1e5]; for i=1:1e5;tot=tot+x(i);end could be made to thunder along because the types and the requisite function for the binary_op are identified from the outset by the initial assignments. Sometimes, as for function arguments, it would be necessary to work back from the context in which variables are used later on. This argues, I guess, for a pass between the lexing and the parsing. I will try, in the following weeks, to analyse how well such a thing would work for real-life functions or scripts. Clearly the likes of function [b,a]=myfunc(a,b);endfunction will beat the system (how about optional typing?), but I do not think that this need generally be the case.
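
For reference, here is Paul's test written out as a script; the tic and toc calls are my addition, there only to time the loop:

  tot = 0;
  x = [1:1e5];
  tic;
  for i = 1:1e5
    tot = tot + x(i);   # the types of tot and x are fixed by the assignments above
  endfor
  toc

Because tot and x are assigned before the loop starts, a parser that tracked types could bind the double-precision addition and the indexing once, instead of dispatching on the operand types at every iteration.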

Best regards

Paul Thomas


John W. Eaton wrote:

On 18-Feb-2004, Paul Thomas <address@hidden> wrote:

| Whilst it is a bit at right angles to the discussion because the fault
| with the [] operator was fixed, should we really worry about constructs
| like
|
| j = 1e4; tic; x = []; for i=1:j; x = [x, i]; endfor; toc
|
| that have square law scaling with the length of i?
|
| j = 1e4; tic; x = zeros(1, j); for i = 1:j; x(i) = i; endfor; toc
|
| is 30 times faster, at j=1e4, and is linear in j.

It's true that you shouldn't be using the first form if you can avoid
it, but the fact that it was so much slower going from 2.1.53 to
2.1.54 showed that the run time was not completely dominated by
reallocating memory.  So I think it also means that we would have
seen a big drop in performance for the [] operator with a large
number of elements.

jwe

