help-octave

Re: Memory management in .oct files


From: Richard Hindmarsh
Subject: Re: Memory management in .oct files
Date: Sat, 19 Mar 2005 12:59:40 +0000
User-agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:1.7.1) Gecko/20040707


|   int ir = xin1.rows();                                    // #3
|   int ic = xin1.cols();
|   Matrix xout1(ic,ir);                                     // #4
|   for (int ir1 = 0 ; ir1 < ir ; ir1++)                     // #5
|     for (int ic1 = 0 ; ic1 < ic ; ic1++) {
|       xout1(ic1,ir1) = xin1(ir1,ic1);                      // #6

                        ^^^^^^^^^^^^^
Since Octave does not know that this indexing operation appears on the
RHS of the assignment (I think it would be a real mess to try to do
that automatically), it splits off a copy here.  If you don't plan on
modifying the argument that is passed to this function, then you can
avoid the extra copy and the checks on the reference count that are
done each time you use the indexing operator by declaring xin1 as
"const".

I guess this means that you decided not to implement, e.g.,

xout1(ic1,ir1) = xin1(ir1,ic1);
using a helper class. The upside of this is speed; the downside is that 
statements such as

     double y = xin1(ir+1,ic+1);

do not trip an error and statements like

    xout1(ir+1,ic+1) = 1;

do not resize the matrix, as they would in the Octave language. Are there function calls, e.g.,

    double y = xout1.rhsindexing(ir+1,ic+1);
    xout1.lhsindexing(ir+1,ic+1) = 0;

that have the desired behaviour? Is there another route to finding this
out other than Doxygen?
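For what it's worth, the helper-class idea mentioned above could look
something like the sketch below (all names here — GrowMatrix, Proxy —
are hypothetical, not anything in liboctave).  operator() returns a
proxy object, so a use on the RHS (conversion to double) can
bounds-check, while a use on the LHS (assignment) can grow the matrix,
mimicking what the interpreter does:

```cpp
#include <algorithm>
#include <cassert>
#include <stdexcept>
#include <vector>

// Hypothetical proxy-based matrix mimicking Octave-language indexing.
class GrowMatrix {
public:
    GrowMatrix(int rows, int cols)
        : data_(rows * cols, 0.0), rows_(rows), cols_(cols) {}

    class Proxy {
    public:
        Proxy(GrowMatrix& m, int r, int c) : m_(m), r_(r), c_(c) {}

        // RHS use: reading out of range is an error.
        operator double() const {
            if (r_ >= m_.rows_ || c_ >= m_.cols_)
                throw std::out_of_range("index out of bound");
            return m_.data_[r_ * m_.cols_ + c_];
        }

        // LHS use: writing out of range resizes the matrix first.
        Proxy& operator=(double v) {
            if (r_ >= m_.rows_ || c_ >= m_.cols_)
                m_.resize(std::max(m_.rows_, r_ + 1),
                          std::max(m_.cols_, c_ + 1));
            m_.data_[r_ * m_.cols_ + c_] = v;
            return *this;
        }

    private:
        GrowMatrix& m_;
        int r_, c_;
    };

    Proxy operator()(int r, int c) { return Proxy(*this, r, c); }

    int rows() const { return rows_; }
    int cols() const { return cols_; }

private:
    // Grow to nr x nc, zero-filling new elements.
    void resize(int nr, int nc) {
        std::vector<double> nd(nr * nc, 0.0);
        for (int r = 0; r < rows_; r++)
            for (int c = 0; c < cols_; c++)
                nd[r * nc + c] = data_[r * cols_ + c];
        data_.swap(nd);
        rows_ = nr;
        cols_ = nc;
    }

    std::vector<double> data_;
    int rows_, cols_;
};
```

This also shows the speed cost of the approach: every element access
goes through an extra object and a bounds test, which is presumably why
the real classes hand back a plain reference instead.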

Thanks
Richard



-------------------------------------------------------------
Octave is freely available under the terms of the GNU GPL.

Octave's home on the web:  http://www.octave.org
How to fund new projects:  http://www.octave.org/funding.html
Subscription information:  http://www.octave.org/archive.html
-------------------------------------------------------------


