help-octave

numerical precision issues (?)


From: Michael Creel
Subject: numerical precision issues (?)
Date: Fri, 01 Apr 2005 13:33:50 +0200
User-agent: KMail/1.7.2

Hello,
I'm getting different results from what are, in principle, the same calculations, 
depending on whether I run them in Octave serially or in parallel using the MPITB 
toolkit. The computations minimize a function and are a bit complex:

* Minimization is iterative (various oct files). 

* Each iteration requires many evaluations of the objective function, to get 
the search direction and the step size.

* An evaluation of the objective function can be done serially, using a normal 
Octave script, or in parallel, using MPITB. MPITB is a set of oct-files that 
link in LAM/MPI (C) functions. (A stripped-down sketch of the split follows 
this list.)
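
Roughly, the split looks like the following (the names obj_serial, obj_split, 
theta and data are invented for this sketch; the real objective and the MPITB 
message passing are more involved, but the point is just that the parallel 
version forms per-node partial sums and then combines them):

   ## serial: the objective is one long left-to-right sum over the data
   function obj = obj_serial (theta, data)
     obj = sum ((data - theta).^2);
   endfunction

   ## "parallel": each node sums its own block of the data, and the master
   ## combines the partial results -- the combination is what goes through
   ## LAM/MPI in the real code
   function obj = obj_split (theta, data, nnodes)
     partial = zeros (nnodes, 1);
     for node = 1:nnodes
       partial(node) = sum ((data(node:nnodes:end) - theta).^2);
     endfor
     obj = sum (partial);
   endfunction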


Depending on whether this is done serially or in parallel, and, if in parallel, 
on how many nodes are used, the solution path can differ. That is, the number 
of iterations may differ, though the final solution is the same (up to the 
convergence tolerances).

My question is why this is so. I'm guessing that Octave internally has a 
different numeric precision than what is used for passing values across nodes 
by MPITB-LAM/MPI. Thus the objective function value will be very slightly 
different depending on whether everything stays inside Octave (serial) or passes 
through LAM/MPI (parallel). I hypothesize that these slight differences cause 
the solution paths eventually to diverge, since very many calls are made to 
the objective function in the course of the iterations.
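
For example, the following plain Octave session (no MPI involved at all) shows 
the kind of effect I have in mind: merely splitting a sum into per-node partial 
sums and recombining them changes the result slightly, because floating-point 
addition is not associative:

   x  = randn (100000, 1);                     ## stand-in for per-observation terms
   s1 = sum (x);                               ## "serial": one left-to-right sum
   s2 = sum (x(1:2:end)) + sum (x(2:2:end));   ## "2 nodes": partial sums, then combine
   s3 = 0;
   for node = 1:4                              ## "4 nodes"
     s3 = s3 + sum (x(node:4:end));
   endfor
   printf ("s1 - s2 = %g\ns1 - s3 = %g\n", s1 - s2, s1 - s3);
   ## the differences are typically tiny but not exactly zero

A difference like that in the objective value can flip a comparison against a 
tolerance in the line search or the convergence test, after which the serial 
and parallel runs follow slightly different paths.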

Any help appreciated, thanks, Michael





