
Re: Enabling OpenMP by default


From: Thomas Weber
Subject: Re: Enabling OpenMP by default
Date: Mon, 18 Mar 2013 08:30:51 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Mar 12, 2013 at 11:23:06AM +0200, Susi Lehtola wrote:
> On Mon, 11 Mar 2013 21:03:30 -0300
> Júlio Hoffimann <address@hidden> wrote:
> > > I could not disagree more with this statement. It assumes no other
> > > processes
> > > ever need to use your hardware concurrently, which is usually _NOT_
> > > the case.
> > >
> > Hi Carlo, that's a good point. I always prefer to write code for
> > multiple cores though, no matter the speedup. I hope that as the
> > technology evolves, the performance gains will become noticeable. And
> > of course, another justification is that it can sometimes be achieved
> > with very little effort.
> 
> But this is really unintelligent behavior. Plugging in parallelism
> always adds overhead, and because of that parallelized code can
> be *SLOWER* than the serial code.

Not necessarily. OpenMP can be used to trivially parallelize for-loops,
for example. If you have a multi-core CPU (and I would be very surprised
if someone working with Octave did not have at least a dual-core CPU),
then your operating system may already decide to switch your process
from one core to the next whenever it sees fit, including at every step
of the for-loop - which is exactly the point where OpenMP parallelizes.
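
For illustration, here is a minimal sketch of the kind of loop OpenMP
handles with a single pragma. It is not taken from the Octave sources;
the file name and variables are made up:

    // parallel_axpy.cc -- hypothetical example; compile with
    //   g++ -fopenmp parallel_axpy.cc
    #include <cstdio>
    #include <vector>
    #ifdef _OPENMP
    #include <omp.h>
    #endif

    int main ()
    {
      const int n = 1000000;
      std::vector<double> x (n, 1.0), y (n);

      // The iterations are independent, so OpenMP may split the index
      // range across the available cores.  Without -fopenmp the pragma
      // is simply ignored and the loop runs serially.
      #pragma omp parallel for
      for (int i = 0; i < n; i++)
        y[i] = 2.0 * x[i] + 1.0;

    #ifdef _OPENMP
      std::printf ("max threads: %d\n", omp_get_max_threads ());
    #endif
      std::printf ("y[0] = %g\n", y[0]);
      return 0;
    }

The same file compiles with or without -fopenmp, and even when OpenMP
is enabled the user keeps control of the thread count, e.g. by setting
OMP_NUM_THREADS=1 in the environment. That is part of why the
incremental effort is so small.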

Having worked with OpenMP over the last two years, I would say that
today the sane behaviour is to *always* activate it. Operating systems
have become far better at dealing with multiple cores, and the 'more
than one core' environment is now the norm.

> Using parallelism for parallelism's sake is not sane behavior. In
> places where the operations are CPU-bound, using parallelism can be
> fruitful.

How much do you use Octave in an I/O-bound environment?

        Thomas

