octave-maintainers

Re: Thread-safety issues in QtHandles


From: Michael Goffioul
Subject: Re: Thread-safety issues in QtHandles
Date: Sun, 30 Oct 2011 22:58:36 +0000

On Fri, Oct 28, 2011 at 7:10 PM, Michael Goffioul
<address@hidden> wrote:
> Hi,
>
> As you might know, I've designed QtHandles to run the UI in a separate
> thread from octave (which provides a much more responsive interface).
> Unfortunately, octave code is not thread-safe (for instance, the
> internal reference counting system is not), and despite all my
> efforts to add guards against race conditions, I experience random
> crashes within QtHandles under medium/heavy load. These are pretty
> difficult to track down as they happen randomly, are not
> reproducible, and usually disappear when I run under a debugger or
> valgrind.
>
> To do some investigation, I'd like to block the octave REPL loop
> (basically using the graphics lock), but I'm not sure where to put
> that. Any pointers to the internals of the REPL loop would be
> greatly appreciated.
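(For reference, here's a minimal standalone illustration of the kind
of refcount race described above; nothing in it comes from the Octave
sources:)

    #include <atomic>
    #include <iostream>
    #include <thread>

    // Non-atomic counter: ++/-- are read-modify-write operations, so
    // concurrent updates can interleave and lose increments (in a
    // refcounted value class this becomes a premature delete or a leak).
    static int plain_count = 1;
    // Atomic counter: each ++/-- is a single indivisible operation.
    static std::atomic<int> atomic_count (1);

    int main ()
    {
      auto bump = [] ()
      {
        for (int i = 0; i < 1000000; i++)
          {
            ++plain_count; --plain_count;
            ++atomic_count; --atomic_count;
          }
      };
      std::thread t1 (bump), t2 (bump);
      t1.join ();
      t2.join ();
      // atomic_count reliably ends at 1; plain_count frequently does not.
      std::cout << plain_count << " " << atomic_count << std::endl;
      return 0;
    }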

As a follow-up, I've been considering replacing the reference counting
in octave with shared_ptr (afaik, reference counting is thread-safe in
shared_ptr implementations). I've converted liboctave (Array, Sparse,
idx_vector...) and most of octinterp. However, I'm having a hard time
converting the octave_value/octave_base_value classes.
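For what it's worth, the shape of the conversion for the liboctave
containers looks roughly like this (a sketch with made-up names;
my_array and rep_t are placeholders, not the real classes):

    #include <memory>

    // Before: the container held "rep_t *rep" and did ++rep->count in
    // its copy constructor and --rep->count (plus delete) in its
    // destructor.  After: a shared_ptr owns the rep, and its control
    // block does the counting with atomic operations, so copying and
    // destroying values is safe across threads.
    class my_array
    {
    public:
      my_array () : rep (std::make_shared<rep_t> ()) { }

      // Copy construction, assignment, and destruction now need no
      // hand-written counting at all.

    private:
      struct rep_t
      {
        // dimensions, data pointer, etc.
      };

      std::shared_ptr<rep_t> rep;
    };

Note that shared_ptr only makes the count itself thread-safe; two
threads writing to the same rep's data still need external locking.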

The main problem is that when using shared_ptr, the underlying class
no longer has a counter of its own, as the count is managed by the
shared_ptr. However, there are various places where the "count" member
is used, for instance to return the object itself (as an
octave_base_value*) wrapped in an octave_value object. A typical
example is the subsasgn method. I've considered changing the return
type of octave_base_value::subsasgn to void (any implementation is
supposed to return itself anyway), but I think that would break the
octave_class code. So if anybody (probably jwe?) could give me a hand
on this one, I'd really appreciate it.
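In case it helps the discussion, one direction I can imagine (purely a
sketch with stand-in names, not the actual octave_base_value
interface) is std::enable_shared_from_this, which lets an object hand
out a shared_ptr to itself without any "count" member:

    #include <memory>

    // "base_value" stands in for octave_base_value here.
    class base_value : public std::enable_shared_from_this<base_value>
    {
    public:
      virtual ~base_value () { }

      // Where a method previously did "count++" and returned itself
      // wrapped in an owning object, it can instead return a pointer
      // sharing ownership with the shared_ptr that already manages it.
      // Caveat: shared_from_this is only valid when *this is already
      // owned by some shared_ptr, so any code path that creates a
      // value on the stack or holds it only by raw pointer would break.
      std::shared_ptr<base_value> self () { return shared_from_this (); }
    };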

Thanks,
Michael.

