
Re: [Discuss-gnuradio] mblock update


From: Eric Blossom
Subject: Re: [Discuss-gnuradio] mblock update
Date: Wed, 2 May 2007 08:13:05 -0700
User-agent: Mutt/1.5.9i

On Wed, May 02, 2007 at 09:49:28AM -0400, George Nychis wrote:
> 
> Eric Blossom wrote:
> 
> > Still remaining are:
> >   * gluing mblocks and flow graphs together in the same system
> 
> Part of this includes the scheduler, right?
> 
> When we get to the point of the scheduler, I want to toss it up for
> discussion.  Or we can just toss it up for discussion now :D  I
> haven't been fully convinced by the BBN doc that it's the kind of
> scheduler we want, or that we can't get a scheduler that
> inter-operates with both m-blocks and traditional blocks.
> 
> When we first started working on the in-band project and were
> looking into the BBN doc, something struck us as wrong about the
> scheduler ("us" being me and Thibaud).  We think it caters too much
> to the m-block, when you could create a scheduler that also works
> with other blocks users might create that want priority queues.
> Basically, we see increased complexity in the system from running
> two schedulers, one of which caters to a specific block type.  On
> top of that, two schedulers will add additional scheduling overhead.
> We want to either mash them together and try to build a scheduler
> that works with both types of blocks, or at least not cater the new
> scheduler so much to m-blocks, but instead make m-blocks work with
> it.
> 
> We're not sure what is and isn't feasible, which is where you come
> in :D  But we think it's at least worth some more discussion.
> 
> - George

I think there might be a bit of misunderstanding here.

The biggest piece of the problem is interfacing the i/o between the
two abstractions.  This isn't really an OS "scheduler" problem.

FYI, the mblock runtime currently puts every mblock instance in its
own thread.  We'll be trying a similar experiment with the flow graph
stuff relatively soon (every gr_block in its own thread).  In both
cases, we'll depend on the underlying OS to schedule the blocks when
more of them are ready to run than you have processors/cores.  We
will provide hooks to allow the app developer to specify desired
priority, processor affinity and NUMA bindings, but I suspect that in
most cases these will just be for tuning.
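
To make the hooks concrete, here's a rough sketch of what applying
per-thread priority and affinity hints might look like on Linux.
Every name in it is invented for illustration; this is not the actual
mblock runtime interface.

    // Hypothetical sketch, not the mblock runtime API.  Each block runs
    // in its own POSIX thread; the app developer can optionally supply
    // scheduling hints.  Linux/glibc specific: pthread_setaffinity_np
    // is a nonportable extension.
    #include <pthread.h>
    #include <sched.h>

    struct block_thread_hints {
      int priority;   // e.g., SCHED_FIFO priority; 0 = leave the default
      int cpu;        // processor to bind to; -1 = no binding
    };

    static void apply_hints(pthread_t thread, const block_thread_hints &h)
    {
      if (h.priority > 0) {
        sched_param sp;
        sp.sched_priority = h.priority;
        pthread_setschedparam(thread, SCHED_FIFO, &sp);
      }
      if (h.cpu >= 0) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(h.cpu, &set);
        pthread_setaffinity_np(thread, sizeof(set), &set);
      }
    }

In practice you'd check the return values of both calls, and raising
a thread to a real-time scheduling class typically requires
privileges.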

Independent of the underlying OS, gr_blocks and mblocks have
different constraints that must be satisfied in order for them to be
considered runnable.  E.g., an mblock is runnable if there are
messages in its message queue.  A gr_block is runnable if there is
sufficient input and sufficient downstream buffer space to write the
output.
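
Expressed as predicates (a minimal sketch; these structs and names
are made up for this note, not the actual GNU Radio or mblock
interfaces), the two conditions look something like:

    // Illustrative only -- not real GNU Radio / mblock declarations.
    #include <cstddef>

    struct mblock_state {
      std::size_t queued_msgs;        // messages waiting in the queue
    };

    struct gr_block_state {
      std::size_t input_items_avail;  // items available on the input
      std::size_t output_space_avail; // free space in downstream buffer
      std::size_t min_input_items;    // least input the work call needs
      std::size_t min_output_items;   // least output space it needs
    };

    // An mblock is runnable when it has at least one pending message.
    bool mblock_runnable(const mblock_state &m)
    {
      return m.queued_msgs > 0;
    }

    // A gr_block is runnable when there's enough input to consume and
    // enough downstream buffer space to write the resulting output.
    bool gr_block_runnable(const gr_block_state &b)
    {
      return b.input_items_avail >= b.min_input_items
          && b.output_space_avail >= b.min_output_items;
    }

A combined scheduler would have to evaluate both kinds of predicates;
the point is that the readiness conditions differ, not that the OS
can't handle the threads.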

Now, as part of the desire to combine the data flow abstraction and
the message passing abstraction, there are use cases where the data
flow seems like it should be subordinate to the message passing
abstraction (i.e., it feels like a procedure call).  This is
particularly true when the high level message passing code knows
about, for example, packet boundaries, but the data flow code
doesn't.  In these cases one could imagine the packet based code
feeding bytes to the data flow code, and receiving samples back.
When all the samples that correspond to a given packet have been
generated, the packet based code may want to take "packet based"
action.  E.g., send this frame of samples to the i/o device (e.g.,
USRP) as a single logical entity, to be transmitted on a particular
frequency, at a given time, with a specific power level.
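
Purely as illustration (modulate, usrp_send_frame and tx_metadata
below are made-up names, not anything in the tree), that pattern
looks roughly like:

    // Hypothetical sketch of the "data flow as a procedure call"
    // pattern described above.  All names are invented.
    #include <complex>
    #include <cstdint>
    #include <vector>

    struct tx_metadata {
      double        freq_hz;   // carrier frequency for this frame
      double        power_db;  // transmit power level
      std::uint64_t tx_time;   // scheduled transmit time (device ticks)
    };

    // Stand-in for running a packet's bytes through a modulator flow
    // graph: bytes in, samples out (toy antipodal mapping here).
    std::vector<std::complex<float>>
    modulate(const std::vector<std::uint8_t> &bytes)
    {
      std::vector<std::complex<float>> samples;
      for (std::uint8_t b : bytes)
        samples.emplace_back(b ? 1.0f : -1.0f, 0.0f);
      return samples;
    }

    // Stand-in for handing a complete frame to the USRP with metadata.
    void usrp_send_frame(const std::vector<std::complex<float>> &samples,
                         const tx_metadata &md)
    {
      (void)samples; (void)md;  // a real system would queue this to the device
    }

    void send_packet(const std::vector<std::uint8_t> &packet,
                     const tx_metadata &md)
    {
      // The flow graph acts like a procedure call: bytes in, samples out.
      std::vector<std::complex<float>> samples = modulate(packet);

      // Once every sample for the packet exists, take packet-based
      // action: transmit the whole frame as one logical entity.
      usrp_send_frame(samples, md);
    }

The interesting part is everything hidden inside modulate(): that's
exactly the boundary between the two abstractions.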

I believe that we're going to find that there is a natural
decomposition of problems across the two domains.  E.g., pretty much
anything that looks MAC-like is going to want to run as an mblock.
The data is inherently packet based, and the logic is based on events
such as packets received and timeouts.  I suspect that much of channel
coding will fall in this category too.  On the other hand, lots of PHY
layer kinds of things (low level mods and demods) fit quite nicely in
the data flow abstraction.

I'm not sure if I've addressed your concerns.  

I believe the question that remains is how would _you_ want to
interface mblocks and gr_blocks/flow_graphs?  I suspect that the right
answer is use case dependent.

Eric



