From: David Lapsley
Subject: Re: [Discuss-gnuradio] V3 Comments on "BBN's Proposed extensions for data networking"
Date: Thu, 15 Jun 2006 10:45:52 -0400
User-agent: Microsoft-Entourage/11.2.3.060209

On 6/15/06 8:37 AM, "Michael Dickens" <address@hidden> wrote:

> On Jun 15, 2006, at 12:51 AM, David Lapsley wrote:
>> On 6/14/06 2:24 PM, "Michael Dickens" <address@hidden> wrote:
>>> Also, the whole discussion of packet radio requirements doesn't
>>> really fit into the GR baseline, and should instead probably be in
>>> 4.3, or at least elsewhere.
>> 
>> Do you mean 4.5.2?  The intent here was to describe the current
>> packet capabilities in GNU Radio.  The last paragraph could be moved
>> to the requirements section, but do you think the whole section
>> should go?
> 
> Sure, you could move them to 4.5.2, so long as they're rephrased as
> "limitations of the current framework" as opposed to "packet-radio
> needs".  Limitations are OK, since they work within the baseline
> concept; packet-radio needs do not work there, since they have
> nothing directly to do with the baseline.

Sorry, I thought your initial comment was referring to the last paragraph
of 4.5.2, but it seems it was actually referring to the bullet points at
the bottom of page 64.  I'll go with your original suggestion and move
these into the requirements section.
 
>>> p68&70, 4.8.1: How can you implement "a mechanism is required that
>>> will allow m-blocks to relinquish control of a processor after a
>>> certain number of processor cycles have been used" for a gr-flow-
>>> graph and guarantee that the internal flow-graph's memory is
>>> maintained?  How is this implemented in general?  I guess you could
>> 
>> I think Eric had discussed this earlier.  You are correct that it is
>> not possible to pre-empt a gr-flow-graph once it has started.  The
>> idea is to ensure that the amount of data fed into the gr-flow-graph
>> can be processed within/close to the allowed time.  By making use of
>> the timing information carried in the m-blocks, it should be possible
>> to estimate the processing throughput of different gr-flow-graphs and
>> then use this estimate to work out the maximum amount of data that
>> can be fed into a gr-flow-graph in order to complete processing
>> within the time budget.
> 
> Ahhh .... so will there be some "test runs" to get timing
> information, in order to have a better estimate of latencies?  From
> another perspective: How does this info get gathered without any
> estimate of how long it will take and thus how many CPU cycles to
> allow for the gr-flow-graph computation?  Yes, you can surely do what
> you've written ... I'm just wondering how the estimates are initialized.

Yes, there could be test runs, and the initialization is the tricky part.  I
don't know that we should go into that much detail in the architecture
document (other than making clear what will be available to the
developer/user).  
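
To make that concrete, here's a rough back-of-the-envelope sketch in
plain Python.  The names are mine, not part of the proposed m-block
API; 'samples' stands in for the timing information carried in the
m-block metadata:

    # Hypothetical helpers, not the proposed API.
    def estimate_throughput(samples):
        # samples: list of (items_processed, seconds_elapsed) pairs
        # gathered from m-block timing information.
        total_items = sum(n for (n, t) in samples)
        total_time = sum(t for (n, t) in samples)
        return total_items / total_time      # items per second

    def max_chunk(samples, budget, safety=0.8):
        # Largest input (in items) we'd expect a gr-flow-graph to
        # finish within 'budget' seconds, derated to leave headroom.
        return int(estimate_throughput(samples) * budget * safety)

    # Bootstrap with a deliberately small test run, then refine as
    # real timing samples accumulate:
    samples = [(4096, 0.012), (4096, 0.011), (8192, 0.021)]
    print(max_chunk(samples, 0.005))         # -> roughly 1500 items

The safety factor is just my guess at how you'd leave headroom while
the estimate converges.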

Eric, what are your feelings on this?
 
>>> p76, 4.9.11: "Messages arriving at an unconnected relay port are
>>> discarded."  ... while it's nice to have unconnected ports, this
>>> takes extra processing to deal with.  Is it possible to never have
>>> unconnected ports, and/or to always make use of all ports?  Or in the
>>> dynamic graphing, is this just a possibility which can happen and
>>> thus needs to be considered?
>> 
>> It would be possible to prohibit unconnected ports, but allowing them
>> gives an extra degree of freedom that excluding them would not.  For
>> example, a port could be initially unconnected, and then connected at
>> a later stage.
> 
> Hmmm ... good point.  In a dynamic system, ports could get dropped or
> connected "on the fly".  Could you write a quick blurp about this,
> somewhere before 4.9?  Maybe 4.6.8 or 4.8.6?

Sure.  No problems.
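
Something along these lines, perhaps (toy Python to pin down the
semantics; RelayPort here is my own stand-in, not the actual m-block
class):

    class RelayPort(object):
        def __init__(self):
            self.peer = None                 # starts unconnected

        def connect(self, peer):
            self.peer = peer

        def disconnect(self):
            self.peer = None

        def send(self, msg):
            if self.peer is None:
                return                       # unconnected: discard
            self.peer.deliver(msg)

    class PrintSink(object):
        def deliver(self, msg):
            print("delivered: %s" % msg)

    port = RelayPort()
    port.send("lost")                        # no peer yet: discarded
    port.connect(PrintSink())
    port.send("kept")                        # forwarded to the sink

That captures both behaviours: silent discard on an unconnected port,
and connecting "on the fly" later.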

Cheers,

Dave.