
Re: [Discuss-gnuradio] set_relative_rate


From: Miklos Maroti
Subject: Re: [Discuss-gnuradio] set_relative_rate
Date: Fri, 7 Feb 2014 14:10:19 +0100

Hi Tom,

On Fri, Feb 7, 2014 at 11:10 AM, Tom Rondeau <address@hidden> wrote:
> On Thu, Feb 6, 2014 at 9:14 PM, Miklos Maroti <address@hidden> wrote:
>> Hi Tom,
>>
>> Thanks for the answer! I have considered both approaches already. What
>> you are saying is that set_relative_rate cannot capture this scenario,
>> so it is impossible to set different relative rates, right?
>
> Right; relative_rate is defined as a single value for the entire block.
> You can still consume and produce at different rates for each
> input/output stream.
>
>> Where exactly are the relative rates used in gnuradio core? Only for
>> the buffer size calculations or are they also used during runtime?
>
> Yes, mostly the initial buffer size calculation. It's also used to
> update the item offset value of a tag through a rate-changing block.
>
>> By the way, the vector approach does not scale ideally: if I increase
>> the size of the vectors (to 100000 samples) or use set_output_multiple
>> with such a large value, then the performance of the block degrades,
>> and I do not really understand why. If the block does pure streaming
>> (e.g. add) and does not require large quantities of data, then
>> everything works fine. I do not want to use messages, because the data
>> is processed (filtered, length changed, etc.) along with other
>> transformations. Anyhow, what I am getting at is that there is no good
>> way of processing very large blocks of data.
>
> Use gr-perf-monitorx (or in GRC just look for Performance Monitor) if
> you have ControlPort enabled and built properly [1][2]. You'll likely
> see the buffer in front of your block backing up while the output
> buffer stays fairly empty, because the scheduler has to dump a lot of
> data into your block before anything else can go, so you'll be
> starving the follow-on blocks.

Yes, I have used the performance monitor, and indeed the data backs up
at that point and starves the follow-on blocks. What I have found is
that increasing the history size (set_history) to huge values does not
impair the performance, but increasing the output size (either by using
huge vectors or by using set_output_multiple) degrades the performance
significantly. I am talking about 1000000 samples at a time. However, I
think the starving happens because the block is unable to produce the
data fast enough: maybe set_output_multiple works like a filter, in
that it just rounds noutput_items down to an integer multiple, but the
scheduler will keep calling this block, which cannot produce data
because there is not enough space in the downstream buffer.
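
To be concrete, the kind of setup I mean is roughly this (a sketch
only; the block name and the float item type are placeholders):

#include <gnuradio/sync_block.h>
#include <gnuradio/io_signature.h>
#include <cstring>

// Sketch only: set_output_multiple() makes the scheduler call work()
// only with noutput_items that is a multiple of the given value, so a
// huge multiple means work() cannot run until at least that much
// output buffer space is free downstream.
class big_multiple_block : public gr::sync_block
{
public:
    big_multiple_block()
      : gr::sync_block("big_multiple_block",
                       gr::io_signature::make(1, 1, sizeof(float)),
                       gr::io_signature::make(1, 1, sizeof(float)))
    {
        set_output_multiple(1000000); // 1M samples per work() call
    }

    int work(int noutput_items,
             gr_vector_const_void_star& input_items,
             gr_vector_void_star& output_items)
    {
        // Pure streaming pass-through, just to show the structure.
        std::memcpy(output_items[0], input_items[0],
                    noutput_items * sizeof(float));
        return noutput_items;
    }
};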

> Another model is to try and handle the state internally. Just allow
> data to flow in from each data stream and keep internal buffers. This
> might allow you to work with the scheduler better.

Yes, I have considered that as well, but then the block would have to
copy the data twice (from the stream into internal memory and back).
Maybe that is the easiest way to do it, but it would most likely
require a non-fixed-rate block. If set_output_multiple did not degrade
performance, then that would be the simplest way to do things.
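
Something like the following is what I have in mind for the
internal-buffer variant (just a sketch with made-up names; note the
two copies):

#include <gnuradio/block.h>
#include <gnuradio/io_signature.h>
#include <algorithm>
#include <cstring>
#include <vector>

// Sketch of the "keep internal buffers" idea: general_work() copies
// whatever arrives into a std::vector and only produces once a full
// batch has been collected, so the scheduler is never asked for a huge
// contiguous output chunk in one shot.
class buffering_block : public gr::block
{
public:
    buffering_block(size_t batch)
      : gr::block("buffering_block",
                  gr::io_signature::make(1, 1, sizeof(float)),
                  gr::io_signature::make(1, 1, sizeof(float))),
        d_batch(batch)
    {
        d_buf.reserve(batch);
    }

    int general_work(int noutput_items,
                     gr_vector_int& ninput_items,
                     gr_vector_const_void_star& input_items,
                     gr_vector_void_star& output_items)
    {
        const float* in = static_cast<const float*>(input_items[0]);
        float* out = static_cast<float*>(output_items[0]);

        // First copy: stream -> internal buffer.
        int take = std::min(ninput_items[0],
                            static_cast<int>(d_batch - d_buf.size()));
        d_buf.insert(d_buf.end(), in, in + take);
        consume_each(take);

        // Second copy: internal buffer -> stream, once a batch is full.
        if (d_buf.size() < d_batch)
            return 0;
        int give = std::min(noutput_items, static_cast<int>(d_buf.size()));
        std::memcpy(out, d_buf.data(), give * sizeof(float));
        d_buf.erase(d_buf.begin(), d_buf.begin() + give);
        return give;
    }

private:
    size_t d_batch;
    std::vector<float> d_buf;
};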

> I'm interested to see if you can get an approach that works well with
> your problem. So far, what you're trying to do seems somewhat of a
> non-standard use-case for GNU Radio, but I can see more people trying
> to do this kind of processing in the future. Would be good to know
> both the limits and why.

The typical problematic block is the following: take 128 blocks of
5000 samples each and turn them into a stream of 5000 vectors of 128
samples each. This is just matrix transposition: read a large matrix
in row by row and output its values column by column.
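
In code the idea is roughly this (only a sketch, with the sizes
hard-coded and the block name made up):

#include <gnuradio/block.h>
#include <gnuradio/io_signature.h>

// Sketch only: one input item is a 5000-sample row, one output item is
// a 128-sample column, and a single general_work() call transposes one
// full 128 x 5000 matrix.
class transpose_block : public gr::block
{
    static const int ROWS = 128;   // input vectors per matrix
    static const int COLS = 5000;  // samples per input vector

public:
    transpose_block()
      : gr::block("transpose_block",
                  gr::io_signature::make(1, 1, COLS * sizeof(float)),
                  gr::io_signature::make(1, 1, ROWS * sizeof(float)))
    {
        set_output_multiple(COLS);               // emit whole matrices only
        set_relative_rate((double)COLS / ROWS);  // 5000 out per 128 in
    }

    void forecast(int noutput_items, gr_vector_int& ninput_items_required)
    {
        // COLS output columns require ROWS input rows.
        ninput_items_required[0] = ROWS * (noutput_items / COLS);
    }

    int general_work(int noutput_items,
                     gr_vector_int& ninput_items,
                     gr_vector_const_void_star& input_items,
                     gr_vector_void_star& output_items)
    {
        if (ninput_items[0] < ROWS || noutput_items < COLS)
            return 0;
        const float* in = static_cast<const float*>(input_items[0]);
        float* out = static_cast<float*>(output_items[0]);
        // Transpose: row-major input rows become column output items.
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                out[c * ROWS + r] = in[r * COLS + c];
        consume_each(ROWS);
        return COLS;
    }
};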

Miklos

> [1] http://gnuradio.org/doc/doxygen/page_ctrlport.html
> [2] http://gnuradio.org/redmine/projects/gnuradio/wiki/PerformanceCounters
>
>
> Tom
>
>
>> Miklos
>>
>> On Thu, Feb 6, 2014 at 11:15 AM, Tom Rondeau <address@hidden> wrote:
>>> On Wed, Feb 5, 2014 at 7:02 PM, Miklos Maroti <address@hidden> wrote:
>>>> Hi Guys,
>>>>
>>>> Is it possible to write a C++ block that takes 2 input streams and
>>>> produces 1 output stream, but to generate 1000 outputs it needs 1000
>>>> inputs of the first kind and 1 input of the second kind? How do I
>>>> call set_relative_rate? Does it apply to both input streams? How can
>>>> I ensure that the scheduler does not create too big a buffer for the
>>>> second type of input?
>>>>
>>>> Miklos
>>>
>>>
>>> There are a couple of ways to do this. It might be easiest for you to
>>> use vectors of samples on input port 0. The output could be another
>>> vector or you could convert it to a stream again here. This is
>>> assuming that you always want to process 1000 samples at a time for
>>> every 1 sample on input port 1. You set your IO signature like:
>>>
>>> gr::io_signature::make2(2, 2, 1000*sizeof(type0), 1*sizeof(type1))
>>>
>>> The output signature is either 1000*sizeof(type0), in which case you
>>> can use a gr::sync_block (because 1 output item corresponds to 1
>>> input item), or it is 1*sizeof(type0), in which case you'll use a
>>> gr::sync_interpolator because now you'll be producing 1000 output
>>> items for every input item. See vector_to_stream for a model of this
>>> second approach.
>>>
>>> You might also want to consider the tagged stream interface instead
>>> of an indicator on stream 1. You would then have one input stream but
>>> look for the tag to process your 1000 samples. This would be a more
>>> general approach if you aren't always using 1000 items at a time.
>>>
>>> Tom
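
(For reference, the vector-input arrangement described in the quoted
reply above might be set up roughly like this; the block name, the
float/char item types, and the gating logic are made up for
illustration.)

#include <gnuradio/sync_block.h>
#include <gnuradio/io_signature.h>
#include <cstring>

// Sketch only: port 0 carries 1000-sample vectors, port 1 carries one
// control item per vector, and each output item is again a 1000-sample
// vector, so a plain sync_block works (1 output item per input item).
class vector_pair_block : public gr::sync_block
{
    static const int VLEN = 1000;

public:
    vector_pair_block()
      : gr::sync_block("vector_pair_block",
                       gr::io_signature::make2(2, 2,
                                               VLEN * sizeof(float), // port 0: data vectors
                                               sizeof(char)),        // port 1: control items
                       gr::io_signature::make(1, 1, VLEN * sizeof(float)))
    {
    }

    int work(int noutput_items,
             gr_vector_const_void_star& input_items,
             gr_vector_void_star& output_items)
    {
        const float* data = static_cast<const float*>(input_items[0]);
        const char* ctrl = static_cast<const char*>(input_items[1]);
        float* out = static_cast<float*>(output_items[0]);

        for (int i = 0; i < noutput_items; i++) {
            // One control item per 1000-sample vector; here it merely
            // gates a copy, standing in for the real processing.
            if (ctrl[i])
                std::memcpy(out + i * VLEN, data + i * VLEN, VLEN * sizeof(float));
            else
                std::memset(out + i * VLEN, 0, VLEN * sizeof(float));
        }
        return noutput_items;
    }
};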


