
Re: Number of samples processed in the work function


From: Marcus Müller
Subject: Re: Number of samples processed in the work function
Date: Tue, 28 Jun 2022 19:18:34 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.9.0

Hi Daniel

On 28.06.22 18:44, Perkins, Daniel (US) wrote:
>   * processing more samples is not recommended

Not only is it not recommended, it's strictly forbidden, and it breaks GNU Radio.
You only get as much output buffer space as noutput_items says; if you produce more, you're overwriting parts of previous calls' output.

> My socket packets contain a fixed number of samples, so to avoid an extra memory transfer, I prefer to copy straight from the socket buffer to the output buffer with a VOLK function. To make this work, I need to return that number (1024) of samples from the work function, which sometimes violates the “processing more samples” rule.

Then your block is broken! This probably only works because you don't notice how you're overwriting data that has not yet been processed.

> However, this seems to work without any issues other than the occasional dropped UDP frame. I can also control what GNU Radio will assign to noutput_items by calling set_min_noutput_items.

That's the right thing to do.

> When I set the min noutput_items to the size of my payload, I get a bunch of underruns. What is the optimal way to deal with this?


What happens here is that GNU Radio waits to call your work function until the processing downstream has consumed enough items so that there's 1024 or more items of space in the output ring buffer.

If that takes longer, on average, than it takes your source to produce these samples, you have a problem: You're trying to attach a hamster to a water hose and tell it to drink fast enough. No matter how big you make that hamster's cheek pouches, at some point the hamster will have to spill some (overflow), if it can't drink as fast as the hose pumps in.

So the solution is both to set min_noutput_items and to make sure the rest of the flowgraph is fast enough that there's always enough space for your work() to write into.
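One knob that can help with the "enough space" part (my own suggestion, not something from Marcus's reply) is to request a bigger output buffer on the source, so that brief downstream hiccups don't immediately starve work() of space. Extending the sketch above; as far as I recall, set_min_output_buffer() takes the size in items and must be called before the flowgraph starts, e.g. in the constructor:

    // My suggestion, not from the thread: combine the minimum work() size
    // with a larger output buffer for extra headroom.
    packet_source::packet_source()
        : gr::sync_block("packet_source",
                         gr::io_signature::make(0, 0, 0),
                         gr::io_signature::make(1, 1, sizeof(gr_complex)))
    {
        set_min_noutput_items(PAYLOAD_ITEMS);     // work() always sees >= one packet of space
        set_min_output_buffer(8 * PAYLOAD_ITEMS); // several packets of slack downstream
    }

This only buffers over short stalls, of course; if the downstream blocks are slower than the source on average, no buffer size will save you, as the hamster analogy above points out.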

Best regards,
Marcus

PS: I swear, no hamsters were harmed in the making of this email!


