Yes, you'll get the same number of samples on both inputs if you derive
from sync_block. For example, here is the code from the "add" block:
int
add_ff_impl::work(int noutput_items,
                  gr_vector_const_void_star &input_items,
                  gr_vector_void_star &output_items)
{
    float *out = (float *) output_items[0];
    int noi = d_vlen * noutput_items;

    // Copy the first input stream into the output, then add each remaining
    // stream in place. Every input port has exactly noutput_items items
    // available here, because this block derives from gr::sync_block.
    memcpy(out, input_items[0], noi * sizeof(float));
    for (size_t i = 1; i < input_items.size(); i++)
        volk_32f_x2_add_32f(out, out, (const float *) input_items[i], noi);

    return noutput_items;
}
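To answer the two-stream question directly: for a sync_block with two input ports, item k on port 0 lines up with item k on port 1, so no tag-based alignment is needed. As an illustration only (the block name iq_combine_impl is hypothetical, not an existing GNU Radio block), a minimal two-input work() that pairs I and Q samples might look like this:

// Hedged sketch: a hypothetical sync_block "iq_combine_impl" with two float
// inputs (I and Q) and one gr_complex output. Because it derives from
// gr::sync_block, the scheduler calls work() with the same number of items
// available on every input port, aligned by index.
int
iq_combine_impl::work(int noutput_items,
                      gr_vector_const_void_star &input_items,
                      gr_vector_void_star &output_items)
{
    const float *in_i = (const float *) input_items[0]; // I samples
    const float *in_q = (const float *) input_items[1]; // Q samples
    gr_complex *out   = (gr_complex *) output_items[0];

    // Item k on port 0 corresponds to item k on port 1; no extra
    // synchronization is needed for a sync_block.
    for (int k = 0; k < noutput_items; k++)
        out[k] = gr_complex(in_i[k], in_q[k]);

    return noutput_items;
}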
For blocks derived from sync_decimator or sync_interpolator, the item
counts are likewise guaranteed, but related by a fixed ratio rather than
equal: a decimator's work() receives noutput_items * decimation() input
items per call, and an interpolator's receives noutput_items / interpolation().
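For the decimator case, that fixed ratio means work() can index its input with confidence. A rough sketch (the block name avg_decim_impl is made up for illustration; it averages every decimation() consecutive input items into one output item) could be:

// Hedged sketch: a hypothetical sync_decimator "avg_decim_impl". The
// scheduler always hands work() exactly noutput_items * decimation()
// input items, so the nested loop below never runs off the end.
int
avg_decim_impl::work(int noutput_items,
                     gr_vector_const_void_star &input_items,
                     gr_vector_void_star &output_items)
{
    const float *in = (const float *) input_items[0];
    float *out = (float *) output_items[0];
    const int decim = decimation();   // set via the sync_decimator constructor

    for (int k = 0; k < noutput_items; k++) {
        float acc = 0.0f;
        for (int j = 0; j < decim; j++)
            acc += in[k * decim + j];
        out[k] = acc / decim;
    }
    return noutput_items;
}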
On 01/02/2018 05:46 AM, Sakthivel Velumani wrote:
Hi Michael,
Thank you very much for the detailed explanation. I have one more
query: if a block has two input streams, will the number of items be the
same in both streams? Say, for example, I build a block that takes I and
Q samples as input and the algorithm requires each I sample and its
corresponding Q sample to work correctly. In this case, does the
scheduler guarantee that both buffers hold the same number of items in
the same order, or do I have to check that I am processing every Q
sample with its corresponding I sample using tags or some other
mechanism?
Best
Sakthivel
On Mon, Jan 1, 2018 at 5:06 PM, Michael Dickens
<address@hidden> wrote:
Hi Sakthivel - Short answers: The value can vary for each call; it
is determined by the scheduler. I've provided more info below if
you're curious. Cheers! - MLD
Details: One way to think of your questions is to imagine the
finite-length I/O buffers that hold the data between blocks, and
note that, in general, it is more CPU-efficient to process "more"
data than very small chunks -- typically 1k of data can be handled
more efficiently than 4 bytes, when you consider the CPU overhead
required for the scheduler (this is true up to some "large" data
amount, where processing efficiency peaks and possibly even drops
somewhat off of peak).

When the flowgraph starts, these buffers are all empty, so the
scheduler tries to get blocks to process as much input data as
possible. Once the flowgraph is running, the buffers hold (for all
practical purposes) random amounts of data, which means that the
blocks (in general) will not be able to process the same amount of
data as at startup time. Data will flow roughly in bursts from source
to sink, but since each block is executing in its own thread, the end
result is data pipelining: "work" for any specific block happens when
there is simultaneously "enough" input data and "enough" output buffer
space -- combine the "note" above with this concept and you have a
rough interpretation of the scheduler algorithm. Thus, with data
streaming, the scheduler has to be able to work with dynamic amounts
of I/O data and buffer space.
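To make the "dynamic amounts" point concrete for a general (non-sync) block: the block tells the scheduler its input requirements through forecast() and then reports what it actually used from general_work(). A rough sketch (the block name my_block_impl is made up for illustration; it simply copies one float stream through) might look like:

// Hedged sketch: how a gr::block subclass interacts with the scheduler.
// forecast() states how many input items are needed to produce
// noutput_items; general_work() consumes what it actually used.
void
my_block_impl::forecast(int noutput_items, gr_vector_int &ninput_items_required)
{
    // For this example: one input item per output item on every port.
    for (size_t i = 0; i < ninput_items_required.size(); i++)
        ninput_items_required[i] = noutput_items;
}

int
my_block_impl::general_work(int noutput_items,
                            gr_vector_int &ninput_items,
                            gr_vector_const_void_star &input_items,
                            gr_vector_void_star &output_items)
{
    const float *in = (const float *) input_items[0];
    float *out = (float *) output_items[0];

    // The scheduler may offer different amounts on each call; process
    // whatever is available, up to noutput_items. (std::min is from <algorithm>.)
    int n = std::min(noutput_items, ninput_items[0]);
    for (int k = 0; k < n; k++)
        out[k] = in[k];

    consume_each(n);   // tell the scheduler how many input items were used
    return n;          // and how many output items were produced
}

A sync_block hides this negotiation: its work() is only called once the scheduler can satisfy one input item per output item on every port.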
On Mon, Jan 1, 2018, at 10:30 AM, Sakthivel Velumani wrote:
> I am new to GNU Radio. I have a general question: when items are
streamed from one block to another, how many input_items per port
(consider a general-type block) are passed each time the work()
function of the block is called? I guess this is handled by GNU
Radio's scheduler, but I would like to know whether this number is
constant or varies for each call.