discuss-gnuradio

Re: [Discuss-gnuradio] Slow down rate of Python source block


From: Tom Rondeau
Subject: Re: [Discuss-gnuradio] Slow down rate of Python source block
Date: Fri, 1 Aug 2014 09:30:32 -0400

On Fri, Aug 1, 2014 at 5:24 AM, David Halls <address@hidden> wrote:

________________________________________
From: address@hidden [address@hidden] on behalf of Tom Rondeau [address@hidden]
Sent: 31 July 2014 19:11
To: David Halls
Cc: address@hidden
Subject: Re: [Discuss-gnuradio] Slow down rate of Python source block


On Thu, Jul 31, 2014 at 12:21 PM, David Halls <address@hidden> wrote:
Dear All,

I have a Python block that produces packets of 1536 bytes. For various reasons, the later stages of my flow graph are very slow (this is desired and cannot be changed). After producing 510 packets, I get the following error.

"handler caught exception: operands could not be broadcast together with shapes (1527) (1536)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gnuradio/gr/gateway.py", line 55, in eval
try: self._callback()
File "/usr/local/lib/python2.7/dist-packages/gnuradio/gr/gateway.py", line 160, in __gr_block_handle
) for i in self.__out_indexes],
File "/usr/local/lib/python2.7/dist-packages/trl/blsd_enc_b.py", line 198, in work
out_cA[0:len(cAm)] = cAm
ValueError: operands could not be broadcast together with shapes (1527) (1536)
thread[thread-per-block[1]: <block blsd_enc_b (2)>]: caught unrecognized exception"

Debugging more carefully, I can see that:

len(cAm) = 1536 , len(out_cA) = 32768
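For anyone hitting the same traceback, the mismatch is easy to reproduce with plain NumPy (the array sizes below mirror the error message): when the scheduler hands work() an output buffer shorter than one packet, the slice silently clips to the buffer's end, and assigning a full packet into it raises exactly this broadcast error.

```python
import numpy as np

packet = np.zeros(1536, dtype=np.uint8)   # one full packet (cAm)
out_buf = np.zeros(1527, dtype=np.uint8)  # what the scheduler provided this call

try:
    # out_buf[0:1536] clips to a length-1527 view, so assigning
    # 1536 items cannot broadcast and raises ValueError.
    out_buf[0:len(packet)] = packet
except ValueError as err:
    print(err)
```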

Just a quick response without really studying the problem or your code. The dynamic scheduler in GR is getting in your way, and the throttle block is definitely not the right way to help you. You need to either tell the scheduler how much data to send your block or handle the buffering internally. There are three ways to solve this:

1. Use set_output_multiple in the constructor, which will only allow the scheduler to send you chunks of data in multiples of the number you pass to it. I've seen this slow down the scheduler in other situations, but it sounds like you're going slow anyway, so this shouldn't cause a problem.

2. Make your io signature use your packet_length so each item will be a vector of that length. This would not be my preferred way, but we've played that game before.

3. Handle it internally. Buffer up the input until you have enough to produce what you need. You'd need to inherit from gr.basic_block here and do more management of the data and buffers yourself.
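As a rough sketch of option 1 (the class name and the 1536-byte packet length are assumptions taken from this thread, not a tested implementation), a Python source block can call set_output_multiple in its constructor so that work() is only ever handed whole packets. A minimal stand-in for gr.sync_block is included so the sketch runs even without GNU Radio installed:

```python
import numpy as np

try:
    from gnuradio import gr  # use the real base class when GNU Radio is present
except ImportError:
    class gr:  # minimal stand-in so this sketch is self-contained
        class sync_block(object):
            def __init__(self, name, in_sig, out_sig):
                pass
            def set_output_multiple(self, n):
                pass

PACKET_LEN = 1536  # packet size from the thread


class packet_source(gr.sync_block):
    """Hypothetical source block that only emits whole packets."""

    def __init__(self):
        gr.sync_block.__init__(self,
                               name="packet_source",
                               in_sig=None,
                               out_sig=[np.uint8])
        # Option 1: ask the scheduler to call work() only with output
        # buffers whose length is a multiple of PACKET_LEN.
        self.set_output_multiple(PACKET_LEN)
        self.count = 0

    def make_packet(self):
        # Placeholder for the real encoder; fills each packet with a counter.
        pkt = np.full(PACKET_LEN, self.count % 256, dtype=np.uint8)
        self.count += 1
        return pkt

    def work(self, input_items, output_items):
        out = output_items[0]
        n_packets = len(out) // PACKET_LEN  # always >= 1 thanks to the multiple
        for i in range(n_packets):
            out[i * PACKET_LEN:(i + 1) * PACKET_LEN] = self.make_packet()
        return n_packets * PACKET_LEN
```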

Tom
________________________________________


Tom,

Thanks for your reply. Unfortunately, as it is a Python block, I cannot use 'set_output_multiple'. For 2, do you mean setting the input signature of the blocks fed from the source block? This could be possible, but quite a few different blocks (some standard GR blocks) are fed from it. Could you provide some more details, or a link to a similar implementation, for option 3? Again, is this possible from within a Python block?

Regards,

David


David,

set_output_multiple is exposed through the Python gateway code, so you should be able to call self.set_output_multiple inside the constructor.

For 2, yes, I mean that you can adjust the io signature of the block, essentially making one item a full packet (a vector of packet_length). But yes, this makes it more difficult to integrate with other blocks, which is one reason I said this isn't a preferred method.
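As a sketch of what that io-signature change looks like in the Python gateway (the 1536-byte packet length is assumed from earlier in the thread), an output signature entry can be a (dtype, vector_length) tuple so that one item is one whole packet:

```python
import numpy as np

PACKET_LEN = 1536  # assumed packet size

# One output item = one whole packet of bytes. Downstream blocks must
# then declare a matching vectorized input signature, or a
# stream-to-vector / vector-to-stream conversion must sit in between.
out_sig = [(np.uint8, PACKET_LEN)]
```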

And for 3, the only thing that I can think of is the qtgui sinks, like the freq sink, which takes a chunk of data the length of the FFT size and buffers it internally. The work functions for these types of blocks tend to become more complex, though, and difficult to read. But you can certainly do it inside a Python block; you just may need to do hard copies out of the input buffers to make sure the data is actually moved and stored locally.
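The buffering idea in option 3 can be sketched without any GNU Radio machinery (the class name and the 1536-byte packet length are assumptions, not code from this thread): inside a gr.basic_block's general_work() you would push a hard copy of input_items[0] into a helper like this, pop full packets as they become available, and then call consume() for the items you took.

```python
import numpy as np

PACKET_LEN = 1536  # assumed packet size from the thread


class PacketBuffer:
    """Accumulates samples until a full packet can be popped."""

    def __init__(self, packet_len=PACKET_LEN):
        self.packet_len = packet_len
        self.buf = np.empty(0, dtype=np.uint8)

    def push(self, samples):
        # Hard copy: the scheduler reuses its buffers after work() returns,
        # so the data must be moved into storage we own.
        self.buf = np.concatenate(
            [self.buf, np.asarray(samples, dtype=np.uint8).copy()])

    def pop_packet(self):
        # Return one full packet, or None if not enough data is buffered yet.
        if len(self.buf) < self.packet_len:
            return None
        pkt = self.buf[:self.packet_len].copy()
        self.buf = self.buf[self.packet_len:]
        return pkt
```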

Tom
 
