From: Michael Wentz
Subject: Re: [Discuss-gnuradio] Flow control with message ports
Date: Thu, 10 Nov 2016 10:17:28 -0500
I'm not sure I understand. There was once a proof-of-concept flowgraph called pmt_smasher that would effectively keep publishing messages so that the queue grew without bound; the lack of back pressure/flow control on message ports was generally considered a low-priority issue.

You're describing different behavior than I understand the message ports to have. Is the queue that's overflowing some custom queue in your block that you dump new messages onto? If so, just make that queue grow as more messages come in.

Nathan

On Tue, Nov 8, 2016 at 7:27 PM, Michael Wentz <address@hidden> wrote:

> Hi,
>
> I've made a block in Python that has one message port out and no other
> ports. What the block does is simple: read from a file, parse the data
> into a dict, convert it to a PMT, and publish it as a message. The port is
> connected to a sync_block that acts on these messages when it sees fit.
> My desired behavior is for the publisher to fill up the queue as fast as
> possible and block if the queue is full (waiting for room to open up).
> What I've observed is that the queue will instead overflow and messages
> will be dropped. Is there any way to make a blocking call to
> message_port_pub()?
>
> Looking through the code, I do see a method in basic_block to get the
> number of messages in the queue, which I could use to decide whether to
> publish a message or not - but this isn't brought out in the SWIG
> interface. Is there a reason why? If not, I was thinking about
> re-defining the SWIG interface for basic_block in my OOT with additional
> methods, but was wondering if that would create conflicts or other weird
> issues.
>
> Any other ideas for how to do this would be appreciated. I realize I
> could parse the file in my sync_block, but that's my last resort here.
>
> -Michael
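Since message_port_pub() has no blocking variant, the behavior Michael describes can be approximated outside the message-port system with a bounded queue whose put() blocks when full. The sketch below is plain Python, not GNU Radio's API; the record format, queue depth, and function names are all illustrative assumptions.

```python
# Hedged sketch of the bounded-queue workaround: a producer parses records
# and a consumer drains them at its own pace. queue.Queue(maxsize=N) makes
# put() block when N items are pending, supplying the back pressure that
# GNU Radio message ports (as discussed in the thread) do not.
import queue
import threading

QUEUE_DEPTH = 2  # illustrative bound; put() blocks once this many are pending

def producer(lines, q):
    """Parse records and enqueue them; blocks when the queue is full."""
    for line in lines:
        record = {"raw": line}  # stand-in for real parsing into a dict
        q.put(record)           # blocks until the consumer makes room
    q.put(None)                 # sentinel: no more records

def consumer(q, out):
    """Drain the queue (models the downstream sync_block acting on messages)."""
    while True:
        record = q.get()
        if record is None:
            break
        out.append(record)

q = queue.Queue(maxsize=QUEUE_DEPTH)
results = []
t = threading.Thread(target=producer, args=(["a", "b", "c"], q))
t.start()
consumer(q, results)  # runs in the main thread; producer blocks as needed
t.join()
```

With QUEUE_DEPTH=2 and three records, the producer genuinely blocks on the third put() until the consumer removes an item, which is the "wait for room to open up" behavior the original post asks for.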
Discuss-gnuradio mailing list
address@hidden
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio