
Re: [Discuss-gnuradio] USRP2 eth_buffer


From: Johnathan Corgan
Subject: Re: [Discuss-gnuradio] USRP2 eth_buffer
Date: Wed, 22 Apr 2009 14:14:44 -0700

On Wed, Apr 22, 2009 at 1:48 PM, Juha Vierinen <address@hidden> wrote:

> I have been trying to get 25 MHz to disk with USRP2.  I am using the
> C++ interface and a five-disk software RAID 0 that can do about 150
> MB/s. I can easily run at 25 MHz with a simple nop_handler that only
> checks for underruns and timestamp continuity, but when I write to
> disk, I can barely do 10 MHz for longer than 30 s without overruns. I
> have tried just about every filesystem with the same result every
> time.

Try setting your application to run using real-time scheduling
priority.  This is done in C++ via a call to:

gr_enable_realtime_scheduling()

or from Python:

gr.enable_realtime_scheduling()

Check the return value to ensure it worked; it should equal
gruel::RT_OK in C++ or gr.RT_OK in Python.

You must have permission to do this, either by virtue of running as
root, or by allowing your user/group to do so by adding a line to
/etc/security/limits.conf:

@usrp - rtprio 50

Then add your username to the 'usrp' group (which needs to be created
if it doesn't already exist).
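
For example, as root (the exact commands vary by distribution):

    groupadd usrp
    usermod -a -G usrp yourusername

You'll likely need to log out and back in for the new group membership
to take effect.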

> Why is there a (int)(MAX_SLAB_SIZE/sizeof(void*)) limit?

We use the Linux kernel packet ring method of receiving packets from
sockets.  This is a speed-optimized method that maps memory in such a
way that the kernel sees it as kernel memory and the user process sees
it as its own memory, so there is no copying from kernel to user
space.  It also lets us receive multiple packets with one system call.
(At full rate, we process about 50 packets per system call.)
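
For reference, the kernel API looks roughly like this.  This is a
generic sketch of a PACKET_RX_RING setup, not the actual eth_buffer
code, and the sizes are purely illustrative:

    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>
    #include <arpa/inet.h>      // htons

    // Sketch: set up a memory-mapped rx ring on a raw packet socket.
    // Error checking omitted; sizes are illustrative, not what libusrp2 uses.
    static void *open_rx_ring(int *fd_out)
    {
        int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        struct tpacket_req req;
        req.tp_block_size = 4096;       // one contiguous kernel block
        req.tp_frame_size = 2048;       // space reserved per packet
        req.tp_block_nr   = 256;        // how many blocks to allocate
        req.tp_frame_nr   = req.tp_block_nr *
                            (req.tp_block_size / req.tp_frame_size);

        setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

        // The same pages are now visible to both kernel and user space,
        // so received packets land here without a copy.
        void *ring = mmap(0, (size_t)req.tp_block_size * req.tp_block_nr,
                          PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        *fd_out = fd;
        return ring;
    }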

The kernel maintains a ring of pointers to pending packets, and these
ring descriptors must be stored in a single kernel memory region.
Such a region is at most MAX_SLAB_SIZE bytes, and each descriptor is
sizeof(void*) bytes.  So the tp_block_nr variable calculates the
number of possible packets by dividing the buffer length by the block
size, and if that is more than MAX_SLAB_SIZE can hold, it is reduced
to the limit that MAX_SLAB_SIZE imposes.
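
Roughly (variable names approximate, not the literal source):

    // Clamp the number of ring blocks to what fits in one slab of
    // descriptors, as described above.
    size_t tp_block_nr = buflen / tp_block_size;          // blocks requested
    size_t max_blocks  = MAX_SLAB_SIZE / sizeof(void*);   // descriptors that fit
    if (tp_block_nr > max_blocks)
        tp_block_nr = max_blocks;                         // clamp to the limit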

So you probably aren't using all 500 MB of that memory.  You can
uncomment the debug printf in that part of the code to see the number
of blocks actually allocated.

What tends to happen if you aren't running your user process at
real-time priority is that the libusrp2 driver grabs the packets from
the kernel okay, but your flowgraph doesn't read them from the driver
fast enough, and you get backed up into an overflow.

Johnathan



