
Fwd: [Discuss-gnuradio] USRP2 eth_buffer


From: Juha Vierinen
Subject: Fwd: [Discuss-gnuradio] USRP2 eth_buffer
Date: Thu, 23 Apr 2009 13:15:56 +0300

I have attached a patch that lets users define the Ethernet packet
ring size; it removes the SLAB_SIZE restriction. I think GNU Radio
requires a fairly recent (> 2.6.5) kernel anyway.
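
Usage would look something like this (a sketch only; the extra
rx_bufsize argument is the idea behind my patch, but the exact make()
signature and names here are illustrative):

#include <usrp2/usrp2.h>

int main()
{
  // Illustrative only: request a ~500 MB receive packet ring instead
  // of the hard-coded 25 MB default.
  size_t rx_bufsize = static_cast<size_t>(500e6);

  // Hypothetical extra argument carrying the user-defined ring size;
  // the interface name and empty MAC string are placeholders.
  usrp2::usrp2::sptr u2 = usrp2::usrp2::make("eth0", "", rx_bufsize);

  // ... set up 25 MS/s streaming to disk as usual ...
  return 0;
}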

Why is this needed? I challenge anyone to sample at 25 MHz
continuously for two hours to a five-disk RAID array (be it ext2 or
anything else) without overruns or missing packets using the default
25 MB buffer.

Still, the patch keeps the original 25e6 default buffer size. A value
of 250e6 to 500e6 allows fairly reliable sampling to disk at 25 MHz,
so I recommend raising the default to something higher than 25 MB;
otherwise new users will run into overruns. Even Firefox consumes
hundreds of megabytes.
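
For scale (back-of-the-envelope, assuming 4 bytes per complex
sample): 25 MS/s x 4 B = 100 MB/s, so the default 25 MB ring absorbs
only about 0.25 s of disk stall, while 250e6 to 500e6 absorbs 2.5 to
5 s.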

juha

---------- Forwarded message ----------
From: Juha Vierinen <address@hidden>
Date: Thu, Apr 23, 2009 at 11:00
Subject: Re: [Discuss-gnuradio] USRP2 eth_buffer
To: Bruce Stansby <address@hidden>
Cc: Eric Blossom <address@hidden>, Johnathan Corgan
<address@hidden>, "address@hidden"
<address@hidden>


> The ext file system is the go; with my high-speed digitizer I stream 250
> MB/s (that's bytes) to a six-disk RAID 0 array. RAID 0 is the go
> if you can afford to lose data in the unlikely event of a disk failure.

I'd guess that your high-speed digitizer has a buffer that is larger
than 25 MB too. Do you know what the buffer size is for your sampler?

I did a simple benchmark of filesystem bandwidth with XFS and ext2.

XFS:
address@hidden:/data0$ sudo time dd if=/dev/zero of=tmp.bin bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 68.1847 s, 154 MB/s

ext2:
address@hidden:/data0$ sudo time dd if=/dev/zero of=tmp.bin bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 68.1712 s, 154 MB/s

Both give approximately the same bandwidth. I agree that ext2 might
have less variability in I/O bandwidth, but at the same time I don't
think there is that large a difference between decent modern
filesystems in terms of the long-term average bandwidth of writing
large files to disk. Large distributed filesystems are a different
issue; I'd guess XFS and IBM's GPFS are good for those uses.

I have now taken the time to reformat the disk to ext2 and tried
writing 25 MHz to disk with the vanilla eth_buffer. It also gave an
overrun after a few seconds. This might be because I am chopping the
data into 100 MB files, but that is a necessity: I cannot keep 24
hours of 25 MHz data in one large file.
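
(At roughly 100 MB/s, a 100 MB file lasts only about a second, so the
writer opens a new file every second, and every close/open is a
chance for the ring to fill.)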

I have suggested a modification to the usrp2 API that would allow
increasing the packet ring buffer; why is that not a good idea? Isn't
it worthwhile to add a feature that lets people reliably sample and
store to disk at high bandwidth, even on more jittery filesystems? I
think nobody is using a pre-2.6.5 kernel, so there shouldn't really
be any reason to restrict the ring size to the number of block
pointers that fit into one kernel slab.
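
For what it's worth, the ring in question is just the kernel's
mmap'ed PF_PACKET RX ring, and sizing it is only a matter of what you
pass in the tpacket_req. A minimal sketch (error handling omitted,
numbers illustrative):

#include <sys/socket.h>
#include <arpa/inet.h>        // htons
#include <linux/if_ether.h>   // ETH_P_ALL
#include <linux/if_packet.h>  // tpacket_req, PACKET_RX_RING
#include <sys/mman.h>
#include <unistd.h>

int main()
{
  // Raw packet socket; needs root (CAP_NET_RAW).
  int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

  // Size the mmap'ed RX ring: ~500 MB total here.
  struct tpacket_req req;
  req.tp_block_size = 65536;   // bytes per block (multiple of page size)
  req.tp_frame_size = 2048;    // bytes per frame slot
  req.tp_block_nr   = 500u * 1000u * 1000u / req.tp_block_size;
  req.tp_frame_nr   = req.tp_block_nr
                      * (req.tp_block_size / req.tp_frame_size);

  // Pre-2.6.5 kernels capped tp_block_nr by how many block pointers
  // fit in a single kmalloc'd slab; newer kernels lift that limit.
  setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

  size_t ring_len = static_cast<size_t>(req.tp_block_nr) * req.tp_block_size;
  void *ring = mmap(nullptr, ring_len, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);

  // ... consume frames from the ring ...

  munmap(ring, ring_len);
  close(fd);
  return 0;
}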

I'll write a patch anyway and send it to the list.

BR,
juha

Attachment: eth_user_defined_rx_buffer_size.patch
Description: Text Data

