Re: [Discuss-gnuradio] interfacing a DSP array card to USRP2


From: Firas Abbas
Subject: Re: [Discuss-gnuradio] interfacing a DSP array card to USRP2
Date: Wed, 31 Mar 2010 01:44:04 -0700 (PDT)

Hi,


> From: Jeff Brower <address@hidden>
>
> Matt-
>
> We're working on a project at Signalogic to interface one of our DSP array
> PCIe cards to the USRP2. This would provide a way for one or more TI DSPs
> to "insert" into the data flow and run C/C++ code for low-latency and/or
> other high performance applications. The idea is that we would modify the
> current USRP2 driver (or create an alternative) so it would read/write
> to/from the PCIe card instead of the Linux (motherboard) GbE.


I want to share my modest experience with DSP PCI cards (not PCIe) with the
community.

The most important thing when working with these cards is how the card hardware
and its software driver operate. Using a PCI card does not by itself guarantee a
low-latency system.

To give a concrete picture: about two years ago I worked with a PCI card from a
respected manufacturer (four 125 MSPS 14-bit ADCs, four GC5016 DDCs, and a
4M-gate Xilinx Virtex-II Pro). The card had a 64/32-bit PCI interface (it can
run on a 32-bit or a 64-bit PCI bus) and accepted a 66/33 MHz PCI clock.
Theoretically it can transfer up to 528 MByte/sec in a 64-bit/66 MHz slot (very
difficult to find) and up to 132 MByte/sec on a 32-bit/33 MHz PCI bus (very
common). In real-time testing it gave me about 113 MByte/sec of data streaming,
because my platform was 32-bit/33 MHz.
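
For reference, those theoretical figures are just bus width times bus clock; a
tiny C calculation (my own illustration, nothing from the card's SDK):

#include <stdio.h>

/* Theoretical peak PCI throughput = bus width (bytes) * bus clock (Hz).
 * Illustration only; sustained real-world rates are lower because of
 * bus arbitration and protocol overhead. */
int main(void)
{
    double peak_64_66 = 8.0 * 66e6;   /* 64-bit @ 66 MHz -> 528 MByte/sec */
    double peak_32_33 = 4.0 * 33e6;   /* 32-bit @ 33 MHz -> 132 MByte/sec */

    printf("64-bit/66MHz peak: %.0f MByte/sec\n", peak_64_66 / 1e6);
    printf("32-bit/33MHz peak: %.0f MByte/sec\n", peak_32_33 / 1e6);
    return 0;
}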

The card's problem was transfer latency. It could transfer a data block of up to
64 kByte with about 350 usec of latency (very high), and I could not reduce that
latency significantly even by using a faster multiprocessor PC. The card's
operating technique is to collect data in its built-in FIFO, transfer it to
shared PCI RAM, then raise a hardware interrupt to tell the OS that data is
available, after which the driver copies the data to user space. The card's
drivers were for Windows at first, so I initially thought the problem was slow
interrupt servicing in the Windows kernel. When the manufacturer released a
Linux driver (after about a year), I repeated the tests, but the same latency
problem persisted.
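
One simple way to see this latency from user space on the Linux side is to time
a blocking read of a single block. The /dev/dspcard0 device node below is only a
placeholder, not the vendor driver's real interface:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define BLOCK_SIZE (64 * 1024)

int main(void)
{
    /* Placeholder device node; the real driver exposes its own name. */
    int fd = open("/dev/dspcard0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    static unsigned char buf[BLOCK_SIZE];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    ssize_t n = read(fd, buf, sizeof(buf));  /* blocks until the driver delivers a block */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double usec = (t1.tv_sec - t0.tv_sec) * 1e6 +
                  (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("read %zd bytes in %.1f usec\n", n, usec);

    close(fd);
    return 0;
}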

I concluded that the PCI transfer mechanism is not efficient for small-packet
transfers, although it is very useful for streaming large amounts of data.
Again, these observations were for a PCI card, not PCI Express, and they apply
to the particular card I used for the experiments.
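
The effect is easy to quantify with a rough first-order model: if each transfer
carries a fixed overhead on top of the sustained bus rate, the effective rate is
block_size / (overhead + block_size / bus_rate), so small blocks are dominated
by the overhead term. A quick illustration in C, treating my 350 usec figure as
that fixed overhead and 113 MByte/sec as the sustained rate (a simplification,
not an exact model of the card):

#include <stdio.h>

int main(void)
{
    const double overhead_s = 350e-6;  /* fixed per-transfer overhead, seconds */
    const double bus_rate   = 113e6;   /* sustained streaming rate, bytes/sec  */
    const double sizes[]    = { 256, 1024, 4096, 16384 };

    for (int i = 0; i < 4; i++) {
        double t = overhead_s + sizes[i] / bus_rate;  /* time for one transfer */
        printf("%6.0f-byte block: %6.1f usec/transfer, %5.1f MByte/sec effective\n",
               sizes[i], t * 1e6, (sizes[i] / t) / 1e6);
    }
    return 0;
}

With these numbers a 1 kByte packet comes through at only a few MByte/sec, while
large streamed blocks approach the sustained rate, which matches what I saw.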

Maybe it was simply a poor design philosophy on that particular card, but I
wanted to share this information with the community.

Best Regards,

Firas



