Hi all,
I have a simple flowgraph (a rough code sketch follows the parameter list):

usrp N210 -> low pass filter -> stream to vector decimator -> msg queue sink

with these important parameters:
sample rate = 0.195312 MSps
vector length = 128
decimation rate = 16
msg queue length = 1
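
For reference, the graph corresponds roughly to the following (a minimal sketch using GNU Radio 3.7-style block names; I'm approximating the stream-to-vector decimator with stream_to_vector plus keep_one_in_n, and the filter cutoff/transition values are just placeholders, not my actual settings):

from gnuradio import gr, blocks, filter, uhd

class capture_flowgraph(gr.top_block):
    def __init__(self, samp_rate=195312, vec_len=128, decim=16):
        gr.top_block.__init__(self, "capture_flowgraph")

        # USRP N210 source
        self.usrp = uhd.usrp_source("",
            uhd.stream_args(cpu_format="fc32", channels=[0]))
        self.usrp.set_samp_rate(samp_rate)

        # low pass filter (placeholder cutoff and transition width)
        taps = filter.firdes.low_pass(1.0, samp_rate,
                                      samp_rate / 4, samp_rate / 8)
        self.lpf = filter.fir_filter_ccf(1, taps)

        # stream to vector decimator: pack 128 samples per vector,
        # then keep one vector in 16
        self.s2v = blocks.stream_to_vector(gr.sizeof_gr_complex, vec_len)
        self.keep = blocks.keep_one_in_n(gr.sizeof_gr_complex * vec_len,
                                         decim)

        # msg queue sink with queue length 1 (dont_block=True)
        self.msg_queue = gr.msg_queue(1)
        self.sink = blocks.message_sink(gr.sizeof_gr_complex * vec_len,
                                        self.msg_queue, True)

        self.connect(self.usrp, self.lpf, self.s2v,
                     self.keep, self.sink)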
My application is trying to capture samples of a given signal as fast as possible. I'm editing a GRC-generated Python script, and looking at timestamps I create with Python's datetime.now(). My general code is:
from datetime import datetime

t1 = datetime.now()
flowgraph.usrp.set_center_freq(center_freq)
t2 = datetime.now()
flowgraph.set_sample_rate(sample_rate)   # reconfigures usrp, filter, etc.
t3 = datetime.now()
flowgraph.start()
t4 = datetime.now()
flowgraph.msg_queue.delete_head()        # blocks until data is in the queue
t5 = datetime.now()
flowgraph.stop()
The average differences between the timestamps are:

t1 -> t2 = 0.5 ms
t2 -> t3 = 1 ms
t3 -> t4 = 4 ms
t4 -> t5 = 80 ms
My issue is that I think the time between timestamps t4 and t5 should be about 10.5 ms, since filling one output vector needs

128 samp/vector * 16 vectors * (1 s / 195312 samp) ~= 10.5 ms

of input signal.
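
Spelled out as a quick sanity check (plain arithmetic, nothing GNU Radio specific):

vec_len = 128           # samples per output vector
decim = 16              # input vectors consumed per output vector
samp_rate = 195312.0    # samples per second

fill_time = vec_len * decim / samp_rate        # 2048 / 195312 seconds
print("%.2f ms" % (fill_time * 1e3))           # prints 10.49 ms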
The questions that arise from this:

How much overhead does SWIG/Python introduce? Could that be significantly hampering performance?
Would a C++ version perform significantly better?
Would a higher-end USRP perform significantly better?
How close can I expect to get to the theoretical minimum? (A quick check for this is sketched below.)
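
For that last question, the check I have in mind is to time the gap between two successive vectors once the flowgraph is already running, to separate one-time startup cost from steady-state per-vector latency (a sketch reusing the flowgraph/msg_queue names from above; note that with a depth-1, non-blocking queue some vectors may be dropped between reads, so the measured gap could span several vector periods):

from datetime import datetime

flowgraph.start()
flowgraph.msg_queue.delete_head()   # first vector; includes startup cost
t_a = datetime.now()
flowgraph.msg_queue.delete_head()   # second vector; steady state
t_b = datetime.now()
flowgraph.stop()

# the steady-state gap should approach the ~10.5 ms fill time
print("%.2f ms" % ((t_b - t_a).total_seconds() * 1e3))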
I appreciate any information
anyone could provide. Thanks very much.
-Scott