From: Matt Ettus
Subject: Re: [Discuss-gnuradio] USRP Dynamic Range and 8 Bit Problem
Date: Wed, 26 Dec 2007 00:46:35 -0800
User-agent: Thunderbird 2.0.0.9 (X11/20071115)


Firas,

Thanks for doing these tests.  See my comments inline.

Firas Abbas wrote:
Hi Matt, Don

I did the tests. We definitely have an improvement in USRP SFDR of more than 3 dB. I did the tests as follows:

1) Test Setup:

a) Using USRP Rev 4.5 with a Basic RX board.
b) Using a high-accuracy function generator.
c) Using decimation rate = 8 and gain = 0.
d) Using a single tone (SFDR is usually tested with a single tone).
e) Using two frequencies, 250 kHz and 5250 kHz, because I noticed a large difference in USRP SFDR between DDC frequency = 0 and DDC frequency = 5 MHz (for example).

The tones that you see on the 250 kHz measurements are harmonics. They could be generated in the USRP, but they could also be there on your signal generator. Can you take a look at your sig gen on a spectrum analyzer to check if you still see the harmonics?

f) Using tone power levels of +4 dBm, which gives about 1 V peak-to-peak into the ADC (according to the AD9862 data sheet, this analog input level gives the best THD performance), and +8 dBm, slightly below the saturating level of about 2 V peak-to-peak into the ADC (according to the data sheet, an input near full scale gives the best noise performance). A dBm-to-volts conversion is sketched after this list.
g) Using an Intel Core 2 Duo PC with Ubuntu 7.10.
h) Using GNU Radio 3.1.1.
i) Using usrp_fft.py.
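
For reference, here is the dBm-to-voltage arithmetic behind point (f), assuming the generator drives a nominal 50 ohm load at the Basic RX input:

    P[mW] = 10^(dBm/10)
    Vrms  = sqrt(P[W] * 50 ohm)
    Vpp   = 2 * sqrt(2) * Vrms

    +4 dBm  ->  ~2.5 mW  ->  ~0.35 Vrms  ->  ~1.0 V P-P
    +8 dBm  ->  ~6.3 mW  ->  ~0.56 Vrms  ->  ~1.6 V P-P  (just below the ~2 V P-P full scale)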


2) Prepared USRP FPGA Work :

a) The first FPGA rbf file (usrp_std_0.rbf) was generated by modifying only the cordic.v file.

b) The second FPGA rbf file (usrp_std_1.rbf) was generated by modifying the cordic.v and the cic_dec_shifter.v files.

c) The third FPGA rbf file (usrp_std_2.rbf) was generated by modifying the adc_interface.v and the cic_dec_shifter.v files.

See the files at:
http://rapidshare.com/files/79109257/Files_Differences.tar.gz
http://rapidshare.com/files/79109346/all_fpga_rbf.tar.gz
http://rapidshare.com/files/79109391/Worked_files.tar.gz

When you make the following change:
- rx_dcoffset #(`FR_ADC_OFFSET_0) rx_dcoffset0(.clock(clock),.enable(dco_en[0]),.reset(reset),.adc_in({adc0[11],adc0,3'b0}),.adc_out(adc0_corr),
+ rx_dcoffset #(`FR_ADC_OFFSET_0) rx_dcoffset0(.clock(clock),.enable(dco_en[0]),.reset(reset),.adc_in({adc0[11],adc0,4'b0}),.adc_out(adc0_corr),

You need to take into account that the input to rx_dcoffset is only 16 bits. So in the second line you are really sending in {adc0,4'b0}, since the top repeated sign bit will be cut off. This block takes an average of the DC offset and subtracts it from the signal. That subtraction can overflow if the signal is close to clipping and the DC offset is nonzero. The best thing to do here would be to make the rx_dcoffset block clip instead of wrapping around. That would save a bit here.
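
For illustration, here is a minimal combinational sketch of the clipping behavior described above, i.e. a 16-bit signed subtraction that saturates instead of wrapping. The module and port names are assumed; this is not the shipped rx_dcoffset.v, which also contains the running offset estimator:

    // Illustrative sketch only: subtract the DC-offset estimate with
    // saturation so a near-full-scale input cannot wrap to the opposite sign.
    module sat_sub16
      (input  signed [15:0] adc_in,     // sign-extended ADC sample
       input  signed [15:0] dc_offset,  // running DC-offset estimate
       output signed [15:0] adc_out);   // offset-corrected, clipped output

       // One extra bit so the true difference is never lost
       wire signed [16:0] diff = adc_in - dc_offset;

       // Clip to the 16-bit signed range instead of wrapping around
       assign adc_out = (diff >  17'sd32767) ? 16'sh7fff :
                        (diff < -17'sd32768) ? 16'sh8000 :
                        diff[15:0];
    endmodule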

Also be careful with the values in cic_dec_shifter, since each one is used at only one decimation rate. You need to test all the ones that change.
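
To make that concrete, here is an illustrative sketch of why each entry is tied to a single rate (assumed names and placeholder values, not the shipped cic_dec_shifter.v): an N-stage CIC grows the word by about N*log2(rate) bits, so a different output shift is selected for each decimation rate, and changing one entry only affects operation at that rate:

    // Illustrative sketch only: one shift value per decimation rate,
    // shown here for an assumed 4-stage CIC.  The values are placeholders;
    // each entry has to be verified by testing at that specific rate.
    module cic_shift_sel
      (input      [7:0] rate,    // CIC decimation rate
       output reg [4:0] shift);  // bits to shift the CIC accumulator down

       always @*
         case(rate)
           8'd8    : shift = 5'd12;   // ~4*log2(8)
           8'd16   : shift = 5'd16;   // ~4*log2(16)
           8'd32   : shift = 5'd20;
           8'd64   : shift = 5'd24;
           8'd128  : shift = 5'd28;
           default : shift = 5'd28;   // safe fallback
         endcase
    endmodule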

3) Test results:

Note 1:
In my tests I used the original FPGA file (std_2rxhb_2tx.rbf), usrp_std_1.rbf, and usrp_std_2.rbf; usrp_std_0.rbf was not used.

Note 2:
Assume I have an eye-reading error of about 1 dB to 2 dB.

Note 3:
See test results at:
http://rapidshare.com/files/79109205/Tests.tar.gz

a) The rbf file generated by modifying cordic.v and cic_dec_shifter.v (usrp_std_1.rbf) gave about 7 dB more SFDR than the original FPGA file for a 5250 kHz input at +8 dBm, as shown in the graphs. I tested this rbf file at all input signal levels (from -90 dBm to +13 dBm) and all decimations (8 to 256). It works fine and should be used all the time instead of the original rbf file.

b) The rbf file generated by modifying adc_interface.v and cic_dec_shifter.v (usrp_std_2.rbf) was not as good as expected.

Although it gave about 5 dB more SFDR than the original FPGA file for a 5250 kHz input at +4 dBm, as shown in the graphs, the FPGA misbehaved when the input level was +8 dBm, also shown in the graphs (I think the CORDIC overflowed). When that happened, I reduced the input signal in 1 dB steps until it reached +4 dBm, at which point it worked normally again. Thus, this file works correctly only if the input signal is at or below +4 dBm.


I think we should submit this work as a patch to GNU Radio to enhance our fantastic USRP device.

I agree that the results do look better. My concern is overflow when both I and Q are used. If you can try all of these with max-strength signals on both I and Q at the same time, I would be more than happy to include the final results in the standard build of the FPGA.

Also note that the spur you see at DC is a result of using truncation instead of proper rounding. This occurs in a bunch of places, including the CORDIC, CIC, and halfband outputs. Truncation of 2's complement numbers results in a slight bias. Rather than truncating, we should use the technique used in the following:

   http://gnuradio.org/trac/browser/usrp2/trunk/fpga/sdr_lib/round.v

Rounding was left out in order to save space. I would really like to see a single-channel FPGA build that adds back all of these little details, has a TX halfband, and has wider internal datapaths to improve signal quality.
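
As an illustration of the idea (assumed module name, port names, and widths; see round.v at the URL above for the real implementation), round-to-nearest can be obtained by adding the most significant discarded bit before dropping the low bits:

    // Illustrative sketch only.  Truncating 2's complement samples always
    // rounds toward minus infinity, which leaves a small bias that shows up
    // as the spur at DC.  Adding the first discarded bit before dropping the
    // low bits removes most of that bias.
    module round_sketch
      #(parameter WIDTH_IN  = 24,    // example widths only
        parameter WIDTH_OUT = 16)
       (input  signed [WIDTH_IN-1:0]  in,
        output signed [WIDTH_OUT-1:0] out);

       // Keep the top WIDTH_OUT bits and add the most significant discarded
       // bit.  A complete implementation should also clip the corner case
       // where a full-scale positive input overflows when the bit is added.
       assign out = in[WIDTH_IN-1:WIDTH_IN-WIDTH_OUT] + in[WIDTH_IN-WIDTH_OUT-1];
    endmodule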

Thanks for doing all this investigation,
Matt




