From: Benny Alexandar
Subject: Re: [Discuss-gnuradio] Broadcast Receiver Audio Synchronization ( Delay locked loop for the two-clock problem)
Date: Sat, 12 Nov 2016 13:41:40 +0000

Hi Fons,

>>  codec -> [ buffer -> resampler ] -> audio HW.

>>where [...] is the audio sink block. The buffer is *an internal part*
>>of the audio sink, and *not* the one that gr provides to connect codec
>>and audio sink.

So there is an internal buffer in the audio sink, which should be able to hold at least two audio frame blocks of the same duration, say Tf.
- The GNU Radio scheduler puts each audio frame of duration Tf into the queue between codec and audio sink.
- The audio sink copies frames into its internal buffer and starts the audio hardware once two audio frames (frame_delay) are available in the internal buffer.
- The audio hardware triggers a callback after finishing each audio frame, i.e. after Tf time.
- For each callback, measure the elapsed time diff = (t2 - t1).
- The error is (Tf - diff) - (2 * frame_delay); see the sketch below.
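
Roughly, in C++ (a sketch only; now_seconds(), cb_timing and on_frame_done() are made-up names, not the actual audio sink internals, and I have kept the error formula from the list above verbatim):

    #include <chrono>

    // monotonic time in seconds (made-up helper, not a GNU Radio API)
    static double now_seconds ()
    {
        using clock = std::chrono::steady_clock;
        return std::chrono::duration<double> (
            clock::now ().time_since_epoch ()).count ();
    }

    struct cb_timing
    {
        double t1 = 0.0;  // time of the previous callback
    };

    // called by the audio HW after each frame of duration Tf is played;
    // frame_delay is the start-up buffering described above
    static double on_frame_done (cb_timing &s, double Tf, double frame_delay)
    {
        double t2   = now_seconds ();
        double diff = t2 - s.t1;   // measured callback period
        s.t1 = t2;
        // error term as written in the list above
        return (Tf - diff) - (2.0 * frame_delay);
    }

Each callback then yields one raw error sample, which is what a DLL would low-pass filter.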


>>For this it needs timing info on
>>both the incoming and the outgoing blocks of data, using the same
>>clock for both.

Yes, but the input rate cannot be measured directly, because the codec is scheduled whenever there is output space in the queue and an input bitstream is available to decode. This makes the input rate at the audio sink variable, depending on codec processing time, which itself varies strongly with the encoded content. How can the DLL be applied here?

But the codec can use the timestamp of the input bitstream, which is stamped at RF sample entry time, so the codec gets every transmission frame timestamped from the USRP. These transmission frames are decoded into 'N' audio frames, and the timestamp of each audio frame is interpolated from the RF entry time, as explained earlier.
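
For reference, this is my understanding of the second-order DLL from the "Using a DLL to filter time" paper (LAC 2005), as a rough C++ sketch; the struct and variable names are mine, and the loop bandwidth bw is a free parameter:

    #include <cmath>

    // Second-order delay-locked loop, after "Using a DLL to filter time".
    // tper is the nominal callback period (Tf), bw the loop bandwidth in Hz.
    struct dll
    {
        double b, c;    // loop coefficients
        double t0, t1;  // filtered time of the current / next callback
        double e2;      // filtered period estimate

        void init (double tnow, double tper, double bw)
        {
            double w = 2.0 * M_PI * bw * tper;
            b  = std::sqrt (2.0) * w;
            c  = w * w;
            e2 = tper;
            t0 = tnow;
            t1 = t0 + e2;
        }

        // Feed the raw time of each callback; afterwards t1 predicts the
        // time of the next one on a smoothed version of the HW clock.
        void update (double tnow)
        {
            double e = tnow - t1;  // loop error
            t0 = t1;
            t1 += b * e + e2;      // first-order (phase) correction
            e2 += c * e;           // second-order (period) correction
        }
    };

Comparing the filtered times against the interpolated USRP timestamps of the corresponding audio frames would then give the rate error for the resampler, if I understand the scheme correctly.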


>>That means that only the audio sink block can do meaningful timing of
>>the buffer writes, and any other info provided by upstream blocks
>>is useless. That's also why the buffer is part of the audio sink.


The reason for timestamping each audio frame from the start of the transmission frame is that it is not possible to timestamp every RF sample on entry. Timestamping is done per RF block of frames, which in a way maps to audio frames. But we can check the timestamp of every audio frame at the audio sink, and the variation in data rate will be captured with every newly received RF frame. So in effect, if the RF block duration is Trf, the timestamping is also at that same interval, and timestamps in between are interpolated (see the sketch below).
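
In code, the interpolation I mean is just linear (a sketch; the names are made up):

    // Timestamp of audio frame k (0 .. n_frames - 1) decoded from an
    // RF block that was stamped t_rf on entry and has duration Trf.
    static double audio_frame_time (double t_rf, double Trf,
                                    int n_frames, int k)
    {
        // audio frames are assumed evenly spaced within the RF block
        return t_rf + (Trf * k) / n_frames;
    }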

-ben



