[Discuss-gnuradio] Relative merits of synchronization techniques


From: Johnathan Corgan
Subject: [Discuss-gnuradio] Relative merits of synchronization techniques
Date: Wed, 06 Sep 2006 11:08:23 -0700
User-agent: Thunderbird 1.5.0.5 (X11/20060728)

I'd like to hear your thoughts comparing the "center of goodness" and
"zero crossing adjust" techniques for recovering bit timing and
deframing in an oversampled NRZ sample stream (I'm sure there are
better names for these algorithms!).

Take an incoming sample stream that represents an 8X oversampled NRZ
stream of 0s and 1s.  This means there are 8 samples per baud, and you
need to pick at which of the eight sample offsets (0-7) you make the
bit decision.  A known sync code is sent first.
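
For concreteness, here's a minimal Python/NumPy sketch of such a
stream; the bit values and the chosen offset are just illustration:

    import numpy as np

    SPS = 8                                     # samples per baud (8X oversampling)
    bits = np.random.randint(0, 2, 64)          # example payload bits
    samples = np.repeat(2.0 * bits - 1.0, SPS)  # NRZ: 0/1 -> -1/+1, held SPS samples

    # The receiver must pick a decision offset k in 0..SPS-1; taking every
    # SPS-th sample at that offset recovers the bit stream (noiselessly here):
    k = 4                                       # e.g. mid-baud
    decisions = (samples[k::SPS] > 0).astype(int)
    assert (decisions == bits).all()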

In gr_simple_correlator, you maintain eight "trial" shift registers,
one per sample offset, and into each is shifted the sliced bit from a
different one of the eight offsets.

Each shift register is then compared to the sync code, and the first
one to come within a certain Hamming distance is noted as the start of
goodness.  The first shift register after that to miss the Hamming
distance threshold is considered the end of goodness.  The "center of
goodness" metric is then the modulo average of these two points, and is
used as the decision point to convert the oversampled stream into a
symbol stream.  (Did I get that right?)

The above recovers both clock/symbol timing and "start of frame" in the
same operation.
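
Here's a rough Python sketch of my understanding of the above -- not
the actual gr_simple_correlator C++, and the sync word, its length, and
the threshold are made-up placeholders:

    SPS = 8
    SYNC = 0xACDDA4E2          # hypothetical 32-bit sync word, not the real one
    SYNC_LEN = 32
    THRESH = 3                 # max Hamming distance accepted as a match
    MASK = (1 << SYNC_LEN) - 1

    def hamming(a, b):
        return bin((a ^ b) & MASK).count("1")

    def center_of_goodness(samples):
        """Return (decision_offset, sample_index_at_end_of_goodness),
        or None if the sync is never seen."""
        regs = [0] * SPS                   # one trial register per offset
        start = None
        for n, s in enumerate(samples):
            offset = n % SPS
            bit = 1 if s > 0 else 0        # hard slice to 0/1
            regs[offset] = ((regs[offset] << 1) | bit) & MASK
            if hamming(regs[offset], SYNC) <= THRESH:
                if start is None:
                    start = n              # first match: start of goodness
            elif start is not None:
                # first miss after a match: end of goodness; the midpoint,
                # reduced mod SPS, becomes the decision offset
                return ((start + n) // 2) % SPS, n
        return None

Since start and end are absolute sample indices, taking the midpoint
and reducing mod SPS handles the case where the goodness region wraps
around the 7-to-0 offset boundary.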

The alternative, simpler technique I've used in the past is to use zero
crossings of the oversampled stream to estimate the center of the baud
period, and slice there.  This assumes that the filtering ahead of the
sampler makes the midpoint between crossings the best place to slice,
and that there are enough zero crossings to keep clock skew from
drifting the slice decision too far from the center.
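
A minimal Python sketch of that idea, assuming hard-sliced samples and
a hard reset of the baud phase at each crossing (a real implementation
would presumably average or low-pass the crossing phase rather than
snapping to it):

    SPS = 8

    def zero_crossing_slice(samples):
        """Hard-decision bits from an oversampled NRZ stream, slicing
        half a baud after the most recent zero crossing."""
        bits = []
        counter = 0
        prev = samples[0]
        for s in samples[1:]:
            counter = (counter + 1) % SPS
            if (s > 0) != (prev > 0):      # sign change: assume a baud edge
                counter = 0                # hard-reset the baud phase here
            if counter == SPS // 2:        # mid-baud: slice
                bits.append(1 if s > 0 else 0)
            prev = s
        return bits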

Once the bit timing is established, the chosen bits are shifted into a
single shift register, and when the sync code is seen (within a certain
Hamming distance), you are at the start of a frame.
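
Sketched in the same style (the sync word and threshold again being
placeholders):

    SYNC = 0xACDDA4E2          # placeholder sync word again
    SYNC_LEN = 32
    THRESH = 3
    MASK = (1 << SYNC_LEN) - 1

    def find_frame_start(bits):
        """Index of the first bit after the sync code, or None."""
        reg = 0
        for i, b in enumerate(bits):
            reg = ((reg << 1) | b) & MASK  # shift each decided bit in
            if i + 1 >= SYNC_LEN and bin(reg ^ SYNC).count("1") <= THRESH:
                return i + 1               # payload starts here
        return None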

The former technique appears more general, less reliant on prior
filtering, and immune to long strings of 1s or 0s.  On the other hand,
the latter technique is simpler, requiring fewer calculations and less
memory.

So if the sample stream is known to have sufficient zero crossings and
has been properly filtered, do you see any hazards in going with the
latter technique?

-Johnathan
