Hi All,
I visited the awesome San Diego 3-rail (SD3R) club last night to have a look at their layout and setup, and found some interesting stuff going on that we may need to work on a bit with the experts here on the forum.
The anecdotal story for you is: "when the Legacy base is connected to the track, their DCS system (1 TIU) won't add engines or do any complicated exchanges. The moment power is removed from the Legacy base, DCS works fine."
So I looked into this a little more. Their club has a very strong Legacy signal (almost 3V of swing on the carrier), which I think is artificially inflating the noise floor the DCS packet sees in the receiver's CDMA correlator. Here are two screen captures showing the track with and without the Legacy base on (note the absence of the carrier in the Legacy Off capture).
Legacy Off:
Legacy On:
Note these signal excursions are measured with engines on the track, so they are smaller than what we measured with the TIU unloaded in the other threads.
In case anyone isn't familiar... the DCS decoder works by computing correlations against the overall 31-bit spreading sequence (see the old post for details). I think what we're seeing is that the superimposed 3V sine wave is enough to hurt the correlation level to the point where the decoder no longer accepts the packet as a successful spread-code match.
Like.... without the Legacy carrier you are getting 29-31 / 31 levels of correlation, but with the carrier there, the inflated noise floor (uncorrelated content) drags the score down to maybe 26-27 / 31, where the decoder starts to drop packets.
I think "DCS and Legacy compatibility" assumes the DCS packet is infrequent enough (10 us per second) that the legacy user/system isn't going to notice, and that the legacy voltage is small enough that it won't hurt the DCS correlator. Once the signals have comparable amplitudes (say factors of 3-4) the assumptions start to break down and the two systems interfere. DCS will always be the loser since legacy is continuous wave and it is not. We can probably work out the equations for detection probability of a 31 bit sequence with a sine interferer at a specific frequency. (usually its AWGN noise you do this for).
In the above situation it seems it's only killing them on the long exchanges (adding engines, making them active, ...), not the short exchanges like whistles and speed. So in the interim I suggested they just put in a momentary disconnect switch that takes the Legacy base off the track while they add an engine. Essentially you hold the button down (open circuit) while the engine is being added. Their Legacy performance is solid, so I don't really want to attenuate the carrier since that might create new problems for them.
My thinking:
Maybe we could come up with a circuit that detects the rising edges of the DCS packet and temporarily disconnects the Legacy base (on microsecond scales) to let the DCS signal propagate without the noise floor inflation. It would have to be lightning fast: the DCS bit time is about 260 ns (1/3.85 MHz), so probably too fast for a microcontroller.
Quick ideas:
ANALOG: Maybe something analog like a self-resetting latch with an RC delay. You have an SR latch with the S port thresholded above the Legacy carrier but below the DCS excursion. When set, it disconnects the Legacy base. The R port is an RC network off Q with maybe a 10us delay (about one DCS packet long) before the latch is reset and Legacy is restored. If you get successive packets it keeps firing S, and you can put a transistor in to flush the RC every time S = 1. A quick behavioral sketch is below.
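To sanity-check the timing, here's a throwaway behavioral model of that retriggerable latch. It's pure timing logic, not a circuit simulation, and the 2V threshold and 10us hold are placeholder values:

```python
THRESH_V = 2.0    # assumed: above the Legacy carrier peak, below the DCS excursion
HOLD_US  = 10.0   # assumed RC timeout, roughly one DCS packet long

def latch_disconnect(samples, dt_us):
    """samples: track voltage per timestep; returns True per step while the
    Legacy base should be disconnected."""
    q = False            # latch state: True = Legacy base disconnected
    timer_us = 0.0
    out = []
    for v in samples:
        if abs(v) > THRESH_V:    # S fires on a DCS excursion
            q = True
            timer_us = 0.0       # transistor flushes the RC: hold restarts
        elif q:
            timer_us += dt_us
            if timer_us >= HOLD_US:   # RC charged: R fires, Legacy restored
                q = False
        out.append(q)
    return out
```

Successive packets inside the 10us window just keep restarting the hold, which is the "flush the RC" behavior.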
DIGITAL: Maybe a cheap FPGA clocked at 200-300 MHz, just oversampling the waveform heavily. Set an input pin threshold so it's above the carrier but below the DCS excursion, and sample the pin every clock. If you see a "1", you open a pass transistor in series with the Legacy base. You run a clock counter with a termination value of maybe 100 counts; if you see another "1" on any clock, you put the counter back to 0 and keep counting... A clocked-logic sketch is below.
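Here's the same idea modeled per FPGA clock tick. At an assumed 250 MHz clock, 100 counts works out to roughly 400 ns of hold past the last excursion, enough to bridge between chip edges inside a packet; the count and pin threshold are placeholders:

```python
TERMINAL_COUNT = 100   # assumed: ~400 ns at a 250 MHz clock

def fpga_gate(pin_bits):
    """pin_bits: thresholded input pin (0/1) sampled every clock. Yields True
    per clock while the pass transistor should hold the Legacy base open."""
    counter = TERMINAL_COUNT          # saturated = idle, Legacy connected
    for bit in pin_bits:
        if bit:                       # excursion above the pin threshold
            counter = 0               # restart the hold window
        elif counter < TERMINAL_COUNT:
            counter += 1              # count toward reconnecting Legacy
        yield counter < TERMINAL_COUNT
```

In real hardware this is just a comparator, a saturating counter, and a gate driver; the model is only there to check the retrigger logic.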
Thoughts?