Audio Asylum Thread Printer
In Reply to: Yep, posted by Frank on April 28, 2003 at 11:54:24:
"Another problem is signal headroom. If two or more signals are added, the DSD signal can saturate. They strongly advise keeping the signal level below -6 dB of the full input range to prevent trouble. PCM processing doesn't have this problem; a 24 bit signal handled by a 32 bit processor has plenty of processing headroom."
I fail to see how this is a problem for DSD but not PCM. The fact that a 32 bit DSP has 32 bits doesn't make any difference if the 24 bit source is fed into the 24 most significant bits of the DSP.
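To make the headroom point concrete, here is a toy illustration (my own sketch, not from the thread or any actual DSP) of why placing a 24 bit source in the most significant bits of a 32 bit word leaves nothing to absorb a summing overflow, while LSB alignment does:

```python
# Hypothetical illustration: a 24 bit sample placed in the top bits of
# a 32 bit word still clips when two near-full-scale samples are summed.

MAX24 = 2**23 - 1            # positive full scale for signed 24 bit PCM

a = MAX24                    # two near-full-scale samples
b = MAX24

# MSB alignment: shift left 8 bits so the 24 bits occupy bits 8..31
a_msb = a << 8
b_msb = b << 8
total_msb = a_msb + b_msb    # exceeds the signed 32 bit range (2**31 - 1)
print(total_msb > 2**31 - 1)     # MSB alignment gives no extra headroom

# LSB alignment: keep the samples in the low 24 bits
total_lsb = a + b            # fits easily in 32 bits
print(total_lsb <= 2**31 - 1)    # the 8 spare MSBs absorb the overflow
```

Python's unbounded integers are only standing in for a fixed 32 bit accumulator here; the comparisons against `2**31 - 1` show where a real accumulator would wrap or clip.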
But anyway, I think an important point about bass management is getting lost in this discussion. Even if you use a 128 bit DSP running at 384 KS/s, you're still not going to get transparent bass management. The filter characteristics aren't dependent on the sample size or rate. Adding additional filters into the playback chain is always going to degrade the sound somewhat no matter how they're implemented.
Follow Ups:
Add two 24 bit PCM words and you could end up with a 25 bit word if the result overflows 24 bits. With a 32 bit DSP that's not a problem. You need a lot of processing steps to run out of bits.
Feeding the 24 bits into the most significant bits of the 32 bit processor would be stupid. The end result has to be dithered down into 24 bits only once. Here you lose some of the bits and the S/N ratio degrades a little. But with 22 bits left (2 are used by the dithering) it's still very transparent and very linear.
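Frank's 22-bit figure can be sanity-checked with the standard ideal-quantizer SNR formula; this back-of-the-envelope arithmetic is my own aside, not from the thread:

```python
# Theoretical SNR of an ideal n-bit quantizer with a full-scale sine:
# SNR = 6.02 * n + 1.76 dB (dither and real converters will differ).

def snr_db(bits):
    """Ideal quantizer SNR in dB for a given bit depth."""
    return 6.02 * bits + 1.76

print(round(snr_db(24), 2))   # 146.24 dB at 24 bits
print(round(snr_db(22), 2))   # 134.2 dB with 2 bits given up to dithering
```

Even the 22 bit figure is far beyond the dynamic range of any playback chain, which is the substance of Frank's "still very transparent" claim.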
The actual algorithms implemented in DSP are very important too.
Bad filter design in DSP results in sonic degradation.
With DSD it gets tricky. There really is little headroom for arithmetic. In practice you end up with requantization or dithering down into 1 bit on many occasions after a processing step. And each time the signal deteriorates. The moment all the bits in the DSD stream are used up to express a high signal level, few or no bits are left to express the finer details of the signal.
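The saturation point can be illustrated with a toy first-order sigma-delta modulator (my own sketch, not Sony's DSD processing): within range, the 1-bit stream's average tracks the signal, but past full scale it pins at +1 and every finer detail is gone:

```python
def sigma_delta_1bit(samples):
    """Toy first-order sigma-delta: requantize samples in [-1, 1] to +/-1."""
    out, integ, y = [], 0.0, 0.0
    for s in samples:
        integ += s - y                  # accumulate the quantization error
        y = 1.0 if integ >= 0 else -1.0 # 1-bit quantizer
        out.append(y)
    return out

n = 2000
in_range = sum(sigma_delta_1bit([0.5] * n)) / n   # ~0.5: level preserved
saturated = sum(sigma_delta_1bit([1.2] * n)) / n  # 1.0: pinned at full scale
print(round(in_range, 2), saturated)
```

Once the modulator saturates, the output is a constant run of +1s; any detail riding on the over-range signal simply cannot be encoded, which is why the -6 dB margin matters.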
"Even if you use a 128 bit DSP running at 384 KS/s, you're still not going to get transparent bass management"
In theory not, because there is still a finite resolution.
But in practice it will be very transparent if correct filter arithmetic is applied.
128 bit is a LOT of resolution and dynamic range.
A 32 bit dynamic range is already deadly.
Frank
"Add two 24 bit PCM words and you could end up with a 25 bit word if the result overflows 24 bits. With a 32 bit DSP that's not a problem. You need a lot of processing steps to run out of bits."
"Feeding the 24 bits into the most significant bits of the 32 bit processor would be stupid."
Frank, if you use the extra 8 bits as MSBs as you seem to suggest, you haven't eliminated any issues stemming from a lack of headroom on the input. You've only temporarily avoided the issues until you have to fit the result back into 24 bits, at which point you will either need to attenuate or compress the result. Further, without any additional LSBs, any processing steps that involve roundoff error will add to the noise floor. That defeats the whole purpose of using 32 bits. If the input & end result are 24 bit, then the advantage of doing intermediate processing at 32 bits is specifically to make sure that any noise and distortion introduced during processing is well below the 24 bit noise floor.
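Dave's roundoff argument can be demonstrated numerically. This is a hypothetical sketch (illustrative, not any product's DSP code): re-quantizing to 24 bits after every step piles up rounding error, while 32 bit intermediates leave essentially one final 24 bit rounding:

```python
# Compare accumulated roundoff when intermediates are kept at 24 bits
# versus 32 bits, with a 24 bit final result in both cases.
import random

def quantize(x, bits):
    """Round a value in [-1, 1) to a signed fixed-point grid of `bits` bits."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

def process(x, inter_bits, steps=50):
    """Alternate an attenuation and its inverse, rounding each result."""
    y = x
    for _ in range(steps):
        y = quantize(y * 0.3, inter_bits)
        y = quantize(y / 0.3, inter_bits)
    return quantize(y, 24)        # final result is always 24 bit

random.seed(0)
xs = [random.uniform(-0.4, 0.4) for _ in range(200)]

rms24 = (sum((process(x, 24) - x) ** 2 for x in xs) / len(xs)) ** 0.5
rms32 = (sum((process(x, 32) - x) ** 2 for x in xs) / len(xs)) ** 0.5

print(rms32 < rms24)   # 32 bit intermediates keep the error near one 24 bit LSB
```

The gain values and step count are arbitrary; the point is only that the wider intermediate word keeps the accumulated error below the final 24 bit quantization step.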
I believe that if you ask around, you'll find that it's industry standard practice to leave 2-3 bits of headroom in the final result during PCM mastering and 6 dB of headroom in the result of DSD mastering. I believe you'll also find that when higher bit depths are used during intermediate processing, the extra bits are usually used on the LSB end.
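The two headroom figures above are nearly the same thing expressed in different units, since 6.02 dB corresponds to one bit; this conversion sketch is my own illustration, not from the thread:

```python
# Convert between headroom expressed in dB and in bits.
import math

DB_PER_BIT = 20 * math.log10(2)   # ~6.02 dB per bit

def db_to_bits(db):
    """Headroom in dB expressed as equivalent bits."""
    return db / DB_PER_BIT

print(round(db_to_bits(6), 2))        # the 6 dB DSD margin is ~1 bit
print(round(3 * DB_PER_BIT, 1))       # 3 bits of PCM headroom is ~18.1 dB
```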
"With DSD it gets tricky. There really is little headroom for arithmetic. In practice you end up with requantization or dithering down into 1 bit on many occasions after a processing step. And each time the signal deteriorates."
"The moment all the bits in the DSD stream are used up to express a high signal level, few or no bits are left to express the finer details of the signal."
I don't want to argue this too much since I really have no idea what Sony is doing in their DSD DSP. I would only point out that Sony recommends leaving 6 dB of headroom in the master specifically to avoid this sort of problem in the playback system.
I said:
"Even if you use a 128 bit DSP running at 384 KS/s, you're still not going to get transparent bass management"
To which you replied:
"In theory not, because there is still a finite resolution."
"But in practice it will be very transparent if correct filter arithmetic is applied."
"128 bit is a LOT of resolution and dynamic range."
"A 32 bit dynamic range is already deadly."
Sure, 32 bits is more than enough given that the end result has to fit back into 24. But my point is that the desired filter characteristics don't depend on how precise the arithmetic is. No matter how many bits you use to represent a sample, the filter designer faces the same tradeoffs between the magnitude response, phase response, and other factors. Implementing the filter in 32 bit arithmetic instead of 24 bit arithmetic only means that the noise you introduce through roundoff error is much lower. It doesn't change the filter's frequency response, and thus it doesn't eliminate any audible artifacts of that response.
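Dave's point can be seen numerically: a filter's response is fixed entirely by its coefficients, and any arithmetic width merely approximates the same curve. This one-pole lowpass sketch is my own illustration (the 80 Hz corner and 48 kHz rate are arbitrary choices):

```python
# Evaluate the exact frequency response of a one-pole lowpass,
# y[n] = a*x[n] + (1 - a)*y[n-1]. The response below is what 24 bit
# and 32 bit implementations alike would converge on; word length
# only changes the roundoff noise riding underneath it.
import cmath, math

def one_pole_response(fc, fs, f):
    """Magnitude and phase of the one-pole lowpass at frequency f."""
    a = 1 - math.exp(-2 * math.pi * fc / fs)      # corner-setting coefficient
    z = cmath.exp(2j * math.pi * f / fs)          # evaluation point on unit circle
    h = a / (1 - (1 - a) / z)
    return abs(h), cmath.phase(h)

mag, ph = one_pole_response(fc=80, fs=48000, f=80)
print(round(20 * math.log10(mag), 1))   # ~-3 dB at the corner
print(round(math.degrees(ph), 1))       # ~-45 degrees of phase shift there
```

The -3 dB point and the phase rotation near the crossover are properties of the coefficients; no amount of extra precision moves them, which is why the filter-design tradeoffs survive any word length.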
Dave
"Frank, if you use the extra 8 bits as MSBs as you seem to suggest, you haven't eliminated any issues stemming from a lack of headroom on the input."
I assume that the recorded material uses the dynamic range to its fullest potential, of course staying just under clipping.
Utilizing more defensive methods during recording to prevent problems in the rest of your production chain is fine. But they should consider upgrading to better processing equipment, like 32 bit FP processing.
"You've only temporarily avoided the issues until you have to fit the result back into 24 bits, at which point you will either need to attenuate or compress the result. Further, without any additional LSBs, any processing steps that involve roundoff error will add to the noise floor."
That's actually the case, but it's the best you can do to fit the resulting data back into 24 bits with minimal signal degradation.
"That defeats the whole purpose of using 32 bits."
Not at all, the 32 bits processing is providing the headroom to get the maximum result from the recording.
It also obsoletes the 'old school' thinking of the defensive recording practices that were necessary to circumvent the 'processing' bottleneck (be it analog or digital).
"If the input & end result are 24 bit, then the advantage of doing intermediate processing at 32 bits is specifically to make sure that any noise and distortion introduced during processing is well below the 24 bit noise floor."
Exactly.
"I believe that if you ask around, you'll find that it's industry standard practice to leave 2-3 bits of headroom in the final result during PCM mastering and 6 dB of headroom in the result of DSD mastering."
"I believe you'll also find that when higher bit depths are used during intermediate processing, the extra bits are usually used on the LSB end."
I don't think that's true for processing.
For sample rate conversion, attenuation and filtering processes it's best to use the extra bits as LSBs.
But for mixing (summing) and gain applications the extra bits are best used to maximize the headroom.
In practice the extra bits should be split equally at both ends of the PCM word.
In the last stage the file could be adjusted to use the maximum bit width by a fairly 'simple' bit shift before dithering it down to 24 bits.
This will always ensure maximum performance from the DAC utilized in the players.
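A minimal sketch of that final step, assuming a plain left shift for level followed by TPDF dither down to a 24 bit grid; the function name, the shift amount, and the 8-bit drop from a 32 bit intermediate are my own illustrative choices, not Frank's actual workflow:

```python
# Shift a quiet 32 bit intermediate up to use the full word, then
# TPDF-dither away the low 8 bits to land on a 24 bit result.
import random

def dither_to_24(sample32, shift):
    """Left-shift by `shift` bits, then requantize 32 -> 24 bits with dither."""
    shifted = sample32 << shift                   # the 'simple' bit shift for level
    # TPDF dither: sum of two uniform randoms, spanning +/-1 LSB of 24 bit
    d = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return int(round(shifted / 256 + d))          # drop 8 LSBs -> 24 bit word

random.seed(1)
x = 123456                                        # a quiet 32 bit result
y = dither_to_24(x, shift=1)                      # double the level, requantize
print(abs(y - (x << 1) / 256) <= 1.5)             # lands within the dither's reach
```

The shift raises the program level so the 24 bit word's full range is used before the DAC, which is the "maximum performance" Frank is after.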
Frank
My Denon AVR-4800 uses an analog filter on an "analog direct" input stream, processes the bass only in the digital domain and then mixes in analog. The only effect I hear when it is turned on or off is more or less bass. Similarly, the analog "bass management" performed by the Paradigm X-30 active crossover which comes with my Servo 15 subwoofer comes at the cost of a subtle brightness which is just barely detectable.
Jim, I didn't mean to suggest that all filters are highly audible, only that implementing them in the digital domain doesn't eliminate the basic tradeoffs associated with finding the most transparent filter response.
Also, I've found that for me, the only kind of bass management I can live with is to run all of my speakers full range and use a sub to fill in the bottom end of their response. This seems to work best with subs that are designed to be run this way, as they tend to have a gentler LPF which blends better with the rolloff of the mains. For a long time, I bought into the theory that you need to HPF the mains because sound quality would somehow suffer if they were driven into LF excursions below their cutoff. But now I find that any kind of traditional bass management involving an LPF on the sub and a HPF on the mains (typically with fairly steep matched rolloffs) compromises low frequency imaging somewhat and makes the sub stand out more rather than blending in transparently. Obviously, this is less of an issue the lower your mains go, and yours go lower than mine, so your mileage may vary.
Dave