Audio Asylum Thread Printer: Get a view of an entire thread on one page
In Reply to: RE: JPLAY Responds posted by Bob_C on June 14, 2013 at 19:01:59
have the common sense (and background) to accept that delivery time in a time-dependent stream being processed matters.
I have suggested that the answer will be to trace a musical stream from the HDD to the dac, but this is easier said than done.
Follow Ups:
I found the details of Jplay's explanation somewhat lacking. In particular, delivery time needs to be referenced to specific "whats" and "wheres". This they have not done. Also, in the case of an async USB connection, the stream is not time-dependent, as there is no timing involved, just data. I do agree with Jplay's simplicity principle. Einstein is quoted as saying, “Everything should be made as simple as possible, but no simpler.”
If you want to trace things back, then you will need source code, and possibly firmware and device design documents. It won't be easy. This is the fundamental problem with software solutions to sonic problems in the computer system. If these problems can be solved elsewhere in the playback chain, where system complexity is much lower and there is some hope of transparency, it will be much easier.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Simple: install it, run it, and look under priority and core assignment.
It doesn't work with my W8, increasing latency substantially and requiring larger buffers on the Mytek usbpal panel.
However, it works wonders with my W7 install, which doesn't sound anywhere near as good without JPlay.
The USB packets may not be associated with timing, but inside a particular computer, with all kinds of things running in the foreground or background, timing can be expected to matter, as the OSs are not 'real' time (just small time, if you like).
"It doesn't work with my W8, increasing latency substantially and requiring larger buffers on the Mytek usbpal panel."
Have you had a chance to look at these?
http://www.computeraudiophile.com/f11-software/iso-usb-key-installer-preconfigured-and-stripped-down-audiophile-version-windows-8-pro-including-jriver-and-foobar-14390/
http://jplay.eu/forum/computer-audio/windows-8-optimization-script/
http://jplay.eu/forum/computer-audio/release-of-new-windows-server-2012-audiophile-core-edition-this-weekend/
The O/S not being R/T is primarily an issue with regard to buffer underruns and overruns, which produce audible glitches. Modern I/O devices, whether sound cards or USB controllers, time the external signals that they generate using a hardware clock, not software.
Back in the 1970s I was a product manager for a line of datacomms interfaces that generated telegraph signals under unbuffered software control. They had to be redesigned to meet the NZ Post Office specifications for apparatus allowed to connect to telegraph lines, which limited jitter to 1 microsecond. I've also designed and implemented several real-time kernels for a variety of computer systems. These real-time systems weren't synchronous, but they did guarantee to meet certain latency requirements, again typically in the multiple-microsecond region. Even today, software latency is still measured in microseconds, hence the need for hardware buffering and hardware clocking for audio.
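The buffering point can be sketched numerically. The following is a toy simulation, not any real driver API (the function, buffer sizes, and delay schedule are all illustrative): the "hardware" side drains the buffer at a fixed clock rate, the "software" side refills it at irregular intervals, and a glitch occurs only when a refill is delayed longer than the buffer can cover.

```python
def simulate(buffer_frames, refill_frames, refill_delays_ms, rate_hz=48000):
    """Count underruns: the hardware clock drains the buffer at a fixed
    rate while software refills arrive after each delay in the schedule."""
    frames_per_ms = rate_hz / 1000.0
    level = buffer_frames              # start with a full buffer
    underruns = 0
    for delay_ms in refill_delays_ms:
        consumed = delay_ms * frames_per_ms   # fixed-rate hardware drain
        if consumed > level:
            underruns += 1                    # buffer ran dry: audible glitch
            level = 0
        else:
            level -= consumed
        level = min(level + refill_frames, buffer_frames)   # software refill
    return underruns

# 10 ms buffer at 48 kHz = 480 frames; one 15 ms scheduling spike
spikes = [1, 1, 1, 15, 1, 1]           # ms between software refills
print(simulate(480, 480, spikes))      # -> 1 (only the 15 ms spike underruns)
print(simulate(960, 960, spikes))      # -> 0 (a 20 ms buffer absorbs it)
```

Doubling the buffer absorbs the same spike at the cost of added latency, which is the usual driver-side trade-off.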
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
It depends how you look at it. Is a USB receiver part of a DAC, or part of a computer audio system? The same applies to a sound card.
I see them as part of the computer audio system. Maybe you see them as part of the DAC.
I see both the computer and its software plus the DAC (and any digital cables or adapters) as part of a computer audio system. Of course, this computer audio system is itself a subsystem of a complete playback chain. What counts is the performance of the complete playback chain, not the performance of individual sub-systems. In selecting or designing components of this playback chain it may be useful to evaluate subsystems individually, but after a point an excessive fixation on optimization may become useless since overall performance may be limited by other subsystems, not to mention the possibility of interactions, which is what makes matters difficult.
It may be convenient to divide up a computer audio system into three pieces, a digital piece, a mixed signal piece, and an analog piece. It may also be convenient to divide the digital piece and the mixed signal piece into portions according to their clock domains. All this would be obvious to any system engineer who was competent in design of mixed signal systems, which necessarily includes competence in pure digital systems and analog systems. (People with these talents are rare, and are mostly to be found working where there is big money, e.g. telecommunications and military electronics.)
Of course you can package things into "boxes" any way you wish. The audiophile can do this as well. You can take a separate computer subsystem, a digital cable and a DAC and put them in a single cardboard box and call that a "1 box computer audio system". You can have boxes inside boxes if you like, ...
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
timing and time sequencing do come into it
"timing and time sequencing do come into it"
Only at the point of actual conversion to an analog waveform. The time at which a CD was ripped to hard drive will have nothing to do with the sound quality when it is subsequently played back. The only timing requirement is that the rip must be completed prior to playback. The same comment applies to other stages in the computer audio playback chain, up to the point where the controlling clock does its thing. (In the case of SPDIF it would be the SPDIF encoder, in the case of async USB it would be the master clock in the DAC.)
Unrelated activities in a computer system can affect sound quality of playback, even if they have absolutely no effect on the timing of those activities that are associated with playback. An example would be a periodic activity running on a processor core that is never used for music playback. The power consumed by this activity will create electrical noise that can then couple into the playback chain. In this case, it is true that the timing of this background activity will affect the timing of the coupled noise, but I don't think this is what you had in mind when you were talking about "timing".
If one can reliably hear or reliably measure (take your choice) audio degradation caused by timing effects then it will be possible to trace down the root cause and the chain of secondary causes. This may uncover effective ways of breaking the causal chain, thereby improving sound quality without needing to use expensive draconian measures. (For example, if noise couples to a DAC by power wiring and physical proximity, one could employ a second computer audio system to drive the DAC and then experiment with the first computer audio system doing various tasks to see if it still affects sound quality even if it is not physically connected to the DAC.) I would do these tests, but the differences that I hear are not sufficiently great as to be quickly and reliably detected. I would need a better ADC than the one in my juli@, which has noise levels that limit resolution to about 17 bits. It is also likely that I would need better analysis software, but this would not be terribly difficult to procure or write.
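For reference, the "about 17 bits" figure follows from the standard effective-number-of-bits relation, ENOB = (SNR − 1.76 dB) / 6.02 dB. The ~104 dB SNR used below is inferred from the 17-bit claim in the post, not taken from a published spec for the juli@:

```python
def effective_bits(snr_db):
    """Effective number of bits (ENOB) implied by a measured SNR in dB."""
    return (snr_db - 1.76) / 6.02

def required_snr_db(bits):
    """SNR a converter would need to resolve a given number of bits."""
    return 6.02 * bits + 1.76

# An ADC with roughly 104 dB SNR resolves about 17 bits,
# well short of the 24-bit container it records into.
print(round(effective_bits(104.0), 1))   # -> 17.0
print(round(required_snr_db(24), 1))     # -> 146.2
```

This is why a card whose measured noise floor sits around 104 dB cannot resolve differences below the 17-bit level, regardless of the nominal word length.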
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Too often, competent engineers in the hi-fi world are also arrogant engineers who will discard any notion of how things may affect sound quality. I have certainly come across this in the Squeezebox world and in the player development world.
Just poor engineers, or no engineers. Those geeks who spent 4 years being taught to 'engineer' software products are not engineers.
"Too often, competent engineers in the hi-fi world are also arrogant engineers who will discard any notion of how things may affect sound quality. I have certainly come across this in the Squeezebox world and in the player development world."
I would disagree with you only in regard to the term, "competent". Any audio engineer who discards possible ways that sound quality might be affected is incompetent. In general, any arrogant individual who doesn't know that there are things he doesn't know is not only incompetent, he is a damned fool.
Years ago I worked for a large company. To provide a career path for the better and more experienced engineers that did not require them to go into management and supervise people we created a "technical ladder". The ladder contained job grades that were parallel to management job grades in terms of status and salary range, but did not require supervisory responsibility. The lowest rank on this ladder was "Consulting Engineer". For many years I served on boards that proposed and reviewed candidates for these titles. The distinguishing characteristic for a Consulting Engineer was the ability to foresee problems and plan activities that avoided them. In developing new technologies this meant that an individual had to have a firm grasp on what he didn't know, and constantly seek out new knowledge. An intelligent and experienced engineer with this attitude was key to avoiding development projects that spent a lot of time and money reinventing "the flat tire".
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar