Question for recording artist/engineers


Let's say you have a jazz band that wants to sell CDs of its music at the best sound quality it can achieve, at the lowest cost, whether outsourced or do-it-yourself. If the band wants to do just-in-time manufacturing of its CDs, how can it improve things?

Currently they record at 48k in Pro Tools, master in Sonic Solutions at Air Show Mastering, and then burn top-of-the-line blank CDs (Taiyo Yuden) with a Microboards Orbit II duplicator. This has produced average CDs, but we want to do better.

What would you engineers do to improve this so it gets closer to audiophile quality? Would you recommend a different mastering house, different blank CDs, or a different duplicator? Or would you just bite the money bullet and go directly to a full-scale manufacturer? We are trying not to have too much money tied up in inventory.

If this is the wrong place to post this question, please suggest another message board to post it on.

Thank you for your feedback and assistance.
lngbruno

Showing 5 responses by flex

Zaikesman, this may be closer to what you are looking for.

44.1 kHz came about because of its relationship to NTSC and PAL TV line rates. Early digital audio was recorded using adapted video recorders, and the audio sample rate had to be related to the horizontal video frequency so that both video and audio frequencies could be derived from the same master clock. 44.1 kHz was the original PCM-F1 format, which I believe was adopted first in Japan and ultimately became the compact disc standard.
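For the curious, the arithmetic behind that relationship works out neatly. A quick sketch (the line counts are the commonly cited figures for the early PCM video adaptors, which stored 3 stereo samples per usable video line):

```python
# 44.1 kHz from TV line structure:
# sample rate = samples_per_line * usable_lines_per_field * field_rate
ntsc = 3 * 245 * 60   # NTSC: 60 fields/s, 245 usable lines per field
pal  = 3 * 294 * 50   # PAL/SECAM: 50 fields/s, 294 usable lines per field
print(ntsc, pal)      # both come out to 44100
```

The same 44,100 drops out of both TV systems, which is exactly why it was a convenient master-clock-friendly choice.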

The use of 48 kHz is based on its compatibility with TV field rates (50 Hz, 60 Hz) and movie frame rates, and with the 32 kHz PCM rate used for broadcast. 48 kHz has integer relationships with all of the above, which makes it easier to set up time code for studio sync. Looking at an article on 48 kHz, the author mentions your original idea of sample rate conversion as a primary reason for the concern with integer frequency relationships in the early days of audio.
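You can check those integer relationships in a few lines; this is just the division, nothing more:

```python
from fractions import Fraction

# 48 kHz gives a whole number of samples per frame at the common
# film/TV frame rates, and a simple 3:2 ratio to the 32 kHz broadcast rate.
for fps in (24, 25, 30, 50, 60):
    print(fps, 48000 // fps)       # 2000, 1920, 1600, 960, 800 samples/frame
print(Fraction(48000, 32000))      # 3/2
```

Every one of those divisions comes out exact, which is what makes time-code sync straightforward at 48 kHz.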

That's not correct, Piedpiper. 88.2k is not just halved to 44.1k. Conversions to 44.1k from both 96k and 88.2k need a low-pass filter and a rate-conversion algorithm. And the conversion from 96k to 44.1k involves no losses relative to 88.2k to 44.1k; it just needs the right high-quality conversion algorithm, which is present in the Sonic Solutions workstation.
If you simply throw out every other sample of an 88.2 kHz signal, you will have a 44.1 kHz data set, but with all the frequencies over 22.05 kHz aliased into the audio band. The low-pass filter is there to attenuate signal energy above 22.05 kHz (the Nyquist frequency) before the sample rate is reduced.
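Here's a minimal numerical demonstration of that aliasing (my own example, using a pure cosine; a real downsampler would low-pass first):

```python
import math

fs_in, fs_out = 88200, 44100
f_tone = 30000.0   # a tone above the 22.05 kHz Nyquist limit of 44.1k
n = 64

# Sample the tone at 88.2 kHz, then "throw out every other sample":
decimated = [math.cos(2 * math.pi * f_tone * (2 * i) / fs_in) for i in range(n)]

# Without a low-pass filter first, those samples are indistinguishable from
# a 14.1 kHz tone (44100 - 30000) sampled at 44.1 kHz -- the alias:
alias = [math.cos(2 * math.pi * (fs_out - f_tone) * i / fs_out) for i in range(n)]

err = max(abs(a - b) for a, b in zip(decimated, alias))
print(err)   # ~0: the two sequences are sample-for-sample identical
```

The 30 kHz content doesn't disappear; it folds down to 14.1 kHz, right in the middle of the audio band, which is why it has to be filtered out before decimation.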

Not just archival; it actually sounds better to record at higher frequencies and then downconvert to Redbook. The current belief about high resolution is that it sounds better than Redbook mainly because it reduces the distortions caused by filtering, especially steep brick-wall low-pass filters. When you start with an 88.2 or 96 kHz, 24-bit signal, you still need a steep filter at the downconversion stage, but there are steps the filter designer can take to roll the filter off more gently, keep ripple very low, and dither the 24-bit signal down to 16 bits. I can recommend papers if you're interested.
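To make that last step concrete, here's a bare-bones sketch of TPDF-dithered requantization from 24 bits to 16 bits (the function name is mine, and a real mastering chain would typically add noise shaping on top of this):

```python
import random

def tpdf_dither_16(s24):
    """Requantize one 24-bit sample (int, -8388608..8388607) to 16 bits.
    Adding triangular (TPDF) noise of +/-1 LSB at the 16-bit level before
    rounding turns truncation distortion into benign, signal-independent noise."""
    lsb = 256  # one 16-bit LSB expressed in 24-bit counts (2**8)
    d = (random.random() + random.random() - 1.0) * lsb  # triangular pdf, +/-1 LSB
    q = round((s24 + d) / lsb)
    return max(-32768, min(32767, q))  # clamp to the 16-bit range

print(tpdf_dither_16(1000000))  # a 24-bit sample mapped to a 16-bit value
```

The sum of two uniform random values gives the triangular probability distribution; plain truncation instead would leave quantization error correlated with the music, which is far more audible at low levels.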
All of the downconversions from 96k and 88.2k use the same kind of algorithm; it's just that the non-integer conversions are computationally greater. 96k->48k and 88.2k->44.1k are single-phase filters, while 96k->44.1k and 88.2k->48k are polyphase filters of 147 and 80 phases respectively. This means that 147 or 80 sets of filter coefficients have to be stored instead of just one, and the math has to be written to rotate regularly through all of the phases. Polyphase filters are correspondingly more software- and memory-intensive to implement than single-phase filters, and can be much harder to run in real time. That is a good reason for consumer manufacturers to stay away from them; professional equipment usually has more processing horsepower. As for 88.2 kHz having an advantage over 96 kHz: most current players can play 48 kHz as well as 44.1 kHz, so there seems to be no compelling reason to stay with multiples of 44.1 kHz (assuming a DVD release format).
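If you're wondering where those phase counts come from, it's just the rate ratio reduced to lowest terms; a small sketch of the arithmetic (not a filter implementation):

```python
from math import gcd

def phases(fs_in, fs_out):
    """For rational sample-rate conversion, write fs_out/fs_in in lowest
    terms L/M; the interpolation filter is decomposed into L phases."""
    return fs_out // gcd(fs_in, fs_out)

print(phases(96000, 48000))    # 1   (a simple 2:1 ratio, single-phase)
print(phases(88200, 44100))    # 1
print(phases(96000, 44100))    # 147 (44100/96000 reduces to 147/320)
print(phases(88200, 48000))    # 80  (48000/88200 reduces to 80/147)
```

Each phase needs its own set of coefficients, which is exactly the storage-and-bookkeeping cost described above.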

Frequency doubling is not related to compute power. The most important reason for it is clocking: all equipment, consumer or pro, has to process both new and old formats using the same system clocks in its hardware and software operations, and system clocks can usually be divided or multiplied by factors of 2 fairly easily. Second, pro equipment needs to maintain compatibility with earlier audio and video recording frequencies in order to handle archival as well as new material. Ease of sample-rate conversion is a factor, but probably a distant third next to the first two.

'Throwing out every other sample' is something even the worst software writers know better than to do. You should listen to that kind of aliasing sometime to understand why it's wrong.
In the not-so-distant past, 44.1 kHz was the consumer standard and 48 kHz the professional standard. High sampling rates were something a few engineers experimented with, but they were nothing like an accepted standard as recently as ~10 years ago.