Has anyone heard the new North American products preamp and amp?


The new versions are called X-10s and the amp is on its third version, or Mark III. This truly provides holographic imaging unlike anything I've heard before. On symphonic orchestras, one can hear the first violins. I have never heard an amp sound this precise.

In reality, I doubt if any amplifier can rival it. I certainly have never heard any that do so. Every album is so involving.

The preamp has yet to get a remote but is, nevertheless, quite striking.
tbg

Showing 39 responses by roger_paul

timlub,

You mentioned - a ton of digital correcting done to the signal on the front end.

I'm not sure if you were referring to the amplifier design. It is a pure analog design. There is no digital circuitry involved.

Roger

Here is the correct link:
www.h-cat.com
Thank you Tim,

I know most audiophiles would not make the connection between an amp with low distortion (0.005%) and one with no distortion. On the surface you would think it would just sound a little better, like taking out a little residual distortion that nobody would consider "noticeable".

That is not the case, because there are types of distortion that don't show up on the THD analyzers. When those are removed, the difference is day and night.
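For anyone curious, here is a minimal sketch (standard textbook numpy, nothing to do with the H-CAT circuit itself) of how a conventional THD figure like that 0.005% is arrived at - it only sums energy at the exact harmonic bins of a steady test tone, which is the kind of measurement I'm saying misses the rest:

```python
# Conventional THD measurement sketch: energy at harmonic bins of a test tone,
# divided by the fundamental. Anything that is not a steady harmonic of the
# test tone simply never shows up in this number.
import numpy as np

fs = 48_000                      # sample rate, Hz
f0 = 1_000                       # test tone, Hz
t = np.arange(fs) / fs           # one second -> FFT bin spacing of 1 Hz

# Illustrative signal: a 1 kHz tone with tiny 2nd and 3rd harmonics added.
x = (np.sin(2*np.pi*f0*t)
     + 1e-4*np.sin(2*np.pi*2*f0*t)
     + 5e-5*np.sin(2*np.pi*3*f0*t))

spectrum = np.abs(np.fft.rfft(x)) / len(x)
fundamental = spectrum[f0]
harmonics = [spectrum[k*f0] for k in range(2, 6)]

thd = np.sqrt(sum(h**2 for h in harmonics)) / fundamental
print(f"THD = {100*thd:.4f} %")   # about 0.011 % for the levels above
```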

Take as an example a live person on a stage, and next to him a high-quality first-surface mirror set up so that you see the real person and a reflection of that person at the same time. As long as the mirror is completely stable, you may have a hard time distinguishing the real from the reflection. However, if you simply press on the center of the mirror so as to produce a tiny bend or warp, it becomes instantly apparent which is which. The instability of the mirror structure causes objects that are far away to be even more unstable, as the distance from the mirror "amplifies" the problem. The point is that it does not take much for your brain to recognize a fake. This is why background objects in a performance are harder to resolve in a system with even tiny amounts of non-linearity. The farther away from the microphone, the smaller the signal size, and the apparent location has drifted more than objects in the foreground or close to the mirror.

Remember the carnival mirrors that made your head small and your legs long? That is a non-linear mirror. It can be seen that the "small head" end of the mirror has (optically) compressed the image and the "long legs" end of the mirror has (optically) stretched the image.

The correlation fits the description of the Doppler effect. That is to say, a train headed toward you has a higher whistle pitch (compressing the sound waves), and as it passes you (moving away) the whistle pitch drops (stretching the sound waves).
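For reference, the textbook Doppler relationship for the train example looks like this (just the standard physics; whether and how it applies inside an amplifier is the claim being made here):

```python
# Classical Doppler shift for a moving source (the train-whistle example).
c = 343.0          # speed of sound in air, m/s
v = 30.0           # train speed, m/s (about 67 mph)
f_source = 440.0   # whistle pitch, Hz

f_approaching = f_source * c / (c - v)   # ~482 Hz - pitch rises
f_receding    = f_source * c / (c + v)   # ~405 Hz - pitch falls
print(round(f_approaching, 1), round(f_receding, 1))
```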

Over the years I have claimed that Doppler is the destructive force in amplifiers. It is possible for an amplifier to alter the pitch of a signal with no moving parts. Removing Doppler from an amplifier forces its "reflection" to be true, and now you are back to having a hard time telling if what you see (hear) is the real image or a reflection. The brain accepts either image as real. Whatever comes into the power amp will exit as a scaled clone with perfect pitch.

Roger

 
The Mach One speed is of course the velocity of the "wave" part of the sound wave. The details of each sound object are embedded in the instantaneous pressure, which is seen on the vertical axis. The frequencies contained in the rich harmonic structure of each instrument are totally dependent on the delivery speed being constant (seen on the horizontal axis as the time domain). If the velocity slows down or speeds up, then all instruments are shifted up or down the spectrum together by an amount of offset that stands out to the ear-brain system as "not real".
The entire performance has acoustic relativity - meaning if one instrument drifts in location, they are all drifting together.

It does, by the way, shift up and down the spectrum. This is how energy from the fundamental input signal appears (as distortion) a full octave away, as the second harmonic.

My process does not have the problem of shifting anywhere because it is fully locked by the shift countermeasures, implemented as nano-degrees of phase shift. It has a capture range of 0.07 nanovolts max deviation.

It therefore has no mechanism in place to produce harmonic distortion.

Roger
tbg

Has anyone heard the new North American products preamp and amp?


Yes I have.

Roger
mapman

Why do you talk about subroutines when all analog?
I have been a computer programmer for years, and what I have found is that if you see the functionality of a circuit as a task unto itself, you can consider it a subroutine that can be accessed to generate the response needed by the calling circuit, which is the core signal handler. It is important to keep in mind this is pure analog that can act so quickly it is "near digital". This concept stems from the smallest trigger event being able to shift the velocity by parts per billion (nano-degrees). This matches the detection levels reported by the velocity detectors, which use quantum physics to generate extremely high gains.

The support template surrounding the core circuit has many power supplies. Each subroutine has its own supply to maintain purity.

Roger

It seems that the worst offender of the velocity errors is the power amp.
It may be because it has the task of converting the electrical wave into an acoustic wave via the transducers (speakers).

As you may know, sometimes the IC connections can become "dirty" or poor, and when you reconnect them or clean them up it restores the transparency and focus to your presentation.

That is the result of fixing a "poor" connection. The destructive nature of the poor connection represents a non-linear event in the chain. It does not take much of a non-linearity to scramble location information, etc.

A power amp that does not contain a non-linear path keeps the purity of the chain much higher and allows the positive and negative wave-fronts to stay registered (needed if you hope to hear live).

Roger
 
I think this can be narrowed down to a basic concept.

What is the difference between listening at the venue and listening at home?

At the concert hall the medium is air.
At home (from your speakers) to you is air.
The electrical "version" of sound waves is put between you and the venue that is handled by your gear.

The only way to remove the effects of your gear is to have it "act" as if it is also air. The way to do this is to make sure that your gear delivers the (musical) information to you at the same speed as it traveled through the air at the time of the recording.

You listen to the sound (re) created by your speakers at constant velocity.
By locking the speed through the amplifier as constant you have done two things.

1) It no longer can create harmonic distortion - because air does not.
2) Your ear-brain system now accepts what it hears as a clear (air only) path from performer to you. 

It cannot sound "live" without addressing the delivery speed.
This concept could not be easier to grasp once you can see the damage caused by an unstable velocity in the mix.

If you treat the electronic section as a "bad" connection between the venue [air] and your listening room [air], it might make more sense.

Sound has two properties:
Pressure - represented by the amplitude or magnitude (vertical axis)
Time - represented by the speed or velocity (horizontal axis)

Amplifiers get the [pressure] part of it down pretty well - but they don't get the [time] part of it due to distortion.

Slight errors in amplitude (pressure) are caused by slight errors in timing.
If you fix one - you have fixed both.

You need both to be correct if you want to feed your brain with the key attributes that make it "live".
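On a single pure tone, the equivalence between a tiny timing error and a tiny amplitude error is easy to see numerically (this is just the slope of a sine wave, not a model of the amplifier):

```python
# A small timing error dt on a sine wave produces an amplitude error of
# roughly slope * dt, so on a pure tone the two descriptions coincide.
import numpy as np

f = 1_000.0               # 1 kHz tone
w = 2*np.pi*f
t = 0.1e-3                # look at a point 0.1 ms into the cycle
dt = 10e-9                # a 10 ns timing error

exact_error  = np.sin(w*(t + dt)) - np.sin(w*t)
linear_model = w*np.cos(w*t) * dt          # slope times timing error
print(exact_error, linear_model)           # the two agree to several digits
```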

The consensus generally is that harmonic distortion specs alone tell you little meaningful about the resulting sound.
This could not be truer - the problem is that the THD specs have not been taken far enough. In other words amplifiers may have suppressed harmonic distortion but that is not good enough if you expect it to be perceived as "live". It cannot have even small amounts - it has to be removed entirely.

Roger



I know how difficult it is to see what I'm talking about, believe me - this is why it took years to figure out.

The electrical version of a sound wave has to include the complete phenomenon of the [wave] event. Otherwise it can't possibly be expected to sound like (un-amplified) sound.

The electrical gear has to translate the sound wave (microphone) to another medium (amplifier) and transfer it back to air (from speakers) with no alterations.

Live is literally lost in translation.

Roger
geoffkait,

One more question: once the signal leaves the amplifier it still needs to get to the speakers.  The speaker cables introduce additional velocity differences in the signal since the high frequencies travel closer to the surface of the conductor where the resistance is less, no?
This is an excellent question that I'm glad you asked.
The difference is that one is a non-symmetrical phase error and the other is a symmetrical phase error.

Symmetrical phase errors are tolerable because they "screw up" the positive and negative wavefronts identically. While it is not ideal, it won't modify the location of an object the way that a non-symmetrical phase error can.

A non-symmetrical phase error can alter the phase more on one of the pos/neg wavefronts, which will introduce velocity errors (the positive wavefront may arrive faster (sooner) than the negative wavefront).
This puts a constant wrinkle in the time domain.

So, in the case of your speaker wire: indeed, high frequencies travel closer to the surface (skin effect), but this happens equally to signals traveling in both directions.
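To put that in scale, the standard skin-depth formula for copper (again, just textbook physics, not a statement about any particular cable) gives numbers in the tenths-of-a-millimetre range at the top of the audio band:

```python
# Back-of-envelope skin depth in copper: delta = sqrt(rho / (pi * f * mu)).
import math

rho = 1.68e-8            # resistivity of copper, ohm*m
mu  = 4e-7 * math.pi     # permeability, ~mu0, H/m

for f in (1_000, 20_000):
    delta = math.sqrt(rho / (math.pi * f * mu))
    print(f"{f:>6} Hz: skin depth = {delta*1e3:.2f} mm")   # ~2.1 mm and ~0.46 mm
```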

In an active stage, a non-symmetrical phase error created by a non-linear event injects what looks like a poor connection into the chain.

The connection will lean towards a diode effect meaning it conducts better in one direction than the other. This phenomenon will always cause the location information to be scrambled or diffused and the holographic view to collapse.

Roger
mapman,

The consensus generally is that harmonic distortion specs alone tell you little meaningful about the resulting sound
You are making my point. In the same way, SS amps with 0.005% cannot match the warm, full sound of a tube amp with 0.5%.

I knew years ago that the THD measurement gear seemed to ignore or miss something that should be relevant. It is microscopic variations in speed caused by the active amplifying process. It is Doppler on a very fine scale.

If you try to raise the pitch of a 1 kHz tone so it becomes a 2 kHz tone, what is the very first thing to happen? (Picture analog gear with an old-fashioned knob for frequency.) The very instant that you put pressure on the knob to turn it towards a higher frequency, you start to alter the phase angle of the current 1 kHz tone, and the instantaneous frequency has to pass through every frequency between 1 kHz and 2 kHz. So not long after you try to raise the frequency it puts out 1001 Hz, 1002 Hz, and so on until you reach the desired frequency.

When amplifiers produce harmonic distortion the same action occurs: as the trace of a perfect sine wave hits a non-linear event, the result will be a harmonic at 2 kHz. However, if we go back and slow down the events, we can see that the first thing to happen is the beginning of the fundamental moving up the spectrum towards 2 kHz. If we can monitor the shape of the sine wave and notice as soon as it appears to deviate from ideal, it would still be within less than a cycle of 1000 Hz. Knowing that the direction is headed up the spectrum, we can apply a phase countermeasure to force it down the spectrum by an equal amount. This essentially locks the fundamental in place at its own frequency, never having the opportunity to shift or drift up or down the spectrum. It is phase locked, with control of the phase down to nano-degrees. The distortion is virtually zero. If you prefer a measurement - how about -250 dB (150 dB below the noise floor).
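The "knob turning" picture has a standard signal-processing counterpart: instantaneous frequency is just the rate of change of phase, so a tone swept from 1 kHz to 2 kHz necessarily passes through everything in between. A small illustration (generic DSP, not the correction circuit):

```python
# Instantaneous frequency is the derivative of phase: sweeping a tone from
# 1 kHz to 2 kHz passes through every frequency in between.
import numpy as np

fs = 48_000
t = np.arange(0, 0.5, 1/fs)                 # half a second
f_inst = 1_000 + 2_000*t                    # 1 kHz -> 2 kHz linear sweep
phase = 2*np.pi*np.cumsum(f_inst) / fs      # integrate frequency to get phase
x = np.sin(phase)

recovered = np.diff(phase) * fs / (2*np.pi) # frequency recovered from phase
print(round(recovered[0]), round(recovered[-1]))   # ~1000 Hz ... ~2000 Hz
```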
I hesitate to release the other specs about this amplifying method because if you are having a hard time absorbing the accuracy achieved it will get even more incredible. 

Roger

The speed I'm talking about is not the speed of electricity traveling from the positive supply down through the tube/transistor and into the circuit ground.
That is the vertical axis, which is responsible for providing an instantaneous voltage (potential) that represents the instantaneous air pressure.

On the other hand, the horizontal axis represents the time domain.
If for some reason the vertical motion of a rise in voltage (increase in pressure) fails to make it to the theoretical peak of the sine wave, then we see this as compression. If you place the compressed image over the ideal image, it can be seen that somewhere (maybe 3/4 of the way up) the amplifier trace begins to fall behind the ideal trace. Its velocity has slowed down. This means that a portion of the sine wave has been altered; its shape is no longer ideal and now represents the shape of a lower frequency. Likewise, if the amplifier trace is seen as reaching a higher-than-ideal voltage, then its velocity has increased. This looks like expansion (the opposite of compression). That portion of the amplifier's wave now appears to be a higher frequency. The pitch has changed.

This is classic Doppler.

By securing the velocity along the time domain, the amplifier is forced to put out a trace that would superimpose onto the ideal trace. The shape of the sine wave is not altered and will always represent the fundamental frequency. Harmonics of the locked sine wave are nonexistent.
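A quick way to see the "compressed trace" point numerically: run an ideal sine through a mild, symmetric soft-clipper (tanh is used here purely as a stand-in non-linearity) and the spectrum sprouts harmonics that were not in the input; leave the trace unaltered and there is nothing there.

```python
# A compressed (soft-clipped) sine wave grows harmonics; an unaltered one does not.
import numpy as np

fs, f0 = 48_000, 1_000
t = np.arange(fs) / fs
ideal = np.sin(2*np.pi*f0*t)
compressed = np.tanh(1.5*ideal) / np.tanh(1.5)   # peaks fall short of the ideal shape

spec = np.abs(np.fft.rfft(compressed)) / len(compressed)
for k in (1, 3, 5):          # symmetric compression creates odd harmonics;
    level = 20*np.log10(spec[k*f0] / spec[f0])   # an asymmetric error adds even ones
    print(f"{k*f0} Hz: {level:6.1f} dB re fundamental")
```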

Distortion and linearity are inversely related: the less linear the amplifier, the more the distortion. Instead of trying to make a linear amplifier, I made one that does not distort. If you have zero distortion, you automatically have 100% linearity (same as air).

In the absence of Doppler distortion, every instrument in a full orchestra can be heard separately as if you were at the original venue. 

Yes electricity travels at roughly the speed of light.
The horizontal movement of a signal representing a sound wave through a circuit has a specific playback speed. 

If you record sound at Mach One you must play it back at Mach One.

Roger


gdhal,

what happens when the two speakers (assuming two channels) are not precisely (as in less than 1000th of an inch) aligned and distanced from the listeners ears? Is distortion re-introduced?
Actually - the only thing that will happen is that it will project the performance with the center stage slightly to the left or right, depending on which speaker is slightly closer. It is a shift in your viewpoint.

Roger
Thanks mapman,

I hope you get a chance down the road to give a listen.
I will be at the Newport Beach show in June.

Roger

gdhal,

Those observations are slightly different.
It is true when I describe the projection that results in a full stage shift caused by the slight offset in distance between the speakers and your ears. If you were sitting in the hall for real and you turned your head slightly to the right (making your left ear closer to the stage), would it not appear that your perception of the stage had moved slightly to the left?

Regarding the sweet spot. Here is where I may get into trouble.
A portion of the "sweet spot" is directly caused by distortion. Surprise!
Both stereo channels have the same circuit. Having the same circuit means they behave identically. Whatever distortion is present in one will be present in the other.

When full spectrum music is played, the distortion product in both channels will be the same and it (the distortion) will manifest itself as an [additional] sound object all by itself. To the observer it sounds like a monophonic "entity". In order to experience the "spot" you can move your head from side to side while looking straight ahead. You will sense a moment when it seems like you hit a focal point of significance. Depending on how much of the music is close to common (centered or vocal mic mixed in as mono) you will have a distortion object added to your presentation. That object is dead center (assuming close matching of circuit components). If your speakers are at slightly different distances it will not be dead center.

This can be confused with an actual sweet spot created by the polar dispersion pattern of your speakers and how the venue was captured.

Because of the nature of H-CAT, after the electronics are burned in, the final remnants of the common distortion product disappear, and you will be able to perceive a wide-open sound stage that can be "viewed" from many listening positions. If this were not true, then only the people in the hall who are seated dead center would have enjoyed the performance.

Roger


geoffkait,

But it is what you two have been saying, by claiming that an amplifier can produce "live" sound in the room, the same "live" sound from the recording venue. I am simply pointing out that that statement cannot be TRUE because there are SO MANY PROBLEMS INHERENT in the home audio system that DISTORT  the sound, not just amplifiers. Follow?
I think I may have found some common ground here. What you are saying is also true but here is the significant difference in the type of distortion.

Almost all of the "other" problems introduce static distortion as apposed to dynamic distortion.

This all relates to motion. If you take a picture, mount it onto a shake table (a vibrating platform), and turn it on, what can you make out in the photo? Depending on the intensity of the vibration it will certainly be less than if you turn it off.

That is dynamic distortion which modulates the information.

Now take the same photo and this time just set it up straight with no vibration. However instead of straight on - rotate it a few degrees in one direction and leave it there. Now what can you make out in the photo?
Probably everything just fine - except it is viewed from an odd angle which stays at that angle. It may have a warp or change in perspective but even the warp is stable.

That is static (stable) distortion that interferes with an unobstructed natural view.

Conventional analog amplifiers modulate the signal (velocity) flowing through a circuit by extremely small amounts, and yes, VACUUM TUBES do this LESS than SS, but when you remove the modulation completely it is day and night.

You cannot experience "live" sound in the presence of any modulation.
The velocity in the concert hall has zero modulation and your brain recognizes the stability of the air medium as authentic.

Streaming this information into your ear canals at the perfect playback speed taps directly into the default process of the ear-brain system.

The reconstructed image of the source of the sound is easily uploaded to the mind when it enters the brain at the right speed.

Roger

geoffkait

I was mostly trying to separate the phase-based errors that are found in crossovers and the phase characteristics of ICs with regard to affecting the overall tonal balance of a presentation - like the highs traveling on the outside (skin effect) and how that can warp or tear at an image in a fixed way.

There are other dynamic issues as well with cables when they pick up vibration of course and any effort to control the vibrations (anywhere) is desirable. 

It has been my experience that the effects of velocity-based "electrical modulation" in the amplifier chain were absolutely startling when removed. I would consider it an order of magnitude more destructive than mechanical vibrations. I know that with a constant-velocity amplifier all the other issues seem to be exposed more easily.
It is quite surprising what happens when the music signal is allowed to flow steadily along the time domain without circuit induced contamination.

In fairness to your comment - I stand corrected.

Roger


geoffkait

(I have carried this over from the other "neutral" thread since it belongs here)- Roger

It is a logical fallacy that one can automatically achieve audio nirvana using an ideal amplifier, assuming for a moment that is what yours is. Things are just not that simple

You're right, things are not that simple. This is why it took 25 years of intense research targeting one problem - distortion in AUDIO amplifiers.

No one else has come close to a full understanding of the amplifying process used specifically for signals in the audio spectrum. Amplifiers used for radio, video, UHF, microwave, etc. do not have to deal with delivering analog data from a different medium. Audio amplifiers require the total package, which must include velocity. The signal has to return back to sound waves in your home. It cannot be done in an environment where the velocity is unchecked.

Try to remember back in the day when you may have gone from a mid-fi Kenwood or Sansui receiver to your first real audiophile gear (most likely tubes) and what a stark day and night difference it made. For you it was a whole new world of audio. Finally it sounded like real music.

Then there was the horror of new digital (CDs) on the scene, and all it did was give you stress; it was not anything like a good analog front end.

(I'm sure most of you will say it is still the case)

Look at how difficult it was for me to explain the [fact] that there are 2 separate distinct speeds happening in the amplifier.

1)     Electricity traveling at (speed of light)

2)     Electrical signals representing sound waves traveling at (750 mph)

This is nothing new – if the wave phenomenon could not “flow” through the hardware at this speed you would not be able to use it for audio.
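As a quick sanity check on the second figure: the speed of sound in air at room temperature is about 343 m/s, which works out to a little under 770 mph, so "roughly 750 mph" is in the right ballpark.

```python
# Converting the speed of sound in air (~20 C) from m/s to mph.
c_mps = 343.0
c_mph = c_mps / 0.44704      # 1 mph = 0.44704 m/s
print(round(c_mph))          # ~767 mph
```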

What I have done is to guarantee the flow will be at exactly one constant speed or velocity.

That was not simple - it takes control at quantum levels to achieve this function.

If the velocity is perfectly nailed down – you have emulated the properties of air.

It has never been done before. That’s why it is a breakthrough. That’s why when you hear it in operation it is not recognizable as electrically delivered sound.

All it takes is for people to be open minded enough to learn something new that directly impacts the world of entertainment.

Judge for yourself [after] you hear what it does.

The worst skeptic is converted within seconds of exposure to this process.

They may have no idea how it was done - but now know it obviously works.

Roger


If you showed a smartphone to someone in the 1600s you would be burned at the stake for witchcraft.

This is a time of incredible advancements in technology.
Still - Dolby Labs with all their millions cannot duplicate what I have done.
They have to install 27 speakers in the theaters and artificially pan sounds through separate channels to "give you the thrill of being there".
That's because they don't know how to project a stable sound object into mid air.

It is nothing more than very expensive "fake".

This amplifying method is self-evident and will easily stand the test of time.
You can't tell someone that "it's impossible" or "you can't do that" - after it's already been done.

It is a moot point.

cleeds,

No, I was referring to the fact that different frequencies travel at different distances from the center of the wire. "Skin effect" is the RF term, but it shows the extent to which high frequencies move away from the center, which can be present in the audio band.
geoffkait

  Are you doing something quantum mechanically? Or are you just fond of the word quantum? ;-)
Yes - part of the auto-focus circuitry that tracks sound objects (as a very small signal)

I had to create a device that was dead quiet and begins tracking at 0.07 nanovolts. The quantum reference thread used to detect velocity has an extremely high Z (90+ gigohms). The velocity control system is driven directly from this process, which squeezes the signal [gauge] into a virtual plasma. The plasma is also high Z. The thread is then combined with the signal at the plasma level to ensure impedance matching. The signal is suspended this way so the auto-focus circuitry can fully dominate the signal velocity while preventing it from "touching" the surrounding template and support circuitry. This keeps contamination out of the signal.

The velocity control system is a twin running shift generator (red shift - blue shift) which is held to a servo neutral point under tremendous pressure.
Once detected, extremely tiny deviations in signal velocity are met with a countermeasure (injected into the plasma) of greater than 700 dB in real time.

All of this is held in a solid substrate made at the factory.

The signal velocity has no chance of deviating away from dead accurate. The max error would be nano-degrees of phase shift away from the fundamental.

It is impossible to generate harmonic distortion.
This technique is used through the entire chain.
The output is a virtual clone of the input (only bigger).
Since the output velocity = the input velocity it will pass the electrical version of a sound wave all the way through at exactly Mach One.

The hardware itself, which has no sound of its own, emulates the properties of air. Both pressure and time are locked in sync with the music signal.

The experience can be described as listening through a large hole in the wall to a performance happening in the next room. Nothing but air.

That's live.

Roger
 

Geoffkait

The precision of the auto-focus circuitry relies on quantum mechanics to function properly. It extends the sensitivity of the velocity detection by massive amounts. The last few years of my work have been spent trying to accurately detect the flow or motion of the sound [wave] itself. The music information is clearly present in the electrical signal, but embedded deeper down in the signal (at a "DNA" level) is the stored data that reveals the pinpoint location of sound objects in a captured venue. It is tied directly to the flow rate or velocity of a traversing signal as it passes through an amplifier.

The wave phenomenon can be tangible or intangible depending on the medium.

Try this…

You are at a baseball game and the fans at one end of the stadium start what is known as the "wave". Fans stand up and raise their arms and sit down as fans next to them follow the same pattern. To fans across the other side of the stadium it appears that there is a continuous movement or flow of the "gesture".  There is no physical transfer of anything between fans but you can observe the wave phenomenon as something flowing and at any one instant you even know where it has passed and how long it took to get there.

The wave part of a sound wave is the critical key to recovering not only the accurate measurements of instantaneous air pressure - it also reveals when [time] and where [location] the change in air pressure began. At the venue, from your seat, you are listening to a historical event due to the delivery time. It is important to note that the slight delay (delivery offset in time) caused by the distance the waves travel is automatically ignored and removed by the ear-brain system, as long as you are close enough to the source and do not have visual clues (when you observe a drum being struck). That air pressure data rides on the wave, which can be considered the carrier (like an AM radio transmission, you must be tuned to the carrier to recover the audio).

When the flow rate cannot be identified, or it acts more like FM where the carrier frequency is intentionally modulated (horizontal axis) with audio, the location and pitch of sound objects have become unstable. The amount of instability is proportional to the degree of (velocity) contamination caused by microscopic alterations in the time domain. Under these conditions the "delivery offset" cannot be ignored by the ear-brain system because it is varying; therefore the apparent distance between you and the performance is not constant. This causes a secondary tracking event by the brain to deal with an offset that keeps changing. THIS DISTRACTION ALONE IS THE RED FLAG THAT TELLS YOUR BRAIN IT IS FAKE. In the absence of this variation, your brain is free to use the default (high-level conversion) of streaming sound entering the brain at a CONSTANT speed of (750 mph). This is not optional - it is 100% necessary for events to sound "live" to your brain.

A “carrier” implies a frequency or an AC component – The speed of sound is more like DC because of the continuous flow in one direction. (Not having to return back to the stage).
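For anyone who wants the AM-versus-FM analogy spelled out: in AM the audio rides on the carrier's amplitude, in FM it perturbs the carrier's timing. The sketch below is just the standard modulation math, offered only to illustrate the analogy:

```python
# AM: audio modulates the carrier's amplitude. FM: audio modulates its phase,
# so the carrier's instantaneous frequency (timing) wobbles with the audio.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
audio = np.sin(2*np.pi*5*t)              # a slow 5 Hz "program" signal
fc = 1_000                               # carrier frequency, Hz
kf = 25.0                                # FM peak frequency deviation, Hz

am = (1 + 0.5*audio) * np.sin(2*np.pi*fc*t)

phase = 2*np.pi*fc*t + 2*np.pi*kf*np.cumsum(audio)/fs
fm = np.sin(phase)

inst_f = np.diff(phase) * fs / (2*np.pi)
print(round(inst_f.min()), round(inst_f.max()))   # carrier wobbles ~975..1025 Hz
```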

At no time is this technique used to "enhance" or create some dimensional perspective that is not literally contained in the original sound wave.  

The bottom line is the successful "conversion" of information that has gone through two types of media, where the output method (final translation at the speaker) is talking the same language that your brain understands.


Roger
atmasphere

700db? I'm sure you must realize how improbable this statement is, so I'm sure this is a typo. What did you really mean.  
It is not a typo.  Seven hundred decibels.

I have been busy developing RAW amplification in that range with the help of quantum physics to remove anything that causes instability. That was not easy.

The signal current is down converted in thickness so it can be combined with the quantum thread. The reaction that takes place at that level results in a massive signal (output) that contains the exact velocity of the streaming audio signal.

When combined with the sensitivity of the shift generators it is closer to 800 dB. It all happens in a single point that is housed in a Faraday cage and buried in pure copper.

This is why it has resolution of biblical proportions.

Roger



atmashpere

Now the problem here of course if that you have a circuit that can do the process, but no means of measurement, as nothing exists that can deal with numbers that small.

With all due respect - it is not my problem.

Tubes have higher MEASURED distortion - yes?

Since there is plenty of distortion to play with in tube amps and it has no problem showing up on man-made test equipment, did your measurements from tube amps help you make a better amp?

atmashpere -

 I make this point fairly often in that one of the areas that we know very little about is how the human ear/brain system works. And because we don't know much about how it works, we don't really design equipment that takes advantage of those rules


I have been blessed with an understanding of the ear-brain system. (EBS)
At least to the point where I can design to it. That is what I have done.

If you have read my white paper you can see that I have no problem thinking outside the box. I have spent many years concentrating on one concept. I was determined to understand what happens to music signals when passed through an amplifier. Obviously they "act" differently in tube circuits vs SS circuits. I literally did behavior analysis on the fragile signal to see how it reacts to being manipulated by different circuits.

I wanted to somehow feed the musical information from the venue directly into the EBS. To do that you have to learn what the EBS "likes" to hear.
I'm not referring to your favorite music - I'm talking about what your EBS feels “comfortable” with receiving as a perfect data connection.

We know what happens when the EBS feels “uncomfortable” – that is when stress enters the picture.  Listening fatigue, etc. To prevent this from happening we must not feed the EBS with a mismatched or poor connection. It will reject the data as invalid. We can keep listening to it but it will not be accepted as a valid live sound.

Live sound has a path to the EBS unlike that of electronically delivered sound. It is apples and oranges. Live sound flows perfectly through air into the EBS, which easily uploads the data to a higher level of analysis by the brain. It passes the test of validity because the brain recognizes the delivery method - the medium of air. It freely passes the data to an area of the brain that reconstructs a mental image of what it hears. It reaches the mind. At this level your conscious mind is free to "browse" the sonic landscape, perhaps being attracted to a specific area in the [mental display] that it would like to concentrate on.

The ability to discern several instruments simultaneously is amazing enough, but to further apply a desired filter to be able to listen to just one while the others are still playing is also a testament to the massive sophistication of the whole process.

It is this extra ability to place a filter over one area of interest that can make or break the process. The conscious effort to filter something will only work if the target (of the filter) is stable.

Variations in delivery speed cause the mental location of objects to drift. The effort now to place a filter fails due to the moving target. The brain instantly knows the data transfer is contaminated with something that is not found in nature. It is fully aware that this is not a live event.

My goal was to specifically create a valid delivery system that allows the natural connection to the EBS to happen – it can now pass the validity test and continue to upload to the higher conscious level where object recognition takes place and the reconstruction in the mind of the sound stage happens.

More importantly – it is free to use the [mental] filtering tools to allow the concentration of interest to dictate what instruments are desirable or which can be ignored. Both filters require a stable target.

Yes it required quantum physics to penetrate the music signal enough to find its velocity and stabilize the process prior to passing it on to the ear-brain system.

Roger
atmasphere

as Carl Sagan said "extraordinary claims require extraordinary evidence". 
You're right - I guess since it cannot be measured, then the only "evidence" is how it performs, which by Carl's definition has to be "extraordinary".

I'll settle for that.

BTW I was in no way trying to dig at you. I have tremendous respect for you and your reputation. I was just trying to point out that even things you can measure don't in and of themselves provide answers so easily. It is even more difficult "working in the dark". 

Regards,

Roger

GK

Let's put it this way...
I'm convinced that if I showed you the exact process, including all schematic diagrams, it still would make no sense to you how it works.

I have done this with top EEs in the government. It went right over their heads. What does that tell you? That I'm a genius? No. Just that I did the hard homework and found something everyone else missed, because they are stuck thinking inside the box.

You have to treat your system not as a "stereo" but rather as a translator of information suitable for consumption by the brain.

Your brain is the end user.

I found the contamination that breaks down the [outside] link to the ear-brain system. Without using quantum physics - that link cannot be accurate enough.

It successfully removes velocity based "analog jitter" in the time domain.
It locks down the correct playback speed and guarantees it is [constant].

This allows the smooth transfer of the sound [WAVE] phenomenon to flow toward you as if it were happening in the same room.

How difficult a concept is it to grasp?
 
To your collective delight I think I'm done trying to explain it.

Listen and enjoy.

Roger
I agree.

I think after the back and forth so far in this thread we may have found a common understanding of the issue and hopefully some solution.

After all, look at the attention being paid to the physical stabilization of cables, racks, and chassis. Airborne vibrations are a valid contamination in the system. We know that "tiptoes" and other spikes put under the gear help to remove or dampen the effects of bass and more from penetrating and returning to the path of the source. The rings placed on tubes to cut down on microphonics are another example of the pesky problem of physical instability.

Low-jitter clocks and the like in our digital sources, again, are all meant to eliminate or suppress the damage done to the otherwise smooth flow of information.

We know that the unchecked tiny micro vibrations will limit the purity of the musical presentation. 

The key to understanding how it hurts the presentation is that it affects the [velocity] of the delivery system. IOW it is not just an interfering note or sound added to the mix - it is the fact that it is a mechanism of alteration or modulation of the base delivery method. Your image is [shaking] as a result of the vibration issues caused by the physical world entering the chain.

Unless you have very sophisticated lab gear to examine the actual micro vibration at work, it is safe to assume that we know it exists by the fact that calming things down physically is quite noticeable acoustically. You don't have to measure the tiny vibrations to know that they are there.

In light of the awareness of this "invisible bug" it is readily accepted as fact that its presence is destructive and that it will react to attempts (mostly by trial and error) to suppress it.

If you consider what I have found and were able to address as simply more dynamic interference caused by the improper handling of an analog signal traveling through an amplifier, then I think we are on the same page.

The most startling aspect of the analog "jitter" is how bad it is compared to other destructive forces caused by the physical world. The unstable velocity in the amplifier happens at nano-scopic levels (far below measurable levels). But again - knowing it is there and making attempts to catch it happening based on [theory] is no different than you placing lead weights on boxes and draping your cables over insulators knowing that you are blindly affecting the issue.

Once you can suppress the analog jitter, other physical work done to keep things "stable" are much more obvious because that is all you are left with.

Roger

No curiosity, no discovery.
mapman

Any publicity may be good publicity compared to none. But one would expect spreading misinformation to backfire eventually.
Some manufacturers sell the sizzle not the steak. If you have the real deal - you don't have to lie about it.

It is self evident.
geoffkait,
Roger wrote,

"The most startling aspect of the analog "jitter" is how bad it is compared to other destructive forces caused by the physical world. The unstable velocity in the amplifier happens at nano-scopic levels (far below measurable levels). But again - knowing it is there and making attempts to catch it happening based on [theory] is no different than you placing lead weights on boxes and draping your cables over insulators knowing that you are blindly affecting the issue."

geoffkait wrote,

pretty sure that entire paragraph is another one of those false arguments, you know....a Strawman argument.  What is illogical about it?  Well, for starters, we have the capability to measure things that are nano scale.

OK - so if we already have the capability to measure things that are nano-scale, and you are assuming it can be applied to sound reproduction, why haven't we done it already? Has the audio industry been wasting its time when all they had to do was use the magic tools we already have?

Because we are talking apples and oranges. Reed-Solomon does absolutely nothing in the analog world; it only deals with on/off. I was afraid to use the term "jitter" when describing my work, but I thought it might trigger some kinship to the concept of tiny amounts of interference or disturbance. This is why I prefaced it with the term "analog". I realize it is probably an oxymoron, since jitter is deviation or displacement of a pulse in a digital signal. It may only have added some confusion to the correction process I use, which is 100% analog and also lives at the nano scale.

Roger

geoffkait

 
As we saw a couple weeks ago the LIGO project finally observed gravity waves and they had to use isolation to do it. Isolation of the optics.  
Gee - you mean somebody had a theory about something they thought existed even though they had no way to measure it? That's crazy talk.

It apparently took a bunch of scientists with a desire to get the answers quite a while to detect it - and they had to use extreme methods to expose it.
So, you really probably should delete the expression, isolation can never be achieved, from your repertoire.
I agree, we can't say "never" anymore. I am in no way claiming the perfect amplifying method but I believe I am on the right path to make it happen.

The recording industry still needs to clean up the analog stages.

Roger
hew,

You are right, the proof is in the pudding. As far as a separate device goes - as you know, a chain is only as strong as its weakest link. I have had great success at the power amp end of the chain, mostly because the power amp has a tricky task to perform correctly: driving the final transducer that converts the electrical language into the mechanical language that ultimately allows the sonic continuum to make its way to your ear-brain system. However, every part of the chain should be of the same caliber.

The Wavefront Timing Control I developed on earlier models of the preamp was a way to manually decode or filter a fixed disturbance in the chain. It required making a separate adjustment every time you switched sources, or even between different recording labels. The current auto-focus system is much more sophisticated in that it can detect the velocity that comes embedded within the audio signal. For this reason I use auto-focus in the preamp as well. The DAC and the phono stage also have auto-focus, because each handles issues that pertain to the specific stage in question. It cannot guarantee fixing anything upstream. (A bad front end will still sound bad.)

There are many recording studios waiting now for this process to be available as a mic preamp which is already under way. I hope to have recordings done this way later this year.

Roger


geoffkait

Maybe your amp would run more perfectly if you isolated it or is it immune?

Actually it is not immune - as tbg pointed out he was able to deal with some vibrations affecting the amp in his system.
geoffkait,

I will be the first to admit you guys have a better grip on the physical vibration issues. I'm still learning. But I do have a grip on electronic signals and how they behave in amplifier circuits. That has been my life's work. My obsession with accuracy has taken me down to the "tiny" world of analog errors. I wanted to see the first [velocity error] happen before it manifests itself as full-blown distortion. That's the place to clobber it, before it gets out of hand. I can force the phase of the fundamental to stay within fractions of a degree of dead true.

The equivalent physical vibration would have to be extremely high to cause the fundamental to move a full octave or more away from true, producing harmonic distortion. That is a serious shift in velocity.

Roger

theaudiotweak,

Many years ago Polk Audio used laser interferometry [probably from Johns Hopkins] to measure the cabinet motion of a new time aligned speaker. They found a speaker that sits directly on a hard surface or a carpet over foam floor and played at a reasonable volume level causes the cabinet to travel further than the excursion of the tweeter. This cabinet activity greatly reduces the advantages of time alignment. Would this severe cabinet motion with drivers enclosed or attached also create unwanted and audible Doppler distortion? Tom
Yes, I would say it has to have an impact. Although the underlying time alignment is still valid, the additional vibrations (caused by the cabinet) will add blur to an "aligned" image. IOW, objects would have an average (out-of-focus) location contaminated at least in part by the motion of the cabinet.

Roger
geoffkait

Pebble is the explanation for the pebbles and can be found on my web site
Interesting stuff -
mapman,

Roger, 

Can you tell us where your amps are made and who does the manufacturing?

Also do you keep an inventory and what is the wait period?

Everything is made here in the Garden State (NJ).
I have a sub assembly house where all but critical assembly takes place.
I also have proprietary components made at the factory which are added late in the final assembly process. The circuit board manufacturing process is done within the US under non-disclosure contract and includes a structural process I developed to guarantee signal stability. The core amplifier circuitry is potted and housed in a Faraday cage. The potting material is a unique chemical compound developed especially for H-CAT because of the high signal impedance used in the auto-focus system.

Wait time varies but is about 3-4 weeks.

Roger
Hi Dave,

Send me an email with your contact info...
Use any email link at my web site and I'll get back to you.
Thanks

Roger
www.h-cat.com


dalecrommie

Way too much jabbering.......upgrade the capacitors in your speaker's crossovers, and be done with it.
Hi Dale

Is this in addition to or instead of new electronics?
What caps do you recommend?

Roger