Is this a logical break-in technique?


Background of the theory: take 3 people (just to explain my point, if this is even a point), each listening to a different type of music - one rock, one jazz, and one classical, to keep it simple. If each of these people listens to only one particular type of music for the entire break-in period, do the speakers remember the focal points in the freq range of that type of music? Jazz can be light, rock can be heavy, and classical can be both, as can all the genres, but each genre is recorded with a different end result in mind compared to the others.

Basically, would it be better to break in a pair of speakers with pink noise, running the tones at different dB levels, just to expose the speakers to different signals - basically training the speaker to produce anything and everything?
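(If anyone wants to try that, here is a rough sketch of how you could generate such a pink-noise break-in signal at a few different levels using numpy/scipy. The sample rate, duration, playback levels and file names below are just placeholder choices for illustration, not anything this thread prescribes.)

```python
import numpy as np
from scipy.io import wavfile

def pink_noise(n_samples, rng=None):
    """Generate pink (1/f power) noise by shaping the spectrum of white noise."""
    rng = rng or np.random.default_rng()
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    # Scale amplitude by 1/sqrt(f) so power falls off as 1/f; leave the DC bin alone.
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    return pink / np.max(np.abs(pink))            # normalize to +/-1.0

fs = 44100                                        # sample rate in Hz (placeholder choice)
noise = pink_noise(fs * 60)                       # one minute of pink noise

# Render the same noise at a few playback levels (dB relative to full scale),
# mimicking the "run the tones at different dB levels" idea from the post.
for level_db in (-30, -20, -10):
    gain = 10 ** (level_db / 20.0)
    wavfile.write(f"pink_{abs(level_db)}dB.wav", fs,
                  (noise * gain * 32767).astype(np.int16))
```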

Example:

I listen to jazz for 1 year straight, at moderate levels, on the same kind of system as my friend.

My friend listens to rock for 1 year straight on his system.

Say we swap systems but not genres: would there be a sound difference? If yes, then this theory might have something to it. If not, I need to lay off the weed.
ummagumma69dbb9

Showing 4 responses by bombaywalla

willhiteb@bellsouth.net

You are correct - deep bass to loosen the woofer surround, other signals to break in the xover & mid & tweeter surrounds.
Nobody is arguing THIS point. In fact, we are beyond this point.

What the originator of this thread would like to know is: would a music system sound different if it were broken in on ONLY jazz, or ONLY classical, or ONLY rock, & (after the break-in period) asked to play other genres of music, VS. a music system that was broken in using a mish-mash of all 3 genres & (after the break-in period) asked to play other genres of music?

Bigjoe: No! Peg it to 2 EEs chatting on a forum thread.
willhiteb@bellsouth.net,

"There can't possibly be a "permanence" to the initial break-in source..."
My initial post tried to say exactly that, but I wrote it in a different way - I wrote "memory-less", if you remember. Anyway, it looks like we are saying the same thing.

"And besides, the whole concept is immeasurable and totally subjective"
very possibly so!

IMHO, the originator of this thread should lay off the weed. (Now I feel a bit silly taking the bait! Oh well.... I didn't get as excited as I used to in the past.)
Aball wrote: "From the stand point of triboelectric noise theory, the answer to the question is "yes," the systems would break-in differently and sound different since the electromagnetic fields of the signals will be changing the domain spins of the material in ways according to the music's harmonic structure."

Wait a minute, wait a minute!
I know that Arthur is doing his Ph.D. - that's why he can use big terms like "triboelectric noise" - but it happens to be in Class-D amps.....:-)

The speaker driver consists of a fixed magnet. The current-carrying conductor (the voice coil), fed with the music signal & immersed in this fixed magnetic field, makes the driver flex back & forth. Take away the music signal & the driver stops moving. Assuming that the speaker internal wiring/conductor is sized correctly to carry the expected current, the conductor should *not* be stressed => the domain spins of the conductor material should not be distorted. IOW, the domain spins should return to their quiescent state, i.e. the conductor/speaker internal cabling should be memory-less.
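(A toy numerical illustration of that point - the flux density & coil length values below are made up for the example, not taken from any real driver. The motor force simply tracks the instantaneous signal current & drops back to exactly zero the moment the signal is removed.)

```python
import numpy as np

B = 1.0       # magnet flux density in tesla (illustrative value only)
l = 5.0       # effective voice-coil wire length in metres (illustrative value only)

fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)                   # two seconds of "music"
i = np.where(t < 1.0,
             0.5 * np.sin(2 * np.pi * 100 * t),   # 100 Hz tone for the first second
             0.0)                                 # signal removed after t = 1 s

F = B * l * i                                     # motor force on the coil, F = B*l*i(t)

print(F[t < 1.0].max())    # nonzero force while the signal plays
print(F[t >= 1.0].max())   # exactly 0 once the signal stops -> coil sits at its quiescent state
```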
Also, if you look at the *average* value of a music signal over a long period of time (the originator of the post suggested playing jazz/rock for 1 year straight), it is practically zero - the waveform swings symmetrically above & below zero, so there is essentially no DC component. IOW, it should leave the conductor/speaker internal cabling without any memory.
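(Again, just a quick sanity-check sketch - the synthetic tones-plus-noise signal below is a stand-in for real program material, since I obviously can't embed a year of jazz here. The long-term mean comes out essentially zero even though the RMS level is clearly nonzero.)

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(0, 10.0, 1.0 / fs)

# Stand-in for a long stretch of music: a mix of tones plus noise,
# all swinging symmetrically about zero, just like real program material.
signal = (0.4 * np.sin(2 * np.pi * 440 * t)
          + 0.2 * np.sin(2 * np.pi * 97 * t)
          + 0.1 * rng.standard_normal(t.size))

print(f"mean (DC) value: {signal.mean():+.6f}")               # ~0: no net push left on the coil
print(f"RMS level:       {np.sqrt(np.mean(signal**2)):.3f}")  # nonzero: the music is certainly there
```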
Anyway, just my thoughts FWIW.