People assign analog attributes to digital cables: warmer, colder, brighter, etc. I just wonder which bits are added, changed, or entirely missing when the sound turns warmer or brighter? If bits are missing or added, it will likely cause nasty noises (if not corrected by an error-correction mechanism at the receiving end).
There is no error correction in S/PDIF transfer, but that is not the point.
S/PDIF is coded in such a way that the word clock for the D/A converter can be extracted from it (from the transitions) to keep the CDP and DAC synchronized. This clock has to be rock solid (no time jitter), otherwise additional sounds will be added on the analog side.

For instance, if you play a pure 1000 Hz tone and there is a lot of 60 Hz noise around, your S/PDIF stream of 1s and 0s might have 60 Hz jitter (vibrate back and forth in time 60 times a second). On the analog side this produces the pure 1000 Hz tone plus (mainly) 940 Hz and 1060 Hz tones (sidebands). These tones will be at a very low level but still audible, since they are not harmonically related to the root frequency of 1000 Hz. The amplitude of these extra tones is proportional to the amplitude of the vibration (jitter). Since music contains a lot of tones, a lot of additional tones will be added, and they will be perceived as noise or lack of clarity.

There are many sources of jitter, the digital cable being one of them. It might add jitter because reflections in the cable deform the edge of a transition, or it might be susceptible to ambient electrical noise that, when added to the signal, shifts the recognition (threshold) point and hence the moment when the transition is detected. Jitter might also be caused by noise added to the signal in the CDP or DAC itself. Since different digital cables affect jitter differently (one has better shielding while another has fewer reflections), there will be a difference in sound.
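To see why the clock can be recovered "from transitions": S/PDIF uses biphase-mark coding, where every bit cell starts with a transition regardless of the data, and a 1-bit gets an extra mid-cell transition. A minimal sketch of that encoding (the function name and the initial-level parameter are my own, for illustration only):

```python
def bmc_encode(bits, level=0):
    """Biphase-mark encode a bit list into two half-cells per bit."""
    out = []
    for b in bits:
        level ^= 1          # transition at every cell boundary -> carries the clock
        out.append(level)
        if b:
            level ^= 1      # extra mid-cell transition encodes a '1'
        out.append(level)
    return out
```

Because the boundary transition happens whether the bit is 0 or 1, a receiver can lock a PLL to those edges and derive the word clock; any timing wobble on the edges becomes jitter on that recovered clock.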