If you’ve ever listened to headphones critically, or for an extended period of time, you’ve probably noticed that something about the sound presentation is slightly odd. Most people perceive the audio image produced by headphones as a sonic blob on the left, another sonic blob on the right, and (maybe) a third sonic blob in the middle. In addition, after an hour or two of listening, you may have felt that the headphones became annoying and that you were tired of listening. That's the dreaded premature 'listening fatigue' setting in -- and it is not a good thing for us dedicated headphone lovers!
These headphone psycho-acoustic problems are very real and can be explained technically. Imagine that you are listening to a pair of big room speakers. If you turn off the left speaker, both ears continue to hear the right speaker, but the left ear hears the right speaker's sound wave after a very short delay (the interaural time difference, or ITD) and with a frequency-dependent level difference (the interaural amplitude difference, or IAD) as the sound wave travels around the face and head. Together, these psycho-acoustic effects are known to audio eggheads as the Head-Related Transfer Function (HRTF). Now think about listening to a pair of headphones. If you turn off the left channel, only the right ear can hear the sound. To the brain, this is highly unnatural: in a "normal" [speaker-based] listening environment both ears hear both speakers, and in everyday life sounds are generally heard by both ears. Your mind doesn’t really know what to do with sound that it hears in only one ear, so for most people the sound ends up being over-localized. Hence, premature listening fatigue sets in.
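The idea behind a crossfeed circuit can be sketched in a few lines of code: feed a delayed, attenuated, and gently low-pass-filtered copy of each channel into the opposite channel, mimicking the ITD and IAD described above. This is only an illustrative sketch, not HeadRoom's actual filter design; the delay, gain, and filter coefficient below are hypothetical round numbers.

```python
# Minimal crossfeed sketch (illustrative values, not HeadRoom's actual filter).
SAMPLE_RATE = 44100
DELAY_SAMPLES = int(0.0003 * SAMPLE_RATE)  # ~300 microsecond ITD -> 13 samples
CROSSFEED_GAIN = 0.3                       # level of the opposite channel (IAD)
LOWPASS_ALPHA = 0.1                        # one-pole low-pass coefficient

def lowpass(samples, alpha=LOWPASS_ALPHA):
    """One-pole low-pass: rolls off the highs the head would shadow."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def crossfeed(left, right):
    """Mix a delayed, filtered copy of each channel into the other."""
    n = len(left)

    def shaped(channel):
        # Delay, then low-pass, the signal headed for the opposite ear.
        delayed = [0.0] * DELAY_SAMPLES + list(channel[:n - DELAY_SAMPLES])
        return lowpass(delayed)

    l_cross, r_cross = shaped(left), shaped(right)
    out_l = [l + CROSSFEED_GAIN * rc for l, rc in zip(left, r_cross)]
    out_r = [r + CROSSFEED_GAIN * lc for r, lc in zip(right, l_cross)]
    return out_l, out_r
```

With a hard-panned right-only signal, the left output now carries a quieter, slightly delayed, duller copy of the right channel, so both ears hear something, just as they would with speakers.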
There are some types of recordings (called "binaural") that are designed specifically for headphone listening and do not have these problems. These recordings are made with special microphone setups that look like a human head, with microphone elements in the ears. Unfortunately, since binaural recordings are best heard on headphones, they are less desirable for speaker listening, so few companies actually produce them. There are also a few other microphone techniques that can be used with both headphones and speakers, but they don't serve either particularly well.
The bottom line is that when the recording engineer placed the microphones and mixed the sound, they were listening over two speakers. Thus, it is in a speaker-based acoustic environment that the desired audio image is recreated. Because headphones are a significantly different acoustic configuration, audio designed for speaker listening often doesn’t sound quite right when played back on headphones.
A man named Ben Bauer developed the first electronic crossfeed circuit, and this basic concept served as the foundation for HeadRoom Crossfeed.
In the digital domain, computers can greatly aid in making headphones sound more natural.
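One way digital processing can go further than an analog circuit is by convolving each channel with head-related impulse responses (HRIRs), so that each ear hears both channels the way it would from a pair of speakers. The sketch below is a toy demonstration of that idea; the 3-tap kernels are hypothetical placeholders, not measured HRTF data.

```python
# Toy speaker simulation: each headphone channel becomes the sum of both
# source channels convolved with same-side / opposite-side impulse responses.
# These 3-tap kernels are hypothetical placeholders, not measured HRIRs.
IPSI = [1.0, 0.0, 0.0]    # same-side ear: essentially the direct sound
CONTRA = [0.0, 0.3, 0.1]  # opposite ear: delayed, attenuated, duller

def convolve(x, h):
    """Direct-form FIR convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def virtual_speakers(left, right):
    """Render a stereo signal for headphones as if heard from two speakers."""
    ear_l = [a + b for a, b in zip(convolve(left, IPSI),
                                   convolve(right, CONTRA))]
    ear_r = [a + b for a, b in zip(convolve(right, IPSI),
                                   convolve(left, CONTRA))]
    return ear_l, ear_r
```

Feeding an impulse into only the left channel, the left ear hears it directly while the right ear hears a delayed, quieter copy, restoring the both-ears-hear-both-speakers behavior that plain headphone playback lacks.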