> Our sense of sound direction is mainly a product of two parts. The most
> important one is learning. It has been shown that babies do not have the
> same stereo perception as slightly older children; what they hear is more
> like two (similar) sounds. But they learn otherwise. It's the same with
> direction, even though the mechanism is different. The brain learns how
> the ear picks up sound from different angles by comparing how a sound
> should sound with how it actually sounds when the source is above,
> behind, or anywhere else. Once the hearing mechanism has learned how this
> works, it can make assumptions about a sound it hasn't heard before. That
> assumption will (most likely) not be as accurate, but close.
>
> Your ear actually changes the frequency spectrum of a sound depending on
> its direction. This is why the ear is shaped in the totally asymmetric
> way it is. But since no two ears are the same, this poses quite a
> challenge for those recording Q-Sound. No two people will hear the
> directions in a Q-Sound recording the same way, since their ears do not
> function exactly the same way, and they may have learned slightly
> differently from what the Q-Sound system is trying to reproduce.

Two points:

a) Learning is such a marvelous thing. Even blind babies, who cannot
connect a sound with its visual origin and so have a much harder job
building an "audible spatial map", still build one, and build it very
well.

b) Our hearing system cannot be compared to two simple, dumb microphones,
since we use several frequency discriminators in each ear. Many auditory
nerve sensors are arranged in series along the spiral of the cochlea. As
sound travels into the spiral, the high frequencies die out first, so the
high-pitch sensors are located near the entrance and the low-pitch sensors
deepest inside. In reality all the sensors are equal; it is a brain
activity to learn to discriminate what the electric signal from each
sensor means (that is, which frequency it represents). In the same way
that a baby needs to learn how to control a mechanical motion, *probably*
he also needs to learn how to discriminate different frequencies. (A small
sketch of the ear as a bank of frequency discriminators follows at the end
of this message.) I also wonder whether the time a baby needs to learn to
control his arm movements is stretched out by the fact that his visual
focusing system is not yet developed either.

If you want to *try* to duplicate the human hearing system, start by
installing at least 10 microphones, 5 mm apart, in each "robot's ear",
with frequency band filters; digitize those 20 analog signals and feed
them to a powerful processor that builds a spectral-spatial map based on
phase, frequency, and level. (A minimal two-microphone sketch of this idea
also follows at the end of this message.)

We also hear through bone conduction, which feeds the sensors directly.
Try this: plug both ears with your fingers, close your eyes, and slowly
rotate your head in front of your noisy computer fan. You will still be
able to identify the origin of the sound without the usual airborne
transmission. This alone rules out some of the theories about reflection
and phase shift in the ear muscles and the external ear canal. If the
low-frequency rumble generated by your finger muscles keeps you from
hearing the fan, use some external sound block, such as rubber or swimming
ear plugs, instead of your fingers.

--------------------------------------------------------
Wagner Lipnharski - UST Research Inc. - Orlando, Florida
Forum and microcontroller web site: http://www.ustr.net
Microcontrollers Survey: http://www.ustr.net/tellme.htm
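
P.S. To make point (b) concrete, here is a minimal Python sketch of the
ear as a bank of frequency discriminators. The sample rate and the band
edges are assumed values chosen for illustration, not measurements of a
real cochlea: it simply splits a signal into bands and reports the energy
each "sensor" would see.

import numpy as np

FS = 48_000  # sample rate in Hz (assumed)

# Assumed band layout: each (lo, hi) pair stands in for one group of
# nerve sensors along the cochlear spiral.
BAND_EDGES_HZ = [100, 300, 800, 2000, 5000, 12000]

def band_energies(signal):
    """Return the signal energy falling inside each band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    energies = []
    for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(spectrum[mask].sum())
    return energies

if __name__ == "__main__":
    t = np.arange(FS // 10) / FS              # 100 ms of signal
    tone = np.sin(2 * np.pi * 440 * t)        # a 440 Hz test tone
    bands = zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:])
    for (lo, hi), e in zip(bands, band_energies(tone)):
        print(f"{lo:5d}-{hi:5d} Hz: {e:14.1f}")

Running it, all the energy of the 440 Hz tone shows up in the 300-800 Hz
"sensor"; that per-sensor signal is the raw material the brain would then
have to learn to label as a pitch.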
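
P.P.S. And a minimal sketch of the "robot ear" idea, cut down from 10
microphones per ear to a single pair for clarity. It estimates the
direction of a tone from the inter-microphone delay, the same phase
information a full spectrum-spatial map would use. The microphone spacing,
sample rate, and test tone are assumed values, and the simulation rounds
the delay to whole samples to keep the sketch short.

import numpy as np

FS = 48_000          # sample rate in Hz (assumed)
C = 343.0            # speed of sound in m/s
MIC_SPACING = 0.20   # distance between the two "ears" in m (assumed)

def simulate_pair(angle_deg, freq=800.0, dur=0.05, noise=0.01):
    """Simulate a tone arriving at two microphones from angle_deg.

    A positive angle delays the right channel relative to the left.
    """
    t = np.arange(int(FS * dur)) / FS
    src = np.sin(2 * np.pi * freq * t)
    delay_s = MIC_SPACING * np.sin(np.radians(angle_deg)) / C
    delay_n = int(round(delay_s * FS))   # whole-sample delay for simplicity
    left = src + noise * np.random.randn(t.size)
    right = np.roll(src, delay_n) + noise * np.random.randn(t.size)
    return left, right

def estimate_angle(left, right):
    """Estimate the arrival angle from the inter-channel delay."""
    max_lag = int(np.ceil(MIC_SPACING / C * FS))  # physically possible lags
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left, np.roll(right, -k)) for k in lags]
    itd = lags[int(np.argmax(corr))] / FS         # best-matching delay (s)
    # Clip before arcsin in case noise pushes the ratio outside [-1, 1].
    s = np.clip(itd * C / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

if __name__ == "__main__":
    for true_angle in (-60, -20, 0, 30, 75):
        l, r = simulate_pair(true_angle)
        print(f"true {true_angle:+4d} deg -> estimated "
              f"{estimate_angle(l, r):+6.1f} deg")

Note that a single pair like this is ambiguous between front and back,
which is exactly why the real system needs the asymmetric outer ear (and
learning) on top of the raw phase and level differences.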