An Amp Guru – Music Synthesist’s Perspective on Deafness


Let me give you what I know about the science of sound. The term sound refers to the compression and rarefaction of an elastic medium, such as air. Human hearing covers roughly the range of 20 Hz to 20 kHz, and sound travels through air at about 343 meters per second. An individual sound is known as an event. Syllables of words are separate events. Each event consists of a fundamental frequency and harmonics of that frequency.
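The relationship between a fundamental and its harmonics is simple arithmetic: the harmonics sit at whole-number multiples of the fundamental. A quick sketch (plain Python, purely illustrative):

```python
# Harmonics are integer multiples of the fundamental frequency.
# Example: the partials of a string vibrating at 110 Hz (the A below
# middle C on a guitar).
def harmonic_series(fundamental_hz, count):
    """Return the first `count` partials: the fundamental plus harmonics."""
    return [fundamental_hz * n for n in range(1, count + 1)]

partials = harmonic_series(110.0, 5)
print(partials)  # [110.0, 220.0, 330.0, 440.0, 550.0]
```

The relative strengths of those partials are what give each instrument its characteristic tone.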

The fundamental frequency is filtered by its delivery system. In other words, the sound of a violin is generated by the strings, but filtered by the body of the violin. That’s why a violin sounds different from a guitar. The filtering is broken down into two components – the cutoff frequency and the resonance. The former is the frequency above (or below) which sound is progressively attenuated. The latter is an emphasis of the frequencies near the cutoff, which adds harmonic character to the sound.
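This kind of filter is exactly what synthesizers implement. Here is a minimal sketch of a resonant low-pass filter, using the standard audio-EQ-cookbook biquad formulas; the sample rate, cutoff, and Q values are chosen just for the example:

```python
import math

def lowpass_biquad(samples, fs, cutoff_hz, q):
    """Resonant low-pass filter (audio-EQ-cookbook biquad).
    `q` sets the resonance: higher q emphasizes frequencies
    near the cutoff before the roll-off begins."""
    w0 = 2 * math.pi * cutoff_hz / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0 = (1 - cos_w0) / 2
    b1 = 1 - cos_w0
    b2 = (1 - cos_w0) / 2
    a0 = 1 + alpha
    a1 = -2 * cos_w0
    a2 = 1 - alpha
    # Run the second-order difference equation, normalized by a0.
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def sine(freq_hz, fs, n):
    return [math.sin(2 * math.pi * freq_hz * i / fs) for i in range(n)]

fs = 48000
# A 200 Hz tone passes a 1 kHz low-pass almost untouched;
# an 8 kHz tone is heavily attenuated.
low = lowpass_biquad(sine(200, fs, 4800), fs, cutoff_hz=1000, q=0.707)
high = lowpass_biquad(sine(8000, fs, 4800), fs, cutoff_hz=1000, q=0.707)
```

Raising `q` above about 0.707 would make the filter "ring" at the cutoff, which is the resonance described above.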

Finally, every sound event consists of two envelopes – amplitude and frequency. Both envelopes have four portions: attack, decay, sustain and release. Take, for example, the sound of a bass drum versus the sound of a pipe organ. The bass drum has a short attack. The sound is at its greatest amplitude immediately after being hit. There is a very short decay period, followed by very little sustain, and the reverberation at the end of the event is the release time. The organ, on the other hand, climbs to its loudest point, has no noticeable decay, sustains almost indefinitely and slowly fades out in its release. Many instruments also experience pitch changes during their events, and the frequency envelope governs those.
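The four-stage envelope can be written down directly. This is a minimal linear ADSR sketch; the stage durations and sustain level are made-up parameters, and real synthesizers usually use curved rather than linear segments:

```python
def adsr(t, attack, decay, sustain_level, sustain_time, release):
    """Amplitude (0..1) at time t seconds after the note starts.
    All stage lengths are in seconds; sustain_level is 0..1."""
    if t < attack:                      # climb from silence to full amplitude
        return t / attack
    t -= attack
    if t < decay:                       # fall from the peak to the sustain level
        return 1.0 - (1.0 - sustain_level) * (t / decay)
    t -= decay
    if t < sustain_time:                # hold steady
        return sustain_level
    t -= sustain_time
    if t < release:                     # fade back to silence
        return sustain_level * (1.0 - t / release)
    return 0.0

# An organ-like shape: the peak arrives at the end of the attack,
# holds through a long sustain, then fades out in the release.
print(adsr(1.0, 1.0, 1.0, 0.5, 2.0, 1.0))  # 1.0 (peak, end of attack)
print(adsr(2.0, 1.0, 1.0, 0.5, 2.0, 1.0))  # 0.5 (sustain level)
```

A drum-like sound would use a near-zero attack, a short decay, and a sustain level of zero.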

What does this have to do with the Deaf?

Well, I’ve spent years synthesizing sound and hand-building the machines that create or amplify it. Now, I’m on a different mission – the inverse. I’m trying to understand exactly what goes wrong with ears that don’t work right.

Today I had a wonderful and informative meeting with Marsha Graham of – among others – AnotherBoomerBlog. Among the many things we discussed were hearing aids and a few of the different symptoms experienced by the Hard of Hearing. It was an enlightening experience for me. When a hearing person thinks of deafness, he tends to think in all-or-nothing terms. You just plain can’t hear – or you can hear, but the volume’s really low.

That’s not the case. Many Deaf and Hard of Hearing can hear, but only at certain frequencies. Often they hear, but their brains scramble the sounds. In other cases, they are unable to tune out certain noises while tuning in others. When the hearing speak in a crowded room, or on a city street, our ears – and our brains – filter out the unnecessary background noise. Many Hard of Hearing don’t have that filtering capability.

Therefore, hearing aids must employ much more sophistication than one might think. A hearing aid must be much more than simply a tiny microphone connected to a tiny amplifier. It needs to be capable of shifting frequencies, adding or removing filtering and altering envelope shapes. As I become more involved with the Deaf community, I find myself relying more and more on what I learned in its antithesis – music.
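One piece of that sophistication, frequency-selective amplification, can be sketched in a few lines. This is a toy illustration only, not how any real hearing aid is built; the band edges and gains below are invented, whereas a real device is fitted to an individual's measured audiogram:

```python
# Hypothetical per-band gains for someone with high-frequency loss:
# hearing is near-normal at low frequencies, so those bands get little
# or no boost, while the high bands are amplified strongly.
BAND_GAIN_DB = [
    (250,   0.0),   # up to 250 Hz: no boost
    (1000, 10.0),   # 250 Hz - 1 kHz: mild boost
    (4000, 25.0),   # 1 - 4 kHz: strong boost
    (8000, 35.0),   # 4 - 8 kHz: strongest boost
]

def gain_for(freq_hz):
    """Linear gain for a component at freq_hz (first band at/above it)."""
    for band_top_hz, gain_db in BAND_GAIN_DB:
        if freq_hz <= band_top_hz:
            return 10 ** (gain_db / 20)
    return 10 ** (BAND_GAIN_DB[-1][1] / 20)

print(gain_for(200))   # 1.0 -- low frequencies pass unchanged
print(gain_for(3000))  # ~17.8x -- high frequencies boosted 25 dB
```

A flat amplifier would apply the same gain everywhere, which is exactly the "tiny microphone connected to a tiny amplifier" model that falls short.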
