Audiodef

A brief essay on being a composer with hearing loss

I recently wrote a brief essay for submission to a collection of essays by musicians with hearing loss. I decided in the end that I would rather use this essay for my own blog. Here it is.

Damien Moody, composer, audio engineer

I was diagnosed with a profound sensorineural hearing loss at two years of age. I don't remember how old I was when I first got hearing aids. I also don't remember when I was first drawn to music. I don't remember a time when I didn't like listening to music of all kinds, and I would actively listen, not just have it on in the background. I used to love listening at high volume with headphones, because I had a typical "ski slope" hearing loss that made the midrange and bass sound very nice in my ears at high volumes. I didn't mind so much that I couldn't hear higher frequencies when I was listening without hearing aids.

When I was 12 or 13 years old, a couple of things happened around the same time. My uncle got one of those little Casio keyboards, and my mom introduced me to Depeche Mode's music because she noticed I really enjoyed listening to electronically composed music. So I bugged my uncle frequently about playing with his keyboard, and bought all the Depeche Mode tapes I could find. Looking back, that was a nice time of musical discovery for me.

I took piano lessons in my early teens, but what I really wanted to do was create entirely new sounds, not simply play melodies. My middle school music teacher had a Yamaha DX7 synthesizer he often left in the classroom, but he was too protective of it to let me fool around with it. (Can't say I blame him.) It wasn't until significantly later in life that I had opportunities to use synthesizers in a serious way. I honestly can't say whether my hearing loss had anything to do with my fascination with synthesizers versus acoustic instruments - I just know that the synthesizer is the instrument I'm called to play.

The biggest aural challenge I had was hearing myself while singing. Voice was not my primary instrument, but I could not hear myself well enough to know how terrible I sounded. The pleasant resonance of a good singing voice couldn't be picked up accurately by my hearing aids, so it was a long time before I knew what I was missing. I could hear that good singing voices resonated through the sinuses, but unfortunately, trying to squeeze my voice through my nose (which is not the same as proper sinus resonance - something I did not understand way back then) just made me sound ridiculous when I taped myself and played it back. I had a voice teacher in college who actually made that worse, although his intentions were good and I liked the guy. It wasn't until several years ago that I found a voice teacher who helped me move my voice from "zombie" to something halfway decent in recordings, something I could be somewhat satisfied with.

Interestingly enough, I never really considered that I had serious aural challenges with listening to music, even though my hearing with hearing aids was such that I could not understand anyone's speech without lipreading. I could not pick out lyrics by listening, but this did not bother me very much.

These days I have bilateral cochlear implants (CIs), and my main aural challenges are that I cannot hear frequencies below 80 hertz and that music sometimes (but not always) sounds a little thin. I did a self-test at one point and noticed that with my cochlear implants, I could hear tones a little higher than 20 kilohertz. Because of my formal education and training as an audio engineer, I can easily speculate as to why my cochlear implants allow me to hear frequencies above the range of normal human hearing.

These days I not only compose, but also engineer my own music in a private project studio I built. One of the things I find very helpful is using software spectrum analyzers that plot amplitude (related to loudness) against frequency (the objective counterpart of pitch). A well-known problem in sound engineering is that excessive energy around and below 20 to 40 hertz really muddies up a mix. A spectrum analyzer makes those frequencies visible, and I can then apply appropriate equalization to clean them up. This is a trick used by engineers with normal hearing, but it is especially useful to a sound engineer with a CI.
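For the technically curious, here is a minimal Python sketch of that idea - not the actual plug-ins I use in the studio, just an illustration. It measures how much of a mix's spectral energy sits below 40 Hz and then applies a gentle high-pass filter. The file name, the 30 Hz corner frequency, and the numpy/scipy calls are illustrative assumptions, not a description of my workflow.

    # Illustrative sketch: quantify sub-40 Hz energy in a mix, then high-pass it.
    # Assumes a WAV file named "mix.wav" and the numpy/scipy libraries.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    rate, audio = wavfile.read("mix.wav")        # sample rate (Hz) and samples
    audio = audio.astype(np.float64)
    mono = audio.mean(axis=1) if audio.ndim == 2 else audio  # fold stereo to mono for analysis

    # Energy spectrum: how much of the total energy sits below 40 Hz?
    energy = np.abs(np.fft.rfft(mono)) ** 2
    freqs = np.fft.rfftfreq(mono.size, d=1.0 / rate)
    low_fraction = energy[freqs < 40.0].sum() / energy.sum()
    print(f"Fraction of spectral energy below 40 Hz: {low_fraction:.1%}")

    # Clean-up: a fourth-order high-pass filter with an example corner at 30 Hz.
    sos = butter(4, 30.0, btype="highpass", fs=rate, output="sos")
    cleaned = sosfiltfilt(sos, audio, axis=0)    # zero-phase, so transients aren't smeared
    cleaned /= np.max(np.abs(cleaned))           # peak-normalize before writing as float
    wavfile.write("mix_highpassed.wav", rate, cleaned.astype(np.float32))

In a real session this happens inside the DAW with an EQ plug-in rather than a script, but the principle is the same: let the display (or the numbers) confirm what ears - especially implanted ears - might miss.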

Composing multi-track electronic music means that when I am ready to work on final mixing and mastering (what engineers do to make a recording sound as good as possible), I can solo each track to hear it on its own and then bring in the other tracks one at a time. This helps me understand how it all sounds together. Playing everything at once without being able to solo individual parts would drive me up the wall trying to follow the interplay between them. So the ability to solo parts (play them by themselves) is very useful for a cochlear implant musician/engineer.

The recording software I use has a "big clock" - a display of bars and beats that can be overlaid on my screen. This has made recording vocals much easier during busy passages. Eventually, though, I figured out that I should be making a separate mix just for recording vocals, one where I don't need a visual indicator at all, so I started doing that. It works much better.

I continue working in my private music studio to this day, but have leaned more heavily toward instrumental compositions. I may even decide to rent studio time and offer audio engineering and recording services.
