Over Hear
It can be difficult for students to attract my attention when I am lecturing. During my occasional glances in their direction, I am unlikely to notice a static raised arm in the standing‑room‑only lecture hall, and so they are reduced to jumping and gesturing wildly in the hope of catching my eye. And that’s why, whenever possible, I keep the house lights turned off. There are, then, three reasons why my students have trouble visually signaling me: (i) they tend to be behind my head as I write on the chalkboard, (ii) many are occluded by other people, are listening from behind pillars, or are craning their necks out in the hallway, and (iii) they’re literally in the dark.
These three reasons are also the first ones that come to mind for why languages everywhere employ audition (with the secondary exceptions of writing and signed languages for the deaf) rather than vision. We cannot see behind us, through occlusions, or in the dark; but we can hear behind us, through occlusions, and in the dark. In situations where one or more of these–(i), (ii), and (iii) above–apply, vision fails, but audition is ideal. Between me and the students in my course lectures, all three of these conditions apply, and so vision is all but useless as a route to my attention. In such a scenario a student could develop a firsthand appreciation of the value of speech for orienting a listener. And if it weren’t for the fact that I wear headphones blasting Beethoven when I lecture, my students might actually learn this lesson.
The three reasons for vision’s failure mentioned above are good reasons why audition might be favored for language communication, but there is a much more fundamental reason, one that would apply to us even if we had eyes in the backs of our heads and lived on wide‑open prairies in a magical realm of sunlit nights. To understand this reason, we must investigate what vision and audition are each good at.
Vision excels at answering the questions “What is it?” and “Where is it?” but not “What happened?” Each glance cannot help but inform you about what objects are around you, and where. But nearly everything you see isn’t doing anything. Mostly you just see nature’s set pieces, currently not participating in any event–and yet each one is visually screaming, “I’m here! I’m here!” There’s a simple reason for this: light is reflecting off all parts of the scene, whether or not the parts have anything interesting to say. Not only are all parts of a scene sending light toward you even when they are not involved in any event, but the visual stimulus often changes in dramatic ways even when the objects out there are not moving. In particular, this happens whenever we move. As we change position, objects in our visual field dynamically shift: their shapes distort, nearer objects move more quickly, and objects shift from visible to occluded and vice versa. Visual movement and change are not, therefore, surefire signals that an event has occurred. In sum, vision is not ideal for sensing events because events have trouble visually outshouting all the showy nonevents.
If visual nature is the loquacious coworker you avoid eye contact with, auditory nature is (ironically) the silent fellow who speaks up only to say, “Piano falling.” Audition excels at the “What’s happening?” question, sensing a signal only when there is an event. Audition not only captures events we cannot see–like my (fictional) gesticulating students–but serves to alert us to events occurring even within our view. Nonevents may be screaming visually, but they are not actually making any noise, and so audition has unobstructed access to events–for the simple reason that sound waves are cast only when there is an event.
That’s why audition, but not vision, is intrinsically about “what’s happening.” Audition excels at event perception. And this is crucial to why audition, but not vision, is best suited for everyday language communication. Communication is a kind of event, and thus is a natural for audition. That is, everyday person‑to‑person language interactions are acute events intended to be comprehended at that moment. Writing is not like this; it is a longer‑term record of our thoughts. And when writing does try to be an acute person‑to‑person means of communication, it tends to take measures to ensure that the receiver gets the message now–and often this is done via an auditory signal, such as when one’s e‑mail or text messaging beeps an alert that there is a new message.
That language is auditory and not visual is, in the broadest sense, a case of harnessing: culture shaping language to be like nature for the purpose of best utilizing our hardware. Language was culturally selected to utilize the auditory modality because sound is nature’s modality of event communication.
That’s nice as far as it goes, but it does not take us very far. The Morse code for electric telegraphy utilizes sound (dots and dashes), and even the world‑record Morse code reader, Ted McElroy, could only handle reading 75.2 Morse code words per minute (a record set in 1939), whereas we can all comprehend speech comfortably at around 150 words per minute–and with effort, at rates approaching 750 words per minute. Fax machines and modems also communicate by sound, but no human language asks us to squeal and bleep like that. Clearly, not just any auditory communication will do. And that brings us to the main aim of this chapter: to say what auditory communication should sound like in order to best harness our auditory system. We move next to the first step in this project: searching for the atoms of natural sounds, akin to the contours in natural scenes on the visual side.