Sound is all around us and comes in a myriad of flavors. Some are nice, like music or wind blowing through leaves. Others, like the beep when your card gets declined, not so much. We know our ears pick up on sounds, but then, why do we also feel the hammering of a song in our chests when the bass is loud enough? And what’s the link between instruments playing by themselves and a bridge collapsing in 1850s France?
Physically speaking, what we perceive as sound is a vibration produced by motion.
Imagine the world as a huge bathtub in which you, a yellow rubber duck, merrily float around. At various points along this tubby world, there are faucets pouring water. Some are bigger and pour a lot of water, while others are tiny and only give off occasional drips. Some are closer to you, while others are really far away. These are the sources of sound.
Regardless of their position or size, each faucet creates vibrations in the form of ripples on the water’s surface — which is the medium. Most of these will never make it all the way to you. For the ones that do, you’ll ‘hear’ the source faucet. How much you bob up and down on the wave it generated represents the sound’s amplitude — roughly equivalent to what we perceive as loudness. How frequently each ripple sloshes you around, based on how closely packed the ripples are, is the sound’s frequency — what we perceive as pitch. The way the ripples push you is the direction of propagation — i.e. where we hear the sound coming from.
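For the quantitatively inclined, the ripple picture maps onto the usual wave description: a pure tone is a sine wave whose height is the amplitude and whose crest spacing is set by the frequency. A minimal Python sketch (the function and numbers here are my own illustration, not from any acoustics library):

```python
import math

def pure_tone(amplitude, frequency_hz, t):
    """Displacement of a pure tone at time t (seconds):
    a sine wave of the given amplitude and frequency."""
    return amplitude * math.sin(2 * math.pi * frequency_hz * t)

# A 1 Hz ripple reaches its first crest a quarter-cycle in, at t = 0.25 s,
# where the displacement equals the full amplitude.
crest = pure_tone(0.5, 1.0, 0.25)  # 0.5
```

Doubling the amplitude makes the duck bob twice as high (louder); doubling the frequency packs the crests twice as close (higher pitched).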
It’s not a perfect analogy because, as you may already suspect, the world is not a bathtub and we’re not rubber duckies. But it simplifies the conditions enough to understand the basics. For you to hear something, a few things have to happen: First, you need a source of motion to get it started. Second, sound travels as a wave, so there has to be a medium to carry the vibration between you and this source. Third, you need to be close enough to the source to register the vibration before it attenuates and dies off. Lastly, the sound has to be in the right frequency interval — if the wave is too lazy or too steep, you won’t pick it up.
In real life, the medium can be any fluid (gas, liquid, plasma) or solid. Even you are one. The medium’s properties determine how sound propagates — fluids carry sound only as compression waves, alternating bands of low and high pressure, while solids can carry it both as compression and as transverse (shear) waves. The medium’s stiffness and density together set the speed of sound, while its viscosity (how strongly particles stick to each other and resist motion) dictates how far a sound can travel before it attenuates and dies off.
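That stiffness-versus-density balance has a simple formula behind it, the Newton-Laplace relation: compression waves travel at the square root of the medium's bulk modulus divided by its density. A quick back-of-the-envelope check in Python, using textbook ballpark values (my assumptions, not figures from the article):

```python
import math

def sound_speed(bulk_modulus_pa, density_kg_m3):
    """Newton-Laplace formula: compression-wave speed = sqrt(stiffness / density)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# Rough textbook values for the two media:
air = sound_speed(1.42e5, 1.2)      # air: ~340 m/s
water = sound_speed(2.2e9, 1000.0)  # water: ~1480 m/s
```

Water is about 800 times denser than air, yet sound travels over four times faster in it, because water is vastly stiffer. Density alone doesn't decide the race.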
These properties aren’t constant through space or time. For example, thermal expansion can locally change a medium’s density, altering how fast sound propagates at different points. Vibration can also transfer from one medium to another, each with different properties. If you’re dressed up in a sealed astronaut suit on Earth and talk loud enough, people will still be able to hear you. Take two astronauts into the void of space and they won’t hear each other talking, because there are no particles to carry the vibration between them. But if they stand visor-to-visor they may faintly hear each other, as the suits and the air inside them carry over part of the sound.
Perceiving is believing
From a subjective point of view, the answer to “what is sound” comes down to what you can hear. The human ear can typically pick up on frequencies between 20 Hz and 20 kHz (20,000 Hz), although age, personal traits, and the medium’s pressure shift these limits around. Everything below 20 Hz is called infrasound (under-sound), anything above 20 kHz is called ultrasound (over-sound).
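Those boundaries make for the world's simplest classifier, a three-way bucket sketched below in Python (the 20 Hz and 20 kHz cutoffs are the typical values quoted above; real thresholds vary from person to person):

```python
def classify(frequency_hz):
    """Bucket a frequency against the typical human hearing range (20 Hz - 20 kHz)."""
    if frequency_hz < 20:
        return "infrasound"
    if frequency_hz > 20_000:
        return "ultrasound"
    return "audible"

labels = [classify(f) for f in (5, 440, 40_000)]
# ["infrasound", "audible", "ultrasound"]
```

Concert pitch A (440 Hz) sits comfortably in the middle; an elephant's 5 Hz rumble and a bat's 40 kHz chirp both fall outside what you can hear.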
All you have ever heard falls within this interval which, to be fair, is pretty limited. Dogs and cats can hear ultrasounds up to roughly 45 and 64 kHz respectively, which is why they howl at a whistle you can’t even hear. Some whales and dolphins can go as far as 100 kHz and over, an interval they use to communicate. Still, they’re limited in what lower frequencies they can hear. An average cow, however, can probably perceive a wider range of sounds than you on both ends.
Apart from the physical properties of sound described earlier, there are also the perceived qualities of a sound. Pitch and loudness are directly tied to frequency and amplitude for simple sounds, but this relationship breaks down for complex sounds. There’s also a sound’s perceived duration (how long a sound is), which is mostly influenced by how clearly you can hear it; its timbre (the way a sound ‘behaves’ over time, making it distinct from other sounds); its texture (the interaction between different sources); and finally its spatial location (where the different sources are relative to one another).
Motion can also shape the sounds you’re hearing through the Doppler effect: how a source’s position relative to you changes over time can lower or raise its perceived frequency. That’s why you can tell from pitch alone whether a car is rushing toward you, moving away from you, or just sitting in traffic.
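For a moving source and a stationary listener, the shift follows a simple rule: the observed frequency is f' = f × v / (v ∓ v_source), with the minus sign for an approaching source. A hedged Python sketch with illustrative numbers (a 440 Hz horn, a car at 30 m/s, sound at 343 m/s in air):

```python
def doppler_shift(source_freq_hz, source_speed_m_s,
                  sound_speed_m_s=343.0, approaching=True):
    """Observed frequency for a moving source and a stationary listener:
    f' = f * v / (v - v_src) when approaching, f * v / (v + v_src) when receding."""
    if approaching:
        denom = sound_speed_m_s - source_speed_m_s
    else:
        denom = sound_speed_m_s + source_speed_m_s
    return source_freq_hz * sound_speed_m_s / denom

# A 440 Hz horn on a car doing 30 m/s (~108 km/h):
coming = doppler_shift(440, 30, approaching=True)   # ~482 Hz, noticeably sharper
going = doppler_shift(440, 30, approaching=False)   # ~405 Hz, noticeably flatter
```

The jump from ~482 Hz down to ~405 Hz as the car passes is the familiar "neee-owww" of traffic, and it happens entirely between your ears and the geometry; the horn itself never changes.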
Some time ago, musicians found that playing particular notes could make strings vibrate on other instruments, even when no one was touching them. The phenomenon was dubbed after the Latin term for ‘echo’, since the strings seemed to pick up and repeat the sound played to them, and because Latin sounds cool. Unknowingly, they stumbled upon a phenomenon that would see today’s soldiers ordered to break stride when crossing bridges to prevent them from collapsing — resonance.
Ok, so the nerdy bit first. Every object has the capacity to oscillate, or shift, between several states of energy storage. If you fix one end of a spring, tie a weight to the other, pull down, and then release said weight, it will bob up and down like crazy, then gradually settle down. That movement is caused by the system oscillating between different states of energy — kinetic energy while the weight is in motion, elastic potential energy while the spring is stretched at the bottom, and gravitational potential energy while the weight is up. It eventually settles at a particular point because this shift is inefficient: the system loses energy overall when transitioning from one state to the other, a process called damping.
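You can watch that energy bleed away numerically. The sketch below integrates the standard damped spring equation m·x″ = −k·x − c·x′ with semi-implicit Euler steps; all the parameter values are illustrative assumptions, not measurements:

```python
def damped_spring(mass=1.0, stiffness=40.0, damping=0.5,
                  x0=0.1, steps=20_000, dt=0.001):
    """Integrate m*x'' = -k*x - c*x' (semi-implicit Euler).
    Returns the displacement after steps*dt seconds."""
    x, v = x0, 0.0  # start pulled down 10 cm, at rest
    for _ in range(steps):
        v += (-stiffness * x - damping * v) / mass * dt  # spring + damping forces
        x += v * dt
    return x

# After 20 simulated seconds the weight sits almost still at equilibrium:
final = damped_spring()
```

Set `damping=0.0` and the bobbing never dies down; the larger the damping term, the faster the oscillation fades to nothing.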
But objects also have something called a resonant frequency, which works the other way. They can resonate with all kinds of waves, from mechanical/acoustic waves all the way to nuclear magnetic or quantum resonance. Each object can have more than one such frequency for every kind of wave.
When vibrating at one of these frequencies, systems can undergo the shift with much greater efficiency, so tiny but sustained external vibrations can add up inside the system to build powerful oscillations. This can even lead to a system holding more energy than it can withstand, causing it to break apart. The phenomenon became tragically evident on the 16th of April 1850 at the Angers Bridge in France, when the marching cadence of a battalion of soldiers crossing the bridge amplified wind-induced oscillations that matched the structure’s resonant frequency, leading to its collapse and the death of some 200 troops.
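The same spring model shows why resonance is dangerous: drive it with a small periodic push and compare the swing it builds at its natural frequency versus away from it. Again, a sketch with made-up parameters, not a model of the actual bridge:

```python
import math

def driven_peak(drive_hz, steps=200_000, dt=0.0001):
    """Peak displacement of a lightly damped spring (natural frequency 1 Hz)
    pushed by a small periodic force at drive_hz, over 20 simulated seconds."""
    m, c, f0 = 1.0, 0.2, 0.5
    k = (2 * math.pi) ** 2  # chosen so sqrt(k/m) / (2*pi) = 1 Hz exactly
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        force = f0 * math.sin(2 * math.pi * drive_hz * i * dt)
        v += (force - k * x - c * v) / m * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# The identical tiny push builds a far bigger swing on resonance:
on_resonance = driven_peak(1.0)
off_resonance = driven_peak(3.0)
```

Off resonance, each push partly cancels the previous swing; on resonance, every push arrives exactly in step and the oscillation keeps growing until damping (or, in the bridge's case, structural failure) caps it.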
Sound is basically a mechanical wave, so it can also induce these resonant oscillations in objects — called acoustic resonance. If you’ve ever seen someone sing a glass to the breaking point, this is the phenomenon at work. If not, you can watch Chase here be adorably excited when he manages it.
Other cool and not-cool things sound does
I’m gonna start the “cool-things” list with the sonic refrigerator because get it, cool? Refrigerators? I love my job.
Pun aside, about halfway through last year a team from the Department of Prime Mover Engineering at Tokai University in Japan developed a system that uses resonant frequencies to pump and compress coolant in a refrigerator in lieu of traditional systems. Their engine draws power from the fridge’s residual heat, making for a much more energy efficient system.
The sound generated by individual atoms’ vibrations can be used to identify their chemical species, a team from Georgia Tech reported last year. These vibrations can even tell researchers what substances, and in which particular states, multiple atoms bunch together to form. It’s so accurate that CERN is already using the method to identify individual subatomic particles.
Sound may also help us stop tsunamis before they reach the shore according to Dr Usama Kadri from Cardiff University’s School of Mathematics. The math shows it’s a viable method, although we don’t yet have the technical capabilities to implement it.
Researchers at the Max Planck Institute for Intelligent Systems in Germany have also figured out a way to use sound in an acoustic tractor beam. I don’t even need to explain why that’s awesome.
Sound can also be very pleasant, in the form of music — for humans and cats alike.
Certain sounds can make your food taste sweeter or sourer, others can help you diet — but these are more tied to perception than physics.
On the “not-cool, dude” list we have sound-based weapons. Sound has always played a part in warfare: from ancient weapons that used perception to shake the enemy’s morale, to the infamous Jericho sirens on Nazi Stukas, used as psychological weapons during WW2 (quite effective at first, then withdrawn from service since they ruined the planes’ aerodynamics and soldiers got used to them), to modern crowd-control acoustic cannons employed by police and armed forces — used with varying degrees of ethical success.
There are also some more exotic items on the list, such as the much-searched-for-but-still-undiscovered brown note. This was believed to match the human bowel’s resonant frequency and make soldiers inadvertently soil themselves in combat. Though I’d say it would only make their camouflage more effective.