Intro to Sound Reviewer
Key Insights:
• Sound is produced when an object is in motion. Motion creates vibrations,
which generate sound that we can hear.
o The movement of molecules determines the frequency and amplitude of a
sound, as seen in a sound wave.
▪ Frequency is determined by the rate at which molecules vibrate,
measured in cycles per second, or hertz (Hz). It gives a sound
its particular pitch.
❖ more cycles per second = higher frequency and pitch
▪ Amplitude is determined by the maximum displacement of air
molecules, producing air-pressure fluctuations expressed in decibels
sound pressure level (dB SPL). It determines a sound’s
loudness.
❖ taller waves/higher amplitude = louder sound
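The frequency and amplitude relationships above can be sketched in a few lines of Python. The pressure values below are illustrative; 20 µPa is the standard reference pressure for dB SPL:

```python
import math

def sine_pressure(t, freq_hz, amp_pa):
    """Instantaneous pressure deviation of a pure tone at time t (seconds).
    freq_hz sets the pitch; amp_pa sets the loudness."""
    return amp_pa * math.sin(2 * math.pi * freq_hz * t)

def db_spl(pressure_pa, p_ref=20e-6):
    """Convert a pressure amplitude (Pa) to decibels sound pressure level.
    p_ref = 20 micropascals, the standard dB SPL reference."""
    return 20 * math.log10(pressure_pa / p_ref)

# 440 Hz vs. 880 Hz: double the cycles per second = higher pitch (one octave up).
# Doubling the pressure amplitude raises the level by about 6 dB.
print(db_spl(0.02))   # 60.0 dB SPL (roughly conversational-speech level)
print(db_spl(0.04))   # ~66 dB SPL (twice the amplitude, about +6 dB)
```

More cycles per second show up only in `freq_hz` (pitch), while loudness depends only on the amplitude term, mirroring the two bullet points above.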
Key Insights:
• Sound travels from one place to another through mechanical waves caused by
vibrations.
o when someone speaks, claps, or plays an instrument, vibrations are created
in a medium, and these vibrations travel as sound waves
• Sound travels fastest in solids because their molecules are packed closely
together, so vibrations pass quickly from one molecule to the next. It travels
more slowly in liquids, and slowest in gases, where the molecules are far apart.
o For example, the sound of wind whistling in a forest depends on solids:
tree branches form whistle holes where the blowing wind is set
vibrating.
o Sound can’t travel in a vacuum: with no air, no medium, there are no
particles to carry the vibrations, so sound waves cannot propagate.
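The solid > liquid > gas ordering can be made concrete with typical textbook speeds of sound (the values below are approximate, and air speed varies with temperature):

```python
# Approximate speeds of sound in different media (m/s):
SPEED_M_PER_S = {
    "steel (solid)": 5960,
    "water (liquid)": 1480,
    "air (gas, 20 C)": 343,
}

def travel_time_ms(distance_m, speed_m_s):
    """Time for a sound wave to cross distance_m, in milliseconds."""
    return 1000 * distance_m / speed_m_s

for medium, speed in SPEED_M_PER_S.items():
    print(f"{medium}: 100 m in {travel_time_ms(100, speed):.1f} ms")
```

Sound crosses 100 m of steel in under 17 ms but needs almost 300 ms in air, which is why the compactness of the molecules matters.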
3. What is sound?
Key Insights:
• Sound can be defined as a physical stimulus (pressure changes in the air or other
medium) and a perceptual response (hearing experience).
• Condensation and rarefaction describe alternating regions of high and low
pressure in the medium as the wave propagates.
o Condensation/compression refers to a region in the wave where particles
are compressed together, resulting in an area of higher pressure and
density.
o Rarefaction refers to a region in the wave where particles are spread
apart, resulting in an area of lower pressure and density.
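Condensation and rarefaction can be illustrated by sampling a frozen snapshot of a pressure wave. The ambient pressure and wave amplitude below are illustrative values:

```python
import math

AMBIENT_PA = 101_325  # standard atmospheric pressure in pascals

def pressure_at(x_m, wavelength_m=1.0, amp_pa=2.0):
    """Total pressure at position x along the wave (a frozen snapshot)."""
    return AMBIENT_PA + amp_pa * math.sin(2 * math.pi * x_m / wavelength_m)

def region(x_m):
    """Classify a point: condensation = above ambient pressure (particles
    compressed), rarefaction = below ambient (particles spread apart)."""
    return "condensation" if pressure_at(x_m) > AMBIENT_PA else "rarefaction"

print(region(0.25))  # peak of the sine: higher pressure and density
print(region(0.75))  # trough of the sine: lower pressure and density
```

As the wave propagates, these high- and low-pressure regions move through the medium while the particles themselves only oscillate in place.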
4. Sound localization
Key Insights:
• Sound localization entails locating the direction of sounds along three
dimensions relative to our hearing space.
o Azimuth - left to right (horizontal)
o Elevation - up and down (vertical)
o Distance (from observer) – near/far
• Direct sound - sound waves travel straight from the source to our ears without
obstruction.
• Indirect sound - occurs when sound waves bounce off surfaces such as walls,
floors, ceilings, and even objects in the room before reaching our ears.
• An echo happens when a sound wave reflects off a distant surface and returns
clearly separated from the original sound.
• Reverberation happens when sound waves reflect off multiple surfaces in a
smaller space, arriving so close together that they blend with the original
sound rather than being heard separately.
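The echo/reverberation distinction comes down to reflection delay. A minimal sketch, assuming 343 m/s for sound in air and the common rule of thumb that delays beyond roughly 0.1 s are heard as a distinct echo:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 C
ECHO_THRESHOLD_S = 0.1   # rough rule of thumb: longer delays are heard
                         # as a separate echo rather than reverberation

def reflection_delay(wall_distance_m):
    """Round-trip delay for sound bouncing off a surface and returning."""
    return 2 * wall_distance_m / SPEED_OF_SOUND

def perceived_as(wall_distance_m):
    return "echo" if reflection_delay(wall_distance_m) > ECHO_THRESHOLD_S else "reverberation"

print(perceived_as(5))    # nearby wall: reflections blend (reverberation)
print(perceived_as(50))   # distant cliff: clearly separated echo
```

A surface roughly 17 m away marks the crossover: closer reflections merge with the original sound, farther ones return as echoes.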
Key Insights:
• Location - Sounds from different locations are perceived as separate sources,
e.g., two people speaking from opposite sides of a room sound distinct.
• Timbre and pitch similarity
o Auditory stream segregation, or melodic fission - the process by which
our brain separates different sounds into distinct perceptual streams
rather than hearing them as one mixed sound.
o e.g., rapid high and low notes played alternately are heard as two
separate melodies.
• Proximity in time - Sounds starting at different times are perceived as separate.
• Auditory continuity - Like Gestalt principle of good continuation in vision, we
perceive sounds as continuous even when briefly interrupted. Our perception fills
in the gaps to create a seamless experience, emphasizing the brain’s preference
for continuity in both sound and vision.
• Experience and melody schema - Our past experiences shape how we recognize
sounds.
o Melody schema - Stored memory of familiar tunes helps us recognize
remixes or instrumental versions.
6. Why use digital tech in sound production?
Key Insights:
• Data Loss Management - Digital systems can reconstruct missing data using
error correction, preserving quality. In contrast, analog recordings degrade with
each copy, reducing clarity over time.
• Quality Preservation - Analog copies lose quality with every reproduction,
making images blurry and sounds unclear. Digital technology preserves quality by
reproducing data rather than physical copies.
• Non-linear Processing - Digital editing allows for flexible, high-quality sound
adjustments without degradation. Analog editing is linear and limits
modifications.
• Accessibility and Familiarity - Digital formats are widely used and easier to edit,
share, and store. Most modern recordings rely on digital technology for
convenience and reliability.
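The quality-preservation point can be shown with a toy model: each analog generation picks up a little noise, while a digital copy duplicates the stored numbers exactly. Real analog degradation is more complex; this only illustrates the principle:

```python
import random

random.seed(0)  # for a reproducible demonstration

def analog_copy(signal, noise=0.01):
    """Each analog generation adds a little random noise (simplified model)."""
    return [s + random.uniform(-noise, noise) for s in signal]

def digital_copy(samples):
    """A digital copy duplicates the stored sample values exactly."""
    return list(samples)

original = [0.0, 0.5, 1.0, 0.5, 0.0]

analog, digital = original, original
for _ in range(10):          # ten generations of copies
    analog = analog_copy(analog)
    digital = digital_copy(digital)

print(digital == original)   # True: no generational loss
print(analog == original)    # False: noise accumulated over copies
```

This is also why digital error correction works: the copy only has to recover discrete values, not reproduce a continuous physical waveform.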
Key Insights:
• Sound is Personal – While sound preference is subjective, audio production
requires prioritizing professional standards over personal taste. A refined
preference, developed through experience, often aligns with high-quality
sound in professional settings.
• Sound is Omnidirectional – Sound exists all around us and can be layered in
recordings to create depth. Unlike visual elements, multiple sounds can coexist
without replacing one another, immersing listeners in different auditory
experiences.
Key Insights:
• Intelligibility (can you clearly understand the audio?) - Speech, narration, or
lyrics must be clearly understood. If the audience struggles to comprehend the
content, the effectiveness of the audio is compromised.
• Tonal balance (does the sound feel evenly distributed?) – A well-balanced mix
ensures no frequency range dominates. Too much bass can muddy the sound,
while too little high-end can make it dull. Excessive mid-range can cause
harshness and listening fatigue.
• Listener fatigue - Audio should be smooth and cohesive, avoiding jarring
elements that strain the ears. A well-mixed sound provides a comfortable
listening experience.
• Definition (are the different sounds clearly separated?) - Each sound should be
distinct yet cohesive. No element should overpower the rest unless intended,
ensuring clarity without unnecessary blurring of sounds.
• Spatial balance and perspective (does the sound match the scene?) – Sound
placement should reflect a logical sense of space. Distant sounds should feel
distant, and ambient effects should match the environment. Accurately placed
sounds and their sources help create an immersive experience.
• Dynamic range (does the sound feel natural and well-controlled?) - The
difference between the quietest and loudest parts of an audio recording. This
difference must be well-managed. Softer sounds should remain audible, while
louder ones should avoid distortion or sudden, disruptive changes.
• Clarity - A clean recording should be free from unwanted noise, distortion, and
excessive reverberation. Any intentional distortion should serve a creative
purpose.
• Airiness - The sound should feel open and natural rather than muffled or lifeless.
Proper depth and spatial quality enhance the listening experience.
• Acoustical appropriateness - The recording environment should match the
content, e.g., radio broadcasts require a dry sound for intimacy, while live
performances benefit from natural reverb.
• Source quality - The original recording must be high-quality, as poor
recordings cannot be fully fixed in post-production. Better initial recordings lead
to better final results.
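The dynamic-range criterion above is usually quantified in decibels as the ratio between the loudest and quietest amplitudes. A minimal sketch (the example amplitudes are illustrative):

```python
import math

def dynamic_range_db(loudest_amp, quietest_amp):
    """Dynamic range between the loudest and quietest amplitudes, in dB.
    Amplitude ratios use the 20*log10 form, as with dB SPL."""
    return 20 * math.log10(loudest_amp / quietest_amp)

# A mix whose loudest peak has 1000x the amplitude of its softest
# audible detail spans 60 dB:
print(dynamic_range_db(1.0, 0.001))  # 60.0
```

Managing this range well means the 0.001-level details stay audible while the 1.0-level peaks stay short of distortion.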