Mastering of Synth 1
Intro:
Hello, I am Composing Gloves. This series is aimed at the producer, composer, or lover of sound
who wants to learn sound design. This course's aim is a level of genuine competency with
synthesis. We will have exercises and even tests. You will be expected to produce and design
sounds. Exercises are completely necessary to the development of this skill; do not skip them.
If you do every exercise and pass every test then you will have a skill that will carry over into all
your productions and bring real value to what you do. This is more than just using sounds
someone else made, this is the freedom to think in sound, to create what comes to your mind
and truly hear in your head what can happen.
Goal: To achieve a level of mastery over Synth 1. I have thought long and hard about
what it really means to master a synth. It means you know the total limits of the synth and
can bring all its capability to the forefront. There are no compromises in mastery; if you had to,
you could create the synth yourself. My aim for you is a practical degree of mastery.
It's free. Easy as that. I want to help everyone, and I may do other synths later, but my
goal is to really help the very beginner, and beginners usually don't have a whole lot.
That's it. Not a very high bar. XD You may see me grab things you don't own. I will try to avoid
this, but I'm not gonna cut corners either. If I grab something that is not a stock effect I will
explain why I am grabbing that particular plug.
1. Know your way around your DAW, I provide a tutorial series for FL:
https://www.youtube.com/watch?v=JSrCLI3nhEY&list=PLOMuI-j1vRxTCD6Hm3cyEU0w-n1NjolCf
2. Basics. I have a series covering exactly what I mean by this called Sound and Synth
Basics. While I will be revisiting a lot of topics here, any prior knowledge will be an
advantage.
Methodology:
I have explained components of synths so many times. It's clear to me this is not the
answer alone. I have also seen so many tutorials that do not explain enough or give meaningful
practice. So I will take the following approach:
I will have a chapter based on a sound design concept, rooted in theory. Each video will
explain an aspect of sound design with particular principles in mind. Things will be progressive,
with exercises along the way. I will explain theory but require its application before we move to
the next topic. Lessons will not stay within Synth 1, meaning we will form base sounds in
Synth 1 but will process and mix them further many times.
I want to set a new standard in online education and hope to achieve it with this method.
Chapter 1
An approach to Sound
Before we just start turning knobs and making noise we have to talk about what
synthesis is, and the basic methods of it. Synthesis means "to put together": to
take many things and make one thing. A synthesizer is a device composed of many devices.
These basic components bring us to what we really need to understand, which is the philosophy
of sound. Anyone can hit keys and turn knobs till they get something cool, but it's the people
who know which knob to turn that get exactly to where they want to go.
Once you can answer the questions of function, importance, and perception (discussed below), you are ready to ask these:
1. What is the best method to create this sound?
2. What processing will I need?
3. What are the requirements of this sound?
A waterfall's sound may evoke an emotion, or a car screech may cause you to be alarmed.
Having purpose in your tracks makes them relatable. It's like you're saying something. Tracks
with no purpose are often misguided, with unclear reasoning.
There is also non-functional sound, and levels in between. We will not be learning non-functional
sound.
Importance in sound is monumental if you want to interact with your audience. A sound
representing a bomb that killed millions will move people, while the sound of footsteps can bring
horror, or delight. But it makes little sense to have footsteps be more important than a singer, or
a lead sound go beneath a pad sound without good reason. Consider the importance in your
sound.
What you think your sound will be like and what others perceive it to be will be very different.
You can make good guesses at this but never know exactly. Allow room for others' perception of
your work and let it alter yours. Did what you intended a sound to mean come across properly to
the listener?
After we have considered these three questions we simply must select methods, tools, and set
reasonable standards.
A synth will establish a signal flow. Signal flow is how the synth generates signal (our sound)
and effects it. By changing the signal flow you can accomplish radical changes in sound. Synth
1 is not the most flexible in this regard, but that's fine. It's built like a typical synth, with many of
the components put in a fixed order.
(Add pictures)
A brief overview of each general component in Synth 1:
1. Oscillators - Generate "sound".
2. Amplifier (Envelope) - Controls the "loudness" of the sound over time.
3. Filter - Removes parts of the sound.
4. LFO (Low Frequency Oscillator, not audible) - Changes other settings over time.
5. Effect - Various effects.
6. Equalizer/Pan - Allows us to change the "balance" of our sound.
7. Tempo Delay - Creates a delay (think shouting into a canyon).
8. Chorus/Flanger - A delay-based effect famous for its blurring effect (and many others).
9. Voice - Control over the number of notes we can play and how it plays them.
10. Arpeggiator - Takes note data and redistributes it.
11. Wheel / Midi - Changes how Synth 1 interprets external data from a MIDI controller.
Those are all the components with a large title. The creative process generally follows the
signal flow at first: you get an OSC (oscillator) going and start filtering and effecting it, but then
you decide that a different OSC would sound better, you then tweak your filter, and back and
forth you go, honing in on the sound you want.
Those general components reveal the workflow the designer had in
mind. This is largely a subtractive-synthesis-based synth. It lacks any additive capabilities, and
RM and FM are fixed in a small place at the start of the signal flow. This means the synth will
obviously not be good at creating sounds similar to samples. Instead we should aim for
subtractive-based sounds.
Lesson 1
The Init Patch
Up first is the good ol' sine wave! We will be working with this as our base for some time, but
before we do, we need to set up an initial patch.
The Initial Patch: Possibly the most important patch you will ever have, this is the blank slate
you start from. It's your white canvas on which you will start painting. There are 2 schools of
thought in sound design for this:
1. Start with all buttons and knobs in default positions (positions we typically expect them to
be in when they are off). This allows us to start from the same spot creatively and
become comfortable with a particular way of navigating the synth. You may consider
setting up init patches with various configurations ready to go.
2. Start from an existing patch. Why re-invent the wheel? This will also cause you to stumble
across tricks others use. Presets are not bad. They speed up your workflow and let you see
other ways similar effects can be done. You will want to build your own library of presets.
Synth 1 has many settings on already that we will need to turn off for a typical init preset.
Generally we want only one OSC on, so we need to turn the mix knob all the way to the left.
Next up, we have all our FX on! Something we really don't want in our init patch.
That's it! Now we must save our new init patch. We will use this patch as a starting place often.
In Synth 1 the saving procedure is a bit of an ordeal. First we must set a file path. Click the OPT
button at the bottom of Synth 1.
You should get this lovely window. Then click Browse and create a Synth 1 folder for your
presets.
You should have an updated file path.
Now we can save our patch to the folder we created. Hit WRITE at the bottom of the screen.
Lesson 2
The Sine Wave, Intro To Synthesis
We are going to be doing a lot of talking about waveforms. It is assumed you know what a
waveform is from Sound and Synth Basics.
When we use waveforms we can think of them in several ways.
1. Static sound. A sound unto itself. The goal. The genre chiptune exemplifies this.
The sound does not seek to become something. Chiptune Music Example
2. A changing component. Morphing pad sounds and other sounds use this mode of
thought. It's also used frequently as a composition tool. Modular Synthesis Jam, this is an
entire song from one synth! This is just one example; this mode of thought can be found
in any genre.
3. A layer. A single piece of a larger spectrum. Sine waves have some typical layer roles.
Orchestration is the art of layering. Protectors of the Earth by Two Steps From Hell.
These 3 modes of thought are not static. They can move. In advanced sound design we may
take a sound with many layers and peel it back one by one until you're left with the sine wave.
The pure frequency.
The red lines on the side are the sound's spectrum. Each one represents one frequency. If we
removed all but one we would have a transition from mode 3 to mode 1, and it makes for a much
more interesting experience. Challenging the role of the sound helps to bring your intent across.
For example, let's say you've got a large number of synths all playing the same line, but there is
one synth among the many that you enjoy more than the others. You can decide
to filter the others out in various ways to take the listener to the sound you're most proud of.
Here Virtual Riot kinda goes in the opposite direction in his track init.
I do this in my track "Work All Day" but in the opposite direction: I add sounds to bring in the full
power of my final sound at the climax of the chord progression, and there is a clear sense of
direction as a result.
Another example is in my track "Into the Deep", in which you hear a piano morph into another
piano, changing the sense of space from far to close.
Here is a tutorial explaining this specific example: Creating Atmosphere
These 3 modes are not enough to truly grasp what our building blocks can do for us. Sound can
be broken into 3 fundamental components. These are the things that make the sound what it is.
1. Frequency
2. Phase
3. Amplitude
Using only these 3 things you can make any sound! However, a total breakdown into these pieces
is not everything we need for sound design. Instead I want to show you something even more
incredible. The 6 aspects of sound:
1. Tonal
2. Atonal
3. Flux
4. Nonflux
5. Long
6. Short
That is it. Pause and consider the magnitude of what I am claiming! I am saying all sound is
made of those 3 elements, and all sound can be described by these 6 adjectives! That's a massive
step forward in how we can look at sound! Every sound you ever make or hear is some
combination of these 6 aspects! These are far more useful for us because now we can take
techniques and put them into these fundamental categories. If we want a more tonal sound, we
can use a technique that preserves or introduces tonality. This naturally happens as you get
better at sound design, but to deliberately know this information is an incredible advantage.
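If you like to see things concretely, here is a rough sketch of the earlier claim that frequency, phase, and amplitude are all you need. This is Python, which is not something you need for this course, and the partials below are made up purely for the example:

    import numpy as np

    sr = 44100                      # sample rate in Hz
    t = np.arange(sr) / sr          # one second of time values

    # (frequency in Hz, amplitude, phase in radians) - made-up partials
    partials = [(220.0, 1.0, 0.0), (440.0, 0.5, np.pi / 4), (660.0, 0.25, np.pi / 2)]

    signal = np.zeros_like(t)
    for freq, amp, phase in partials:
        signal += amp * np.sin(2 * np.pi * freq * t + phase)

    signal /= np.max(np.abs(signal))  # normalize so it doesn't clip

Add enough partials with the right values and you can approach any sound you can hear.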
To take it a step further we could combine this with the 5 components of music:
1. Sound
2. Structure
3. Harmony
4. Theme
5. Rhythm
So, combining the 6 aspects of sound with the 5 components of music, the 3 fundamental
components of sound, and the 3 modes of thought, we create a system of analysis useful to us.
We should give it a fancy name, right? Something like Burgessian analysis, or Gloves Analysis,
or just structural analysis. The last one seems the most practical, but is already taken by music
theory. =)
Burgessian analysis will allow us to find important properties of a sound, consider their mixture,
decide the technique we should use in obtaining the sound, and consider its application. I
cover the 5 components of music in "An approach to beat writing" and will not be going over
them specifically here. The point of analysis is to be a system that gives us useful information about
something. We do not normally exhaust the full potential of this system; the amount of
information would be overwhelming!
Now, onto the sine wave. We desire to use Burgessian analysis to reveal the functions of the
sine wave as it relates to sound. For the sine wave it makes the most sense to start with its
components, as it is a fundamental waveform.
I want to give you a very brief glimpse into sound theory. Nothing too crazy but a small look at it
helps a lot with understanding what we are dealing with.
When I say fundamental (or elementary) waveform, the sine wave should get special attention.
This wave can be generated by a couple different techniques. A very smart man named Joseph
Fourier proved you can take any signal (in our case a sound) and break it into a bunch of sine
and cosine waves. A sine wave (also called a sinusoidal wave) is actually generated through a
trigonometric function called sine, and the cosine is like the sine wave only it has a 90° phase
shift. (It has to do with circles and triangles and stuff you would need a math class for.)
The red is a sine wave and the blue is a cosine wave. This illustration is a little weird, because a
cosine wave should be at 1 at zero, but they are just showing the phase shift. I don't wanna bog
you down in complex maths (which is where this would head); I just want you to understand
that this wave is a big deal in a lot more than just sound. Also a word of caution: it's not correct
to say everything is just sine waves; it's more correct to say everything can be broken down into
a summation of sine and cosine waves of various amplitudes. In sound design we don't say cosine
wave, instead we just regard it as a sine wave with a 90 degree phase shift. Other branches of
sound design are very closely related to advanced mathematics. I may bring it up if it is relevant
(but don't worry, I will give an easy to understand conceptual explanation as well).
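For the skeptical, here is a tiny numerical check of that phase-shift claim (again just an illustrative Python sketch, nothing you need for Synth 1):

    import numpy as np

    t = np.linspace(0, 1, 1000)
    freq = 5.0
    cosine = np.cos(2 * np.pi * freq * t)
    shifted_sine = np.sin(2 * np.pi * freq * t + np.pi / 2)

    print(np.allclose(cosine, shifted_sine))  # True: cosine is sine shifted by 90 degrees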
So our breakdown of the fundamental parts of the sine wave is:
1. Frequency - a pure frequency. You have actually never heard a "true" sine wave for
some pretty specific math reasons (explanation). But you have heard things that are
close enough to a sine wave. The fact that this is only a single frequency means that if
we ever desire to reinforce the fundamental of our sound (the lowest frequency, which is
responsible for what we perceive as pitch) then we will call upon the sine wave or a
waveform close to it.
2. Phase - Typically starts at the zero crossing. Clicks result when the phase is not aligned
correctly and no envelope is applied to smooth the sudden demand for your speaker cone
to magically teleport to a value without any values in between (see the sketch after this
list). (Explanation in the case of the sine sub bass.) This is actually a really useful
technique for plucks and a number of other sounds, and is something Synth 1 is good at.
3. Amplitude - How loud each sine wave can be. (This is slightly oversimplified and wrong
for technical reasons, but in the end we can view it like this and be OK for nearly
everything we want to do.)
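Here is the sketch promised in point 2, showing where the click comes from and how an envelope smooths it away (the values are arbitrary, just for illustration):

    import numpy as np

    sr = 44100
    t = np.arange(sr // 10) / sr                      # 100 ms of audio
    clicky = np.sin(2 * np.pi * 100 * t + np.pi / 2)  # starts at +1, not at a zero crossing

    fade_len = int(0.005 * sr)                        # a 5 ms fade-in, an arbitrary choice
    envelope = np.ones_like(clicky)
    envelope[:fade_len] = np.linspace(0.0, 1.0, fade_len)
    smooth = clicky * envelope                        # now starts from silence: no click

    print(clicky[0], smooth[0])  # 1.0 vs 0.0 - the instant jump vs a gentle start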
So that is the sound theory, and there is actually WAY more we could talk about, but it would get
in the way at this point. These principles are talked about in far more depth in Sound and Synth
Basics if you need a refresher.
For now, the whole purpose of this understanding is to make exceedingly clear that a sine wave
has the ability to reinforce any part of the spectrum if we give it the proper amplitude, phase, and
frequency, because sound can be broken down into sine waves. This can be taken further with
other waveforms, especially when combined with flux.
This also greatly clarifies why the sub oscillator even exists, which we will check out in the next
lesson.
Exercises:
1. In these 3 tracks, what is the mode of thought? List why you think it's that mode of
thought.
- Kink Boiler Room London Set: just get a vibe for the main mode of thought; listen to the
whole thing if you're cool.
- Bob Dylan - North Country Blues: listen from a sound perspective; we are not concerned
with the songwriting as much right now.
- As Shadows Fall by Peter Gundry
2. I have prepared for you a project file -
In this file I have 3 instances of Synth 1. Each instance is simply labeled "Color ___". Simply
try fading each color in and out, finding a balance of sound you like. Come up with automation
and create an experience. Basically, take this from a chord progression to a track. Stick to
simple volume automation at first, then experiment with Synth 1 if you're comfortable with it, and
try adding plugs. I also have a notepad VST on the master channel with questions for you to
consider and several more challenges.
3. General Questions - Don't skip this. Taking the time to go back if you forgot the answer
and really commit it to memory is vital to genuine improvement.
1. List the 3 components of sound and what each of them is for.
2. T/F It is correct to say that all sound is made of sine waves. Explain.
Remember, all answers (or examples, if I felt so inclined) are included in the answer document
that comes with this book.
Lesson 3
The Sub Oscillator
Why? What a sub oscillator does, in most cases, is generate a sine wave between 30 and 80
Hz. That's it. You may wonder, "Why have an oscillator specifically for this?! Why not just have
another full-fledged oscillator?" (which it basically is, because it can produce other spectrums as
well).
In order to truly understand the purpose of a sub oscillator section (or taking an oscillator and
restricting its role to that of a sub) we must ask a question.
You have probably heard a trumpet and a flute. However, I bet you have never confused a
trumpet sound for a flute sound.
I also bet you can tell if the trumpet sound is playing the same note as the flute sound
(disregarding octave equivalents). Here are two instruments that sound incredibly different, yet
they have the same pitch?! How can this be?
Here is a video explaining a lot of this: What's the difference between Frequency and Pitch?
Thus the idea of timbre (pronounced “Tam-Ber”) is brought up.
This is the timbre of a trumpet playing a G6.
Here is the timbre of a flute also playing a G6.
Free Analyzer Video
Earlier we talked about how any sound can be broken down into many frequencies. In audio we
use what's called the Discrete Fourier Transform, abbreviated DFT, to do this. We will just
accept that this works, as the explanation is not easy. When we apply a DFT to a sound we find
that certain patterns appear. The one we care about is a frequency called "the fundamental".
This frequency is the lowest and loudest frequency in the spectrum, which is the range of
frequencies resulting from the DFT.
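If you want to see the DFT in action, here is a sketch using a made-up test tone (numpy's rfft is one readily available implementation of the transform):

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    # a fake "instrument": a fundamental at 196 Hz (G3) plus two quieter overtones
    tone = (1.0 * np.sin(2 * np.pi * 196 * t)
            + 0.5 * np.sin(2 * np.pi * 392 * t)
            + 0.25 * np.sin(2 * np.pi * 588 * t))

    spectrum = np.abs(np.fft.rfft(tone))
    freqs = np.fft.rfftfreq(len(tone), 1 / sr)
    print(freqs[np.argmax(spectrum)])  # ~196.0 Hz, the lowest and loudest partial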
In the flute and trumpet pictures above notice how the lowest tones in both instruments are
loudest and as the frequencies get higher the amplitude decreases.
The fundamental also happens to be a sine wave! Our perception of pitch comes from this
frequency. If a tuba, trumpet, synth, clarinet and singer all create the same note it’s the
fundamental frequency they all have in common.
AUDIO EXAMPLE
All the other frequencies in each instrument's sound can have variations in them (they are NOT
random variations, far from it; we will get into this when we approach the harmonic series), but
the fundamental must be the same (or an octave equivalent) for it to sound like the same note.
An octave equivalent is simply a ratio of 2 to a frequency. So if one frequency was 100 Hz, then
we would hear 200 Hz as an octave. This doubling is why we use base 2 in audio. This
understanding is important, and I am going to assume you're comfortable with logs as they relate
to audio, but if you're not, here is a video where I explain it in a way that requires only knowing
how to multiply and divide to understand: Logs for Audio
I know it seems like something you can just skip, but really understanding logs is very
important for an intuitive understanding of the overtone series.
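To make the base-2 idea concrete: the "distance" between two frequencies, measured in octaves, is just a base-2 logarithm.

    import math

    f1, f2 = 100.0, 800.0
    octaves = math.log2(f2 / f1)
    print(octaves)  # 3.0 - doubling 100 Hz three times gives 800 Hz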
Knowing this, we can now see why the sub bass oscillator exists! It's to reinforce the fundamental
frequency! This is its most basic purpose. However, giving your fundamental some extra juice
can lead to mixing problems and music theory problems as well! There are also a number of
staples in which this particular technique is useful.
Because the sub bass essentially creates a more powerful fundamental, or changes where the
fundamental is, it causes our perception of the pitch of the sound to change. The
louder and lower the fundamental is, the rounder and boomier your sound will become,
depending on the range the fundamental is in.
Tutorial EXAMPLE of sine wave reinforcement in various ranges and also other waveforms.
If it is in a low range this description fits well; if it is in a mid to high range then we are in less
danger of a boom (however, it's not really a sub bass at this point, as a sub bass should sit
around 80 Hz or below). We can reinforce a sound's tonality with the sub bass. Keep in mind there
are fewer Hz in the lower octaves of the spectrum due to the logarithmic nature of pitch (covered
in that video I know you watched and took the time to understand). When combined with
whatever tuning system you are using, this can periodically create a problem where we cannot
obtain a fundamental that actually sounds in tune with our sound. Trust me, you'll know when
you have found such an issue. The way to deal with this is either to change the sounds you are
using in the upper spectrum, as their spectrums may be giving your ear the expectation of a
different fundamental and thus the one you have is "out of tune", or to mess with a pitch
offset until you find something that sounds about right (however, this will only work for a small
range because of the logarithmic nature of pitch). Sooooo, basically it sucks if you run into this
issue.
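One small consolation: you can at least put a number on how out of tune it is. Tuning offsets are measured in cents, 1200 per octave (that base-2 log again). The frequencies here are hypothetical:

    import math

    expected = 55.0   # the fundamental the upper spectrum implies (hypothetical), in Hz
    actual = 56.3     # the fundamental the sub is actually producing (hypothetical)
    cents = 1200 * math.log2(actual / expected)
    print(round(cents, 1))  # ~40.5 cents sharp - easily audible on a sustained sub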
This also has major effects in music theory as this can essentially alter what sound acts as the
lowest voice thus altering the inversions of chords! Beware!
Music theory series (I will not be explaining basic music theory in these lessons as much):
https://www.youtube.com/watch?v=1JBjYLGspk8&list=PLOMuI-j1vRxSVE6HUVLyjSyL6qxa_TU2e
Next we have the conundrum of a missing fundamental (and second harmonic)! As sorta
indicated above, our ear has an expectation. Our ear gains a sense of what is the correct
frequency and what is the correct note based on other sounds around your note. There is a very
specific series called the harmonic series (which we will really dive into later) that our ear has
come to expect. It's everywhere to some extent, with the exception of noise. If you want to hear a
close to ideal representation of this series, just listen to a saw wave. That is the series! If a
sound strongly resembles this series (the saw wave, basically) then our sense of the pitch of the
instrument will increase in accuracy! This is because our brain will use the series around it to
help it determine what the fundamental is. That's right, your brain decides, aurally speaking,
what the fundamental will be.
Video Example
However, what if the fundamental is missing from this series? Well, that is where things get
weird. Your brain is soooooo good at recognizing this series that it will actually adjust and insert
the fundamental for you! That's right, even though the frequency is totally gone you will still
perceive the pitch. You will know some frequencies are missing alright, but your perception of
pitch remains untouched! There are 2 circumstances you must understand for use in a track, the
first being a static removal and the second being a layered removal.
A static removal is when you make your sound in such a way that it is simply missing the
fundamental, as done in the video example I linked above. I am going to extend up 1 more
harmonic as well, so we are missing the first frequency (the one we held responsible for pitch)
and the second. The reason we remove the next harmonic is that the series starts off as:
fundamental, octave (so the same note, basically), then octave plus a 5th (a totally different note)!
So if we removed these 2 frequencies but kept all the others in the harmonic series, what note
would we hear? An octave plus a fifth up? Or our original note? Surprisingly, it's our original
note! The emphasis of a series has given our note a pitch it would not normally have! The
further we move away from this series, the more likely we are to lose pitch, so some distortion or
filter techniques applied to a sound like this can cause your sound to suddenly sound grossly
out of tune (you will experience your fair share of this as you do more sound design, especially
FM). If you do enough sound design you will probably find you have to adjust notes up or down
by some weird offset because suddenly the tuning changed, and this is the reason why! You
have messed up the balance of the implied fundamental. Instruments such as the bassoon have
sounds like this. Please note the fundamental doesn't have to be completely gone; it just needs
to be reduced in amplitude such that it no longer does its normal job. This is really an additive
synth technique, but can be implied here through the use or non-use of the sub bass. Through
this method we can obtain nasally textures or hollow sounds. Synth 1 does possess the ability to
do static removal through a filtering technique (though it is very difficult to achieve usable
results).
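If you want to build the missing-fundamental illusion yourself, here is a sketch of the static removal idea: a harmonic series on 110 Hz with the first two harmonics left out. Render it to a file or a sampler and you should still hear the pitch as 110 Hz:

    import numpy as np

    sr = 44100
    t = np.arange(2 * sr) / sr
    f0 = 110.0  # the fundamental we are deliberately omitting

    tone = np.zeros_like(t)
    for n in range(3, 13):                  # harmonics 3 through 12 only
        tone += (1.0 / n) * np.sin(2 * np.pi * n * f0 * t)
    tone /= np.max(np.abs(tone))            # normalize; 110 and 220 Hz are absent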
Video Example
The second scenario is layered removal. This is when filters come into play. Filters are a later
topic, but a filter's job is to remove frequencies in a specific way. A layered removal is when you
let the listener hear your entire sound, then remove frequencies from it slowly. In this case the
harmonic series almost doesn't matter at all! The listener has already heard the sound in full,
and thus the pitch is fixed in their mind. However, if you let the sound go on like this for long
enough without the aid of a harmonic series, the listener will acclimate to the new spectrum and
the tuning will shift. This is why, every time someone uses a high pass filter, we still hear
everything in tune.
Audio Example - S3RL - Pretty Rave Girl (Hands Up Edit) - High Pass Filters are everywhere
yet we hear all the right notes.
Now it should also make sense why this sub bass oscillator can produce other spectrums as
well! It's so we can use this implied pitch, and also so that, if we don't require the sub bass for
this, then we have another oscillator at our disposal. So we should send a filtered signal (one with
only the first few frequencies present) to the low end of our sound for reinforcement, rather than
a sterile sine wave. This will guide our ear and mind a little better. What spectrum you use for
this depends on the sound, and if you already have a number of sounds down there then a
single sine wave may be more than enough. The ability to filter the sub bass separately is not
provided in Synth 1, so if we intend to use this technique we will have to create another instance
of it.
Deadmau5 and Steve Duda - Mastering and Routing. While the whole video is great, I time-stamped
it to start at the part relevant to the sub. Just for some additional perspective.
Consider what Slynk has to say: How to make your sub sound great on any system
These are all talked about mostly from a mixing perspective. Honestly, this perspective is beaten
to death on YouTube and a lot of new producers get distracted by it. Generally that is the way
people are taught about sub bass, but we are interested in it more from a sound design
perspective and the potential it has to impact our sound and song in more than just mixing; just
keep that in mind. Sometimes mixes and tones are totally compromised because people are
taught to cut out certain sounds, removing fundamentals all over, changing how chords sound
and how the mix feels. Let your ears be your guide, but remember we are making music, not
mixes. Basically, don't get distracted by getting a mix to fit everyone's definition of what you
should do over making good music.
A long-time use of the sub bass is kick drum reinforcement. The idea is that a kick is largely a
sine wave (this is false for many kicks, but many EDM kicks follow the sine wave type).
Here is a more typical Hip Hop kick:
Here is an EDM kick:
This technique can work well for either kick; it will just require different small adjustments. I am
going to show you a more basic form of this technique. The original technique is actually more
complicated because it requires being able to trigger the sound anytime a kick occurs on the
track. Instead we will just do a version that we layer in with the kick, knowing beforehand when
the kick will occur.
So, the basic idea is if we can generate a sine wave at the proper frequency, we can use it to
layer in with our kick and give it some extra juice. A kick's pitch actually changes over time,
something we are going to ignore at this point; we will still get very usable results. We also
need to change how the sine wave turns on and off. It must match the kick's amplitude contour,
meaning how the kick's volume turns on and off. A kick has a powerful snap at the start and
then tapers off, as you can see in the images above. We are not worried about the snap at the
start, as this is a burst of higher frequencies; we are only concerned with the periodic part of
the waveform. The unique way the kick starts is called the transient and is incredibly hard to
replicate. A transient is a moment of excitation: any time our audio suddenly peaks is a
transient.
Samples have their own terms. The start of the sample is called chiff. It contains the unique way
the instrument starts. After that, a number of things can happen. In the case of a kick it can vary,
but we are only concerned with kicks that are followed by a periodic (repeating) waveform. We
will supplement the periodic part of the waveform with our sine wave.
If our sine wave came in immediately, we would compromise the sound of the chiff and augment
the sound of the kick in a more noticeable way. This isn't always bad, but for what we aim to do
it's not a goal.
Here is an example of an added sub changing the chiff, and one that is not. I also demonstrate
the effect that changing the phase of the added fundamental can have upon the chiff.
This is also useful for kicks that perhaps have a sloppy low end. We can remove the low end
with an EQ, and then supplement it with a far cleaner sub bass, creating a custom-built
squeaky clean kick.
Let’s do it! (the video linked above demonstrates this portion)
First, pick a kick sound. Select one that is not too long (so no big 808s) but has a periodic
portion. Here is the kick I selected:
Send it to a mixer track and, with an EQ, shave off the low end. Find a nice solid spot where
the low end is still sorta there, but clearly reduced in amplitude, so that you can merge it with
your sine wave.
This kick had a particularly dirty low end so I removed most of it.
Next grab Synth 1. Use the default patch we set up in Lesson 1. We will need to do 2 things. One
is to find a note that resonates with our kick. I found D#2 to work really well; you may need to
adjust the tuning of Synth 1 to really zone in on the proper frequency.
The second is to adjust the envelope of our sine wave so it can match the amplitude contour.
We will really dissect this later. For now we are concerned with the D knob (decay) and A knob
(attack). The attack knob controls how fast our sine wave turns on or "attacks", and the decay knob
controls how long it takes our sound to turn off. Also turn the sustain off, because the sustain will
keep our sound ringing at that volume, and we want it to fade away, not ring out. You will be able
to make more decisions about these knobs later, but for now they are fine where they are.
Because we are not going to use the release knob, we need to be careful about how long our
note is for the sine wave. The longer the decay, the longer it will ring. Set the decay of your note
by ear after you have selected your pitch. Pitch and length have this funny relationship:
generally speaking we want to set pitch before length, but it doesn't always happen that way
(usually I end up adjusting these 2 or 3 times anyway).
Now we can set the attack knob. For reasons explained above, a fast attack will alter our chiff,
while a slower attack will alter the periodic part of the waveform, and an even slower attack will
cause us to hear the sine wave as a separate sound! This is a phenomenon known
as masking, in our case a combination of temporal (transient) and frequency masking. I cover
this in my critical listening series here: Masking. Masking is SUPER important for getting sounds
to blend in a natural way. So all we are doing is basically trying to find the ideal mask for our two
sounds so they merge together and sound like one comprehensive sound.
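To pin down what we are asking Synth 1 to do, here is a rough stand-in sketch: a sine at the kick's resonant note with a fast-but-not-instant attack and a decay to silence (sustain at zero). D#2 and the envelope times are just the values that happened to fit my kick; set yours by ear:

    import numpy as np

    sr = 44100
    note_freq = 77.78                # D#2 in Hz (equal temperament, A4 = 440)
    length = 0.4                     # seconds, set by ear to match the kick's tail
    t = np.arange(int(sr * length)) / sr

    sine = np.sin(2 * np.pi * note_freq * t)

    attack = int(0.010 * sr)         # ~10 ms: slow enough to leave the chiff alone
    env = np.ones_like(sine)
    env[:attack] = np.linspace(0.0, 1.0, attack)
    env[attack:] = np.exp(-5.0 * t[:len(t) - attack])  # exponential decay, no sustain

    sub_layer = sine * env           # mix this under the EQ'd kick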
Take some time to try out various attack values now, with the added knowledge of masking!
You may have noticed that your sub bass suffers from clicking and popping. You may also have
what sounds like rough movement (bandpassed noise) when the volume changes on your sine
wave. This is due to a number of complications with digital audio and a couple other phenomena;
rather than get into them here, I will direct you again to here, where I specifically talk about it.
Now try this: with a fast attack, affect the chiff of the waveform. Then change the fundamental's
phase and see what happens.
Sometimes results are very noticeable, because the fundamental of the kick is also very strong,
but many times this is a very subtle effect or even has no effect! It just depends (freely adjust
the EQ cut we made earlier to get more noticeable results).
While we are on the subject of phase, I want to caution you about running your sounds through
plug-ins. Some plug-ins color your sound. This just means they will change your sound in some
way. A plug that has a lot of color (not always a good thing; in fact in some situations it is
normally a bad thing) is Image-Line's Maximus. I will spare you the details, but essentially it has
to do with how the plug-in takes the spectrum and filters it for processing.
I say this because you will more than likely use a number of additional plugs afterward to give it
that extra little shine, and even after that you will then mix and master it. Open an instance of this
plug and run a kick through it. You will find your kick may sound very different (granted, if you
expect to run it through a plug-in without it sounding different then I don't know why you opened
the plug-in in the first place!). You may even get fancy and try to activate linear phase filters
(found in the drop down menu). Many people falsely think linear phase will fix it (because
teachers in production schools talk about math and DSP they don't actually know, and
then their students go around saying false stuff, backing it up with the "I went to school for
this" bullcrap), but it's not the phase shift of the filters at all that causes this problem in this case;
rather, it's the type of filter used! IIR and FIR filters have completely different problems, which I
give plenty of examples of here and explain a bit here. Just something to be aware of.
So there you have it! A pretty thorough explanation of kick reinforcement. Granted, later we will
also be able to toss gates and so forth into the mix and even approach creating a kick with
nothing but a synth. Granted, many sample packs have already processed the living crap out of
your kick, so a little bit of mixing with these things in mind is usually all you need, but if you're
recording some drums you will find these concepts to be very useful!
This comes with some caution. If your sound has a large range of notes, then cutting the low
end of the sound can become tricky. If you make a static EQ cut removing some of the
fundamental of the sound, but then your sound plays a higher note above the frequency cutoff
point, you can wind up with timbral shifts you don't like. (There are more technical reasons as
well, but most producers don't know enough to care about them, such as chord inversion
problems. Ain't nobody got time for dat!) In genres like dubstep it is almost expected to have a
separate sub, while other genres like DnB may be more subtle about how to implement the tone,
or may even prefer a muddy low end!
This is generally a technique for more synthetic sounding tracks, but there are weird cross
breeds out there where the “separate sub” ends up becoming the whole bass line itself!
Some of the more subtle sound design techniques involve methods of incorporating it as a part
of the original sound and only processing parts of the spectrum. These will be talked about in
context.
Mixing tends to come up with this thing because it's often far too loud. I don't worry very much
about levels while being creative and let that happen during mixing; however, I do mix a little to
encourage creativeness. It's a fine line, and beginners often support the "I mix as I go" idea,
usually with a skewed idea of what mixing actually is and why a separate step for it is important.
It's ok for a sub to be rhythmic and fit in and out of the track, in rhythmic genres huge long sub
bass lines would ruin everything. While other tracks they are fundamental to the atmosphere.
Zomboys “Young and Dangerous” has excellent examples of both. Listen carefully at the drop
for the sub. You’ll also find endless videos of people telling you how to make a sub “sound good
on any system”. Be Careful who you listen to, there are some videos filled with misinformation
based on Dogma but there are also good ones out there to! They are usually specific to a genre.
I listed some of these in the consideration section. Sometimes it makes sense to have a little
more than a sine wave down there in the sub bass so it comes through on smaller speakers that
don’t support such low frequencies. When we do this we generally want to follow the harmonic
series. However, it only makes sense to do this if the sub bass is important to the track. In a
deep house track this could be very important, but then you can ask “how many people listen to
deep house over small speakers?” Answer: like freaking nobody that likes deep house. Tracks
that already have busy low ends like dubstep and so forth generally don’t need anything else
competing for the low end. So just keep in mind you're treading waters that are specific to the
situation, style and tune! Don’t just go around adding harmonics to every sub bass in every song
because someone told you it was a good idea. Heck, some songs need no extra sub bass at all!
I have included a project file in which the challenge is to add a separate sub to a sound in a way
that is not obtrusive to the sound. There are 2 examples: one is more chill and requires you to
write a bass line; the other is a section of dubstep. Your task is to get dat bass as clean as
possible while also enhancing the beat.
Mixing dangers
1. A lot of sounds like the low end; don't add it if it's not needed! - Role of Sound in a Beat
2. Make room: consider panning and volume for all your sounds before EQ and other
options.
3. Check your ADSR; oftentimes the release time on a sub can be the source of the issue.
4. Consider normally forbidden techniques in thinner tracks (such as the deadly verb on the
sub, or basically any delay-based effect).
5. Consider sidechain vs. ducking vs. creating a hole in the spectrum.
6. USE THE FADER TO ADJUST THE SUB. If you're using EQ on your sub just to set its
level, you don't need it, unless it's for treating one of the deadly techniques.
7. Sub is one of those things that suddenly eats headroom like a monster if you mix at a
bad monitoring level (meaning your perception of the bass is skewed). You'll find in
mastering that your track sounds totally smashed, so double check your mix at various
levels, including the level you think your audience will be listening at! Consider how bass
will be affected through the mastering process.
Exercises:
1. Reinforce the kick drum in this track with a sub bass. Add a bass to the track as well. Try
each waveform with filtering and comment on your best result.
2. Mix the following track, making room for the sub bass.
3. Add a reinforced frequency or spectrum to one of the sounds in this track; try it out on
something other than the kick.
4. Video test - A series of sounds: which ones were missing the fundamental, and what
type of removal was it?
5. Video Test - General Waveform Test
6. General Questions
Questions 3-1
1. What are the 2 ways fundamentals are removed?
2. Does removing the fundamental change our perception of pitch?
3. T/F Adding a sub bass does not affect our chords.
4. What are 2 common uses for sub bass?
5. List 3 mixing dangers with at least one being a generally forbidden technique.
6. T/F linear phase fixes all phase problems a plug-in may cause.
7. What is the start of a sample called?
(Reminder answers can be found on the answer document)
Lesson 4
Applications of The Sine Wave
We have considered the sine wave in a sub bass supporting role; now to the really interesting
part of the sine wave, which is its application without restriction to a frequency range. This can
be revealed much more clearly with the 6 aspects of sound.
The sine wave is a tonal sound when remaining static, but easily possesses the ability to move
into FX and atonality if pitch is automated or flux is increased. Short sine waves create those
sweet percussive hit leads, while longer sine waves tend to be some kind of melodic lead. Soft
and loud is generally considered from 2 points of view. The first is the layer point of view, which
we have already covered, and the second is from an expression point of view. The dynamics of
the sounds are products of the musical expression; they are there to increase the musicality.
This brings us to 2 more ways of seeing our sound! The first is from a mixing perspective, and
the second is from a composition perspective. Each view contains subcategories. Mixing
includes things like recording, deciding which sound is more important for a particular "quality of
sound", and so on. Mixing will largely be secondary throughout our lessons. Composition offers
many more fields, such as performance, influence on music via harmony, the sounds we pick
and how we blend them, and so on. When doing sound design it's important to have both in
mind, but composition should take precedence over most mixing perspectives. Our aim is
music, which is decidedly in favor of the composition mode of thought.
ADSR
Two of our six aspects of sound, long and short, have an almost direct relationship to
something called an ADSR envelope. ADSR stands for Attack Decay Sustain Release. There
are actually more kinds of envelopes; the most common ones add stages such as "hold", while
others have specialized purposes, and others still offer unique methods, such as the morph
options in Massive.
We will be focusing on just ADSR, as it's what is available to us in Synth 1.
What is an envelope? An envelope is how we control something over time. It is a controlling
signal, meaning it's a signal that is not meant to be heard. For example, consider the sprinkler
system around your house. You don't want it to always be on, so you put a timer on it so that it
only turns on during certain times of the day. The timer runs in the background to automate the
sprinklers turning on and off so that you don't have to worry about it. The alternative is you
setting an alarm to remind you to go outside and turn the sprinklers on, and then another alarm
to turn them off! In audio these envelopes come in many forms. For example, you could have
hired someone to turn your sprinklers on and off; however, this is clearly less effective than the
timer, and so we find that some methods are better than others in certain cases. You can also
adjust your timer in a variety of ways. Perhaps you have the sprinklers on for 5 minutes every
hour on the hour instead of just 30 minutes in the morning. Maybe you have your sprinklers
turning on and off randomly, probably indicating your timer is broken! The same goes for the
envelopes we have: there are many ways to go about creating and using them, as well as a huge
number of ways for them to send out signal. Each method yields a valuable way to internally
control our synth.
So what we are dealing with here is an envelope generator, and in our case this envelope
generator creates an ADSR amplitude envelope. If we could take the ADSR envelope out of the
synth it could control anything, but in Synth 1 this envelope is fixed in place, only allowing us to
use it to change the amplitude of our signal. That is why it's labeled "Amplifier" as opposed to
"Env" like in other synths.
The controls are very easy to grasp, but setting them up can sometimes be an art form unto
itself.
A - Attack controls how long it takes our sound to reach its maximum amplitude from its
minimum amplitude when we activate it (in this case by playing a note).
At first glance this knob seems very easy to grasp completely: you want a pluck sound? Simply
use a fast attack. You want a smooth long pad sound? Use a long attack. But you'll run into
some interesting things with this knob, especially as we add more and more processing. I want
to point these things out here because they are rarely, if ever, considered in most texts or
tutorials and can have a large impact on how you choose to use an envelope in a patch.
1. If the attack knob is set around 40-100 ms you will miss the chiff of the sound. This
means if your sound relies on a specific phase relationship, that relationship will be lost;
further, if you run your sound into a hefty nonlinear distortion effect you may get a vastly
different sound out based only on this move! (This is unlikely to happen, but can, since
many distortion units are based on an envelope follower; EQs are used instead to
deliberately mess with the spectrum.) This can sometimes have large implications in
complex layered sounds.
2. These next comments have far more impact on short sounds than long sounds. This
amplifier is a filter! We have not talked about filters yet, but essentially this again points
to phase. Filters induce phase shifts, and envelopes (in this case) will also!
3. Finally, moves stack. Setting this knob at the start of a patch may just be a ballpark
estimate because you haven't added on the chorus or that EQ yet. Every plug will
induce its own side effects (unwanted things such as quantization noise etc.) on our
sound, so going back in patches where this knob is important can be a wise move.
There is much more we could say about the attack knob, but in reality, 95% of the time you set it
once or twice carefully using your ears and then move on.
D - Decay is how long our sound will decrease in amplitude before it settles on its sustain value.
Here is a Chapman Stick bass note. It has a very fast attack, a moderate decay and sustain,
and a fast release with a very complex action at the end. We seek only to recreate the
general contour of the shape with the envelope, but you can't help developing an appreciation for
the complexity of real musicians playing their instruments with all this nuance in every note while
doing this! If the sustain is all the way up then the decay will have no effect. You can get
"whooshing" results if your attack time is longer than your decay and your sustain level is low,
because your attack will suddenly drop to the sustain level at the rate the decay is set to.
S - Sustain controls the level a note holds at while the key is down. The sustain directly affects
the decay. Note: acoustic wind instruments typically have a volume level that continues to ring
until the player either runs out of breath or decides to change notes. The player can get louder or
softer, something we would not use an ADSR envelope for directly; to accomplish this we
would use additional automation. In instruments like the piano the sustain is not a fixed level but
a continuing decay, something we cannot do with a simple ADSR envelope. Because the sustain
determines the value the note will play at until we let go, the sustain will affect the decay, as
already noted in the decay section.
R - Release controls how fast the note will fade away after you let go of the note.
This stage is very important to remember to look at in some types of sound design. For
example, a long release on a bass sound can completely ruin it, while a long release on an
atmospheric pad sound can give a reverb-like effect and create dynamic expression.
Many additional FX use envelope generators to guide how they will change the sound,
so if we change the contour of our sound at the generation stage of the signal chain, that will
have a ripple effect through our processing.
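To tie the four stages together, here is a minimal ADSR sketch, assuming linear segments (real synths, Synth 1 included, often use curved ones). Times are in seconds and sustain is a level from 0 to 1:

    import numpy as np

    def adsr(attack, decay, sustain, release, note_len, sr=44100):
        a = np.linspace(0.0, 1.0, max(1, int(attack * sr)))       # rise to maximum
        d = np.linspace(1.0, sustain, max(1, int(decay * sr)))    # settle onto sustain
        held = max(0.0, note_len - attack - decay)                # time held at sustain
        s = np.full(int(held * sr), sustain)
        r = np.linspace(sustain, 0.0, max(1, int(release * sr)))  # fade after note-off
        return np.concatenate([a, d, s, r])

    # a pluck-like shape: near-instant attack, quick decay, no sustain
    env = adsr(attack=0.005, decay=0.3, sustain=0.0, release=0.05, note_len=0.35)

Multiply any oscillator output by this array and you have applied the amplifier stage.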
We have two extra controls on our particular envelope, each affecting playability.
Gain - Gain controls the maximum level of our instrument. We can view this knob in 2 ways.
1. As the highest value our attack stage will reach. It's more useful seeing it this way in short
sounds, and it may come in handy because of its position in the signal chain.
2. As a normal master gain. I discourage this view for this knob because that is what your
fader is for, and this only applies if post-processing is not a concern.
Velocity - Velocity controls how the envelope responds to how you hit the key. It will scale the
entire envelope based on the velocity level of the note. If a note has a low velocity level then it
will have a low attack level, and all the other levels (not lengths!) will also be altered accordingly.
You can control the degree to which this happens by simply changing the amount on this knob.
A value of zero will make every note hit with the same velocity level. Velocity is a physics term
meaning speed with a direction; in this case it has in mind how fast your hand comes vertically
down on the key. This is still very useful to us because we can give our notes velocity values,
increasing the musicality of the phrases we write.
So let's say you are playing a note and it's being sustained, and then you hit ANOTHER note! What
is the envelope generator supposed to do? Create another envelope for the new note, or just
have the new note join in with the old note's envelope level? This problem is solved with
retrigger.
Retrigger - Retrigger defines what an envelope generator should do if two notes overlap each
other. In many synths retrigger is hidden behind other words such as legato, or mono vs. poly. If
you want a new envelope to occur when you hit a note, then you want the envelope to retrigger
for that note. Most synths pick this behavior for you, and that is what Synth 1 does: it retriggers
for you. However, some synths offer extensive control over these modes in both polyphonic and
monophonic settings. What seems like a simple issue actually blows up into a big mess.
Honestly, I always find talking about this stuff a bit dry, and so I won't go into every detail here,
as I will cover voice settings separately. But essentially: if you want an envelope to retrigger,
use mono; if you don't, use legato; and if you want to be able to play chords, use poly, which
forces you to use retrigger. Extensive polyphonic retrigger control is relatively new and not
common in synths, but can be found in some Kontakt sample libraries. Synth Secrets beats this
subject to death.
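In sketch form, the retrigger decision is nothing more than this (made-up function names, purely illustrative):

    def handle_note_on(current_level, note_held, legato):
        """Return the envelope level the new note should start from."""
        if note_held and legato:
            return current_level  # legato: carry on at the level already reached
        return 0.0                # retrigger (Synth 1's behavior): fresh envelope from silence

    print(handle_note_on(current_level=0.7, note_held=True, legato=False))  # 0.0
    print(handle_note_on(current_level=0.7, note_held=True, legato=True))   # 0.7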
Also, these mesmerizing little squares show you which envelopes are active and how many
voices are currently sounding. It looks cool and is fun to play with, but practically speaking the
only thing I can think it would be good for is to keep you conscious of the CPU demand of your
patch.
Synths output sound constantly without an envelope so if you ever run into a modular synth
don’t be surprised when it just rings on and on and on! To stop it you need an envelope!
OK, so now we have a solid grasp on envelopes, so we can progress to creating some
sinusoidal sounds. Let's break it down to the bare bones and build up.
Shortness and length are scalable things, much like size. The earth is small compared to the
sun, and huge compared to you or me! When I use the term short or long I may not always
specify my reference point, because I think it is self-evident or it didn't occur to me because
another concept occupies my thoughts. In the example above I used parts of the sound, such as
the attack and decay, vs. other parts of the sound, in this case the sustain. This is a common
and useful way to talk about short and long.
Short sounds lose perceived amplitude once they get shorter than around the 40-100 ms range. (I
really want to use sones here, which is the unit for perceived loudness, but I fear I would
confuse readers; therefore I will use "perceived amplitude" as a generalized approach, but I want
to at least mention there are actually measurements for this stuff.) As a sound's length is
decreased past how long it takes our ears to perceive the frequency, we lose the ability to gauge
its loudness. This is of course only true for sounds that are already short. Go ahead and try it.
Get a sound out; a kick drum is a great example. Using an envelope, shorten the sound
gradually. You will find frequency content being left out, until you're left with a tiny remnant of
what used to be your mighty kick!
Shortness and length are often found together in musical lines to offer contrast in the song, or
may be used to create a style of playing. The sine wave, when remaining static, has large
applications in bass (which we have already explored) and lead sounds. It is often combined with
post processing.
Dry/Wet
You will come across a dry/wet knob at some point. In Synth 1 these knobs are labeled "Amt."
for amount. Dry and wet typically have acoustic implications (distortion is an entirely different
story). If you were to record a saxophone in an anechoic chamber, and then the same
saxophone in a large church, you would get 2 completely different sounding saxophone parts.
This is no surprise: one recording would be nothing but direct sound hitting the microphone,
while the other recording (depending on how close the mic is to the saxophone) would have
loads of reverberant sound hitting the mic. In sample library development, deciding if you will
record the natural reverb of the room or use convolution/algorithmic reverb is often a very
important decision and will impart a "sound" to the samples. Spitfire is famous for the sound of
Air Studios, where they record many of their products, and when they finally decided to do a dry
library it was quite a large deal.
We are not sampling, we are generating sound, but the same ideas apply.
When we create a sine wave, it is as if we somehow got the microphone infinitely close to the
sound source! It is a perfectly dry sound! The only places we would run into issues are the
generation of the wave and how we listen to the wave. But we don't hear sound every day as if
we were in an anechoic chamber, so we too must make decisions about how to make our sound
sound like it's in an acoustic space. Many other FX available to us will impact our sense of
space, but reverb is to be our weapon of choice for now. We will talk about the FX available to us
in later lessons; all I want you to understand right now is how important the idea of space is in
creating a sound. Signal flow is of massive importance as well. If we add reverb before we
run our signal through our compressor we will get much louder reverb, but if we compress our
sound and then add reverb, suddenly we get an entirely different quality. Different sounds for
different genres.
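Under the hood, a dry/wet (or "Amt.") knob usually boils down to a crossfade between the untouched signal and the effected one. A sketch, assuming a simple linear mix law (plug-ins differ in the exact law they use):

    import numpy as np

    def dry_wet(dry, wet, amount):
        # amount = 0.0 is fully dry, 1.0 is fully wet
        return (1.0 - amount) * dry + amount * wet

    dry = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)
    wet = 0.8 * np.roll(dry, 2205)          # a crude stand-in for an effected signal
    mixed = dry_wet(dry, wet, amount=0.3)   # mostly dry with a taste of the effect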
Boom! You now have a sweet pluck sound with the sine wave! But you also now understand
many, many things past just doing these steps.
1. Consider its length. Would your sine wave benefit from automating the release (not the
note length) in a musical way? What about the attack? How can you consider global length
and local length? Do these changes make meaningful impacts on your track?
2. Consider the verb. Does it enhance your sound or create mud? Can it be implemented
better? Does the range of your sine wave (meaning the notes you used) make sense for
the track and verb you picked? Would automating the reverb amount help? Consider
automating the reverb off while automating the release time up; this would blur the line
between the reverb and the sine wave, creating a more uniform sensation of sound.
Range. Range is something I have avoided because it's far more complex than it looks at first.
Range is often decided by ear and left at that! In this way range is extremely simple. Yet range
entails a variety of psychoacoustic effects, especially when combined with length (let alone flux
and tonality!), and also deals with digital audio to various degrees depending on the synth
chosen. All I desire to point out is spectral perception and aliasing.
Aliasing
Aliasing is the introduction of unwanted frequencies because we exceed the Nyquist limit.
Basically that means if we play very high notes we can wind up with lower frequencies that don't
exist in our original signal! Synth 1 actually doesn't have any problems with aliasing, but I want to
mention it because you will find synths that do, and on top of that some synths claim aliasing is
a part of their sound! Here are a couple videos explaining aliasing more fully:
Image-Line
Good Explanation
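If you'd rather see the fold-down numerically, here is a sketch: ask for a sine above the Nyquist limit (half the sample rate) and the sampled result contains a frequency that was never in the signal:

    import numpy as np

    sr = 44100                        # Nyquist limit is sr / 2 = 22050 Hz
    f = 30000.0                       # deliberately above Nyquist
    t = np.arange(sr) / sr
    sampled = np.sin(2 * np.pi * f * t)

    spectrum = np.abs(np.fft.rfft(sampled))
    freqs = np.fft.rfftfreq(sr, 1 / sr)
    print(freqs[np.argmax(spectrum)])  # ~14100 Hz (sr - f), an alias, not 30000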
Spectral Perception
This is very useful to us. You should have noticed this effect from the audio waveform test as
well: namely, that as a sound gets higher or lower, our perception of the spectrum changes.
Higher notes all tend to sound the same. You could even try something sneaky:
Write a melody with either a square or saw wave. Have a line that ascends into an upper
register where it's difficult to tell the two apart, and then switch the waveforms! Descend with the
new waveform. The effect is very smooth and can be used to great musical effect on all sorts of
timbres! If two timbres are close to each other, this will work to one extent or another. Pretty
handy. Two spectrums that are not close to each other, such as a saw and a sine wave, will
require a much higher frequency for this to have a smooth effect, so high that it is not pleasant,
so other methods would better serve you.