All about time alignment
Sound travels through air, and it travels at a specific speed, known as the
“speed of sound” (about 346.65 meters/second or 1137 feet/second). This
speed actually changes with temperature and pressure, but not by a lot.
Since the speed of sound is a bit higher than 1000 ft/sec, the inverse relationship
is also handy: for every foot sound has to travel through air, it takes
a bit less than a millisecond to traverse that distance. This is a great
“rule of thumb” to have in your head. If you need to figure things
out more precisely, you can use a tool like SpectraFoo to measure the delay time.
When there is enough of a delay between two copies of a sound, it sounds like an
echo. If the delay gets shorter the echo turns into a flam. If the delay gets
shorter still, the sounds merge and it is no longer a delay, but a filter. This
type of filter is characterized by a series of notches, and it makes the signal
sound (and feel) hollow. This kind of filter is generally known as a “comb”
filter because its frequency response looks a bit like a hair comb.
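To make the “comb” shape concrete, here is a minimal Python sketch (assuming NumPy is available; the 1 ms delay is just an illustrative value) that evaluates the response of a signal mixed with an equal-level copy of itself delayed by td seconds. The notches fall at odd multiples of 1/(2 td):

```python
import numpy as np

# Mixing a signal with an equal-level copy of itself delayed by td seconds
# gives the frequency response H(f) = 1 + exp(-j*2*pi*f*td), whose magnitude
# is 2*|cos(pi*f*td)|: a series of notches at odd multiples of 1/(2*td).
td = 0.001                                   # 1 ms delay (roughly 1 ft of extra path)
freqs = np.linspace(0.0, 20000.0, 2001)      # audio band, in Hz
magnitude = np.abs(1.0 + np.exp(-2j * np.pi * freqs * td))

notch_freqs = [(2 * k + 1) / (2 * td) for k in range(5)]
print("First few notch frequencies (Hz):", notch_freqs)
# -> 500, 1500, 2500, 3500, 4500 Hz for a 1 ms delay
```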
The dreaded comb filter
You have
probably heard this kind of filter in action — usually inadvertently. It
crops up any time a microphone hears the same sound along two different paths. A
good example of this is if someone is giving a lecture and speaking at a lectern.
Usually the sound reinforcement system will have a gooseneck mic above the lectern.
When the speech leaves the speaker’s mouth, it goes directly to the microphone,
but it also goes towards the lectern. It reflects off the hard surface of the lectern
and then heads to the microphone. The path that the reflected sound travels is longer
than the direct path, so the extra path length introduces a (short) delay relative
to the direct signal. The combined sound heard by the microphone is comb filtered.
This also crops up when you add plug-ins to a digital mixing system,
send audio out of your computer to be loop-processed with analog gear, or
multi-mic a source. The issue with plug-in latency has been (mostly) taken care of
in modern digital audio products -- most DAWs now have integrated latency compensation
for plug-in insert latencies, and sophisticated external DSP processors like Metric
Halo’s own +DSP have internal latency compensation.
That being said, latencies introduced by external loops and multi-mic'ing of sources
are not automatically compensated for by any products that we are aware of, and
they can have a significant and detrimental effect on the quality of the sound and
mixes you create.
We will leave the complexities of external loops for another tech page. The one
thing to note about external loops is that you always know when you are using one,
because you have to set it up yourself.
Multiple copies of related sources
Multi-mic'ing of sources is a bit more subtle; it can happen as a result of things
you might do while recording that may not appear, on the surface, to introduce delays.
One example is if you record an instrument via a DI box (guitar or bass) and also record
the same instrument with a microphone on the cabinet. Another common recording technique
is to use close and far mics on an instrument. Finally, if you track with multiple mics,
the bleed between the mics may be significant; when you multi-mic a drum kit, for example,
the bleed of one instrument (say, the snare) into all the mics on the kit can
be quite large, and the time delay between the snare mic and the other mics on the kit
ranges from a few tenths of a millisecond (top snare mic to bottom snare mic) to a
few milliseconds (snare mic to overheads).
All of these time delayed copies can create significant comb filters in your mixes. The
comb filters are subtle in the sense that you may not perceive them consciously unless
you know what to listen for; instead you will just know that your mix doesn't sound great.
It will sound hollow and ringy — generally disappointing. If you solo the individual
instruments, they may sound great; but the overall mix sounds muddy and indistinct —
just small.
Well, the good news is that once you understand the phenomenon, and know what to look (and
listen) for, it is actually pretty easy to fix. And fixing it is critical to ensuring that
your mixes sound and feel professional. That’s why we include a short delay in every
ChannelStrip instance; that way, whenever you need to introduce delays to compensate for
recorded delays, the delay process is already part of your channel strip.
The general principle
The general principle of time-aligning multi-mic'ed sources is to identify the latest copy
of the signal and then figure out how much you need to delay the other signals that have
the same source. Once you have the delay offsets, you just insert the delays, and you are
good to go.
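To make that bookkeeping concrete, here is a minimal Python sketch (the mic names and arrival times are hypothetical): find the latest arrival and delay every earlier copy up to it.

```python
# Hypothetical arrival times of the same source at each mic, in milliseconds.
arrivals_ms = {"close": 0.0, "mid": 2.0, "room": 7.5}

# The latest copy stays put; every earlier copy is delayed up to it.
latest = max(arrivals_ms.values())
delays_ms = {mic: latest - t for mic, t in arrivals_ms.items()}

print(delays_ms)   # {'close': 7.5, 'mid': 5.5, 'room': 0.0}
```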
Sounds easy, right? Well, as with everything worth doing, there are complications.
The biggest complication is that in many types of multi-mic situations, all the mics
hear copies of multiple signals, and it is not possible to align everything
simultaneously. There are a number of techniques to deal with this, including:
- Mic technique
- Gating
- EQ
- Identifying the most critical elements
We’ll discuss some of these techniques below when we address the specific case of
aligning drums. First, let's look at a simpler example.
A simple example
Let’s discuss the case of putting two microphones on a guitar cabinet (or an
acoustic guitar, for that matter). First off, why would we want to do this? Well, the
tone that the microphone hears way up close is very different from the tone further
away. The close-up tone tends to be bright and raw (and with cardioid mics will also
have a bass boost). The far-mic tone tends to be smoother and more integrated, and
also includes the sound of the room acoustics. Generally, a combination of those
two tones with a judicious choice of mixing levels will yield an excellent recorded
sound. The problem is that the time-of-flight delay to the far mic will
cause the dreaded comb-filter when the two signals are mixed together.
Well, we know how to fix that right? We just delay the close mic signal by the
appropriate amount so that it lines up with the far mic signal. Easy. But what
is the proper delay? Well, that is the second complication. We can estimate it
using the information about the speed of sound listed above.
Let's say that the close
mic is right on the guitar cabinet and that the far mic is 5 feet away from the
cabinet. Then, the signal in the far mic is delayed from the close mic by
td = 5 ft / (1137 ft/s), or td ≈ 4.4 ms. If you are recording at a 44.1 kHz
sampling rate, this corresponds to
ts = td × fs, or ts ≈ 0.0044 × 44100 ≈ 194 samples.
So, to summarize, the distance
of 5 ft corresponds to just about 194 samples at 44.1kHz sample rate. This means that
the signal in the far mic will be delayed from the close mic by 194 samples. If we
want to align the two signals, we just dial in 194 sample delay in the close
mic channel — this delays the close mic to align it with the far mic.
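If you'd rather not do the arithmetic by hand, a small sketch like the following performs the same conversion (the 1137 ft/s figure is the approximate value used above, and the 5 ft spacing is just this example's number):

```python
SPEED_OF_SOUND_FT_PER_S = 1137.0   # approximate; it varies a bit with temperature

def distance_to_delay(distance_ft, sample_rate_hz):
    """Convert an extra path length in feet to a delay in ms and in samples."""
    delay_s = distance_ft / SPEED_OF_SOUND_FT_PER_S
    return delay_s * 1000.0, round(delay_s * sample_rate_hz)

delay_ms, delay_samples = distance_to_delay(5.0, 44100.0)
print(f"{delay_ms:.2f} ms ~= {delay_samples} samples")   # 4.40 ms ~= 194 samples
```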
A bit more complicated
Now, the only problem with the approach above is that the speed of sound depends on temperature
and pressure; so our estimate may not be that great. The best way of finding the appropriate
time delay is to measure it directly. There are two ways to do that:
- The simple way, if your recording software allows it, is to zoom way in on the waveform
display and drag a selection from the beginning of a transient in the close track to the
beginning of the same transient in the far track. Then read out the delay using the time
readout tools in your recording software.
- If your recording software doesn't support reading out sample-level selections, or
if you want to make the measurement most accurately, you can use a tool like the delay finder
in SpectraFoo Complete
to measure the precise delay between the two microphones (a minimal sketch of the idea
behind such a measurement appears below).
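For a sense of what a delay finder is doing conceptually (this is a simplified illustration, not SpectraFoo's actual algorithm), cross-correlating the two tracks and locating the peak gives the delay in samples. A minimal Python/NumPy sketch:

```python
import numpy as np

def estimate_delay_samples(close_track, far_track):
    """Estimate how many samples the far track lags the close track by
    locating the peak of their cross-correlation."""
    close = np.asarray(close_track, dtype=float)
    far = np.asarray(far_track, dtype=float)
    corr = np.correlate(far, close, mode="full")
    return int(np.argmax(corr)) - (len(close) - 1)

# Tiny synthetic check: delay a noise burst by 194 samples and recover the lag.
rng = np.random.default_rng(0)
burst = rng.standard_normal(1000)
far = np.concatenate([np.zeros(194), burst])
print(estimate_delay_samples(burst, far))   # -> 194
```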
A more complex example
Ok. That was the simple case. Now let's go back to the (much more difficult)
problem of aligning a multi-mic'ed drum kit. As we described above, the problem
when you are trying to time align a drum kit is that you have multiple mics, each
of which is hearing multiple sources — and each source is a different distance
from each mic. So, to make it more concrete, imagine you have a mic on the top of
the snare, and you also have a mic on the hi-hat. The sound from the snare goes to
the snare mic first, and then a delayed copy shows up in the hi-hat mic. No problem.
But, unfortunately, the sound from the hi-hat goes to the hi-hat mic first and then a
delayed copy winds up in the snare mic. So if you delay one mic to line up one source,
the other source's bleed ends up delayed twice. In other words, it's impossible to align
both at once!
So, since it isn't possible to get it perfect, we instead need to figure out
what is most important. The critical idea is that we want to try to minimize the
overlap between the different microphones. This is different from the near and far
mic case; we don't really want all the microphones to give us a representation of
each signal in the drum kit; instead, the reason we are using multiple mics is to
try to capture each element separately. So, if we are careful, we can limit the
bleed into adjacent mics.
The first thing to do is to be careful with mic technique; we want to use microphones
that limit the bleed from the other elements of the drum kit. This can be
accomplished with both mic selection and placement. Using directional pickup patterns
will limit bleed from elements that are outside the polar pattern of the microphones.
When placing the mics, part of the job is to use the drum kit itself as
a baffle to isolate some of the mics from the other elements of the kit.
After applying care to the selection and placement of the microphones, we can also
take into account the frequency spectrum of the various instruments. For example,
the hi-hat and the snare have very different sets of characteristic frequencies associated
with their timbres. As a result, you can use EQ to remove the hi-hat bleed
from the top snare mic, and vice versa.
In addition to EQ, since drums are patterned instruments, we can also use gating
to limit the bleed from one element of the kit into the microphones for the other elements
of the kit. Since the elements of the kit have strong tonality, the sidechain feature of a
gate (like the one in ChannelStrip) is incredibly useful. By adjusting the sidechain EQ
you can control what signals cause the gate to open. By tuning it to the resonant
frequencies of the drums, you will very effectively keep the other drums in the kit
from opening the gate for that microphone.
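As a highly simplified sketch of the idea (not ChannelStrip's actual gate; the frequency band, threshold, and time constants are hypothetical parameters), the sidechain is band-passed around the drum's characteristic frequencies, envelope-followed, and the gate only passes the mic signal when that envelope exceeds the threshold. Assumes NumPy and SciPy:

```python
import numpy as np
from scipy.signal import butter, lfilter

def sidechain_gate(mic, fs, key_low_hz, key_high_hz, threshold,
                   attack_ms=1.0, release_ms=80.0):
    """Very simplified gate: open only while the band-passed sidechain
    envelope of the mic signal is above the threshold."""
    mic = np.asarray(mic, dtype=float)

    # Sidechain EQ: band-pass around the drum's characteristic frequencies
    # so that bleed from the other drums does not open the gate.
    b, a = butter(2, [key_low_hz, key_high_hz], btype="bandpass", fs=fs)
    key = lfilter(b, a, mic)

    # One-pole envelope follower on the rectified sidechain signal.
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(mic)
    level = 0.0
    for i, x in enumerate(np.abs(key)):
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level

    # Hard gate: pass the mic signal only while the keyed envelope is open.
    return np.where(env > threshold, mic, 0.0)

# Example (hypothetical values): key a snare mic to the snare's body resonance.
# snare_gated = sidechain_gate(snare_mic, 44100, 180.0, 400.0, threshold=0.05)
```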
So, if you apply the guidelines above, you will have limited amounts of overlap in
the bleed between the mics. The primary mics that will still require time alignment
are the following:
- Snare Top
- Snare Bottom (if used)
- Overheads
- Hi-Hats
The reason that these microphones are the most important is that the overheads are
pointed down at the instruments that the spot mics are listening to. Since both the
snare and hi-hats have significant high-frequency energy, the comb filter that
results from mixing the delayed copies is especially significant.
So, to time align the kit, you want to figure out the following delays:
- Snare Top → Overheads
- Snare Bottom (if used) → Overheads
- Hi-Hats → Overheads
and then delay the Snare Top, Snare Bottom, and Hi-Hats by the appropriate amounts
so that all the sound is aligned with the signal that is in the overheads.
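Once the delays have been measured, applying them is just a matter of padding each spot-mic track by its offset. A minimal sketch with hypothetical track names and delay values (in samples):

```python
import numpy as np

def delay_track(track, delay_samples):
    """Delay a track by prepending zeros, keeping its original length."""
    track = np.asarray(track, dtype=float)
    return np.concatenate([np.zeros(delay_samples), track])[:len(track)]

# Hypothetical measured delays of each spot mic relative to the overheads,
# in samples (e.g. as read from the DAW or a delay-finding measurement).
delays = {"snare_top": 118, "snare_bottom": 121, "hi_hats": 104}

# Stand-in signals; in practice these are the recorded spot-mic tracks.
rng = np.random.default_rng(1)
tracks = {name: rng.standard_normal(44100) for name in delays}

aligned = {name: delay_track(tracks[name], d) for name, d in delays.items()}
```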
Give it a shot; you'll be shocked and happy with the results!
version 1.0 - 20051110
Copyright ©2006, Metric Halo Distribution, Inc.