In the field of mixing and mastering, playback loudness is a constant concern. Since the EDM scene swept into the mass market in the late 2000s, commercial releases have been exceedingly loud. And because commercial recordings are so loud, everybody wants their own songs to be as loud as possible.
The only problem is that playback level is limited. A digital file has a hard ceiling called 0 dBFS, and any signal that exceeds it is clipped, producing harmonic distortion rather than additional amplitude. In other words, the amplitude of a signal will only go so far.
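To make the ceiling concrete, here is a minimal sketch (illustrative, not from any particular DAW) of what happens when you push a signal past digital full scale, taking 1.0 as 0 dBFS:

```python
# Sketch: what happens when a signal exceeds 0 dBFS (full scale = 1.0).
# Pushing gain past the ceiling flattens the peaks instead of adding
# amplitude -- that flattening is heard as harmonic distortion.
import math

def apply_gain_and_clip(samples, gain):
    """Apply linear gain, then hard-clip to the digital ceiling of +/-1.0."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

# One cycle of a sine at a moderate level...
sine = [0.5 * math.sin(2 * math.pi * n / 64) for n in range(64)]

# ...gained 12 dB over the ceiling: the wave squares off.
clipped = apply_gain_and_clip(sine, 4.0)
print(max(clipped))  # 1.0 -- the peak cannot exceed full scale
```

No matter how much gain we add, the output peak stays pinned at full scale; only the shape (and therefore the sound) changes.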
The words “loudness” and “amplitude” are not interchangeable. “Loudness” refers to the perceived sense of volume: it’s not so much about how much amplitude a sound has as about how much volume it seems to have. As a result, while mixing we can shape how sounds are perceived and play to that perception, with the goal of the final record playing back genuinely loud.
Now I’d like to issue a strong cautionary statement. I spent a lot of time trying to work out how to make a loud mix, and I learned two hard lessons. The first is that I was devoting so much time to learning how to make a mix loud rather than learning how to make a mix good. They are not synonymous. The second is that loud mixes don’t get approved; mixes that sound right to the client do.
Don’t imagine that making your mix the loudest in the world will win you clients or sell albums. It won’t. Loudness is just another tool in the toolbox for getting the job done. People want their records to sound like they belong alongside what’s out there, and what’s out there right now is loud, so it’s a necessary skill.
The Quantity Conundrum
The more instruments we have bashing away in the acoustic world, the more energy we put into the air and the higher the overall sound pressure level. This is not the case in the digital world, where we have an amplitude ceiling. Since each instrument consumes a portion of the total amplitude available, the more instruments we add, the quieter each one must be.
A single voice and acoustic guitar can therefore reach a louder playback level than a full band. This is especially true once we add drums to the mix, which I’ll cover in more detail later. Beyond that, the fewer elements in our arrangement, the clearer the record will be. If we’re aiming for loudness, we either keep the arrangement straightforward or push the non-primary elements further back in the mix so the main instruments can be heard more clearly.
Fletcher-Munson Curve (EQ)
Some frequencies read as louder than others. The Fletcher-Munson curves describe how we perceive loudness when different frequencies are reproduced at the same intensity. Our ears are most sensitive to frequency content between 1 and 4 kHz, and down in the bass range, particularly below 80 Hz, we lose a lot of perceived loudness.
What does this mean for loudness? It means that by emphasizing the 1 to 4 kHz range, we can achieve a louder overall playback level. That said, there are significant risks. This is the same range that contains infant cries, test tones, and banshee wails, so it can quickly become harsh and unpleasant.
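The ear's uneven frequency sensitivity can be approximated numerically. The sketch below uses the standard A-weighting curve (a rough, level-dependent stand-in for the equal-loudness contours, not the Fletcher-Munson data itself) to show how much the ear discounts bass relative to the low-kHz range:

```python
# Sketch: A-weighting (IEC 61672) as a rough proxy for the ear's
# equal-loudness sensitivity. By definition the curve is 0 dB at 1 kHz.
import math

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.0

# Midrange gets a small boost; deep bass is heavily discounted.
print(round(a_weight_db(3000), 1), round(a_weight_db(60), 1))
```

The same physical level at 60 Hz registers tens of dB quieter to the ear than it does around 3 kHz, which is exactly why emphasizing the midrange buys perceived loudness.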
The way we treat the bass is another important way the Fletcher-Munson curve affects our mix. We need more amplitude for low tones to be perceived at the same loudness as higher tones, so they end up eating more of our total headroom.
There is, though, a method for partially avoiding this: overtones.
We can get more presence with less actual level by boosting the harmonics of the bass and kick. For bass guitar, this may be in the 400 Hz range; for kicks and 808s, the 150-250 Hz range. Of course, by emphasizing these higher tones we are de-emphasizing the fundamentals. Trading fundamental for presence means we won’t get as much punch, but that’s the price of loud playback.
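One common way to apply such a boost is a peaking EQ band. The sketch below builds one from the well-known RBJ "Audio EQ Cookbook" biquad formulas; the +6 dB at 400 Hz is a hypothetical setting for a bass guitar, not a universal recipe:

```python
# Sketch: a peaking EQ biquad (RBJ cookbook) boosting bass overtones
# around 400 Hz while leaving the fundamental near 50 Hz almost untouched.
import cmath
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ, normalized so a[0] == 1."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha / a
    b = [(1 + alpha * a) / a0, -2 * math.cos(w0) / a0, (1 - alpha * a) / a0]
    a_coeffs = [1.0, -2 * math.cos(w0) / a0, (1 - alpha / a) / a0]
    return b, a_coeffs

def magnitude_db(b, a, fs, f):
    """Filter gain in dB at frequency f, read off the transfer function."""
    z = cmath.exp(-2j * math.pi * f / fs)  # z^-1 on the unit circle
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

b, a = peaking_eq_coeffs(fs=48000, f0=400, gain_db=6.0, q=1.0)
print(round(magnitude_db(b, a, 48000, 400), 1))  # 6.0 -- full boost at center
print(round(magnitude_db(b, a, 48000, 50), 1))   # ~0.1 -- fundamental barely moves
```

The overtone region gets the full boost while the fundamental, three octaves below, is essentially unaffected, which is the whole point of the trade.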
Duration and Compression
Loudness refers not only to how we perceive a sound’s amplitude but also to how long that amplitude lasts. In other words, a sound that reaches -10 dBFS and lasts 3 ms will be perceived as louder than one that lasts just 1 ms. The difference between the peak level and the “RMS” level, the average level over time, captures this.
The peak level tells us how much amplitude is present at the highest point, while the RMS tells us how much amplitude is present on average. Drums are transient sounds that peak and fade quickly. This is why, even when they peak at the same level, a saw synth will be perceived as louder than a snare drum. When we add drums to a mix, our total amplitude headroom is quickly exhausted, so the way we treat drums has a big influence on how loud the mix can end up.
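The peak/RMS distinction is easy to see in code. This sketch (illustrative signals, not real recordings) compares a sustained tone and a fast-decaying percussive hit that share the same peak level:

```python
# Sketch: same peak, very different RMS. A sustained tone and a
# percussive hit both peak at -10 dBFS, but average very differently.
import math

def peak_db(samples):
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)))

fs = 48000
amp = 10 ** (-10 / 20)  # -10 dBFS as a linear amplitude

# Sustained tone: half a second of sine.
tone = [amp * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs // 2)]

# Percussive "hit": same peak, but the envelope decays in a few ms.
hit = [amp * math.exp(-n / (0.005 * fs)) * math.cos(2 * math.pi * 200 * n / fs)
       for n in range(fs // 2)]

print(round(peak_db(tone)), round(peak_db(hit)))  # identical peaks...
print(round(rms_db(tone)), round(rms_db(hit)))    # ...far lower RMS for the hit
```

Both signals would light up a peak meter identically, but the tone carries vastly more average energy, which is why it reads as louder.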
Compression helps us control our peaks. A compressor readily attenuates the loudest portion of the signal, and we can use it to reduce the peak level of the drums without burying them. The trade-off is that while the drums will stay at roughly the same perceived volume, they will have less impact, since the highest peaks have been cut short.
The same applies to the overall mix. By compressing the full mix, we bring the quiet parts closer in level to the loud parts, resulting in a louder overall record. But there are sacrifices here as well. The contrast between quiet and loud is vital to a record’s groove, dynamics, and front-to-back imaging. Overdone, compression will flatten our rhythm, make all of our sounds blur together, and reduce the three-dimensionality of the song by narrowing the gap between the “front” and “back” of the soundstage.
Finally, the faster our compressor’s release, the higher the overall average level we achieve. This is why limiters are such an effective tool in the pursuit of loudness. Fast release times, on the other hand, can cause pumping and distortion, so if I’m trying to make a record louder overall, I normally aim for the fastest release time that doesn’t pump or distort.
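A bare-bones limiter makes the release/loudness relationship visible. This is a toy sketch with an instant attack and a one-pole exponential release (real limiters use lookahead and smarter detectors); all constants are illustrative:

```python
# Sketch: a toy peak limiter. Faster release -> gain recovers sooner
# between peaks -> higher average level (at the risk of pumping).
import math

def limit(samples, ceiling, release_ms, fs=48000):
    """Hard-limit peaks to `ceiling` with instant attack and exponential release."""
    rel = math.exp(-1.0 / (release_ms / 1000.0 * fs))  # per-sample recovery factor
    gain, out = 1.0, []
    for s in samples:
        target = min(1.0, ceiling / abs(s)) if s else 1.0
        if target < gain:
            gain = target                         # attack: clamp instantly
        else:
            gain = gain * rel + target * (1 - rel)  # release: recover gradually
        out.append(s * gain)
    return out

# A spiky signal: short loud bursts over a quiet bed.
loud = [0.9 if n % 1000 < 10 else 0.1 for n in range(48000)]
fast = limit(loud, ceiling=0.5, release_ms=5)
slow = limit(loud, ceiling=0.5, release_ms=200)

rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
print(rms(fast) > rms(slow))  # True: faster release keeps the average level higher
```

Both versions respect the same ceiling, but the fast release lets the quiet material back up immediately after each burst, so its average level is higher.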
Increase Clarity by Reducing Masking
Making sure one instrument doesn’t cover up another is a vital part of making a mix loud. If an element gets masked, we must turn it up to be heard, and we quickly eat into our headroom. Furthermore, any extraneous content rattling around in the mix turns to mud as we compress it. That means we not only lose absolute level up front, we also can’t get away with as much compression at the end.
Bear in mind that not everything needs to be tidy and sparse all of the time. Sometimes we like the extra stuff because it has a certain vibe or emotion to it. But if we want a record to sound loud, clarity is key.
Masking isn’t exclusive to the frequency domain, either; it happens in time as well. “Ducking” is a popular technique for enhancing the clarity of an element: one element tells another element to get out of its way.
This is achieved using a compressor’s sidechain input. Say we have a snare drum and a couple of electric guitars. If the electric guitars are big and powerful, they will mask the snare a little. We can put a compressor on the guitar bus and feed its sidechain input with the snare’s output. As a result, whenever the snare drum hits, the guitars are turned down. This happens quickly and subtly enough that we don’t hear the guitars ducking out; the snare simply stands out a little more when it strikes.
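The mechanics of sidechain ducking can be sketched in a few lines: follow the snare's level with an envelope, then use that envelope to pull the guitar bus down. All names and smoothing constants here are illustrative toys, not a plugin preset:

```python
# Sketch: sidechain ducking. The snare's envelope drives gain reduction
# on the guitar bus. Coefficients are per-sample toys, purely illustrative.
def envelope(sidechain, attack=0.5, release=0.95):
    """Crude one-pole envelope follower on the sidechain (snare) signal."""
    env, out = 0.0, []
    for s in sidechain:
        level = abs(s)
        coeff = attack if level > env else release  # rise fast, fall slowly
        env = coeff * env + (1 - coeff) * level
        out.append(env)
    return out

def duck(main, sidechain, depth=0.7):
    """Attenuate `main` (guitars) by up to `depth` while the sidechain is hot."""
    return [m * (1.0 - depth * min(1.0, e))
            for m, e in zip(main, envelope(sidechain))]

guitars = [0.5] * 100
snare = [1.0] * 10 + [0.0] * 90  # a single hit, then silence
ducked = duck(guitars, snare)

print(ducked[5] < ducked[99])  # True: guitars dip during the hit, recover after
```

During the hit the guitars drop out of the snare's way; once the hit decays, the envelope falls and the guitars come back to full level.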
Harmonic Energy (Distortion)
Loudness is our experience of sonic energy. That energy shows up most obviously as overall amplitude, but it also includes frequency content. For example, if we played a sine wave and white noise at the same amplitude, which do you think we’d hear as louder? The white noise. Because white noise contains broadband frequency information, it carries far more energy than a sine wave.
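The spectral difference is easy to demonstrate with a tiny DFT (pure Python, deliberately slow and illustrative): a sine concentrates all its energy in one frequency bin, while noise of the same amplitude spreads energy across the whole band:

```python
# Sketch: a sine packs its energy into a single DFT bin; white noise
# spreads energy across the band. Brute-force DFT for illustration only.
import cmath
import math
import random

def spectrum_bins_above(samples, threshold_ratio=0.01):
    """Count DFT bins holding more than threshold_ratio of the total energy."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        z = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                for i, s in enumerate(samples))
        mags.append(abs(z) ** 2)
    total = sum(mags)
    return sum(1 for m in mags if m > threshold_ratio * total)

n = 256
sine = [math.sin(2 * math.pi * 8 * i / n) for i in range(n)]  # one pure tone
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(n)]             # broadband

print(spectrum_bins_above(sine))   # 1 -- all energy in a single bin
print(spectrum_bins_above(noise))  # many bins share the energy
```

Same amplitude, wildly different spectral footprint, and the broadband signal is the one our ears read as louder.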
Distortion gives a sound more harmonic energy, and the perceived loudness increases as a result. This is why we hear dubstep’s distorted guitars and synthesizers as loud: there’s a lot of harmonic energy there.
We can introduce small amounts of distortion into our mix by saturating clean synths, softly clipping drums, or blending distortion into our bass in parallel. These techniques can texture our sound in a musically convincing way while also enhancing perceived loudness.
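Soft clipping is one of the simplest saturators. The sketch below pushes a pure tone through a tanh waveshaper (a common textbook choice, used here purely as an illustration) and measures the third harmonic that the curve creates:

```python
# Sketch: tanh soft clipping adds odd harmonics to a pure tone,
# raising harmonic energy without raising the peak level much.
import math

n = 1024

def soft_clip(samples, drive=3.0):
    """tanh waveshaper; `drive` sets how hard the signal leans on the curve."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]

def harmonic_level(samples, k):
    """Magnitude of the k-th DFT bin (k cycles per window), scaled by 1/n."""
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

sine = [0.8 * math.sin(2 * math.pi * 4 * i / n) for i in range(n)]  # fundamental in bin 4
driven = soft_clip(sine)

# Bin 12 is the 3rd harmonic of the bin-4 fundamental.
print(harmonic_level(sine, 12) < 1e-6)    # True: clean tone, no 3rd harmonic
print(harmonic_level(driven, 12) > 0.05)  # True: saturation created one
```

The driven tone gains audible upper harmonics where the clean tone had none, which is exactly the extra "energy" that reads as loudness.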
Of course, there comes a point where distortion turns to noise and gets in the way of the music. It’s crucial to keep track of where the music is heading while using distortion as a mixing tool. If it’s an aggressive tune, go ahead and distort it; if it isn’t, be cautious. There’s also a distinction to be made between sculpted distortion with a specific feel and simply clipping the master bus.
If we keep our elements clear, control our balances, manage our dynamics, pay attention to the Fletcher-Munson curve, and add tasteful amounts of distortion to our mix, getting a loud playback level isn’t difficult.
However, you may have noticed that this article carries a lot of cautionary advice. That’s because mixing for loudness is a non-musical goal, while mixing, in principle, should be driven by the music.
To put it another way, we should be mixing to better express the record; that should be our highest priority. We don’t want to compromise the way the final listener hears the record just to raise the playback volume. If we have to chase loudness for some reason, we want to do it sparingly.