In this article we’ll look at production from the mixing engineer’s perspective: the errors we see, and the things you can improve in your production right now.
Listen to different genres, analyse different music, watch artists perform, watch music videos − this is the best education. Not everyone has the chance to graduate from Berklee, and not everyone has a good mentor; the best mentor in life is music itself. By analysing music, you’ll learn how to write harmonies and melodies, and how good material should sound. This will instil taste in you and simplify everything you do later. If you know music well, you will write music well.
Talented producers know music deeply. Look at the size of their music libraries and vinyl collections, and how easily they navigate both old and new material. Understanding how music is made, how it sounds, what is good and what is bad, what gained popularity and what didn’t, will greatly help you create your own music.
This is a long process − it lasts for years and never really ends. You can graduate from Berklee, a university, or a school once, but music will keep teaching you all your life. That way you won’t lose relevance, and you’ll always follow trends and create a quality product.
Now let’s talk about practical matters: the mistakes producers make at the production stage. These mistakes are quite common and can be seen even in the work of famous artists. Further down the line, of course, large teams (mixing engineers, producers, assistants) fix these errors. But it’s far better to avoid these problems from the start and build the habit of doing things right. This will greatly simplify your work and save you money. It will also help your product sound good.
The most common mistake, one almost everyone encounters, is using low-quality sounds. It can kill your song from the very beginning, and no one will fix it later. In the process of creating a song, the mistakes of each step carry over into the next and pile on top of each other.
If you made a mistake at the recording stage, the arrangement won’t fix it; if you used bad sounds in the arrangement, the mixing engineer won’t be able to correct them; and the mastering engineer won’t fix a bad mix. As a result, you have already killed your song from the very beginning.
Try to choose quality sounds − for example, subscribe to Splice, a great library of high-quality sounds made by producers. It also doesn’t hurt to get one good synthesizer, analogue or software − Omnisphere and Spire are examples of the latter. Which one isn’t so important; the point is to have one good synthesizer that covers all your needs.
Don’t get attached to the same libraries − stay in constant creative search. Study your favourite artists: you can often find YouTube videos about which libraries and sounds they use. Nobody hides it; the information is freely available. This way you can build a pool of good sounds for yourself, and it will greatly help you shape your own sound.
At first, of course, sorting through and searching for these sounds will take a long time. But in the end, like any producer, you’ll have your own library with all the sounds you need, and you’ll be able to make a very good product.
There is also a very common mistake: using plugins during sound selection. For example, someone takes a kick, adds it to the arrangement, then puts an equalizer and a compressor on it, tweaks it until it seems to sound good, and leaves it that way. It’s better not to do that. You immediately limit the potential of the sound and make the mixing engineer’s job harder. The sound itself has to be good − if you pick a sound, it should sound good even without a compressor. Try working against a reference: pick a reference track, choose each sound separately, compare it with commercial releases, judge whether it holds up, and don’t compromise. If you hear that the kick itself just doesn’t sound right − delete it and look for a new one. This will also take a long time at first, but it will give you a solid foundation from the very beginning.
Another recommendation: try using an analyser when selecting sounds. Put an analyser on the master bus − this way you’ll see the sound’s frequency content and be able to compare your sound or arrangement with your references. Even if you don’t have good monitoring, you can still see what’s good and what’s bad, because some monitors cannot reproduce frequencies below 50 Hz. For example, you choose a kick that sounds good, but when you open the analyser, you see a huge drop in the low frequencies below 50 Hz that you simply cannot control, because you cannot hear it. So it’s better to check with the analyser.
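A DAW analyser shows this visually, but the underlying idea − measuring how much energy a sound has in each frequency band − can be sketched in a few lines of Python. This is a simplified illustration with a synthetic kick and numpy’s FFT, not a substitute for a real spectrum-analyser plugin:

```python
import numpy as np

def band_energy_db(signal, sample_rate, f_lo, f_hi):
    """Return the energy (in dB) of `signal` between f_lo and f_hi Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= f_lo) & (freqs < f_hi)
    energy = np.sum(spectrum[band] ** 2)
    return 10 * np.log10(energy + 1e-12)

# Synthetic "kick": a 40 Hz sine with an exponential decay.
# Its energy sits below 50 Hz, where small monitors can't reproduce it.
sr = 44100
t = np.arange(0, 0.5, 1 / sr)
kick = np.sin(2 * np.pi * 40 * t) * np.exp(-6 * t)

sub = band_energy_db(kick, sr, 0, 50)       # sub-bass band you may not hear
mids = band_energy_db(kick, sr, 200, 2000)  # midrange band you do hear
print(f"sub-bass: {sub:.1f} dB, mids: {mids:.1f} dB")
```

On small monitors this kick would sound like a faint click, while the numbers reveal that almost all of its energy hides below 50 Hz − exactly the situation an analyser protects you from.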
Another very common mistake among arrangers is a messy session: people use too many sounds, stacking them on top of each other and cluttering their songs.
We often see people use two or three kicks. In principle this can be done, but unfiltered layered kicks often lead to unpredictable results, and someone with little experience or mediocre monitoring won’t be able to cope with it. Layering is acceptable if you follow one rule: two layers per sound. In the case of a kick, that means one low-frequency layer and one high-frequency layer − cut the top off one and the bottom off the other. This way you can pick a kick with a good low end, then a kick with a good top, combine them, and get the sound you want. That said, I don’t recommend that beginners play with layering. Selecting the low and high layers and filtering them takes a lot of time, and someone who wants to make music as quickly as possible ends up buried in technical details: their hearing and perception get blurred, and their attention shifts from creating music to selecting sounds.
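The low/high split described above can be sketched in code. This hypothetical example uses a crude FFT brick-wall filter purely for illustration (a real EQ uses smoother filter slopes); the cutoff of 300 Hz and the two synthetic kicks are made up:

```python
import numpy as np

def split_filter(signal, sr, cutoff, keep="low"):
    """Crude brick-wall filter via FFT masking -- illustration only."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    if keep == "low":
        spectrum[freqs > cutoff] = 0   # keep only the bottom
    else:
        spectrum[freqs <= cutoff] = 0  # keep only the top
    return np.fft.irfft(spectrum, len(signal))

sr = 44100
t = np.arange(0, 0.25, 1 / sr)
kick_a = np.sin(2 * np.pi * 50 * t) * np.exp(-8 * t)     # good low end
kick_b = np.sin(2 * np.pi * 3000 * t) * np.exp(-40 * t)  # good click/top

# Bottom of kick_a + top of kick_b = one layered kick, two layers total.
layered = split_filter(kick_a, sr, 300, keep="low") + \
          split_filter(kick_b, sr, 300, keep="high")
```

Because each layer occupies its own frequency range, the two kicks don’t fight each other in the low end − which is the whole point of the two-layer rule.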
The same goes for the snare. We often see sessions with a lot of snares, which make the sound muddy. The sanest approach to snare layering is to use one snare and add some kind of overtone to it − a clap, a snap, whatever − to emphasize the drum itself. But snare layering shouldn’t turn into an endless stacking of sounds in search of a good result: one good snare and one good clap will be enough.
In the case of hats, you can use three, which fits logically into the stereo field (right speaker, centre, left speaker): they sit in the mix without interfering with each other or creating clutter. On top of that, you layer accent cymbals − crashes, reverses, splashes, rides, whatever − but there shouldn’t be many of them either. In general, try to stick to minimalism in the arrangement: the fewer and better the sounds, the better the track will sound in the end.
We often see producers using multiple basses, and that’s a fairly gross mistake. Ideally there should be one bass; if you layer it, the layers should be spaced by octaves (one in one octave, the second in another). This is a rather complex topic: when several sounds share the low frequencies, they become increasingly susceptible to interference.
What is interference? Picture one sinusoid − a wave going up and down, just as with light. If two waves move in opposite directions relative to each other (one going up while the other goes down), they cancel each other out: they sum to zero. In practice it works like this: a speaker cone moves forward and backward, following the sinusoid, and if two opposite sinusoids are fed into one speaker, the cone simply stops, because it cannot move in two directions at once. So when you stack multiple basses, the frequencies of the different basses add and cancel each other. And it isn’t always static, as with a pure sinusoid − it can be dynamic: at one moment of the track the frequencies add up and make the bass louder, while on certain notes they cancel and make the bass quieter, or it disappears altogether. In big commercial tracks such situations are best avoided: the instability is rarely musical, it is very hard to mix, and it eats a lot of headroom during mastering, leaving you with a fairly quiet track.
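This can be demonstrated numerically. A minimal sketch (assuming numpy; the 50 Hz and 51 Hz bass frequencies are arbitrary): identical waves double in level, opposite waves cancel to silence, and two slightly detuned basses “beat” − the dynamic louder/quieter effect described above:

```python
import numpy as np

sr = 44100
t = np.arange(0, 1.0, 1 / sr)

wave = np.sin(2 * np.pi * 50 * t)  # a 50 Hz sine "bass"

in_phase = wave + wave          # same direction: twice the level
out_of_phase = wave + (-wave)   # opposite direction: total silence

# Two slightly detuned basses (50 Hz and 51 Hz) add and cancel over time:
beat = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 51 * t)
loud = np.max(np.abs(beat[: sr // 20]))                        # ~0.00 s: summing
quiet = np.max(np.abs(beat[int(0.48 * sr):int(0.52 * sr)]))    # ~0.50 s: cancelling
```

The `beat` signal swells to nearly double volume and then collapses toward silence once per second − exactly the unstable, note-dependent bass level that makes stacked basses so hard to mix.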
The same applies to kick and bass. The kick and the bass must not sound at the same time on the same note, because the kick will cancel or reinforce certain bass frequencies: on one hit you’ll get a loud bass or a loud kick, and on another the bass will eat the bottom of the kick. That’s instability again, and it brings nothing good.
Think of it as a subwoofer: it’s a single speaker, and, figuratively, many sounds cannot fit into it. The low end consists of long waves that take up a lot of space, and one subwoofer cannot carry many sounds at once. Ideally, there is one low-end element at a time: when the kick hits, there should be no bass at that moment, or the bass should be far enough from the kick that the frequencies don’t add up. You can use all sorts of tricks, but remember the basic rule: first the kick, then the bass. You can shape the parts and equalize them however you like; just try not to have several elements in the low frequencies at once. And when kick and bass must coexist, space them by note, because sometimes the tone of the kick matches the tone of the bass and they reinforce each other strongly, making one note sound louder than the rest − this is best avoided. There are many ways to handle this, but since we’re at the arrangement stage, the simplest is to separate them in time, so that no bass sounds when the kick hits. You can either cut the bass at the moment the kick sounds, or write the parts so the bass doesn’t play on the kick hits. This will give you a smooth, stable bass, a good low end, and plenty of headroom overall.
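One way to picture “cut the bass when the kick hits” is a simple gain envelope, similar in spirit to what volume automation or a sidechain compressor does in a DAW. A hypothetical sketch (the kick times, the 55 Hz bass, and the 100 ms duck length are all made up for illustration):

```python
import numpy as np

sr = 44100
n = sr  # one second of audio

bass = np.sin(2 * np.pi * 55 * np.arange(n) / sr)  # steady 55 Hz bass line

kick_hits = [0.0, 0.5]  # hypothetical kick positions, in seconds

# Duck the bass for 100 ms after each kick, ramping back up to avoid clicks.
gain = np.ones(n)
duck_len = int(0.1 * sr)
for hit in kick_hits:
    start = int(hit * sr)
    gain[start:start + duck_len] = np.linspace(0.0, 1.0, duck_len)

ducked_bass = bass * gain  # silent under the kick, full level between hits
```

While the kick occupies the low end, the bass is out of the way; between hits it returns to full level, so the subwoofer only ever carries one low-end element at a time.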
In the accompaniment, as in everything else, try to use a minimum of sounds. You should have two or three main elements − say, a piano (or synthesizer) or a guitar as the main element − and everything else should be built around it to complement the melody. The main insight I can give on accompaniment: don’t write it in the same notes as the vocals. Otherwise you get the same problem as with the bass, just on a smaller scale: the whole accompaniment will interfere with the vocals, and you’ll end up with a big fight for space.
If you analyse big commercial projects, you’ll notice the accompaniment sits in one octave and the vocals in another. If your accompaniment (synthesizer, guitar, piano) is at the bottom, the vocals should ideally be at the top. This isn’t a 100% rule, but it will greatly simplify your work: you won’t need to play with timing, build tricky pauses, or spend a lot of time on mixing and automation. It’s the simplest option, and following it makes it easiest to build a commercial project and create an arrangement in which the elements don’t compete for space − the accompaniment doesn’t fight the vocals, and the vocals don’t drown behind the accompaniment. Again, stick to the minimalist rule: one main element and several auxiliary ones. Don’t pile lots of synthesizer layers on top of each other − it will all blend into one big mess and simply ruin the track.
After you’ve recorded the accompaniment, it usually comes to FX − uplifters, downlifters, sub sweeps, and so on. The most important rule here: don’t use too many of these elements, and make sure the FX don’t intersect with the kick and bass at the bottom. If you’re using a sub sweep, make sure no bass is sounding at that moment.
- Backing vocals
They can be almost anything − this element is quite experimental − but a couple of nuances should be taken into account.
When you build vocal harmonies, make sure the notes in the backing vocals don’t clash with the notes in the accompaniment.
The most common backing-vocal mistake we encounter is a very dense arrangement, already filled with different instruments, with a large stack of backing vocals on top. It all blends into a mess, because a big stack of backing vocals effectively counts as another instrument. You end up with a large orchestra plus a large choir, and that’s quite difficult to pull off without practice. You can find plenty of tracks with this approach, but understand that to handle such large projects you need to be very well versed in arrangement in general.
But if you’ve listened to that kind of academic music, with lots of backing vocals, all your life, it won’t be hard for you to intuitively compose such an arrangement correctly, with each element sitting in its proper place.
These were the most common production-stage problems that a mixing engineer faces.