Feel like I’m making music on a whole different level this year. I’ve never had a problem with writing loops and allowing for melodic fluency from my vocalist partners or sample cutting in the past, and still don’t, but lately I’ve taken to writing constantly evolving melodies throughout whole tracks as I’m making them. I’m not going to lie it’s more of a ‘pop music songwriting’ take on making music but fucking hell it’s challenging to the brain and the projects I’ve started through doing it are taking a long time but I’m going to feel so proud when they’re done!
I was recently having a discussion about getting tracks finished. I’ve now realised a lot of it for me is leaving it looping all night and doing other shit. My mind trances out to the repetitiveness & it starts imagining melodies etc that aren’t there and that’s how I progress with the track… I just add those sounds in at different parts of the structure, take bits out here and there and then make mid sections where I can really go nuts with longer melodies rather than just loops.
I’ve been working on a track tonight, and tonight as is the case many other nights I’ve been working on what I consider a shit tune. You know a tune is shit when it just doesn’t groove properly, the vocals sound cheesy as fuck, the chord progression has too many major chords so it sounds like a kids program theme, you can’t be arsed to mix it and it’s just generally something you’d turn off if somebody else sent you it.
But I carried on making it, and unlike previous times when I've carried on out of determination to make it better, tonight I carried on for a different purpose. I was enjoying and chuckling to myself over the fact it was so shite. It was good just to have a jam and not worry about the perception of the music itself, and focus more on the enjoyment and the moment of making it.
In theory it’s all beneficial… Smiling and messing around experimenting (even knowing the track is going nowhere) is still progress. I’m confident that when working in my comfort zone I could have made something much more productive, polished and preferential to my ears, but working outside of that comfort zone will be what helps me expand it so that my music doesn’t get too repetitive and samey.
Tonight, as a first I think, I naturally appreciated and embraced making shit music, rather than dwelling on it. It’s the continued ability to learn new things that keeps life and what you do in it exciting and interesting. Although I wish I could be spending more time on music, tonight has helped me realise that as long as I’m inspired to make music, even if the result is shit, it will still lead to better music eventually. I no longer consider ‘the shit music stages’ writer’s block.
Using Compression Creatively As An Effect
In the previous section I explained how to use compression in a mixing sense to even dynamics and make things sound more even and crisp. This section basically tells you how to make things sound big and loud using compression in a slightly different way!
As well as evening out transients and dynamics, compression can also be used to fatten sounds up or adjust the tonality of a sound. Most of this is done with the attack and release settings, but the ratio and gain also play a big role.
In this article I will be going over two applications of compression which differ from normal compression in that they focus more on making things perceivably louder than on making things dynamically even.
Parallel (or New York) Compression is a type of compression used to ‘beef up’ sounds. In my opinion parallel compression is for those times when you have no need, or limited time, to be technical with your production & want to give more of a nod to the old school, when engineers & producers didn’t have analysers etc. and just based everything on the sound they were hearing (as it should be, really ;)).
The concept of parallel compression is having two identical signals, each with a different level of compression on. One signal will be compressed to the balls (i.e. high ratio, fast attack and slow release) with a high level of gain to make up for the compression, and the other will be subtly compressed to not compressed at all. The result will be one signal which is extremely squashed in the transients but full in the sustain, and another signal which is nice and punchy and transient whilst not loud as a whole. The balance of these two signals are then mixed to make up to the level the original signal’s peak level was at, except now the sound will have a more consistent RMS level so will sound a lot more full and essentially louder as a whole.
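If you like thinking in numbers, here’s a rough sketch of that idea in Python — a toy per-sample compressor with made-up values and no attack/release smoothing, just to show the peak staying put while the RMS comes up:

```python
# Toy sketch of parallel compression. All values and the 'squash'
# curve here are made up for illustration; a real compressor has
# attack/release smoothing and works on a proper audio stream.
import math

def rms(signal):
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def squash(signal, threshold=0.2, ratio=30.0):
    # Crude 'compressed to the balls' copy: anything over the
    # threshold is reduced at a 30:1 ratio, then gain is made up
    # so the copy's peak matches the original peak.
    out = []
    for s in signal:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s))
    makeup = max(abs(s) for s in signal) / max(abs(s) for s in out)
    return [s * makeup for s in out]

# A 'drum-like' signal: big transients with quiet sustain between them.
dry = [1.0, 0.1, 0.05, 0.02, 0.9, 0.1, 0.04, 0.02]
wet = squash(dry)

# Blend the two (weights to taste), then trim the blend back down
# so its peak matches the dry signal's original peak.
blend = [d * 0.7 + w * 0.5 for d, w in zip(dry, wet)]
trim = max(abs(s) for s in dry) / max(abs(s) for s in blend)
parallel = [s * trim for s in blend]

# Same peak as before, but the gaps between transients are fuller,
# so the RMS (perceived loudness) goes up.
print(round(rms(dry), 3), round(rms(parallel), 3))
```

The point the numbers make: the blended signal peaks at exactly the same level as the dry one, but its RMS is higher because the squashed copy fills in the quiet gaps between hits.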
As an example, you might have a drum signal which is peaking at -6dB (as in all of your drum sounds have been bussed into one channel fader - in Logic this is accomplished by changing all of the outputs of your drum sounds to a bus input - and the total peak amplitude on the bus is -6dB). The next step is to split this signal into two duplicate signals so that we can apply parallel compression. In Logic there are two primary ways of doing this, and I’m sure most DAWs will have similar options:
The first way is to send the signal 100% (or in Logic’s case 0dB) to another bus. The important thing here is to make sure that your signal is sent pre-fader. This means the signal is sent at the percentage you’ve set regardless of what level the fader is at on the original channel; it separates the send pot from the level fader. When the signal is sent post-fader, turning down the channel’s amplitude also turns down the sent signal, whereas with pre-fader, even if you turn the original channel all the way down, the secondary signal will still play at full level. Sending pre-fader does still include any effects or processing you have on your original channel, so that’s something to keep in mind.
Next we’ll apply the aggressive compression to the duplicate signal. The first thing we’ll adjust is the ratio; we want this excessively high as we really want to squash the shit out of our transients so that we can raise the overall level of this signal. Experiment a little, but you’re not going to want it far off full - 30:1 is a good starting point. Next pull down your threshold until your gain reduction meter tells you there’s a substantial amount of compression taking place; I like around -9dB as a personal preference. If you can’t see how much gain reduction is taking place on a meter, use your ears to judge when extreme compression is happening. Next adjust your attack and release settings. As a rule, you’re going to want a very fast attack and a fairly slow release so that the compression really has time to batter down those transients. Finally you’ll want to add the gain back; set this either just under the gain reduction amount (so add 8.5dB when 9dB is being reduced), or use the gain to make up to the peak amplitude the signal was at when the compressor was bypassed. You will now have an extremely compressed signal - in fact you’ve essentially just limited it, but more on that later.
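For the gain maths behind those settings, here’s a hedged little sketch, assuming a simple static dB-domain compressor curve (real compressors add attack and release behaviour on top of this):

```python
# A sketch of the compressor gain maths, assuming a simple static
# (dB-domain) curve; real compressors smooth this with attack/release.
def compressed_level_db(input_db, threshold_db, ratio):
    # Below the threshold the signal passes untouched; above it,
    # the overshoot is divided by the ratio.
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# With a 30:1 ratio, a transient 9dB over the threshold is pulled
# down to only 0.3dB over it - about 8.7dB of gain reduction, which
# is why makeup gain 'just under' 9dB roughly restores the level.
peak = compressed_level_db(-11.0, -20.0, 30.0)
reduction = -11.0 - peak
print(peak, reduction)
```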
Finally we’ll mix this signal with our uncompressed (or minimally compressed) original one until the peak of the combined signals meets -6dB. The idea is to fill in the gaps between the transients to make the audio sound fuller as a whole, so we’ll start by turning the original signal down a little (whilst our compressed signal is muted) to allow for some ‘headroom’ - turn the fader on the original signal down until it’s around -8dB. Next bring the heavily compressed signal up from inaudible to a level where the overall peak of both outputs reaches -6dB again. The two signal levels can then be altered to taste depending on whether you want the final result to be more ‘fat’ or more ‘punchy’.
Another way of achieving parallel compression in Logic is using the mix setting on the standard compressor, this can be found by clicking on the little triangle in the bottom left corner of the compressor.
Using this will yield different results to the previous method. The idea is basically the same as in the previous section, but instead of using dual signals you use the mix parameter on the compressor to blend between two signals (a dry uncompressed one & one with compression) from inside the box. I find the results from this method are often more subtle than the previous one, so I use it on things that just need a bit more ‘thickness’ (often ‘airy’ synth leads, pianos and sometimes vocals).
As you can see, there are a few other interesting parameters on the drop-down menu of the compressor, but I will maybe cover them in another article.
Sidechain compression is the effect produced when a compressor is used to reduce one sound whenever another is played.
Sidechain Compression For Mix Purposes
When using sidechain compression for mix purposes the aim is to ‘duck’ a sound whenever another plays, to make room for it in the mix. To be honest it’s very rare that I use this technique for this purpose, as I find that if you’re mixing your elements properly and the track is properly composed then there shouldn’t be a need to duck one thing when another plays, but I understand that many producers/mix engineers do see the benefit and in theory it is a great technique to know.
A frequently used example of where sidechain compression may be necessary is when you have a sub-bass heavy kick drum and a sub-bass sound playing at the same time in your track, so I will use this as an example to talk through how to achieve the effect in Logic. Obviously the way this is achieved will vary between DAWs, but hopefully the theory of this Logic tutorial will easily translate to yours. If all else fails, there are plug-ins out there made specifically for sidechain purposes, and if you can’t work out how to sidechain in your particular DAW, YouTube will likely have a video explaining how to achieve it.
The first thing you’ll want to do is create a bus in Logic and turn its output off so that no sound comes from the bus. The reason for this is that the bus will just be acting as a ‘thru’ for your audio signal, so you don’t want to hear it. Next go to the sound you want to be controlling the sidechain, which we’ll call the trigger. In this instance we want the bass to duck whenever the kick is audible, so the kick is our trigger. On the kick channel, set up a pre-fader send at 100% to the bus we just set up. We want this send pre-fader because when we mix the kick down we still want it acting at full level to control the sidechain; a quieter signal would affect our sidechain compression less because there would be less audio to compress.
What we need to do next is go to our sub-bass channel and add a Logic compressor onto the signal chain. Depending on the desired sound, choose whether you want the peak or RMS of the bass ducked when the kick plays (in this case RMS will likely be your boy as you don’t want the ducking to sound too blatant, you just want it to prevent levels getting too high when two sounds play or tidy up a little clashing in the frequencies). Next we need to go up to the top right corner of the compressor and change the ‘Side Chain’ parameter to the BUS you previously set up and sent the kick drum to. You should now see that the compression is working as your kick drum is playing.
You’ll now want to go through the other parameters on the compressor until you get the amount of ducking you require. It’s best to solo just your kick and bass sounds to begin with so you can hear how they’re working with each other, then readjust if necessary in the whole mix. Start with the ratio; a good starting point I find is around 8:1 to get a nice amount of ducking, but not so much that it makes the ducked sound almost inaudible. Now adjust your threshold to duck the sound by about 8-16dB. Add some gain back in (2dB-ish) so that a little boost is applied to the sound when it’s not being ducked, to bring it out a tad to match your kick sound. Now adjust your attack and release parameters. You won’t want your attack too slow as you’ll want the sound to duck quickly as the trigger plays; ‘7 o’clock’ might be a good starting point. The release is totally down to personal preference as this determines when the level of the bassline starts coming back up after the initial transient from the trigger. I find a good starting point is around ‘11 o’clock’.
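If it helps to see the ducking behaviour in numbers, here’s a toy sketch — one made-up gain value per step and very crude attack/release smoothing, nothing like a real compressor’s time constants, just the shape of the movement:

```python
# Toy sidechain ducking sketch. 'threshold', 'floor', 'attack' and
# 'release' are illustrative values, not any real plug-in's units.
def duck(bass, trigger, threshold=0.5, floor=0.25, attack=0.9, release=0.3):
    gain, out = 1.0, []
    for b, t in zip(bass, trigger):
        # When the trigger is over the threshold, aim the gain at the
        # ducked 'floor'; otherwise aim it back at unity.
        target = floor if abs(t) > threshold else 1.0
        # Move quickly on the way down (attack), slower on the way
        # back up (release) - the 'sucky' pumping comes from this.
        coeff = attack if target < gain else release
        gain += (target - gain) * coeff
        out.append(b * gain)
    return out

bass = [0.8] * 8                                  # steady sub-bass
kick = [1.0, 0.9, 0.0, 0.0, 1.0, 0.8, 0.0, 0.0]   # the trigger
ducked = duck(bass, kick)
# The bass drops hard while the kick sounds, then climbs back up
# between hits instead of snapping straight back to full level.
```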
You should now have successfully sidechained your bass to make room for your kick drum.
Sidechain Compression As An Effect
Essentially this is the same process as sidechaining for mixing; the main difference is that you may want to sidechain the peak level as opposed to the RMS for a more sudden sound. I believe the main difference between sidechain as an effect and for mix purposes is the ‘sucky’ sound achieved when the sidechain is used blatantly. Producing this effect is largely down to the attack and release of the compressor sidechaining your sound. The attack should be fast, but not instant, so that it sounds like the trigger is slowly pulling the sound down. A good starting point is around ‘9 o’clock’, then make it faster or slower to taste, depending on the swing and groove of your track. The release is the same idea as the attack but in reverse: it determines how quickly your audio springs back up to its original level. Again, judge this on the swing and groove of your track and what sounds good; I tend to start at ‘11 o’clock’ and adjust from there. You’ll want your ratio nice and high, at least 11:1, to really get those parts with the trigger pulling right down, and you’ll also want the threshold down a lot so that there is a gain reduction of around 16dB or more. Finally bump your gain up to a place in the mix where it’s pulling through nicely on the parts that aren’t compressed. You may need to apply some normal compression afterwards to make it sit more nicely in the mix and stop the uncompressed parts jumping right out and spiking your overall peak level.
The final thing I’ll mention with regards to sidechain as an effect is that you might not always want an audible trigger in place when you want something else to ‘pump’. There’s more than one way of achieving this, such as using LFOs, but sticking with sidechain compression I’ll give you a brief run-through of how it’s done. First set up a new audio channel with a fast, high-transient sound such as a kick drum or hi-hat placed wherever you want your sound to be ducking; it doesn’t matter what it is - it isn’t going to be audible. Turn the level fader all the way down and then send the channel to a bus set up the same way as in the previous examples. You can now go about sidechaining the same way as before - it’s as simple as that. It works because you’ve turned the trigger all the way down so the sound is inaudible, but you’re sending it to a bus 100% pre-fader so the lowered fader does not affect the send. Your bus has no output so the sound isn’t audible there either, but the level coming into the bus will still drive any compression you sidechain to it!
Limiting is still compression, but it’s the most aggressive form of compression there is. The ratio is at its highest point and the attack is as fast as possible, so once the designated threshold is reached the signal is reduced so hard that it won’t exceed the threshold whatsoever.
The primary difference you’ll notice between a compressor and a limiter is that there is no ratio parameter on a limiter; the ratio is always fixed at its maximum (effectively infinite). The purpose of the limiter is to completely flatten a signal when it reaches a threshold so that it never exceeds that threshold, whereas on a compressor you have more control over how much the signal is reduced when the threshold is met.
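That difference boils down to one line of maths. Here’s a sketch assuming dB levels (an illustration, not any particular limiter’s implementation):

```python
# Compressor vs limiter in dB terms: a compressor divides the
# overshoot by the ratio, a limiter removes the overshoot entirely.
def compress_db(level_db, threshold_db, ratio):
    return min(level_db, threshold_db + (level_db - threshold_db) / ratio)

def limit_db(level_db, threshold_db):
    # Equivalent to compression with an infinite ratio.
    return min(level_db, threshold_db)

print(compress_db(-2.0, -10.0, 4.0))  # -8.0: still 2dB over the threshold
print(limit_db(-2.0, -10.0))          # -10.0: never exceeds the threshold
```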
Limiting is handy when you want something to not exceed a certain peak level and want to bring the entire level of the audio up to meet that peak amplitude. For this reason it is often used in modern (and especially digital) mastering. Limiting is the primary cause of what is now known as ‘the loudness war’, a term used to describe the increasing RMS levels of records and the decreasing dynamic range.
When limiting a single sound, the first step is to make sure it’s already as loud as it can possibly be without distorting at the source (i.e. turned up in the synth or on the sampler); my reason for this is that we’ll be adding gain from the limiter, which may bring subtle distortion into the signal even before it reaches the threshold point. Once this is done I will set a threshold of -0.3dB (this is a habit from mastering, as in mastering you should never allow your signal to reach the full 0dB because it can introduce artefacts into the signal after being burnt to CD). If the signal is currently below the maximum peak level I will bring it up to its highest peak level before redlining. On the Logic Adlimiter there is a parameter called ‘input scale’ which I use for this task; on other limiters you may need to turn up the gain on something in the signal chain before the limiter. Once the signal is at the highest peak point it can reach before redlining, I will begin to introduce gain into the signal. In doing this I will be greatly compressing the peak transients of the sound whilst bringing up the troughs to meet the same level. The idea here is to bring the quieter parts of the sound up closer to the loudest, whilst not turning up the higher points because they’ve been stopped at the desired threshold.
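As a rough numerical sketch of that ‘raise the gain into a fixed ceiling’ idea - assuming linear amplitudes and a simple hard clamp at the ceiling (real limiters do this far more gracefully):

```python
# Toy brickwall limiting: push the gain up, clamp at a fixed ceiling.
# The ceiling value is illustrative - roughly the -0.3dBFS threshold
# habit mentioned above, expressed as a linear amplitude.
CEILING = 0.97

def limit(signal, gain):
    # Apply makeup gain, then hard-clamp anything that exceeds the
    # ceiling (a real limiter shapes this with attack/release).
    return [max(-CEILING, min(CEILING, s * gain)) for s in signal]

quiet = [0.9, 0.2, 0.1, 0.8, 0.15]
louder = limit(quiet, 2.0)
# The peaks are flattened at the ceiling while the troughs come up,
# so the whole passage sits closer to one consistent level.
```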
I now have a sound which is pretty close to the same consistent amplitude throughout! I can now mix down the sound in relation to my other elements of the track knowing that it’s never going to exceed a certain level.
Because a lot of my music is particularly dependent on dynamics, I take great care not to limit tracks too hot when using limiters for mastering. To help with this I use two particular pieces of software to graphically analyse my signal. I don’t usually like to cheat and use graphic analysis, because it’s not really very accurate (you can’t explain what you hear with your eyes), but these are mathematical figures I wouldn’t be able to know from listening, especially in the environment I produce & mix in - which is not at all designed or transparent/flat enough for mastering. The problem with limiting is that the whole time you’re bringing up the gain after the peaks have met the threshold, you are distorting the signal more and more as it flattens the peaks. This is often referred to as a signal being ‘too hot’, so I use these analysers to help me determine whether I’ve under- or over-limited.
The first is s(M)exoscope by SmartElectronix, a live waveform display plug-in, which I use to check that when I’m turning the gain up on my limiter, only the exceptionally loud parts of the audio passage are being squashed. I try to limit only to the point where squashing is rare - only when a peak really sticks out compared to the rest of the track.
You can see in this image above that only the really loud peaks are being ‘squared off’ when they meet the threshold I’ve set on my limiter.
The next is Logic’s own MultiMeter which I use when mastering to check that my overall general RMS is meeting around -10dB, which is the industry standard RMS for a recording.
So after setting my limiter threshold to -0.3dB, I watch these two analysers whilst turning up the gain on my limiter to reach what feels and sounds like the best amplitude. The limiter is always the last processor in my signal chain, followed by my analysers. Obviously there are other analysers on the MultiMeter, pretty much all of which I use when mastering, but the one mentioned is the only one I use specifically for limiting. I may do a complete digital mastering basics article at some point. :)
You may have noticed in the above image that my signal is actually redlining! This is not something I’d ever include in my mix (unless I was making a Lo-Fi track which required distortion on the signal). What’s happened here is I have placed my MultiMeter before my Adlimiter in the signal chain for the use of this example image. In a real mixdown that redline wouldn’t be there regardless of where the MultiMeter was in the signal chain as I’d have already pre-mixed the sound so that it wasn’t exceeding 0dB.
So this concludes the articles that I’m covering on compression. If there are any questions at all, subjects you’d like me to cover next or advice how to better explain things please visit my homepage & click the ‘ask me anything’ button and ask away! :)
Although phase is often seen as an evil thing when mixing audio, it does have its advantages when you’re aware you’re using it. The important thing is to learn to recognise what phase sounds like and the effect it has on your audio so that you know whether you want it there or not.
Signals - Stereo Vs Mono
With regards to phase, the main warning is that if something is too far out of phase it’s not going to be properly audible when listened to in mono.
Some people might question why that matters in an age when the majority of home systems and headphones are stereo. The answer is that there are still two common ways people listen to music in mono: firstly through mobile phones and mono laptop speakers (obviously these people are evil, but they do exist, and if they’re listening to your music out loud through a phone or laptop, you want the mix to sound good!), and secondly on mono sound systems in clubs and music venues.
When creating music in a digital audio workstation you have one channel to which all of your other channels are routed; this is called the master channel (also known as Output 1-2 in Logic - the reason being that you can work in surround sound and still route to a master channel of more than two channels in total).
The master channel is (generally) a stereo channel made up of two separate signals panned hard left and right. When you listen to this stereo signal in mono you have to consider that the two sides get summed together: anything that’s 100% centred appears equally in both channels, so it survives at full level, whereas anything that’s panned is effectively turned down in the mono sum - and anything that’s out of phase between the two channels can cancel out and disappear altogether.
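A quick numerical sketch of that mono fold-down, assuming a simple (L + R) / 2 sum (real mixers apply a pan law, but the picture is the same):

```python
# Toy mono fold-down: average the left and right channels together.
def mono(left, right):
    return [(l + r) / 2 for l, r in zip(left, right)]

centred   = mono([0.5, -0.5], [0.5, -0.5])   # same wave in both channels
hard_left = mono([0.5, -0.5], [0.0, 0.0])    # panned 100% left
inverted  = mono([0.5, -0.5], [-0.5, 0.5])   # right channel out of phase

# Centred material survives at full level, hard-panned material is
# halved in the sum, and fully out-of-phase material vanishes entirely.
print(centred, hard_left, inverted)
```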
For this reason I make sure the key parts of my drum sounds i.e. kick and snare are 100% centred, as well as my sub bass. As a rule, anything below 500/600Hz is generally 100% centred and I am a bit less strict with my high mids/highs (above 4000Hz essentially) and let them have a bit more stereo space as they’re the ‘pretty’ parts of my tracks which will be more applicable to home listening.
Many producers and mix engineers recommend that ALL of your sounds are mono before effects, as they would be when recorded in the analogue world, and then use subtle effects to add stereo space afterwards. This isn’t always ideal now, as so many plug-in instruments have stereo sounds, but there are ways you can make sure your sounds always come through prominently, regardless of whether they’re playing on a stereo or mono system, as I will go on to cover.
Phase is the result of two identical or similar sound waves overlapping one another at slightly different time intervals. The audible effect is that some parts of the signal sound louder and others quieter, which, when it happens quickly, causes a slight ‘flutter’ between the left and right channels.
When a sound is in phase, two copies of the exact same wave overlap and the combined signal doubles in amplitude, sounding considerably louder.
When a sound is completely out of phase you will hear absolutely nothing, as the two signals cancel each other out.
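Here are those two extremes in numbers, assuming two copies of the same wave summed directly (toy values, obviously):

```python
# A short wave, as plain sample values.
wave = [0.0, 0.3, 0.5, 0.3, 0.0, -0.3, -0.5, -0.3]

in_phase  = [a + b for a, b in zip(wave, wave)]      # exact doubling
flipped   = [-s for s in wave]                       # phase invert/reverse
cancelled = [a + b for a, b in zip(wave, flipped)]   # total silence
```

The `flipped` line is also all a phase invert button does: multiply one signal by -1 so that, summed with the other, it either cancels (if they were in phase) or lines back up (if they were inverted relative to each other).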
Phase Invert / Reverse Phase
Unless you’re trying to cancel out noise or something in a sound you don’t want to hear, phase can be very annoying, so this is where phase inverting or reversing comes into play. The purpose of phase reverse/invert is to flip one of your signals’ waves so that it becomes in phase with the other wave.
There are very powerful plugins out there which will put only certain parts of the wave back into phase so that when you have a signal which is slightly out of phase as opposed to completely, you can effectively make an out of phase stereo signal in-phase so that it will be audible on a mono signal, or just if you want it more prominent in the mix altogether.
Phase As An Effect
Chorus, flanger & (obviously) phaser are all effects caused by phasing. They differ in how quickly and how far the two signals drift out of phase, creating dips and rises in each signal which can give sounds a wider feel across the stereo field.
More info on these effects in later tutorials.
My body is just a host between my soul and keyboard… There is no thinking involved, the soul knows how to express itself.
Would it be wise to use a tape saturation plug-in on the whole mix of a track to add warmth if I did it in moderation? Do you know which plug-ins I should use?
Right, so the most important first question is - are you mastering this track yourself or are you sending it to a mastering engineer? If the latter, nothing should go on the master channel at all!
If you’re mastering the track yourself, then there’s no reason why it shouldn’t be beneficial for certain sounds… Where you put it in the chain will greatly determine how the sound will be. For example, if you put it before an EQ then you can slightly adjust the EQ to complement the warmth so that it doesn’t become muddy, but if you’re looking for a more vintage sound then you might like to put it last in the chain (personally I’d always have the limiter last, so just before that…)
If you get confused about where to put it in your chain, imagine everything in hardware form… If you had a mix you were mastering in the days of tape, it would have already come from the producer on tape, so you could put it at the start… If the mastering engineer was working creatively with the producer, they might decide to re-record to tape for a vintage or dirty feel halfway through the mastering process - that’s another idea… Or in the digital days, a mastering engineer and producer might decide to put the final master through tape; again, for a dirty or vintage feel you might want the tape plug at the end of the chain.
As for plug-ins, are you using Logic? If so there is a way you can use the tape delay plug to actually cause it to become tape saturation instead of delay. If not then I don’t actually have any myself but I remember I was going to buy this one before:
Also, PSP are renowned for warm/vintage sounding plugs with simple interfaces:
A friend recently contacted me asking my opinion on a music group she’d come into contact with who were going to be appearing on her radio show. [I won’t name drop because I haven’t asked permission from either party for this blog!] She felt something may have been missing from their music and wanted to know what I thought. I listened to a rehearsal of one of their tracks, to be honest actually expecting to be disappointed. It turns out it was quite the opposite - I was impressed! The vocalist was very good: upfront, in tune and radiating confidence. The MCs in the group were decent too, reminiscent of my school days listening to UK Hip Hop (the likes of Skinnyman & Doc Brown). Even the beat was not shit… They’d gone for a sort of hybrid between UK Hip Hop & Dubstep.
Underground vs commercial. This argument often needs putting into perspective and I thought I should add some of my opinions on it.
She was right, something was missing and I believe it was something I notice a lot about collaborative underground music which is vocal-driven.
The group obviously had the intention of appealing to a mass audience. I’m basing this judgement on the fact that I can imagine the two MCs either producing, or asking their producer to make, something like UK Hip Hop from the 2003-07 era but with a modern twist that people listening to urban music now would appreciate, i.e. Dubstep. They would then have approached their very talented vocalist and explained the situation, and she would have been very excited by the concept of this new hybrid idea.
Here’s where some of my theories on commercial manufactured music & the conflicting underground music scene come into play.
There are two ways of approaching music as an artist. You can either manufacture it to appeal to the masses by focusing closely on current trends and musical fashions and adapting your art to that, or (as is the case with the majority of the music I love) you can make it to appeal to niche audiences - or sometimes nobody at all - and work your damned arse off trying to build a big enough scene around this music for it to break into the mainstream (this is how all EDM music in the UK has come to be popular).
Notice that I’m not slating either of these choices, good music can come out of both methods - problems only start arising when there is no soul in the music and it’s only made for the sake of business.
Back to the point at hand. Commercial music has to have a particularly clean sound following the musical trends of the current market; underground is more raw and experimental, taboo or unpopular. Both can contain emotion and soul, but the clear difference is whether or not there’s an intention to reach the mass market of pop trends. Mix the two together and you either sound like a commercially directed artist who comes across slightly amateur, or an underground artist trying to sell out.
Either be underground and learn how to make your own business from your niche market, or be commercial and do less ‘hands-on’ work whilst focusing more on being a star whilst the professionals handle your business for you.