Microphones are lenses. They reveal a unique perspective on a sound like a lens reveals a unique view of an area. The analogy is not only figurative but practically pertinent: thinking of a lens when placing a microphone will often lead you to better placement – a position from which you will hear what the mic “sees”.
A microphone is the first step into a sound system, the second step of sound production (the first being the sound made by the artist). It is the gateway to either success or failure. It is a foundation-laying act that will determine how high this audio building will dare to rise. I love microphones because they are “old school” – no processors, no DAWs, just you. Simply your ears, your “lens” (the microphone), a microphone stand and the white canvas of sound possibilities.
Over the years I have pursued finding the right microphone for the right source and arrived at an amazingly simple conclusion: the perfect mic does not exist.
I know: Shure, Beyerdynamic, AKG, Neumann, Audix, Telefunken, DPA, Sennheiser, Rode, CAT and ADK all want you to believe that they have the perfect mic for you. The truth is they don’t.
Finding the right microphone for a sound is a process. In fact, I think using the word “matching” instead of “choosing” would be more appropriate, placing greater emphasis on what it is you are really trying to do. Matching implies that your selection has one objective: to complement the sound you are capturing, a bit like a frame for a painting or a sauce with a meal. It needs to reveal the source in a way that complements it, bringing out its most unique and valuable sonic characters, while fitting within the context of the song/mix.
The pursuit of the perfect microphone is therefore pure “utopia”. What is real instead is that some microphones will match some sources sometimes. The question then is: how do we match a microphone to a source? Well, there are a number of criteria that can help us get as close as possible to getting that first crucial step into sound shaping right, keeping us on the path to a magical audio delivery.
However, first we need to settle once and for all that the perfect microphone does not exist. Only then will we find the real microphones and use them in the right way, no longer lured by the often exaggerated claims of gear advertising. Finding the right microphone is an exciting process that always passes through the most important and basic of sound engineering principles: listening!
Listening will deliver you from the frenzied tyranny of technology into the promised land of wonder.
Choosing a microphone is a fascinating process. Every process has a first step, and in our case, the first step is simple: context.
What is the context in which you are going to microphone a source (small club, huge open air stage, big band, close or far from the source, etc.)?
Why is this the first step? Because vocals (whether sung with an IEM or normal monitors, recorded for a voice-over track or a live take in a movie, delivered by a pianist or a guitar player who sings, or captured at a live recording in a small smoky New York club) all have very different production needs.
Is tone not the most important thing when looking for a microphone? Yes and no.
Yes. Ultimately you want the microphone to complement the source and deliver an amazing sound.
No. A microphone that you can hardly use within the context of your application is no good to you or the artist you are trying to mix/record for, even if it delivers the ultimate tone you are after.
If tone is the only or first thing we look at, we might end up considering an option that is not viable within our usage parameters. Essentially, we are narrowing down the options, and to do so we should keep the following factors in mind:
The answers to these 5 questions (and a few more, I just don’t want to overdo it) will narrow down your options before you even need to engage your hearing. At this point of the choosing process, all you need to know is:
Do you need an omnidirectional microphone with very low self-noise? Or do you need a hyper-cardioid microphone that can handle in excess of 120 dB SPL? Would the detail and transient accuracy of a condenser microphone be essential?
To take the first step in the microphone choosing process, all you really need is to develop an understanding of the specifications of a microphone (those funny numbers and names written on the spec sheet that comes with the microphone – readily available online). You don’t need to understand it all, but you need to understand enough to choose a microphone that on paper will tick all the boxes of your production needs.
Learning how to understand the specifications of a microphone (and their implications) is crucial to choosing and using a microphone correctly. For more, you can have a look here. That will narrow down the options substantially. Then the listening can commence.
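To make that paper-stage narrowing concrete, here is a minimal sketch of the idea in Python. The mic entries and their spec values below are made-up placeholders (not real datasheet figures), and the shortlist function is just one way you might filter by polar pattern, maximum SPL handling, self-noise and transducer type before any listening happens.

    # A rough sketch of the "narrowing down on paper" step: filter a mic list
    # by the specs your context demands, before any listening happens.
    # The entries below are made-up placeholders, not real datasheet values.
    mics = [
        {"name": "Mic A", "pattern": "omni",          "max_spl_db": 134, "self_noise_dba": 7,  "type": "condenser"},
        {"name": "Mic B", "pattern": "hypercardioid", "max_spl_db": 150, "self_noise_dba": 16, "type": "dynamic"},
        {"name": "Mic C", "pattern": "cardioid",      "max_spl_db": 125, "self_noise_dba": 20, "type": "condenser"},
    ]

    def shortlist(mics, pattern=None, min_max_spl=None, max_self_noise=None, mic_type=None):
        """Keep only the mics whose published specs tick the boxes for this job."""
        keep = []
        for m in mics:
            if pattern and m["pattern"] != pattern:
                continue
            if min_max_spl and m["max_spl_db"] < min_max_spl:
                continue
            if max_self_noise and m["self_noise_dba"] > max_self_noise:
                continue
            if mic_type and m["type"] != mic_type:
                continue
            keep.append(m["name"])
        return keep

    # e.g. a loud source on a live stage: hypercardioid, at least 120 dB SPL
    print(shortlist(mics, pattern="hypercardioid", min_max_spl=120))

The point is not the code itself, but the habit: decide the non-negotiable specs first, and only audition the microphones that survive the filter.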
Once we have narrowed the options down to a few, listening needs to guide you further.
I learned a very fascinating way of matching sources to microphones from a great engineer called Michael Stavrou.
It is a rather simple concept, based on establishing the hardness level a sound possesses. There is no hardness meter out there, hence your ears will have to suffice.
If a hardness scale existed, we would have on the one side of the scale “hard sounds” and on the opposite side “sweet sounds” (it is obvious that I am not referring to loudness, but to tone and perception of a sound).
If we were to use a 10-step scale, we would have the following: 1 = sweet, 10 = hard, and 2 to 9 the in-between. Thus we can categorise sounds accordingly.
Let us use brass as an example: a trumpet would generally be a hard sound (H7), while a trombone generally has a sweet sound (H4).
Using the same “scale”, we can start to “measure” microphones for their hardness level. You can simply use your voice as a source and check how the different microphones change your voice’s intrinsic sweetness/hardness level (just use the same phrase each time, e.g. counting from 1 to 10).
Using the “hardness” scale, we will end up with microphones that range from H1 to H10.
Now that we have measured both sources and microphones, we can simply match them by using the universal “opposites attract” principle, compensating for the hardness of a source with the sweetness of a microphone and the other way around (a H3 sound will go with an H6-7 microphone and so on). This essentially prevents us from ending up with an extremely hard sound, with a very aggressive tone or an extremely sweet sound where the tone cannot stand up in a mix.
You will end up with a hardish microphone like the Sennheiser MD 421 (H7) on a sweet sound like a trombone (H4), or a Shure SM57 (H8) on a “sweet” Vox valve amp (H3).
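If you enjoy formalising things, the “opposites attract” rule can be sketched in a few lines of Python. The ratings for the MD 421 and SM57 are the ones used above; the other two mics and their numbers are purely illustrative, and the target sum of 11 is just one simple reading of “compensate hardness with sweetness”.

    # A playful sketch of the "opposites attract" matching idea.
    # Hardness ratings run from 1 (sweet) to 10 (hard) and are subjective.
    sources = {"trumpet": 7, "trombone": 4, "vox_valve_amp": 3}
    mics = {"MD 421": 7, "SM57": 8, "ribbon_x": 3, "condenser_y": 5}  # last two are hypothetical

    def best_match(source_hardness, mics, target_sum=11):
        """Pick the mic whose hardness best complements the source,
        so that source + mic lands near the middle of the combined range."""
        return min(mics, key=lambda name: abs(source_hardness + mics[name] - target_sum))

    for source, hardness in sources.items():
        print(source, "->", best_match(hardness, mics))

Run it and the trombone lands on the MD 421, the Vox amp on the SM57 (the pairings described above), and the hard trumpet on the sweet, hypothetical ribbon. The real work, of course, is in assigning the hardness numbers by ear.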
Let us recap:
Listening is the most important aspect of sound engineering. It is the beginning and the end of all sound-related things, for all things we do should favour listening.
Something that keeps surprising me is that engineers often do not know what to listen for (maybe I will write something about that in the future).
Q: Why is this relevant to microphones?
A: Because the proof is in the pudding!
Once you finally sit down and start to listen (after you have selected a mic correctly and placed it well), what you hear (the pudding) should prove that your efforts have been well rewarded.
This is not as straightforward as it sounds, as in most cases what you will hear is unprocessed and not within the context of the mix in which it needs to fit. The challenge is to assess whether a sound will work simply by judging it at “face” value: before you do your magic and place it in the song.
At this point of the process mistakes can be made. One of the most common listening mistakes is thinking that something sounds great just because it is “bright” or “warm” or “warm & bright”.
Even though warmth and brightness are great, the question is how the microphone/placement combo achieves them:
A tonic tone (music to my ears) is a relatively neutral tone, or a tone that will allow the engineer to do a lot with it once it is processed, both to his (or better, his client’s) liking and for the mix.
Next time you listen to a microphone, remember that the sound you are hearing (most probably) is only one of several sounds that need to fit together. The more difficult it is to handle sonically, the more difficult (i.e. time consuming) it will be to make it fit into the context of your production.
Look for (or listen for) a microphone that offers a great “balanced” sound, one that preserves the natural tone of a source and delivers clarity. These 2 qualities together are always a precious prize. Clarity does not rest only on the presence of high-end frequencies in the tone; it is achieved by several factors. The more you achieve it without EQ, the better, because then that sound will naturally stand out and fit in.
The famous song titled “A Piece of Sky” sums up perfectly what it is to place a microphone properly: it is finding the angle, the position, that will reveal the glorious piece of sky you wanted and needed to see.
“The piece of sky” being a unique “take” or view that truly translates and enhances that “vibration” (that sound) so that it hits the listener precisely in that right spot. It is like looking at a view or a subject through a lens. Think of microphones as lenses, what the microphone sees is almost undoubtedly what it hears and picks up.
The question is: how do you place it so that the microphone does not “only” see the right “piece of sky”? How do you place it so that it beautifully translates that piece of “air” into a piece of “voltage”, a variation of electric current that most accurately captures the unique tones of that specific “vibration”?
To move from the philosophical to the practical, there are 3 microphone movements (in relation to the source you want to microphone).
THE BODY MOVEMENT
Every sound has a “body” length (I am not referring to wavelength). Every sound has a distance or proximity at which its tone is most complete. Take a snare drum, for example: most people think that the best microphone position for its sound is very close to its top skin. Yet the sound and tone of a snare drum come from its shell and the snares mounted on its bottom skin. The top head is simply the “trigger”.
When placing a microphone you need to start by figuring out how long its “body” is, or better, how much of its “body” you want in this piece of sky you want to shoot.
The closer you get to the source, the smaller (or more partial) the body is. Up to a point, the further you go from the source, the more complete its body becomes; beyond that point, the body starts to become smaller and more distant (softer).
Think of it as zooming with a lens: you can see the whole person, head to toe, just their head, or place the person on the horizon. The question is: what do you want to see (hear)? If you are too close you will only see/hear a detail of the sound; if you are too far you will see the whole picture, but also many other things.
It boils down to knowing what you are looking for. However, there are 2 sonic thresholds that are unhelpful to cross: get too close to something and its natural beauty is “distorted”; go too far and its prominence within the sonic stage (made up of all the sounds produced at the same time in proximity to the microphone) is gone.
Next time you place a microphone, consciously think of the body of the sound and what you want of it. Too many engineers place microphones with no understanding or insight, like a photographer shooting a tree while truly desiring a piece of sky!
THE FOCUS MOVEMENT
This is not true for all instruments, but it is true for most (especially voices, wind instruments and most string instruments). Acoustic instruments are not nicely packaged, symmetrical sound sources. They are “living” things, and as such their sound production is often not symmetrical; what is true on the left side (of the source) is not so true on the right side.
A voice is a very good example of this. Put your headphones on, listen to somebody speaking or singing closely (via a microphone), then imagine a central line (i.e. the nose) and listen to the same person while moving the microphone to the right and to the left of it. You will probably notice a difference between the 2 sides. In some people this is very slight, in others very significant. In some instruments it is truly remarkable.
I call this the focus movement: think of it like rotating the focus ring of a lens until the subject is in focus. So it is when you discover the right side of the “moon”: you will notice that something about the sound “gets into gear”, or better, it suddenly becomes more focused, more “together”.
THE TONE MOVEMENT
Lastly, turning the microphone at different angles to the source will award you a palette of different colours/tones. Because the frequency response of a microphone changes with angle, changing the angle gives you the opportunity to smooth out the tone. For instance, say you have placed a microphone on an electric guitar amp, and you did it well. You are happy with most of what you hear, except that it is a bit too bright. Because the polar pattern of a microphone is narrower at high frequencies, rotating the microphone away from the central line will make the tone less bright, leaving relatively more LF present.
Rotating a microphone really aids your tone control!
NB: For vocals there is a further movement possible (if you are recording): up and down in relation to the mouth. Down towards the chest, up towards the forehead. The sound will change remarkably in tone.
I made a truly amazing personal discovery once I realised that our ear-brain combo is the most sophisticated piece of audio equipment on the market (make sure you look after it and value it).
Vibrations travelling through the air end up stimulating one or more of the 25,000 “trigger” cells we have in our ears, more precisely in an inch-long chamber known as the organ of Corti. Each of these triggers is connected to the brain, where the received stimuli are further analysed. If you could see a real-time brain scan, you would see it lighting up in different parts of the brain at different times as stimuli are processed and interpreted.
That Christmas tree lighting event hopping around your grey matter ends up processing things like tone, time, phase, levels and a few other variables, which in turn give you the sensation of sound, making you experience the pleasure of music, or rather the pleasure of its anticipation.
Furthermore, the combination of all these variables analysed at the same time gives you some truly important information, like the location of the source in terms of distance and relative angle, which translates into depth and staging of a sound or sounds within a sound field. The entire wonder (our perception of sound staging) is based on our brain’s capacity for comparing data between the left and the right ear. If you close one ear, you essentially lose a big chunk of that ability.
What does this have to do with microphones?
Every time we place a single microphone in front of a source, in essence, we close an ear. We lose (to a great extent) the intrinsic depth of the sound within its sound field; we also lose part of its tone, as different parts of the instrument contribute differently to its acoustic tone. Unfortunately, by using just one microphone quite close, we end up focusing only on a portion of its natural tone.
Using two microphones with specific angles and distances between them creates a pick-up similar to what our two ears do naturally (because the two microphones end up capturing the phase and time differences of the same sound, which can then be conveyed to our ears).
It goes without saying that this is not always (or even often) practical or logistically viable (or even justifiable, because in most cases live events run mono anyway). Still, talking about microphone positioning without mentioning stereo techniques would be like eating pasta without knowing or experiencing the undeniable taste of freshly grated parmesan on it. Can one eat pasta without parmesan? Absolutely (for the record, some pasta dishes do not need or require parmesan); however, never having tasted parmesan would mean you have never truly experienced pasta, or food.
It is worth mentioning that stereo techniques that mono well (i.e. sum cleanly to mono) are also helpful in live and broadcast scenarios. In my book I mention that if you do not know how to take advantage of stereo techniques, you are definitely poorer for it.
To find out more, visit the DPA microphone university where you will find loads of great tips, among which a bunch of stereo microphone techniques explained.
Every time we use multiple microphones, we have a potential phasing problem (because a source will be picked up by more than one microphone at the same time – it is like having your ears in 2 different places at once), which becomes audible when summing a number of microphones to mono. If we were panning 2 microphones (one to the left and the other to the right) instead of summing them to mono, we would not have that problem.
A rule of thumb for minimising phasing issues is to have around a 10 dB difference in level between the microphone “versions” of the same source.
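To get a feel for why the mono sum suffers, here is a small back-of-the-envelope sketch. It assumes the simplest possible model (same source, equal level in both mics, free field): the arrival-time gap between the two mics turns into comb filtering, with the first notch at half the inverse of that gap and further notches at its odd multiples.

    # Rough estimate of the comb filtering you get when the same source reaches
    # two mics at different times and the two channels are summed to mono.
    # Assumes equal levels in both mics and a simple free-field model.
    SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

    def comb_notches(path_difference_m, how_many=4):
        delta_t = path_difference_m / SPEED_OF_SOUND        # arrival-time gap in seconds
        first_notch = 1.0 / (2.0 * delta_t)                 # first cancellation frequency in Hz
        return delta_t * 1000.0, [first_notch * (2 * k + 1) for k in range(how_many)]

    delay_ms, notches = comb_notches(0.34)  # e.g. one mic 34 cm further from the source
    print(f"gap of about {delay_ms:.2f} ms, notches near {[round(f) for f in notches]} Hz")

With a 34 cm path difference the gap is about 1 ms and the first notch sits around 500 Hz, right in the middle of the music; pan the two mics apart instead of summing them and the cancellation largely goes away.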
A typical example of this problem is heard when microphoning a choir, because you have one source that is covered with several microphones simultaneously.
To help achieve the 10 dB difference mentioned above, a common practice is something we call the “3-to-1 rule”. The rule states that any other microphone should be at least three times as far from the sound source as the nearest (intended) microphone.
That said, the presence of a source in 2 different microphones in a live scenario is essentially unavoidable (think of the microphones on a drum kit), which leads to the second problem, which we call “bleed” or spill. This is particularly acute when a member of a band plays much softer than the rest. If everybody plays soft or loud, the bleed will be inconsequential; however, in the above scenario, the amount of bleed of the loud source into the soft one’s microphone will prove hard to handle. As always, the first course of action is with the source, the band: get them all to play at similar intensities and you will solve a lot of issues.
Bleed, within this context, is the presence of a sound in a microphone that was not intended for it: hi-hat in the snare microphone, electric guitar in the double bass microphone, etc.
Phasing issues and bleed problems often overlap, so here are a few approaches that help with both:
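As a quick sanity check of where those two numbers meet (assuming a point source in free field, so the inverse square law applies): tripling the distance drops the level by 20·log10(3), which is just shy of 10 dB.

    # Why 3-to-1 roughly delivers the ~10 dB difference: inverse square law.
    # Assumes a point source in free field; real rooms and real sources will vary.
    import math

    def level_drop_db(near_distance_m, far_distance_m):
        """Level difference of the same source between a near and a far microphone."""
        return 20.0 * math.log10(far_distance_m / near_distance_m)

    print(round(level_drop_db(0.3, 0.9), 1))  # a mic three times further away: ~9.5 dB down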
Use the bleed
This is the most difficult of the 3, as it requires the most experience and the most experimentation time. Instead of trying to get rid of the bleed, use it. If you do so, it will become “ambience” and add to the sound rather than be detrimental to it. Maybe you are recording a brass section with a double bass. You can try to box everybody in, and in the process destroy the magic of musicians playing together, or you could use the bleed. How do you transform bleed into ambience? You move the bass microphone around the room until you find a position where the ambient pick-up of the brass section actually adds to the brass sound. Then you place the double bass there and do the “moves” we looked at in the previous sections.
Get rid of it
This is achieved with a form of dynamic control called “gating”. Gating is super helpful, but it will only work if:
Couple it well
Part of what makes a sound that “bleeds” into another microphone tricky is that the same sound hits the 2 different microphones at 2 different times, creating a phasing/timing problem. At times this can be solved by hitting the polarity reverse switch on the “bleed-in” microphone channel. It does not always work, but it is definitely worth an attempt. If it works, it will make the bleed very helpful. A typical example of this is a drum kit’s overhead microphones (they obviously get a lot of the snare sound in them). Try reversing the overhead microphones’ polarity on the mixer one at a time, and then listen to the one you are working on together with the snare microphone. Listen to the snare, and particularly listen to whether the 2 sound better with the polarity normal or reversed. It might just work.
If you work with a DAW or a digital mixer that allows you to introduce delay on individual channels, then delaying the source of the bleed to align it with the “bleed-in” channel will bring the “bleed source” into phase with the bleed-in channel(s).
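How much delay? As a rough sketch (assuming you know, or can measure, the extra distance the sound travels to the “bleed-in” mic): divide that extra distance by the speed of sound and convert to milliseconds or samples at your session rate.

    # Rough delay needed to line up a close mic with the later arrival of the
    # same sound in a more distant "bleed-in" mic (e.g. snare into overheads).
    SPEED_OF_SOUND = 343.0  # metres per second

    def alignment_delay(extra_distance_m, sample_rate=48000):
        seconds = extra_distance_m / SPEED_OF_SOUND
        return seconds * 1000.0, round(seconds * sample_rate)

    ms, samples = alignment_delay(1.2)  # e.g. overheads roughly 1.2 m above the snare
    print(f"delay the close mic by about {ms:.2f} ms ({samples} samples at 48 kHz)")

The 1.2 m figure is only an example; measure your own rig, and let your ears make the final call.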
I trust this article has been helpful. You can also check out my online courses here.