Podcast Editing Tips: Part 3 – Phase Shifting to Recoup Loss of Headroom

I’ve talked before about using high-pass filtering to roll off undesirable low frequencies in audio. This can be done in software via plugins or in hardware (e.g., on a mixing console or microphone preamp). I suggest reading our primer on high-pass filtering before proceeding with today’s topic: phase shifting.

What happens to audio when processed through a high-pass filter?

So what exactly is phase shifting? First, let’s understand what happens to an audio signal as it travels through a high-pass filter. In the example below, I’m rolling off frequencies below 75 Hz at 12dB per octave. High-pass filters typically let you specify how steeply content below the cutoff frequency is rolled off: the higher the decibel value, the steeper the slope and the greater the roll-off. You can see the slope in the screenshot.
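To make that slope concrete, here’s a minimal sketch of a 12dB-per-octave (2nd-order) high-pass filter in Python. It uses the well-known Audio EQ Cookbook biquad formulas, not any particular plugin’s internal implementation:

```python
import math

def highpass_biquad(samples, sample_rate, cutoff_hz=75.0, q=0.707):
    """2nd-order (12 dB/octave) high-pass biquad, per the Audio EQ Cookbook."""
    w0 = 2 * math.pi * cutoff_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = (1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    out = []
    x1 = x2 = y1 = y2 = 0.0  # previous inputs/outputs (direct form I)
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        out.append(y)
        x2, x1 = x1, x
        y2, y1 = y1, y
    return out
```

Frequencies well above 75 Hz pass through essentially untouched, while content one octave below the cutoff is attenuated by roughly 12dB, and two octaves below by roughly 24dB.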


After applying the filter, you can see that the energy of the waveform is displaced along one axis. The human voice is often said to be asymmetrical in nature, and high-pass filters can further increase that asymmetry. This does not necessarily indicate a problem with the sound quality, but it does pose challenges for podcast audio producers because it decreases headroom.
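There’s no standard meter for this, but a quick way to quantify waveform asymmetry is to compare the positive and negative peak magnitudes. This is a hypothetical helper for illustration, not a feature of any particular tool, and it assumes the signal swings both positive and negative:

```python
import math

def asymmetry_db(samples):
    """Ratio, in dB, of the positive peak to the negative peak magnitude.

    0 dB means the waveform peaks symmetrically; larger absolute values
    mean more energy is displaced toward one side of the axis.
    Assumes the signal crosses zero (swings both positive and negative).
    """
    pos_peak = max(s for s in samples)
    neg_peak = -min(s for s in samples)
    return 20 * math.log10(pos_peak / neg_peak)
```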


What is headroom?

Headroom is essentially the “wiggle room” you have as an engineer when processing audio before the source exceeds zero decibels relative to full scale (dBFS). dBFS is a unit of measurement for amplitude levels in digital systems, which have a defined maximum peak level. In the digital realm, 0dBFS is the absolute peak ceiling; push past it and you get audible clipping (also known as distortion).
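Headroom at the sample-peak level can be computed directly from the audio data. Here’s a small illustrative helper, assuming floating-point samples normalized to the -1.0 to 1.0 range (a true peak meter, which also catches inter-sample peaks, is more involved):

```python
import math

def headroom_dbfs(samples):
    """Headroom in dB: distance between the highest sample peak and 0dBFS.

    Assumes floating-point samples in -1.0..1.0, where 1.0 is full scale.
    Note this is the sample peak, not the true (inter-sample) peak that a
    dedicated meter would report.
    """
    peak = max(abs(s) for s in samples)
    return -20 * math.log10(peak)
```

A source whose loudest sample sits at half of full scale has about 6dB of headroom; a reading of 1.3 here corresponds to a -1.3dB peak like the one measured later in this piece.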

Now that you understand the basic terminology, you can see why displaced energy in a waveform decreases headroom, creating processing challenges as you prepare for loudness normalization. Loudness normalization is an essential step after optimizing your audio (applying compression), as it targets an optimal perceived average loudness. This creates consistency for the listener, so they don’t need to crank the volume in louder environments, and it also aids the intelligibility of spoken word. If you check the maximum true peak of your audio before the phase shift, you’ll notice the loss of headroom. In the example below, the source checks in at -1.3dB. This is not the worst example, but you’ll get a better idea later on as we shift the phase of the source.


To better understand loudness normalization, please read “Loudness Compliance And Podcasts” and “Your Podcast Might Be Too Quiet Or Too Loud“.

Optimizing headroom through phase shifting

To restore headroom in your audio source after high-pass filtering, you can use third-party tools. This demonstration uses Waves’ InPhase plugin; other tools, such as the iZotope RX suite, include an adaptive phase correction feature that is even easier to use.

Step 1.

My Digital Audio Workstation (DAW) of choice is Pro Tools. On a mono audio source, I select the clip and instantiate the LT (non-live) version of Waves InPhase. I have created a default preset to handle phase shift correction, which you can download from here. Note that this preset is meant as a starting point, so values such as input gain and delay may need tweaking depending on the source you’re working with.

Here’s how I have the plugin configured:


As illustrated in the screenshot, you can set the all-pass frequency to 200 Hz and the Q bandwidth to 2.0. “Type” sets the phase shift filter type, toggling between Off, Shelf (using a 1st-order allpass filter), and Bell (using a 2nd-order allpass filter). In this example we use the Bell filter type; allpass filters are used to correct phasing issues between tracks. Here’s a quick rundown of what these settings are doing, as documented in the plugin manual:

An allpass filter is a signal processing filter that affects only the phase, not the amplitude, of the frequency response of a signal. This example uses a 2nd-order filter, which is defined by the frequency at which the phase shift is 180° and by the rate of phase change. The rate of phase change is dictated by the Q factor. Q sets the width of the 2nd-order (Bell) allpass filter: a narrower Q results in a faster phase transition toward the selected frequency, leaving a larger portion of the frequency range intact.
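That 2nd-order allpass is compact enough to sketch. The code below builds the coefficients and a frequency-response helper so the two defining properties can be checked: flat magnitude everywhere, and a 180° phase shift at the chosen frequency. This follows the generic Audio EQ Cookbook formulas, not InPhase’s internal implementation:

```python
import cmath
import math

def allpass_biquad(freq_hz, q, sample_rate):
    """2nd-order allpass coefficients (Audio EQ Cookbook).

    The phase shift reaches 180 degrees at freq_hz; q controls how
    quickly the phase transitions around that frequency.
    """
    w0 = 2 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b = [1 - alpha, -2 * cosw, 1 + alpha]  # numerator coefficients
    a = [1 + alpha, -2 * cosw, 1 - alpha]  # denominator coefficients
    return b, a

def response(b, a, w):
    """Complex frequency response H(e^jw) at normalized frequency w (radians)."""
    z = cmath.exp(-1j * w)
    return (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
```

At the 200 Hz / Q 2.0 settings used above, the magnitude is exactly 1.0 at every frequency (no tonal change), while the phase rotates through -180° at 200 Hz, which is what nudges the waveform’s energy back toward symmetry.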

The InPhase settings came recommended by Paul Figgiani, but they’re meant as a starting point. The significance of the settings is fairly complex, and I will continue to explore it.

Step 2:

Clicking “Render” will process the selected audio and should shift the phase slightly so that the energy of the waveform isn’t lopsided.

Asymmetric Waveforms Processed

Now that the phase has been shifted slightly in the correct direction, we have an optimal source to work with for loudness normalization. When we started, our source checked in at -1.3dB. Analyzing the corrected version below, you can see we’ve restored quite a bit of headroom.


Hot damn, look at that. Our source now peaks at -2.5dB, a full 1.2dB of headroom recouped!

Step 3: With the phase shifted in the correct direction, loudness normalization can now be performed on the source audio. Headroom has been regained, which means that when processing the audio with a true peak limiter, excessive limiting should not occur (a good thing, as heavy limiting can squash the dynamics of the human voice).
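The gain-staging arithmetic behind loudness normalization is simple once an integrated loudness measurement exists. A minimal sketch, assuming the LUFS value comes from an external ITU-R BS.1770-style meter and using -16 LUFS as an illustrative (not prescriptive) podcast target:

```python
def loudness_gain(measured_lufs, target_lufs=-16.0):
    """Linear gain factor that moves measured integrated loudness to target.

    measured_lufs is assumed to come from an ITU-R BS.1770 loudness meter;
    the -16 LUFS default is a common podcast target, not a rule. Gains
    above 1.0 may still require a true peak limiter afterwards, which is
    where the recovered headroom pays off.
    """
    return 10 ** ((target_lufs - measured_lufs) / 20)
```

For example, a source measuring -19 LUFS needs roughly 3dB of gain (a linear factor of about 1.41) to hit -16 LUFS.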


You should now have a good understanding of how most high-pass filters affect the human voice. It should be noted that the source audio in these examples was high-pass filtered in hardware, not software. This matters because filtering in software with a linear phase filter would have avoided the displaced waveform energy discussed here, provided the source had not already been high-pass filtered in hardware. Linear phase filters are a topic for another day, but effectively they ensure the original phase is not altered in any way during processing.

If you have any questions about how the aforementioned workflow is accomplished, don’t hesitate to reach out to us on Twitter and Facebook.


Many thanks to audio engineer Paul Figgiani, whose experience and breadth of knowledge far exceed my own. He continues to be an inspiration and aided in the accuracy of this piece. Make sure to check out his blog: ProduceNewMedia.com.

High-Pass Filtering: Getting Rid of Undesirable Low Frequencies

I care deeply about the level of production that goes into each podcast that I make. My production chain begins with using the best microphones I can afford and ends with carefully edited and mastered audio that is suitable for consumption by the listener. I enumerate my production chain below and how it flows to the final product:

  • Microphones
  • Outboard processing (compressors)
  • Mixer
  • Computer
  • Recording software
  • Editing
  • Mastering
  • Upload to the web

Disturbing frequencies you don’t want

A common problem in podcasts produced by the inexperienced is an abundance of undesirable low-end frequency content. For example: excessive bottom end that’s audibly disturbing when people bump their microphones, slam their arms on a desk, or when a car drives by with its subwoofer blaring. Recording in a noisy environment amplifies these problems. Short of recording in an acoustically treated room away from the noise of the outside world, you can’t completely get rid of all undesired frequencies, but you can minimize the impact or “energy” of the problem.

If you have a dynamic or condenser microphone with an XLR connector, there’s a good chance you will find a switch on it that is a bass roll-off (sometimes referred to as HP or High-Pass). The characteristics of the bass roll-off implementation can vary from microphone to microphone. Consult the manual from the manufacturer to get the details on how they implemented a bass roll-off.

XLR cable

What frequencies should I filter?

A High-Pass filter, as the name suggests, allows high frequencies to pass through whilst unwanted low frequencies (below the threshold you specify) are removed.

For spoken word, I typically filter below 100Hz, as you really don’t need frequencies below that. Frequencies below 100Hz too easily creep into a recording: some are avoidable with good microphone technique, while others may be out of your control (as described earlier). In the illustrated example below, the microphone I use is a Shure SM7B. At the rear, there are two switches: one provides a presence boost, the other a bass roll-off. Audio engineers will tell you that filtering closest to the source is best. I agree, but in some scenarios you’ll find the bass roll-off too aggressive.

Shure SM7B

The SM7B is a fantastic microphone, but I don’t roll off on the microphone because its filtering slope begins at around 200Hz, and there are some lower frequencies in a male voice, around 150Hz, that I like to retain. Other dynamic microphones more commonly roll off at around 100Hz or even 80Hz. Some microphones offer more granular control, such as the Sennheiser MD-421, which has multiple bass roll-off options. If you don’t filter at the source, there are three other options: filter at the mixer level if you have the luxury of owning one, filter at the preamp/channel strip, or filter in software.

dbx 286s

Rolling off bass in software

My primary recording software is Adobe Audition. The example below also applies to other popular software packages such as Avid Pro Tools, Logic Pro X, GarageBand, and Audacity. Plugins are small software programs that can be inserted into an individual audio track to apply a particular desired effect: dynamics processing (compression), reverb, delays, equalization, and more. To filter out low frequencies, insert an EQ plugin into the problematic audio track.

Enable the HP button and specify a value for the filter. Note that I set mine to 100Hz.

Note: If you prefer, you can filter all tracks at once by inserting the same EQ plugin on your “Master Bus” track. The Master Bus is a stereo bus that sums all of the individual audio tracks together.
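Conceptually, that summing is nothing more than adding the tracks together sample by sample, as this toy sketch shows (a real DAW also applies each track’s fader gain and panning first):

```python
def sum_to_master(tracks):
    """Mix equal-length mono tracks down to a single master bus signal.

    Summed peaks can exceed full scale, which is why the master bus
    needs headroom (or a limiter) of its own.
    """
    return [sum(samples) for samples in zip(*tracks)]
```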

Parametric EQ

You’ll notice in the screenshot below that beneath the “Frequency” option there is a “Gain” setting, which defaults to 24dB/Oct. Despite the name, this setting tells the filter how steeply it should attenuate the signal, in dB (decibels, the unit of measurement for the intensity of sound) per octave. In other words, how aggressively it should cut the frequencies below the cutoff. The “Oct” stands for octave. In musical terms, an octave is described as follows:

A series of eight notes occupying the interval between (and including) two notes, one having twice or half the frequency of vibration of the other.

To put this all together, my high-pass filter will filter out frequencies at 100Hz or lower, cutting them on a relatively aggressive slope of 24dB per octave.
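The per-octave arithmetic is easy to check by hand. In the idealized (asymptotic) case, attenuation grows linearly with each octave below the cutoff:

```python
import math

def rolloff_db(freq_hz, cutoff_hz=100.0, slope_db_per_oct=24.0):
    """Idealized attenuation of a high-pass filter below its cutoff.

    This is the straight-line asymptote; a real filter's response curves
    gently near the cutoff rather than bending at a sharp corner.
    """
    if freq_hz >= cutoff_hz:
        return 0.0  # in the passband: no attenuation
    octaves_below = math.log2(cutoff_hz / freq_hz)
    return slope_db_per_oct * octaves_below
```

So 50Hz (one octave below 100Hz) is cut by about 24dB, and 25Hz (two octaves below) by about 48dB.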

HP filter


You now have an understanding of the problematic frequencies you want to remove from your podcast audio, how to remove them, and what exactly is going on behind the scenes from a technical perspective.

Booking Guests for Your Podcast

Last Monday I took the day off for Canadian Thanksgiving while Maxime held the fort. Today I’m speaking to you about booking guests to interview on your podcast.

If interviewing is something you enjoy but you’re having a hard time finding guests, or you’re unsure of how to approach people, I’ll walk you through some of what’s involved. There’s no easy way around going out there and chasing the people you want to interview, but I’ll give you some pointers so you don’t feel lost or make mistakes you’ll regret.

Where do you start? Make a list

Instead of going week to week scrambling for ideas of who you should approach and then trying to contact them, write a list of as many people as you can think of that you want to interview. The more the better. It doesn’t matter what tools you use for keeping track of the guests you want to contact; just make sure you fill it with as much helpful information as possible.

Spreadsheets work well for financial data, but they also work well for tracking who you want to talk to. In a spreadsheet, my suggestion is to create columns with the following headings:

  • Name
  • Email
  • Website
  • Phone
  • Skype
  • Social
  • First attempt (the date you contacted)
  • Second attempt (the date you contacted)
  • Booked

This will be your database to keep tabs on your prospects.

One thing you will discover is that people have preferred contact methods. Some like email over phone or Skype to give one example. Make a note of which communication method your prospect prefers. I like to colour code the preferred contact method so I can refer back to that later.

Reach out to people

You have a list ready, and now you will begin scouring the web and social media for contact information. This is not difficult, but it will require time and effort on your part. The hard part is getting your prospective guests to respond and agree to an interview.

Use social media

Social media is ubiquitous. As of June 2016, Facebook reported on average 1.13 billion daily active users and 1.03 billion mobile daily active users. For the same month, Twitter reported 313 million active users of which 82% accessed the service from a mobile device. There’s a good chance you will find someone to talk to on either of these outlets.

My personal experience is that most low-to-mid-level “celebrities” (and I use that term loosely) are pretty receptive to speaking with people about what they do. It also gives them a platform to further plug their creative work.

Twitter: Begin with an @ reply and see if you can ask for an interview (you only have 140 characters to do it). Give them some time to reply. If you don’t hear back within a week, try a DM (Direct Message) and see if that goes through (this depends on the recipient’s privacy settings). DMs now have a 10,000 character limit, so briefly explain why you want to interview them and let them know you’re open to whatever communication method they prefer.

If you don’t hear back within two weeks, try tweeting at them again. Keep in mind that people are busy. Depending on how many followers they have, they may not even notice your inquiries (my observation is that this becomes an increasing issue beyond 10,000 followers).

Facebook: It’s an excellent platform without some of the character limitations of Twitter. Creative people (I’ll lump writers, actors, musicians, and makers into that term) often use a business page to showcase what they make. Begin by writing a message on their page (seen publicly) asking if they would be open to an interview. Remember to mention what it’s for and that you’re flexible to their preferred communication method. Brevity is key (don’t ramble on).


Email: Yes, email is still an effective means of communication for an overwhelming majority of people. If your social media efforts led nowhere, check for a link to their website from their social profile and see if there’s an email address or contact form. I can recall many times when my prospects responded first over email rather than social media.

Checklist of things to mention

  • Why you want to interview the individual (your podcast is about “x”)
  • You can arrange to communicate via email, phone, Skype or whatever works best for them.
  • Give them an idea of how long you need them (I always say it will take around an hour at most)
  • If they agree to the interview, be flexible and work around their schedule to book a time


In the coming weeks I’ll be delving into building on your interviewing skills. Hopefully this provides some valuable information so you can book your own interviews. Remember that once both parties have agreed on the place, date, and time, you should create a calendar event and also note in your spreadsheet that the booking was successful. As mentioned in the section on making multiple contact attempts, don’t give up if you don’t hear back. If you’ve made a couple of attempts, you can always move on to the next person and come back to them later. Some people are incredibly difficult to reach, but persistence is key. If it takes six months to book your favourite guest, then so be it.

Podcast Editing Tips: Part 2

Last week I started a new series on podcast editing tips. This week, I’m once again using Adobe Audition CC (2015.2.1) as the lens through which we’ll examine how to edit a podcast and which features we can use to save time and produce better programs.

When applying dynamics processing of any kind, you can preview what your waveform will look like after the changes are applied. For example, applying compression or equalization to a spoken word track with preview enabled lets you see how those changes will impact it.

The speed at which the preview window displays changes depends entirely on the processing speed of your computer. In my case, I’m using a six-year-old MacBook Pro, so I typically need to wait a bit for the preview to update.

Below you will find an illustrated example of how to enable the preview window and also how it works when you make changes. Note that the Preview Editor option is under the View > Show Preview Editor menu.

Pay close attention to how the waveform changes in the bottom half of the preview window. When I apply compression and equalization, you can see how the selected portion changes to reflect the settings I used.


Since I discovered the preview editor, it’s become indispensable. It’s not a replacement for a true understanding of what dynamics processing does; however, it’s handy to see how even a minor change in compression settings can change your levels. It also saves on processing wait times: instead of applying processing to an individual track in the editor only to find it had a negative outcome, use the preview to get a sense of where you stand.

Try podcasting on FeedPress

Publishing and uploading podcasts is simple and efficient with FeedPress. Try our new publishing tools on a commitment free 14-day trial and get started today.

Podcast Editing Tips: Part 1

When you begin learning the basics of how to use any recording program, remembering keyboard shortcuts and special editing techniques that save time can be overwhelming. If you figured out how to create tracks and successfully record a podcast, you’re in a good place to start. But what happens after you’re comfortable with the basics of recording and exporting audio? Does editing a podcast take several painstaking hours?

I’ll enumerate a few tips in today’s blog that can speed up your workflow. I’ll try to keep them generic, but for illustrative purposes, I’ll be dealing with Adobe Audition CC (v2015.2.1) on a Mac. Note that other popular DAWs (Digital Audio Workstations) such as Pro Tools and Logic Pro X have similar features, either named the same way or similar, albeit with different keyboard shortcuts.

Tip 1 – Markers

Most multi-track recording programs provide the ability to add markers. The keyboard shortcuts vary from program to program, but many use single keystrokes to invoke them. Markers allow you to mark a place in the audio timeline that you would like to reference at a later point. Why use markers? Perhaps a co-host or guest coughs or says something you’d like to edit out later; markers make it easy to remember the point in time where something occurred. I frequently use markers for the following reasons:

  • Editing out a cough
  • Editing out profanity (if it’s warranted for the particular podcast)
  • Editing out factually incorrect information or a stumble over words
  • Remembering a place that needs a sound bite at a specific time, such as additional dialogue, music, or sound effects.

In Adobe Audition CC, I can place a marker during recording by pressing “m” on my keyboard to add it to the timeline.

If you don’t see the list of markers, add it by going to Window > Markers.

enable marker window

Once you add a marker, note the timeline at the top of the screen above your tracks, which represents the time in your audio. There will be a grey marker hovering over the point in time where you told it to go (as illustrated below).

marker timeline

Once you’ve added a few markers in your session (go ahead and try that), you will see in the marker list window all of the markers with the insertion time. There are a couple of things you should know about the marker list window.

  1. You can edit the name of the marker by clicking on the default title. I suggest giving it a clear name that best reflects what’s going on at that point in time.
  2. You can quickly jump to that insertion point by clicking on the grey marker icon to the very left, next to the marker name. That’s especially handy when a marker is near the end of an hour-long session.

marker list

Tip two – Edits and Ripple Delete

Often during an edit you will need to zoom in closely to make a precise correction, such as finding a natural-sounding place to cut a piece of audio that won’t make the person sound unusual. If you can hear an edit when listening to a podcast, it’s not a very good edit. In most cases, bad edits can be avoided by learning where the right place is to cut something, such as when a person takes a beat before saying the next sentence.

When you zoom into a waveform to make an edit, you’ll want to do a couple of things when you’re done:

  1. Make the edit and stitch the two separated pieces of audio back together (illustrated below).
  2. Zoom back out to a point where you can see the entire session.

Both of these tasks are easy to do in Adobe Audition CC (and in other programs like Pro Tools and Logic Pro X). Stitching two pieces of audio back together can be done manually: delete the piece of audio you don’t need, then select the adjacent piece and drag it over until it snaps to the piece that was just cut, leaving a single uniform waveform. But this can be done much faster.

The shortcut I use for this is Ripple Delete. Ripple Delete not only deletes the piece of audio you don’t want, it also shifts the adjacent audio over automatically (as well as audio on other tracks if you’re in a multi-track session). This is a big time saver, because why on earth would you want to manually drag audio from each track over to snap it back into its proper place?

In a multi-track session, using shift + option + command + k, you can make a cut along the insertion point on all tracks in that session, and then use Ripple Delete to automatically shift all of the adjacent audio over to close the gap left by the cut.

multi-track edits

Once you’ve made your edits and have selected the portion of audio you wish to Ripple Delete (remember to select each portion), you can use the keyboard command of shift + delete to complete the edit. You can also go to the Edit > Ripple Delete menu and choose the selected clips option to accomplish the same thing.

ripple delete

Tip 3 – Customize your keyboard shortcuts

There are too many keyboard shortcuts in Adobe Audition CC to enumerate here, however, I encourage you to create your own custom keyboard shortcuts. If you’re coming from another popular program such as Pro Tools or Logic, you may want to map the most common functions to the shortcuts you’ve already built muscle memory around. Once you do that, you can save your shortcuts as a template. In the illustrated example below, you can see I’ve created one called Pro Tools and have mapped the toggle record function to “3” on the numeric keypad.

customize keyboard shortcuts

In the shortcut column, click inside the area and type the command on your keyboard to set the shortcut and then save your changes.


In the coming weeks I’ll continue to ease you into some simple yet powerful and time-saving editing shortcuts. The next time you start a project, try using markers and ripple delete. You’ll be pleasantly surprised at how much time you can save.
