Stitcher Makes Technical Changes Affecting Podcasters

Stitcher has announced some favourable changes that will affect podcasters who have content in its directory. What began with the removal of file re-hosting in favour of downloading directly from the source has continued with further changes that better align Stitcher with current analytics methodologies.


Last year, we began a process of implementing changes to the way Stitcher communicates with podcasters’ hosting providers. We started by moving all podcasters to direct streaming from the source–the Stitcher apps now make a direct file request to your hosting infrastructure whenever a user chooses to play or download an episode of a show.

This change, which we previously made on an ad-hoc basis for podcasters who requested it, gives you better insight into your overall download metrics and better facilitates server-side dynamic ad insertion.

To give podcasters more standardized, accurate and granular data about their shows, we will be making additional changes to align Stitcher’s downloading definitions with some of the emerging standards put forth by the IAB.

On October 2, 2017, we will remove this pinging behavior. This will provide clarity for all of our partners and it will support the IAB’s measurement initiatives. We will:

  • Make downloading new episodes in user Favorites the default app behavior
  • Record as a download any playback that downloads at least 200 kb (standard put forth by the IAB)
  • Present a separate “Front Page Impressions” metric in our partner portal.
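For a rough sense of the second rule, here is a minimal sketch of the threshold check. This is not Stitcher's actual implementation, and the byte unit is assumed to mean kilobytes:

```python
# Hypothetical sketch of the IAB-style download rule described above:
# a playback counts as a download once at least 200 kb have been served.
DOWNLOAD_THRESHOLD_BYTES = 200 * 1024  # 200 kb, per the IAB guideline

def counts_as_download(bytes_served: int) -> bool:
    """Return True if a playback request should be recorded as a download."""
    return bytes_served >= DOWNLOAD_THRESHOLD_BYTES

# A listener who samples only a few seconds may not cross the threshold:
print(counts_as_download(150 * 1024))       # → False (partial playback)
print(counts_as_download(5 * 1024 * 1024))  # → True (full episode)
```

Under a rule like this, abandoned plays that transfer almost no data stop inflating download counts.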

We’re pleased to see these technical changes at Stitcher. As for how this affects FeedPress customers, it’s possible you may see an increase in downloads because of these changes.

Podcast Editing Tips: Part 3 – Phase Shifting to Recoup Loss of Headroom

I’ve talked before about using high-pass filtering to roll off undesirable low frequencies in audio. This can be done in software via plugins or in hardware (e.g., on a mixing console or microphone preamp). I suggest reading our primer on high-pass filtering before proceeding with today’s topic: phase shifting.

What happens to audio when processed through a high-pass filter?

So what exactly is phase shifting? Well, let’s first understand what happens to an audio signal as it travels through a high-pass filter. In the example below, I’m rolling off frequencies below 75 Hz at 12dB per octave. High-pass filters typically let the user specify how steeply content below the cutoff frequency is rolled off: the higher the decibel value, the steeper the slope and the greater the roll-off. Look at the screenshot below and you’ll see the slope.

[Screenshot: 75 Hz high-pass filter, 12dB per octave]

After applying the filter, you can see that the energy of the waveform is displaced along one axis. The human voice is often said to be asymmetrical in nature, and high-pass filters can further increase that asymmetry. This does not necessarily indicate a problem with the sound quality, but it does pose a challenge for podcast audio producers because it decreases headroom.
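The filter described above can be approximated in code. This is a sketch using SciPy rather than the hardware or plugins discussed in the article; a 2nd-order Butterworth high-pass rolls off at 12dB per octave:

```python
# A minimal sketch of a 75 Hz high-pass filter with a 12 dB/octave slope,
# built with SciPy (a 2nd-order Butterworth filter gives 12 dB per octave).
import numpy as np
from scipy import signal

fs = 48000  # assumed sample rate
sos = signal.butter(2, 75, btype="highpass", fs=fs, output="sos")

# Inspect the magnitude response (in dB) at a few frequencies:
freqs = np.array([37.5, 75.0, 1000.0])
w, h = signal.sosfreqz(sos, worN=freqs, fs=fs)
response_db = 20 * np.log10(np.abs(h))
print(response_db)  # ~-12 dB one octave below cutoff, ~-3 dB at cutoff, ~0 dB at 1 kHz
```

Applying `signal.sosfilt(sos, samples)` to real audio would show the waveform displacement discussed above; the magnitude response alone already shows the slope.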

[Screenshot: asymmetric waveform after high-pass filtering]

What is headroom?

Headroom is essentially the “wiggle room” you have as an engineer when processing audio before the source exceeds zero decibels relative to full scale (dBFS). dBFS is a unit of measurement for amplitude in digital systems, which have a defined maximum peak level. In the digital realm, 0dBFS is the absolute peak ceiling; push past it and audible clipping (a form of distortion) occurs.
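As a small illustration of the arithmetic, here is a sketch of measuring sample peak level and headroom for a digital signal normalized to the range [-1.0, 1.0]. Note this is the simple sample peak, not the inter-sample true peak a dedicated meter would report:

```python
# Sketch: sample peak level in dBFS and the remaining headroom below 0 dBFS.
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Sample peak level in dBFS (0 dBFS = digital full scale)."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak)

samples = np.array([0.1, -0.5, 0.86, -0.2])  # arbitrary example signal
level = peak_dbfs(samples)
headroom = 0.0 - level  # distance to the 0 dBFS ceiling
print(round(level, 1), round(headroom, 1))  # -1.3 1.3
```

A peak of 0.86 full scale works out to about -1.3dBFS, leaving roughly 1.3dB of headroom before clipping.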

Now that you understand the basic terminology, you can see why displaced energy in a waveform decreases headroom, creating processing challenges as you prepare for loudness normalization. Loudness normalization is an essential step after optimizing your audio (applying compression), as it targets an optimal perceived average loudness. This creates consistency for the listener, so they don’t need to crank the volume in louder environments, and it also aids the intelligibility of spoken word. If you check the maximum true peak of your audio before the phase shift, you’ll notice the loss of headroom. In the example below, the source checks in at -1.3dB. This is not the worst example, but you’ll get a better idea later on as we shift the phase of the source.

[Screenshot: true peak measurement before phase shift, -1.3dB]

To better understand loudness normalization, please read “Loudness Compliance And Podcasts” and “Your Podcast Might Be Too Quiet Or Too Loud”.

Optimizing headroom through phase shifting

You can use third-party tools to restore headroom in your audio source after high-pass filtering. This demonstration uses Waves’ InPhase plugin; other tools, such as iZotope’s RX suite, contain an adaptive phase correction feature that is even easier to use.

Step 1.

My Digital Audio Workstation (DAW) of choice is Pro Tools. On a mono audio source, I select the clip and instantiate the LT (non live) version of Waves InPhase. I have created a default preset to handle phase shift correction, which you can download from here. Note that this preset is meant as a starting point, so values such as input gain and delay may be tweaked depending on the source you’re working with.

Here’s how I have the plugin configured:

[Screenshot: Waves InPhase plugin settings]

As illustrated in the screenshot, you can set the allpass frequency to 200 Hz and the Q bandwidth to 2.0. “Type” sets the phase shift filter type, toggling between Off, Shelf (using the 1st order allpass filter), and Bell (using the 2nd order allpass filter). In this example we use the Bell filter type; allpass filters are used to correct phasing issues between tracks. Here’s a quick rundown of what these settings are doing, as documented in the plugin manual:

An allpass filter is a signal processing filter that affects only the phase, not the amplitude, of the frequency response of a signal. This example demonstrates using a 2nd order filter, which is defined by the frequency at which the phase shift is 180° and by the rate of phase change. The rate of the phase change is dictated by the Q factor. Q sets the width of the 2nd order (Bell) allpass filter: a narrower Q results in a faster phase transition toward the selected frequency, leaving a larger portion of the frequency range intact.
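The kind of 2nd-order (Bell) allpass filter the manual describes can be sketched in code. This uses the well-known RBJ audio-EQ-cookbook coefficients rather than anything from the Waves plugin, with the 200 Hz / Q 2.0 settings from the example:

```python
# Sketch of a 2nd-order allpass biquad (RBJ audio-EQ-cookbook formulas).
# It leaves magnitude untouched while shifting phase, reaching 180° at fc.
import numpy as np
from scipy import signal

def allpass_biquad(fc: float, q: float, fs: float):
    """2nd-order allpass coefficients (b, a) for centre frequency fc and width q."""
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 - alpha, -2 * np.cos(w0), 1 + alpha])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b, a

fs = 48000
b, a = allpass_biquad(200.0, 2.0, fs)  # settings from the example above

# Magnitude stays flat (that's what "allpass" means); phase hits 180° at 200 Hz.
w, h = signal.freqz(b, a, worN=[100.0, 200.0, 400.0], fs=fs)
print(np.round(20 * np.log10(np.abs(h)), 6))  # all ~0 dB
print(np.round(np.degrees(np.angle(h)), 1))
```

Running the audio through `signal.lfilter(b, a, samples)` applies the phase shift; because the numerator is the reversed denominator, the magnitude response is exactly unity at every frequency.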

The InPhase settings came recommended by Paul Figgiani, but they’re meant as a starting point. The significance of the settings is fairly complex, and I will continue to explore it.

Step 2.

Clicking “Render” will process the selected audio and should shift the phase slightly so that the energy of the waveform isn’t lopsided.

[Screenshot: waveform after phase correction]

Now that the phase has been shifted slightly in the correct direction, we have an optimal source to work with for loudness normalization. When we started, our source checked in at -1.3dB. Analyzing the corrected version below, you can see we’ve restored quite a bit of headroom.

[Screenshot: true peak measurement after phase shift, -2.5dB]

Hot damn, look at that. Our source is now -2.5dB, a full 1.2dB of headroom recouped!

Step 3.

With the phase shifted in the correct direction, loudness normalization can now be performed on the source audio. Headroom has been regained, which means that when processing the audio with a true peak limiter, excessive limiting should not occur (a good thing, as heavy limiting can squash the dynamics of the human voice).

Conclusion

You should now have a good understanding of how most high-pass filters affect the human voice. It should be noted that the source audio used in these examples was high-pass filtered in hardware, not software. This matters because filtering in software with a linear phase filter avoids the displaced waveform energy discussed above, an option you lose once the source has already been high-pass filtered in hardware. Linear phase filters are a topic for another day, but effectively they ensure the original phase is not altered in any way during processing.

If you have any questions about how the aforementioned workflow is accomplished, don’t hesitate to reach out to us on Twitter and Facebook.

Acknowledgements

Many thanks to audio engineer Paul Figgiani, whose experience and breadth of knowledge rival my own. He continues to be an inspiration and aided in the accuracy of this piece. Make sure to check out his blog: ProduceNewMedia.com.

FeedPress Adds Support for New Apple Podcast Tags

Apple Podcast Spec Updates

During the week of Apple’s Worldwide Developers Conference (WWDC), Apple announced that it has updated its RSS podcast spec to include additional tags for podcast publishers. These tags support additional functionality inside Apple’s own “Apple Podcasts” app and add features many podcasters have been asking for, such as the ability to mark an episode as a trailer, bonus, or full episode.


New Podcast Tags

We have added support for the following podcast tags:

  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episodeType>trailer</itunes:episodeType>
  <itunes:episodeType>bonus</itunes:episodeType>

Customers who use FeedPress’ podcast publishing platform can use the “Episode Type” field to specify an episode as a trailer, bonus episode, or full episode, as illustrated below.

[Screenshot: FeedPress publisher “Episode Type” field]

It should be noted that whilst the tags can be included now, we won’t see the changes reflected until Apple releases iOS 11 to the public, which won’t be until September and likely after the release of the iPhone 8.

Conclusion

It’s really exciting to see Apple moving the needle on podcasts to improve both the listener’s and publisher’s experience. What do you think of the new podcast tags and the changes to Apple’s podcast app on iOS 11? Leave a comment below to chat with us.

FeedPress Adds Automatic JSON Feed Support


Beloved Mac developers and longtime bloggers Brent Simmons and Manton Reece have launched JSON Feed. The takeaway for developers is as follows:

For most developers, JSON is far easier to read and write than XML. Developers may groan at picking up an XML parser, but decoding JSON is often just a single line of code.

Why care?

So why should anyone care about this with respect to podcasts when Apple controls the ecosystem? Although it’s early days for the spec, our opinion is that innovation in a space with an established, albeit old, spec is a healthy thing.

RSS is, and continues to be, a workable transport method for podcast data, but even RSS–which has been around since 1999–needs enhancement. That’s why open source initiatives like syndicated.media exist to take podcast functionality and RSS to the next level (we’re watching this closely).

What impact does it have?

Does this mean that JSON Feed will make any significant impact? That remains to be seen, but we’re pleased to see people move the needle forward.

Co-creator Manton Reece wrote about how JSON Feed relates to podcast functionality:

JSON Feed includes an attachments array, which is similar to the enclosure element in RSS that enabled podcasting. We love podcasting and included an example podcast feed in the JSON Feed specification.
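The attachments mechanism Reece mentions looks like this in practice. Here is a minimal, hypothetical podcast feed sketched against the JSON Feed v1 spec (all URLs and titles are illustrative, not FeedPress output):

```python
# A minimal, hypothetical podcast feed per the JSON Feed v1 spec:
# the "attachments" array plays the role of RSS's <enclosure> element.
import json

feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "Example Podcast",                    # illustrative values only
    "feed_url": "https://example.com/feed.json",
    "items": [
        {
            "id": "1",
            "title": "Episode 1",
            "url": "https://example.com/episode-1",
            "attachments": [
                {
                    "url": "https://example.com/episode-1.mp3",
                    "mime_type": "audio/mpeg",
                    "duration_in_seconds": 1800,
                }
            ],
        }
    ],
}

print(json.dumps(feed, indent=2))
```

A podcast client only needs `json.loads` and a few dictionary lookups to find the audio enclosure, which is the readability win the spec's authors describe.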

How FeedPress supports JSON Feed

Experimental JSON support is live on FeedPress. The JSON feed is generated every time the XML feed is refreshed; it is not a replica of the source but a fresh creation. The JSON feed validates, and it handles podcasts delivered via both RSS and Atom.

Conclusion

There is nothing FeedPress customers need to do to enable JSON-compatible feeds: simply append the ?format=json parameter to the end of your RSS feed URL.

Here’s an example URL: https://podcast.hologramradio.org/master?format=json

FeedPress customers are encouraged to test this with compatible RSS readers and Podcast apps. We’d love to hear your feedback.

Update: As of May 31, 2017, the feed_url parameter has been added. As per the JSON Feed spec documentation, it’s highly recommended.

Your Podcast Might Be Too Quiet Or Too Loud

Alex Knight at the mixing console, working on sound designing a podcast in Avid Pro Tools.

According to the 2017 edition of Edison Research’s “Infinite Dial” report, 65% of people listen to podcasts on mobile devices. Broken down by the location where people listen most often, 52% of the sampled audience listen to podcasts mostly at home, 18% in a vehicle, 12% at work, 3% on public transit, 3% at the gym, and 3% while walking around.

The Problem

FeedPress advocates that podcast producers pay close attention to loudness compliance in their audio. Irrespective of listening environment, it’s just good practice.

Interest in podcast production techniques and the analysis of podcast audio is a growing trend. Engineers are analyzing many of the top-ranking podcasts–including ones repackaged from radio–and are finding they exhibit a multitude of problems, including an overly wide dynamic range, widely varying loudness, and even clipped audio.

Why Care?

Overly loud podcasts may contain audible distortion, which can be extremely uncomfortable for listeners. Furthermore, your audience should not be frustrated and have to constantly reach for the volume controls when listening to podcasts. This is why audio engineers advocate that podcast producers aim for a target loudness of -16 Loudness Units relative to Full Scale (LUFS) for stereo files and -19 LUFS for mono files. LUFS is a measurement standard designed to enable normalization of audio levels for broadcast TV, other video, and now podcasts.

There are two reasons why engineers are pushing for loudness compliance: maintaining consistency between program audio, and comfort in loud listening environments. Examples of noisy listening environments include the morning or afternoon commute by train, by car, or on foot. Working with spoken word requires attention to detail to maximize intelligibility and loudness for mobile device consumption.
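The normalization arithmetic behind those targets is simple once loudness has been measured. Measuring integrated loudness itself requires an ITU-R BS.1770 meter (the kind built into the tools discussed below); this sketch only shows the gain calculation, with illustrative measurement values:

```python
# Sketch of the gain calculation for loudness normalization, using the
# -16 LUFS (stereo) and -19 LUFS (mono) targets cited in this article.
def normalization_gain_db(measured_lufs: float, stereo: bool = True) -> float:
    """Gain (in dB) needed to bring a measured loudness to the podcast target."""
    target = -16.0 if stereo else -19.0
    return target - measured_lufs

print(normalization_gain_db(-21.5))                # quiet stereo file → +5.5 dB
print(normalization_gain_db(-14.0, stereo=False))  # loud mono file → -5.0 dB
```

A positive result means the file needs a boost (followed by true peak limiting to stay under the ceiling); a negative result means it should be turned down.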

Solutions

You are a storyteller, editor, and producer, and you must ensure the quality of your audio matches the high bar set by your content. There are solutions to this complex problem that do not require an audio engineering degree. For example, podcast producers can use tools built into Adobe Audition, such as “Match Loudness”, to optimize and export their podcasts to the recommended compliance targets.

Another solution is to use an online service such as Auphonic, which contains reasonable presets for novices. Note that even though there are tools that can make this job more efficient, you should still understand the fundamentals of why loudness compliance is needed and how it’s achieved.

Audio engineer Paul Figgiani of the ProduceNewMedia blog, a longtime advocate for loudness compliance, writes:

I’ve discussed the reasons why there is a need for revised Loudness Standards for Internet and Mobile audio distribution. Problematic (noisy) consumption environments and possible device gain deficiencies justify an elevated Integrated Loudness target resulting in audio that is perceptually louder on average compared to Loudness Normalized audio targeted for Broadcast. Low level, highly dynamic audio complicates matters further. The recommended Integrated Loudness targets for Internet and Mobile audio are -16.0 LUFS for stereo files and -19.0 LUFS for mono. They are perceptually equal.

Conclusion

You should now understand the importance of optimizing your podcasts for loudness compliance. To learn more about properly optimizing and mastering podcast audio, please read our in-depth article on loudness compliance.