Podcasting Basics: Episode 3 – The Basic Functions of a DAW


Episode Summary

Join us today as we discuss Digital Audio Workstations: what they are, some of their basic functions, and the theory behind digital audio. Enjoy!


Episode Transcript

Nemanja: Welcome to the Nootka Sound Podcast. I’m your host Nemanja Koljaja, a professional Sound Engineer, Audio Editor and Podcast Producer, and the CEO and Founder of Nootka Sound, a professional podcast production facility. Today we’re talking about DAWs. A DAW, or Digital Audio Workstation, is computer software made specifically for audio recording, editing and post-production. There are a number of free ones; the two most famous are Audacity, which is open source and available on PCs, Macs and Linux, and GarageBand, which comes preinstalled on every Mac nowadays and is exclusive to Apple products. There are also a lot of commercial options out there; the ones worth looking into for podcast editing are Adobe Audition, Avid’s Pro Tools, Steinberg’s Cubase and Reaper. I use Pro Tools because I like its interface and it suits my workflow. In truth, it doesn’t really matter which one you choose, as you can do the same things in every one of these programs. Either way, if you’re recording podcasts, you’re just going to record into the software and then either send the audio to your editor or edit the episodes yourself.

So let’s say you’ve plugged your microphone into your audio interface and opened your DAW. The first thing you do is create a channel (or a track) onto which you will record your audio. Each recording source should have its own separate track; this is known as multitrack recording. When recording music, it would look like this: one channel for the guitar, one for the bass guitar, one for the main vocal, one for each part of the drum kit, and so on. So if you’re recording locally with a guest, make sure there are two channels, one recording your microphone and one recording your guest’s. Now, there are two main types of audio channels: mono and stereo, or monophonic and stereophonic.
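(Editor's note: for readers who like to see the concepts in code, here is a minimal sketch of mono versus stereo channels using only Python's standard-library `wave` module. The 440 Hz test tone and the file names `mono.wav` and `stereo.wav` are made up for illustration; they are not part of the episode.)

```python
import math
import struct
import wave

SAMPLE_RATE = 48000  # samples per second
FREQ = 440.0         # Hz, an arbitrary test tone standing in for a "source"

# Generate one second of a 16-bit sine tone (one mono source).
samples = [
    int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

# Mono channel: one sample per frame.
with wave.open("mono.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit = 2 bytes per sample
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))

# Stereo channel: two channels interleaved left/right per frame,
# like two microphones capturing one grand piano.
with wave.open("stereo.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    interleaved = [s for pair in zip(samples, samples) for s in pair]
    f.writeframes(struct.pack(f"<{len(interleaved)}h", *interleaved))
```

The only structural difference between the two files is the channel count and the interleaving; a stereo file is simply two mono streams stored frame by frame.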
Mono means there’s only one channel; stereo comprises two channels, panned fully left and right. Panning is the spatial distribution of a signal across the two channels. So you would use a mono channel when recording one source, for example your voice. Stereo is used when you’re recording one source with two microphones at the same time, for example a grand piano.

So, you’ve created a mono channel for your voice and you want to start recording. Before you do that, you’ll have to arm the track, or make it record-ready. That means activating the red Rec button that almost every DAW has on each channel. If you’re not recording and just want to hear what the microphone sounds like, all you have to do is enable input monitoring, usually represented as a green button with the letter I. But beware of feedback: if the microphone is facing the speakers and your speakers are on when you turn on input monitoring, the microphone signal will be played back, picked up by the microphone again, and played back again in a loop, and it can damage your speakers and your ears. Trust me, it’s not pleasant. It’s the sound engineer’s worst nightmare.

Two other things you usually have on a channel in a DAW are the buttons labeled S and M. S stands for solo: you can solo a track to hear it by itself when there are multiple channels playing at the same time. M stands for mute, and it mutes the channel. Another thing each channel has, aside from a volume fader that controls the level of the track, is a meter. In a DAW, and in digital audio generally, the signal is measured in dBFS, or decibels relative to full scale, and the meter next to the channel represents exactly that. In the next episode we’re going to cover the ideal signal level when recording audio for your podcast.
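(Editor's note: the dBFS scale mentioned above is easy to sketch in code. In this illustrative Python snippet, 0 dBFS is the digital ceiling, and a signal at half of full scale sits about 6 dB below it; the function name `peak_dbfs` and the sample values are made up for the example.)

```python
import math

def peak_dbfs(samples, full_scale=32767):
    """Peak level of 16-bit samples in dBFS; 0 dBFS is the digital ceiling."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence has no measurable level
    return 20 * math.log10(peak / full_scale)

# A signal peaking at half of full scale reads roughly -6 dBFS.
print(round(peak_dbfs([16384, -12000, 8000]), 1))  # ≈ -6.0
```

Anything that would compute to more than 0 dBFS cannot be represented and gets clipped, which is exactly the red zone on the channel meter.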
The important thing to mention is that the meter is shown in three colors, green, yellow and red, according to the amount of signal present on the channel. Green and yellow are okay, but red represents overload, distortion or clipping, whichever term you prefer. In this case it represents digital clipping, and it is of utmost importance that you avoid getting into that red section when recording your audio. If your meter is going into the red, it means you have to turn down the signal. You should always try to do that at the source, so turn down the gain knob or even pull the microphone back from the sound source. If that still doesn’t help, you can use the PAD switch on the interface or a mixer to attenuate the signal electronically.

These are some basic functions of a DAW. Ideally, as I’ve said, if you’re new to audio engineering you would record your audio and export it to send to a professional to be edited, mixed and mastered. Of course, DAWs provide much more and are actually powerful systems for doing whatever you can and cannot imagine to a sound. They come with a number of built-in plug-ins, or audio effects, for altering sound waves, and they really represent all of the tools needed for successful sound post-production.

Let me backtrack a bit: when it comes to exporting audio, let’s talk about the sample rate and the bit depth. The sample rate is the number of samples per second. Think of it as determining the range of frequencies in digital audio: a recording can only capture frequencies up to half its sample rate. The minimum sample rate you should use is 44.1 kHz; that’s the lowest number used in professional audio. You can also choose 48 kHz, 88.2 kHz, 96 kHz or 192 kHz. The higher you go, the more detailed the sonic information will be, but your files will be bigger too. I’d usually recommend going for 48 kHz; that’s more than enough to capture all of the sonic information needed for a high-quality podcast.
The bit depth, on the other hand, relates to the dynamic range. Dynamic range is the distance between the softest point of a sound and the loudest. Bit depth is the number of bits of information in every single sample of a digital audio file, so in a 16-bit file, every sample stores 16 bits of sonic information, and each extra bit adds roughly 6 dB of dynamic range. I’d usually recommend choosing 24-bit. Although, since podcasts are voice-based content and the human voice generally doesn’t carry that much intricate sonic information, 44.1 kHz / 16-bit WAV or AIFF files are also okay.

Let’s talk about audio file formats. First, you have the uncompressed lossless formats, which store all of the audio content without losing any information; the two main ones are WAV, designed by Microsoft, and AIFF, designed by Apple. Next, you have compressed lossless formats such as FLAC, which also store all of the information but pack it into smaller files; nothing is lost, the data is just encoded more efficiently. Last, you have lossy compressed formats, which throw away some of the information to save space; the most famous one is MP3. So if you save your audio files in the MP3 format, you’re losing some of the original information, and that’s why I always recommend backing up your audio in a lossless format such as WAV.
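(Editor's note: the link between bit depth and dynamic range is a one-line formula, sketched here in Python for the curious; the function name is made up for the example, and the figures are the theoretical maxima for linear PCM.)

```python
import math

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM: 20*log10(2**bits), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bit_depth)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB, CD-quality audio
print(round(dynamic_range_db(24), 1))  # ~144.5 dB, far more headroom than voice needs
```

This is why 24-bit is a comfortable recommendation: its headroom far exceeds what any microphone or voice will use, while 16-bit is still more than adequate for spoken word.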

That’s it! Thank you for listening; make sure you share the podcast with your friends and click that subscribe button so you never miss an episode! If you have any questions for us, or suggestions about a topic we can cover related to the podcasting industry, leave a comment below or send us an email at info@nootkasound.online. Also, make sure to check out our website, podcastproducer.org. Tune in to our next episode, where we cover the ideal signal level for recording your podcast. Peace out!

Let's get that episode live!

© Nootka Sound 2020, All Rights Reserved.