Getting the right voice-over and music for a planetarium show is essential. If the voice-over artist records a performance that’s too “dry,” like they’re reading out of a legal journal, then you’re going to lose the audience. If the music is too “busy,” then it will distract from the show. I was very careful about who I chose to do our Sky Tonight narration, and when it comes to music I’m extremely particular. The sound is vital to any type of show. And it’s this process that I found the most taxing on my patience.
Out of every step in this whole process, editing audio is my least favorite. I dread having to edit audio. I just hate it. I don’t know how people make a living doing it day in and day out. I’d lose my mind. But it has to get done, and instead of having my voice-over artist do it on their end, I opt to do it on my end. Not only does it save money, but I want to make sure no unnecessary filters are applied to the original audio track to eliminate room noise. I have my own method of doing that.
Once the narration comes in, I have to pull it into my Digital Audio Workstation (or DAW) for editing. The program I use is called Cubase.
I set up all my planetarium projects the same in Cubase. I record at 24-bit, 44.1 kHz. If other people record at 32-bit float and 48 kHz, then great for them. I just have all my projects on all my software set that way because that’s what I like to work in for the final product inside our own theater. This is important because I have to tell my voice-over artist what to set their project to so it will match mine. If my project is 24-bit, 44.1 kHz, then they should record at that same setting so when I pull in their performance there won’t have to be any conversions. It pulls in nicely, plays well with the other tracks—just drag and drop.
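If you wanted to sanity-check a delivered narration file before importing it, a quick script could do it. This is only an illustrative sketch, assuming Python with the soundfile library; the filename and target settings are placeholders, not part of my actual Cubase workflow.

```python
# Check a delivered narration file against the project settings so nothing
# gets converted on the way in. soundfile is an assumed dependency; the
# filename is hypothetical.
import soundfile as sf

TARGET_RATE = 44100          # 44.1 kHz project rate
TARGET_SUBTYPE = "PCM_24"    # 24-bit PCM

info = sf.info("sky_tonight_narration.wav")
print(f"{info.samplerate} Hz, {info.subtype}, {info.channels} channel(s), "
      f"{info.duration:.1f} s")

if info.samplerate != TARGET_RATE or info.subtype != TARGET_SUBTYPE:
    print("Warning: file doesn't match the 24-bit / 44.1 kHz project "
          "settings and would get converted on import.")
```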
I then have to go through this entire voice-over track and edit out all the spaces between the sentences. I edit out all the breaths, all the throat clearing, all the chair squeaks, all the bad takes, etc. Yes, there are some plugins you can get that will help eliminate room noise from your recording. Some of these work quite nicely. But it does affect the quality of the voice. It is my preference to manually go in and find all the silent sections in the performance and actually make them silent. If you deal with a professional voice-over artist who records stuff for a living, then you’re going to get a pretty quiet recording. By this I mean you won’t hear an air conditioner in the background or cars driving past outside. So, you shouldn’t have to apply any effects to their actual vocal performance besides EQ, compression, reverb, etc.
I like to go through the entire performance and make these tiny cuts in between all the areas that should be silent. Then, I make sure that every cut has a small volume curve, so there isn’t an audible, immediate dropout but rather a brief fade into silence. Without the volume curves you would hear this constant ticking sound between each cut. Then, with all the cuts in place, I back up or pull forward the start and end of each event to butt it up against the beginning or end of each transient.
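To show why those tiny volume curves matter, here’s a rough sketch of the same idea in code: cutting a region to silence without a short fade leaves a step in the waveform, which you hear as a tick. The numpy/soundfile usage, the 5 ms fade length, and the region boundaries are all assumptions for illustration, not settings from Cubase.

```python
# Silence a region of a mono narration file, fading into and out of the
# silence so there is no click at the cut points.
import numpy as np
import soundfile as sf

audio, rate = sf.read("narration_raw.wav")   # assumes a mono narration file
fade_len = int(0.005 * rate)                 # ~5 ms fade
fade_out = np.linspace(1.0, 0.0, fade_len)
fade_in = fade_out[::-1]

def silence_region(signal, start, end):
    """Mute signal[start:end], with a brief fade on either side."""
    signal[start - fade_len:start] *= fade_out   # fade down into the cut
    signal[start:end] = 0.0                      # true silence between takes
    signal[end:end + fade_len] *= fade_in        # fade back up out of it
    return signal

# Hypothetical gap between two sentences, in samples
audio = silence_region(audio, start=rate * 12, end=rate * 13)
sf.write("narration_cleaned.wav", audio, rate, subtype="PCM_24")
```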
I try to keep the flow of the original voice-over intact but I do have to allow for X amount of space between each scene for the sake of animation transitions.
When all the events are cut and edited, I then go through the script and color code each section of the narration in accordance with the scene it belongs to. I can then export these individual sections to play back later in Digital Sky or After Effects to effectively “time out” the animations.
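The export step essentially amounts to slicing the edited narration at the scene boundaries and writing each slice to its own file. A minimal sketch of that, assuming Python with soundfile and an entirely made-up cue list, might look like this:

```python
# Split the edited narration into per-scene files for Digital Sky or
# After Effects. Scene names and timings are hypothetical.
import soundfile as sf

audio, rate = sf.read("narration_cleaned.wav")

# (scene name, start seconds, end seconds)
scenes = [
    ("01_sunset_intro", 0.0, 48.5),
    ("02_moon_phases", 48.5, 112.0),
    ("03_planets_tonight", 112.0, 171.3),
]

for name, start, end in scenes:
    clip = audio[int(start * rate):int(end * rate)]
    sf.write(f"{name}.wav", clip, rate, subtype="PCM_24")
```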
One thing I like to do during this process is use a reference track.
I tell the voice-over artist not to “normalize” the audio after they’re done recording. Normalizing an audio track means taking audio that might be a little low in volume and bumping it up to where it should be. People usually do this with voice-over tracks because there’s a big volume difference when a person speaks as opposed to when a person sings. In order to get a loud, or very “present,” voice-over track you would have to increase the gain on your preamp, or the trim, or the drive, or any number of dials to get the desired volume level and tone you’re looking for. When you do that, you’re also increasing the volume of the ambient room noise. So, it can be quite difficult at times to get a loud voice-over track that doesn’t also have loud ambient room noise.
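For what normalizing actually does under the hood, the basic math is: find the loudest sample and scale the whole file so that peak lands at a target level. Here’s a generic peak-normalize sketch, assuming numpy and soundfile with placeholder filenames; it is not the plugin chain described below.

```python
# Peak-normalize a file to -1 dBFS: measure the peak, compute the gain
# needed to hit the target, and apply it to the whole track.
import numpy as np
import soundfile as sf

audio, rate = sf.read("narration_cleaned.wav")

target_db = -1.0                          # leave 1 dB of headroom
peak = np.max(np.abs(audio))
gain = (10 ** (target_db / 20)) / peak    # linear gain to reach the target peak

print(f"Peak: {20 * np.log10(peak):.1f} dBFS, applying "
      f"{20 * np.log10(gain):.1f} dB of gain")
sf.write("narration_normalized.wav", audio * gain, rate, subtype="PCM_24")
```

Note that this scales everything in the file equally, which is exactly why boosting a quiet recording also boosts whatever room noise is in it.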
Since I don’t have the voice-over artist normalize her audio, I do it myself. But I don’t use normalizing software. I use a variety of plugins on the voice-over track to bump it up to a normal audio level. I use a combination of compression, EQ, and reverb. I also use a reference track.
When audio engineers talk about a “reference” track, what they’re talking about is a piece of audio from a recording they like which helps them judge where their own audio sits. For me, I like to use the Benedict Cumberbatch isolated center channel track from his performance in the planetarium show Super Volcanoes. It’s a clear recording, the level is right where it should be, and the EQ on it is just right.
I’ll load that reference track up and I’ll go back-and-forth between that track and my own voice-over track, adjusting the volume and effects until they’re sitting right together.
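One crude, scriptable way to see how a voice-over sits against a reference is to compare their average (RMS) levels. This is just a rough proxy, assuming numpy/soundfile and placeholder filenames; the real back-and-forth adjustment happens by ear.

```python
# Compare the RMS level of my narration against a reference recording.
import numpy as np
import soundfile as sf

def rms_db(path):
    audio, _ = sf.read(path)
    return 20 * np.log10(np.sqrt(np.mean(audio ** 2)))

reference = rms_db("reference_center_channel.wav")
mine = rms_db("narration_normalized.wav")
print(f"Reference: {reference:.1f} dBFS RMS, mine: {mine:.1f} dBFS RMS, "
      f"difference: {mine - reference:+.1f} dB")
```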
Then, I’ll go through each section and look at the time duration for each part. This helps me determine the length of time for each musical piece.
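Listing the exported section lengths is one quick way to see how much running time each piece of music has to fill. A small sketch, assuming soundfile and the hypothetical per-scene filenames from earlier:

```python
# Report the duration of each exported narration section.
import glob
import soundfile as sf

for path in sorted(glob.glob("0*_*.wav")):
    info = sf.info(path)
    print(f"{path}: {info.duration:.1f} s of narration to cover")
```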
Back in 2008 I released a planetarium music album called “Form & Void: A Digital Universe.” It was the soundtrack for our in-house show called Digital Universe. I still use some of those tracks to this day for certain Sky Tonight sections. But, for the most part, I write and record all new music for each Sky Tonight show.
I like to start off with one very serene piece of music to open the show with our sunset. This piece will return at the end of the show when we wrap things up. But for the middle of the show, I have to look at each section and compose new music to fit the length of time required.
I typically do all this at home.
I have a recording station set up in my office with a keyboard, guitar, bass, etc. I’ve recorded stuff in my office before. However, I find that in a given work day there are a lot of distractions. People come into the office space to chit-chat, there’s the sound of school groups outside, and there are times I have to stop to get something else done. It’s just way easier to get into the right headspace to write and record music when I’m at home and away from distractions.
The whole process of writing music for the show lasts the entire duration of the show creation process. In fact, I usually put the music in at the very end. I simply use the voice-over track to time out all my animation sequences.
Once the music is done and imported into my office DAW, I can start mixing the entire project.
I use the voice-over track as a guide and I then mix the rest of the music so it’s just below the voice. It’s important not to have the music and voice fighting each other. The voice has to be clear and the music can’t be distracting or too loud. That being said, the music should get bumped up a tad during those transitions between scenes.
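In spirit, keeping the music just under the voice is a ducking problem: pull the music down wherever the narration is actually speaking and let it come back up in the gaps between scenes. Here’s a bare-bones sketch of that idea, assuming numpy/soundfile, mono placeholder files, and arbitrary thresholds; the real mix is done by ear with automation, not a script.

```python
# Crude ducking: lower the music where the voice is present, open it up
# in the gaps. A real ducker would also smooth the gain changes.
import numpy as np
import soundfile as sf

voice, rate = sf.read("narration_normalized.wav")   # mono narration (assumed)
music, _ = sf.read("scene_music.wav")                # mono music stem (assumed)
length = min(len(voice), len(music))
voice, music = voice[:length], music[:length]

window = int(0.05 * rate)                 # 50 ms analysis window
ducked_gain, open_gain = 0.4, 0.8         # music under voice vs. in the gaps

gain = np.empty(length)
for start in range(0, length, window):
    block = voice[start:start + window]
    speaking = np.sqrt(np.mean(block ** 2)) > 0.02   # crude "voice present" test
    gain[start:start + window] = ducked_gain if speaking else open_gain

sf.write("scene_mix_preview.wav", voice + music * gain, rate, subtype="PCM_24")
```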
Mixing audio, in general, is a completely different skill. It’s very fortunate that I came into this job with a background in audio, because most people in this field would have to outsource this part of the process to someone else.
Before I came on board, the planetarium staff spent thousands of dollars getting their voice-over recordings, music, and editing done by a different person and company. Even then, it was still all in stereo. Today, our planetarium shows are mixed entirely in 5.1 surround sound, right down to the sound effects. There are ways to “fake” a 5.1 effect, but that consists of sending your stereo file through a type of filter that just spreads the sound across the different channels at different volumes. Here at this planetarium, I actually mix the show and sound effects in proper surround sound.
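To make the distinction concrete, here is roughly what a “fake” 5.1 upmix does: it derives the extra channels from the stereo file by copying, summing, attenuating, and filtering, rather than placing each element on purpose. A sketch only, assuming numpy, soundfile, and scipy, with arbitrary gains and placeholder filenames:

```python
# Naive stereo-to-5.1 upmix: the extra channels are just derived copies
# of the stereo mix, which is why it never sounds like a real 5.1 mix.
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

stereo, rate = sf.read("show_stereo_mix.wav")        # shape: (frames, 2)
left, right = stereo[:, 0], stereo[:, 1]

center = 0.7 * (left + right) / 2                    # mono sum fed to the center
b, a = butter(4, 120, btype="low", fs=rate)          # LFE: low-pass below ~120 Hz
lfe = lfilter(b, a, (left + right) / 2)
rear_left, rear_right = 0.5 * left, 0.5 * right      # quieter copies in the rears

# Channel order: FL, FR, FC, LFE, BL, BR
fake_51 = np.column_stack([left, right, center, lfe, rear_left, rear_right])
sf.write("show_fake_51.wav", fake_51, rate, subtype="PCM_24")
```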
To do that, I have to get a really nice stereo mix of the entire show first. From there, I change all my outputs to 5.1 and remix the show for our particular type of speaker setup. I have a grid set up in Cubase and Sony Vegas where I tell the software the location of the speakers. This, in effect, tells the software how much bleed there will be between the speakers.
Mixing for 5.1 can be very tricky because the way it sounds in an office space will sound wildly different than how it will sound in a dome theater. I have to go back and forth with testing the audio in my office and in the theater itself. It can be quite time consuming going back and forth like that and transferring audio files.
Once I have the show audio the way I want it, I have to encode it in a specific codec: the AC3 file format. Unfortunately, my Cubase software won’t export or render audio in the AC3 format, so I have to export all my 5.1 stems as individual channel files, import those stems into Sony Vegas, and export the final AC3 file from there.
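For readers without Vegas, the same stems-to-AC3 step can be approximated with ffmpeg’s AC3 encoder. This is not the Cubase/Vegas workflow described above, just a sketch of what the encode amounts to; the filenames, channel order, and bitrate are placeholders.

```python
# Join six mono 5.1 stems into one stream and encode it as AC3 with ffmpeg.
import subprocess

stems = ["FL.wav", "FR.wav", "FC.wav", "LFE.wav", "BL.wav", "BR.wav"]
inputs = []
for stem in stems:
    inputs += ["-i", stem]

join = ("join=inputs=6:channel_layout=5.1:"
        "map=0.0-FL|1.0-FR|2.0-FC|3.0-LFE|4.0-BL|5.0-BR")
subprocess.run(["ffmpeg", *inputs, "-filter_complex", join,
                "-c:a", "ac3", "-b:a", "448k", "show_audio.ac3"], check=True)
```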
Once that is rendered, the show audio is done. It’s a rather quick rendering process when dealing with audio. Unfortunately, the same can’t be said when it comes to rendering 4K fulldome video files. And that’s what we’re going to talk about next: rendering animation for fulldome.