“Feel-Good Game Sound” for the Game Music Composer

How can we define “feel-good game sound”? That’s the question that sound designer Joonas Turner attempted to answer in his recent GDC Europe talk, “Oh My! That Sound Made the Game Feel Better!” Joonas’ talk was part of the Independent Games Summit portion of GDC Europe, which took place in Cologne, Germany, on Monday, August 3rd, 2015.

While much of Joonas’ talk focused on issues that would chiefly concern sound designers, there were several interesting points for game composers to consider.  I’ll be exploring those ideas in this blog.

Joonas is a video game sound designer and voice actor working at E-Studio, a professional recording studio in Helsinki, Finland.  His game credits include Angry Birds Transformers, Broforce, and Nuclear Throne.  After briefly introducing himself, Joonas launched into his talk about creating an aural environment that “feels good” and also makes the game “feel good” to the player. He started by identifying an important consideration that should guide our efforts right from the start.

Consider design first

Joonas Turner, sound designer at E-Studio.

In his talk, Joonas urges us to first consider the overall atmosphere of the game and the main focus of the player.  Ideally, the player should be able to concentrate on gameplay to the exclusion of any distractions.  The sound of a game should complement the gameplay and deliver as much information to the player as it can.  If done perfectly, a player should be able to avoid consulting the graphical user interface in favor of the sonic cues that deliver the same information.  In this way, the player gets to keep attention completely pinned on the playing field, staying on top of the action at hand.

Clearly, sound effects are designed to serve this purpose, and Joonas discusses a strategy for maximizing the utility of sound effects as conveyors of information… but can music also serve this purpose?  Can music deliver similar information to the player?  I think that music can do this in various ways, by using shifts in mood, or carefully-composed stingers, or other interactive techniques.  By way of these methods, music can let the player know when their health is deteriorating, or when they’re out of ammo.  Music can signal the appearance of new enemies or the successful completion of objectives.  In fact, I think that music can be as informative as sound design.
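
As a concrete illustration of music delivering gameplay information, the mapping from game events to musical responses can begin as something as simple as a lookup table. The sketch below is hypothetical; the event names and cue labels are invented rather than drawn from any particular game or from Joonas’ talk:

```python
# Hypothetical mapping from gameplay events to musical responses.
# Event names and cue labels are invented for illustration.
STINGER_CUES = {
    "low_health": "strings_tremolo_warning",
    "out_of_ammo": "muted_brass_stab",
    "enemy_spawned": "low_percussion_hit",
    "objective_complete": "short_fanfare",
}

def music_response(event):
    """Return the stinger cue to trigger for a gameplay event,
    or None if the music should stay out of the way."""
    return STINGER_CUES.get(event)
```

In a real project these cues would be triggered through the game’s audio engine or middleware; the point is simply that the score can carry the same state information the graphical interface does.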

Music, sound design and voice-over: perfect together

As his GDC Europe talk proceeds, Joonas reminds us to think about how the music, sound design and voice-over will fit together within the overall frequency spectrum.  It’s important to make sure that these elements will complement each other, with frequency ranges that spread evenly across the spectrum, rather than piling up together at the low or high end.  With this in mind, Joonas suggests that the sound designer and composer should be brought together as early as possible to agree on a strategy for how these sonic elements will fit together in the game.
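
One lightweight way for a composer and sound designer to agree on such a strategy early is to sketch a “spectral budget”: rough frequency ranges reserved for each element, then checked for collisions. Here is a minimal sketch of that idea; the band boundaries below are invented, illustrative numbers, not a prescription:

```python
# A hypothetical "spectral budget" agreed upon early in production:
# rough frequency ranges (in Hz) reserved for each sonic element.
# All of these boundaries are illustrative, not prescriptive.
BAND_PLAN = {
    "voice_over": (300, 3400),    # core speech-intelligibility range
    "music_low": (40, 250),       # bass, low percussion
    "music_high": (4000, 12000),  # shimmer, cymbals, air
    "sfx_mid": (500, 2000),       # impacts, UI feedback
}

def overlap_hz(a, b):
    """Width in Hz of the overlap between two (low, high) bands."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def band_conflicts(plan):
    """Return (name_a, name_b, shared_hz) for every pair of
    elements competing for the same part of the spectrum."""
    names = list(plan)
    return [(a, b, overlap_hz(plan[a], plan[b]))
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if overlap_hz(plan[a], plan[b]) > 0]

conflicts = band_conflicts(BAND_PLAN)
# -> [('voice_over', 'sfx_mid', 1500)]: the dialogue and the
# mid-range effects would be fighting over the same 1500 Hz.
```

A table like this is obviously a simplification of real spectral content, but as a conversation starter between composer and sound designer it makes the “piling up” problem visible before any audio is produced.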

(Here’s where Joonas brought up the first of two controversial ideas he presented during his talk.  While I’m not sure I agree with these ideas, I think the viewpoints he expresses are probably shared amongst other sound designers in the game industry, and therefore could use some more open discussion in the game audio community.)

While composers for video games always want to create the best and most awesome music for their projects, Joonas believes that this desire is not always conducive to a good final result.  He suggests that the soundtrack albums for video games are often more exciting and musically pleasing than the actual music from the game.  With this in mind, Joonas thinks that composers should save their best efforts for the soundtrack, while structuring the actual in-game music to be simpler and less aesthetically interesting.  In this way, the music can fit more comfortably into the overall aural design.

Your sonic brand

At this point in his presentation, Joonas urges the attendees to find aural styles that will be unique to their games.  He tells the audience to avoid using a tired sonic signature in every game, such as the famous brassy “bwah” tone that became pervasively popular after its use in the movie Inception.  If you are wondering what that sounds like, just hit the button below (courtesy of web developer Dave Pedu).

In 2012, Gregory Porter (an avid movie lover and creator of YouTube videos about the movies) created a fun video illustrating just how pervasive the infamous Inception “bwah” had actually become:

In my book, A Composer’s Guide to Game Music, I discuss the concept of creating a unique sonic identity for a game in the chapter about the “Roles and Functions of Music in Games.”  In the book, I call this idea “sonic branding” (Chapter 6, page 112), wherein the composer writes such a distinctive musical motif or creates such a memorable musical atmosphere that the score becomes a part of the game’s brand.

Be Consistent

When recording music or sound design for a project, Joonas tells us that it’s important to remain consistent with our gear choices.  If a particular microphone has been used for a group of character voices, then that microphone should continue to be used for that purpose across the whole project.  Likewise, the same digital signal processing applications or hardware (compression, limiting, saturation, etc.) should be used across the entire game, so that the aural texture remains consistent.  Carrying Joonas’ idea into the world of game music, we would find ourselves sticking with the same instrument and vocal microphones, and favoring the same reverb and signal processing settings throughout the musical score for a game.  This would ensure that the music maintained a unified texture and quality from the beginning of the game to the end.

Shorter is better

In his talk, Joonas shares his personal experience with sound effects designed to indicate a successful action – a button press that causes something to happen.  Joonas tells us that for these sounds, shorter is definitely better.  The most successful sounds feature a quick, crisp entrance followed by a swift release. A short sound designed in this way will be satisfying to trigger, and won’t become tiresome after countless repetitions.

For the composer, the closest analogy to this sort of sound effect is the musical stinger designed to be triggered when the player performs a certain action.  In order to adhere to Joonas’ philosophy, we’d compose these stingers to have assertive entrances and quick resolves, so that they would be fun for the player even when repeated many times.
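
That shape can be illustrated with a simple amplitude envelope: a fast attack ramp followed by an exponential release. This is only a sketch; the attack fraction and decay rate below are invented values, not a formula from the talk:

```python
import math

def stinger_envelope(n_samples, attack_frac=0.02, decay_rate=8.0):
    """Amplitude envelope with a quick, crisp entrance (short linear
    attack) and a swift release (exponential decay).  The parameter
    values are illustrative."""
    attack_len = max(1, int(n_samples * attack_frac))
    env = []
    for i in range(n_samples):
        if i < attack_len:
            env.append(i / attack_len)  # fast ramp up to full level
        else:
            t = (i - attack_len) / n_samples
            env.append(math.exp(-decay_rate * t))  # swift release
    return env

env = stinger_envelope(1000)  # peaks at 1.0, nearly silent by the end
```

Multiplying a short musical hit by an envelope like this keeps it satisfying to trigger without wearing out its welcome over countless repetitions.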

To clip or not to clip…

(This is the second of the two controversial ideas Joonas presented in his talk. Again, while I don’t necessarily agree with this, I think it’s an idea that hasn’t been expressed often and may need further discussion.)

A volume unit (VU) meter registering some high audio levels.

The common wisdom amongst audio engineers is to avoid overloading the mix.  Such overloads can produce clipping and create distortion, which deteriorates the overall sound quality of the game.  However, Joonas suggests that for intense moments during gameplay, some clipping and distortion may actually enhance the sensation of anxiety and frenetic energy that such moments seek to elicit.  According to Joonas, this enhancement can actually be a desirable outcome, and the sound designer should therefore not be afraid of such overloads and clipping during intense moments in a game.

How would this idea relate to music?  Well, we’ve probably all heard examples of successful pop music that embraces sonic overload.  Lead vocalists sometimes scream into microphones to produce overloads, or a wailing guitar riff may be recorded with lots of overload artifacts.  As a deliberate effect placed carefully for the sake of drama, such brief moments of overload can add edginess to contemporary musical genres.  However, we’ve all likely heard other examples of overloads that seem more the product of high decibel levels than of any deliberate processing. It’s important to differentiate a deliberate effect from an accidental one.  In music at least, we always want to control the final outcome of the mix, including the presence or absence of overload distortion.
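
For those who want that edge while still controlling the outcome, overload can be introduced as a deliberate processing step rather than an accident of level. One common approach is a “soft clipper,” a waveshaper that rounds peaks off smoothly; here is a minimal sketch, with an invented drive value:

```python
import math

def soft_clip(samples, drive=4.0):
    """Deliberate, controlled overload: a tanh waveshaper.
    Peaks that would clip hard are saturated smoothly, and the
    output always stays within -1.0..1.0.  The drive amount
    (how hard the signal is pushed into the curve) is illustrative."""
    return [math.tanh(drive * s) for s in samples]

# A transient that would clip hard at +/-1.0 is rounded off instead.
shaped = soft_clip([0.2, 0.9, 1.4, -1.2])
```

Because the distortion is generated in a controlled stage rather than by overloading the master bus, its amount and placement remain entirely the composer’s decision.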

Conclusion

Joonas wound up his talk by urging attendees to always give priority to the elements in the sound mix that are most important.  That would be a good guiding principle for music mixing as well.  Joonas is an interesting thinker in the area of game sound design.  He can be followed at his Twitter account, @KissaKolme.  Please feel free to comment below about anything you’ve read in this blog, and let me know how you feel about the ideas we’ve discussed.  I’d love to read your thoughts!

Winifred Phillips is an award-winning video game music composer whose most recent project is the triple-A first person shooter Homefront: The Revolution. Her credits include five of the most famous and popular franchises in video gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the Massachusetts Institute of Technology Press. As a VR game music expert, she writes frequently on the future of music in virtual reality video games. Follow her on Twitter @winphillips.

Organic Sound in a Virtual Environment, Pt. 2

When I talked about some basic techniques for achieving a more organic sound with virtual instruments, I didn’t mention any mixing considerations. Since this is a complex subject that goes far beyond the scope of a single blog, I’ll probably be returning to it several times… but let’s start with a general overview, and some thoughts about the orchestral recording environment. Mixing for a virtual orchestra can be a highly contentious subject, with controversy attending nearly every topic of conversation, from reverb to volume levels to panning. It’s good to remember, though, that live orchestral tracks encompass a pretty broad range of recording environments and mixing approaches, which means that there can’t (and shouldn’t) be just one “correct” approach when working with virtual orchestras.

Some live orchestral recordings take the studio approach, in that they are fairly dry and close-mic’d in a small recording environment that’s acoustically treated to eliminate reflections and other room artifacts, leaving only the direct sound. Other orchestral recording sessions are clearly conducted in a very large space such as a concert hall, which gives the sensation of both distance and complex reverberation, reflections and tonal coloration caused by the unique properties of the space. Both the studio and the concert hall environments for orchestral recordings are entirely legitimate, and each affords the composer its own set of advantages and disadvantages. The concert hall environment provides a richness of tone and texture from the acoustic properties of the room, but instruments in this space can sound distant, and small performance details may not come through clearly. The studio approach allows the instruments of the orchestra to be captured with greater sonic detail and intimacy, but the dryness of the space may have a detrimental effect on the ability of the orchestral sections to blend properly, requiring artificial reverb to be applied during the mixing process.

What does this mean for virtual orchestras? Well, before we think about the recording environment that we’d like to simulate, we have to evaluate our orchestral sample libraries in terms of the environments in which they were originally recorded. Are they wet or dry? Some libraries are reverberant to the point of sounding dripping wet. Others are dry as a bone. This can make it difficult to use these libraries in tandem, but I usually don’t let this deter me. We can apply reverb to the dry instrumental samples so that they match the acoustic properties of the wet ones. I find that a process of trial-and-error can yield satisfying results here. However, there’s no way to completely remove the reverb from an orchestral library that was recorded wet… so what if our hearts were set on that intimate studio sound? Well, there are ways to address this issue. For instance, we can assume that our orchestral recording was captured in a large space, but that many microphones were positioned in tight proximity to the important players so that the subtle nuances of their performances would come through. When we layer our dry instruments with our wet ones, we can send some of the dry signal out for reverb processing (to account for the larger space) and mix those echoes and reflections with the reverb tail found naturally in the wet recordings. This will allow the dry instrument groups to sit in the larger space, but still feel intimate.
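
The reverb-send idea above can be sketched in miniature. In the toy example below, a dry sample is convolved with a reflections-only impulse response and mixed back in at a send level, so the close-mic’d signal picks up the room of the wet library while staying up front. In practice this would be done with a convolution reverb plug-in; the impulse response and send level here are invented:

```python
def convolve(signal, impulse_response):
    """Plain convolution: what a convolution reverb does, applying
    a room's impulse response to a dry signal."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def place_in_space(dry, impulse_response, send=0.3):
    """Mix a dry signal with a reverb send: the direct sound stays
    up front, while a portion of it picks up the room's reflections
    so it can sit in the same space as a wet library."""
    wet = convolve(dry, impulse_response)
    out = [send * w for w in wet]
    for i, s in enumerate(dry):
        out[i] += s
    return out

# Reflections only (no direct-path tap at index 0); invented values.
room_ir = [0.0, 0.6, 0.3, 0.15]
placed = place_in_space([1.0, 0.0], room_ir)
```

Because the impulse response contains no direct-path tap, the dry signal is never doubled; only the room’s reflections are added, at a level the send controls.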

Now, what do we do about the orchestral sections that still feel purely wet? They’ll likely sound quite distant and washy. We can counteract this by layering dry instrumental soloists into these sections, sending their signal out for reverb processing as we did before. This can work very well for section leaders such as the first violin. When I’m applying this technique, I’ll sometimes evaluate the number of players in wet orchestral sections, and if it would be realistically feasible, I will boost this number by adding a dry chamber section. For instance, I might add a dry chamber violin section consisting of four players to a very wet 1st violin orchestral section consisting of eleven players. This will give me a resulting 1st violin section with fifteen players, which is large but not unreasonable. I’ll add some reverb to the dry instruments so that they’ll give the impression that they exist in the same space as the others, but that they are more closely mic’d.

These are just a few ideas on how to reconcile wet and dry instrumental recordings. As always, experimentation and close listening will be our best guide in determining if these techniques are achieving the desired results. In the future, I’ll talk a bit more about other mixing concerns, such as panning, level balancing, and mastering techniques. Hope you enjoyed this blog entry, and please share your own methods and questions in the comments below!