Video game music systems at GDC 2017: pros and cons for composers

Video game composer Winifred Phillips, pictured in her music production studio working on the music of LittleBigPlanet 2 Cross Controller

By Winifred Phillips | Contact | Follow

Welcome back to our three-article series dedicated to collecting and exploring the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers shared ideas they’d developed in the process of creating interactivity in the music of their own game projects.  We’re looking at these ideas side-by-side to cultivate a sense of the “bigger picture” when it comes to leading-edge thinking for music interactivity in games. In the first article, we looked at the basic nature of the five interactive music systems discussed in these GDC 2017 presentations.

If you haven’t read part one of this article series, please go do that now and come back.

Okay, so let’s now contemplate some simple but important questions: why were those systems used?  What was attractive about each interactive music strategy, and what were the challenges inherent in using those systems?

The Pros and Cons

Illustration of a GDC 2017 presentation, from the article by Winifred Phillips (video game composer).

In this discussion of the advantages and disadvantages of musical interactivity, let’s start with the viewpoint of Sho Iwamoto, audio programmer of Final Fantasy XV for Square Enix.  He articulates a perspective on interactive music that’s rarely given voice in the game audio community.  “So first of all,” Iwamoto says, “I want to clarify that the reason we decided to implement interactive music is not to reduce repetition.”

From the article by game composer Winifred Phillips - an illustration of the game Final Fantasy XV.

Those of us who have been in the game audio community for years have probably heard countless expert discussions of how crucial it is for video game composers to reduce musical repetition, and how powerful interactivity can be in eliminating musical recurrences in a game.  But for Iwamoto, this consideration is entirely beside the point.  “Repeating music is not evil,” he says. “Of course, it could be annoying sometimes, but everyone loves to repeat their favorite music, and also, repetition makes the music much more memorable.”  So, if eliminating repetition was not at the top of Iwamoto’s list of priorities, then what was?

“We used (musical interactivity) to enhance the user’s emotional experience by playing music that is more suitable to the situation,” Iwamoto explains, also adding that he wanted “to make transitions musical, as much as possible.”  So, if the best advantage of musical interactivity for Iwamoto was an enhanced emotional experience for gamers, then what was the biggest drawback?

For Iwamoto, the greatest struggle arose from the desire to focus on musicality and melodic content while presenting a traditionally epic musical score that maintained its integrity within an interactive framework. Often, these two imperatives seemed to collide.  “At first it was like a crash of the epic music and the interactive system,” he says.  “How can I make the music interactive while maintaining its epic melodies? Making music interactive could change or even screw up the music itself, or make the music not memorable enough.”

 

My perspective on epic interactive music

A photo of video game composer Winifred Phillips working in her music production studio on the music of LittleBigPlanet Cross Controller.

Sho Iwamoto makes a very good point about the difficulty of combining epic musicality with an interactive structure.  For the popular LittleBigPlanet Cross Controller game for Sony Europe, I dealt with a very similar conundrum.  The development team asked me to create an epic orchestral action-adventure track that would be highly melodic but also highly interactive.  Balancing the needs of the interactivity with the needs of an expressive action-adventure orchestral score proved to be very tricky.  I structured the music around a six-layer system of vertical layering, wherein the music was essentially disassembled by the music engine and reassembled in different instrument combinations depending on the player’s progress.  Here’s a nine-minute gameplay video in which this single piece of music mutates and changes to accommodate the gameplay action:


 

Illustration of a GDC 2017 presentation, from the article by Winifred Phillips (video game composer).

From the article by game composer Winifred Phillips - an illustration of the game Vessel.

Leonard J. Paul’s work on the platformer Vessel also hinged on a vertical layering music system, but for Paul, the biggest advantage of vertical layering was its ability to adapt existing music into an interactive framework.  Working with multiple licensing agencies, the development team for Vessel was able to obtain a selection of songs for their game project while it was still early in development.  The songs became rich sources of inspiration for the development team.  “They had made the game listening to those songs so the whole entire game was steeped in that music,” Paul observes.

Nevertheless, the situation also presented some distinct disadvantages.  “The licensing for those ten tracks took eight months,” Paul admits, then he goes on to describe some of the other problems inherent in adapting preexisting music for interactivity.  “It’s really hard to remix someone else’s work so that it has contour yet it stays consistent,” Paul says, “So it doesn’t sound like, oh, I figured out something new in the puzzle or I did something wrong, just because there’s something changing in the music.” In order to make the music convey a single, consistent atmosphere, Paul devoted significant time and energy to making subtle, unnoticeable adjustments to the songs.  “It’s very hard to make your work transparent,” Paul points out.
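Both my LittleBigPlanet Cross Controller score and Paul’s adaptation work on Vessel hinge on vertical layering, so it might be useful to see the basic control logic spelled out. Here’s a rough Python sketch of the idea, purely for illustration: the layer names, the progress rule and the fade speed are assumptions of mine, not actual engine code from either game.

FADE_PER_SECOND = 0.5  # gain change per second while a layer fades in or out

# Six synchronized stems of the same piece, from sparse to full arrangement.
LAYERS = ["percussion", "bass", "low_strings", "high_strings", "brass", "choir"]

def target_gains(progress):
    """Decide which stems should be audible for a given player progress (0.0 to 1.0)."""
    audible = 1 + int(progress * (len(LAYERS) - 1))
    return {name: (1.0 if i < audible else 0.0) for i, name in enumerate(LAYERS)}

def fade_toward(gains, targets, dt):
    """Ease each layer's gain toward its target so stems never snap on or off."""
    step = FADE_PER_SECOND * dt
    faded = {}
    for name, gain in gains.items():
        if gain < targets[name]:
            faded[name] = min(targets[name], gain + step)
        else:
            faded[name] = max(targets[name], gain - step)
    return faded

# Example: the player is 60% of the way through the level; run four half-second updates.
gains = {name: 0.0 for name in LAYERS}
for _ in range(4):
    gains = fade_toward(gains, target_gains(0.6), dt=0.5)
print(gains)  # the first four stems have faded up to 1.0; the last two stay silent

The point of the sketch is simply that all of the stems stay in sync while only their gains change, which is what allows a single composition to reshape itself around the player without ever stopping.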


 

Illustration of a GDC 2017 presentation, from the article by Winifred Phillips (video game composer).

For sound designer Steve Green’s work on the music of the underwater exploration game ABZU, the main advantage of the interactive music system was its ability to customize the musical content to the player’s progress by calling up location-specific tracks during exploration, without needing to make any significant changes to the content of those music files.  “So it’s mainly not the fact that we’re changing the music itself as you’re playing it, we’re just helping the music follow you along,” Green explains.  This enabled the music to “keep up with you as you’re playing the game, so it’s still interactive in a sense in that it’s changing along with the player.”
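To make that concrete, here’s a tiny Python sketch of location-driven music selection in the spirit of what Green describes. The zone names and file paths are invented for illustration (they aren’t ABZU’s actual data); the important part is that the tracks themselves stay untouched, and the system simply decides which one should accompany the player.

# Zone names and file paths below are invented for illustration.
ZONE_TRACKS = {
    "kelp_forest": "music/kelp_forest.ogg",
    "open_ocean":  "music/open_ocean.ogg",
    "ruins":       "music/ruins.ogg",
}

current_track = None

def on_player_entered_zone(zone):
    """Called by the game whenever the player crosses into a new area."""
    global current_track
    track = ZONE_TRACKS.get(zone)
    if track and track != current_track:
        # In a real game this is where you'd ask the audio engine to crossfade.
        print(f"crossfade: {current_track} -> {track}")
        current_track = track

on_player_entered_zone("kelp_forest")
on_player_entered_zone("kelp_forest")   # same zone, so nothing changes
on_player_entered_zone("open_ocean")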

From the article by game composer Winifred Phillips - an illustration of the game ABZU.

While this was highly desirable, it also created some problems when one piece of music ended and another began, particularly if the contrast between the two tracks was steep.  “The dilemma we faced was going in from track one to track two,” Green observes.  For instance, if an action-oriented piece of music preceded a more relaxed musical composition, then “there was a high amount of energy that you just basically need to get in and out of.”
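I’ll share my own experience with this kind of switch-over in the next section, but first, here’s a hedged Python sketch of one common approach: wait for the next musical boundary and bridge the gap with a short transition cue. The tempo, the bar-based grid and the cue names below are illustrative assumptions rather than anything taken from ABZU or from my own projects.

import math

BPM = 140                          # illustrative tempo
SECONDS_PER_BAR = 4 * 60.0 / BPM   # assuming 4/4 time

def next_bar_boundary(playback_pos):
    """Time (in seconds) of the next bar line, so the switch lands on a musical boundary."""
    return math.ceil(playback_pos / SECONDS_PER_BAR) * SECONDS_PER_BAR

def schedule_switch(playback_pos, from_cue, to_cue):
    """Build a tiny event list: a one-bar transition stinger bridges the two cues."""
    boundary = next_bar_boundary(playback_pos)
    return [
        (boundary, f"start transition_stinger, begin fading out {from_cue}"),
        (boundary + SECONDS_PER_BAR, f"start {to_cue} as the stinger resolves"),
    ]

# Example: the player triggers the high-energy cue 10.3 seconds into the current track.
for when, action in schedule_switch(10.3, "relaxed_cue", "high_energy_cue"):
    print(f"{when:6.2f}s  {action}")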

 

My perspective on interactive transitions

Photo of video game composer Winifred Phillips working in her music production studio on the music of the Speed Racer video game.

Steve Green makes a great point about the need for transitions when moving between different energy levels in an interactive musical score.  I encountered a similar problem regarding disparate energy levels that required transitions when I composed the music for the Speed Racer video game (published by Warner Bros Interactive).  During races, the player would have the option to enter a special mode called “Zone Mode” in which their vehicle would travel much faster and become instantly invincible.  During those sequences, the music switched from the main racing music to a much more energetic track, and it became important for me to build a transition into that switch-over so that the change wouldn’t be jarring to the player.  I describe the process in this tutorial video:


 

Illustration of a GDC 2017 presentation, from the article by Winifred Phillips (video game composer).

While sometimes a game audio team will choose an interactive music system strictly based on its practical advantages, there are also times in which the decision may be influenced by more emotional factors.  “We love MIDI,” confesses Becky Allen, audio director for the Plants vs. Zombies: Heroes game for mobile devices.  In fact, the development team, PopCap Games, has a long and distinguished history of innovative musical interactivity using the famous MIDI protocol.  During the Plants vs. Zombies: Heroes project, MIDI was a powerful tool for the audio team.  “It really was flexible, it was something you really could work with,” Allen says.

From the article by game composer Winifred Phillips - an illustration of the game Plants vs. Zombies: Heroes.

However, that didn’t mean that the MIDI system didn’t create some problems for the audio team.  Early in development for Plants vs. Zombies: Heroes, the team decided to record their own library of 24 musical instrument sounds for the game.  But during initial composition, those instruments weren’t yet available, which led to an initial reliance on a pre-existing library (East West Symphonic Orchestra).  “We were undergoing this sample library exercise, knowing that we’d be moving over to those samples eventually,” Allen says. The East West samples served their purpose during composition, but the two libraries were fundamentally different. “Our PopCap sample library is fantastic too, but it’s totally different,” Allen adds.  “So the sounds were not the same, and the music, even though they were the same cues, just felt wrong.”  Allen advises, “I think it’s very important, if you can, to write to the sample library that you’ll be using ultimately at the end.”


 

Illustration of a GDC 2017 presentation, from the article by Winifred Phillips (video game composer).

From the article by game composer Winifred Phillips - an illustration of the game No Man's Sky.

For Paul Weir’s work on the space exploration game No Man’s Sky, the motivation to use a procedural music system was also partly influenced by emotional factors.  “I really enjoy ceding control to the computer, giving it rules and letting it run,” Weir confides.  But there were other motivating influences as well. According to Weir, the advantages of procedural music rest with its unique responsiveness to in-game changes.  “Procedural audio, to make it different, to make it procedural, it has to be driven by the game,” Weir says.  “What are you doing, game? I’m going to react to that in some way, and that’s going to be reflected in the sound I’m producing. In order to do that,” Weir adds, “it has to use some form of real-time generated sound.”  According to Weir, “procedural audio is the creation of sound in real-time, using synthesis techniques such as physical modeling, with deep links into game systems.”
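As a toy illustration of what “driven by the game” can mean in practice, here’s a small Python sketch of rule-based note generation that reacts to a single game parameter. This is emphatically not how No Man’s Sky works; the scale, the “danger” value and the density rule are assumptions I’ve invented to show the shape of the idea.

import random

rng = random.Random(0)            # seeded so the example is repeatable
SCALE = [0, 2, 3, 5, 7, 8, 10]    # C natural minor as semitone offsets (an assumption)

def generate_bar(danger, root=48):
    """Return MIDI note numbers for one generated bar.
    Higher danger means more notes, drawn from a higher register."""
    note_count = 2 + int(danger * 6)   # 2 notes when calm, up to 8 under threat
    register = 12 * int(danger * 2)    # climb up to two octaves as danger rises
    return [root + register + rng.choice(SCALE) for _ in range(note_count)]

print("calm:  ", generate_bar(danger=0.1))
print("danger:", generate_bar(danger=0.9))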

While this gives a procedural music system the potential to be the most pliable and reactive system available for modern game design, there are steep challenges inherent in its structure.  “Some of the difficulties of procedurally generated content,” Weir explains, “is to give a sense of its meaningfulness, like it feels like it’s hand crafted.” In a moment of personal reflection, Weir shares, “One of my big issues is that if you have procedural audio, the perception of it has to be as good as traditional audio. It’s no good if you compromise.”

 


 

So, for each of these interactive music systems there were distinct advantages and disadvantages.  In the third and final article of this series, we’ll get down to some nitty-gritty details of how these interactive systems were put to use.  Thanks for reading, and please feel free to leave your comments in the space below!

 

Photo of video game composer Winifred Phillips in her music production studio.

Winifred Phillips is an award-winning video game music composer whose most recent projects are the triple-A first person shooter Homefront: The Revolution and the Dragon Front VR game for Oculus Rift. Her credits include games in five of the most famous and popular franchises in gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the MIT Press. As a VR game music expert, she writes frequently on the future of music in virtual reality games. Follow her on Twitter @winphillips.

Video game music systems at GDC 2017: what are composers using?

By video game music composer Winifred Phillips | Contact | Follow

Video game composer Winifred Phillips, presenting at the Game Developers Conference 2017.

The 2017 Game Developers Conference could be described as a densely-packed deep-dive exploration of the state-of-the-art tools and methodologies used in modern game development.  This description held especially true for the game audio track, wherein top experts in the field offered a plethora of viewpoints and advice on the awesome technical and artistic challenges of creating great sound for games. I’ve given GDC talks for the past three years now (see photo), and every year I’m amazed at the breadth and diversity of the problem-solving approaches discussed by my fellow GDC presenters.  Often I’ll emerge from the conference with the impression that we game audio folks are all “doing it our own way,” using widely divergent strategies and tools.

This year, I thought I’d write three articles to collect and explore the ideas that were discussed in five different GDC audio talks.  During their presentations, these five speakers all shared their thoughts on best practices and methods for instilling interactivity in modern game music.  By absorbing these ideas side-by-side, I thought we might gain a sense of the “bigger picture” when it comes to the current leading-edge thinking for music interactivity in games. In the first article, we’ll look at the basic nature of these interactive systems.  We’ll devote the second article to the pros and cons of each system, and in the third article we’ll look at tools and tips shared by these music interactivity experts. Along the way, I’ll also be sharing my thoughts on the subject, and we’ll take a look at musical examples from some of my own projects that demonstrate a few ideas explored in these GDC talks.

So, let’s begin with the most obvious question.  What kind of interactive music systems are game audio folks using lately?


VR Game Composer: Music Beyond the Virtual

Photo of video game music composer Winifred Phillips, from the article entitled "VR Game Composer: Music Beyond the Virtual."

Welcome to the third installment in our series on the fascinating possibilities created by virtual reality motion tracking, and how the immersive nature of VR may serve to inspire us as video game composers and afford us new and innovative tools for music creation.  As modern composers, we work with a lot of technological tools, as I can attest from the studio equipment that I rely on daily (pictured left). Many of these tools communicate with each other by virtue of the Musical Instrument Digital Interface protocol, commonly known as MIDI – a technical standard that allows music devices and software to interact.

Image depicting VR apps from the article by Winifred Phillips, Game Music Composer.

In order for a VR music application to control and manipulate external devices, the software must be able to communicate by way of the MIDI protocol – and that’s an exciting development in the field of music creation in VR!

This series of articles focuses on what VR means for music composers and performers. In previous installments, we’ve had some fun exploring new ways to play air guitar and air drums, and we’ve looked at top VR applications that provide standalone virtual instruments and music creation tools.  Now we’ll be talking about the most potentially useful application of VR for video game music composers – the ability to control our existing music production tools from within a VR environment.

We’ll explore three applications that employ MIDI to connect music creation in VR to our existing music production tools. But first, let’s take a look at another, much older gesture-controlled instrument that in some ways is quite reminiscent of these motion-tracking music applications for VR:


Video Game Music Production Tips from GDC 2016

Game Composer Winifred Phillips during her game music presentation at the Game Developers Conference 2016.

I was pleased to give a talk about composing music for games at the 2016 Game Developers Conference (pictured left).  GDC took place this past March in San Francisco – it was an honor to be a part of the audio track again this year, which offered a wealth of awesome educational sessions for game audio practitioners.  So much fun to see the other talks and learn about what’s new and exciting in the field of game audio!  In this blog, I want to share some info that I thought was really interesting from two talks that pertained to the audio production side of game development: composer Laura Karpman’s talk about “Composing Virtually, Sounding Real” and audio director Garry Taylor’s talk on “Audio Mastering for Interactive Entertainment.”  Both sessions had some very good info for video game composers who may be looking to improve the quality of their recordings.  Along the way, I’ll also be sharing a few of my own personal viewpoints on these music production topics, and I’ll include some examples from one of my own projects, the Ultimate Trailers album for West One Music, to illustrate ideas that we’ll be discussing.  So let’s get started!


MIDI for the Game Music Composer: Wwise 2014.1


MIDI seems to be making a comeback.

At least, that was my impression a couple of months ago when I attended the audio track of the Game Developers Conference.  Setting a new record for attendance, GDC hosted over 24,000 game industry pros who flocked to San Francisco’s Moscone Center in March for a full week of presentations, tutorials, panels, awards shows, press conferences and a vibrant exposition floor filled with new tech and new ideas. As one of those 24,000 attendees, I enjoyed meeting up with lots of my fellow game audio folks, and I paid special attention to the presentations focusing on game audio. Amongst the tech talks and post-mortems, I noticed a lot of buzz about a subject that used to be labeled as very old-school: MIDI.

This was particularly emphasized by all the excitement surrounding the new MIDI capabilities in the Wwise middleware. In October of 2014, Audiokinetic released the most recent version of Wwise (2014.1), which introduced a number of enhanced features, including “MIDI support for interactive music and virtual instruments (Sampler and Synth).” Wwise now allows the incorporation of MIDI that triggers either a built-in sound library in Wwise or a user-created one. Since I talk about the future of MIDI game music in my book, A Composer’s Guide to Game Music, and since this has become a subject of such avid interest in our community, I thought I’d do some research on this newest version of Wwise and post a few resources that could come in handy for any of us interested in embarking on a MIDI game music project using Wwise 2014.1.

The first is a video produced by Damian Kastbauer, technical audio lead at PopCap Games and the producer and host of the now-famous Game Audio Podcast series.  This video was released in April of 2014, and included a preview of the then-forthcoming MIDI and synthesizer features of the new Wwise middleware tool.  In this video, Damian takes us through the newest version of the “Project Adventure” tutorial prepared by Audiokinetic, makers of Wwise.  In the process, he gives us a great, user-friendly introduction to the MIDI capabilities of Wwise.



The next videos were produced by Berrak Nil Boya, a composer and contributing editor to the Designing Sound website.  In these videos, Berrak explores some of the more advanced applications of the MIDI capabilities of Wwise, starting with the procedure for routing MIDI data directly into Wwise from more traditional MIDI sequencer software such as that found in a Digital Audio Workstation (DAW) application.  This process allows a composer to work within more traditional music software and then directly route the MIDI output into Wwise.  Berrak takes us through the process in this two-part video tutorial:


Finally, Berrak Nil Boya has created a video tutorial on the integration of Wwise into Unity 5, using MIDI.  Her explanation of the preparation of a soundbank and the association of MIDI note events with game events is very interesting, and provides a nicely practical application of the MIDI capability of Wwise.

MIDI in Wwise for the Game Music Composer: Peggle Blast


In a previous blog post, we took a look at a few tutorial resources for the latest version of the Wwise audio middleware.  One of the newest innovations in the Wwise software package is a fairly robust MIDI system.  This system affords music creators and implementers the opportunity to avail themselves of the extensive adaptive possibilities of the MIDI format from within the Wwise application.  Last month, during the Game Developers Conference in the Moscone Center in San Francisco, some members of the PopCap audio development team presented a thorough, step-by-step explanation of the benefits of this MIDI capability for one of their latest projects, Peggle Blast.  Since my talk during the Audio Bootcamp at GDC focused on interactive music and MIDI (with an eye on the role of MIDI in both the history and future of game audio development), I thought that we could all benefit from a summation of some of the ideas discussed during the Peggle Blast talk, particularly as they relate to dynamic MIDI music in Wwise.  In this blog, I’ve tried to convey some of the most important takeaways from this GDC presentation.


“Peggle Blast: Big Concepts, Small Project” was presented on Thursday, March 5th by three members of the PopCap audio team: technical sound designer RJ Mattingly, audio lead Jaclyn Shumate, and senior audio director Guy Whitmore.  The presentation began with a quote from Igor Stravinsky:

The more constraints one imposes, the more one frees oneself, and the arbitrariness of the constraint serves only to maintain the precision of the execution.

This idea became a running theme throughout the presentation, as the three audio pros detailed the constraints under which they worked, including:

  1. A 5 MB memory limit for all audio assets
  2. Limited CPU
  3. A 2.5 MB memory allocation for the music elements

These constraints were a result of the mobile platforms (iOS and Android) for which Peggle Blast had been built.  For this reason, the music team focused their attention on sounds that could convey lots of emotion while also maintaining a very small file size.  Early experimentation with tracks structured around the use of a music box instrument led the team to realize that they still needed to replicate the musical experience from the full-fledged console versions of the game.  A simple music-box score was too unsatisfying, particularly for players who were familiar with the music from the previous installments in the franchise.  With that in mind, the team concentrated on very short orchestral samples taken from the previous orchestral session recordings for Peggle 2.  Let’s take a look at a video from those orchestral sessions:

Using those orchestral session recordings, the audio team created custom sample banks that were tailored specifically to the needs of Peggle Blast, focusing on lots of very short instrument articulations and performance techniques including:

  1. pizzicato
  2. marcato
  3. staccato
  4. mallets

A few instruments (including a synth pad and some orchestral strings) were edited to loop so that extended note performances became possible, but the large majority of instruments remained brief, punctuated sounds that did not loop.  These short sounds were arranged into sample banks in which one or two note samples would be used per octave of instrument range, and note tracking would transpose the sample to fill in the rest of the octave.  The sample banks consisted of a single layer of sound, which meant that the instruments did not adjust their character depending on dynamics/velocity.  In order to make the samples more musically pleasing, the built-in digital signal processing capability of Wwise was employed by way of a real-time reverb bus that allowed these short sounds to have more extended and natural-sounding decay times.
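For anyone curious about the arithmetic behind that kind of note tracking, here’s a small Python sketch: pick the nearest recorded sample, then transpose it with a playback-rate ratio of 2^(semitones/12). The root notes below are illustrative assumptions, not PopCap’s actual sample banks.

# MIDI note numbers of the recorded samples: roughly two per octave (an assumption).
SAMPLE_ROOTS = [48, 55, 60, 67, 72, 79]    # C3, G3, C4, G4, C5, G5

def pick_sample_and_rate(target_note):
    """Choose the closest recorded sample, plus the playback-rate multiplier
    (2 to the power of semitones/12) that transposes it to the requested pitch."""
    root = min(SAMPLE_ROOTS, key=lambda r: abs(r - target_note))
    rate = 2.0 ** ((target_note - root) / 12.0)
    return root, rate

for note in (50, 62, 70):
    root, rate = pick_sample_and_rate(note)
    print(f"MIDI {note}: play the sample rooted at {root} at {rate:.3f}x speed")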


The audio team worked with a beta version of Wwise 2014 during development of Peggle Blast, which allowed them to implement their MIDI score into the Unity game engine.  Composer Guy Whitmore wrote the music in a style of whimsically pleasant, non-melodic patterns that were structured into a series of chunks.  These chunks could be triggered according to the adaptive system in Peggle Blast, wherein the music went through key changes (invariably following the circle of fifths) in reaction to the player’s progress.  To better see how this works, let’s watch an example of some gameplay from Peggle Blast:

As you can see, very little in the way of a foreground melody existed in this game.  In place of a melody, foreground musical tones would be emitted when the Peggle ball hit pegs during its descent from the top of the screen.  These tones would follow a predetermined scale, and the system would choose which type of scale to trigger (major, natural minor, harmonic minor, or mixolydian) depending on the key in which the music was currently playing.  Information about the key was dropped into the music using markers that indicated where key changes took place, so that the Peggle ball would always trigger the correct type of scale at any given time.  The MIDI system did not have to store unique MIDI data for scales in every key change, but would instead calculate the key transpositions for each of the scale types, based on the current key of the music that was playing.
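Here’s a brief Python sketch of how that kind of key-aware scale selection can work: a single set of scale shapes is stored, and each peg hit is transposed at runtime into whatever key the music’s markers report, so no per-key note data ever needs to be saved. The chosen key, octave and ascending-hit behavior are my own illustrative assumptions, not the actual Peggle Blast implementation.

SCALE_SHAPES = {                          # semitone offsets from the key's root note
    "major":          [0, 2, 4, 5, 7, 9, 11],
    "natural_minor":  [0, 2, 3, 5, 7, 8, 10],
    "harmonic_minor": [0, 2, 3, 5, 7, 8, 11],
    "mixolydian":     [0, 2, 4, 5, 7, 9, 10],
}

def peg_hit_note(hit_index, current_key, scale, base_note=60):
    """MIDI note for the Nth peg hit: climb the chosen scale in whatever key
    the music's markers say we're currently in (current_key = semitones above C)."""
    shape = SCALE_SHAPES[scale]
    octave, degree = divmod(hit_index, len(shape))
    return base_note + current_key + shape[degree] + 12 * octave

# Example: the key-change marker says we're in G (7 semitones above C), mixolydian.
print([peg_hit_note(i, current_key=7, scale="mixolydian") for i in range(8)])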

The presentation ended with an emphasis on the memory savings and flexibility afforded by MIDI, and the advantages that MIDI presents to game composers and audio teams.  It was a very interesting presentation!  If you have access to the GDC Vault, you can watch a video of the entire presentation online.  Otherwise, there are plenty of other resources on the music of Peggle Blast, and I’ve included a few below:

Inside the Music of Peggle Blast – An Interview with Audio Director Guy Whitmore

Peggle Blast!  Peg Hits and the Music System, by RJ Mattingly

Real-Time Synthesis for Sound Creation in Peggle Blast, by Jaclyn Shumate

PopCap’s Guy Whitmore Talks Musical Trials And Triumphs On Peggle Blast

 

GDC Audio Bootcamp


The Game Developers Conference is nearly here!  It’ll be a fantastic week of learning and inspiration from March 2nd – March 6th.  On Tuesday March 3rd from 10am – 6pm, the GDC Audio Track will be hosting the ever-popular GDC Audio Bootcamp, and I’m honored to be an Audio Bootcamp speaker this year!

This will be the 14th year for the GDC Audio Bootcamp, and I’m delighted to join the 9 other speakers who will present this year:

  • Michael Csurics, Voice Director/Writer, The Brightskull Entertainment Group
  • Damian Kastbauer, Technical Audio Lead, PopCap Games
  • Mark Kilborn, Audio Director, Raven Software
  • Richard Ludlow, Audio Director, Hexany Audio
  • Peter McConnell, Composer, Little Big Note Music
  • Daniel Olsén, Audio, Independent
  • Winifred Phillips, Composer, Generations Productions LLC
  • Brian Schmidt, Founder, Brian Schmidt Studios
  • Scott Selfon, Principal Software Engineering Lead, Microsoft
  • Jay Weinland, Head of Audio, Bungie Studios

We’ll all be talking about creative, technical and logistical concerns as they pertain to game sound.  My talk will be from 11:15am to 12:15pm, and I’ll be focusing on “Advanced Composition Techniques for Adaptive Systems.”


Here’s a description of my Audio Bootcamp talk:

Interactive music technologies have swept across the video game industry, changing the way that game music is composed, recorded, and implemented. Horizontal Resequencing and Vertical Layering have changed the way that music is integrated in the audio file format, while MIDI, MOD and generative models have changed the landscape of music data in games.  With all these changes, how do the game composer, audio director, sound designer and audio engineer address these unique challenges?  This talk will present an overview of today’s interactive music techniques, including numerous strategies for the deployment of successful interactive music structures in modern games. Included in the talk: Vertical Layering in additive and interchange systems, how resequencing methods benefit from the use of digital markers, and how traditionally linear music can be integrated into an interactive music system.

Right after my Bootcamp presentation, all the Audio Bootcamp presenters and attendees will head off to the ever-popular Lunchtime Surgeries.  No, the attendees won’t actually be able to crack open the minds of the presenters and see what’s going on in there, but as a metaphor, it does represent the core philosophy of this lively event.  The Lunchtime Surgeries offer attendees a chance to sit with the presenters at large roundtables and ask lots of questions.  It’s one of the most popular portions of the bootcamp, and I’ll be looking forward to it!


If you’ll be attending the GDC Audio Track, then I highly recommend the Audio Bootcamp on Tuesday, March 3rd.  Hope to see you there!