Video game music systems at GDC 2017: pros and cons for composers

Video game composer Winifred Phillips, pictured in her music production studio working on the music of LittleBigPlanet 2 Cross Controller

By Winifred Phillips | Contact | Follow

Welcome back to our three-article series dedicated to collecting and exploring the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers shared ideas they’d developed in the process of creating interactivity in the music of their own game projects.  We’re looking at these ideas side-by-side to cultivate a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. In the first article, we looked at the basic nature of five interactive music systems discussed in these five GDC 2017 presentations:

If you haven’t read part one of this article series, please go do that now and come back.

Okay, so let’s now contemplate some simple but important questions: why were those systems used?  What was attractive about each interactive music strategy, and what were the challenges inherent in using those systems?

The Pros and Cons

In this discussion of the advantages and disadvantages of musical interactivity, let’s start with the viewpoint of Sho Iwamoto, audio programmer of Final Fantasy XV for Square Enix.  He articulates a perspective on interactive music that’s rarely given voice in the game audio community.  “So first of all,” Iwamoto says, “I want to clarify that the reason we decided to implement interactive music is not to reduce repetition.”

Those of us who have been in the game audio community for years have probably heard countless expert discussions of how crucial it is for video game composers to reduce musical repetition, and how powerful interactivity can be in eliminating musical recurrences in a game.  But for Iwamoto, this consideration is entirely beside the point.  “Repeating music is not evil,” he says. “Of course, it could be annoying sometimes, but everyone loves to repeat their favorite music, and also, repetition makes the music much more memorable.”  So, if eliminating repetition was not at the top of Iwamoto’s list of priorities, then what was?

“We used (musical interactivity) to enhance the user’s emotional experience by playing music that is more suitable to the situation,” Iwamoto explains, also adding that he wanted “to make transitions musical, as much as possible.”  So, if the best advantage of musical interactivity for Iwamoto was an enhanced emotional experience for gamers, then what was the biggest drawback?

For Iwamoto, the greatest struggle arose from the desire to focus on musicality and melodic content, with the intent to present a traditionally epic musical score that maintained its integrity within an interactive framework. Often, these two imperatives seemed to collide head-on.  “At first it was like a crash of the epic music and the interactive system,” he says.  “How can I make the music interactive while maintaining its epic melodies? Making music interactive could change or even screw up the music itself, or make the music not memorable enough.”

 

My perspective on epic interactive music

Sho Iwamoto makes a very good point about the difficulty of combining epic musicality with an interactive structure.  For the popular LittleBigPlanet Cross Controller game for Sony Europe, I dealt with a very similar conundrum.  The development team asked me to create an epic orchestral action-adventure track that would be highly melodic but also highly interactive.  Balancing the needs of the interactivity with the needs of an expressive action-adventure orchestral score proved to be very tricky.  I structured the music around a six-layer system of vertical layering, wherein the music was essentially disassembled by the music engine and reassembled in different instrument combinations depending on the player’s progress.  Below is a simple sketch of how that kind of layer-driven system can work, followed by a nine-minute gameplay video in which this single piece of music mutates and changes to accommodate the gameplay action:
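To make the mechanics of vertical layering a bit more concrete, here’s a minimal Python sketch of the general technique. It’s purely illustrative (not the actual music engine used in LittleBigPlanet Cross Controller), and the stem names and the cumulative fade-in scheme are invented for the example:

```python
# A minimal, hypothetical sketch of vertical layering: one piece of music is
# delivered as several synchronized stems, and the game fades stems in and
# out based on the player's progress. Stem names are invented placeholders.

# Six hypothetical stems, ordered from sparsest to fullest.
LAYERS = ["percussion", "bass", "strings", "brass", "choir", "lead_melody"]

def layer_volumes(progress: float) -> dict[str, float]:
    """Map player progress (0.0 to 1.0) to a volume for each stem.

    Stems are brought in cumulatively, so the arrangement grows fuller as
    the player advances; all stems stay time-aligned, so any combination
    still sounds like one composition.
    """
    volumes = {}
    for index, layer in enumerate(LAYERS):
        # Each layer fades in across its own slice of the progress range.
        fade_start = index / len(LAYERS)
        fade_end = (index + 1) / len(LAYERS)
        if progress <= fade_start:
            volumes[layer] = 0.0
        elif progress >= fade_end:
            volumes[layer] = 1.0
        else:
            volumes[layer] = (progress - fade_start) / (fade_end - fade_start)
    return volumes

if __name__ == "__main__":
    for progress in (0.1, 0.5, 0.9):
        print(progress, layer_volumes(progress))
```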


 

Leonard J. Paul’s work on the platformer Vessel also hinged on a vertical layering music system. However, the biggest advantage of the vertical layering music system for Paul was in its ability to adapt existing music into an interactive framework.  Working with multiple licensing agencies, the development team for Vessel was able to obtain a selection of songs for their game project while it was still early in development.  The songs became rich sources of inspiration for the development team.  “They had made the game listening to those songs so the whole entire game was steeped in that music,” Paul observes.

Nevertheless, the situation also presented some distinct disadvantages.  “The licensing for those ten tracks took eight months,” Paul admits, then he goes on to describe some of the other problems inherent in adapting preexisting music for interactivity.  “It’s really hard to remix someone else’s work so that it has contour yet it stays consistent,” Paul says, “So it doesn’t sound like, oh, I figured out something new in the puzzle or I did something wrong, just because there’s something changing in the music.” In order to make the music convey a single, consistent atmosphere, Paul devoted significant time and energy to making subtle, unnoticeable adjustments to the songs.  “It’s very hard to make your work transparent,” Paul points out.


 

For sound designer Steve Green’s work on the music of the underwater exploration game ABZU, the main advantage of their use of an interactive music system was in the system’s ability to customize the musical content to the progress of the player by calling up location-specific tracks during exploration, without needing to make any significant changes to the content of those music files.  “So it’s mainly not the fact that we’re changing the music itself as you’re playing it, we’re just helping the music follow you along,” Green explains.  This enabled the music to “keep up with you as you’re playing the game, so it’s still interactive in a sense in that it’s changing along with the player.”
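To illustrate the general approach in miniature, here’s a hypothetical Python sketch (not ABZU’s actual music system; the region names and cue filenames are invented) in which the music simply follows the player by swapping cues only when a new area is entered:

```python
# A minimal, hypothetical sketch of location-driven music selection: the
# underlying tracks aren't altered, the system simply chooses which cue
# accompanies the player's current area. All names here are placeholders.
from typing import Optional

REGION_CUES = {
    "kelp_forest": "explore_kelp.ogg",
    "open_ocean": "explore_ocean.ogg",
    "deep_trench": "explore_trench.ogg",
}

class LocationMusicFollower:
    """Requests a new cue only when the player actually changes region."""

    def __init__(self):
        self.current_region = None

    def update(self, region: str) -> Optional[str]:
        """Return the cue to start, or None if the music should continue."""
        if region == self.current_region:
            return None  # same area: let the current track keep playing
        self.current_region = region
        return REGION_CUES.get(region, "explore_default.ogg")

if __name__ == "__main__":
    follower = LocationMusicFollower()
    for region in ["open_ocean", "open_ocean", "kelp_forest", "deep_trench"]:
        cue = follower.update(region)
        if cue:
            print(f"Entering {region}: start {cue}")
```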

While this was highly desirable, it also created some problems when one piece of music ended and another began, particularly if the contrast between the two tracks was steep.  “The dilemma we faced was going in from track one to track two,” Green observes.  For instance, if an action-oriented piece of music preceded a more relaxed musical composition, then “there was a high amount of energy that you just basically need to get in and out of.”

 

My perspective on interactive transitions

Steve Green makes a great point about the need for transitions when moving between different energy levels in an interactive musical score.  I encountered a similar problem regarding disparate energy levels that required transitions when I composed the music for the Speed Racer video game (published by Warner Bros Interactive).  During races, the player would have the option to enter a special mode called “Zone Mode” in which their vehicle would travel much faster and would become instantly invincible.  During those sequences, the music switched from the main racing music to a much more energetic track, and it became important for me to build a transition into that switch-over so that the change wouldn’t be jarring to the player.  Below is a rough sketch of that kind of transition logic, followed by a tutorial video in which I describe the process:
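Here’s a rough idea of what a musically aligned switch might look like, expressed as a small Python sketch. The cue names, beat math, and use of a transition stinger are hypothetical stand-ins, not the actual Speed Racer implementation:

```python
# A minimal, hypothetical sketch of transition logic between cues of very
# different energy: rather than cutting abruptly, the switch is deferred to
# the next bar line and bridged with a short transition stinger so the
# change lands musically. Cue names are invented placeholders.

BEATS_PER_BAR = 4

def schedule_transition(current_beat: float) -> dict:
    """Plan a switch from the racing cue to the high-energy zone-mode cue.

    The swap waits for the next bar boundary, and a brief transition
    stinger is overlaid to smooth the jump in energy.
    """
    next_bar_beat = ((int(current_beat) // BEATS_PER_BAR) + 1) * BEATS_PER_BAR
    return {
        "stop": "racing_main_theme",
        "start": "zone_mode_theme",
        "overlay": "zone_transition_stinger",
        "at_beat": next_bar_beat,   # wait for the bar line, not this instant
    }

if __name__ == "__main__":
    plan = schedule_transition(current_beat=13.7)
    print(plan)  # the switch waits until beat 16, the next bar boundary
```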


 

While sometimes a game audio team will choose an interactive music system strictly based on its practical advantages, there are also times in which the decision may be influenced by more emotional factors.  “We love MIDI,” confesses Becky Allen, audio director for the Plants vs. Zombies: Heroes game for mobile devices.  In fact, the development team, PopCap Games, has a long and distinguished history of innovative musical interactivity using the famous MIDI protocol.  During the Plants vs. Zombies: Heroes project, MIDI was a powerful tool for the audio team.  “It really was flexible, it was something you really could work with,” Allen says.

However, that didn’t mean that the MIDI system didn’t create some problems for the audio team.  Early on in development for Plants vs. Zombies: Heroes, the team decided to record their own library of 24 musical instrument sounds for the game.  But during initial composition, those instruments weren’t yet available, which led to an initial reliance on a pre-existing library (East West Symphonic Orchestra).  “We were undergoing this sample library exercise, knowing that we’d be moving over to those samples eventually,” Allen says. The East West samples served well at first, but they were fundamentally different from the instruments the team eventually recorded. “Our PopCap sample library is fantastic too, but it’s totally different,” Allen adds.  “So the sounds were not the same, and the music, even though they were the same cues, just felt wrong.”  Allen advises, “I think it’s very important, if you can, to write to the sample library that you’ll be using ultimately at the end.”


 

For Paul Weir’s work on the space exploration game No Man’s Sky, the motivation to use a procedural music system was also partly influenced by emotional factors.  “I really enjoy ceding control to the computer, giving it rules and letting it run,” Weir confides.  But there were other motivating influences as well. According to Weir, the advantages of procedural music rest with its unique responsiveness to in-game changes.  “Procedural audio, to make it different, to make it procedural, it has to be driven by the game,” Weir says.  “What are you doing, game? I’m going to react to that in some way, and that’s going to be reflected in the sound I’m producing. In order to do that,” Weir adds, “it has to use some form of real-time generated sound.”  According to Weir, “procedural audio is the creation of sound in real-time, using synthesis techniques such as physical modeling, with deep links into game systems.”
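To give a very rough sense of what “driven by the game” can mean in practice, here’s a tiny hypothetical Python sketch. It is not the procedural system Weir built for No Man’s Sky; the game parameters, pitch pool, and mapping rules are all invented for illustration:

```python
# A minimal, hypothetical sketch of procedural note generation driven by
# game state: game parameters are polled and mapped onto simple musical
# rules that pick the next note in real time. All values are placeholders.

import random

# A simple pentatonic pool keeps whatever the rules choose consonant.
PENTATONIC = [0, 2, 4, 7, 9]

def next_note(danger: float, altitude: float) -> dict:
    """Choose the next note from a couple of game parameters.

    danger (0-1) pushes the rhythm faster and the register lower;
    altitude (0-1) lifts the register. Both parameters stand in for
    whatever the game actually exposes.
    """
    octave = 3 + round(2 * altitude) - round(1 * danger)
    pitch_class = random.choice(PENTATONIC)
    midi_note = 12 * octave + pitch_class
    duration_beats = max(0.25, 1.0 - 0.75 * danger)  # more danger, busier music
    return {"midi_note": midi_note, "duration_beats": duration_beats}

if __name__ == "__main__":
    random.seed(0)
    print("calm:", next_note(danger=0.1, altitude=0.8))
    print("tense:", next_note(danger=0.9, altitude=0.2))
```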

While this gives a procedural music system the potential to be the most pliable and reactive system available for modern game design, there are steep challenges inherent in its structure.  “Some of the difficulties of procedural generated content,” Weir explains, “is to give a sense of its meaningfulness, like it feels like it’s hand crafted.” In a moment of personal reflection, Weir shares, “One of my big issues, is that if you have procedural audio, that the perception of it has to be as good as traditional audio. It’s no good if you compromise.”

 


 

So, for each of these interactive music systems there were distinct advantages and disadvantages.  In the third and final article of this series, we’ll get down to some nitty-gritty details of how these interactive systems were put to use.  Thanks for reading, and please feel free to leave your comments in the space below!

 

Winifred Phillips is an award-winning video game music composer whose most recent projects are the triple-A first person shooter Homefront: The Revolution and the Dragon Front VR game for Oculus Rift. Her credits include games in five of the most famous and popular franchises in gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the MIT Press. As a VR game music expert, she writes frequently on the future of music in virtual reality games. Follow her on Twitter @winphillips.

Video game music systems at GDC 2017: what are composers using?

By video game music composer Winifred Phillips | Contact | Follow

The 2017 Game Developers Conference could be described as a densely-packed deep-dive exploration of the state-of-the-art tools and methodologies used in modern game development.  This description held especially true for the game audio track, wherein top experts in the field offered a plethora of viewpoints and advice on the awesome technical and artistic challenges of creating great sound for games. I’ve given GDC talks for the past three years now (see photo), and every year I’m amazed at the breadth and diversity of the problem-solving approaches discussed by my fellow GDC presenters.  Often I’ll emerge from the conference with the impression that we game audio folks are all “doing it our own way,” using widely divergent strategies and tools.

This year, I thought I’d write three articles to collect and explore the ideas that were discussed in five different GDC audio talks.  During their presentations, these five speakers all shared their thoughts on best practices and methods for instilling interactivity in modern game music.  By absorbing these ideas side-by-side, I thought we might gain a sense of the “bigger picture” when it comes to the current leading-edge thinking for music interactivity in games. In the first article, we’ll look at the basic nature of these interactive systems.  We’ll devote the second article to the pros and cons of each system, and in the third article we’ll look at tools and tips shared by these music interactivity experts. Along the way, I’ll also be sharing my thoughts on the subject, and we’ll take a look at musical examples from some of my own projects that demonstrate a few ideas explored in these GDC talks:

So, let’s begin with the most obvious question.  What kind of interactive music systems are game audio folks using lately?


Video Game Music Production Tips from GDC 2016

I was pleased to give a talk about composing music for games at the 2016 Game Developers Conference (pictured left).  GDC took place this past March in San Francisco – it was an honor to be a part of the audio track again this year, which offered a wealth of awesome educational sessions for game audio practitioners.  So much fun to see the other talks and learn about what’s new and exciting in the field of game audio!  In this blog, I want to share some info that I thought was really interesting from two talks that pertained to the audio production side of game development: composer Laura Karpman’s talk about “Composing Virtually, Sounding Real” and audio director Garry Taylor’s talk on “Audio Mastering for Interactive Entertainment.”  Both sessions had some very good info for video game composers who may be looking to improve the quality of their recordings.  Along the way, I’ll also be sharing a few of my own personal viewpoints on these music production topics, and I’ll include some examples from one of my own projects, the Ultimate Trailers album for West One Music, to illustrate ideas that we’ll be discussing.  So let’s get started!


Interactive Music for the Video Game Composer

As a speaker in the audio track of the Game Developers Conference this year, I enjoyed taking in a number of GDC audio sessions — including a couple of presentations that focused on the future of interactive music in games.  I’ve explored this topic before at length in my book (A Composer’s Guide to Game Music), and it was great to see that the game audio community continues to push the boundaries and innovate in this area! Interactive music is a worthwhile subject for discussion, and will undoubtedly be increasingly important in the future as dynamic music systems become more prevalent in game projects.  With that in mind, in this blog I’d like to share my personal takeaway from two sessions that described very different approaches to musical interactivity. After that, we’ll discuss one of my experiences with interactive music for the video game Spore Hero from Electronic Arts (pictured above).

Musical Intelligence

Baldur Baldursson (pictured left) is the audio director for Icelandic game development studio CCP Games, responsible for the EVE Online MMORPG.  Together with Professor Kjartan Olafsson of the Iceland Academy of Arts, Baldursson presented a talk at GDC 2016 on a new system to provide “Intelligent Music For Games.”

Baldursson began the presentation by explaining why an intelligent music system for games can be a necessity.  “We basically want an intelligent music system because we can’t (or maybe shouldn’t really) precompose all of the elements,” Baldursson explains. He describes the conundrum of creating a musical score for a game whose story is still fluid and changeable, and then asserts,  “I think we should find ways of making this better.”


AES Convention: What’s New on the Show Floor

This past weekend, the Audio Engineering Society held its annual North American convention in the Jacob Javits Center in New York City. I was participating as an AES speaker, but I also knew that AES includes an exhibit floor packed with the best professional audio equipment from all the top manufacturers, and I didn’t want to miss that! So, in between my game audio panel presentation on Saturday, and the Sunday tutorial talk I gave on the music system of the LittleBigPlanet franchise, I had the pleasure of searching the show floor for what’s new and interesting in audio tech. Here are some of the attractions that seemed most interesting for game audio folks:

One of the most interesting technologies on display at AES this year was Fraunhofer Cingo – an audio encoding technology developed specifically to enable mobile devices to deliver immersive sound for movies, games and virtual reality. Cingo was developed by the institute responsible for the MP3 audio coding format.  According to Fraunhofer, the Cingo technology “supports rendering of 3D audio content with formats that add a height dimension to the sound image, such as 9.1, 11.1 or other channel combinations.”  This enables mobile devices to emulate “the enveloping sound of movies, games or any other virtual environment.”  While I was there, Fraunhofer rep Jennifer Utley gave me the chance to demo the Cingo technology using the Gear VR headset, which turns Samsung mobile phones into portable virtual reality systems.  The sound generated by Cingo did have an awesome sense of spatial depth that increased immersion, although I didn’t personally notice the height dimension in the spatial positioning. Nevertheless, it was pretty nifty!


Game Music Middleware, Part 5: psai


 

This is a continuation of my blog series on the top audio middleware options for game music composers, this time focusing on the psai Interactive Music Engine for games, developed by Periscope Studio, an audio/music production house. Initially developed as a proprietary middleware solution for use by Periscope’s in-house musicians, the software is now being made available commercially for use by game composers.  In this blog I’ll take a quick look at psai and provide some tutorial resources that will further explore the utility of this audio middleware.  If you’d like to read the first four blog entries in this series on middleware for the game composer, you can find them here:

Game Music Middleware, Part 1: Wwise

Game Music Middleware, Part 2: FMOD

Game Music Middleware, Part 3: Fabric

Game Music Middleware, Part 4: Elias

What is psai?

The name “psai” is an acronym for “Periscope Studio Audio Intelligence,” and its lowercase appearance is intentional.  Like the Elias middleware (explored in a previous installment of this blog series), the psai application attempts to provide a specialized environment specifically tailored to best suit the needs of game composers.  The developers at Periscope Studio claim that psai’s “ease of use is unrivaled,” primarily because the middleware was “designed by videogame composers, who found that the approaches of conventional game audio middleware to interactive music were too complicated and not flexible enough.”  The psai music engine was originally released for PC games, with a version of the software for the popular Unity engine released in January 2015.


psai graphical user interface

Both Elias and psai offer intuitive graphical user interfaces designed to ease the workflow of a game composer. However, unlike Elias, which focuses exclusively on a vertical layering approach to musical interactivity, the psai middleware is structured entirely around horizontal re-sequencing, with no support for vertical layering.  As I described in my book, A Composer’s Guide to Game Music, “the fundamental idea behind horizontal re-sequencing is that when composed carefully and according to certain rules, the sequence of a musical composition can be rearranged.” (Chapter 11, page 188).

Music for the psai middleware is composed in what Periscope describes as a “snippets” format, in which short chunks of music are arranged into groups that can then be triggered semi-randomly by the middleware.  The overall musical composition is called a “theme,” and the snippets represent short sections of that theme.  The snippets are assigned numbers that best represent degrees of emotional intensity (from most intense to most relaxed), and these intensity numbers help determine which of the snippets will be triggered at any given time.  Other property assignments include whether a snippet is designated as an introductory or ending segment, or whether the snippet is bundled into a “middle” group with a particular intensity designation.  Periscope cautions, “The more Middle Segments you provide, the more diversified your Theme will be. The more Middle Segments you provide for a Theme, the less repetition will occur. For a highly dynamic soundtrack make sure to provide a proper number of Segments across different levels of intensity.”
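To illustrate the general concept of intensity-tagged snippets, here’s a brief hypothetical Python sketch. It is not psai’s actual API, data format, or selection logic; the theme structure and snippet names are invented purely to show the idea of choosing the next segment by intensity:

```python
# A minimal, hypothetical sketch of intensity-driven snippet selection for
# horizontal re-sequencing: a "theme" holds intro, middle, and ending
# snippets, and middle snippets are tagged with intensity values that guide
# which one plays next. Names and numbers are invented placeholders.

import random

THEME = {
    "intro": ["battle_intro"],
    "middle": [
        {"name": "battle_calm_a", "intensity": 20},
        {"name": "battle_calm_b", "intensity": 30},
        {"name": "battle_build_a", "intensity": 55},
        {"name": "battle_peak_a", "intensity": 85},
        {"name": "battle_peak_b", "intensity": 95},
    ],
    "end": ["battle_outro"],
}

def pick_middle_snippet(target_intensity: int, last_played: str = "") -> str:
    """Pick the middle snippet closest to the requested intensity.

    Choosing semi-randomly among the nearest matches, and avoiding an
    immediate repeat, keeps the re-sequenced result from sounding static.
    """
    ranked = sorted(THEME["middle"],
                    key=lambda s: abs(s["intensity"] - target_intensity))
    candidates = [s["name"] for s in ranked[:2] if s["name"] != last_played]
    return random.choice(candidates or [ranked[0]["name"]])

if __name__ == "__main__":
    random.seed(1)
    last = ""
    for intensity in (25, 25, 60, 90):
        last = pick_middle_snippet(intensity, last)
        print(intensity, "->", last)
```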

Here’s an introductory tutorial video produced by Periscope for the psai Interactive Music Engine for videogames:

Because psai only supports horizontal re-sequencing, it’s not as flexible as the more famous tools such as Wwise or FMOD, which can support projects that alternate between horizontal and vertical interactivity models.  However, psai’s ease of use may prove alluring for composers who had already planned to implement a horizontal re-sequencing structure for musical interactivity.  The utility of the psai middleware also seems to depend on snippets that are quite short, as is demonstrated by the above tutorial video produced by Periscope Studio.  There could be some negative effects of this structure on a composer’s ability to develop melodic content (as is sometimes the case in a horizontal re-sequencing model).  It would be helpful if Periscope could demonstrate psai using longer snippets that might give us a better sense of how musical ideas might be developed within the confines of their dynamic music system.  One can imagine an awesome potential for creativity with this system, if the structure can be adapted to allow for more development of musical ideas over time.

The psai middleware has been used successfully in a handful of game projects, including Black Mirror III, Lost Chronicles of Zerzura, Legends of Pegasus, Mount & Blade II – Bannerlord, and The Devil’s Men.  Here’s some gameplay video that demonstrates the music system of Legends of Pegasus:

And here is some gameplay video that demonstrates the music system of Mount & Blade II – Bannerlord:


Winifred Phillips is an award-winning video game music composer whose most recent project is the triple-A first person shooter Homefront: The Revolution. Her credits include five of the most famous and popular franchises in video gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the Massachusetts Institute of Technology Press. As a VR game music expert, she writes frequently on the future of music in virtual reality video games. Follow her on Twitter @winphillips.

Game Music Middleware, Part 4: Elias


Welcome back to my blog series that offers tutorial resources exploring game music middleware for the game music composer. I initially planned to write two blog entries on the most popular audio middleware solutions (Wwise and FMOD), but since I started this blog series, I’ve been hearing buzz about other middleware solutions, and so I thought it best to expand the series to incorporate other interesting solutions to music implementation in games.  This blog will focus on a brand new middleware application called Elias, developed by Elias Software.  While not as famous as Wwise or FMOD, this new application offers some intriguing new possibilities for the creation of interactive music in games.

If you’d like to read the first three blog entries in this series, you can find them here:

Game Music Middleware, Part 1: Wwise

Game Music Middleware, Part 2: FMOD

Game Music Middleware, Part 3: Fabric


Elias stands for Elastic Lightweight Integrated Audio System.  It is developed by Kristofer Eng and Philip Bennefall for Microsoft Windows, with a Unity plugin for consoles, mobile devices and browser-based games.  What makes Elias interesting is the philosophy of its design.  Instead of designing a general audio middleware tool with some music capabilities, Eng and Bennefall decided to bypass the sound design arena completely and create a middleware tool specifically outfitted for the game music composer. The middleware comes with an authoring tool called Elias Composer’s Studio that “helps the composer to structure and manage the various themes in the game and bridges the gap between the composer and level designer to ease the music integration process.”

Here’s the introductory video for Elias, produced by Elias Software:

The interactive music system of the Elias middleware application seems to favor a Vertical Layering (or vertical re-orchestration) approach with a potentially huge number of music layers able to play in lots of combinations.  The system includes flexible options for layer triggering, including the ability to randomize the activation of the layers to keep the listening experience unpredictable during gameplay.
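As a loose illustration of randomized layer activation, here’s a short hypothetical Python sketch (not Elias’s actual authoring format or API; the layer names and triggering rule are invented). The idea is simply to re-roll which optional stems accompany a constant core each time the cue loops, so repeated listens are orchestrated differently:

```python
# A minimal, hypothetical sketch of randomized layer activation in a
# vertical layering system: a core layer always plays, and decorative
# layers are switched on or off at random for each loop of the cue.

import random

CORE_LAYER = "rhythm_bed"
OPTIONAL_LAYERS = ["arpeggio", "counter_melody", "pad", "percussion_fills"]

def choose_active_layers(max_optional: int = 2) -> list[str]:
    """Return the stems to play for the next loop of the cue."""
    extras = random.sample(OPTIONAL_LAYERS, k=random.randint(0, max_optional))
    return [CORE_LAYER] + extras

if __name__ == "__main__":
    random.seed(2)
    for loop in range(3):
        print(f"loop {loop}:", choose_active_layers())
```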

Elias has produced a series of four tutorial videos for the Composer’s Studio authoring tool.  Here’s the first of the four tutorials:

There’s also a two-part series of tutorials about Elias produced by Dale Crowley, the founder of the game audio services company Gryphondale Studios.  Here’s the first of the two videos:

As a middleware application designed specifically to address the top needs of game music composers, Elias is certainly intriguing!  The software has so far been used in only one published game – Gauntlet, which is the latest entry in the awesome video game franchise first developed by Atari Games for arcade cabinets in 1985.  This newest entry in the franchise was developed by Arrowhead Game Studios for Windows PCs.  We can hear the Elias middleware solution in action in this gameplay video from Gauntlet:

The music of Gauntlet was composed by Erasmus Talbot.  More of his music from Gauntlet is available on his SoundCloud page.

Elias Software recently demonstrated its Elias middleware application on the expo floor of the Nordic Game 2015 conference in Malmö, Sweden (May 20-22, 2015).  Here’s a look at Elias’ booth from the expo:


Since Elias is a brand new application, I’ll be curious to see how widely it is accepted by the game audio community.  A middleware solution that focuses solely on music is definitely a unique approach!  If audio directors and audio programmers embrace Elias, then it may have the potential to give composers better tools and an easier workflow in the creation of interactive music for games.