VR for the Game Music Composer: Audio for VR Platforms

Winifred Phillips, video game composer, pictured working in her music production studio on the music for Scraper: First Strike, developed for popular VR gaming platforms (PSVR, Oculus Rift, HTC Vive).

By Winifred Phillips | Contact | Follow

Hello there!  I’m video game music composer Winifred Phillips.  Lately, I’ve been very busy in my production studio composing music for a lot of awesome virtual reality games, including the upcoming Scraper: First Strike first-person VR shooter (pictured above), which comes out next Wednesday (November 21st) for the Oculus Rift, HTC Vive and Windows Mixed Reality devices, and will be released on December 18th for the PlayStation VR.  My work on this project has definitely stoked my interest in everything VR!  Since the game will be released very soon, here’s a trailer video released by the developers Labrodex Studios, featuring some of the music I composed for the game:

Scraper: First Strike is just one of a whole slew of VR games I’ve been working on over the past year.  Last year, when I was just starting to get really busy working with VR development teams, I wrote an article here that offered a bunch of informative resources connected to the field of VR audio.  The article I posted in 2017 took a general approach to the role that audio plays in Virtual Reality experiences.  Since we’re well into 2018, I thought we could benefit from expanding that topic to include the state-of-the-art in VR headset platforms.  Taking a look at the hardware platforms that are currently available should give us video game composers a better idea of the direction that VR audio is currently headed.

For one thing, VR is now broadly considered a part of a larger category that also includes AR (Augmented Reality) and MR (Mixed Reality) devices.  Those two categories are often considered synonymous, although that’s certainly debatable.  Since there’s no clear expert consensus at this point on what characteristics separate AR from MR, let’s just consider them as one category that we’ll call AR/MR for now.  In this article I’ll be focusing on resources that are specific to each of the competing platforms in VR and AR/MR.

Let’s get started!

Audio for VR and AR/MR devices

A wide variety of head-mounted devices now exist that can immerse us in imaginary worlds, or bring fantastic creatures to life in our living rooms.  While many of these devices share common underlying technologies in regards to audio creation and implementation, there are differing tools and techniques that apply to each of them.  I’ve included links in the discussion below that may be helpful in understanding how these technologies differ.

When virtual acoustics meets actual acoustics

The newly-released Magic Leap One is an AR/MR device.  This means that it allows the wearer to see the real world, while superimposing digital images that seem to exist in reality, and not just within the device.  For instance, an AR/MR device can make us think that a miniature toy dinosaur is toddling across our coffee table.  With this in mind, creating audio for AR/MR becomes a little tricky.

For instance, let’s say that we want our tiny dinosaur to emit a ferociously-adorable little roar as he climbs on top of our coffee table books.  That sound won’t be convincing unless it seems to be happening inside our actual living room, with its unique acoustical properties.  The real-life room has to be mapped, and acoustic calculations have to be factored in.  This isn’t an issue when developing sound for virtual reality, since the sound sources emit within an environment that exists completely within the virtual world.
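To get a feel for the kind of acoustic calculation involved, here’s a minimal sketch in Python that estimates a room’s reverberation time using Sabine’s classic formula, the sort of quantity an AR/MR system would need to derive from its map of the real environment.  This is purely illustrative (the room dimensions and absorption coefficients are made up), and it isn’t anyone’s actual implementation:

```python
# Illustrative sketch: estimate a room's reverberation time (RT60) with
# Sabine's formula, RT60 = 0.161 * V / A, where V is the room volume in
# cubic meters and A is the total absorption (each surface's area times
# its material's absorption coefficient, summed over all surfaces).

def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 5m x 4m x 2.5m living room.
volume = 5.0 * 4.0 * 2.5  # 50 cubic meters
surfaces = [
    (20.0, 0.30),  # carpeted floor (fairly absorbent)
    (20.0, 0.02),  # plaster ceiling (reflective)
    (45.0, 0.03),  # painted walls (reflective)
]
print(round(rt60_sabine(volume, surfaces), 2))  # roughly one second
```

Swap in harder surfaces (say, a tile floor) and the estimated reverb time climbs, which is exactly why our toy dinosaur’s roar has to be tailored to the room it lands in.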

It’s a fascinating problem, and one that the Magic Leap folks have considered seriously, using a system they’ve dubbed ‘Soundfield Audio’ to apply physics calculations that can produce appropriate acoustics based on the environment.  They’ve also patented a spatial audio technology that uses the wearer’s head movements to calculate the position of virtual sound sources.  Here’s a video that shows off a video game music visualization application for Magic Leap called Tónandi:

 

The HoloLens is also an AR/MR device, and therefore faces a lot of the same issues as the Magic Leap One.  To address these, HoloLens uses a spatial audio engine that calculates the position of sound-emitting sources combined with personalized Head-Related Transfer Functions, or HRTFs (a concept we discussed in an article from 2015).  These HRTFs help to localize all the aural components of the virtual soundscape.  In addition, the HoloLens creates a room model to match the user’s location so that sounds seem to reflect from real-life walls and travel convincingly to the player’s ears.  We should expect this technology to improve when Microsoft releases their next generation of HoloLens early next year.  Here’s a video produced by Engadget that goes into more detail about the audio experience delivered by HoloLens:

 

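For a sense of what an HRTF actually encodes, here’s a toy sketch in Python of the two dominant localization cues: the interaural time difference (ITD) and interaural level difference (ILD) between our ears.  Real HRTF rendering convolves the signal with measured per-ear impulse responses; this simplified spherical-head model (Woodworth’s ITD approximation plus an equal-power level pan) is only meant to illustrate the idea, not to reproduce any platform’s implementation:

```python
import math

HEAD_RADIUS = 0.0875    # meters, average adult head
SPEED_OF_SOUND = 343.0  # meters per second

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of ITD, in seconds.
    Azimuth 0 = straight ahead, 90 = directly to the listener's right."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def interaural_level_gains(azimuth_deg):
    """Crude stand-in for ILD: equal-power pan toward the nearer ear.
    Returns (left_gain, right_gain)."""
    pan = (azimuth_deg + 90.0) / 180.0  # 0 = full left, 1 = full right
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

# A source straight ahead arrives at both ears simultaneously and at
# equal level; a source at 90 degrees right arrives roughly 0.66 ms
# earlier (and louder) at the right ear.
print(interaural_time_difference(90.0))
print(interaural_level_gains(90.0))
```

These two cues alone can’t resolve front from back or above from below, which is why real HRTFs (and the personalized ones HoloLens uses) also capture the frequency filtering of the outer ear.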
Spatial sound for mixed reality

While we’re waiting for the next generation of HoloLens to be released, Microsoft has been keeping busy in the traditional VR space with its Windows Mixed Reality platform, which allows third-party equipment manufacturers to create VR headsets based on Microsoft’s existing reference designs and software.  While the Mixed Reality platform shares common elements with the HoloLens, the VR devices under the Windows Mixed Reality banner offer standard VR experiences, without any AR/MR elements.  Both the HoloLens and the Windows Mixed Reality devices use the Spatial Sound software development kit for the design and implementation of positional audio, which allows audio developers to create soundscapes for a large number of devices using the same tools.  While the convenience factor is certainly attractive, HoloLens and Windows Mixed Reality offer very different experiences, so audio developers will need to keep that in mind.  Here’s a short video that reviews the capabilities of the Spatial Sound SDK:

 

Positional audio inside the virtual machine

Now let’s move on to discuss what’s happening with the current VR devices.  As we know, unlike an AR/MR headset, a VR device cuts us off completely from the outside world and plunges us into an environment existing entirely within the machine.  There is currently a healthy and varied crop of VR devices from which to choose.  The two most popular and famous VR headsets are the Oculus Rift and the HTC Vive.  Both devices rely on positional audio technologies to deliver great aural experiences, and each company has worked diligently to improve the technology over time.  In June 2018, HTC introduced a new Software Development Kit for immersive audio on the Vive (the 3DSP SDK).  The new SDK allows for more sophisticated audio technologies like higher-order ambisonics, higher-resolution audio, more refined spatial acoustics, and HRTFs based on refined real-world models to improve the accuracy of positional audio.
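To illustrate what ambisonics means in practice, here’s a short Python sketch that encodes a mono sample into first-order B-format (the four channels W, X, Y and Z).  Higher-order ambisonics simply adds more spherical-harmonic channels for sharper directional resolution; this minimal first-order version (using the traditional FuMa channel convention) just shows the principle, and isn’t drawn from any particular SDK:

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order ambisonic B-format
    (FuMa convention). Returns the four channels (W, X, Y, Z)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)               # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back axis
    y = sample * math.sin(az) * math.cos(el)  # left-right axis
    z = sample * math.sin(el)                 # up-down axis
    return w, x, y, z

# A sound directly in front of the listener lands entirely on W and X;
# rotating the whole soundfield later is just a matrix operation on
# these channels, which is what makes ambisonics so useful for VR.
w, x, y, z = encode_first_order(1.0, azimuth_deg=0.0, elevation_deg=0.0)
```

Because the encoded soundfield can be rotated cheaply to follow the player’s head tracking and then decoded through HRTFs, ambisonics has become a common interchange format across the VR audio SDKs discussed in this article.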

Oculus has upgraded the Rift’s existing audio SDK to improve the positional accuracy of sounds emitting very close to the player (a technology they call Near-Field HRTF).  They have also provided the option of implementing sounds that originate from large sources (such as an ocean, for instance, or a forest fire).  Using the Volumetric Sound Sources technology, large sound-emitting objects can project their aural content across an assigned radius consistent with their scale.  Here’s a video from the Oculus Connect 4 conference, demonstrating the Near-Field HRTF and Volumetric Sound Sources capabilities of the Oculus audio SDK:

 

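The idea behind a volumetric source can be sketched in a few lines: instead of measuring distance falloff from a single point, the falloff is measured from the surface of a sphere with the source’s assigned radius, so the listener hears full volume anywhere inside or near the object’s extent.  The Python below is an illustration of that concept only, not Oculus’s actual implementation:

```python
def volumetric_gain(listener_distance, source_radius, min_distance=1.0):
    """Inverse-distance attenuation measured from the surface of a
    spherical source rather than from its center point."""
    # Distance from the listener to the nearest point on the sphere.
    surface_distance = max(listener_distance - source_radius, 0.0)
    if surface_distance <= min_distance:
        return 1.0  # inside or right next to the source: no attenuation
    return min_distance / surface_distance

# A point source (radius 0) heard from 20 meters away is quiet, while a
# 15-meter-radius "ocean" heard from the same spot is much louder,
# because its surface is only 5 meters from the listener.
print(volumetric_gain(20.0, 0.0))   # 0.05
print(volumetric_gain(20.0, 15.0))  # 0.2
```

The design choice here is what matters: scaling the attenuation model to the object’s size keeps an ocean from collapsing into a tiny point of sound the moment the player steps back from the shore.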
The PlayStation VR, as the only console-specific VR device, doesn’t share a market with devices such as the Vive or the Rift, and therefore isn’t faced with the same competitive pressures.  Nevertheless, improvements continue to be made to the PSVR’s technology.  The newest model of the PSVR (released late last year) is a revised version with small but valuable improvements.  Among the changes, Sony added built-in stereo headphones to the headset, removing the need for players to hook up separate headphones in order to experience VR audio.

Standalone VR audio

Now let’s take a quick look at the standalone VR devices (i.e. those devices that don’t need to be hooked up to a computer or console, and don’t need a mobile phone installed in order to work).  These VR headsets offer untethered, cable-free virtual reality exploration, but they’re also usually a bit less powerful and full-featured.  The five best-known standalone headsets are the Oculus Go, the Oculus Quest, the Lenovo Mirage Solo, the HTC Vive Focus, and the Shadow VR.

The Oculus Go and Lenovo Mirage Solo both hit retail this May.  The HTC Vive Focus and the Shadow VR both became available for consumers just this month.  The Oculus Quest was recently announced and is expected to hit retail in spring 2019.  All five use a Qualcomm smartphone processor chip from the Snapdragon line, so in that respect they’ve essentially adopted the internal mechanism of a high-end mobile phone and incorporated it into their on-board hardware.  In fact, the Qualcomm Snapdragon 835 (used in the Lenovo Mirage Solo, Vive Focus, Oculus Quest and Shadow VR devices) is the same chip that’s at the heart of the Samsung Galaxy S8, the Google Pixel 2, and many other smartphone models.  Since all five untethered VR devices use Snapdragon technology, developers can choose to avail themselves of the Qualcomm Snapdragon VR Software Development Kit, which includes a 3D Audio Plugin for Unity (designed to provide high-performance audio on Snapdragon devices).  Qualcomm also offers a suite of 3D Audio Tools for use in conjunction with a Digital Audio Workstation such as Pro Tools.  While these are by no means the only choices, since they were developed by the company responsible for the Snapdragon processor, it stands to reason that some Snapdragon insights may have influenced the design and function of these tools.  Here’s a video interview with Hugo Swart, head of IoE Consumer Electronics at Qualcomm, as he discusses the virtual reality capabilities of the Snapdragon 835:

 

Audio for mobile VR

If you do a quick Amazon search under the phrase “Mobile VR Headsets,” you’ll see that there is now a dizzying plethora of headset models built around the “insert your mobile phone here” philosophy.  These headsets all rely on the processing technology of the phone inserted into them, and there are so many varying models that I won’t attempt to delve into that topic.  Generally speaking, if there is an SDK specific to a particular VR headset model, then it should be considered.  For instance, the Oculus Audio SDK makes sense for the Samsung Gear VR, the Oculus Go and the Oculus Quest, since all three are Oculus-designed VR systems.  Likewise, the new Resonance Audio SDK from Google is a good choice for any of the Google headsets (Daydream View, Google Cardboard, Lenovo Mirage Solo with Daydream).  Here’s a brief video produced by Google that demonstrates the Resonance Audio SDK:

 

Conclusion

That’s our discussion of where things currently stand with regards to VR platforms!  In the next article, we’ll be focusing on tips and tools for game audio folks working in VR.  I hope you enjoyed the article, and please let me know your thoughts in the comments section below!

 

Photo of video game composer Winifred Phillips in her game production studio.

Winifred Phillips is an award-winning video game music composer whose recent projects include the triple-A first person shooter Homefront: The Revolution. Her latest video game credits also include numerous Virtual Reality games, including Scraper: First Strike, Bebylon: Battle Royale, Fail Factory, Dragon Front, and many more.  She has composed music for games in five of the most famous and popular franchises in gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the MIT Press. As a VR game music expert, she writes frequently on the future of music in virtual reality games. Follow her on Twitter @winphillips.

Video game music systems at GDC 2017: tools and tips for composers

Photo of video game composer Winifred Phillips, working in her music production studio on the music of the SimAnimals video game.

By video game composer Winifred Phillips | Contact | Follow

Welcome back to this three article series that’s bringing together the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers explored discoveries they’d made while creating interactivity in the music of their own game projects.  We’re looking at these ideas side-by-side to broaden our viewpoint and gain a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. We’ve been looking at five interactive music systems discussed in these five GDC 2017 presentations:

In the first article, we examined the basic nature of these interactive systems. In the second article, we contemplated why those systems were used, with some of the inherent pros and cons of each system discussed in turn.  So now, let’s get into the nitty gritty of tools and tips for working with such interactive music systems.  If you haven’t read parts one and two of this series, please go do so now and then come back:

  1. Video game music systems at GDC 2017: what are composers using?
  2. Video game music systems at GDC 2017: pros and cons for composers

Ready?  Great!  Here we go!

Continue reading

Video game music systems at GDC 2017: pros and cons for composers

Video game composer Winifred Phillips, pictured in her music production studio working on the music of LittleBigPlanet 2 Cross Controller

By Winifred Phillips | Contact | Follow

Welcome back to our three article series dedicated to collecting and exploring the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers shared ideas they’d developed in the process of creating interactivity in the music of their own game projects.  We’re looking at these ideas side-by-side to cultivate a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. In the first article, we looked at the basic nature of five interactive music systems discussed in these five GDC 2017 presentations:

If you haven’t read part one of this article series, please go do that now and come back.

Okay, so let’s now contemplate some simple but important questions: why were those systems used?  What was attractive about each interactive music strategy, and what were the challenges inherent in using those systems?

Continue reading

Video game music systems at GDC 2017: what are composers using?

By video game music composer Winifred Phillips | Contact | Follow

Video game composer Winifred Phillips, presenting at the Game Developers Conference 2017.

The 2017 Game Developers Conference could be described as a densely-packed deep-dive exploration of the state-of-the-art tools and methodologies used in modern game development.  This description held especially true for the game audio track, wherein top experts in the field offered a plethora of viewpoints and advice on the awesome technical and artistic challenges of creating great sound for games. I’ve given GDC talks for the past three years now (see photo), and every year I’m amazed at the breadth and diversity of the problem-solving approaches discussed by my fellow GDC presenters.  Often I’ll emerge from the conference with the impression that we game audio folks are all “doing it our own way,” using widely divergent strategies and tools.

This year, I thought I’d write three articles to collect and explore the ideas that were discussed in five different GDC audio talks.  During their presentations, these five speakers all shared their thoughts on best practices and methods for instilling interactivity in modern game music.  By absorbing these ideas side-by-side, I thought we might gain a sense of the “bigger picture” when it comes to the current leading-edge thinking for music interactivity in games. In the first article, we’ll look at the basic nature of these interactive systems.  We’ll devote the second article to the pros and cons of each system, and in the third article we’ll look at tools and tips shared by these music interactivity experts. Along the way, I’ll also be sharing my thoughts on the subject, and we’ll take a look at musical examples from some of my own projects that demonstrate a few ideas explored in these GDC talks:

So, let’s begin with the most obvious question.  What kind of interactive music systems are game audio folks using lately?

Continue reading

GDC 2017: How video game composers can use music to build suspense

Winifred Phillips, video game composer, giving a talk as part of the Game Developers Conference 2016 in San Francisco.

By Winifred Phillips | Contact | Follow

The Game Developers Conference is coming up soon!  Last year I presented a talk on music for mobile games (pictured above), and I’m pleased that this year I’ll be presenting the talk “‘Homefront’ to ‘God of War’: Using Music to Build Suspense” (Wednesday, March 1st at 11am in room 3006 West Hall, Moscone Center, San Francisco).  In my talk I’ll be focusing on practical applications of techniques for video game composers and game audio folks, using my own experiences as concrete examples for exploration.  Along the way, I’ll be discussing some very compelling scholarly research on the relationship between suspense, gameplay and musical expression.  In preparing my GDC 2017 presentation I did a lot of reading and studying about the nature of suspense in video games, the importance of suspense in gameplay design, and the role that video game music plays in regulating and elevating suspense.  There will be lots of ground to cover in my presentation!  That being said, the targeted focus of my presentation precluded me from incorporating some very interesting extra research into the importance of suspense in a more general sense… why human beings need suspense, and what purpose it serves in our lives.  I also couldn’t find the space to include everything I’d encountered regarding suspense as an element in the gaming experience.  It occurred to me that some of this could be very useful to us in our work as game makers, so I’d like to share some of these extra ideas in this article.

Continue reading

Montreal International Game Summit 2014

WP-Session-MIGS-2
Just came back from a fantastic experience speaking at the Montreal International Game Summit 2014!

Montreal is a beautiful city, and that’s reflected in the fantastic rainbow-tinted windows of the convention center where the summit was held – the Palais des congrès de Montréal.

MIGS-Colored-Glass

The weather was relatively warm while I was there, but I spent most of my time at the summit… although I did enjoy the city views from the enormous walls of windows.


MIGS-WindowWall

This year’s summit was more vibrant than ever, and the fun began in the wide hallways where attendees could test their video game trivia knowledge by taking part in “The Game Masters” quiz show.  I wasn’t brave enough to compete, but I had to get a picture of the set:

MIGS-Game-Masters

The show floor was very exciting this year, with a lot of the activity centering around the two Oculus Rift stations.  My attention, though, was caught by two things.  First — the AudioKinetic booth, where the Wwise middleware was on display:

MIGS-AudioKinetic

And second, this big green guy who was hulking inside the Ubisoft booth.  He looks brutish, but don’t let that fool you — he’s a real charmer.

MIGS-Ubisoft-Mascot

Here’s the big schedule of sessions that was posted at the event.  My speech was towards the end of the second day of the summit, right before the MIGS Brain Dump (which is kind of similar to a GDC rant).

MIGS-Big-Schedule

My talk was titled, “Music, the Brain, and the Three Levels of Immersion.”  It was a great audience!

WP-Session-MIGS-1

I had a wonderful time sharing some ideas about the role that music can play in helping gamers to achieve immersion. I’d first explored these ideas in my book, A Composer’s Guide to Game Music, and it was such a joy to explore these ideas with such an enthusiastic audience!

MIGS-SpeechPhoto-YT-Thumb

I’ll be posting a video excerpt from my talk soon.  It was wonderful to speak at MIGS 2014, and thanks to all the creative and inspiring people I met this year in Montreal – it was a tremendous pleasure!

LittleBigPlanet 3 – Hollywood Music in Media Awards

hmma2014

Hey, everyone!  After my blog yesterday about winning the Hollywood Music in Media Award, I’ve received a bunch of questions about LittleBigPlanet 3 and the Hollywood Music in Media Awards program – so I thought I’d post some info that explains everything in a bit more detail.  It’s a little easier to do this in third person, so here goes – I hope this helps!

On November 4th, game composer Winifred Phillips received a 2014 Hollywood Music in Media Award (HMMA) in the category of “Best Song in a Video Game” for music she composed for the LittleBigPlanet 3 video game (developed by Sumo Digital Ltd. and published by Sony Computer Entertainment, LLC).

As one of the composers on the LittleBigPlanet™3 music composer team, Phillips was recognized for her song, “LittleBigPlanet 3 Ziggurat Theme.”  

Info about LittleBigPlanet 3:

Sony Computer Entertainment Europe announced the news about this award on November 6th via their official LittleBigPlanet twitter feed.  

The critically acclaimed and best-selling PlayStation® franchise  LittleBigPlanet™ makes its debut on PlayStation®4  with  LittleBigPlanet™3. Sackboy™ is back, this time with playable new friends – Toggle, OddSock and Swoop – each with their own unique abilities and personalities.  This handcrafted adventure is set to revolutionize the way gamers Play, Create and Share in the world of LittleBigPlanet.

Sumo Digital Ltd, the developer of LittleBigPlanet 3, has forged a reputation as a world-class, multiple award-winning independent game development studio. The company has grown exponentially over 11 years from 15 to 270 people, spread across its head office in Sheffield, UK and a dedicated art studio in Pune, India.  Sumo Digital is one of the UK’s leading game development studios.

Info about the Hollywood Music in Media Awards:

The Hollywood Music in Media Awards ceremony was held on November 4th, 2014 at 7pm at the Fonda Theater (6126 Hollywood Boulevard, Hollywood).  The Hollywood Music in Media Awards recognize and honor the creation of music for film, TV, and video games; the talented individuals responsible for licensing it; and musicians, both mainstream and independent, from around the globe. The HMMAs are co-branded with the Billboard/Hollywood Reporter Film & TV Music Conference. The HMMA advisory board, selections committee and voters include members of the National Academy of Recording Arts and Sciences, the Oscars, the Emmys, the Society of Composers and Lyricists and the Guild of Music Supervisors.

Additional info about Winifred Phillips (the LittleBigPlanet franchise and the HMMAs):

Phillips’ award-winning track, “LittleBigPlanet 3 Ziggurat Theme,” from LittleBigPlanet™3, is a highly interactive musical work, written as a complex classical fugue, and incorporating an organic, world-music influenced instrumental arrangement in support of a women’s choir.  Phillips has received two previous Hollywood Music in Media Awards – in 2012 for Assassin’s Creed Liberation (Ubisoft®) and in 2010 for the Legend of the Guardians (Warner Bros. Interactive Entertainment).  Phillips is one of the composers on the LittleBigPlanet music composer team, and has created tracks for six games in the series, including LittleBigPlanet 2, LittleBigPlanet 2 Toy Story, LittleBigPlanet Cross Controller, LittleBigPlanet PS Vita, LittleBigPlanet Karting, and now LittleBigPlanet 3.  

Phillips’ work as a composer for the LittleBigPlanet game series has earned her previous awards nominations from the Game Audio Network Guild Awards, the Hollywood Music in Media Awards, the NAViGaTR Awards and the D.I.C.E. Interactive Achievement Awards.  Phillips works with award-winning music producer Winnie Waldron for all her projects, including those in the LittleBigPlanet franchise.  Phillips is also the author of the book A COMPOSER’S GUIDE TO GAME MUSIC, published in 2014 by the Massachusetts Institute of Technology Press.