VR for the Game Music Composer: Audio for VR Platforms

Video game composer Winifred Phillips, pictured working in her music production studio on the music for the Scraper: First Strike game, developed for popular VR gaming platforms (PSVR, Oculus Rift, HTC Vive).

By Winifred Phillips | Contact | Follow

Hello there!  I’m video game music composer Winifred Phillips.  Lately, I’ve been very busy in my production studio composing music for a lot of awesome virtual reality games, including the upcoming Scraper: First Strike first-person VR shooter (pictured above) that’s coming out next Wednesday (November 21st) for the Oculus Rift, HTC Vive and Windows Mixed Reality devices, and will be released on December 18th for the PlayStation VR.  My work on this project has definitely stoked my interest in everything VR!  Since the game will be released very soon, here’s a trailer video released by the developers Labrodex Studios, featuring some of the music I composed for the game:

Scraper: First Strike is just one of a whole slew of VR games I’ve been working on over the past year.  Last year, when I was just starting to get really busy working with VR development teams, I wrote an article here that offered a bunch of informative resources connected to the field of VR audio.  The article I posted in 2017 took a general approach to the role that audio plays in Virtual Reality experiences.  Since we’re well into 2018, I thought we could benefit from expanding that topic to include the state-of-the-art in VR headset platforms.  Taking a look at the hardware platforms that are currently available should give us video game composers a better idea of the direction that VR audio is currently headed.

For one thing, VR is now broadly considered a part of a larger category that also includes AR (Augmented Reality) and MR (Mixed Reality) devices.  Those two categories are often considered synonymous, although that’s certainly debatable.  Since there’s no clear expert consensus at this point on what characteristics separate AR from MR, let’s just consider them as one category that we’ll call AR/MR for now.  In this article I’ll be focusing on resources that are specific to each of the competing platforms in VR and AR/MR.

Let’s get started!

Audio for VR and AR/MR devices

A wide variety of head-mounted devices now exist that can immerse us in imaginary worlds, or bring fantastic creatures to life in our living rooms.  While many of these devices share common underlying technologies with regard to audio creation and implementation, there are differing tools and techniques that apply to each of them.  I’ve included links in the discussion below that may be helpful in understanding how these technologies differ.

When virtual acoustics meets actual acoustics

The newly-released Magic Leap One is an AR/MR device.  This means that it allows the wearer to see the real world, while superimposing digital images that seem to exist in reality, and not just within the device.  For instance, an AR/MR device can make us think that a miniature toy dinosaur is toddling across our coffee table.  With this in mind, creating audio for AR/MR becomes a little tricky.

For instance, let’s say that we want our tiny dinosaur to emit a ferociously-adorable little roar as he climbs on top of our coffee table books.  That sound won’t be convincing if it doesn’t seem to be happening inside our actual living room, with its unique acoustical properties.  The real-life room has to be mapped, and acoustic calculations have to be factored in.  This isn’t an issue when developing sound for virtual reality, since the sound sources emit within an environment that exists completely within the virtual world.
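To make “acoustic calculations” a bit more concrete, here’s a minimal sketch of the kind of estimate an AR/MR audio system might start from once the room has been scanned: Sabine’s classic reverberation-time formula, computed from the room’s volume and the absorption of its surfaces.  (This is a generic textbook illustration rather than any particular platform’s actual pipeline; the function name, room dimensions, and absorption coefficients below are all made up for the example.)

# Toy illustration (Python): estimating a room's reverberation time (RT60)
# from a hypothetical mapping pass that returns the room's volume and its
# surfaces with absorption coefficients.  Sabine's formula: RT60 = 0.161 * V / A.

def sabine_rt60(room_volume_m3, surfaces):
    """Estimate RT60 in seconds.
    surfaces: list of (area_m2, absorption_coefficient) tuples."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    if total_absorption <= 0:
        raise ValueError("need at least one absorbing surface")
    return 0.161 * room_volume_m3 / total_absorption

# A hypothetical 5 m x 4 m x 2.5 m living room
volume = 5 * 4 * 2.5
surfaces = [
    (20.0, 0.30),  # carpeted floor
    (20.0, 0.05),  # painted ceiling
    (45.0, 0.10),  # drywall walls
    (4.0, 0.03),   # window glass
]
print(f"Estimated RT60: {sabine_rt60(volume, surfaces):.2f} s")

The result here lands around 0.7 seconds, a plausible value for a furnished room.  A real AR/MR system would go much further (directional reflections, occlusion, material detection), but the principle of feeding measured room data into acoustic math is the same.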

It’s a fascinating problem, and one that the Magic Leap folks have considered seriously, using a system they’ve dubbed ‘Soundfield Audio’ to apply physics calculations that can produce appropriate acoustics based on the environment.  They’ve also patented a spatial audio technology that uses the wearer’s head movements to calculate the position of virtual sound sources.  Here’s a video that shows off a video game music visualization application for Magic Leap called Tónandi:

 
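As a side note on the head-tracking idea mentioned above: at its core, placing a virtual sound source relative to a moving listener comes down to re-expressing the source’s world-space position in the listener’s head frame every time the head pose updates.  Here’s a simplified sketch of that bookkeeping (a generic illustration assuming yaw-only head rotation, not Magic Leap’s patented approach; the function and variable names are hypothetical):

import math

def source_direction_in_head_frame(source_pos, head_pos, head_yaw_rad):
    """Return (azimuth, elevation, distance) of a sound source relative
    to the listener's head, assuming only yaw rotation for simplicity."""
    # Vector from the head to the source in world space (y is up)
    dx = source_pos[0] - head_pos[0]
    dy = source_pos[1] - head_pos[1]
    dz = source_pos[2] - head_pos[2]
    # Undo the head's yaw to land in head-relative coordinates
    cos_y, sin_y = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    hx = cos_y * dx + sin_y * dz
    hz = -sin_y * dx + cos_y * dz
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.atan2(hx, hz)  # 0 = straight ahead
    elevation = math.asin(dy / distance) if distance > 0 else 0.0
    return azimuth, elevation, distance

# Head at the origin, rotated 90 degrees in yaw: a source straight ahead in
# world space now sits 90 degrees off to the side in head space.
print(source_direction_in_head_frame((0, 0, 2), (0, 0, 0), math.radians(90)))

Once a source’s head-relative direction is known, a spatializer can apply the appropriate HRTF filters and distance cues for that direction.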

The HoloLens is also an AR/MR device, and therefore faces a lot of the same issues as the Magic Leap One.  To address these, HoloLens uses a spatial audio engine that calculates the position of sound-emitting sources combined with personalized Head-Related Transfer Functions, or HRTFs (a concept we discussed in an article from 2015).  These HRTFs help to localize all the aural components of the virtual soundscape.  In addition, the HoloLens creates a room model to match the user’s location so that sounds seem to reflect from real-life walls and travel convincingly to the player’s ears.  We should expect this technology to improve when Microsoft releases their next generation of HoloLens early next year.  Here’s a video produced by Engadget that goes into more detail about the audio experience delivered by HoloLens:

 
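If you’re curious about what an HRTF actually encodes, one of its simplest ingredients is the interaural time difference (ITD): the tiny head start a sound gets at the nearer ear.  Woodworth’s spherical-head approximation is enough to get a feel for the numbers involved (a textbook formula offered purely as an illustration, not Microsoft’s implementation; the head radius below is an assumed average):

import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
HEAD_RADIUS = 0.0875    # m, an assumed average adult head radius

def interaural_time_difference(azimuth_rad):
    """Woodworth's spherical-head ITD approximation.
    azimuth_rad: source angle from straight ahead, between -pi/2 and pi/2."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(azimuth_rad) + azimuth_rad)

# A source 45 degrees to one side arrives roughly 0.38 ms sooner at the near ear
print(f"{interaural_time_difference(math.radians(45)) * 1000:.2f} ms")

Delays this small aren’t heard as echoes, but the brain uses them (together with the level and spectral differences captured by a full HRTF) to pin down where a sound is coming from, which is why personalized HRTFs can make a virtual soundscape feel noticeably more precise.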

Spatial sound for mixed reality

While we’re waiting for the next generation of HoloLens to be released, Microsoft has been keeping busy in the traditional VR space with its Windows Mixed Reality platform, which allows third-party equipment manufacturers to create VR headsets based on its existing VR reference designs and software.  While the Mixed Reality platform shares common elements with the HoloLens, the VR devices under the Windows Mixed Reality banner offer standard VR experiences, without any AR/MR elements.  Both the HoloLens and the Windows Mixed Reality devices use the Spatial Sound software development kit for the design and implementation of positional audio.  This allows audio developers to create soundscapes for a large number of devices using the same tools.  While the convenience factor is certainly attractive, the HoloLens and Windows Mixed Reality offer very different experiences, so audio developers will certainly need to keep that in mind.  Here’s a short video that reviews the capabilities of the Spatial Sound SDK:

 

Positional audio inside the virtual machine

Now let’s move on to discuss what’s happening with the current VR devices.  As we know, unlike an AR/MR headset, a VR device cuts us off completely from the outside world and plunges us into an environment existing entirely within the machine.  There is currently a healthy and varied crop of VR devices from which to choose.  The two best-known VR headsets are the Oculus Rift and the HTC Vive.  Both devices rely on positional audio technologies to deliver great aural experiences, and each company has worked diligently to improve the technology over time.  In June 2018, HTC introduced a new Software Development Kit (SDK) for immersive audio on the Vive (the 3DSP SDK).  The new SDK allows for more sophisticated audio technologies such as higher-order ambisonics, higher-resolution audio, more refined spatial acoustics (pictured right), and HRTFs based on refined real-world models to improve the accuracy of positional audio.
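For composers who haven’t bumped into ambisonics before, it helps to see how simple the underlying encoding is.  Higher-order ambisonics extends classic first-order B-format, in which a mono signal is spread across four channels (W, X, Y, Z) according to its direction of arrival.  Here’s a minimal per-sample encoding sketch using the traditional B-format weighting (a general illustration only; the 3DSP SDK and other tools may use different channel ordering and normalization conventions, and the function name is made up):

import math

def encode_first_order_ambisonics(sample, azimuth_rad, elevation_rad):
    """Encode one mono sample into traditional B-format (W, X, Y, Z)."""
    w = sample * (1.0 / math.sqrt(2.0))                           # omnidirectional
    x = sample * math.cos(azimuth_rad) * math.cos(elevation_rad)  # front/back axis
    y = sample * math.sin(azimuth_rad) * math.cos(elevation_rad)  # left/right axis
    z = sample * math.sin(elevation_rad)                          # up/down axis
    return w, x, y, z

# Encode a full-scale sample arriving 30 degrees off to one side, slightly above
print(encode_first_order_ambisonics(1.0, math.radians(30), math.radians(10)))

Higher orders simply add more channels with finer directional weighting, which is what buys the sharper spatial resolution mentioned above; the encoded soundfield can then be rotated cheaply to follow the listener’s head before being decoded to headphones.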

Oculus has upgraded its existing audio SDK to improve the positional accuracy of sounds emitting very close to the player (a technology they call Near-Field HRTF, or Near-Field Head-Related Transfer Function).  The company has also provided the option of implementing sounds that originate from large sources (such as an ocean, for instance, or a forest fire).  Using the Volumetric Sound Sources technology, large sound-emitting objects can project their aural content across an assigned radius consistent with their scale.  Here’s a video from the Oculus Connect 4 conference, demonstrating the Near-Field HRTF and Volumetric Sound Sources capabilities of the Oculus audio SDK:

 
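The general idea behind a volumetric source is easy to sketch: rather than attenuating by the distance to a single point, we attenuate by the distance to the nearest point on the source’s volume, so that a huge object like an ocean doesn’t collapse to a pin-point when the player stands right beside it.  The following is a hypothetical illustration of that concept, not the actual algorithm inside the Oculus Audio SDK (the function name and the inverse-distance rolloff are my own simplifications):

import math

def volumetric_attenuation(listener_pos, source_center, source_radius,
                           min_distance=1.0):
    """Inverse-distance gain based on distance to the surface of a
    spherical sound-emitting volume.  Inside the volume, full level."""
    d = math.dist(listener_pos, source_center) - source_radius
    if d <= 0:
        return 1.0  # listener is inside (or on) the volume
    return min_distance / (min_distance + d)

# A 50 m 'ocean' sphere: a listener 5 m beyond its edge hears roughly -15.6 dB
print(volumetric_attenuation((0.0, 0.0, 55.0), (0.0, 0.0, 0.0), 50.0))

A real implementation would also spread the perceived width of the source as the listener approaches, but even this simple distance rule captures why scale-aware sources feel more believable than a single point hidden somewhere in the ocean.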

The PlayStation VR, as the only console-specific VR device, does not share the same market with such devices as the Vive or the Rift and therefore is not faced with the same competitive pressures.  Nevertheless, improvements continue to be made to the PSVR’s technology.  The newest model of the PSVR (released late last year) is a revised version with small but valuable improvements.  Among the changes, Sony added built-in stereo headphones to the headset (pictured right), removing the need for players to hook up separate headphones in order to experience VR audio.

Standalone VR audio

Now let’s take a quick look at the standalone VR devices (i.e. those devices that don’t need to be hooked up to a computer or console, and don’t need a mobile phone installed in order to work).  These VR headsets offer untethered, cable-free virtual reality exploration, but they’re also usually a bit less powerful and full-featured.  The five best-known standalone headsets are the Oculus Go, the Oculus Quest, the Lenovo Mirage Solo, the HTC Vive Focus, and the Shadow VR.

The Oculus Go and Lenovo Mirage Solo both hit retail this May.  The HTC Vive Focus and the Shadow VR both became available for consumers just this month.  The Oculus Quest was recently announced and is expected to hit retail in spring 2019.  All five use a Qualcomm smartphone processor chip from the Snapdragon line, so in that respect they’ve essentially adopted the internal mechanism of a high-end mobile phone and simply incorporated it into their on-board hardware.  In fact, the Qualcomm Snapdragon 835 (used in the Lenovo Mirage Solo, Vive Focus, Oculus Quest and Shadow VR devices) is also the same chip that’s at the heart of the Samsung Galaxy S8, the Google Pixel 2, and many other smartphone models.  Since all five untethered VR devices use Snapdragon technology, developers can choose to avail themselves of the Qualcomm Snapdragon VR Software Development Kit, which includes a 3D Audio Plugin for Unity (designed to provide high-performance audio on Qualcomm Snapdragon devices).  Qualcomm also offers a suite of 3D Audio Tools for use in conjunction with a Digital Audio Workstation such as Pro Tools.  While these are by no means the only choices, since they were developed by the company responsible for the Snapdragon processor, it stands to reason that some Snapdragon insights may have influenced the design and function of these tools.  Here’s a video interview with Hugo Swart, head of IoE Consumer Electronics at Qualcomm, as he discusses the virtual reality capabilities of the Snapdragon 835:

 

Audio for mobile VR

If you do a quick Amazon search under the phrase “Mobile VR Headsets,” you’ll see that there is now a dizzying plethora of headset models based around the “insert your mobile phone here” philosophy.  These headsets all rely on the processing technology of the phone inserted into them, and there are so many varying models that I won’t be attempting to delve into that topic.  Generally speaking, if there is an SDK specific to a particular VR headset model, then it should be considered.  For instance, the Oculus Audio SDK makes sense for the Samsung Gear VR, the Oculus Go, and the Oculus Quest, since all three are built on Oculus technology.  Likewise, the new Resonance Audio SDK from Google is a good choice for any of the headsets built around Google’s VR platforms (Daydream View, Google Cardboard, Lenovo Mirage Solo with Daydream).  Here’s a brief video produced by Google that demonstrates the Resonance Audio SDK:

 

Conclusion

That’s our discussion of where things currently stand with regard to VR platforms!  In the next article, we’ll be focusing on tips and tools for game audio folks working in VR.  I hope you enjoyed the article, and please let me know your thoughts in the comments section below!

 

Winifred Phillips is an award-winning video game music composer whose recent projects include the triple-A first person shooter Homefront: The Revolution. Her latest video game credits also include numerous Virtual Reality games, including Scraper: First Strike, Bebylon: Battle Royale, Fail Factory, Dragon Front, and many more.  She has composed music for games in five of the most famous and popular franchises in gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the MIT Press. As a VR game music expert, she writes frequently on the future of music in virtual reality games. Follow her on Twitter @winphillips.

Understanding Audio in VR – A Game Music Composer’s Resource Guide

Video game music composer Winifred Phillips working in her game composers production studio.

By Winifred Phillips | Contact | Follow

When I’m not at work in my studio making music for games, I like to keep up with new developments in the field of interactive entertainment, and I’ll often share what I learn here in these articles.  Virtual reality is an awesome subject for study for a video game composer, and several of my recent projects have been in the world of VR.  Since I’m sure that most of us are curious about what’s coming next in virtual reality, I’ve decided to devote this article to a collection of educational resources.  I’ve made a point of keeping our focus general here, with the intent of understanding the role of audio in VR and the best resources available to audio folks.  As a component of the VR soundscape, our music must fit into the entire matrix of aural elements, so we’ll spend this article learning about what goes into making expert sound for a virtual reality experience. Let’s start with a few articles that discuss methods and techniques for VR audio practitioners.

Continue reading

Resources For Video Game Music Composers

Video game music composer Winifred Phillips, at work in her music production studio.

By Winifred Phillips | Contact | Follow

I’m pleased to announce that my book, A Composer’s Guide to Game Music, is now available in its new paperback edition! I’m excited that my book has done well enough to merit a paperback release, and I’m looking forward to getting to know a lot of new readers!  The paperback is much lighter and more portable than the hardcover.  Here’s a view of the front and back covers of the new paperback edition of my book (click the image for a bigger version if you’d like to read the back cover):

award-winning video game music composer Winifred Phillips' book, A Composer's Guide to Game Music, is now available in paperback.

As you might expect, many aspiring game composers read my book, and I’m honored that my book is a part of their hunt for the best resources to help them succeed in this very competitive business.  When I’m not working in my music studio, I like to keep up with all the great new developments in the game audio field, and I share a lot of what I learn in these articles. Keeping in mind how many of my readers are aspiring composers, I’ve made a point of devoting an article once a year to gathering the top online guidance currently available for newcomers to the game music profession.  In previous years I’ve focused solely on recommendations gleaned from the writings of game audio pros, but this time I’d like to expand that focus to include other types of resources that could be helpful.  Along the way, we’ll be taking a look at some nuggets of wisdom that have appeared on these sites.  So, let’s get started!

Continue reading

Video game music systems at GDC 2017: tools and tips for composers

Photo of video game composer Winifred Phillips, working in her music production studio on the music of the SimAnimals video game.

By video game composer Winifred Phillips | Contact | Follow

Welcome back to this three-article series that’s bringing together the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers explored discoveries they’d made while creating interactivity in the music of their own game projects.  We’re looking at these ideas side-by-side to broaden our viewpoint and gain a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. We’ve been looking at five interactive music systems discussed in these five GDC 2017 presentations:

In the first article, we examined the basic nature of these interactive systems. In the second article, we contemplated why those systems were used, with some of the inherent pros and cons of each system discussed in turn.  So now, let’s get into the nitty gritty of tools and tips for working with such interactive music systems.  If you haven’t read parts one and two of this series, please go do so now and then come back:

  1. Video game music systems at GDC 2017: what are composers using?
  2. Video game music systems at GDC 2017: pros and cons for composers

Ready?  Great!  Here we go!

Continue reading

Video game music systems at GDC 2017: pros and cons for composers

Video game composer Winifred Phillips, pictured in her music production studio working on the music of LittleBigPlanet 2 Cross Controller

By Winifred Phillips | Contact | Follow

Welcome back to our three-article series dedicated to collecting and exploring the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers shared ideas they’d developed in the process of creating interactivity in the music of their own game projects.  We’re looking at these ideas side-by-side to cultivate a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. In the first article, we looked at the basic nature of five interactive music systems discussed in these five GDC 2017 presentations:

If you haven’t read part one of this article series, please go do that now and come back.

Okay, so let’s now contemplate some simple but important questions: why were those systems used?  What was attractive about each interactive music strategy, and what were the challenges inherent in using those systems?

Continue reading

Video game music systems at GDC 2017: what are composers using?

By video game music composer Winifred Phillips | Contact | Follow

Video game composer Winifred Phillips, presenting at the Game Developers Conference 2017.

The 2017 Game Developers Conference could be described as a densely-packed deep-dive exploration of the state-of-the-art tools and methodologies used in modern game development.  This description held especially true for the game audio track, wherein top experts in the field offered a plethora of viewpoints and advice on the awesome technical and artistic challenges of creating great sound for games. I’ve given GDC talks for the past three years now (see photo), and every year I’m amazed at the breadth and diversity of the problem-solving approaches discussed by my fellow GDC presenters.  Often I’ll emerge from the conference with the impression that we game audio folks are all “doing it our own way,” using widely divergent strategies and tools.

This year, I thought I’d write three articles to collect and explore the ideas that were discussed in five different GDC audio talks.  During their presentations, these five speakers all shared their thoughts on best practices and methods for instilling interactivity in modern game music.  By absorbing these ideas side-by-side, I thought we might gain a sense of the “bigger picture” when it comes to the current leading-edge thinking for music interactivity in games. In the first article, we’ll look at the basic nature of these interactive systems.  We’ll devote the second article to the pros and cons of each system, and in the third article we’ll look at tools and tips shared by these music interactivity experts. Along the way, I’ll also be sharing my thoughts on the subject, and we’ll take a look at musical examples from some of my own projects that demonstrate a few ideas explored in these GDC talks:

So, let’s begin with the most obvious question.  What kind of interactive music systems are game audio folks using lately?

Continue reading

Video Game Music Production Tips from GDC 2016

Game Composer Winifred Phillips during her game music presentation at the Game Developers Conference 2016.

I was pleased to give a talk about composing music for games at the 2016 Game Developers Conference (pictured left).  GDC took place this past March in San Francisco – it was an honor to be a part of the audio track again this year, which offered a wealth of awesome educational sessions for game audio practitioners.  So much fun to see the other talks and learn about what’s new and exciting in the field of game audio!  In this blog, I want to share some info that I thought was really interesting from two talks that pertained to the audio production side of game development: composer Laura Karpman’s talk about “Composing Virtually, Sounding Real” and audio director Garry Taylor’s talk on “Audio Mastering for Interactive Entertainment.”  Both sessions had some very good info for video game composers who may be looking to improve the quality of their recordings.  Along the way, I’ll also be sharing a few of my own personal viewpoints on these music production topics, and I’ll include some examples from one of my own projects, the Ultimate Trailers album for West One Music, to illustrate ideas that we’ll be discussing.  So let’s get started!

Continue reading