VR Headphones Update 2018: Video Game Music Composers

In this article written for video game composers, Winifred Phillips (composer of music for God of War) is here pictured working in her music production studio.

By Winifred Phillips | Contact | Follow

Glad you’re here!  I’m videogame composer Winifred Phillips.  My work as a game music composer has included music for projects released on nearly all of the gaming platforms, from one of my most recent projects (a Homefront game released on all the latest consoles and PCs) to one of my earliest projects (a God of War game released on PlayStation 2, PlayStation 3, and PlayStation Vita, pictured above).  You can read about my work as a video game composer in an interview I gave to Music Connection Magazine for the September 2018 issue, in the feature “Video Game Composers Speak!” (pictured right).

Lately, I’ve also been creating lots of video game music for awesome virtual reality games developed for the Oculus Rift, Oculus Go, HTC Vive, Samsung Gear VR, PlayStation VR, and lots of other top VR platforms.  One of the things I’ve noticed while working in VR is the immense importance of the audio delivery mechanism.

When audio is painstakingly spatialized, it becomes crucial to convey that carefully-crafted spatialization to the player with as little fidelity loss as possible.  With the importance of this issue in mind, for the past few years I’ve been periodically writing about headphones in relation to their use in virtual reality.

As game audio experts, we want to make sure that players are using the best kind of headphones, so that our work isn’t distorted or degraded.  To that end, in these articles we’ve been taking a look at new types of headphones that offer technology designed to enhance the VR experience.  We’ve also been considering how consumers use headphones, and some end-user problems that might come up along the way.  In this article, I’d like to revisit some topics we first touched upon in previous years, so that we can see how that tech is progressing.  Let’s get started!

Audeze iSINE

Audeze debuted its iSINE Virtual Reality Headphones in January 2017 at the Consumer Electronics Show in Las Vegas. These in-ear headphones touted planar magnetic technology as the driving factor behind their ability to deliver more convincing 3D audio for VR. While most headphones deliver sound via the standard dynamic driver (a.k.a. the moving-coil driver), the iSINE is built around planar magnetic technology.  For a discussion of what planar magnetic technology offers and why it may be significant to the VR gamer, you can read about the Audeze iSINEs in my article from last year.

So, what’s new with Audeze?  Well, Audeze has taken their product line one step further by announcing a new set of over-the-ear headphones that they’re branding as the ultimate headphones for gamers.  The Audeze Mobius headphones (pictured right) bring the planar magnetic technology forward into a dedicated gaming product, featuring the enhanced clarity of spatialization that makes this tech uniquely applicable to VR uses. While the Mobius headphones can also simulate surround sound, gamers will want to turn that function off when using these headphones in VR.  Along with the sound quality and increased spatialization promised by the planar magnetic drivers, the Mobius also features 3D head-tracking and a detachable microphone for in-game chat.  All these specs come at a price, however, and the Mobius headphones will be selling for $399 when they begin shipping to consumers at the end of September.  This might seem steep, but coming from a high-end headphone manufacturer whose top-of-the-line models sell for nearly four thousand dollars, the pricing for the Audeze Mobius is relatively accessible.  Here’s a video produced by Engadget about the Audeze Mobius:

Audeze completed an Indiegogo campaign for the Mobius in March, raising over a million dollars from over four thousand backers. It will be interesting to see if a Mobius success story might inspire more manufacturers of gaming headphones to begin incorporating planar magnetic technology into their products.  Of course, we can’t forget that crowdfunding campaigns can be risky (as shown by the unfortunate story of the OSSIC X later in this article), but since Audeze is an established headphone manufacturer with a well-known track record, we shouldn’t have to worry about the Mobius failing to ship.  For a very different story, let’s now get an update on the fate of two prototype VR headphones we discussed a year ago.

The OSSIC X and the Entrim 4D

I have the sad duty to report on the demise of two promising headphone models.  These products debuted at the height of popular enthusiasm for the nascent VR industry, and for a while it seemed like they had seized their opportunity at just the right time.  Let’s start off the discussion with the OSSIC X.


Back in 2016, I first wrote about the OSSIC X headphones, which were being described by OSSIC founder and CEO Jason Riggs as the “Oculus for audio headphones.”  When the Kickstarter campaign for OSSIC X netted a whopping $2.7 million in April 2016, the press began hailing the prototype headphones for their historic achievement in breaking Oculus’ record as the largest VR crowdfunding campaign to date.  OSSIC followed this up with an additional $500,000 Indiegogo campaign, and everything looked promising for a while.  The headphones were purported to calibrate to each wearer’s unique ears by virtue of what the OSSIC folks were calling “individual anatomy calibration.”  Using eight discrete drivers, the headphones would be able to play back sound to the correct portion of the ear, fully simulating the natural Head Related Transfer Functions of each wearer.  For a fuller explanation of Head Related Transfer Functions, check out this article I wrote back in 2015.

A year later, I checked back in with the OSSIC X to see how things were going, and I interviewed OSSIC’s creative director Sally Kellaway.  We discussed some of the features of the forthcoming headphones, and Kellaway clarified a few issues in regards to the required developer-side plugin that would enable a VR game to fully avail itself of the OSSIC X’s Head Related Transfer Functions.  While all these details painted an appealingly sunny and optimistic picture for the OSSIC X, some dark clouds were beginning to appear on the horizon.

OSSIC began announcing delays in their production schedules, and prospective purchasers discovered that they could no longer pre-order the product. Then, in April the company raised an additional $100,000 on the StartEngine crowdfunding site as they prepared their product for mass production.  Why was OSSIC going back into fundraising mode at this stage?

A month later, all these portents of doom were fully realized.  On May 19th 2018, OSSIC announced via its Kickstarter and Indiegogo pages that it would be unable to deliver the OSSIC X headphones it had promised (as pictured right).  So, what went wrong?  After an historically successful crowdfunding campaign, OSSIC managed to fully design the product and ship just eighty units to some developers and a few early backers.  With only a smattering of units delivered, the OSSIC team explained that “the product still requires significantly more capital to ramp to full mass production, and the company is out of money.”  OSSIC then abruptly shut down, leading to an outcry from the tens of thousands of product backers who had contributed over $3.7 million to the crowdfunding campaign.  The Washington Post published an article about the failure of the OSSIC X to keep its mass production promises, and backers of the OSSIC X launched a Facebook page for a planned class-action lawsuit.

The situation couldn’t be any messier, or sadder.  The OSSIC folks had an intriguing idea regarding customized HRTF and increased fidelity of spatialization.  Perhaps another company will take up these ideas and execute them with a more cautious and frugal business plan.  The story of the OSSIC X may provide a useful case study, as well as a cautionary tale, to those audio technology companies who may aspire to create a revolutionary headphone product for VR.

Entrim 4D

Then there’s the story of the Entrim 4D. First announced by Samsung over two years ago, we initially discussed the Entrim 4D headphones in an article from September 2016.  Like the OSSIC X, the Entrim 4D headphones were designed specifically for VR.  However, that’s where any similarities end.  Unlike the others, the Entrim 4D wasn’t designed to blaze new trails in audio quality and spatialization.  Instead, it was designed to literally rock our world.

The Entrim 4D directs low-level electrical impulses into the wearer’s inner ear in a process known as Galvanic Vestibular Stimulation.  Once these impulses reach the nerve regulating balance, they work to create the illusion that the wearer is moving.  Depending on the nature of the pulses, the wearer may feel varying types of movement, and these sensations can be customized to match the kinetic activities within a VR environment.  In other words, the Entrim 4D claimed to solve the inherent problem causing VR nausea – that our visually-perceived movements do not match our physically-perceived movements.

The Entrim 4D headphones made a few appearances at some technology conventions such as SXSW 2016 and the Samsung Developers Conference. And then… nothing.  If you’ve been a regular reader of these articles, you’ll recall that we’ve been revisiting this subject periodically to see if the Entrim 4D headphones might eventually pop up again in the news, or make another appearance at a technology event.  At this point, with such a long period of silence firmly behind us, I think we can comfortably draw the conclusion that the Entrim 4D headphones will not be seeing a retail release.

However, that doesn’t mean that the idea of Galvanic Vestibular Stimulation in VR has been abandoned.  During the most recent SIGGRAPH conference, an Osaka University team led by Dr. Kazuma Aoyama demonstrated a system called GVS RIDE that uses Galvanic Vestibular Stimulation to induce the sensations of roll, pitch, and yaw.  The GVS RIDE consists of a set of four electrodes delivering pulses to influence the vestibular system.  Similarly, the vMocion 3v System (pictured above) also uses a set of four electrodes to induce the sensation of physical movement within VR. Developed by the Mayo Clinic’s Aerospace Medicine and Vestibular Research laboratory team in Arizona, the vMocion 3v System is currently available for license to VR game studios.  Here’s a video produced by the makers of the vMocion 3v System to explain the virtues of Galvanic Vestibular Stimulation:


While all these developments are intriguing, they unfortunately don’t involve any sort of integration into audio headphones (as the Entrim 4D would have).  So at this point, let’s move on to an entirely different topic relating to the use of audio headphones in VR.

The Headphones Problem

Back in 2015 I wrote about a tricky issue associated with the use of headphones in VR… or, more specifically, a popular misperception about the use of headphones in VR.  At the time, I’d noticed that many gamers and game journalists were recommending surround sound headphones for VR use.  This is, as we know, completely wrong.  Surround sound (as pictured left) interferes with the binaural signal of the VR game, degrading the quality of the positional audio that the game is trying to deliver.  Back in 2015, the message hadn’t really gotten out that surround sound headphones are incompatible with VR.  In fact, there was some confusion regarding the difference between surround sound and other formats (binaural, ambisonics, etc.).  This kind of confusion is exactly what leads consumers to waste money on expensive surround sound headphones for use with their brand-new VR rig.  So, as a part of this update article, I thought I’d revisit the issue and see if the situation has improved.  Are people still talking about surround-sound formats and spatialized VR audio as though they’re the same thing?

Well, the problem certainly hasn’t gone away.  For instance, Scientific American published an article last year about advances in VR sound with the title “New VR Tech Aims to Take Surround-Sound to the Next Level.”  The article itself discusses the importance of headphone-delivered spatial audio in achieving a sensation of presence in virtual reality.  The title, however, serves to perpetuate a misconception that surround sound and other types of spatial audio are essentially synonymous when used with headphones, when they are definitely not.  These kinds of false equivalences continue to be drawn by publications such as The Verge, which described ambisonics as a “full-sphere surround sound technique” that works to “trick your brain” into “assigning positions and distance to sounds, even when wearing something like headphones.”  Describing ambisonics as a “surround sound technique” and suggesting that this surround sound works great in VR when you’re wearing headphones is a problem – it may confuse players into thinking that surround-sound headphones are viable for VR.  In a similar vein, the technology site Tech.co began an article about VR gaming by suggesting that the reader might “put on a VR headset and add over-ear 7.1 surround sound headphones.”  We all know that applying 7.1 surround sound encoding to VR audio would seriously degrade the sound quality.
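To make the distinction concrete, here’s a minimal Python sketch of the per-ear cues a binaural renderer computes for a single source – cues that a fixed channel-based surround feed simply doesn’t carry.  The Woodworth time-difference approximation and the crude level-difference curve here are illustrative textbook simplifications, not any engine’s actual implementation:

```python
import numpy as np

def binaural_cues(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Per-ear cues a binaural/HRTF renderer encodes for one source.

    Uses the classic Woodworth approximation for the interaural time
    difference (ITD) and a crude sine curve for the interaural level
    difference (ILD). Both are illustrative simplifications.
    """
    az = np.radians(azimuth_deg)
    # ITD: extra travel time around the head to the far ear
    itd_s = (head_radius_m / speed_of_sound) * (np.sin(az) + az)
    # ILD: head shadowing makes the near ear a few dB louder
    ild_db = 6.0 * np.sin(az)
    return itd_s, ild_db

# A source dead ahead (0 degrees) produces identical signals at both
# ears; a source 45 degrees to the right arrives earlier and louder
# there. A 5.1/7.1 mix, by contrast, is just fixed speaker channels:
# folding it down to headphones carries no such per-ear cues.
itd, ild = binaural_cues(45.0)
```

The point of the sketch is simply that binaural audio is computed per ear from a source position, while surround sound is authored for a fixed ring of speakers – which is why the two can’t be treated as interchangeable over headphones.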

And, of course, let’s not forget the products that are still on the market, claiming to enhance the experience of VR by virtue of surround sound headphone technology.  As a fun example, let’s look at a bizarre product advertising itself as an ideal solution for VR sound.  The ‘HaloSurround’ (pictured) has an amusingly peculiar visual aesthetic.  It combines a set of in-ear headphones with a black hoop-shaped contraption worn on the top of the head like a hat.  Resembling a ‘halo’ only in the vaguest sense, the HaloSurround headphones deliver 5.1 surround sound through the six speakers mounted inside the ‘halo’ perched on top of the wearer’s head.

Just to complete the overall weirdness of this product, the HaloSurround is available in a model that includes its own Google Cardboard-style VR headset designed for “VR apps”.  So again, what we’re seeing here is a false connection drawn between surround-sound headphones and VR.

At the moment, it seems that most gamers are resorting to online forums to sort out their confusion in regards to whether surround sound headphones should be used in VR.  While the PlayStation VR has always included information about this in the FAQ area of its website, the Oculus and HTC Vive sites offer no easily-discernible guidance on this issue.  Considering how much effort and care are devoted to the creation of convincing aural worlds for VR, it would be a shame if some players missed out on the experience because they were using the wrong headphones.  I’ll be keeping an eye on this issue, in the hopes that some more definitive guidance may be forthcoming from the VR headset manufacturers.


In this article we’ve revisited topics of interest connected to VR headphones, and we’ve been brought up to speed on current developments with several headphone models.  In my next article, I’ll be exploring what’s brand new in the world of headphones for VR.  In the meantime, please feel free to share your thoughts in the comments section below!


Winifred Phillips is an award-winning video game music composer whose recent projects include the triple-A first person shooter Homefront: The Revolution.  Her latest video game credits also include numerous Virtual Reality games, including Scraper: First Strike, Bebylon: Battle Royale, Fail Factory, Dragon Front, and many more. She has composed music for games in five of the most famous and popular franchises in gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the MIT Press. As a VR game music expert, she writes frequently on the future of music in virtual reality games. Follow her on Twitter @winphillips.

Composing video game music for Virtual Reality: Comfort versus performance



Delighted you’re here!  I’m videogame composer Winifred Phillips, and I’m happy to welcome you back to this four-part article series exploring the role of music in VR games! These articles are based on the presentation I gave at this year’s Game Developers Conference in San Francisco, entitled Music in Virtual Reality (I’ve included the official description of my talk at the end of this article). If you haven’t read the previous three articles, you’ll find them here:

During my GDC presentation, I focused on three important questions for VR game music composers:

  • Do we compose our music in 3D or 2D?
  • Do we structure our music to be Diegetic or Non-Diegetic?
  • Do we focus our music on enhancing player Comfort or Performance?

In the course of exploring these questions during my GDC presentation, I discussed my work on four of my own VR game projects – the Bebylon: Battle Royale arena combat game from Kite & Lightning, the Dragon Front strategy game from High Voltage Software, the Fail Factory comedy game from Armature Studio, and the Scraper: First Strike shooter/RPG from Labrodex Inc.


Composing video game music for Virtual Reality: Diegetic versus Non-diegetic



So happy you’ve joined us!  I’m videogame composer Winifred Phillips.  Welcome back to our four-part discussion of the role that music plays in Virtual Reality video games! These articles are based on the presentation I gave at this year’s Game Developers Conference in San Francisco.  My talk was entitled Music in Virtual Reality (I’ve included the official description of my talk at the end of this article). If you haven’t read the previous two articles, you’ll find them here:

During my GDC presentation, I focused on three important questions for VR video game composers:

  • Do we compose our music in 3D or 2D?
  • Do we structure our music to be Diegetic or Non-Diegetic?
  • Do we focus our music on enhancing player Comfort or Performance?

While attempting to answer these questions during my GDC talk, I discussed my work on four of my own VR game projects – the Bebylon: Battle Royale arena combat game from Kite & Lightning, the Dragon Front strategy game from High Voltage Software, the Fail Factory comedy game from Armature Studio, and the Scraper: First Strike shooter/RPG from Labrodex Inc.

In these articles, I’ve been sharing the discussions and conclusions that formed the basis of my GDC talk, including numerous examples from these four VR game projects.  So now let’s look at the second of our three questions:


Composing video game music for Virtual Reality: 3D versus 2D


Welcome!  I’m videogame composer Winifred Phillips, and this is the continuation of our four-part discussion of the role that music can play in Virtual Reality video games.  These articles are based on the presentation I gave at this year’s Game Developers Conference in San Francisco, entitled Music in Virtual Reality (I’ve included the official description of my talk at the end of this article).  If you missed the first article exploring the history and significance of positional audio, please go check that article out first.

Are you back?  Great!  Let’s continue!

During my GDC talk, I addressed three questions which are important to video game music composers working in VR:

  • Do we compose our music in 3D or 2D?
  • Do we structure our music to be Diegetic or Non-Diegetic?
  • Do we focus our music on enhancing player Comfort or Performance?


Composing video game music for Virtual Reality: The role of music in VR



Hey everybody!  I’m video game composer Winifred Phillips.  At this year’s Game Developers Conference in San Francisco, I was pleased to give a presentation entitled Music in Virtual Reality (I’ve included the official description of my talk at the end of this article). While I’ve enjoyed discussing the role of music in virtual reality in previous articles that I’ve posted here, the talk I gave at GDC gave me the opportunity to pull a lot of those ideas together and present a more concentrated exploration of the practice of music composition for VR games.  It occurred to me that such a focused discussion might be interesting to share in this forum as well. So, with that in mind, I’m excited to begin a four-part article series based on my GDC 2018 presentation!


The Virtual Reality Game Music Composer


Project Morpheus headset.

Ready or not, virtual reality is coming!  Three virtual reality headsets are on their way to market and expected to hit retail in either late 2015 or sometime in 2016.  These virtual reality systems are:

  • the Oculus Rift
  • Sony’s Project Morpheus
  • the HTC Vive

VR is expected to make a big splash in the gaming industry, with many studios already well underway with development of games that support the new VR experience.  Clearly, VR will have a profound impact on the visual side of game development, and certainly sound design and voice performances will be impacted by the demands of such an immersive experience… but what about music?  How does music fit into VR?

At GDC 2015, a presentation entitled “Environmental Audio and Processing for VR” laid out the technology of audio design and implementation for Sony’s Project Morpheus system.  While the talk concentrated mainly on sound design concerns, speaker Nicholas Ward-Foxton (audio programmer for Sony Computer Entertainment) touched upon voice-over and music issues as well.  Let’s explore his excellent discussion of audio implementation for a virtual space, and ponder how music fits into this brave new virtual world.


Nicholas Ward-Foxton, during his GDC 2015 talk.

But first, let’s get a brief overview on audio in VR:

3D Positional Audio

All three VR systems feature some sort of positional audio, meant to achieve a full 3D Audio Effect.  With the application of the principles of 3D Audio, sounds will always seem to be originating from the virtual world in a realistic way, according to the location of the sound-creating object, the force/loudness of the sound being emitted, the acoustic character of the space in which the sound is occurring, and the influences of obstructing, reflecting and absorbing objects in the surrounding environment.  The goal is to create a soundscape that seems perfectly fused with the visual reality presented to the player.  Everything the player hears seems to issue from the virtual world with acoustic qualities that consistently confirm an atmosphere of perfect realism.
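The distance side of this can be sketched in a few lines – the gain an engine might apply as a sound-emitting object moves away from the listener.  This assumes a generic inverse-distance rolloff model, not the attenuation curve of any particular VR audio SDK:

```python
import math

def distance_gain(source_pos, listener_pos, ref_distance=1.0, rolloff=1.0):
    """Inverse-distance attenuation: a source plays at full gain out to
    the reference distance, then falls off smoothly as it moves away.
    A generic illustration of the common core of 3D audio engines."""
    d = math.dist(source_pos, listener_pos)
    return ref_distance / (ref_distance + rolloff * max(d - ref_distance, 0.0))

# A torch crackling 1 m from the listener plays at full gain;
# the same torch 10 m away is attenuated to a tenth of that.
near = distance_gain((0, 0, 0), (1, 0, 0))
far = distance_gain((0, 0, 0), (10, 0, 0))
```

Real systems layer occlusion, reflections, and HRTF filtering on top of this simple gain, but the principle – loudness consistent with the object’s place in the virtual world – is the same.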

All three VR systems address the technical issues behind achieving this effect with built-in headphones that deliver spatial audio consistent with the virtual world.  The Oculus Rift licensed the VisiSonics RealSpace 3D Audio plugin to calculate acoustic spatial cues, then subsequently built their own 3D audio plugin based on the RealSpace technology, allowing their new Oculus Audio SDK to generate the system’s impressive three-dimensional sound.  According to Sony, Project Morpheus creates its 3D sound by virtue of binaural recording techniques (in which two microphones are positioned to mimic natural ear spacing), implemented into the virtual environment with a proprietary audio technology developed by Sony.  The HTC Vive has only recently added built-in headphones to its design, but the developers plan to offer full 3D audio as part of the experience.

To get a greater appreciation of the power of 3D audio, let’s listen to the famous “Virtual Barber Shop” audio illusion, created by QSound Labs to demonstrate the power of Binaural audio.

Head Tracking and Head-Related Transfer Function

According to Nicholas Ward-Foxton’s GDC talk, to make the three-dimensional audio more powerful in a virtual space, the VR systems need to keep track of the player’s head movements and adjust the audio positioning accordingly.  With this kind of head tracking, sounds swing around the player when turning or looking about.  This effect helps to offset an issue of concern in regards to the differences in head size and ear placement between individuals.  In short, people have differently sized noggins, and their perception of audio (including the 3D positioning of sounds) will differ as a result.  This dependence of sound perception on the unique anatomical details of the individual listener is captured by the Head-Related Transfer Function.  There’s an excellent article explaining Head-Related Transfer Function on the “How Stuff Works” site.

Head-Related Transfer Function can complicate things when trying to create a convincing three-dimensional soundscape.  When listening to identical binaural audio content, one person may not interpret aural signals the same way another would, and might estimate that sounds are positioned differently.  Fortunately, head tracking comes to the rescue here.  As Ward-Foxton explained during his talk, when we move our heads about and then listen to the way that the sounds shift in relation to our movements, our brains are able to adjust to any differences in the way that sounds are reaching us, and our estimation of the spatial origination of individual sounds becomes much more reliable.  So the personal agency of the gaming experience is a critical element in completing the immersive aural world.
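The head-tracking step itself is conceptually tiny: each frame, the renderer subtracts the head’s orientation from every source’s world position, so sources stay fixed in the world while they swing around relative to the ears.  Here’s a deliberately simplified, yaw-only sketch (real systems track full 3D orientation, of course):

```python
def head_relative_azimuth(source_azimuth_deg, head_yaw_deg):
    """Convert a source's world-space azimuth into a head-relative
    azimuth, wrapped to (-180, 180]. Positive means to the listener's
    right. A yaw-only illustration of the head-tracking adjustment."""
    rel = (source_azimuth_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

# A source straight ahead (0 deg) lands at -30 deg (toward the left
# ear) when the player turns their head 30 deg to the right -- the
# source stays put in the world while the ears move.
```

It’s this continuous re-rendering against head motion that lets the brain calibrate away individual HRTF differences, as Ward-Foxton described.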

Music, Narration, and the Voice of God


Now, here’s where we start talking about problems relating directly to music in a VR game.  Nicholas Ward-Foxton’s talk touched briefly on the issues facing music in VR by exploring the two classifications that music may fall into. When we’re playing a typical video game, we usually encounter both diegetic and non-diegetic audio content.  Diegetic audio consists of sound elements that are happening in the fictional world of the game, such as environment sounds, sound effects, and music being emitted by in-game sources such as radios, public address systems, NPC musicians, etc.  On the other hand, non-diegetic audio consists of sound elements that we understand to be outside the world of the story and its characters, such as a voice-over narration, or the game’s musical score.  We know that the game characters can’t hear these things, but it doesn’t bother us that we can hear them.  That’s just a part of the narrative.
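The diegetic/non-diegetic distinction maps directly onto how an audio engine might treat each source – diegetic sounds get full 3D positioning in the world, while non-diegetic ones are mixed straight to the player’s ears.  A toy sketch, with hypothetical names throughout:

```python
from dataclasses import dataclass

@dataclass
class SoundSource:
    name: str
    diegetic: bool          # does this sound exist inside the fiction?
    world_position: tuple   # only meaningful when diegetic

def should_spatialize(src: SoundSource) -> bool:
    """A diegetic source (an NPC's radio) gets full 3D positioning;
    a non-diegetic one (the musical score) bypasses spatialization
    and is mixed directly to the headphones."""
    return src.diegetic

radio = SoundSource("npc_radio", True, (4.0, 0.0, 2.0))
score = SoundSource("orchestral_score", False, (0.0, 0.0, 0.0))
```

In a flat-screen game this split works fine; as we’re about to see, VR is where the non-diegetic branch starts causing trouble.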

VR changes all that.  When we hear a disembodied, floating voice from within a virtual environment, we sometimes feel, according to Ward-Foxton, as though we are hearing the voice of God.  Likewise, when we hear music in a VR game, we may sometimes perceive it as though it were God’s underscore.  I wrote about the problems of music breaking immersion as it related to mixing game music in surround sound in Chapter 13 of my book, A Composer’s Guide to Game Music, but the problem becomes even more pronounced in VR.  When an entire game is urging us to suspend our disbelief fully and become completely immersed, the sudden intrusion of the voice of the Almighty supported by the beautiful strains of the holy symphony orchestra has the potential to be pretty disruptive.


The harpist of the Almighty, hovering somewhere in the VR world…

So, what can we do about it?  For non-diegetic narration, Ward-Foxton suggested that the voice would have to be contextualized within the in-game narrative in order for the “voice of God” effect to be averted.  In other words, the narration needs to come from some explainable in-game source, such as a radio, a telephone, or some other logical sound conveyance that exists in the virtual world.  That solution, however, doesn’t work for music, so it’s time to start thinking outside the box.

Voice in our heads

During the Q&A portion of Ward-Foxton’s talk, an audience member asked a very interesting question.  When the player is assuming the role of a specific character in the game, and that character speaks, how can the audio system make the resulting spoken voice sound the way it would to the ears of the speaker?  After all, whenever any of us speak aloud, we don’t hear our voices the way others do.  Instead, we hear our own voice through the resonant medium of our bodies, rising from our larynx and reverberating throughout the unique formants of our own acoustical vocal tract.  That’s why most of us perceive our voices as being deeper and richer than they sound when we hear them in a recording.

Ward-Foxton suggested that processing and pitch alteration might create the effect of a lower, deeper voice, helping to make the sound seem more internal and resonant (the way it would sound to the actual speaker).  However, he also mentioned another approach to this issue earlier in his talk, and I think this particular approach might be an interesting solution for the “music of God” problem as well.

Proximity Effect

“I wanted to talk about proximity,” said Ward-Foxton, “because it’s a really powerful effect in VR, especially audio-wise.”  Referencing the Virtual Barber Shop audio demo from QSound Labs, Ward-Foxton talked about the power of sounds that seem to be happening “right in your personal space.”  In order to give sounds that intensely intimate feeling when they become very close, Ward-Foxton’s team would apply dynamic compression and bass boost to the sounds, in order to simulate the Proximity Effect.

The Proximity Effect is a phenomenon related to the physical construction of microphones, making them prone to add extra bass and richness when the source of the recording draws very close to the recording apparatus.  This concept is demonstrated and explained in much more depth in this video produced by Dr. Alexander J. Turner for the blog Nerds Central:

So, if simulating the Proximity Effect can make a voice sound like it’s coming from within, as Ward-Foxton suggests, can applying some of the principles of the Proximity Effect make the music sound like it’s coming from within, too?
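As a back-of-the-envelope illustration of that recipe – a bass boost followed by some peak compression – here’s a sketch.  The one-pole filter and hard-knee compressor are generic textbook DSP pieces standing in for whatever Ward-Foxton’s team actually used, so treat the parameter values as placeholders:

```python
import numpy as np

def simulate_proximity(x, sr=48000, bass_gain_db=6.0, cutoff_hz=250.0,
                       threshold=0.5, ratio=4.0):
    """Fake the mic proximity effect on a mono signal: boost the lows
    via a one-pole low-pass, then tame the resulting peaks with a
    hard-knee compressor. Purely illustrative, not a production chain."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    low = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1.0 - a) * s + a * acc   # one-pole low-pass
        low[i] = acc
    # Add the gained-up bass band back onto the dry signal
    boosted = x + (10.0 ** (bass_gain_db / 20.0) - 1.0) * low
    # Hard-knee compression of anything above the threshold
    mag = np.abs(boosted)
    return np.where(mag > threshold,
                    np.sign(boosted) * (threshold + (mag - threshold) / ratio),
                    boosted)
```

Low-frequency content comes out warmer and louder while peaks are held in check – the “right in your personal space” character the Virtual Barber Shop demo trades on.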

Music in our heads

This was the thought that crossed my mind during this part of Ward-Foxton’s talk on “Environmental Audio and Processing for VR.”  In traditional music recording, instruments are assigned a position on the stereo spectrum, and the breadth from left to right can feel quite wide.  Meanwhile, the instruments (especially in orchestral recordings) are often recorded in an acoustic space that would be described as “live,” or reverberant to some degree.  This natural reverberance is widely regarded as desirable for an acoustic or orchestral recording, since it creates a sensation of natural space and allows the sounds of the instruments to blend with the assistance of the sonic reflections from the recording environment.  However, it also creates a sensation of distance between the listener and the musicians.  The music doesn’t seem to be invading our personal space.  It’s set back from us, and the musicians are also spread out around us in a large arc shape.

So, in VR, these musicians would be invisibly hovering in the distance, their sounds emitting from defined positions in the stereo spectrum. Moreover, the invisible musicians would fly around as we turn our heads, maintaining their position in relation to our ears, even as the sound design elements of the in-game environment remain consistently true to their places of origin in the VR world.  Essentially, we’re listening to the Almighty’s holy symphony orchestra.  So, how can we fix this?

One possible approach might be to record our music with a much more intimate feel.  Instead of choosing reverberant spaces, we might record in perfectly neutral spaces and then add very subtle amounts of room reflection to assist in a proper blend without disrupting the sensation of intimacy.  Likewise, we might somewhat limit the stereo positioning of our instruments, moving them a bit more towards the center.  Finally, a bit of prudently applied compression and EQ might add the extra warmth and intimacy needed in order to make the music feel close and personal.  Now, the music isn’t “out there” in the game world.  Now, the music is in our heads.
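The width-narrowing part of that approach, at least, is easy to sketch with standard mid/side processing – the width value here is a hypothetical starting point, not a mastering recipe:

```python
import numpy as np

def narrow_width(left, right, width=0.6):
    """Mid/side stereo width control: width 1.0 preserves the original
    stereo image, 0.0 collapses it to mono, and values in between pull
    the instruments in from the wide arc toward the center."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid + width * side, mid - width * side
```

Paired with a drier recording space and a touch of compression and EQ, this kind of centerward pull is one plausible way to trade the distant concert-hall image for something that sits closer to the listener’s head.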

Music in VR

It will be interesting to see the audio experimentation that is sure to take place in the first wave of VR games.  So far, we’ve only been privy to tech demos showing the power of the VR systems, but the music in these tech demos has given us a brief peek at what music in VR might be like in the future.  So far, it’s been fairly sparse and subtle… possibly a response to the “music of the Almighty” problem.  It is interesting to see how this music interacts with the gameplay experience.  Ward-Foxton mentioned two particular tech demos during his talk.  Here’s the first, called “Street Luge.”

The simple music of this demo, while quite sparse, does include some deep, bassy tones and some dry, close-recorded percussion.  Also, the stereo breadth appears to be a bit narrow as well, but this may not have been intentional.

The second tech demo mentioned during Ward-Foxton’s talk was “The Deep.”

The music of this tech demo is limited to a few atmospheric synth tones and a couple of jump-scare stingers, underscored by a deep low pulse.  Again, the music doesn’t seem to have a particularly wide stereo spectrum, but this may not have been a deliberate choice.

I hope you enjoyed this exploration of some of the concepts included in Nicholas Ward-Foxton’s talk at GDC 2015, along with my own speculation about possible approaches to problems related to non-diegetic music in virtual reality.  Please let me know what you think in the comments!