Wednesday 21 November 2012

Innovating audio environments in games

Many professional sound designers feel that the future of game audio lies in the sophistication of environmental audio. The reverb of sound within games has increased in sophistication since the early days of EAX. 'Environmental Audio Extensions' (EAX) was created to enable a more realistic and accurate audio environment and was implemented in games such as Doom 3 and Prey. More recently, attempts to produce refined audio systems for replicating environmental physics have taken center stage.

One way that game developers approach reverb within games is by applying a single preset algorithm to a subset of the sound mix. This has been developed further by creating reverb regions that call different reverb presets based on the area the player is currently in, so the reverb changes between predetermined locations, each with its own predefined settings. What game developers are now trying to achieve is a sound system in the virtual world that replicates the same physics as the real world. (Damian Kastbauer) http://www.gamasutra.com/view/feature/132645/the_next_big_steps_in_game_sound_.php?print=1
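The region-based approach can be sketched in a few lines. Here is a minimal, hypothetical Python example; the region boxes, preset names and parameter values are all invented for illustration, not taken from any real engine:

```python
# Hypothetical sketch of preset-based reverb regions: the engine checks
# which region contains the listener and applies that region's preset.

REVERB_PRESETS = {
    "cave":    {"decay_s": 4.5, "wet": 0.6},
    "hallway": {"decay_s": 1.8, "wet": 0.35},
    "outdoor": {"decay_s": 0.4, "wet": 0.1},
}

# Axis-aligned regions: (name, (xmin, ymin), (xmax, ymax))
REGIONS = [
    ("cave",    (0, 0),  (10, 10)),
    ("hallway", (10, 0), (14, 10)),
]

def preset_for_position(x, y, default="outdoor"):
    """Return the reverb preset for the region containing the listener."""
    for name, (x0, y0), (x1, y1) in REGIONS:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return REVERB_PRESETS[name]
    return REVERB_PRESETS[default]
```

The limitation the article describes falls out of the sketch: the reverb can only ever be one of the authored presets, however the geometry around the player actually looks.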

How can reverb be improved within games?

One solution for improving reverb in game audio is for the game engine to calculate the reverb of a sound in real time. This could be achieved by calculating the surrounding geometry at the time a sound is played, or through the use of convolution reverb.

Convolution reverb is a process for digitally simulating the reverberation of a physical or virtual space. It is computed with a mathematical convolution operation and uses a pre-recorded audio sample of the impulse response of the space being modeled.
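As a sketch of the underlying operation (not any particular engine's implementation), direct convolution of a dry signal with an impulse response looks like this:

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: each input sample excites a scaled,
    delayed copy of the room's impulse response."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[n + k] += s * h
    return out
```

Feeding in a single unit impulse reproduces the IR exactly, which is why the IR fully characterises the room. Real-time implementations use FFT-based partitioned convolution instead of this O(N·M) loop, which is exactly why the CPU cost Ashby mentions becomes an issue.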

Simon Ashby, founder and VP of strategy at AudioKinetic, believes that convolution reverb is certainly the most appropriate way to reproduce realistic environmental acoustics. He explains that one of the reasons developers may avoid this method is "the time and expertise to code such advanced DSP effects." Another reason he provides relates to the technology of our time: "convolution reverbs consume a lot of runtime memory and CPU resources." http://www.develop-online.net/features/1208/Optimising-Convolution-Reverb

One company attempting to make more realistic environmental reverb possible is AudioKinetic, who have created a convolution reverb that adjusts memory and CPU usage based on available resources, while minimising the impact on reverb quality.

Simon Ashby explains the two approaches to optimising runtime performance: "time-domain truncation and frequency-domain truncation." http://www.develop-online.net/features/1208/Optimising-Convolution-Reverb

Time-Domain Truncation can be achieved by reducing the length of the IR. Ashby says, "A good approach to shorten the IR length is to determine the noise floor level of the scene where the IR will be used and then reduce the IR end time to the point where the reverb tailgate artifact is inaudible." http://www.develop-online.net/features/1208/Optimising-Convolution-Reverb
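A minimal sketch of that idea in Python, assuming the noise floor is supplied as a dB threshold (the -60 dB default here is an arbitrary illustration, not a value from the article):

```python
def truncate_ir(ir, noise_floor_db=-60.0):
    """Shorten an IR by dropping the tail once it stays below the
    scene's noise floor (expressed as a linear amplitude threshold)."""
    threshold = 10 ** (noise_floor_db / 20.0)  # dB -> linear amplitude
    last = 0
    for i, sample in enumerate(ir):
        if abs(sample) >= threshold:
            last = i
    return ir[:last + 1]
```

A noisier scene tolerates a higher threshold, so the IR (and the runtime cost of convolving with it) shrinks further.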




Frequency-Domain Truncation is the removal of low-energy frequency content from the IR.

* An IR is a recorded sample of a room’s response to short impulse sounds, which are applied to the incoming audio signal. Typically, rooms with long reverb times generate longer IRs and use more resources at runtime, whereas smaller rooms generate shorter IRs and consume less runtime resources. http://www.develop-online.net/features/1208/Optimising-Convolution-Reverb
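A sketch of the frequency-domain idea in pure Python: transform the IR, zero the bins whose magnitude falls below a fraction of the peak, and transform back. The 1% threshold is an invented illustration; a real implementation would work on FFT partitions rather than a whole-signal DFT:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for short illustrative IRs)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N).real
            for n in range(N)]

def drop_low_energy_bins(ir, rel_threshold=0.01):
    """Zero spectral bins below rel_threshold * peak bin magnitude."""
    X = dft(ir)
    peak = max(abs(b) for b in X)
    return idft([b if abs(b) >= rel_threshold * peak else 0j for b in X])
```

Bins carrying almost no energy contribute almost nothing audible, so discarding them saves work while leaving the dominant frequencies of the reverb intact.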

One game that led the way in environmental sound innovation was Crackdown, which received a BAFTA for its audio implementation. Raymond Usher described its system as:

"A revolutionary audio shader that uses the surrounding geometry to dynamically color and time delay the direct and indirect audio reflections." http://blindsecondlife.blogspot.co.uk/2007/11/crackdown-audio.html

Crackdown was a huge step forward in creating reverb that considers the surroundings of a virtual world. The video below is an example of the audio system used in Crackdown, called the 'Audio Shader'. At 4:15, a section on reverb and reflections gives an insight into the system behind the audio. You can see a variety of lines, shapes and words, which all contribute to the analysis of the surrounding geometry.




The innovation through sound did not end at complex reverb systems in Crackdown, as Raymond Usher explains,

"We also hired an explosives expert to do controlled detonations for us. If you've seen the explosions in Crackdown, they're pretty massive. We took that as a challenge to make the biggest sounding explosions ever in a video game. By using a unique layering system, our recordings from the explosive session, coupled with the audio shader system... we definitely encourage you to turn it up." http://interviews.teamxbox.com/xbox/1885/The-Audio-of-Crackdown/p2/

Future Innovation

One innovative idea being discussed by both game developers and audio engineers is the ability to simulate the voice as it travels through the body, out of the mouth and into the air. Using highly complex analysis of the human body, as well as the study of sound through virtual air, the outcome would be an intensely realistic sound affected by the character's body. It is strongly believed that sound should originate from distinct positions in 3D space, just as in reality. Realistically wave-tracing audio should require much less computation than realistically ray-tracing graphics. If this system of using wave-tracing to implement realistic audio in a game were adopted, it would produce the effects of perceived volume and position, as well as frequency attenuation, reverberation and even the Doppler shift effect. http://idcmp.linuxstuff.org/2008/10/wave-tracing-ray-tracing-for-sound.html

What is Ray-Tracing?

Ray-Tracing is a technique used on the computer graphics side of video games. It generates an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects.

What is Wave-Tracing?

Wave-Tracing audio is a theory that mimics the idea of Ray-Tracing in graphics. As explained in the Regular Expressions blog, in regular ray-tracing, rays of light are traced backward from a pixel of the camera, to an object and eventually to a light source. If you can do that with light, why can't it be done with sound?

This is an interesting theory, and one that may be achieved if, instead of rays of light, vibrations were traced. Instead of light sources within the game, the focus would be on the air and its friction.

Prototype

Another, more recent, game that adopted the initiative of developing a reverb system that replicates real-life surroundings is Prototype. In Prototype, all the ambience tracks were sent through a procedural reverb system. Scott Morgan explains,

"Through a system of ray casting, the physical space of the listener was analyzed in real time, and the reverb parameters set to align with the size of the space that the listener was in."http://www.gamasutra.com/view/feature/132645/the_next_big_steps_in_game_sound_.php?print=1

An example of this real-time analysis within Prototype is a scenario in the game where you enter a tunnel in Central Park. The system detects an enclosed space of a certain size and dynamically sets the reverb parameters. In real time, the sound of the park's birds and other ambient sounds are passed through the bigger reverb to give the illusion that the sounds are no longer arriving directly at the listener, but are reflected first, in an attempt to replicate the real world. (Scott Morgan)
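The Prototype approach, as described, can be caricatured in a few lines: average the listener's ray-hit distances to estimate the size of the enclosure, then map that to reverb parameters. All the constants below are invented for illustration; the real system is considerably more involved:

```python
def estimate_room_size(ray_hit_distances):
    """Average ray-hit distance as a crude measure of enclosure size."""
    return sum(ray_hit_distances) / len(ray_hit_distances)

def reverb_params(room_size_m):
    """Map estimated room size to a decay time and wet level (both capped)."""
    decay_s = min(6.0, 0.2 + 0.15 * room_size_m)
    wet = min(0.8, room_size_m / 50.0)
    return {"decay_s": round(decay_s, 2), "wet": round(wet, 2)}
```

Rays cast inside the Central Park tunnel hit geometry within a few metres, producing a small, tight reverb; rays in the open park travel much further, producing the bigger reverb the ambient sounds are passed through.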


References and relevant links

http://designingsound.org/2010/02/charles-deenen-special-the-future-of-sound-design-in-video-games-part-1/ 2010 - not really recent

http://www.prosoundeffects.com/blog/2012/06/gaming-sound-effects-generative-audio 2012

http://www.develop-online.net/features/1653/AUDIO-SPECIAL-The-generation-game

http://www.gamasutra.com/view/feature/130733/designing_a_nextgen_game_for_sound.php?print=1

http://www.develop-online.net/features/1685/In-Depth-Square-Enixs-Luminous-Studio-engine

http://www.gamasutra.com/view/feature/4257/the_next_big_steps_in_game_sound_.php

http://designingsound.org/2010/02/the-next-big-steps-in-game-sound-design/

http://en.wikipedia.org/wiki/Environmental_Audio_Extensions

http://interviews.teamxbox.com/xbox/1885/The-Audio-of-Crackdown/p2/

http://idcmp.linuxstuff.org/2008/10/wave-tracing-ray-tracing-for-sound.html

http://www.cs.princeton.edu/~funk/sig98.pdf

http://bjaberle.com/2011/02/the-future-of-game-audio/

http://www.gamasutra.com/view/feature/132645/the_next_big_steps_in_game_sound_.php?print=1

Wednesday 7 November 2012

Inspiring

Interesting film about the production of three indie games... Super Meat Boy, Braid & Fez

Youtube Trailer


Indie Games


Interactive Music Systems in Games

Interactive music systems


Computer games rely on the player making decisions within the game, so the music needs to react to those decisions in real time. As technology has advanced, new approaches to interactive music systems have been created, and they have become just as important as graphics and gameplay. West B. Latta explains,


"Games are an interactive medium, and as such, the presentation of musical soundtracks must also be able to adapt to changing gameplay. To get a truly immersive experience, the music in games must change on-the-fly according to what is happening in the game, while still retaining a cinematic quality." http://www.shockwave-sound.com/Articles/C01_Interactive_Music_in_Games.html

The earliest games were limited to simple 8-bit sounds that played when triggered, or a short musical loop playing in the background. Classic games such as Zelda and Super Mario started to show signs of development towards an interactive music system, but were still bound by the technology of their time. In Super Mario Bros., for example, the music changed in tempo as the player's time ran out.
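That Super Mario behaviour amounts to a very simple rule. A hypothetical sketch, where the threshold and speed-up factor are guesses for illustration rather than the game's actual values:

```python
def music_tempo(base_bpm, time_remaining_s, hurry_threshold_s=100):
    """Play at normal tempo until the level timer drops below the
    threshold, then switch to a sped-up 'hurry up' version of the loop."""
    if time_remaining_s > hurry_threshold_s:
        return base_bpm
    return base_bpm * 1.5
```

Even this one-line rule is interactive music in the Project Bar-B-Q sense discussed below: a real-time game event (the timer) influencing pre-determined sonic behaviour.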

An interactive music system reacts to the player's choices within the game, such as finding hidden objects and entering certain areas. Project Bar-B-Q 2003 discusses interactive audio systems and asks what an interactive audio system is, as well as what it should be.

"An audio system that is designed to have its pre-determined sonic behavior influenced in response to real-time events, and is comprised of an Interactive Audio Engine and Interactive Audio Data". http://www.projectbarbq.com/bbq03/bbq03r5.htm

Below is a flow diagram of the Interactive Audio System in games


http://www.projectbarbq.com/bbq03/bbq03r5.htm
Some argue that music in video games does not require an interactive element and that a musical score will suffice. However, some of the benefits of an interactive music system seem more appropriate than a simple cinematic score. Here are some of Project Bar-B-Q's outcomes:


Why Interactive Audio?
  • It enhances the user experience.
  • It empowers the user via participation and choice, facilitating the market trend toward active consumption.
  • It provokes and inspires user involvement.
  • It creates a unique personality for products.
  • It enables users to perform new types of activities.
  • It creates a participatory education experience.
  • It's potentially cheaper to implement.
  • It allows simplification of the system and cost reduction.
  • It allows audiences to experience interactive audio outside of its original context.
http://www.projectbarbq.com/bbq03/bbq03r5.htm

Andrew Clark said,

"It would be really cool if game music could complement onscreen action with the same kind of subtlety, depth, and expression. The complication is that, in games, the timing, pacing, contexts, and outcomes of the onscreen action are constantly in flux, depending on the actions of the player." http://www.gamasutra.com/view/feature/129990/defining_adaptive_music.php?print=1

One of the first interactive music systems used in games was 'DirectMusic Producer', a component of Microsoft DirectX. It allowed the user to create music and sound effects that would be selected by the player's choices in the game, so the gamer experienced variation in the music and sounds. 'Stainless Steel Studios' was a games company that adopted 'DirectMusic Producer' in games such as 'Empires: Dawn of the Modern World'. Sound designer Scott Morgan was drafted in to create an interactive music system for 'Empires: Dawn of the Modern World' due to his experience with Microsoft and 'DirectMusic Producer'. Here is Morgan's demonstration of 'DirectMusic Producer':




Middleware

In modern video game production, there are two leading pieces of middleware that are integrated into games... FMOD and Wwise.

FMOD

Through its event system, the composer can utilize multichannel audio files, or 'stems'. This allows individual instruments or sections to be added or subtracted based on game states, or any other dynamic information fed into FMOD, such as health, location, or proximity to certain objects or enemies. FMOD takes a more 'logic-based' approach and allows the designer to define various cues, segments and themes that transition to other cues, segments or themes based on any user-defined set of parameters. FMOD also allows for beat-matched transitions and time-synchronized 'flourish' segments.
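A minimal sketch of this vertical, stem-based layering. This is not FMOD's actual API; the layer names and the single 0-to-1 'intensity' parameter are invented for illustration:

```python
def stem_gains(intensity, layers=("ambient", "percussion", "strings", "brass")):
    """Vertical layering sketch: each stem fades in over its own slice of
    a 0..1 'intensity' parameter (stealth/stress, health, proximity...)."""
    gains = {}
    n = len(layers)
    for i, name in enumerate(layers):
        lo, hi = i / n, (i + 1) / n           # slice where this layer fades in
        gains[name] = max(0.0, min(1.0, (intensity - lo) / (hi - lo)))
    return gains
```

At low intensity only the ambient bed plays; as the game pushes the parameter up, percussion, strings and finally brass fade in on top, all from the same synchronized multichannel file.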



FMOD was used in

- Splinter Cell: Chaos Theory
Depending on the player's level of 'stealth and stress', different intensities of music would be brought in. This is called a 'vertical' approach to music system design.

- Tomb Raider: Legend
Troels Folmann used a system he devised called 'micro-scoring': crafting a vast number of small musical phrases and themes that are then strung together in a logical way based on the player's actions throughout the course of the game. An example in the game: the player may explore a jungle area with an ambient soundtrack playing. As they interact with an artifact or puzzle, a seamless transition is made to a micro-score specific to that game event.
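A toy sketch of the micro-scoring idea: map game events to short cues, and defer each switch to the start of the next bar so the transition lands musically. The event names, cue names and timings are all invented for illustration:

```python
# Hypothetical event-to-cue table for a Tomb Raider-style jungle level.
MICRO_SCORES = {
    "explore":  "jungle_ambient",
    "artifact": "artifact_sting",
    "combat":   "combat_loop",
}

def next_cue(event, beat_pos, beats_per_bar=4):
    """Return the cue for a game event and the beat at which to switch
    to it (the first beat of the next bar)."""
    cue = MICRO_SCORES.get(event, MICRO_SCORES["explore"])
    next_bar = ((beat_pos // beats_per_bar) + 1) * beats_per_bar
    return cue, next_bar
```

This is the 'horizontal' counterpart to the vertical layering above: instead of fading stems in and out, the music moves sideways from one short cue to the next.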

Wwise

Wwise is the product of AudioKinetic, and allows multichannel stems to be integrated into a "logic-based" approach to music design. With these features, composers can create a series of themes with time-synchronized transitions, triggered by game events or states. It also allows other parameters to fade various musical stems in and out of the mix, creating smooth transitions and a more professional finish. Wwise is a system that incorporates both a horizontal and a vertical approach to music design. (West B. Latta)




Wwise has been used in a huge number of games, such as:
- Batman: Arkham City
- Halo Wars and Halo 4
- Mass Effect 2 & 3
- Star Wars: The Force Unleashed 2
- Assassin's Creed 2, 3 & Brotherhood

Wwise also includes features such as a cross-platform sound engine, a real-time game simulator, a plug-in architecture and the SoundFrame API.


Video Example of Interactive Music System


Totally Games / LucasArts’ “X-Wing” series

"The “X-Wing” (PC DOS) series, which debuted in 1993, featured MIDI versions of John Williams and John-Williams-esque orchestral music. Lucas Arts’ patented iMUSE music engine handled sophisticated run-time interactions between dramatic onscreen action and a database of music loops, cues, and transitions. (Evolving versions of iMUSE were also used on a number of later Lucas Arts projects.)" http://www.gamasutra.com/view/feature/129990/defining_adaptive_music.php?print=1


Here is an example of an interactive music system being used in conjunction with the timing of an online game of Battlefield 3. What you hear at first is only sound from within the game, such as gunfire and explosions. However, at 0:48 in the video, the enemy team has only five remaining lives and is about to be defeated. The moment the number reaches five, the music starts, building in texture until the game concludes. The music then continues in full orchestration during the scoresheet screen.


Interactive music in Battlefield 3 features in every game type, and in a similar manner. As a team reaches the end of a game, the music starts. Each team hears different music: either victory music or defeat music. The increase in musical texture gives the players a sense of urgency and can often push them further in the game with added adrenaline.

Another example of Battlefield 3's interactive audio implementation is the dialogue in online games. Using the 'conquest' game type as an example, dialogue is heard when a flag has been captured, but its intensity depends on how many flags have been lost in total. So if your team holds three flags and the fourth has just been lost, the event is announced with little expression. However, if you had just lost all the flags, the dialogue would be screamed out, to suggest that you need to work harder.

Tuesday 6 November 2012

The Sounds of Batman: Arkham City


DirectMusic Producer

'DirectMusic Producer' is a piece of software that allows the user to create interactive music for games. It allows for huge variation in the sounds and music played in the game, which can often be designed to be different every time. Here is a video created by Scott Morgan explaining his use of 'DirectMusic Producer'.


Tuesday 16 October 2012

Sound Propagation in Games

'Sound propagation' in games is the term given to a system built into the game that allows sound to be manipulated in relation to the environment. One of the benefits of realistic sound propagation is that it can dramatically increase the player's immersion. Sound propagation's wave interactions affect the following:


Reflection - Absorption - Diffraction- Refraction

Sound reflection is one of the more obvious techniques used when creating a virtual room, and is often heard as reverb or echo. Absorption is the blocking of sound by large objects between the source of the sound and the position of the listener: a door, a window, or even large furniture. Sound travels around obstacles, but the result is muffling and volume alterations.

Sound diffraction is when a sound is heard that is not in sight. An example of this could be a radio playing music in a different room from the character's. The sound is still heard even though you cannot see its source. Sound refraction is the natural bending of sound. Refraction can also add additional sound, effectively amplifying it; a natural amplifier could be a large lake.
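The muffling that absorption and occlusion produce is commonly approximated with a low-pass filter. A minimal one-pole sketch; the mapping of an 'occlusion' amount straight onto the filter coefficient is an invented illustration:

```python
def occlude(samples, occlusion=0.8):
    """One-pole low-pass as a crude absorption model: the more occluded
    the source, the more high-frequency detail is smoothed away."""
    out, prev = [], 0.0
    a = occlusion  # 0 = clear direct path, approaching 1 = heavily blocked
    for s in samples:
        prev = a * prev + (1.0 - a) * s
        out.append(prev)
    return out
```

With occlusion 0 the signal passes through untouched; as the value rises, sharp transients (gunshots, speech consonants) are dulled, which is the familiar 'behind a door' effect.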

Limitations of the game industry:
  • The game wants to run between 30 and 60 FPS
  • You cannot "downgrade" the game for sound propagation
  • Limited memory (2 to 50 MB for audio)
  • Limited CPU (around 10% of the total CPU for audio)
  • Up to 64 simultaneous sounds playing
  • At 60 FPS, that gives 250 ms per sound
http://gdcvault.com/play/1015492/Real-time-Sound-Propagation-in


Due to the limitations of technology in the games industry, not all game developers include a sophisticated sound propagation system. Here is an example of sound propagation being implemented in a game. The Dunia engine in Far Cry 2 enables sound propagation to add extra depth to the game's scenery. Interestingly, the gamer is able to start a fire, which changes in sound depending on the position of the character and the different landscape objects involved.


Another more obvious example of sound propagation is in the weather system of Minecraft. The sound of rain changes when walking outside, as the gamer takes cover under trees or blocks, resulting in a much more realistic gaming experience.






Wave Types

Longitudinal Waves - Transverse Waves - Torsional Waves

"To further increase immersive gameplay, several recent games (such as Valve's Half Life 2 and Crytek's Far Cry) have added an integrated physics and behavior engine to enhance that realism, as objects interact with one another in a more physically plausible way. In contrast, sound generation and propagation have not received as much attention due to the extremely high computational cost for simulating realistic sounds."http://www.few.vu.nl/~A.Eliens/research/research/papers/@archive/science/p66-raghuvanshi.pdf



This clip shows how distance-based sound propagation can be convincingly emulated to enhance the gaming experience.



Below are some more examples and explanations of sound propagation in relation to digital rooms and games











Tuesday 9 October 2012

Techtonic Games Developer Diary

Truly amazing bone crunching sound!


Creating Monsters

Here is a clip that shows an insight into Wabi Sabi and the work they produce. They create sound design for a variety of high profile games, and this short footage shows how they go about creating sounds for a monster in Dead Space.



ENJOY

Friday 5 October 2012

8-BIT Goodness

This is a great fan made video of some classic games with alternative sound design. Quite an interesting approach to some of the games. The sound design that really stood out for me...Pac-Man! Almost sounds like the cookie monster.




For further information on the creator - biglionmusic.com


Wednesday 3 October 2012

Nasa Record Sounds of Space

As the exciting work of the 'Curiosity Rover' continues to wow space enthusiasts around the globe, a group of scientists have made a discovery of their own. Nasa have recently published an article on their official website about a recording made in space. The article explains how the sound is created and gives details on the satellite that recorded the audio.




Visit the article at - Nasa

Here is another video captured from a satellite in high orbit. This is an example of the sounds produced by the Earth and picked up by the satellite. Interestingly, I noticed people commenting on how there is no sound in space, but the creator of the video points out that it was not recorded through microphones on the satellite: it is radio waves given off by the Earth, which were then converted into audio.


Tuesday 2 October 2012

Non-Repetitive Design

Due to the limitations of memory in consoles, only a small amount of audio can be played at one time in a game. Sound can be taken from a variety of places within the games console: the CD/Blu-ray, RAM or hard drive. To create non-repetitive sound design, the audio integrator needs to carefully plan where each sound will be placed, and decide whether the sound will need to be loaded or streamed. An example of a sound that needs to be available instantly would be character voices. These are sounds that occur often during both online and offline gameplay and are required to play immediately.


Console Sound Data Limitations

Playstation 2 (Released in 2000) limited to - 2MB of RAM

Xbox 360 (Released in 2005) limited to - 512MB of RAM

Here you can see how these two consoles have been upgraded over time, but even with the modern Xbox 360 there is still the issue of delegating space between visuals and sound. Usually the visuals in a game take center stage, as most people seem to be more impressed by sharp pictures and cutting-edge graphics, but game developers are now starting to realise that sound can enhance the gamer's experience just as much as the visuals.


Alexander Brandon explained the basic idea of 'File Management' in his book 'Audio for Games':

  • Each sound file is like an audio CD. You typically store CDs somewhere different from where you play them. Sound files are stored with the rest of the game's data - usually on a hard drive, a CD, or a DVD.
  • For the sound files to play, they must be moved out of storage to a location that is able to play them - the equivalent of the audio device on which you play your CDs.
  • The fastest way to play sound files is usually to access them from memory: Random access memory can have sounds placed in it and removed; read-only memory cannot be changed once things are placed in it. The entire file is copied to memory from the storage area, then activated.
  • A slower way to play files is streaming. Streaming takes data from storage and copies small chunks of a file into RAM one chunk at a time. When the game code triggers a chunk of data to play, that data is removed from RAM and the next chunk is lined up behind it to play immediately afterward. Understandably, this process is slower. Imagine if the first 30 seconds of a song were on one CD and you had to swap it out for a second disc when you wanted to hear the next 30 seconds. The process of playing a streamed file isn't this cumbersome, but it does take longer to initially load the file for playback.
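Brandon's streaming description maps naturally onto a generator. A minimal sketch, with the chunk size invented for illustration:

```python
def stream_chunks(storage, chunk_size=4):
    """Streaming sketch: copy small chunks of stored data into a RAM
    buffer one at a time, instead of loading the whole file up front."""
    for start in range(0, len(storage), chunk_size):
        yield storage[start:start + chunk_size]  # one chunk resident in RAM
```

Only one chunk is resident at a time, which is the trade-off Brandon describes: far less RAM, at the cost of slower initial load before playback can begin.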


How to cheat the system


The Playstation 2 has 2MB of sound RAM, which means that no more than 2MB of sound data can be resident for playback at the same time.

One way of getting around the problem of small memory during the era of the Playstation 2 was for composers and sound designers to save their files in sound banks. This technique meant that only the sounds needed for what is currently happening in the game were loaded into RAM, while sounds not needed until later were kept in storage.

"Using smaller banks gives greater control because it is a more efficient use of RAM, and lets you prioritize groups of sounds at different points in the game according to the importance of playback."

Sony created a proprietary program for developers called VagEdit, which converts WAV files into VAG files. Using VagEdit, the sound data could be compressed to one quarter of its original size, using less memory and helping to conquer the dreaded 2MB memory limitation. This was heavily used on the Playstation 2 console.


How do you avoid repetition?

The repetition of sound in games is something that can grow old very quickly. If a gamer is racking up hours and hours of game time, they will start to become aware of repetition in sounds, resulting in breaks in immersion. For example, if different weapons in the game had the same sound, it would immediately become unrealistic, as we all know that different materials, shapes and sizes produce different sounds.

Real-time sound generation can be achieved through a variety of new and exciting techniques, one of these being the granulation of sound in video games, explained by Leonard J. Paul,

"Granulation of sound for games is becoming a more viable tool for sound artists as the processing power of game consoles continue to improve. Granulation is a relatively recent method of sound generation which allows for sampled sound to be modified in real-time to allow pitch to change independently of tempo among other audio effects." GRANULATION OF SOUND IN VIDEO GAMES
LEONARD J. PAUL1

Granular synthesis enables sampled sounds to be cut into very small grains and then played back in endless combinations. It can be used to change the playback speed or pitch independently of one another in real time. Leonard J. Paul suggests that the easiest use of granulation is to augment existing non-specific backgrounds, such as room tones, extending their length without noticeable loop points.
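A toy sketch of grain-based time-stretching: cut the source into grains and repeat each one, so the duration changes while the content of each grain (and hence its perceived pitch) does not. The grain size and stretch factor are invented for illustration, and real granulators overlap and window the grains rather than butt-joining them:

```python
def granulate(samples, grain_size=4, stretch=2):
    """Granulation sketch: repeat each small grain 'stretch' times,
    stretching duration without resampling (pitch within each grain
    is left unchanged)."""
    out = []
    for start in range(0, len(samples), grain_size):
        grain = samples[start:start + grain_size]
        out.extend(grain * stretch)
    return out
```

This is exactly the decoupling Paul describes: doubling the length of a room tone this way does not drop its pitch by an octave, as plain half-speed playback would.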


Procedural Audio


"This is also an example of procedural audio as Spore is processing all the incoming data in real time (especially in the creator mode) and making decisions about the sound the creature will create."




http://fora.tv/2006/06/26/Will_Wright_and_Brian_Eno


http://fora.tv/2006/06/26/Will_Wright_and_Brian_Eno#chapter_04

Tuesday 25 September 2012

LMU Post 1 - Hammer of Dawn


The 'Hammer of Dawn' is a weapon that stood out from the crowd when developers at Epic Games released 'Gears of War'. Not only was it innovative in visual design, it also came with a brutal sound that became iconic.


Image A

Players of the 'Gears of War' games soon understand that the 'Hammer of Dawn' is an 'Imulsion Powered Laser Designator', which beams down energy from a satellite cannon in high orbit. This short 'fan made' clip includes a fine selection of sounds that the 'Hammer of Dawn' creates.

Video A

Following the release of the first 'Gears of War' game (2006), Alice Liang explained:

“Almost all the sounds in Gears of War are organic and not synthesized. There was a pretty strict “No lasers” policy – one exception being the Hammer of Dawn weapon sound, which contains six layers of synthesized sounds (one of which is the processed sound from the motor of a pencil sharpener)."

Impressively, the creators designed this weapon to have audio feedback actions when the weapon is fired by a player. It has 'confirmation', 'punishment' and 'reward' sounds integrated into the weapon, which all add to the experience of using the 'Hammer of Dawn'. 

Firstly, 'confirmation' through sound is heard when the three beeps are completed. It is not the most convincing of sounds, but the three beeps enable the gamer to listen out for the cue while their eyes may be searching the screen for enemies.

I find the 'punishment' sound the most interesting, as the gamer does not need a visual aid to understand that the weapon is not being used correctly. When the weapon cannot lock onto a target, a sound is played. I think they may have taken inspiration from game-show sounds, as it is that 'wrong answer' buzzer so often associated with something being wrong. Not only does it have the characteristic sound of a 'wrong answer' buzzer, but the sound also suits the weapon. For example, a normal gun would include sounds of the gun jamming if the gamer misses a reload, as these relate to the realistic characteristics of the weapon. The 'Hammer of Dawn', on the other hand, is an 'alien' weapon, and we accept that it will not possess lifelike sounds such as jamming; a buzzer seems more appropriate.

(Skip to 1:29 in the video below for an example of the sound)



Video B


Finally, the weapon feeds back to the player through 'reward'. I see the reward as being created by the sound of the blast. Not only does every person in the game hear this sound, but the gamer controlling the weapon is now in control! With the blast in full motion, the gamer is able to move it to nearly any location on the map, killing multiple enemies in the process. The player controlling the weapon may feel an element of satisfaction now that it is finally unloading hell on the opponents.