One way game developers approach reverb in games is by applying a single preset algorithm to a subset of the sound mix. This has been extended with reverb regions that call different reverb presets depending on the area the player currently occupies, so the reverb changes at predetermined locations using predefined settings. What game developers are now trying to achieve is a sound system in the virtual world that replicates the physics of the real world. (Damian Kastbauer) http://www.gamasutra.com/view/feature/132645/the_next_big_steps_in_game_sound_.php?print=1
How can reverb be improved within games?
One solution for improving game reverb is for the game engine to calculate the reverb of a sound in real time. This could be achieved by analysing the surrounding geometry at the moment a sound is played, or through the use of convolution reverb.
Convolution reverb is a process for digitally simulating the reverberation of a physical or virtual space. It applies a mathematical convolution operation to the source signal, using a pre-recorded audio sample of the impulse response (IR) of the space being modeled.
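To make the idea concrete, here is a minimal sketch of convolution reverb in Python, assuming numpy and scipy are available. The WAV file names are hypothetical placeholders, and the code is an illustration rather than any engine's actual implementation.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Load a dry (unprocessed) mono source and the impulse response of a space.
# Both file names are hypothetical placeholders.
rate, dry = wavfile.read("dry_sound.wav")
ir_rate, ir = wavfile.read("impulse_response.wav")
assert rate == ir_rate, "source and IR must share a sample rate"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
ir /= np.max(np.abs(ir))          # normalise the IR

# The convolution itself: every sample of the dry signal excites the
# full reverberant decay captured in the IR.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))        # avoid clipping
wavfile.write("wet_sound.wav", rate, (wet * 32767).astype(np.int16))
```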
Simon Ashby, a founder and VP of strategy at AudioKinetic, believes that convolution reverb is certainly the most appropriate way of reproducing realistic environmental acoustics. He explains that one of the reasons developers may avoid this method is "the time and expertise to code such advanced DSP effects." Another reason he gives relates to the technology of our time: "convolution reverbs consume a lot of runtime memory and CPU resources." http://www.develop-online.net/features/1208/Optimising-Convolution-Reverb
One company attempting to make realistic environmental reverb more feasible is AudioKinetic, who have created a convolution reverb that adjusts memory and CPU usage based on available resources, while minimising the impact on reverb quality.
Simon Ashby describes two approaches to optimising runtime performance: "time-domain truncation and frequency-domain truncation." http://www.develop-online.net/features/1208/Optimising-Convolution-Reverb
Time-domain truncation is achieved by reducing the length of the IR. Ashby says, "A good approach to shorten the IR length is to determine the noise floor level of the scene where the IR will be used and then reduce the IR end time to the point where the reverb tailgate artifact is inaudible." http://www.develop-online.net/features/1208/Optimising-Convolution-Reverb
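A rough sketch of that idea in Python follows. The -60 dB noise floor is an assumed figure standing in for a measured scene noise floor, and the function is a simplification of the approach Ashby describes.

```python
import numpy as np

def truncate_ir_time(ir, noise_floor_db=-60.0):
    """Cut the IR tail where it falls below the scene's noise floor.
    noise_floor_db is an assumed figure; in practice it would be
    measured from the scene the IR will be used in."""
    envelope_db = 20.0 * np.log10(np.abs(ir) + 1e-12)
    audible = np.nonzero(envelope_db > noise_floor_db)[0]
    if audible.size == 0:
        return ir[:1]                  # nothing audible at all
    return ir[: audible[-1] + 1]       # keep up to the last audible sample
```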
Frequency-domain truncation is the removal of low-energy frequency content from the IR, which reduces the amount of data the convolution has to process.
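As a toy illustration of frequency-domain truncation (not Audiokinetic's actual algorithm), the sketch below zeroes the lowest-energy FFT bins of an IR while keeping the bins that carry most of its energy.

```python
import numpy as np

def truncate_ir_frequency(ir, keep_fraction=0.9):
    """Zero out the lowest-energy FFT bins of an IR, keeping the bins
    that carry `keep_fraction` of the total energy."""
    spectrum = np.fft.rfft(ir)
    energy = np.abs(spectrum) ** 2
    order = np.argsort(energy)[::-1]             # bins by descending energy
    cumulative = np.cumsum(energy[order]) / energy.sum()
    kept = order[: np.searchsorted(cumulative, keep_fraction) + 1]
    truncated = np.zeros_like(spectrum)
    truncated[kept] = spectrum[kept]             # keep only high-energy bins
    return np.fft.irfft(truncated, n=len(ir))
```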
One game at the frontier of innovation in environmental sound was Crackdown, which received a BAFTA for its audio implementation.
Crackdown was a huge step forward in creating reverb that considers the surroundings of a virtual world. The video below demonstrates the audio system used in Crackdown, called 'Audio Shader'. At 4:15 a section on reverb and reflections gives an insight into the system behind the audio: a variety of lines, shapes and words all contribute to the analysis of the surrounding geometry.
The innovation in sound did not end with Crackdown's complex reverb system, as Raymond Usher explains,
"We also hired an explosives expert to do controlled detonations for us. If you've seen the explosions in Crackdown, they're pretty massive. We took that as a challenge to make the biggest sounding explosions ever in a video game. By using a unique layering system, our recordings from the explosive session, coupled with the audio shader system... we definitely encourage you to turn it up." http://interviews.teamxbox.com/xbox/1885/The-Audio-of-Crackdown/p2/
Future Innovation
One innovative idea being discussed by both game developers and audio engineers is the ability to simulate the voice as it travels through the body, out of the mouth and into the air. Using highly complex analysis of the human body as well as the study of sound through virtual air, the outcome would be an intensely realistic sound affected by the character's body. It is strongly believed that sound should originate from distinct positions in 3D space, just as in reality. Realistic wave-tracing of audio should require much less computation than realistic ray-tracing of graphics. If wave-tracing were adopted as a means of implementing realistic audio in a game, it would produce the effects of perceived volume and position as well as frequency attenuation, reverberation and even the Doppler shift. http://idcmp.linuxstuff.org/2008/10/wave-tracing-ray-tracing-for-sound.html
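Two of those effects are simple enough to sketch directly. The toy Python below models inverse-square attenuation and the Doppler shift for a moving source; the values are illustrative, not taken from any shipped engine.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def perceived(source_freq, distance_m, radial_velocity_ms):
    """Toy model of two effects wave-traced audio would produce:
    inverse-square attenuation and the Doppler shift."""
    # Inverse-square law, relative to a 1 m reference distance.
    gain = 1.0 / max(distance_m, 1.0) ** 2
    # Doppler: an approaching source (negative radial velocity) raises pitch.
    freq = source_freq * SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity_ms)
    return gain, freq

# A 440 Hz siren 10 m away, approaching at 20 m/s.
print(perceived(440.0, 10.0, -20.0))
```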
What is Ray-Tracing?
Ray-tracing is a technique used on the computer graphics side of video games: it generates an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects.
What is Wave-Tracing?
Wave-tracing audio is a theory that mimics the idea of ray-tracing in graphics. As explained in the Regular Expressions blog, in regular ray-tracing, rays of light are traced backward from a pixel of the camera to an object and eventually to a light source. If that can be done with light, why can't it be done with sound?
This is an interesting theory, and one that might be achieved by tracing vibrations instead of rays. Instead of light sources within the game, the focus would be on the air and the air's friction.
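As a deliberately simplified stand-in for full wave-tracing, the sketch below computes the delay and distance attenuation of a direct path and one wall reflection in a toy 2D scene, using the image-source trick to find the bounced path length.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def path_contribution(length_m):
    """Delay and 1/r gain for one acoustic path. Purely illustrative."""
    return length_m / SPEED_OF_SOUND, 1.0 / max(length_m, 1.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Toy 2D scene: a source, a listener, and one reflective wall at y = 0.
source = (0.0, 3.0)
listener = (8.0, 2.0)

# Image-source trick: mirror the source across the wall, so the straight
# line to the mirrored source has the same length as the bounced path.
mirrored = (source[0], -source[1])
for name, d in [("direct", dist(source, listener)),
                ("wall bounce", dist(mirrored, listener))]:
    delay, gain = path_contribution(d)
    print(f"{name}: {delay * 1000:.1f} ms, gain {gain:.3f}")
```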
Prototype
Another, more recent, game that adopted the initiative of developing a reverb system that replicates real-life surroundings is Prototype. In Prototype, all the ambience tracks were sent through a procedural reverb system. Scott Morgan explains,
"Through a system of ray casting, the physical space of the listener was analyzed in real time, and the reverb parameters set to align with the size of the space that the listener was in."http://www.gamasutra.com/view/feature/132645/the_next_big_steps_in_game_sound_.php?print=1
An example of this real-time analysis in Prototype is a scenario where you enter a tunnel in Central Park. The system detects an enclosed space of a certain size and dynamically sets the reverb parameters. In real time, the sound of the park's birds and other ambient sounds are passed through the bigger reverb, giving the illusion that the sounds no longer arrive directly at the listener but are reflected first, in an attempt to replicate the real world. (Scott Morgan)
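A hypothetical sketch of the kind of system Morgan describes might look like this: cast rays around the listener, average the hit distances, and map the apparent size of the space onto reverb parameters. The `cast_ray` hook and the numeric mappings here are invented placeholders, not Prototype's actual code.

```python
import math

def estimate_reverb(cast_ray, listener_pos, num_rays=16, max_dist=100.0):
    """Cast rays around the listener and derive reverb parameters from
    the apparent size of the space. `cast_ray` is a hypothetical engine
    hook returning the distance to the nearest surface, or None if the
    ray escapes into open space."""
    hits = []
    for i in range(num_rays):
        angle = 2.0 * math.pi * i / num_rays
        direction = (math.cos(angle), math.sin(angle))
        d = cast_ray(listener_pos, direction)
        hits.append(d if d is not None else max_dist)
    radius = sum(hits) / len(hits)
    # Map apparent room radius to a decay time: bigger space, longer tail.
    # The 0.05 s/m factor is an invented placeholder, not a tuned value.
    return {"room_radius_m": radius,
            "decay_time_s": min(radius * 0.05, 8.0)}
```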
References and relevant links
http://designingsound.org/2010/02/charles-deenen-special-the-future-of-sound-design-in-video-games-part-1/
http://www.prosoundeffects.com/blog/2012/06/gaming-sound-effects-generative-audio
http://www.develop-online.net/features/1653/AUDIO-SPECIAL-The-generation-game
http://www.gamasutra.com/view/feature/130733/designing_a_nextgen_game_for_sound.php?print=1
http://www.develop-online.net/features/1685/In-Depth-Square-Enixs-Luminous-Studio-engine
http://www.gamasutra.com/view/feature/4257/the_next_big_steps_in_game_sound_.php
http://designingsound.org/2010/02/the-next-big-steps-in-game-sound-design/
http://en.wikipedia.org/wiki/Environmental_Audio_Extensions
http://interviews.teamxbox.com/xbox/1885/The-Audio-of-Crackdown/p2/
http://idcmp.linuxstuff.org/2008/10/wave-tracing-ray-tracing-for-sound.html
http://www.cs.princeton.edu/~funk/sig98.pdf
http://bjaberle.com/2011/02/the-future-of-game-audio/
http://www.gamasutra.com/view/feature/132645/the_next_big_steps_in_game_sound_.php?print=1