Wednesday 10 July 2013

MFP Update

Today marked the work-in-progress presentation, which provided an opportunity to explain my topic of study, the research behind it, and the development of the practical project.

Presentation


The following is a list of the presentation slides with a brief summary of what was discussed on each.



  • Title - Can the fusing of sound to gestures increase immersion in audio-games?
  • Project Outline - What the project will consist of: a practical element and data analysis of user testing
  • Rationale - The lack of immersive gameplay currently available in audio-games, previous research, and research suggesting gestures are a suitable controller for improving immersion
  • Audio-Games - A brief discussion of audio-games and the history of their development - commercial audience and game developers
  • Immersion - Discussion of the theories of immersion and how they all relate - the main focus is GameFlow
  • Testing Immersion - Using tried and tested methods (questionnaires) - eye tracking won't be used, as there is no visual engrossment to measure
  • Practical element - A variation of Pong, built in Max/MSP/Jitter - a mastery/sensory-based game
  • Game prototype - Screenshots of prototype 1 and prototype 2 - the progression away from a physics-based engine and towards audio buffers
  • Current state of project - A screenshot of the audio buffer system with an explanation of how it works
  • Challenges and Solutions - Issues regarding gesture capture and the target audience, and conveying gestures without visuals
  • Bibliography
Thoughts on Presentation

Overall I think the presentation went well. Some of the issues highlighted during the questions/feedback part of the presentation suggested working on aspects of the game which are currently in development. For example, the gameplay audio is currently limited to two sounds. However, I did explain that this is only to test that the gameplay idea works, and a larger selection of sounds will be added almost immediately.

Sounds to be included and tested (so far)


  • A lightsaber-influenced sound to reflect the movement of the player's gestures. By lightsaber I mean a constant sound that increases in sonic interest and volume when a significant gesture is made (a rough sketch of this mapping follows the list below).
  • More audio feedback - One example: if the player's gesture leaves the gesture capture zone, a subtle error sound plays (a game-show buzzer springs to mind), which aims to suggest that the player has left the area of gameplay. This is inspired by video games such as Call of Duty: Modern Warfare 2, which introduced the sound of a Geiger counter reflecting levels of radiation when a player leaves the confines of a multiplayer map.
  • One idea I mentioned that could be implemented is music. This would certainly be suitable in the menu screen, which currently consists only of a female voice counting down from 5 to 0. If music were to play and reflect this countdown, it could improve the sonic experience for the player.
  • I have also been thinking about creating a non-repetitive soundscape for the gameplay, which would hopefully not distract the player from the game but enhance the sonic experience. This could be achieved by exploiting the stereo field, with the addition of one-shots spread across the left-to-right axis.
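Since the actual implementation will live in the Max/MSP/Jitter patch, here is only a minimal Python sketch of the lightsaber idea: gesture speed drives the gain and brightness of the constant layer, and leaving the capture zone flags the buzzer. All function names, ranges and thresholds are placeholder assumptions for illustration, not values from the patch.

```python
# Sketch only: map normalised gesture speed to the "lightsaber" layer's volume
# and brightness, and flag when the tracked point leaves the capture zone.
# Ranges and thresholds are assumptions, not measured values from the game.

def lightsaber_params(speed, max_speed=1.0):
    """Map gesture speed (0..max_speed) to a gain in dB and a filter cutoff in Hz."""
    s = max(0.0, min(speed / max_speed, 1.0))   # clamp to 0..1
    gain_db = -24.0 + 24.0 * s                  # quiet idle hum, full level on a big swing
    cutoff_hz = 200.0 + s * 4800.0              # brighter tone on faster swings
    return gain_db, cutoff_hz

def out_of_zone(x, y, zone=(0.0, 0.0, 1.0, 1.0)):
    """True when the tracked point leaves the gesture capture zone -> play the buzzer."""
    left, top, right, bottom = zone
    return not (left <= x <= right and top <= y <= bottom)

if __name__ == "__main__":
    for speed in (0.05, 0.4, 0.9):
        print(speed, lightsaber_params(speed))
    print("buzzer:", out_of_zone(1.2, 0.5))
```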


Prototype demonstration

The work-in-progress presentation allowed me to demonstrate the second prototype of the audio-game (currently under the name 'Stab in the Dark'... pretty fitting considering the nature of the game). However, the demonstration highlighted issues I knew were there and had told the audience to expect. As it is still in development, the game has some bugs, which relate to the gesture capture method always looking for a gesture even when a colour (for colour tracking) has not been selected. On loading, the game searches for black, which I believe I can change to red, the colour I have had most success with when testing. That is one of the advantages of the prototype: being able to click on the webcam image and select which colour you want to track allows for changes and individuality for each user.
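The patch itself is not text-based, but the click-to-select colour tracking idea can be sketched outside of Max. Below is a minimal Python/OpenCV version, purely for illustration: the window, tolerances and colour ranges are my own assumptions, not taken from the Jitter patch.

```python
# Sketch of the colour-selection idea: click the webcam image to choose the
# colour to track (red has worked best for me), then each frame the centroid
# of matching pixels is marked. Tolerances are guesses for illustration.
import cv2
import numpy as np

target_hsv = None  # no colour selected yet - nothing is tracked until a click

def pick_colour(event, x, y, flags, param):
    global target_hsv
    if event == cv2.EVENT_LBUTTONDOWN and param["frame"] is not None:
        hsv = cv2.cvtColor(param["frame"], cv2.COLOR_BGR2HSV)
        target_hsv = hsv[y, x]          # colour under the mouse becomes the target

state = {"frame": None}
cam = cv2.VideoCapture(0)
cv2.namedWindow("tracker")
cv2.setMouseCallback("tracker", pick_colour, state)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    state["frame"] = frame
    if target_hsv is not None:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # simple +/- tolerance around the picked colour (ignores red hue wrap-around)
        lower = np.clip(target_hsv.astype(int) - [10, 60, 60], 0, 255).astype(np.uint8)
        upper = np.clip(target_hsv.astype(int) + [10, 60, 60], 0, 255).astype(np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        m = cv2.moments(mask)
        if m["m00"] > 0:                # enough matching pixels to trust the track
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            cv2.circle(frame, (int(cx), int(cy)), 8, (0, 255, 0), 2)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == 27:     # Esc to quit
        break

cam.release()
cv2.destroyAllWindows()
```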

Another issue which sometimes featured in my own testing of the prototype was gestures not being acknowledged when the player makes one. After some research into the patch, one theory I have come up with is that poor lighting in certain scenarios is to blame. I have found that my tracking method works best in rooms with a good level of natural light and the ceiling lights on. However, in the room for the presentation the lights were off so the projector could be viewed clearly. This is something I want to work on further so I can ensure quality tracking of gestures. As Karen Collins stated in Playing with Sound, any break between sound and gesture can result in frustration and confusion... consequently breaking immersion. The gesture capture has improved considerably since the first working prototype, which used the cv.jit.track method. That was highly unreliable when making quick gestures (which are pretty common in my game!). I am currently using the colour tracking functions and I have to say the accuracy is usually great (in the correct light...).
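One way I could tackle the lighting problem is to check the scene before gameplay starts and warn the player if tracking is unlikely to be reliable. A rough sketch of that check, again in Python/OpenCV rather than the Max patch, is below; the brightness and pixel-count thresholds are guesses, not measured values.

```python
# Sketch of a pre-game lighting check: measure overall frame brightness and the
# number of pixels matching the tracked colour, and warn if either is too low.
# Thresholds are placeholder assumptions for illustration.
import cv2
import numpy as np

def lighting_ok(frame_bgr, colour_mask, min_brightness=60, min_mask_pixels=150):
    """Return True when the scene looks bright enough for reliable colour tracking."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(np.mean(grey))         # 0 (black) .. 255 (white)
    matching = int(np.count_nonzero(colour_mask))  # pixels close to the target colour
    return mean_brightness >= min_brightness and matching >= min_mask_pixels

if __name__ == "__main__":
    dark = np.zeros((240, 320, 3), dtype=np.uint8)   # simulated unlit room
    empty_mask = np.zeros((240, 320), dtype=np.uint8)
    print(lighting_ok(dark, empty_mask))             # False -> warn the player
```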

More to follow...
