Friday 26 July 2013

MFP Update - Weekly Diary (26/07/13)

Weekly Diary

Week Beginning: 26/07/13
Activity: Preparation for participant testing
Key Findings:
This week has been very hands-on with the practical side of the project, as it was important to start getting some results that will lead towards answering the project question.

One of the issues was having three different systems for producing audio: two used MIDI (in different ways) and the other used audio. However, after creating the first two, I decided to use only audio to improve the gaming experience. This meant creating three very similar systems (all built on the same grid system), each relating audio to the x/y axes in a different way.

So I have now produced three versions of the prototype. These are as follows...

Version 1 - X = Pitch, Y = Duration
Version 2 - X = Rhythm, Y = Instrumentation Layers
Version 3 - X = Panning, Y = Pitch

To give the user familiarity across the prototypes, each version uses the same system for playing and looks very similar.
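For reference, the sketch below expresses each version's cell-to-audio mapping in Python. The actual prototypes are Max/MSP patches, so the specific pitches, rhythm names, durations and pan values here are illustrative assumptions rather than the patches' exact settings.

```python
# Illustrative sketch (not the Max/MSP patches): how each prototype version
# might map a 4x4 grid cell (x, y) to audio parameters. All values are assumptions.

PITCHES = ["C", "D", "E", "F"]          # four pitch steps across one axis
DURATIONS_MS = [100, 200, 400, 800]     # four note lengths across the other axis

def version1(x, y):
    """Version 1: X = pitch, Y = duration."""
    return {"pitch": PITCHES[x], "duration_ms": DURATIONS_MS[y]}

def version2(x, y):
    """Version 2: X = rhythm, Y = number of instrumentation layers."""
    rhythms = ["quarter", "eighth", "sixteenth", "triplet"]
    return {"rhythm": rhythms[x], "layers": y + 1}

def version3(x, y):
    """Version 3: X = panning, Y = pitch."""
    pan = -1.0 + x * (2.0 / 3.0)        # spread the four columns from hard left to hard right
    return {"pan": round(pan, 2), "pitch": PITCHES[y]}

if __name__ == "__main__":
    for version in (version1, version2, version3):
        print(version.__name__, version(2, 1))
```

Keeping the grid and the function signature identical across the three versions mirrors the aim of giving participants the same playing system in every version.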

Current Testing
The game is currently out with a participant who is testing the game mechanics and taking part in the test itself. This is an important process: I have spent a long time working on the game, so I may have become oblivious to problems or bugs. I hope to have the results by tonight/tomorrow.

What the participants get...
I have produced a .zip file containing the three versions of the prototype, a welcome note, and a time/score and questionnaire sheet. See image below.

What now?
Wait for the feedback from the first tester. If no problems or bugs are picked up, I will immediately send the game out to my list of participants. I will then aim to have received some data back from them for analysis before the upcoming meeting. If there are problems, they will be addressed immediately.

I am continuing my reading regarding spatial audio feedback, as well as investigating how other audio-only games do this.

I am also trying to think of more ways in which to use the x and y axes for audio purposes. Ideally I would like to receive feedback on these prototypes, analyse the average time it took to find the sound in each game, and then further develop the two that were quickest.






Monday 22 July 2013

MFP Update - Weekly Diary

Weekly Diary

Week Beginning: 15/07/13
Activity: What am I trying to find out?
Key Findings:
After a productive meeting I was finally able to focus on what the final project is trying to find out: effective ways of positioning and locating sound in a 2D space.

The practical work completed to date has provided the groundwork for the practical side of the project and highlighted the issue of positioning sound in 2D space in audio-only games, and that it would be useful to have an x/y system for working out spatial location.

Research title development: 'Audio Feedback for Spatial Information: A Game for the Blind'
This title clearly highlights what the research is and how it will be achieved, which is through an audio game for the blind that uses a variety of audio/x-y combinations.

Practical Project (Max/MSP)
Completed two different versions of the game, which use different audio/x-y combinations.

My initial thought was that I could use the same Max/MSP system for each game; however, due to the different approaches to sound, two completely different systems had to be created.

Version 1 uses a MIDI piano grid system

x = pitch
y = duration

Version 2 uses a system very similar to an interactive music system, which turned out to be very tricky to get right (See Image 2/3). 

However, I am happy with the results it produces and will be interested to see the timing and questionnaire information.

x = Rhythm
y = Instrumentation Layers
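As a minimal sketch of the layering idea (assuming the y axis cumulatively adds instrument layers while the x axis selects a rhythmic subdivision; the layer names, subdivisions and tempo are placeholders, not values from the actual patch):

```python
# Rough sketch of the version 2 idea, not the Max/MSP patch: the y position adds
# instrumentation layers cumulatively, while the x position picks the rhythmic
# subdivision the layers play at. Layer names, subdivisions and tempo are assumptions.

LAYERS = ["kick", "bass", "chords", "lead"]   # hypothetical layer names
SUBDIVISIONS = [1, 2, 4, 3]                   # pulses per beat: quarter, eighth, sixteenth, triplet

def active_layers(y):
    """Row 0 plays one layer; row 3 plays all four."""
    return LAYERS[: y + 1]

def pulse_interval_ms(x, bpm=120):
    """Time between note triggers for the chosen subdivision."""
    beat_ms = 60_000 / bpm
    return beat_ms / SUBDIVISIONS[x]

if __name__ == "__main__":
    print(active_layers(2), pulse_interval_ms(1))   # ['kick', 'bass', 'chords'] 250.0
```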

Participant gathering
I am beginning to have a more even ratio of visually impaired to able-sighted participants. Once the third version is created I will send out the three standalone applications with questionnaires. I will then make any necessary changes and improvements in an attempt to discover the optimal sound characteristics for positioning and locating sound in a 2D space.

Under the Hood



Presentation Mode

Friday 19 July 2013

MFP Update - Post-Presentation Meeting

This week's meeting provided an opportunity to discuss the development of the project. A working prototype of a gesture-controlled audio-only game had been created; however, it wasn't quite answering a question relating to immersion.

One of the key discussions of the meeting related to the difficulty of locating an object in a 2D space when no visuals are available, which is very common in audio-only games. So, combining all the practical work and research compiled to date, the project will focus on discovering effective ways of positioning audio (objects) in a 2D space, with further development towards 3D space. It will involve experiments with assigning variables to the x and y axes, which will be used to calculate where the sound is in the 2D space.

Initial thoughts

I have started by splitting the 2D space into a 4x4 grid system, with each individual cell being 80x60 pixels in size. This creates a 'whack a mole' or 'battleship' type game, where a sound is positioned, heard and removed, and the player is required to find this sound by moving their hand to search the 2D space. The player will try several versions of the same game, which will make use of different types of audio. The player will be timed, which aims to discover the most accurate method of positioning and locating a sound in a 2D space.
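A minimal sketch of this search loop, assuming a 320x240 webcam frame divided into the 4x4 grid of 80x60 cells described above (the real game is built in Max/MSP; the stream of hand positions here is just a placeholder list):

```python
import random
import time

# Sketch of the 'whack a mole' trial: a sound is hidden in one grid cell and the
# player's tracked hand position is quantised to the same grid until it matches.

CELL_W, CELL_H = 80, 60        # each grid cell in pixels
GRID_COLS, GRID_ROWS = 4, 4    # 4x4 grid over a 320x240 frame

def cell_for(px, py):
    """Quantise a tracked hand position (pixels) to a grid cell."""
    col = min(int(px // CELL_W), GRID_COLS - 1)
    row = min(int(py // CELL_H), GRID_ROWS - 1)
    return col, row

def run_trial(hand_positions):
    """Time how long the player takes to land on the hidden cell."""
    target = (random.randrange(GRID_COLS), random.randrange(GRID_ROWS))
    start = time.time()
    for px, py in hand_positions:        # stream of tracked positions
        if cell_for(px, py) == target:
            return time.time() - start   # seconds taken to find the sound
    return None                          # target never found in this stream
```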

I am currently creating audio systems in Max/MSP that can be "figured out". For example, x-axis = pitch (C, D, E, F) and y-axis = duration/length (1, 200, 400 & 800). Testing the three systems will provide answers regarding players' ability to find the sound and will prompt further changes to the system to acquire the ultimate method of locating sounds in 2D space. I do have a couple of methods that could be regarded as unusual; however, until proven unsuccessful they will be pursued as viable options.

One other thing to note: I have stepped up the search for participants. The list of possible participants is becoming a more even mix of visually impaired and able-sighted people.

Monday 15 July 2013

MFP UPDATE - Gesture and Interactive Soundscape

As I am investigating the most effective relationship between gesture and audio and their ability to create an immersive presence within audio-only games, I am building different types of systems in Max/MSP to test these possibilities. This post explains the most recent patch in development: a gesture-audio system that allows the player to use gestures to interact with sounds in the virtual world. It is still in its early stages of development, but below are some screenshots of the system and the mechanics being implemented to make it work.

Firstly, it is important to note that the gesture capture method is the same in each patch to avoid a bias in results. I am utilising a common webcam and colour tracking.
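For anyone curious, the idea of webcam colour tracking can be sketched with OpenCV in Python; the actual patches use Jitter objects inside Max/MSP, and the HSV range below (roughly 'red') is only an assumption:

```python
import cv2
import numpy as np

# Rough OpenCV equivalent of the colour-tracking idea: threshold the webcam image
# to the chosen colour and report the centroid of the matching pixels.

cap = cv2.VideoCapture(0)                                         # common webcam
lower, upper = np.array([0, 120, 70]), np.array([10, 255, 255])   # assumed HSV range for red

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)                  # keep only the tracked colour
    m = cv2.moments(mask)
    if m["m00"] > 0:                                       # centroid of the coloured region
        x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"tracked position: {x:.0f}, {y:.0f}")
    cv2.imshow("tracking mask", mask)
    if cv2.waitKey(1) == 27:                               # Esc to quit
        break
cap.release()
```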

As the game requires the player to return a 'pitched'* ball as it approaches their static body position, I felt it would be best to start creating impact sounds for the surroundings, which they could reach out to with their hands/bat and interact with sonically. With the player hitting a 'pitched' ball back and forth with a virtual bat, I started to think of the surroundings of a baseball ground, batting ground, cricket ground etc. With the user currently confined to a static body position, this reminded me of batting grounds, where there are usually chain-link fences surrounding the player to prevent the ball from flying off into the distance. So, with this in mind, I started to create a sonic boundary around the player, which can be interacted with when the player gestures towards the boundaries of the webcam image (an x value of less than 20, to be precise). Currently I have only one side of the x-axis implemented, but the idea works just fine, so further development should establish a full sonic boundary around the player.

Below is an image of how the interactive sonic boundary is being implemented in Max/MSP. I am making use of a non-repetitive sound design technique, which can be seen in the URM system at the bottom right of the image. This creates a non-repeating random number that affects the playback speed of the sfplay~ (sound file). The sound file chosen is a bat hitting a chain-link fence, and when triggered it will play a different-sounding version every time. More audio files will be added to improve the quality of the audio selection as development continues.
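The logic of the non-repeating random speed can be summarised in a few lines (a simplification of the patch's URM behaviour, with placeholder playback rates):

```python
import random

# Sketch of the non-repetitive playback idea: each fence hit picks a random playback
# speed, re-rolling if it matches the previous one, so no two consecutive hits sound identical.

SPEEDS = [0.85, 0.9, 0.95, 1.0, 1.05, 1.1, 1.15]   # illustrative rate choices
_last = None

def next_speed():
    global _last
    choice = random.choice(SPEEDS)
    while choice == _last:            # never repeat the previous value
        choice = random.choice(SPEEDS)
    _last = choice
    return choice

if __name__ == "__main__":
    print([next_speed() for _ in range(5)])   # five hits, no two consecutive rates equal
```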


The sfplay~ (audio file) is triggered when the tracked gesture ventures below 20 towards the left. However, in the first version of this system, if the player went too far left and off the screen (to 0) and then came back on screen (rising from 0 but still below 20), the sound file was triggered again. This broke the illusion of a virtual sonic environment, as their hand was triggering the sound on the way back. In an attempt to solve this, I implemented a system which acknowledges the direction of the changing numbers (left and right), which can be seen in the middle of the image (the box that says 'left'). When the direction changes to left, it produces a 0, which in turn produces a bang. If this is matched by a tracked gesture value below 20, it triggers the sound, but it won't play again until the data has completely changed, i.e. the hand has gone back into the virtual space.
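In outline, the retrigger guard works roughly like the sketch below; the threshold of 20 comes from the patch, while the direction and re-arming logic is a simplified reconstruction of the idea rather than the exact Max objects:

```python
# Sketch of the retrigger fix: the boundary sound only fires when the hand is moving
# left *into* the boundary zone, and it re-arms only once the hand has returned to the
# virtual space (x >= 20). Returning from off-screen therefore does not retrigger it.

BOUNDARY_X = 20

def make_boundary_trigger():
    prev_x = None
    armed = True
    def update(x):
        nonlocal prev_x, armed
        fire = False
        moving_left = prev_x is not None and x < prev_x
        if x >= BOUNDARY_X:
            armed = True              # back in the virtual space: allow the next hit
        elif armed and moving_left:
            fire = True               # crossed into the zone travelling left
            armed = False
        prev_x = x
        return fire
    return update

if __name__ == "__main__":
    trig = make_boundary_trigger()
    path = [60, 40, 15, 5, 0, 5, 15, 40, 30, 10]   # leave the screen, come back, hit again
    print([(x, trig(x)) for x in path])            # fires at 15 and 10 only
```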


* The pitched ball refers to the current method of using sound to convey a ball travelling closer to and further away from the player. It uses a C major scale ascending and descending to reflect the ball's distance from the player. It works; however, attempts to find an alternative are still under way.

MFP - Daily Diary

Daily Diary

Date: 15/07/13
Activity: Built a gesture system that provides constant audio feedback.
Key Findings:
This will be one of many patches used to test relationships between gesture and audio, in an attempt to establish which is most effective at increasing player presence in an audio-game environment.
Taking inspiration from the lightsaber sound in Star Wars and from previous Max/MSP lightsaber creations, I implemented the idea in my own gesture-controlled version. I combined a constant sound, which changes in respect to its position in the virtual space, with several one-shot sounds to provide feedback on 'significant gestural movements'.
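As a sketch of the idea (not the actual patch), the constant sound can follow the speed of the tracked gesture while a one-shot fires on significant movements; the thresholds and mapping ranges below are assumptions:

```python
import math

# Sketch of lightsaber-style feedback: a constant hum whose volume and filter cutoff
# follow how fast the tracked hand is moving, plus a one-shot 'swing' sound when the
# movement is fast enough to count as a significant gesture.

SWING_THRESHOLD = 200.0              # pixels per second treated as a 'significant' gesture

def hum_parameters(speed):
    """Map gesture speed to hum volume (0-1) and a filter cutoff in Hz."""
    volume = min(speed / 400.0, 1.0)
    cutoff = 200.0 + volume * 3000.0
    return volume, cutoff

def process(prev_pos, pos, dt):
    """One tracking frame: return hum settings and whether to fire the one-shot."""
    dx, dy = pos[0] - prev_pos[0], pos[1] - prev_pos[1]
    speed = math.hypot(dx, dy) / dt
    volume, cutoff = hum_parameters(speed)
    return volume, cutoff, speed > SWING_THRESHOLD

if __name__ == "__main__":
    print(process((100, 100), (160, 120), dt=0.1))   # a fast move triggers the swing
```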
Date: 15/07/13
Activity: Developing ideas relating to the project outcome; thinking of ways I can fuse audio to gestures and how many gestures need to be tested.
Key Findings:
I currently have one gesture (a slap), which was developed for the audio-game prototype. However, to test the relationships between the different types of audio and gesture, I feel it will be beneficial to add more gestures to the audio-game. One possible gesture I thought of is a punch, which would require depth analysis of the image. However, webcams do not provide a z-axis (depth), which could be problematic when tracking a punch. I have spent time trying to work around this, and one method I am toying with is splitting the webcam image into an 8x6 grid: if the tracked gesture occupies only a small number of boxes it is far away, and the more boxes the gesture fills, the closer it is. So far the theory isn't producing the intended results, as the patch only acknowledges individual boxes rather than multiple boxes. Further development and research required.
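Once multiple boxes are acknowledged, the intended behaviour would look something like the sketch below (the 8x6 grid comes from the entry above; the frame size and the nearness score are assumptions):

```python
# Sketch of the grid-based depth estimate: split the webcam frame into an 8x6 grid
# and count how many cells contain tracked pixels. A hand close to the camera covers
# many cells; a distant hand covers only a few. The key difference from the first
# attempt is accumulating *all* occupied cells rather than a single box.

GRID_COLS, GRID_ROWS = 8, 6
FRAME_W, FRAME_H = 320, 240
CELL_W, CELL_H = FRAME_W // GRID_COLS, FRAME_H // GRID_ROWS   # 40 x 40 pixels

def coverage(tracked_pixels):
    """Number of grid cells containing at least one tracked pixel."""
    cells = {(int(px // CELL_W), int(py // CELL_H)) for px, py in tracked_pixels}
    return len(cells)

def nearness(tracked_pixels):
    """Crude depth proxy: 0 (far or not visible) up to 1 (filling the frame)."""
    return coverage(tracked_pixels) / (GRID_COLS * GRID_ROWS)

if __name__ == "__main__":
    far = [(160, 120), (165, 122)]                                          # small blob, few cells
    near = [(x, y) for x in range(80, 240, 10) for y in range(60, 180, 10)]  # large blob
    print(round(nearness(far), 3), round(nearness(near), 3))
```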
Date: 15/07/13
Activity: Further reading - 'Gesture, Sound and Place' and 'A Max/MSP Mapping Toolbox'.
Key Findings:
'Gesture, Sound and Place' relates somewhat to my research, as the workshop provides the participant with an opportunity to change audio through gestural expression.

'A Max/MSP Mapping Toolbox' discusses the possibilities of mapping gestures via a series of equations relating to a matrix or matrices.

Date: 15/07/13
Activity: Audio/game-related forum research - engaging in discussions.
Key Findings: Conducted a search to find issues surrounding immersion in audio-games, the lack of physical engagement, and further knowledge of audio-game releases.

I will be aiming to produce one of these daily diary tables every day, whilst posting backlogged entries relating to the development of my work.

Wednesday 10 July 2013

MFP Update

Today marked the work-in-progress presentation, which provided the opportunity to explain my topic of study, my research, and the development of the practical project.

Presentation


The following is a list of the presentation's topic slides and a brief summary of what was discussed.



  • Title - Can the fusing of sound to gestures increase immersion in audio-games
  • Project Outline - What the project will consist of = Practical element and data analysis of user testing
  • Rationale - Lack of immersive gameplay currently available in audio games, previous research, research to suggest gestures are a suitable controller for improving immersion
  • Audio-Games - A brief discussion of audio-games and the history of their development - Commercial audience and game developers
  • Immersion - Discussion of the theories of immersion and how they all relate - Main focus is on GameFlow
  • Testing Immersion - Using tried and tested methods - Questionnaire - Won't be using eye tracking due to lack of need for visual engrossment
  • Practical element - Variation of pong, built in max/msp/jitter, Mastery/Sensory based game
  • Game prototype - Screen shots of prototype 1 and prototype 2 - Progression from physics-based engine and towards audio buffers
  • Current state of project - Showed a screenshot of audio buffer system with explanation of how it works
  • Challenges and Solutions - Issues regarding gesture capture and target audience, conveying gesture without visuals
  • Bibliography
Thoughts on Presentation

Overall I think the presentation went well. Some of the issues highlighted during the questions/feedback part of the presentation suggested working on aspects of the game which are currently in development. For example, the gameplay audio is currently limited to two sounds; however, I did explain that this is only to test that the gameplay idea works, and a larger selection of sounds will be added almost immediately.

Sounds to be included and tested (so far)


  • The idea of a lightsaber-influenced sound to reflect the movement of the player's gestures. By lightsaber I mean a constant sound that increases in sonic interest and volume when a significant gesture is made.
  • More audio feedback - For example, if the player's gesture leaves the gesture capture zone, a subtle error sound plays (a game-show buzzer springs to mind), which aims to suggest that the player has left the area of gameplay. This is inspired by the idea of leaving the area of gameplay in video games such as Call of Duty: Modern Warfare 2, which introduced the sound of a Geiger counter reflecting levels of radiation when leaving the confines of a multiplayer map.
  • One idea I mentioned that could be implemented is music. This would certainly be suitable in the menu screen, which currently only consists of a female voice counting down from 5 to 0. If music were to play and then reflect the counting down, it could improve the sonic experience for the player.
  • I have also been thinking about creating a non-repetitive soundscape for the gameplay, which would hopefully not distract the player from the game but enhance the sonic experience. This could be achieved by exploiting the stereo field, with one-shots spread across the left-to-right axis (a rough sketch of this idea follows the list).
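A rough sketch of how such a soundscape scheduler might work, with placeholder file names and timing ranges (none of these are assets or values from the project):

```python
import random

# Idea sketch: ambience one-shots are scheduled at irregular intervals and panned to
# random points across the left-right axis, so the background bed never loops exactly.

ONE_SHOTS = ["ambience_a.wav", "ambience_b.wav", "ambience_c.wav"]   # placeholder files

def schedule(duration_s=30.0):
    """Return a list of (time_s, file, pan) events covering roughly duration_s seconds."""
    events, t = [], 0.0
    while t < duration_s:
        t += random.uniform(2.0, 6.0)          # irregular spacing between one-shots
        pan = random.uniform(-1.0, 1.0)        # -1 = hard left, +1 = hard right
        events.append((round(t, 2), random.choice(ONE_SHOTS), round(pan, 2)))
    return events

if __name__ == "__main__":
    for event in schedule(15.0):
        print(event)
```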


Prototype demonstration

The work-in-progress presentation allowed me to demonstrate the second prototype of the audio-game (currently under the name 'Stab in the Dark'... pretty fitting considering the nature of the game). However, the demonstration highlighted issues I knew were there and had told the audience to expect. As it is still in development, the game has some bugs, which relate to the gesture capture method always looking for a gesture even when a colour (for colour tracking) has not been selected. On loading, the game searches for black, which I believe I can change to red, the colour I have had most success with when testing. That said, one of the advantages of the prototype is that being able to click on the webcam image and select what colour you want to track allows for changes and for the individuality of each user.

Another issue which sometimes featured in my own testing of the prototype was gestures not being acknowledged when the player makes one. After some research into the patch, one theory I have come up with is the poor lighting of certain scenarios. I have found that my tracking method works best in rooms with a good level of natural light and the ceiling lights on. However, in the room for the presentation, the lights were off to allow the projector to be viewed clearly. So this is something I want to work on further so I can ensure quality tracking of gestures. As Karen Collins stated in Playing with Sound, any break between sound and gesture can result in frustration and confusion, consequently breaking immersion. The gesture capture has improved considerably since the first working prototype, which used the cv.jit.track method. That was highly unreliable when making quick gestures (which is pretty common in my game!). I am currently using the colour tracking functions, and I have to say the accuracy is usually great (in the correct light...).

More to follow...

Monday 1 July 2013

MFP Diary - Gesture Recognition in Audio-Only Games

The development and testing of my prototype gesture-controlled audio-game produce issues daily. These problems relate to the selected target audience, which is primarily visually impaired users.

My discussions with visually impaired and able-sighted members of the audiogames community highlighted the fact that blind people often don't have a webcam, and those who do have immense difficulty getting the correct lighting for data capture. As an able-sighted person it is easy to overlook the difficulty visually impaired users have in setting up a webcam to face them whilst creating the correct light and making sure the camera's focus is suitable. So, to counter this issue, the project will have to consider all possible options. Currently these are:


  • Abandon camera based interfaces
  • Acknowledge visually impaired users often have carers
  • Wait for the release of Leap Motion
  • Focus on able sighted
  • Ensure that visually impaired users have the ability to setup a game in order to achieve maximum accuracy

Gesture-based 'video' games have the ability to visually demonstrate where a gesture is needed to start, pause, select etc. an aspect of the game. In audio-only games this ability is not available, so again, questions need to be asked regarding how to control these aspects in an audio game.

A series of possible ideas is currently being developed and will be tested to select a suitable method. One is to take a similar approach to audio-based games that use minimal visuals to enhance the audio-game experience. For example, Papa Sangre (2010, 2013) allows the user to tap on the visuals on the screen to affect what they hear through the speakers/headphones. A more likely scenario could be to adopt the gesture-controlled system used in Wonderbook: Book of Spells (2012), which has an area on the screen over which, when the player gestures (using PS Move) and clicks, the game will start, pause, etc.

An attempt has been made to combine the method adopted by Wonderbook: Book of Spells (2012) with the system used in PS: Eye Toy (2003) and MS Kinect: Adventures (2010). These games use a system which acknowledges a player's gesture towards an area on the screen and times how long the gesture is maintained in that area. For example, after 5 seconds, the game will start.

One of the issues with using gestures in video games is the unpredictability of user gestures (Collins 2013). In an attempt to combine the gesture control methods above and minimise the issues regarding gesture unpredictability, the system will continuously acknowledge the gesture data, but the player can only control one function at a time, thus avoiding any accidental restarts, pauses, etc.
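A minimal sketch of how the dwell-timer selection and the 'one function at a time' lock could fit together; the hotspot positions, the five-second dwell and the class structure are illustrative assumptions rather than the actual implementation:

```python
import time

# Sketch: hold a gesture over a hotspot for a few seconds to trigger it, and lock out
# every other function while one is being executed, so stray gestures cannot pause or
# restart the game by accident.

DWELL_SECONDS = 5.0
HOTSPOTS = {"start": (0, 0, 80, 60), "pause": (240, 0, 320, 60)}   # x1, y1, x2, y2 regions

class GestureMenu:
    def __init__(self):
        self.current = None        # hotspot the hand is currently over
        self.entered_at = None
        self.locked = False        # True while a function is being carried out

    def update(self, x, y, now=None):
        """Feed in each tracked position; returns a function name when one is triggered."""
        now = time.time() if now is None else now
        if self.locked:
            return None            # only one function can be controlled at a time
        over = next((name for name, (x1, y1, x2, y2) in HOTSPOTS.items()
                     if x1 <= x < x2 and y1 <= y < y2), None)
        if over != self.current:   # entered or left a hotspot: restart the dwell timer
            self.current, self.entered_at = over, now
            return None
        if over and now - self.entered_at >= DWELL_SECONDS:
            self.locked = True     # caller unlocks once the function has completed
            return over            # e.g. "start"
        return None
```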

More to follow...