Tuesday 12 November 2013

Careers Talk

I have been given the opportunity to talk to a class of A-Level music technology students about the career possibilities associated with the subject. I feel honoured to be invited back to my old college, where I can engage with students and help them understand how their education can open up an exciting palette of job opportunities.

The reason I find this opportunity so exciting and important is predominantly because I wish someone had explained the career possibilities to me at that stage of my development. I spent the majority of my education naively assuming that music technology could only lead to a career in the "music business", and had no idea of the range of careers open to someone with an understanding of music technology.


It is important to note how highly I regard the study of music technology, as it combines the creativity of music with the technical aspects often associated with engineering or science. I have found that those studying music technology often take solace in hearing positive stories about the subject. While I studied music technology there were often comments that "music technology is not a real subject" or "not academic", which if anything fuelled my ambition to succeed.

My first concern when asked to give a careers talk was whether I was the right person for the job. It seemed slightly hypocritical to discuss possible career choices when I am not currently employed in the career I am talking about. However, after some deliberation I realised I am more than qualified to advise young minds on music technology related careers, drawing on a range of experiences spanning freelance employment and an extensive education in music technology.

An exciting aspect of this careers talk will be the reunion with the college teacher who nurtured my music and music technology development. This will include observing the once dreaded 'lunch time concert', a weekly solo performance required of every music student. As nerve-racking as this once was for me, it pushed me to become the guitarist I am now, so to be on the viewing side of the concert will be an intriguing experience.

Monday 11 November 2013

Minecraft Music Rant

I have been playing Minecraft on and off for some time now (since the beta days), and there are some audio related issues I have with the game. I must first point out how much I love the music written by C418; I bought the Volume Alpha album off Bandcamp immediately after my first experience playing the game [1]. However, many updates have come and gone, yet the music has stayed the same. It is getting to the point now where I feel new music should be added to, or replace, the soundtrack that has been in place for so long.

Another area that I feel needs addressing is the way the music is implemented by the Mojang development team. With an increasing awareness of interactivity in games, music can be used to convey information as well as being non-repetitive. However, the music in Minecraft is too linear for my liking. For example, you can be exploring the deep realms of the Minecraft world and have no clue as to whether it is night or day above. Couldn't the music be arranged to provide this information? Possibly a change in tonality from major (day) to minor (night)?
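To make the idea concrete, here is an entirely hypothetical sketch (Minecraft's music is not actually driven this way): melody pitches are drawn from a major scale during the in-game day and a natural minor scale at night, so a player deep underground could infer surface time from the tonality alone.

```python
# Hypothetical day/night tonality switch. The day length and tick split
# are assumptions for illustration, not Minecraft's real music system.
DAY_SCALE = [0, 2, 4, 5, 7, 9, 11]    # major intervals (semitones from tonic)
NIGHT_SCALE = [0, 2, 3, 5, 7, 8, 10]  # natural minor intervals

def scale_for_time(tick, day_length=24000):
    """First half of the day counter is daytime, the rest night."""
    return DAY_SCALE if (tick % day_length) < day_length // 2 else NIGHT_SCALE

def note(tick, degree, tonic=60):
    """Pick a MIDI note for a scale degree, in the mode for this tick."""
    scale = scale_for_time(tick)
    return tonic + scale[degree % len(scale)]
```

The same switch could just as easily crossfade between two pre-composed stems rather than regenerate pitches.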




This month has seen the release of a new Minecraft audio collection, Volume Beta, created and released by C418 [2]. As beautiful and well constructed as this music is, I am eager to hear how it will be integrated into the game itself.

[1] http://c418.bandcamp.com/album/minecraft-volume-alpha

[2] http://c418.bandcamp.com/album/minecraft-volume-beta

Tuesday 5 November 2013

Manchester Game Jam November

I attended the Manchester Game Jam, which took place in the heart of the Northern Quarter of one of my favourite UK cities. It was a pretty cool event; I got to meet some professional game developers as well as a bunch of beginners who were eager to discuss ideas and techniques.

The event itself is an interesting concept: a group of people enthusiastic about games, making games under the time constraint of a 9 to 5 day. The chilly, open-plan room overlooks a busy street of interesting characters, which provides a refreshing occasional distraction from the computer monitor. At one point a presumably homeless gentleman tried to enter through the locked door to inspect what was going on inside; after being thwarted by the lock, he began to dance and mumble words towards the table at which I was sat.

There were about 15-20 people involved in the game jam, a variety of skilled individuals ranging from college students to professional game developers. In terms of "audio guys" there was myself and one other, though we worked apart and on completely different projects. A chat with this other aspiring sound designer proved very insightful with regards to job application techniques, as we are both recent graduates looking for work. Another individual was working on a game that used audio as a key component, and a brief discussion of it raised an issue regarding audio spatial positioning. He mentioned that in a quick test of locating a sound positioned within the stereo field, accuracy diminished greatly when the sound sat between left and centre, or between right and centre. This topic came up briefly in the research leading towards my MSc final project, which contributed to my reading and prototyping of auditory dimensions.
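For reference, a standard equal-power panning law (a common sketch, not his implementation) keeps total power constant across the stereo field; even so, listeners tend to localise the intermediate mid-left and mid-right positions less accurately than hard left, hard right or centre.

```python
import math

def equal_power_pan(x):
    """x in [-1, 1]: -1 is hard left, 0 centre, +1 hard right.
    Returns (left_gain, right_gain); left^2 + right^2 is always 1,
    so perceived loudness stays constant as the source moves."""
    theta = (x + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)
```

At centre both channels carry cos(pi/4) ≈ 0.707, which is the -3 dB pan law most DAWs default to.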

The group of college students, who had been sent by their game development course teacher, were an interesting addition to the event. They were zipping around the room engaging with people and peering at what was happening on the screens. I spoke to several of these students and explained the concept of audio-only games and the approach I was taking to making such a game.

As soon as people (both professional game devs and students) looked at my screen it became clear that Max/MSP is a truly unknown game-making tool. I once wrote about the pros and cons of using Max/MSP as a game development tool, and the response was that it is overlooked by the majority of developers. This raises questions regarding the practicality of using Max/MSP to make games, but I still stand firm in the belief that there are considerable benefits to building an indie audio-only game with the software. Nonetheless, the enthusiasm shown towards Max/MSP and the concept of audio-only games at the Manchester Game Jam was greatly appreciated.

Several themes to build the games around were suggested prior to the event: FIRE, COWS, MUTATION, SUBMARINES and TRADING. Quite a fun bunch of themes ey!? Having recently worked on a submarine audio-only game, I thought I would take another stab at that theme, but with subtle differences in gameplay and much improved performance in certain technical aspects. However, seeing mutation on the screen now, I seriously wish I had picked it, as one of my favourite aspects of audio is the ability to mess with it in both drastic and beautiful ways. A lot of my work is heavily inspired by my time at Huddersfield University under the supervision of a collection of tutors and composers who are leading contributors to the computer music scene.

As previously stated, I built a spin-off of an audio-only game I had prototyped several months ago. One of the key differences was the non-repetitive sound design, which was much richer in audio content and diversity than previous versions, mainly due to having no creative boundaries and making a game for the fun of it. However, one issue that arose during development hindered the entire game's credibility: an unexpected amount of audio glitching and slow, disrupted playback. In a video game, audio issues would be annoying and distracting, but the game would still be playable. For a game that relies entirely on audio quality, it was disastrous. (I will be spending some time troubleshooting to find out what was going wrong.)

Unfortunately I was unable to stay till the end, so I missed some of the work being displayed. Some Vines and photos of the final games are available on the Manchester Game Jam Twitter - @MCRgamejam





Monday 28 October 2013

Manchester Game Jam (03/11/13)

Manchester Game Jam is an opportunity for game developers to gather and take part in a 'rapid fire' one day session of creating computer games. I have never attended an event like this before, so I thought I would give it a go.

After communicating with the organisers of Manchester Game Jam it became apparent that no audiogame developers have ever attended, which I found fairly understandable considering the limited number of professional audiogame developers out there. So as an aspiring audio-only games developer I thought I would take up the offer to attend the one day game jam on November 3rd. I will probably attempt to develop a small audio-only game that I have been mulling over in my head for some time, which touches on several areas of previous research and practice surrounding auditory dimensions and 'listen and find' systems. I will be using Max/MSP to develop the game, due to its extensive audio capabilities.

Because the game is audio-only, I will have to prepare quite heavily for the session, as I will not be able to record any dialogue or location recordings on the day. To counter any issues with missing or newly required sounds I will be running a DAW alongside the actual game development software, which will be used to alter and/or combine sounds as well as integrate any additional sounds gathered from a variety of online sound libraries.

One of the most exciting aspects of this event for me will be meeting a load of new people who share the same interest in making and playing games, along with the chance of hanging out with some audio guys/girls who I believe occasionally attend these game jam events!

I will probably post my thoughts on the event along with a standalone application of the game for download after the event.

http://madlab.org.uk/content/manchester-game-jam-nov-2013/

Sunday 8 September 2013

MFP Update - Weekly Diary

Weekly Diary

Date
Activity
Key Findings
07/09/13
Post Meeting Game Update
Quicker gameplay

There was too long a gap between firing a torpedo and receiving feedback, so to increase the pace this time has been cut, avoiding silence and confusion.

Rate of playback

The auditory dimensions have been improved to speed up the rate of sonar playback. This minimises any disadvantage caused by a battleship being positioned on the outskirts of the 2D space.

Two separate versions have now been made, rather than one game switching between the two. Either the adaptive or the random difficulty version will be randomly selected and sent to each participant, and the resulting data will be compared between the two.

Dynamic game balancing

The system now works around the mean (average) score, which sets a starting point for both sonar and ambient sounds for the round. This is then affected (increased or decreased) by the time of the level. Getting 'easier' as the level goes on, but getting harder/easier depending on average score. Consideration was made to change the difficulty through torpedoes and listens 'live' in the game. But this produced negative and clumsy results.

The alternative version uses the same method of changing the game's difficulty, but does not use the same data to drive it. Instead of player/level data it takes a random sequence of numbers generated with the 'drunk' object.
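The two difficulty drivers can be sketched in Python (the actual patch is in Max/MSP; function names and the linear easing are my own assumptions):

```python
import random

def sonar_difficulty(mean_score, t, level_length):
    """Adaptive version: the round starts at a difficulty set by the
    running mean score, then eases off linearly as the level goes on."""
    return mean_score * (1.0 - t / level_length)

def drunk(current, maximum, step, rng=random):
    """Control version: a bounded random walk, a rough analogue of Max's
    [drunk] object - each call moves at most `step` from `current`,
    clamped to the range [0, maximum]."""
    return max(0, min(maximum, current + rng.randint(-step, step)))
```

Both functions produce a value on the same scale, so the rest of the patch can treat the adaptive and random versions identically.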

Instructions
After demonstrating the game, it became clear that the instructions were not clear enough. So a new instructions script was written, recorded and implemented into the game. It is a bit longer, but more detail has been added to assist player understanding.

Application Building
Two applications have been built, with careful consideration made to avoid including any unnecessary files in the game. This stems from previous mistakes on projects where a collection of old audio had been left in the final build.

Participant Testing
Participant testing is under way. Results have begun to come back, providing the required information along with some comments from users about the game.

These comments will provide a platform for improvement and further development of the game.

Monday 2 September 2013

MFP Update - Weekly Diary

Weekly Diary


Date
Activity
Key Findings
01/09/2013
Audiogame development
The Max/MSP game has developed to include a basic training level, where the player can get a short feel for the 2D space.

The two auditory dimensions have been combined into a single dimension, creating an enjoyable and easier way of locating audio in a 2D space.

Dynamic Game Balancing
The development of this game has provided an interesting opportunity to build on a fairly new type of adaptive difficulty system, which uses audio to influence the difficulty the player experiences.

This idea was coined by Brian Schmidt in a game called EarAttack. At GDC 2013 he spoke about using the limitations presented by audiogames to influence the difficulty settings for the player. One way this can be achieved is by overloading the audio spectrum: by introducing broadband sound such as wind or fire, it becomes increasingly difficult to locate and separate sounds.

Such a system has been implemented, introducing different sounds ranging from distracting one-shots to broadband sounds as the game gets harder. This is determined by the previous level's score.
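The mapping from previous score to masking content might look something like this (a hypothetical sketch; the layer names and score scale are invented):

```python
# Distractor layers ordered from sparse one-shots to broadband beds;
# the better the previous level's score, the more layers are enabled.
DISTRACTORS = ["one_shot_clank", "one_shot_drip", "wind_bed", "fire_bed"]

def masking_layers(prev_score, max_score=100):
    """Return which distractor layers to enable for the next level."""
    fraction = min(prev_score, max_score) / max_score
    n = round(len(DISTRACTORS) * fraction)
    return DISTRACTORS[:n]
```

Ordering the list from narrowband to broadband means difficulty ramps smoothly: weak players only ever hear the occasional one-shot, while strong players face the full spectral masking.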

Compare results of two versions - one using DGB and one not using DGB

- Score of each version

- Level of enjoyment of each version

- And time played of each version

Voice acting
With the instructions and gameplay improving each week, the dialogue produced by the voice actors has not been able to keep up with the changes.

With a significant delay between writing/sending the script and receiving the recordings, the game was evolving quicker than the dialogue was arriving. This has resulted in me recording the lines myself, so the dialogue can be changed quickly and implemented to test the subsequent changes.

Implementing dialogue would normally come late in the development of a video game; however, an audio-only game built in Max/MSP requires the files to be in place to see how the game works. This is because a dialogue file produces a bang when it completes, which then triggers gameplay and game state changes.
Voice actors have not been used for the instructions, due to changes in controls and gameplay progression.

However, any further development of the game will include a revised script and utilise voice actors.

The game still contains a selection of feedback lines which were produced by a voice actor. These are used to provide a positive and negative vocal response to a players use of a torpedo.

Memory/File Size
Consideration has been made to the file size of the game. As a downloadable game, it is important to reduce the file size of certain audio components without drastically altering the quality. A series of techniques have been used so far, including shorter looped sounds; randomly timed, selected and altered playback; and reduced sample rates.

It is important to note the platform being used for the game, which determines how much RAM (random access memory) is available. With the PC selected as the target device, more RAM can be allocated to the game than on handheld consoles. Finally, another file size consideration works in our favour: as this is an audio-only game, the visual assets that often take up a significant amount of memory in video games can be side-stepped entirely.

Any further development of the game will continue this work to ensure the file size is the lowest it can be whilst attempting to retain quality.
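The savings from these techniques are easy to estimate for uncompressed PCM (illustrative numbers, not the game's actual assets):

```python
def wav_bytes(seconds, sample_rate=44100, channels=2, bit_depth=16):
    """Approximate uncompressed PCM size in bytes (headers ignored)."""
    return int(seconds * sample_rate * channels * (bit_depth // 8))

# A one-minute stereo ambience versus a five-second loop at half the
# sample rate: the loop, retriggered with random variation, is 24x smaller.
full_bed = wav_bytes(60)                      # 10,584,000 bytes (~10 MB)
short_loop = wav_bytes(5, sample_rate=22050)  # 441,000 bytes (~0.4 MB)
```

This is why short randomised loops beat long baked beds for downloadable builds: the variation is generated at runtime rather than stored on disk.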


Monday 26 August 2013

MFP Update - Weekly Diary

Weekly Diary


Date
Activity
Key Findings
25/08/13
Aims/Rationale
Expand audio-only game to compare enjoyment and score over two different auditory dimensions.

Flow will be tested through the adaptive difficulty system, which analyses the previous score and adjusts the difficulty accordingly.

Gameplay
To present the player with an easier method of learning the game, a series of step by step instructions have been implemented. This involves a restructure of the script, controls and gameplay.

The controls have changed in an attempt to disconnect the enemy ship's sound from the player's sound. The number of times the player listens to the enemy ship negatively affects the score. Once the player is confident they have found the area where the enemy ship is located, they press ENTER to shoot. This produces an 'audio cut scene', where a pre-designed collection of audio suggests torpedo fire and impact/miss.


The score is now altered by the player's use of torpedoes and enemy battleship locators.

Max/MSP Development
The second auditory dimension has been developed, and differs from the first beep rate/pitch system. It uses a synthesized sound as opposed to the buffer system in the first, with panning for the x-axis and timbre for the y-axis.
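In Python terms, the second dimension's mapping might be sketched like this (the parameter ranges, and timbre expressed as a harmonic count, are my own assumptions; the real version is a Max/MSP synth):

```python
def auditory_dimension_two(x, y):
    """Map a normalised (x, y) position in [0, 1] x [0, 1] to synth
    parameters: x drives stereo pan, y drives timbre - here the number
    of harmonics, so targets higher up the grid sound brighter."""
    pan = 2.0 * x - 1.0           # [0, 1] -> [-1, 1]
    harmonics = 1 + round(7 * y)  # [0, 1] -> 1..8 partials
    return pan, harmonics
```

Keeping both outputs on fixed, documented ranges makes the two auditory dimensions directly swappable in the rest of the patch.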

Sound Design
Torpedo - Instead of just an explosion, the torpedo now has a launch sound consisting of steam, bubbles and the like, plus lowpass-filtered distant impacts.

More random one shots have been added to create diversity in the sonic environment.

Sunday 18 August 2013

MFP Update - Weekly Diary

Weekly Diary

Week Beginning
Activity
Key Findings
18/08/13
Adaptive Difficulty
Research has suggested there is a significant link between player skill, gameplay difficulty and flow. Valve's Left 4 Dead games adjust game difficulty to match changes in player skill, and I have been investigating whether I can implement an element of adaptive difficulty in the game. So far I have two possible methods:

- Gradually decreasing in difficulty over time

- Analyse previous level time and scale to a new difficulty for next level
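The second method could be sketched as a simple clamp-and-nudge rule (the target time, step size and 0-1 difficulty scale are invented for illustration):

```python
def next_difficulty(prev_level_time, current, target_time=30.0, step=0.1):
    """Raise difficulty when the player beat the target time on the
    previous level, lower it when they were slower; clamp to [0, 1]."""
    delta = step if prev_level_time < target_time else -step
    return max(0.0, min(1.0, current + delta))
```

A fixed step keeps the adjustment gentle, which matters for flow: large jumps in difficulty are exactly what breaks the skill/challenge balance this system is meant to maintain.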

Dialogue
Certain situations require speech to convey information and provide audio feedback in more complex scenarios (Röber & Masuch 2005).

Currently waiting on some dialogue from voice actors. System in place, just need audio files.

One of the problems is that as the game develops, new scenarios and controls are produced, but the script, which includes the control instructions, has already been written and recorded. So I will either have to ask for more lines, or somehow make the controls work with what I have.

Non-Repetitive Sound Design
The ambient sound design is beginning to take shape. The footstep system selects 1 of 10 files and pans them left to right or right to left.

A room tone has been added, a combination of several sounds. I need to cut the file down to keep the overall build size small.

Radio chatter has been implemented, originally a 2 hour capture of RAF communication. A very small selection of this was cut into individual sound files, which are selected randomly at random time intervals. They have been panned to the left to contribute to a diverse spatial environment.
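The same non-repetition pattern can be sketched in Python (the real system runs in Max/MSP; here panning is randomised rather than alternated, purely for illustration):

```python
import random

def ambience_stream(files, rng=random):
    """Endlessly yield (file, pan) pairs: a random file each time, never
    the same one twice in a row, with a random stereo position in [-1, 1]."""
    last = None
    while True:
        choice = rng.choice([f for f in files if f != last])
        last = choice
        yield choice, rng.uniform(-1.0, 1.0)
```

Excluding the previous file is the cheapest way to kill the "machine-gun" effect of an identical sample repeating back to back.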

Testing
The gameplay has been completed; however, I am still waiting on dialogue for the player feedback, and the questionnaire for testing still needs to be done.

I have set a deadline of Monday for the dialogue to be sent to me; if this deadline is met it can be implemented the same day, meaning testing can commence while improvements to the sound and game design continue alongside.

Tuesday 13 August 2013

MFP Update - Weekly Diary

Weekly Diary

Week Beginning
Activity
Key Findings
10/08/2013
Script Writing & Voice Actor Auditions
I have written an early script for the game, which provides the player with details of how to play the game and occasional dialogue feedback.

The search has begun to locate a suitable voice actor to deliver these lines. This has involved engaging with voice actors on several online game development forums.

Game Development
The game now has two gameplay types; seek and destroy, and submerge.

Seek and destroy has been split into two difficulties: easy and hard. In easy, you hear the enemy ship at the same time as you are searching for it, so the player simply matches the sounds (still using the x/y knowledge). In hard, you hear the sounds independently, requiring a greater level of memory.

Submerge is a game type designed to test a player's speed. When the alarm is heard, the player must quickly lower the depth of the submarine to avoid incoming attacks. This process involves listening to multiple sounds to gather information on depth, location, and so on.

Data Capture
The capturing of game/player data has been a process that has developed sporadically. Early versions of the data capture system were developed for the prototype versions, however as the game system developed, problems occurred with saving and understanding saved data.

An example of how current level times are saved as a text file is displayed in the image below. The single number is the level number and the four digits below it are the time it took a player to complete the level. An additional number will be included shortly to record the x/y coordinates of the ship, with the aim of establishing whether some areas of the 2D space are harder to locate than others.
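A reader for that layout might look like this (the exact file format is assumed from the description: a level number on one line, the completion time on the next):

```python
def parse_level_times(text):
    """Parse alternating lines of level number and completion time
    into a {level: seconds} dictionary. Blank lines are ignored."""
    lines = [line for line in text.splitlines() if line.strip()]
    return {int(lvl): float(t) for lvl, t in zip(lines[::2], lines[1::2])}
```

Folding the capture into a dictionary up front makes the later analysis (mean time per level, outliers per grid region) a one-liner each.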

Non-Repetitive Sound Design
A feature of the game currently in development is a non-repetitive ambient sound design.

A footstep system has been developed, which randomly selects 1 of 10 footsteps (on metal) and varies stereo movement and amplitude. The panning is controlled independently for each .wav file, using a scaled MIDI panning system to create the feeling of activity around the player.

Also, randomly selected 'radio chatter' can be pieced together to create a background communication ambience.
 

Data Capture Example

Tuesday 6 August 2013

MFP Update - Weekly Diary

Weekly Diary

Week Beginning
Activity
Key Findings
29/07/13
Participant Testing
Last week's testing highlighted issues with the three variations of the x/y prototypes. Firstly, the webcam interface proved to be an inaccurate way of controlling an audio-only game, especially for visually impaired users.

It soon became apparent that the webcam interface was not needed to establish the best method of locating a sound in 2D space, and in some cases it slowed the process down significantly.

A high percentage of results showed that the sound was not found. This suggests that participants either could not memorise the intended sound or had difficulty navigating their tracked colour to the required area of the grid.

An example of the three variations can be found at the bottom of this post.

Reading List
Stevens, S. S. (1935). The relation of pitch to intensity. Journal of the Acoustical Society of America, 6, 150–154.

Stevens, S. S. & Davis, H. (1938). Hearing: Its psychology and physiology. Oxford, England: Wiley.

Walker, B. N. (2002). Magnitude estimation of conceptual data dimensions for use in sonification. Journal of Experimental Psychology: Applied, 8, 4, 211–221.

Walker, B. N. (2007). Consistency of magnitude estimations with conceptual data dimensions used for sonification. Applied Cognitive Psychology, 21(5), 579–599.

Walker, B. N., & Ehrenstein, A. (2000). Pitch and pitch change interact in auditory displays. Journal of Experimental Psychology: Applied, 6, 15–30.

Walker, B. N., Nance, A., & Lindsay, J. (2006). Spearcons: Speech-based earcons improve navigation performance in auditory menus. Proceedings of the International Conference on Auditory Display, 63–68.

Walker, B. N., & Lane, D. M. (2001). Psychophysical scaling of sonification mappings: A comparison of visually impaired and sighted listeners. Proceedings of the International Conference on Auditory Display, 90–94.

Walker, B. N., & Kramer, G. (2004). Ecological psychoacoustics and auditory displays: Hearing, grouping, and meaning making. In J. Neuhoff (Ed.), Ecological Psychoacoustics (pp. 150–175). New York: Academic Press.

Hermann, T., Hunt, A. and Neuhoff, J.G., 2011. The Sonification Handbook. 1st ed. Berlin: Logos Publishing House



New prototype
The three prototypes confirmed the research surrounding pitch and suggested that an individual's musical background affects their perception of audio.

The next prototypes intend to take the findings of Walker and colleagues and develop a game for the blind, based around locating a sound in 2D space.

Details of current state of practical
The project has begun to focus on a more established end product: an audio-only game for the blind that adopts the research surrounding auditory dimensions.

I am using the previous three prototypes as a base layer to expand and develop further into a fully working audio-only game.

Game Features

Spoken dialogue
- Voice actor
- Explain instructions

Button and Mouse controls
- Tracks the movement of the mouse in relation to screen
- Audio is only produced when clicked or clicked & dragged
- Space bar to start level
- Questionnaire at the end will utilise buttons 1-5

Audio-only version of battleships
- Hear the enemy battleship's sonar
- Use your computer equipment to locate and destroy the enemy ship

Sound Effects
- Sounds effects (and dialogue) used to provide occasional audio feedback

Max/MSP
- On opening the game, a jit.window is opened in full-screen
- This creates a blank visual for the player, ensuring there is no visual advantage for sighted players.



https://www.dropbox.com/s/4hnwrd43xaqheso/Participant%20Testing%202.zip

Friday 26 July 2013

MFP Update - Weekly Diary (26/07/13)

Weekly Diary

Week Beginning
Activity
Key Findings
26/07/13
Preparation for participant testing
This week has been very hands-on with the practical side of the project, as it was important to start getting some results that will lead towards answering the project question.

One of the issues was having three different systems producing audio in different ways: two used MIDI (in different ways) and the other used audio. After creating the first two, I decided to use only audio, to improve the gaming experience. This involved creating three very similar systems (all using the same grid), each relating audio to the x/y axes in a different way.

So I have now produced three versions of the prototype. These are as follows...

Version 1 - X = Pitch Y = Duration
Version 2 - X = Rhythm Y = Instrumentation Layers
Version 3 - X = Panning Y = Pitch

To provide the user with a familiarity with the prototype, each version uses the same system for playing as well as looking very similar.

Current Testing
The game is currently out with a participant to check the game mechanics and take part in the test. This is an important step: I have spent a long time working on the game, so I may have become oblivious to problems or bugs. Hopefully I will have the results by tonight or tomorrow.

What the participants get...
I have produced a .zip file containing three versions of the prototype, a welcome note and a time, score and questionnaire. See image below.

What now?
Wait for the feedback from the first tester; if no problems or bugs are picked up I will immediately send the game out to my list of participants. I will then aim to have some data back for analysis before the upcoming meeting. Any problems will be addressed immediately.

I am continuing my reading regarding spatial audio feedback, as well as investigating how other audio-only games do this.

I am also trying to think of more ways in which to use the x&y axis for audio purposes. Ideally I would like to receive feedback regarding these prototypes and analyse the average speed it took to find the sound in each game. Then further develop the two that were the quickest.






Monday 22 July 2013

MFP Update - Weekly Diary

Weekly Diary

Week Beginning
Activity
Key Findings
15/07/13
What am I trying to find out?
After a productive meeting I was able to finally focus on what the final project is trying to find out, which is to discover effective ways of positioning/locating sound in a 2D space.

The practical work completed to date has provided the groundwork for the practical side of the project. It highlighted the issue of positioning sound in 2D space in audio-only games, and that it would be useful to have an x/y system to work out spatial location.

Research title development: 'Audio Feedback for Spatial Information: A Game for the Blind'
This title clearly highlights what the research is and how it will be achieved: through an audio game for the blind that uses a variety of audio/x-y combinations.

Practical Project (Max/MSP)
Completed 2 different versions of the game, which use different audio/x&y combinations.

My initial thought was that I could use the same Max/MSP system for each game; however, due to the different approaches to sound, two completely different systems had to be created.

Version 1 uses a midi piano grid system

x = pitch
y = duration

Version 2 uses a system very similar to an interactive music system, which turned out to be very tricky to get right (See Image 2/3). 

However, I am happy with the results it produces and will be interested to see the timing and questionnaire information.

x = Rhythm
y = Instrumentation Layers

Participant gathering
I am beginning to have a more even ratio between visually impaired and sighted participants. Once the third version is created I will send out three standalone applications with questionnaires, then make any necessary changes and improvements in an attempt to discover the optimal sound characteristics for positioning and locating sound in a 2D space.

Under the Hood



Presentation Mode