
Experiment #1 - The Audio Puzzle

Before I get into this update, I have to confess I was ridiculously excited for this experiment. Not only had I been sitting on this concept for a while, but it was a great chance to take some of my established skills and utilise them in a way I never had before in my professional practice. The Audio Puzzle is the first experiment in my current phase of multimodal exploration: a 3-day experiment focused specifically on the development of an interactive game built around a sound-only user interface. The Audio Puzzle was an effort to see if design methods associated with one discipline could be realised exclusively within another. For example, could the call-to-action principles of game and visual design be successfully executed purely through sound? To attempt this, The Audio Puzzle featured only two modalities: interaction and sound.

Now, bear with me, my internet-dwelling friends... This is a difficult experiment to document here, for two reasons: 1) the audio elements of this game relied on a spatialised audio setup (i.e. a 22.2 surround sound array); 2) the playtesting for this prototype game was actually conducted in the dark! So, I'll do my very best to break this down... here's how it worked: the mechanics of this game were modelled on the principles of safecracking, whereby a series of target sounds emanating from different positions within the room would need to be located and homed in on in order to crack the puzzle and complete the game. To engage with this experiment, users situated themselves in the centre of the 22.2 audio array and were handed a PlayStation 4 controller. This familiar gaming peripheral allowed players to control the spatialised flight of a 'sound-avatar', which they would use to seek out a target sound emanating from a specific position in the room.

With only their sense of hearing to go on, players would hit the right shoulder trigger on the PS4 controller when they felt their sound-avatar was correctly aligned with the target sound. If the sound-avatar did not occupy the same location as the target sound, a failure sound was triggered and they would need to try again. If the sound-avatar was positioned correctly and occupied the same spot as the target sound, a success sound was triggered and a new target sound would appear elsewhere within the XYZ coordinates of the audio array. The puzzle was solved when all eight target sounds that made up the full sequence had been successfully 'cracked'. On completion, a simple melodic loop would play, clearly signifying the player's success in solving the puzzle.

This experiment required me to develop a bespoke Max for Live plugin that allowed a PlayStation 4 controller to communicate with the various parameters within the Envelop for Live (E4L) plugin suite: the open-source plugins used to encode multichannel audio in Ableton Live for higher-order ambisonic speaker arrays, such as the 22.2 surround system I was utilising for this experiment. This patch was what enabled users to control the real-time movement of the sound-avatar within the 22.2 surround sound audio array.
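Since I can't embed the patch itself here, here's a rough sketch of its underlying mapping logic, rewritten in Python. To be clear, this is an illustration rather than the actual Max patch: the coordinate ranges, movement speed, and names are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass

def clamp(v: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Keep a coordinate within the (assumed) bounds of the speaker array."""
    return max(lo, min(hi, v))

@dataclass
class SoundAvatar:
    x: float = 0.0       # left/right position within the array
    y: float = 0.0       # front/back position
    z: float = 0.0       # height
    speed: float = 0.05  # movement per update tick (illustrative value)

    def update(self, lx: float, ly: float, ry: float) -> None:
        """Nudge the avatar using stick deflections in the range -1.0..1.0."""
        self.x = clamp(self.x + lx * self.speed)
        self.y = clamp(self.y - ly * self.speed)  # pushing a stick 'up' usually reads negative
        self.z = clamp(self.z - ry * self.speed)
```

In the actual patch, the equivalent of each update() call writes the new coordinates out to the spatialisation parameters, which is what moves the sound-avatar around the array in real time.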

Alongside this, an unedited version of E4L's 'Source Panner' was used to position the target sounds. The core mechanics of the puzzle were driven by the data from both my edited version of their plugin and the original: when the numeric spatial parameters of the sound-avatar matched those of the target sound, the spatialised 'safe combination' could be cracked. Programmatically, it was that simple.
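To show just how simple, here's that comparison sketched in Python. Again, this is illustrative rather than the actual Max logic: the tolerance value and the target coordinates are invented for demonstration purposes.

```python
TOLERANCE = 0.1  # assumed: how closely the avatar must match a target

# The eight-step 'safe combination': XYZ target positions within the array
# (two shown here; the real sequence held eight, and these values are invented).
targets = [
    (0.8, 0.2, 0.0),
    (-0.5, 0.7, 0.4),
]

def is_cracked(avatar, target):
    """True when every spatial parameter of the avatar matches the target."""
    return all(abs(a - t) <= TOLERANCE for a, t in zip(avatar, target))

def on_trigger_press(avatar, state):
    """Runs when the player squeezes the right shoulder trigger."""
    if is_cracked(avatar, targets[state["step"]]):
        state["step"] += 1
        if state["step"] == len(targets):
            return "play_completion_melody"  # puzzle solved
        return "play_success_sound"          # a new target appears
    return "play_failure_sound"              # try again
```

One note on the tolerance window: an exact equality check would be nearly impossible to satisfy by ear alone, so some margin around each target position seems the natural design choice, though the margin used in the actual prototype is my assumption here.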

Many practical discoveries were made in the process of this experiment. One such discovery was how detailed the audio content needed to be to establish a successful call-to-action mechanism. This brought my previously coined notion of 'resolution of modality' into play; the question, in this case, was how fundamental or complex the sonic modality needed to be in order to achieve its objective. This question arose when playtesting the first iteration of The Audio Puzzle, which was unplayable: it was made up of simple, continuous sine-tone drones that failed to inform users of the nature of the puzzle and even prevented them from deciphering the location of their sound-avatar in the room. However, when more ingredients such as rhythmic pulses and textures were introduced to the sound (i.e. the resolution of the sonic modality was increased), the objective of the game became far clearer and the playability of this experiment increased dramatically.

Conversations with play-testers led to a wider discussion about the nature of this experiment's design, around the challenges and values of modal substitution (in this case, what would usually be fulfilled by visual media being fulfilled by sonic media) and how designing with this principle in mind could have a dramatic effect on the way multimodal media projects are conceived. This has given rise to a new consideration for my toolkit for multimodal creativity - one that signposts the interplay between different sensory and disciplinary modalities. A map of such territory would allow makers to navigate the multidimensional nature of creating works that feature a variety of modal ingredients. However, I'm just beginning to scratch the surface here - so let's roll on with the next exploration!
