sound design and interactive audio

For this installation only one computer was used. The machine had to run two separate Processing sketches (the software used for the graphical side of the installation), presented on two separate screens, which means the computer also had to run two independent Max/MSP patches (the software used for the audio side of the installation). Since the graphical part of the installation consumes a large amount of the computer's processing power, there was no chance of performing the sound design in real time. Some simple sound design techniques could have been used to generate the sound in real time, but that is not what I was after. Therefore all the sound samples were prepared in advance.

To avoid repetition I used techniques from the world of game audio. In video games all the sounds are made in advance. They are completely dry and mono, because the game's audio engine takes control of the sound manipulation: it randomly chooses sounds from pre-prepared sound banks and applies the appropriate acoustic treatment according to the environment in which the scene takes place. For instance, when you hear footsteps in a video game, the audio engine selects a random footstep sound from a footstep bank. If the scene happens in a large room, a big reverb is applied to the sound, and as the character moves through the space, panning and volume are modulated according to the position of the sound source.
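To make the idea concrete, here is a minimal sketch in Processing of the bank-selection and placement logic described above. All the names (footstepBank, pickFootstep and so on) are my own placeholders, not part of any real game engine, and the inverse falloff curve is just one simple choice among many.

// Toy version of game-style sample selection and placement.
String[] footstepBank = { "step01.wav", "step02.wav", "step03.wav", "step04.wav" };

// Pick a random variation so consecutive footsteps never sound identical.
String pickFootstep() {
  return footstepBank[int(random(footstepBank.length))];
}

// Derive volume from the distance between listener and source
// (simple inverse falloff, clamped to the 0..1 range).
float volumeFor(float distance) {
  return constrain(1.0 / (1.0 + distance), 0, 1);
}

// Derive stereo pan from the listener-relative x position:
// -1 is hard left, 1 is hard right.
float panFor(float x) {
  return constrain(x, -1, 1);
}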

The second important characteristic of game audio is that it cannot be mixed or mastered in advance. Unlike in film, where every sound event is predetermined, the sound in video games is chaotic. Many sounds can occur at the same time, so the mixing has to be done in real time by the audio engine. When many sounds fight for the same frequency range, some of them need to be filtered automatically; at the same time, some sounds need to become quieter when other sound events appear. The result is a clearer sonic picture instead of an extremely loud and muddy one.
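A crude sketch of this kind of automatic ducking, again in Processing: the Voice class and the priority scheme are hypothetical simplifications of what a real audio engine does (a real engine would also use smoothed gain ramps rather than instant jumps).

// Toy real-time ducking: when a higher-priority sound is active,
// every lower-priority active voice is turned down.
class Voice {
  String name;
  int priority;      // higher priority wins the frequency range
  float gain = 1.0;
  boolean active = false;
  Voice(String name, int priority) {
    this.name = name;
    this.priority = priority;
  }
}

void duck(Voice[] voices, float duckedGain) {
  int top = Integer.MIN_VALUE;
  for (Voice v : voices) {
    if (v.active) top = max(top, v.priority);
  }
  for (Voice v : voices) {
    v.gain = (v.active && v.priority < top) ? duckedGain : 1.0;
  }
}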

The Max/MSP patch in the 7K installation was therefore used to handle the audio samples in real time and to keep the sonic picture clean. All the triggering messages are received from Processing, which drives the graphics, and this communication between Processing and Max/MSP is what keeps sound and picture synchronized.
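The post does not say which protocol carries these trigger messages, but OSC over UDP is the standard bridge between Processing and Max/MSP. A minimal sketch of the Processing side using the oscP5 library could look like this; the /trigger address and the port numbers are placeholders of my own choosing.

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress maxPatch;

void setup() {
  // Listen on port 12000; send to a Max/MSP patch on the same machine.
  oscP5 = new OscP5(this, 12000);
  maxPatch = new NetAddress("127.0.0.1", 7400);
}

void draw() {
  // ... graphics happen here ...
}

// Call this whenever a visual event needs a sound: the message tells
// Max/MSP which sound bank to pick a random sample from.
void triggerSound(String bank) {
  OscMessage msg = new OscMessage("/trigger");
  msg.add(bank);
  oscP5.send(msg, maxPatch);
}

On the Max/MSP side, a [udpreceive 7400] object feeding [route /trigger] would then dispatch each incoming message to the appropriate sample player.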

The sounds for the installation are completely artificial, although they rely on the fact that many processes in nature have great musical potential. If we listen, for example, to the sounds of the planets (NASA, Sound of Jupiter) or to sonifications of DNA sequences (DNA & Protein Music), we notice strong musical connotations in very different processes taking place in our universe, and there are many other examples. This was the main influence on the sound design for the 7K installation.
