Installation Software

The early versions of the installation software (Pond I and Pond II) used Macromedia Director and MIDI plug-ins, or Macromedia Director using inter-application communication with MaxMSP. The current version uses MaxMSP with additional code in Java to handle image generation. It's a much more ambitious application than the various versions of "Pond." It handles:

  • sensor input and logic, with multiple event detection conditioned by event history (Markov chaining; see the sketch after this list),
  • image generation and compositing on 3D surfaces,
  • windowed granular synthesis for four polyphonic voices, including pitch- and time-shifting,
  • multi-channel spatialized audio output with saving and loading of "flight paths" created in a visual interface,
  • loading of new images and sounds upon notification that they have been uploaded from the IgnoGame (performance) application.
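
As a rough illustration of the history-conditioned event logic in the first item, a first-order Markov chain over event labels makes the choice of the next event depend on the one that preceded it. The class and method names below are hypothetical and not taken from the installation's MaxMSP/Java code; this is only a minimal sketch of the idea.

```java
import java.util.*;

/**
 * Minimal sketch (hypothetical names) of history-conditioned event selection:
 * the probability of the next event label depends on the previous one,
 * i.e. a first-order Markov chain over event labels.
 */
public class MarkovEventChooser {
    private final Map<String, Map<String, Double>> transitions = new HashMap<>();
    private final Random random = new Random();
    private String lastEvent = "rest";

    /** Register the probability of moving from one event label to another. */
    public void addTransition(String from, String to, double probability) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(to, probability);
    }

    /** Choose the next event label, conditioned on the most recent one. */
    public String nextEvent() {
        Map<String, Double> row = transitions.getOrDefault(lastEvent, Map.of());
        double roll = random.nextDouble();
        double cumulative = 0.0;
        for (Map.Entry<String, Double> entry : row.entrySet()) {
            cumulative += entry.getValue();
            if (roll < cumulative) {
                lastEvent = entry.getKey();
                return lastEvent;
            }
        }
        lastEvent = "rest";  // fall back when the probabilities do not sum to 1
        return lastEvent;
    }
}
```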

Interaction in Pond I and Pond II was mediated through four photosensors that detected the interruption of light as participants waved their hands over transparent plastic rods. Participants could control the locations of four "agents" in the display. Agents were represented by four colored circles that moved over invisible paths. As an agent assumed a new position, it would generate an event associated with that position. When two or more agents were in proximity or occupied the same location, complex chains of events could be generated. The events were based on traversals of a graph derived from the tiling pattern used to fragment the faces in the display. Audio events in Pond consisted of consonantal, vowel, and syllabic sounds derived from the spoken names of the persons whose images were displayed. Events were synchronized to an underlying pulse to create syncopated rhythmic patterns. See the IgnoTheory presentation (PDF, 2.0M) for details on the tiling patterns and some of the music generation strategies derived from them.
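
The agent behavior described above could be sketched roughly as follows: each agent occupies a node of the graph derived from the tiling pattern, steps to a neighboring node, and triggers an event at its new position, with a compound event when agents coincide. The representation (integer node indices, random walks over adjacency lists) and all names are assumptions for illustration, not the actual Pond implementation.

```java
import java.util.*;

/**
 * Sketch (hypothetical names) of the agent/graph interaction: each agent sits
 * on a node of a graph derived from the tiling pattern, advances to a random
 * neighboring node, and triggers an event at its new position; agents that end
 * up on the same node trigger an additional compound event.
 */
public class PondAgents {
    private final Map<Integer, List<Integer>> neighbors;  // node -> adjacent nodes
    private final int[] agentPosition = new int[4];       // current node of each agent
    private final Random random = new Random();

    public PondAgents(Map<Integer, List<Integer>> neighbors, int startNode) {
        this.neighbors = neighbors;
        Arrays.fill(agentPosition, startNode);
    }

    /** Move one agent to a random adjacent node and report the events it causes. */
    public List<String> step(int agent) {
        List<Integer> options = neighbors.get(agentPosition[agent]);
        agentPosition[agent] = options.get(random.nextInt(options.size()));

        List<String> events = new ArrayList<>();
        events.add("audio-event@node" + agentPosition[agent]);
        for (int other = 0; other < agentPosition.length; other++) {
            if (other != agent && agentPosition[other] == agentPosition[agent]) {
                events.add("compound-event@node" + agentPosition[agent]);
            }
        }
        return events;
    }
}
```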

The current installation uses sixteen magnetic sensors embedded in the projection surface, paired with sixteen polymer clay "stones" that contain embedded magnets. The sensors detect when a stone is located on or near them, and also when the stone is flipped over. The locations of the sensors are indicated by colored circles in the projection; the circles flash on and off to indicate that a stone has been placed or removed.
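
One plausible way to turn a magnetic sensor reading into "placed," "removed," and "flipped" notifications is to classify the measured field against thresholds and report only the transitions. The thresholds, polarity convention, and names below are assumptions for illustration, not details of the actual sensor hardware or software.

```java
/**
 * Sketch (assumed thresholds and names) of classifying one magnetic sensor:
 * a stone placed magnet-side down reads strongly positive, a flipped stone
 * reads strongly negative, and an empty sensor reads near zero.
 */
public class StoneSensor {
    public enum State { EMPTY, STONE_UP, STONE_FLIPPED }

    private State state = State.EMPTY;

    /** Classify a raw field reading and return the transition it caused, if any. */
    public String update(double fieldStrength) {
        State next;
        if (fieldStrength > 0.5)       next = State.STONE_UP;
        else if (fieldStrength < -0.5) next = State.STONE_FLIPPED;
        else                           next = State.EMPTY;

        String transition = null;
        if (next != state) {
            if (state == State.EMPTY)     transition = "placed";
            else if (next == State.EMPTY) transition = "removed";
            else                          transition = "flipped";
        }
        state = next;
        return transition;  // null when nothing changed
    }
}
```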

The installation software is still evolving, particularly in its handling of complex musical strategies for audio events. At the moment, placing or flipping a stone generates a single audio event. In Pond, such strategies were hard-coded: only one tiling pattern was used, its associated graph and audio events were manually encoded in the software, and all the consonantal, vowel, and syllabic sounds had been edited by hand and stored as samples. In the current installation, the patterns are generated in real time and change frequently, and new patterns, faces, and voices may be uploaded from the IgnoGame performance. Using audio analysis tools (illustrated in the performance section) and new code, written in Java, that will derive a graph from a tiling pattern, the artist hopes to implement compositional strategies similar to those used in Pond I and II, but entirely in software operating in real time, without any manual coding. In short, the application is evolving to be much "smarter" than Pond I and II.
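
One common way to derive a graph from a tiling pattern is to treat each tile as a node and connect two tiles whenever they share an edge. The sketch below assumes tiles are given as rings of vertex indices; it illustrates the general technique, not the artist's actual Java code.

```java
import java.util.*;

/**
 * Sketch (assumed representation) of deriving an adjacency graph from a
 * tiling: each tile is a ring of vertex indices, and two tiles become
 * graph neighbors when they share an edge.
 */
public class TilingGraph {

    /** Build tile-adjacency sets from tiles given as rings of vertex indices. */
    public static Map<Integer, Set<Integer>> fromTiles(List<int[]> tiles) {
        Map<String, Integer> edgeOwner = new HashMap<>();   // edge key -> first tile seen
        Map<Integer, Set<Integer>> graph = new HashMap<>();

        for (int t = 0; t < tiles.size(); t++) {
            graph.put(t, new HashSet<>());
            int[] ring = tiles.get(t);
            for (int i = 0; i < ring.length; i++) {
                int a = ring[i], b = ring[(i + 1) % ring.length];
                String edge = Math.min(a, b) + "-" + Math.max(a, b);
                Integer other = edgeOwner.put(edge, t);
                if (other != null) {   // second tile on this edge: they are neighbors
                    graph.get(t).add(other);
                    graph.get(other).add(t);
                }
            }
        }
        return graph;
    }
}
```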