PixelAudio

A Library by Paul Hertz for the Processing programming environment.
Last update, 11/12/2025.

PixelAudio blends sounds and images by mapping between arrays of audio samples and arrays of pixel values.
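Translating between the two domains comes down to a value conversion: audio samples are typically floats in [-1.0, 1.0], while 8-bit pixel channels are integers in [0, 255]. The sketch below shows the common linear mapping between the two ranges; it is an illustration of the idea, not the library's own API, and the class and method names are hypothetical.

```java
public class SampleToPixel {
    // Map an audio sample in [-1.0, 1.0] to an 8-bit gray value in [0, 255].
    static int sampleToGray(float sample) {
        float clamped = Math.max(-1.0f, Math.min(1.0f, sample));
        return Math.round((clamped + 1.0f) * 127.5f);
    }

    // Inverse: map an 8-bit gray value back to an audio sample in [-1.0, 1.0].
    static float grayToSample(int gray) {
        return gray / 127.5f - 1.0f;
    }

    public static void main(String[] args) {
        System.out.println(sampleToGray(-1.0f)); // silence trough -> 0 (black)
        System.out.println(sampleToGray(1.0f));  // peak -> 255 (white)
    }
}
```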

1D audio arrays are mapped onto 2D image arrays using space-filling curves and patterns, such as a diagonal zigzag or a Hilbert curve. PixelAudio provides a template to design your own mappings, and methods to translate values between audio and pixel data.
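A mapping like the diagonal zigzag can be expressed as a lookup table from signal index to pixel index. The sketch below builds such a table for a w × h image by walking the anti-diagonals in alternating directions; it illustrates the general technique, not PixelAudio's internal implementation, and the names here are hypothetical.

```java
public class ZigzagMap {
    // Build a lookup table: lut[k] = pixel index (y * w + x) of the k-th
    // point along a diagonal zigzag path through a w x h grid.
    static int[] diagonalZigzag(int w, int h) {
        int[] lut = new int[w * h];
        int k = 0;
        // Each anti-diagonal d contains the cells where x + y == d.
        for (int d = 0; d < w + h - 1; d++) {
            int yLo = Math.max(0, d - w + 1);
            int yHi = Math.min(d, h - 1);
            if (d % 2 == 0) {
                // even diagonals: walk up and to the right
                for (int y = yHi; y >= yLo; y--) lut[k++] = y * w + (d - y);
            } else {
                // odd diagonals: walk down and to the left
                for (int y = yLo; y <= yHi; y++) lut[k++] = y * w + (d - y);
            }
        }
        return lut;
    }

    public static void main(String[] args) {
        // Classic zigzag scan order over a 3x3 grid.
        for (int p : diagonalZigzag(3, 3)) System.out.print(p + " ");
        // prints: 0 1 3 6 4 2 5 7 8
    }
}
```

With a table like this, writing sample k of an audio buffer to `pixels[lut[k]]` (and reading it back the same way) keeps the signal and the image in one-to-one correspondence, which is the core move the library generalizes to other curves.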

The example sketches include a tutorial that walks you through building your own sampling audio synth, where you draw on an image to generate non-linear samples from an audio file. The code can be used almost out of the box: just load your own audio files in place of the ones in the examples. The ArgosyMixer and WaveSynthEditor sketches can also be used without digging into the code. Each provides a GUI: WaveSynthEditor explores additive audio synthesis that doubles as a color organ, and ArgosyMixer generates patterns that can produce both control pulses and audio. Both can also output video animation files. If you are interested in combining PixelAudio output with other applications, the AudioCapture sketch provides information about audio signal routing on macOS, and TutorialOne_05_UDP shows how to communicate with other applications using UDP.

Download

Download PixelAudio version 0.9.5-beta (1) in .zip format.

Installation

Unzip and put the extracted PixelAudio folder into the libraries folder of your Processing sketches. Reference and examples are included in the PixelAudio folder.

Keywords. Animation, Sound, Intermedia

Reference. Have a look at the javadoc reference here. A copy of the reference is included in the .zip as well.

Source. The source code of PixelAudio is available at GitHub, and its repository can be browsed here.

Tested

Platform: macOS, Windows
Processing: 4.4
Dependencies: example code requires the Minim audio library, Video Export + ffmpeg, oscP5, and G4P