Colour of Sound

A synesthetic exploration of color theory and acoustic frequency relationships.

(Type)
Colour of Sound
(Year)
2014
(Following stage)
Experiential installation

Concept

Synesthetic mapping.

The Colour of Sound explores the phenomenon of synesthesia, particularly chromesthesia, the variant in which sounds automatically evoke experiences of color. While true synesthesia is a neurological condition, this project creates an algorithmic approximation of that experience, allowing anyone to explore potential mappings between sound and color.

Drawing inspiration from both scientific research on cross-modal perception and the experiences of synesthetes, the project maps spectral qualities of sound (timbre, frequency, amplitude) to visual qualities (hue, saturation, brightness) using several different mapping strategies.

Note: The project is currently being refactored. Launch date: 2025.04.01

Go to the demo app
Watch it in action

Specs.

Technical Details

Color analysis algorithms identify pixel clusters and patterns, which are then mapped to specific audio parameters including pitch, timbre, rhythm, and spatial positioning.

The project employs multiple mapping strategies; the first of these is sketched in code after the list:

  • Frequency-to-hue mapping based on the Newtonian color wheel
  • Amplitude-to-luminance correlation
  • Spectral centroid to color saturation
  • Harmonic content to color complexity
  • Customizable mappings based on user preferences
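
As an illustration of the frequency-to-hue strategy, here is a minimal sketch that folds a sound frequency onto the 360° hue circle by its position within an octave, echoing the Newtonian color-wheel idea. The 440 Hz reference pitch and the function name are illustrative assumptions, not the project's actual code.

```ts
// Minimal sketch: frequency-to-hue via position within an octave (Newtonian color-wheel idea).
// The 440 Hz reference and the function name are illustrative assumptions.

const A4 = 440; // reference pitch in Hz

/** Map a sound frequency (Hz) to a hue (0-360°) by its fractional position inside an octave. */
function frequencyToHue(freqHz: number): number {
  const octavesFromA4 = Math.log2(freqHz / A4);       // how many octaves above or below A4
  const withinOctave = ((octavesFromA4 % 1) + 1) % 1; // keep only the within-octave part, in [0, 1)
  return withinOctave * 360;                          // spread one octave around the full color wheel
}

console.log(frequencyToHue(440)); // 0    -> the reference note sits at hue 0
console.log(frequencyToHue(660)); // ~211 -> a fifth above lands elsewhere on the wheel
```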

Theory

Colour and Sound: Related Frequency Spectra

Frequency Mapping

Imagine you could hear the color red: it would be an A in the 44th octave. Blue would be a G in the same octave, a symphony of light hidden just beyond our perception. What if colors weren't just visual experiences, but could be translated into a language of sound?

Sound and light exist as waves with distinct frequency ranges and spectra. This project creates a bidirectional mapping between these two sensory domains:

Sound Spectrum - Mechanical

  • Audible Range: 20 Hz - 20,000 Hz
  • Musical Notes: Organized in octaves (A0, A1, A2...)
  • Middle A (A4): 440 Hz

Light Spectrum - Electromagnetic

  • Visible Range: 400-790 THz (terahertz)
  • Wavelength: 380-750 nanometers
  • Color Progression: Violet → Blue → Green → Yellow → Orange → Red (Higher frequency → Lower frequency)
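
To make the "44th octave" idea above concrete, the snippet below transposes A4 (440 Hz) upward by 40 octaves and converts the result to a wavelength. This is back-of-the-envelope arithmetic for intuition only, not part of the installation's code.

```ts
// Back-of-the-envelope check of the octave-transposition idea (illustrative arithmetic only).
const A4_HZ = 440;
const SPEED_OF_LIGHT = 3e8; // metres per second

const lightFreqHz = A4_HZ * 2 ** 40;                       // ~4.84e14 Hz, i.e. ~484 THz
const wavelengthNm = (SPEED_OF_LIGHT / lightFreqHz) * 1e9; // ~620 nm, an orange-red

console.log(`${(lightFreqHz / 1e12).toFixed(0)} THz, ${wavelengthNm.toFixed(0)} nm`);
```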

Mapping System

Hue = Note

The frequency of light (perceived as color) corresponds to musical notes:

  • Higher frequencies (blue/violet) = Higher pitched notes
  • Lower frequencies (red/orange) = Lower pitched notes

Lightness = Octave

The lightness component of color (HSL model) determines which octave the note plays in:

  • L = 0: Lowest audible octave
  • L = 1: Highest audible octave

Saturation = Volume

The saturation component of color (HSL model) determines the volume of the sound:

  • S = 0: Silent (grayscale)
  • S = 1: Maximum volume (fully saturated color)
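
Putting the three rules above together, a minimal sketch of the HSL-to-sound mapping could look as follows. The note table, the 1-7 octave band, and all names are illustrative assumptions rather than the project's actual implementation.

```ts
// Illustrative HSL -> sound-parameter mapping (assumed structure, not the project's actual code).
const NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

interface SoundParams {
  note: string;   // pitch class from hue
  octave: number; // register from lightness
  gain: number;   // volume from saturation (0-1)
  freqHz: number; // resulting frequency
}

/** h in [0, 360), s and l in [0, 1]; the 1-7 octave band is an arbitrary "audible" range. */
function hslToSound(h: number, s: number, l: number): SoundParams {
  const noteIndex = Math.floor((h / 360) * 12) % 12; // Hue = Note
  const octave = 1 + Math.round(l * 6);              // Lightness = Octave (1-7)
  const gain = s;                                    // Saturation = Volume
  // Equal-temperament frequency relative to A4 = 440 Hz (MIDI-style numbering, A4 = 69).
  const midi = 12 * (octave + 1) + noteIndex;
  const freqHz = 440 * 2 ** ((midi - 69) / 12);
  return { note: NOTES[noteIndex], octave, gain, freqHz };
}

console.log(hslToSound(0, 1, 0.5));     // saturated red -> a C in the middle of the range (~262 Hz)
console.log(hslToSound(240, 0.5, 0.8)); // lighter blue  -> higher octave, half volume (~1661 Hz)
```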

Implementation

Color to Sound Conversion

When users interact with images or the webcam feed (see the sketch after these steps):

  1. The system captures the HSL values of the color under the pointer
  2. Hue is translated to a corresponding musical note
  3. Lightness determines the octave of the note
  4. Saturation sets the volume level
  5. The resulting sound is played in real time as the user moves the pointer
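
A hedged sketch of how these steps could be wired up in a browser with the standard Canvas and Web Audio APIs; the structure and helper names are assumptions, not the project's actual code (browsers also typically require a user gesture before audio can start).

```ts
// Rough sketch of the color-to-sound interaction (assumed structure, not the project's code).
const canvas = document.querySelector('canvas')!;
const ctx2d = canvas.getContext('2d', { willReadFrequently: true })!;

const audio = new AudioContext();
const osc = audio.createOscillator();
const gainNode = audio.createGain();
osc.connect(gainNode).connect(audio.destination);
gainNode.gain.value = 0;
osc.start();

canvas.addEventListener('pointermove', (e) => {
  // 1. Capture the HSL values of the color under the pointer.
  const [r, g, b] = ctx2d.getImageData(e.offsetX, e.offsetY, 1, 1).data;
  const { h, s, l } = rgbToHsl(r, g, b);

  // 2-4. Hue -> note, lightness -> octave, saturation -> volume.
  const freqHz = hslToFrequency(h, l);
  const gain = s;

  // 5. Update the running oscillator in real time (short ramps avoid clicks).
  const now = audio.currentTime;
  osc.frequency.setTargetAtTime(freqHz, now, 0.02);
  gainNode.gain.setTargetAtTime(gain, now, 0.02);
});

/** Hue -> pitch class, lightness -> octave (1-7), combined into a frequency. Illustrative mapping. */
function hslToFrequency(h: number, l: number): number {
  const noteIndex = Math.floor((h / 360) * 12) % 12;
  const octave = 1 + Math.round(l * 6);
  const midi = 12 * (octave + 1) + noteIndex; // MIDI-style numbering, A4 = 69
  return 440 * 2 ** ((midi - 69) / 12);
}

/** Standard RGB (0-255) -> HSL (h in degrees, s and l in 0-1) conversion. */
function rgbToHsl(r: number, g: number, b: number): { h: number; s: number; l: number } {
  const rn = r / 255, gn = g / 255, bn = b / 255;
  const max = Math.max(rn, gn, bn), min = Math.min(rn, gn, bn);
  const l = (max + min) / 2;
  const d = max - min;
  if (d === 0) return { h: 0, s: 0, l }; // grayscale: no hue, no saturation
  const s = d / (1 - Math.abs(2 * l - 1));
  let h: number;
  if (max === rn) h = ((gn - bn) / d) % 6;
  else if (max === gn) h = (bn - rn) / d + 2;
  else h = (rn - gn) / d + 4;
  return { h: (h * 60 + 360) % 360, s, l };
}
```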

Sound to Color Conversion

When processing audio tracks or video (see the sketch after these steps):

  1. Fast Fourier Transform (FFT) is applied to decompose the complex sound wave
  2. The dominant frequency is detected and mapped to a corresponding hue
  3. The relative octave position is translated to lightness
  4. Volume information is converted to saturation
  5. The resulting color is displayed as a visual shape
  6. This process repeats at a configurable frame rate (default: 24 times per second)
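
A sketch of how this analysis loop might look with the Web Audio AnalyserNode; the mapping constants and the 24 fps rate mirror the description above, but the structure itself is an assumption, not the project's actual code.

```ts
// Illustrative sound-to-color analysis loop using the Web Audio AnalyserNode (assumed structure).
const ac = new AudioContext();
const analyser = ac.createAnalyser();
analyser.fftSize = 2048;
// ...connect a source here, e.g. ac.createMediaElementSource(audioElement).connect(analyser);

const bins = new Uint8Array(analyser.frequencyBinCount);
const FRAME_RATE = 24; // configurable analysis rate from the description

function analyse(): { h: number; s: number; l: number } {
  // 1. FFT: the AnalyserNode exposes a magnitude per frequency bin.
  analyser.getByteFrequencyData(bins);

  // 2. Dominant frequency = bin with the largest magnitude.
  let peak = 0;
  for (let i = 1; i < bins.length; i++) if (bins[i] > bins[peak]) peak = i;
  const freqHz = (peak * ac.sampleRate) / analyser.fftSize;

  // 2-4. Within-octave position -> hue, octave position in the audible range -> lightness,
  //      peak level -> saturation.
  const octaves = Math.log2(Math.max(freqHz, 20) / 20); // 0 at 20 Hz, ~10 at 20 kHz
  const h = (((octaves % 1) + 1) % 1) * 360;
  const l = Math.min(octaves / 10, 1);
  const s = bins[peak] / 255;
  return { h, s, l };
}

// 5-6. Repeat at the configured frame rate and paint the result (here simply as a background color).
setInterval(() => {
  const { h, s, l } = analyse();
  document.body.style.backgroundColor = `hsl(${h}, ${s * 100}%, ${l * 100}%)`;
}, 1000 / FRAME_RATE);
```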

Practical Application

This bidirectional mapping creates a synesthetic experience where:

  • Colors become interactive soundscapes
  • Sounds transform into dynamic visual experiences