Colour of Sound
A synesthetic exploration of color theory and acoustic frequency relationships.

Concept
Synesthetic mapping.
Colour of Sound explores the phenomenon of synesthesia, particularly chromesthesia, a variant in which sounds automatically evoke experiences of color. While true synesthesia is a neurological condition, this project builds an algorithmic approximation of the experience that lets anyone explore potential mappings between sound and color.
Drawing inspiration from both scientific research on cross-modal perception and the experiences of synesthetes, the project maps spectral qualities of sound (timbre, frequency, amplitude) to visual qualities (hue, saturation, brightness) using several different mapping strategies.
Note: The project is currently being refactored. Launch date: 2025.04.01
Specs.
Technical Details
Color analysis algorithms identify pixel clusters and patterns, which are then mapped to specific audio parameters including pitch, timbre, rhythm, and spatial positioning.
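As a loose illustration of this analysis stage, the sketch below stands in for full cluster detection with a simple 12-bucket hue histogram whose most populated buckets become candidate "clusters" to sonify. The function and type names (dominantHues, HueCluster) are illustrative, not part of the project's codebase, and the mapping of clusters to pitch, timbre, rhythm and spatial position is not shown.

```ts
// Simplified stand-in for the colour-analysis stage: bin pixel hues into a
// 12-bucket histogram and treat the most populated buckets as candidate
// "clusters". Real cluster detection would be more involved.

interface HueCluster {
  hue: number;    // bucket centre, degrees
  weight: number; // share of pixels in the bucket, 0..1
}

function dominantHues(pixelHues: number[], topN = 3): HueCluster[] {
  const bins = new Array(12).fill(0);
  for (const h of pixelHues) {
    bins[Math.floor((h % 360) / 30)] += 1; // 30°-wide hue buckets
  }
  return bins
    .map((count, i) => ({ hue: i * 30 + 15, weight: count / pixelHues.length }))
    .sort((a, b) => b.weight - a.weight)
    .slice(0, topN);
}
```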
The project employs multiple mapping strategies:
- Frequency-to-hue mapping based on a Newtonian color wheel (sketched below)
- Amplitude-to-luminance correlation
- Spectral centroid to color saturation
- Harmonic content to color complexity
- Customizable mappings based on user preferences
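As a minimal sketch of the first strategy, the snippet below spreads the twelve pitch classes evenly around the hue circle so that octave-equivalent pitches share a hue. Newton's original analogy used seven colours and seven notes; the twelve-step wheel, the A4 = 440 Hz reference, and the function name frequencyToHue are assumptions made for illustration.

```ts
// Frequency-to-hue via pitch class on a 12-step colour wheel: octave-
// equivalent pitches share a hue.

const A4 = 440; // Hz, reference pitch

function frequencyToHue(freqHz: number): number {
  // Semitones above A4, allowed to be fractional for detuned input.
  const semitones = 12 * Math.log2(freqHz / A4);
  // Fold into a single octave (pitch class 0..12).
  const pitchClass = ((semitones % 12) + 12) % 12;
  // Spread the 12 pitch classes evenly around the 360° hue circle.
  return (pitchClass / 12) * 360;
}

console.log(frequencyToHue(440));    // 0    -> A maps to hue 0°
console.log(frequencyToHue(880));    // 0    -> the octave above shares the hue
console.log(frequencyToHue(523.25)); // ≈ 90 -> C lands a quarter-turn away
```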
Theory
Colour and Sound: Related Frequency Spectra
Frequency Mapping
Imagine you could hear the color red: an A, transposed some forty octaves above the range of human hearing; and blue, a G in that same unreachable register. A symphony of light hidden just beyond our perception. What if colors weren't just visual experiences, but could be translated into a language of sound?
Sound and light both exist as waves, each occupying its own frequency range and spectrum. This project creates a bidirectional mapping between these two sensory domains:
Sound Spectrum - Mechanical
- Audible Range: 20 Hz - 20,000 Hz
- Musical Notes: Organized in octaves (A0, A1, A2...)
- Middle A (A4): 440 Hz
Light Spectrum - Electromagnetic
- Visible Range: 400-790 THz (terahertz)
- Wavelength: 380-700 nanometers
- Color Progression: Violet → Blue → Green → Yellow → Orange → Red (Higher frequency → Lower frequency)
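One concrete way to relate the two ranges above: raising an audible frequency by whole octaves (repeated doubling) eventually lands it inside the visible band. The sketch below illustrates that arithmetic; it is not necessarily the project's own mapping.

```ts
// Shift an audible frequency up by whole octaves until it reaches the visible
// band. Because the value just before the last doubling is below 400 THz, the
// result always lands between 400 and 800 THz, i.e. within (roughly) the one
// visible octave bounded above by ~790 THz.

const VISIBLE_MIN = 400e12; // Hz (400 THz)

function octavesUpToLight(audioHz: number): { lightHz: number; octaves: number } {
  let f = audioHz;
  let octaves = 0;
  while (f < VISIBLE_MIN) {
    f *= 2; // raise by one octave
    octaves += 1;
  }
  return { lightHz: f, octaves };
}

const { lightHz, octaves } = octavesUpToLight(440); // A4
console.log(octaves);                     // 40 octaves above A4
console.log((lightHz / 1e12).toFixed(0)); // ≈ 484 THz, in the orange-red region
```

Doubling A4 forty times gives roughly 484 THz, which is where the "some forty octaves" figure in the introduction above comes from.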
Mapping System
Hue = Note
The frequency of light (perceived as color) corresponds to musical notes:
- Higher frequencies (blue/violet) = Higher pitched notes
- Lower frequencies (red/orange) = Lower pitched notes
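A minimal sketch of this rule, assuming twelve equal hue bands per octave and a chromatic scale starting at C with red (hue 0°) on the lowest note; the note layout and the function name hueToNote are assumptions, and the project may use a different arrangement.

```ts
// Hue (0-360°) to a note within a single octave: red maps to the lowest note,
// blue and violet hues to higher ones.

const NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

function hueToNote(hue: number): string {
  const index = Math.floor(((hue % 360) / 360) * 12) % 12; // 12 equal hue bands
  return NOTES[index];
}

console.log(hueToNote(10));  // "C"  (red -> lowest note of the octave)
console.log(hueToNote(240)); // "G#" (blue -> a higher note)
```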
Lightness = Octave
The lightness component of color (HSL model) determines which octave the note plays in:
- L = 0: Lowest audible range
- L = 1: Highest audible range
Saturation = Volume
The saturation component of color (HSL model) determines the volume of the sound:
- S = 0: Silent (grayscale)
- S = 1: Maximum volume (fully saturated color)
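Putting the three rules together, here is a sketch of the full HSL-to-sound-parameter conversion. The octave range (C0 at about 16 Hz up to C8 at about 4.2 kHz), the linear saturation-to-gain curve, and the helper name hslToSound are assumptions made for illustration.

```ts
// Hue picks the semitone, lightness picks the octave, saturation sets the gain.

interface HSL { h: number; s: number; l: number } // h: 0-360°, s and l: 0-1

function hslToSound(color: HSL): { frequency: number; gain: number } {
  const semitone = Math.floor(((color.h % 360) / 360) * 12); // 0..11, C..B as above
  const octave = Math.round(color.l * 8);                    // assumed range: C0 (~16 Hz) .. C8 (~4.2 kHz)
  const noteNumber = octave * 12 + semitone;
  const frequency = 16.35 * Math.pow(2, noteNumber / 12);    // 16.35 Hz ≈ C0
  const gain = color.s;                                      // S = 0 silent, S = 1 full volume
  return { frequency, gain };
}

console.log(hslToSound({ h: 0, s: 1, l: 0.5 }));
// { frequency: ≈261.6, gain: 1 } -> a saturated, mid-lightness red plays middle C at full volume
```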
Implementation
Color to Sound Conversion
When users interact with images or webcam feed:
- The system captures the HSL values of the color under the pointer
- Hue is translated to a corresponding musical note
- Lightness determines the octave of the note
- Saturation sets the volume level
- The resulting sound is played in real-time as the user moves the pointer
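A sketch of this interaction loop using the Web Audio API, assuming the image or webcam frame is already drawn onto a canvas. hslToSound is the helper sketched above; rgbToHsl (a standard RGB-to-HSL conversion, omitted here), the element id "scene", and the sine waveform are assumptions.

```ts
// A single oscillator is retuned in real time as the pointer moves over the
// canvas that holds the image or webcam frame.

declare function rgbToHsl(r: number, g: number, b: number): { h: number; s: number; l: number }; // assumed helper, not shown

const canvas = document.getElementById("scene") as HTMLCanvasElement; // assumed element id
const ctx2d = canvas.getContext("2d")!;

const audioCtx = new AudioContext(); // note: browsers require a user gesture before audio starts (audioCtx.resume())
const osc = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();
osc.type = "sine";
osc.connect(gainNode).connect(audioCtx.destination);
gainNode.gain.value = 0; // silent until the pointer moves
osc.start();

canvas.addEventListener("pointermove", (e) => {
  // 1. Sample the pixel under the pointer.
  const [r, g, b] = ctx2d.getImageData(e.offsetX, e.offsetY, 1, 1).data;
  // 2. Convert to HSL, then to sound parameters (hue -> note, L -> octave, S -> volume).
  const { frequency, gain } = hslToSound(rgbToHsl(r, g, b));
  // 3. Glide smoothly to the new pitch and volume in real time.
  const now = audioCtx.currentTime;
  osc.frequency.setTargetAtTime(frequency, now, 0.02);
  gainNode.gain.setTargetAtTime(gain, now, 0.02);
});
```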
Sound to Color Conversion
When processing audio tracks or video:
- Fast Fourier Transform (FFT) is applied to decompose the complex sound wave
- The dominant frequency is detected and mapped to a corresponding hue
- The relative octave position is translated to lightness
- Volume information is converted to saturation
- The resulting color is displayed as a visual shape
- This process repeats at a configurable frame rate (default: 24 times per second)
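A sketch of the analysis path using the Web Audio AnalyserNode as the FFT stage. frequencyToHue is the pitch-class mapping sketched earlier; the audio source wiring (for example a MediaElementAudioSourceNode connected to the analyser) is omitted, and reducing the spectrum to a single dominant bin with a fixed lightness is a simplification of the steps listed above.

```ts
const analysisCtx = new AudioContext();
const analyser = analysisCtx.createAnalyser();
analyser.fftSize = 2048;
const spectrum = new Uint8Array(analyser.frequencyBinCount);

function dominantColor(): string {
  analyser.getByteFrequencyData(spectrum);

  // Find the loudest FFT bin and its level.
  let peakBin = 0;
  for (let i = 1; i < spectrum.length; i++) {
    if (spectrum[i] > spectrum[peakBin]) peakBin = i;
  }
  const peakLevel = spectrum[peakBin] / 255; // 0..1

  // Convert the bin index back to Hz: bin * sampleRate / fftSize.
  const dominantHz = (peakBin * analysisCtx.sampleRate) / analyser.fftSize;

  const hue = frequencyToHue(dominantHz); // dominant frequency -> hue
  const saturation = peakLevel * 100;     // volume -> saturation
  const lightness = 50;                   // octave -> lightness, fixed here for simplicity
  return `hsl(${hue}, ${saturation}%, ${lightness}%)`;
}

// Repeat at the configurable frame rate (default 24 fps); the colour is painted
// onto the page background here, whereas the project renders a visual shape.
setInterval(() => {
  document.body.style.backgroundColor = dominantColor();
}, 1000 / 24);
```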
Practical Application
This bidirectional mapping creates a synesthetic experience where:
- Colors become interactive soundscapes
- Sounds transform into dynamic visual experiences