Data sonification is a practice and genre in which numerical or categorical datasets are mapped to sound so that patterns in the data are perceived as musical structure.
It differs from simple parameter automation by letting the data itself determine pitch, rhythm, dynamics, timbre, and spatial behavior through clearly defined mappings.
Works range from direct audification of scientific signals (e.g., seismic or astronomical measurements) to carefully designed parameter mappings and model-based approaches that translate complex, multivariate datasets into layered sonic textures.
Because it sits at the intersection of art, science, and design, data sonification is used both for aesthetic expression and for communicating or revealing structure in data that might be hard to see visually.
Early 20th‑century scientific instruments already produced sound from measurement (e.g., the Geiger counter’s clicks), and mid‑century electroacoustic research explored audification of signals such as seismic or biomedical data. In the arts, musique concrète, computer music, and experimental electronics provided the technical and conceptual groundwork for mapping non‑musical structures to sound.
The 1990s saw sonification formalized within the auditory display research community. The International Conference on Auditory Display (ICAD) launched in 1992 in the United States, and landmark publications in the mid‑1990s defined sonification as a systematic translation of data to sound. This period established key methods—direct audification, parameter mapping, and model‑based sonification—and emphasized perceptual considerations, transparency, and reproducibility.
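Of the methods named above, parameter mapping is the most common in artistic practice. A minimal sketch of the idea, with a hypothetical data series and an assumed mapping of values onto MIDI pitch (all names and ranges here are illustrative, not a standard API):

```python
def map_to_midi_pitch(values, low_note=48, high_note=84):
    """Linearly rescale a numeric series onto a MIDI pitch range,
    so rising data values are heard as rising pitch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for a constant series
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

# Hypothetical daily temperature readings mapped to pitches
temperatures = [12.1, 13.4, 15.0, 14.2, 17.8, 21.5]
print(map_to_midi_pitch(temperatures))
```

The same pattern generalizes: any data column can be rescaled into any perceptual range (pitch, loudness in dB, pan position), which is exactly what distinguishes a defined mapping from ad hoc automation.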
Artists and composer‑researchers began presenting data‑driven works in galleries, concert halls, and public media. Projects translated climate and weather records, network traffic, financial markets, social media streams, and astronomical observations into music. Parallel developments in laptop performance, SuperCollider, Max/MSP, Pure Data, and creative coding (Processing, openFrameworks, Python) made bespoke sonification pipelines feasible for solo artists.
High‑profile scientific outreach—such as astrophysical sonifications shared by space agencies—and pandemic‑era projects that mapped epidemiological datasets to sound brought sonification to wider audiences. Contemporary practice balances communicative clarity (helping listeners hear trends, outliers, and periodicities) with artistic craft, often pairing sound with synchronized visualization and clear mapping documentation.
Select a dataset with meaningful structure (time series, multivariate tables, categorical labels, networks). Clean it, handle missing values, and normalize ranges (z‑score, min–max, or logarithmic scaling). Decide whether the goal is aesthetic exploration, communication, or both, as this will guide mapping choices.
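The normalization step above can be sketched in plain Python. This is a hedged illustration of the three scalings mentioned (min–max, z-score, logarithmic), using a hypothetical wide-range series; in practice a library such as NumPy or pandas would handle this more robustly:

```python
import math

def min_max(values):
    """Rescale values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against constant series
    return [(v - lo) / span for v in values]

def z_score(values):
    """Center on the mean and scale by the (population) standard deviation."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var) or 1.0
    return [(v - mean) / std for v in values]

def log_scale(values):
    """Logarithmic scaling for data spanning orders of magnitude.
    Assumes strictly positive values."""
    return [math.log10(v) for v in values]

# Hypothetical series spanning several orders of magnitude
series = [10, 100, 1000, 10000]
print(min_max(series))
print(log_scale(series))
```

Log scaling is often the right choice before mapping to pitch, since pitch perception is itself roughly logarithmic in frequency.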