Ambient Sounds: Metropolitan Museum of Art
Visualizing sound levels across twelve spaces. A first-pass proof of concept for a platform/tool in which museums could record ambient sound in their galleries and upload photos of the spaces to generate audio visualizations.
Outcomes
While the differences between galleries are subtle and this collection might lack whiz-bang factor, it’s been an interesting challenge to give meaning to something that is inherently nuanced. I ended up relying on a number of elements (color, scale, motion, labels, etc.) to communicate differences. It would be interesting to run a similar comparison against another museum; perhaps there’s more meaning in repetition.
I’ve been wrestling with the impulse to make the visualizations more dramatic but decided to accept their subdued nature for now.
A personal reminder to measure twice, cut once. Because the pipeline had so many steps, there was a fair amount of friction whenever a creative decision required reworking components I’d already produced. For example, I remade the p5.js sketches three separate times: the initial pass, a pass with added color, and a final pass with graduated color.
Next steps
Give the visualizations more meaning: add a baseline for comparison and a dB meter.
Integrate visual playback with audio.
As a tech demo, the visualization works, but it needs to be more interesting and communicative.
Create an automated, more resilient pipeline.
Tools
Zoom H1n Handy Recorder
Used to record 1 minute of sound in each room. 30 seconds of each recording was used to make the visualizations.
Insta360 ONE X
Used to shoot the panoramic photo for each space.
p5.js
Used to generate a Fast Fourier Transform visualization of the amplitude of each of the 12 audio recordings.
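In the browser, this kind of analysis is typically done with p5.sound’s `p5.FFT`, which splits the signal into frequency bins whose magnitudes drive the visuals. As a minimal, runnable sketch of the idea (plain JavaScript rather than the actual project sketches, with an illustrative test signal), here is the magnitude of a single discrete Fourier transform bin:

```javascript
// Sketch of the frequency analysis behind an FFT visualization.
// The project used p5.sound's p5.FFT in the browser; this computes one
// DFT bin directly so the idea runs anywhere. Names and the test signal
// are illustrative, not taken from the project.

// Magnitude of one DFT frequency bin for a block of samples.
function dftBinMagnitude(samples, bin) {
  const N = samples.length;
  let re = 0, im = 0;
  for (let n = 0; n < N; n++) {
    const angle = (2 * Math.PI * bin * n) / N;
    re += samples[n] * Math.cos(angle);
    im -= samples[n] * Math.sin(angle);
  }
  return Math.sqrt(re * re + im * im) / N;
}

// A pure 4-cycle sine over 64 samples puts all its energy in bin 4.
const N = 64;
const sine = Array.from({ length: N }, (_, n) => Math.sin((2 * Math.PI * 4 * n) / N));
console.log(dftBinMagnitude(sine, 4).toFixed(2));  // ~0.50 (all energy here)
console.log(dftBinMagnitude(sine, 10).toFixed(2)); // ~0.00 (none here)
```

A visualization sketch would compute this across all bins each frame (which is what `p5.FFT.analyze()` returns) and map each magnitude to a bar height or color.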
Camtasia
Used to screen-record the running sketches as video.
Giphy
Used to generate low-bandwidth, looping animations of the audio visualizations.
Color Schemes
Mike Bostock’s D3 color scheme tool.
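The graduated color mentioned under Outcomes comes from sampling a continuous scale like the ones D3’s color scheme tools produce. Underneath, that is just interpolation between endpoint colors; a minimal plain-JavaScript sketch (endpoint colors and names are illustrative, not the project’s actual palette):

```javascript
// Sketch of a graduated color ramp, the kind of scale D3's color
// scheme tools generate. Endpoint colors are illustrative.
function lerpColor(c0, c1, t) {
  // c0, c1: [r, g, b] arrays; t in [0, 1]
  return c0.map((v, i) => Math.round(v + (c1[i] - v) * t));
}

const quiet = [68, 1, 84];    // deep purple for low amplitude
const loud = [253, 231, 37];  // bright yellow for high amplitude

// Five evenly spaced steps from quiet to loud.
const ramp = [0, 0.25, 0.5, 0.75, 1].map((t) => lerpColor(quiet, loud, t));
console.log(ramp[0]); // [68, 1, 84]
console.log(ramp[4]); // [253, 231, 37]
```

In a sketch, the amplitude of the current frame (normalized to 0–1) would pick a position along the ramp, which is what makes the subtle loudness differences between galleries readable as color.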
Photoshop
Used for map customization.