A collaboration with Itay Niv
The concept -
Create an application that generates a t-SNE map of selected audio data, and have an “outsider” trigger the samples in the browser.
The process -
Using Gene Kogan’s ML4A guides and t-SNE audio Python script, we analyzed and segmented large quantities of audio data. After reducing the feature vectors with principal component analysis, we ended up with a 2D map of sound similarity, represented as a JSON file.
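To make the last step concrete, here is a minimal sketch of the projection stage, assuming each audio segment already has a feature vector (feature extraction is omitted, and the segment filenames and JSON field names are hypothetical, not the project's actual format):

```python
import json
import numpy as np

def project_to_2d(features):
    """Project an n x d feature matrix onto its first 2 principal components."""
    X = features - features.mean(axis=0)           # center the data
    # SVD of the centered matrix yields the principal axes in vt
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    coords = X @ vt[:2].T                          # n x 2 projection
    # normalize to [0, 1] so the browser can map points to canvas space
    span = coords.max(axis=0) - coords.min(axis=0)
    return (coords - coords.min(axis=0)) / span

# hypothetical: one 13-dimensional feature vector per audio segment
features = np.random.rand(100, 13)
coords = project_to_2d(features)
points = [{"path": f"seg_{i}.wav", "x": float(x), "y": float(y)}
          for i, (x, y) in enumerate(coords)]
payload = json.dumps(points)  # written to the JSON file the browser loads
```

The normalization step matters in practice: the raw projection can land in an arbitrary coordinate range, while the browser expects something it can scale to the canvas.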
We then experimented with the performative aspect of the output. Initially we played back samples by using video to trigger them through pixel analysis, but the video triggering seemed less interesting, and we were looking for a more organic generation.
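The pixel-analysis idea can be sketched as a rising-edge brightness check at each sample's map position; this is an illustrative reconstruction, not the project's actual code, and the threshold value is an assumption:

```python
import numpy as np

THRESHOLD = 200  # brightness level that counts as a trigger (assumed value)

def triggered_samples(frame, points, prev_frame):
    """Return indices of map points whose pixel just became bright."""
    hits = []
    for i, (x, y) in enumerate(points):
        now = frame[y, x]
        before = prev_frame[y, x]
        if now >= THRESHOLD and before < THRESHOLD:  # rising edge only
            hits.append(i)
    return hits

# hypothetical grayscale video frames
prev = np.zeros((240, 320), dtype=np.uint8)
frame = prev.copy()
frame[120, 160] = 255           # a bright pixel appears at one map point
points = [(160, 120), (10, 10)]
hits = triggered_samples(frame, points, prev)  # -> [0]
```

Checking the rising edge (bright now, dark in the previous frame) keeps a sample from retriggering on every frame while the pixel stays lit.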
So we took the p5.js Flocking example, fit it into our application, and had the flocking objects trigger the samples.
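The browser version is p5.js, but the trigger check itself is simple enough to sketch language-agnostically: a boid fires a sample when it comes within some radius of that sample's point on the 2D map. The radius value and the names below are assumptions for illustration:

```python
import math

TRIGGER_RADIUS = 0.03  # in normalized map units; tuning value is an assumption

def boid_triggers(boids, points, radius=TRIGGER_RADIUS):
    """Return the set of sample indices any boid is currently touching."""
    hits = set()
    for bx, by in boids:
        for i, (px, py) in enumerate(points):
            if math.hypot(bx - px, by - py) <= radius:
                hits.add(i)
    return hits

points = [(0.2, 0.2), (0.8, 0.5)]   # 2D sample map from the analysis step
boids = [(0.21, 0.19), (0.5, 0.5)]  # current flock positions
hits = boid_triggers(boids, points)  # -> {0}
```

Because the flock drifts continuously across the map, nearby (similar-sounding) samples tend to fire in sequence, which is what gives the playback its organic feel.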
The biggest challenge, as always, is finding the right data (audio): it has to produce an interesting generation and also work well with the t-SNE app, meaning the analysis yields organic segments that sound good.