tSNE-VocalSampler

In this project, I aimed to connect the results of my artistic research on vocal melodies and on audio-data mining. It was part of my bachelor studies in Music and Media at Robert-Schumann-Hochschule Düsseldorf.

Samples within a library were analyzed and mapped by the t-SNE algorithm using a Python script provided by Machine Learning for Artists.
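
The following is only a minimal sketch of that analysis step, not the actual ml4a script: it assumes librosa and scikit-learn, summarizes each sample with MFCC statistics, projects all samples onto a 2D plane with t-SNE, and writes the coordinates to a (hypothetical) `tsne_coords.tsv` file.

```python
# Hedged sketch of the analysis stage (assumption: not the exact ml4a script).
import glob
import numpy as np
import librosa
from sklearn.manifold import TSNE

def feature_vector(path):
    # Load the sample and compute MFCCs as a rough timbre descriptor.
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Mean and standard deviation over time give a fixed-length vector per sample.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths = sorted(glob.glob("samples/*.wav"))   # hypothetical sample folder
features = np.array([feature_vector(p) for p in paths])

# t-SNE projects the feature vectors onto 2D; perplexity must stay below the sample count.
coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)

# Save "path <tab> x <tab> y" lines so a GUI can read the layout.
with open("tsne_coords.tsv", "w") as f:
    for p, (x, y) in zip(paths, coords):
        f.write(f"{p}\t{x:.4f}\t{y:.4f}\n")
```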

The result is a 2D scatter plot in which each point represents a vocal sample. These samples can then be sequenced and resynthesized live through a GUI programmed in SuperCollider.
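
The actual interface is written in SuperCollider; the Python snippet below only illustrates the lookup the GUI performs, under the assumption that the layout was saved as the `tsne_coords.tsv` file from the sketch above: given a cursor position in the plot, find the nearest point and return that sample's file path.

```python
# Illustrative nearest-sample lookup (the real GUI lives in SuperCollider).
import numpy as np

def load_layout(path="tsne_coords.tsv"):
    # Read the "path <tab> x <tab> y" lines produced by the analysis sketch.
    paths, points = [], []
    with open(path) as f:
        for line in f:
            p, x, y = line.rstrip("\n").split("\t")
            paths.append(p)
            points.append((float(x), float(y)))
    return paths, np.array(points)

def nearest_sample(cursor_xy, paths, points):
    # Euclidean distance from the cursor to every point; the closest one wins.
    d = np.linalg.norm(points - np.asarray(cursor_xy), axis=1)
    return paths[int(np.argmin(d))]

paths, points = load_layout()
print(nearest_sample((0.2, -1.3), paths, points))  # this sample would be played back
```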

For further information, please refer to this project's GitHub repository.

Todo

This was a fun little exercise, but there is still a lot to explore here!
In future versions of this application, I want to implement a 3D scatter plot and various serial interfacing options, such as:
MIDI out, MIDI clock, Ableton's LinkClock, tty, and CV.

Instead of an audio library, I want to apply the above-mentioned script to a single audio recording of everyday human interaction.
I will also try to provide a more generally applicable version of this project and explore some of the different GUI layouts that SuperCollider provides.


Related projects: