A Drum Brain, But Not What You Think It Is
A nerdy coding friend asked if I had any ideas we could use to explore generative adversarial networks; in other words, machine learning: building a ‘brain’ of data by using a program to deconstruct source material and ‘understand’ it (in ways almost impossible for us mortals to follow). People have done lots of visual stuff with machine learning, but there isn’t much in the audio realm, so that’s the plan: a special audio brain filled with sound.
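For anyone curious what "adversarial" means in practice, here's a minimal sketch of the idea, not our actual drum-brain code. It assumes PyTorch and short fixed-length audio clips, and every name, size and layer choice below is made up for illustration: one network invents fake drum hits from random numbers, another tries to tell them apart from real ones, and they push each other along.

```python
# Minimal GAN sketch for 1-D audio. Illustrative only; all names/sizes are assumptions.
import torch
import torch.nn as nn

CLIP_LEN = 16384   # roughly one second of a drum hit at 16 kHz (assumption)
LATENT_DIM = 64    # size of the random "seed" vector the generator reads

# Generator: turns a random latent vector into a fake audio clip.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, CLIP_LEN), nn.Tanh(),    # samples squashed to [-1, 1]
)

# Discriminator: guesses whether a clip is a real drum sample or a fake.
discriminator = nn.Sequential(
    nn.Linear(CLIP_LEN, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),                      # raw score; the loss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_clips):
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool it."""
    batch = real_clips.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator on real vs. generated clips.
    z = torch.randn(batch, LATENT_DIM)
    fake_clips = generator(z).detach()
    d_loss = loss_fn(discriminator(real_clips), real_labels) + \
             loss_fn(discriminator(fake_clips), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to make the discriminator say "real".
    z = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in batch of "real" drum clips, just to show the call shape.
demo_batch = torch.randn(8, CLIP_LEN).clamp(-1, 1)
print(train_step(demo_batch))
```

Run a loop like that over a big library of real drum samples for long enough and the generator ends up as the "brain": a network that can spit out new drum-like sounds on demand.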
A lot of the technology in this machine-learning domain is a nightmare to use. It ideally requires huge graphics cards, with lots of power and memory, to run the maths that deconstructs the material, plus at least one super-nerd to work out what went wrong, because it always goes wrong at some point.
I want a way for users to explore an audio brain easily. That’s part of the vision, and there could be much more. But, you know... graphics card prices and shortages, computing power in general...
We’ll get there. In the meantime, here’s the current GUI:
We’ve designed a way to explore these audio brains (which have been fed a tonne of drum sounds) algorithmically, but the interface, as it stands, is really not very user-friendly! This is a big project that might take a few years before we’ve got something a bit more fun than esoteric grids of colours and a few sliders, but we know we’ve got something completely original here (before someone steals it and declares it their own!), and there may be a few interesting things, and IP, that we develop along the way.
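To give a rough idea of what "exploring the brain" with sliders could look like under the hood, here's a small sketch, again assuming PyTorch and a trained generator like the one above; the function names, slider mapping and ranges are all hypothetical, not our real interface code.

```python
# Hypothetical sketch: sliders nudge coordinates in the latent space,
# and the trained generator turns each point into a new drum sample.
import torch
import torch.nn as nn

LATENT_DIM = 64
CLIP_LEN = 16384

# Stand-in for a trained generator; in practice this would be the network
# trained on the drum library, not these random untrained weights.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                          nn.Linear(512, CLIP_LEN), nn.Tanh())

def sliders_to_clip(slider_values):
    """Map a handful of 0..1 slider positions onto the first few latent
    dimensions (the rest stay at zero) and render one audio clip."""
    z = torch.zeros(1, LATENT_DIM)
    for i, v in enumerate(slider_values[:LATENT_DIM]):
        z[0, i] = (v - 0.5) * 4.0           # rescale 0..1 to roughly -2..2
    with torch.no_grad():
        return generator(z).squeeze(0)      # 1-D tensor of audio samples

def interpolate(z_a, z_b, steps=8):
    """Walk in a straight line between two points in the latent space,
    generating a clip at each step - a simple way to morph one drum into another."""
    clips = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_a + t * z_b
        with torch.no_grad():
            clips.append(generator(z).squeeze(0))
    return clips

# Example: one clip from three slider positions, plus an eight-step morph.
clip = sliders_to_clip([0.2, 0.9, 0.5])
morph = interpolate(torch.randn(1, LATENT_DIM), torch.randn(1, LATENT_DIM))
```

The grids of colours and sliders in the current GUI are essentially a front end onto that kind of latent-space wandering; the fun part is making it feel musical rather than mathematical.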
In the meantime, here are a couple of test pieces using only the samples generated from the drum brain. No effects apart from EQ and compression.