I Was A Soundpaint Beta Tester

There was an awful lot of strange hype a few months ago when some videos appeared under the brand name Soundpaint. The videos were super-high-quality digital renderings of various instruments being bent and misshapen, like a real-world kaleidoscope.

As an audio synthesis guy, I watched these and thought “the synthesis engine underneath this thing must be like something from the future”, and it piqued my interest, hard. So I signed up to beta-test. I would need some faster disks. I’m a bit old-school with my disks; synthesis tickles me much more than samples, so I’ve never had a big sample library and consequently never needed super-fast disks. But I had around 350GB of samples to play with, so I bought a cheap 500GB SSD.

I guess the samples themselves are pretty good; after all, the company behind Soundpaint, 8dio, focus on sampling all kinds of things, which should make the plugin’s output “insta-high-quality”, and I guess it kind of is.

The business model for the plugin is interesting: the plugin itself is free, and you get one free pack to get going with, a sampled piano from 1928, which is oooo-kkkaayy. I have questioned the velocity mapping, though, which seems heavily weighted towards quieter notes, and some misconfiguration in the higher-velocity sample allocations. I guess they’ll get around to fixing this soon.

One of the biggest declarations from the company is that they’ve developed a technique which can provide expressive depth without needing to deep-sample (for example, rather than capturing the sound at every 5 velocity levels (around 25 samples per note), it is captured at every 20 velocity levels (around 6 samples per note) and played back through their reconstruction engine).

The engine ‘interpolates’ or ‘smooths’ the audio data between the velocity levels. With (eg) 6 captured velocity levels, and the player hitting a velocity between (eg) 22 and 44, the output is a blend of the sound at both velocity level 22 and 44. So, if the velocity is 33, you might imagine the output is an even mix of the sounds from velocity level 22 and 44. Hitting the note at 38 would weight the audio output more towards the sample mapped to the 44th velocity step.
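If you want to picture the weighting, here’s a rough sketch in Python. The captured levels and the linear weighting are my assumptions; Soundpaint haven’t published their actual mapping.

```python
def blend_weights(velocity, levels):
    """Return (lower_level, upper_level, upper_weight) for a played velocity.

    upper_weight is how much of the higher-velocity capture goes into the
    blend: 0.0 means all lower capture, 1.0 means all upper capture.
    """
    # Clamp to the captured range
    if velocity <= levels[0]:
        return levels[0], levels[0], 0.0
    if velocity >= levels[-1]:
        return levels[-1], levels[-1], 1.0
    # Find the pair of captured levels that bracket the played velocity
    for lo, hi in zip(levels, levels[1:]):
        if lo <= velocity <= hi:
            return lo, hi, (velocity - lo) / (hi - lo)

levels = [1, 22, 44, 66, 88, 110]  # 6 captured levels (hypothetical spacing)

print(blend_weights(33, levels))  # midway: an even mix of the 22 and 44 captures
print(blend_weights(38, levels))  # weighted towards the 44 capture
```

With linear weighting, 33 (the midpoint of 22 and 44) gives exactly a 50/50 mix, and 38 leans about 73% towards the 44 capture.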

NOTE: this is not simple cross-fading, or mixing of two samples being played at the same time; this is the combination of two sample-analysis files, and the data held in these files, being interpolated, or blended.
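To make that distinction concrete, here’s a toy Python sketch: instead of crossfading the two waveforms, it blends the analysis data (here, FFT magnitudes) and resynthesises. This is my guess at the flavour of the technique, not the actual engine.

```python
import numpy as np

def interp_frame(frame_a, frame_b, w):
    """Blend two captures by interpolating their spectral analysis data.

    Unlike a time-domain crossfade ((1-w)*a + w*b on the waveforms), this
    blends the FFT *magnitudes* and resynthesises a new waveform.
    """
    spec_a, spec_b = np.fft.rfft(frame_a), np.fft.rfft(frame_b)
    mag = (1 - w) * np.abs(spec_a) + w * np.abs(spec_b)
    # Take phase from the dominant frame -- a crude stand-in for a real
    # engine's phase handling.
    phase = np.angle(spec_b if w >= 0.5 else spec_a)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame_a))

sr = 44100
t = np.arange(1024) / sr
quiet = 0.2 * np.sin(2 * np.pi * 440 * t)                               # soft capture
loud = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)  # brighter, louder capture
mid = interp_frame(quiet, loud, 0.5)  # halfway blend of the analysis data
```

The result sits between the two captures in level and brightness, rather than being both samples sounding at once.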

It may also be using this same engine to spread (and time-stretch) notes up and down the keyboard, rather than just speeding up or slowing down, as conventional samplers do. The company are quite cagey about sharing their technology, which is fair.

The plugin itself, from a technical perspective, runs like this:

  • they developed an audio resynthesis engine with which they’ve analysed sample packs

  • the user downloads the samples and the analysis files

  • on loading and playback, the engine they’ve developed is used to give more velocity detail (as previously explained) and possibly to fill the gaps between notes (eg, one sample at C4, the next at E4; when you hit D4 the engine generates the sound rather than playing back a pitched-up C4 or pitched-down E4)
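For context on why that gap-filling matters, here’s what conventional pitched playback does: resampling a C4 sample up to D4 also shortens it by the same ratio, which is the compromise the engine reportedly avoids. The numbers are illustrative.

```python
# Conventional samplers pitch a sample by changing its playback rate,
# which changes its duration too.

SEMITONE = 2 ** (1 / 12)  # equal-temperament semitone ratio

def resample_ratio(semitones):
    """Playback-rate ratio for a given pitch shift (conventional resampling)."""
    return SEMITONE ** semitones

ratio = resample_ratio(2)        # C4 sample played back at D4 (+2 semitones)
orig_seconds = 3.0
new_seconds = orig_seconds / ratio  # the note also gets shorter

print(round(ratio, 4), round(new_seconds, 4))
```

A three-second C4 note played back at D4 loses roughly a third of a second; a resynthesis engine can keep the duration (and formants) intact instead.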

With this engine they’ve also provided the ability to MORPH between loaded samples. Morphing is an interesting word. From testing and listening I would suggest there is some kind of phase-vocoding going on. I’ll talk about and demonstrate this in a video, which I’ll link to (and edit this post).
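My guess at the flavour of it, sketched in Python: rather than crossfading audio, you interpolate the parameters of a sinusoidal analysis (partial frequencies and amplitudes) and resynthesise. Crossfading a 440 Hz tone with a 660 Hz tone gives you both pitches at once; parameter morphing gives you a single tone partway between. The “flute_like”/“reed_like” analyses here are made up.

```python
import numpy as np

def morph_partials(partials_a, partials_b, w, dur=0.5, sr=44100):
    """Resynthesise audio from linearly interpolated (freq, amp) partial pairs.

    w=0.0 reproduces sound A, w=1.0 reproduces sound B, and values in
    between glide each partial's frequency and amplitude.
    """
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for (fa, aa), (fb, ab) in zip(partials_a, partials_b):
        f = (1 - w) * fa + w * fb   # interpolated partial frequency
        a = (1 - w) * aa + w * ab   # interpolated partial amplitude
        out += a * np.sin(2 * np.pi * f * t)
    return out

flute_like = [(440.0, 1.0), (880.0, 0.2)]    # hypothetical analysis of sound A
reed_like = [(660.0, 0.8), (1320.0, 0.5)]    # hypothetical analysis of sound B
half = morph_partials(flute_like, reed_like, 0.5)  # fundamental lands near 550 Hz
```

The halfway morph has its fundamental at 550 Hz, a pitch that exists in neither source, which is the kind of output a simple crossfade can never produce.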

Soundpaint feels like a sound-designer’s tool and I expect you’ll hear it in movies, TV shows, adverts, and probably the charts too. I’m still not super keen on massive sample libraries, but this is the first sample-playback device to utilise ‘clever’ maths to enhance the palette of available sounds, which makes it kind of cool. Hopefully it’s the first in a line of other ‘intelligent’ sound generators.
