FACTORSYNTH by JJ BURRED
Factorsynth is a Max For Live device created by J.J. Burred that uses machine learning to decompose sounds into sets of elements. Once these elements have been obtained, you can modify and rearrange them to remix existing clips, remove notes, randomize patterns, and create complex textures with only a few clicks.
Unlike traditional audio effect devices, which take the track’s audio input and generate output in real time, Factorsynth is a clip-based device. It works on audio clips that you select from your Live set and load into the device. Once a clip is loaded, it can be decomposed into elements. This decomposition process is called factorization, because it is based on a technique called matrix factorization.
Factorization usually takes a few seconds, and can be performed while the Live set is playing. Once the factorization is ready, you can reshape your sound in real time by editing or recombining the extracted elements.
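To give an idea of what matrix factorization does with a sound, here is a minimal, illustrative sketch in pure NumPy: a non-negative matrix factorization (NMF) of a toy magnitude "spectrogram" into spectral shapes and time activations. This is not Factorsynth's actual code, and the sizes, variable names, and update rule (the classic multiplicative updates) are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectrogram": 64 frequency bins x 100 time frames,
# built from 3 hidden components so there is structure to recover.
true_W = rng.random((64, 3))
true_H = rng.random((3, 100))
V = true_W @ true_H

k = 3                           # number of elements to extract (assumed)
W = rng.random((64, k)) + 0.1   # spectral shapes of the elements
H = rng.random((k, 100)) + 0.1  # time activations of the elements

eps = 1e-9
for _ in range(200):
    # Multiplicative updates minimizing squared reconstruction error;
    # they keep W and H non-negative throughout.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {error:.4f}")
```

Each column of `W` is one element's spectral fingerprint and the matching row of `H` says when (and how loudly) that element occurs; muting, swapping, or rescaling these pairs is, conceptually, how recombining elements reshapes the sound.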
Since it is a clip-based device, Factorsynth will only affect the clip that is currently loaded into it, even if the track contains other clips. Also, since the device takes its input audio from the loaded clip, it must always sit in the leftmost position of an audio effects chain: any processing placed before it on the chain is ignored (you can of course process its output with any other audio device).
The clip that you first load into Factorsynth becomes the master sound. The position of the master sound in the Live Set determines when Factorsynth outputs sound. In other words: Factorsynth will be playing whenever the original, unprocessed clip would play (both in Session and Arrangement views).
You can optionally load a second sound, called the x-syn sound (for “cross-synthesis”). This sound is used to add new elements to the palette for sound creation. Still, Factorsynth’s playback position is always determined by the master sound.
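Continuing the toy NMF sketch from above, cross-synthesis can be pictured as pairing the elements of one sound with the timing of another: here, spectral shapes taken from the x-syn sound are driven by the master sound's activations, so the result still follows the master's timeline. All names and matrix sizes are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Factor matrices as an NMF of each sound might produce them (toy data):
W_master = rng.random((64, 3))   # spectral shapes from the master sound
H_master = rng.random((3, 100))  # time activations from the master sound
W_xsyn = rng.random((64, 3))     # spectral shapes from the x-syn sound

# Hybrid spectrogram: x-syn timbres played with the master's activations.
# Its time extent (100 frames) comes from the master, which is why
# playback position is always determined by the master sound.
V_hybrid = W_xsyn @ H_master
print(V_hybrid.shape)  # (64, 100)
```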