NN:
Filter:
Classes (extension) | UGens > Machine Learning

NN : Object
Extension

Global interface for nn.ar: load torchscripts on scsynth
Source: NN.sc

Description

Load torchscripts on scsynth. Tested with RAVE (v1 and v2) and msprior. Models are loaded asynchronously on the server, and stored in a global dictionary so that they can then be accessed by key.

Loading models

Models are loaded with a key to identify them and a path to a torchscript file:
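A minimal sketch (the key \rave and the file path are placeholder examples):

```supercollider
s.boot;

// load a torchscript file and register it under the key \rave
NN.load(\rave, "~/models/rave_model.ts", action: {
    NN.describeAll;  // print info for all loaded models once loading is done
});
```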

The sclang interface instructs the server to load the .ts file, receives and stores models' info from the server, and keeps track of which models are loaded. Once a model is loaded, and its info received, it becomes possible to create UGens for processing.

Real-time processing

You can get a UGen for each of a model's methods like this:
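For example, assuming a RAVE model was loaded under the key \rave (the method names \forward, \encode and \decode are model-specific):

```supercollider
// run the full model on live input
{ NN(\rave, \forward).ar(SoundIn.ar) }.play;

// or encode and decode separately
{
    var latent = NN(\rave, \encode).ar(SoundIn.ar);
    NN(\rave, \decode).ar(latent)
}.play;
```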

Each NN().ar UGen is specific to a loaded model and method. This is because different models and methods require different numbers of inputs and outputs. Each UGen loads an independent instance of the model, to make sure independent inferences on the same model don't interfere with each other. For this reason, setting attributes is supported only at the UGen level.

Attributes

Torchscript can support settable attributes:

NNUGen accepts a list of pairs (attributeName, attributeValue), and will set each attribute when its value changes. It might be useful to limit the rate at which an attribute is set by using Latch:

With debug: 1, NNUGen will print the attribute value every time it's set. The printed value is read from the model for every print.
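A sketch of both points above, assuming a model loaded as \rave with a settable \temperature attribute (attribute name and value range are examples; check the argument list of NN().ar against your installed version):

```supercollider
{
    // Latch limits how often the attribute is set: here at most twice per second
    var temp = Latch.kr(MouseY.kr(0, 2), Impulse.kr(2));
    NN(\rave, \forward).ar(SoundIn.ar,
        attributes: [temperature: temp],
        debug: 1  // print the attribute value every time it is set
    )
}.play;
```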

NRT processing

In order to load and play with models on an NRT server, models' informations have to be stored in a file. This method is intended for running NRT servers without even booting a real-time one:

In the NN.nrt() { ... } block, the syntax for loading models and creating SynthDefs is almost the same as in real-time, the only difference being that SynthDefs need to be "sent" to the server with .doSend(s) rather than the usual .add or .send.
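A sketch of this workflow, assuming "/tmp/nn_models.yaml" was previously written with NN.dumpInfo and the model path is a placeholder:

```supercollider
(
var bundle = NN.nrt("/tmp/nn_models.yaml", {
    // inside this function, OSC messages are collected instead of sent
    NN.load(\rave, "~/models/rave_model.ts");
    SynthDef(\raveForward, { |out = 0|
        Out.ar(out, NN(\rave, \forward).ar(WhiteNoise.ar(0.1)))
    }).doSend(s);
});
// prepend the returned bundle to an NRT Score at time 0
var score = Score([[0.0] ++ bundle]);
)
```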

A second method is available if models are already loaded on a running server. The following approach creates messages for the NRT server to load all models currently loaded on a RT server, with the same indices, so that SynthDefs built on the RT server also work on the NRT one. The obvious drawback is that this method is more expensive in terms of resources, since models are loaded both on the real-time server and on any NRT servers that are launched.
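A minimal sketch of this approach using the documented NN.models and NN.loadMsg; the -idx and -path accessors on NNModel are assumptions and should be checked against the NNModel documentation:

```supercollider
// collect one load message per model currently loaded on the RT server,
// reusing each model's server id so RT-built SynthDefs keep working
var loadMsgs = NN.models.collect { |model|
    NN.loadMsg(model.idx, model.path)  // idx/path accessors are assumptions
};
```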

First-execution warmup

If a model's methods are very slow on their first execution right after the model is loaded, and then become much faster, it is likely because torchscript performs optimizations during the first pass. NN offers a way of performing this first pass silently:

Class Methods

NN.load(key, path, id: -1, server, action)

Sends a message to the server to load a torchscript file, and gathers model information as the server returns it. This method should be used to initialize a new NNModel object.

Arguments:

key

a Symbol to identify this model object, and to access it after it's loaded.

path

the file path of the torchscript file to load. The path is standardized with String: -standardizePath internally.

id

a number that identifies this model on the server. Pass -1 (default) to let the server set this number automatically.

server

the server that should load this model. Defaults to Server: *default.

action

function called after the model and its info are loaded. The callback function is given the model as argument.

NN.new(key, methodName)

This class does not construct instances; NN(key, methodName) is provided as a convenience for retrieving loaded models or their methods.

Arguments:

key

a Symbol that identifies the loaded model (see /Classes/NN#*load).

methodName

a Symbol. Optional.

Returns:

An NNModel, if called without methodName; otherwise an NNModelMethod. If the requested model or method is not found, an error is thrown.
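For example, with a model previously loaded as \rave (key and method name are placeholders):

```supercollider
NN(\rave);           // returns the NNModel registered under \rave
NN(\rave, \forward); // returns that model's \forward NNModelMethod
```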

NN.nrt(infoFile, makeBundleFn)

From extension in /home/bgola/.local/share/SuperCollider/Extensions/nn.ar/Classes/NN_nrt.sc

Facility to load model information from a YAML file and create an OSC bundle suitable for loading models and SynthDefs on an NRT server. See NN: NRT processing.

Arguments:

infoFile

path to a YAML file which contains model information. Such a file can be obtained from a running RT server with NN: *dumpInfo.

makeBundleFn

a Function used to create an OSC bundle. OSC messages sent from within this function are not sent to the server, but collected into the returned bundle. See Server: -makeBundle.

Returns:

an OSC bundle, i.e. an Array of OSC messages.

NN.model(key)

Gets a loaded model by key. Equivalent to NN(key), but it doesn't throw an Error if the model is not found.

Arguments:

key

Returns:

an NNModel or nil if not found.

NN.models

Returns:

an Array of all loaded NNModel.

NN.describeAll

Prints information for all loaded models.

NN.dumpInfo(outFile, server)

Queries the server to dump information for all currently loaded models to a YAML file or to the console.

Arguments:

outFile

path to the YAML file to be written. If nil it prints to console instead.

server

the server to query. Defaults to Server: *default.
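For example (the output path is a placeholder):

```supercollider
// write info for all loaded models to a YAML file, e.g. for later NRT use
NN.dumpInfo("/tmp/nn_models.yaml");

// or just print the info to the console
NN.dumpInfo;
```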

NN.keyForModel(model)

Returns the key with which a model is stored in the registry.

Arguments:

model

an NNModel

Returns:

a Symbol, or nil if model is not found in registry.

OSC Messages

NN.loadMsg(id, path, infoFile)

Returns the OSC message for the server to load a torchscript file.

Arguments:

id

a number that identifies this model on the server. Pass -1 (default) to let the server set this number automatically.

path

the file path of the torchscript file to load. The path is standardized with String: -standardizePath internally.

infoFile

the path to a file where the server is going to write model info. Defaults to nil which disables writing to a file (useful for NRT servers since they can't write to files).

NN.dumpInfoMsg(modelIdx, outFile)

Returns the OSC message for the server to print model info or write it to a file.

Arguments:

modelIdx

an Integer that identifies an already loaded model on the server. Defaults to -1, which dumps information for all loaded models to the same output.

outFile

the path to a file where the server is going to write model info. Defaults to nil which disables writing to a file (useful for NRT servers since they can't write to files) and prints to console instead.

Inherited class methods

Undocumented class methods

NN.isNRT

From extension in /home/bgola/.local/share/SuperCollider/Extensions/nn.ar/Classes/NN_nrt.sc

NN.nextModelID

From extension in /home/bgola/.local/share/SuperCollider/Extensions/nn.ar/Classes/NN_nrt.sc

NN.nrtModelStore

From extension in /home/bgola/.local/share/SuperCollider/Extensions/nn.ar/Classes/NN_nrt.sc

NN.nrtModelsInfo

From extension in /home/bgola/.local/share/SuperCollider/Extensions/nn.ar/Classes/NN_nrt.sc

NN.prCacheInfo(info)

NN.prGetCachedInfo(path)

NN.prPut(key, model)

NN.prReadInfoFile(infoFile)

From extension in /home/bgola/.local/share/SuperCollider/Extensions/nn.ar/Classes/NN_nrt.sc

Instance Methods

Inherited instance methods

Examples