Load torchscripts on scsynth. Tested with RAVE (v1 and v2) and msprior. Models are loaded asynchronously on the server, and stored in a global dictionary so that they can then be accessed by key.
Models are loaded with a key to identify them and a path to a torchscript file:
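A minimal loading sketch — the key and file path below are illustrative, not part of the original text:

```supercollider
// boot the server, then load a torchscript file under the key \rave
// (the path is a placeholder: point it to your own .ts file)
s.waitForBoot {
    NN.load(\rave, "~/models/rave.ts");
};
```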
The sclang interface instructs the server to load the .ts file, receives and stores models' info from the server, and keeps track of which models are loaded. Once a model is loaded, and its info received, it becomes possible to create UGens for processing.
You can get UGens for each model's methods like this:
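As a sketch, assuming a RAVE model loaded under the key \rave with the usual forward/encode/decode methods (key and method names are illustrative):

```supercollider
// full forward pass: audio in -> audio out
{ NN(\rave, \forward).ar(SoundIn.ar(0)) }.play;

// or split the pass into encode/decode to manipulate the latent space
{
    var latent = NN(\rave, \encode).ar(SoundIn.ar(0));
    NN(\rave, \decode).ar(latent + LFNoise1.ar(0.1 ! latent.size));
}.play;
```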
Each NN().ar UGen is specific to a loaded model and method. This is because different models and methods require different numbers of inputs and outputs. Each UGen loads an independent instance of the model, to make sure independent inferences on the same model don't interfere with each other. For this reason, setting attributes is supported only at the UGen level.
Torchscript can support settable attributes:
NNUGen accepts a list of pairs (attributeName, attributeValue), and will set each attribute when its value changes. It might be useful to limit the rate at which an attribute is set by using Latch:
With debug: 1, NNUGen prints the attribute value every time it is set. The printed value is read back from the model at each print.
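A sketch of setting an attribute, following the attributes, Latch and debug points above — the attribute name (temperature), its control values, and the exact argument signature are assumptions:

```supercollider
{
    var in = SoundIn.ar(0);
    // update the attribute at most 10 times per second via Latch
    var temp = Latch.kr(\temperature.kr(1), Impulse.kr(10));
    NN(\rave, \forward).ar(in,
        attributes: [temperature: temp],
        debug: 1 // print the attribute value whenever it is set
    );
}.play;
```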
In order to load and play models on an NRT server, model information has to be stored in a file. This method is intended for running NRT servers without even booting a real-time one:
In the NN.nrt() { ... } block, the syntax to load models and create SynthDefs is almost the same as in real time, the only difference being that SynthDefs need to be "sent" to the server with .doSend(s) instead of the usual methods.
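A sketch of such a block — the model key, path, and SynthDef body are illustrative, and the exact NN.nrt signature may differ:

```supercollider
NN.nrt {
    NN.load(\rave, "~/models/rave.ts");
    SynthDef(\raveFwd) { |out = 0|
        Out.ar(out, NN(\rave, \forward).ar(WhiteNoise.ar(0.1)));
    }.doSend(s); // note: .doSend(s) instead of .add or .send
};
```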
A second method is available if models are already loaded on a running server. The following code creates messages for the NRT server to load all models currently loaded on a RT server, with the same indices, so that SynthDefs built on the RT server also work on the NRT one. The obvious drawback is that this method is more expensive in terms of resources, since models are loaded both on the real-time server and on any NRT servers that are launched.
If a model's methods are very slow on their first execution right after the model is loaded, and then become much faster, it might be due to torchscript performing optimizations during the first pass. NN offers a way of performing this first pass silently:
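As a sketch — the method name warmup is an assumption, since the call itself doesn't appear in the extracted text:

```supercollider
// hypothetical: run a silent first pass so that later
// real-time inferences are already optimized
NN(\rave, \forward).warmup;
```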
Sends a message to the server to load a torchscript file, and gathers model information as the server returns it. This method should be used to initialize a new NNModel object.
key |
a Symbol to identify this model object, and to access it after it's loaded. |
path |
the file path of the torchscript file to load. The path is standardized with String: -standardizePath internally. |
id |
a number that identifies this model on the server. Pass |
server |
the server that should load this model. Defaults to Server: *default. |
action |
a Function called after the model and its info are loaded; it is passed the model as its argument. |
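Putting the arguments together — the key, path, and callback body below are illustrative, and describe is an assumed convenience for printing the model's info:

```supercollider
// load a model and inspect it once its info has arrived
NN.load(\rave, "~/models/rave.ts", action: { |model|
    model.describe; // assumed: prints the model's methods and attributes
});
```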
This class doesn't construct any instances, but provides this as a convenience method for retrieving loaded models or their methods.
key |
a Symbol that identifies the loaded model (see *load). |
methodName |
a Symbol. Optional. |
An NNModel if called without methodName, otherwise an NNModelMethod. If the requested model or method is not found, an error is thrown.
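For example, assuming a model loaded under the key \rave with a forward method:

```supercollider
NN(\rave);           // -> the NNModel registered under \rave
NN(\rave, \forward); // -> its NNModelMethod for \forward
NN(\missing);        // throws an Error: no such model
```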
Facility to load model information from a YAML file and create an OSC bundle suitable for loading models and SynthDefs on an NRT server. See NN: NRT processing.
infoFile |
path to a YAML file which contains model information. Such a file can be obtained from a running RT server with NN: *dumpInfo |
makeBundleFn |
a Function used to create an OSC bundle. All OSC messages sent from this function are not sent to the server, but added to the returned bundle instead. See Server: -makeBundle. |
an OSC bundle, i.e. an Array of OSC messages.
Gets a loaded model by key. Equivalent to NN(key), but doesn't throw an Error if the model is not found.
key |
an NNModel or nil if not found.
an Array of all loaded NNModel.
Prints information for all loaded models.
Queries the server to dump information for all currently loaded models to a YAML file or to the console.
outFile |
path to the YAML file to be written. If |
server |
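For example — the output path is illustrative:

```supercollider
// print info for all loaded models to the console
NN.dumpInfo;

// or write it to a YAML file, e.g. for later NRT use
NN.dumpInfo("~/nn-models.yaml");
```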
Returns the key with which a model is stored in the registry.
model |
an NNModel |
a Symbol, or nil if the model is not found in the registry.
Returns the OSC message for the server to load a torchscript file.
id |
a number that identifies this model on the server. Pass |
path |
the file path of the torchscript file to load. The path is standardized with String: -standardizePath internally. |
infoFile |
the path to a file where the server is going to write model info. Defaults to |
Returns the OSC message for the server to print model info or write it to a file.
modelIdx |
an Integer that identifies an already loaded model on the server. Defaults to |
outFile |
the path to a file where the server is going to write model info. Defaults to |