The goal of the example-based ML tools is to learn a function from a fixed number of input variables to a fixed number of output variables. The ML Regression Train node aims to approximate continuous functions, where the output variables vary smoothly with changes in the input variables.

There are at least two use cases in Houdini:

  1. A procedural network (e.g., a simulation) can be approximated using ML, resulting in a trained model. Inferencing this model is much faster than running the original procedural network, giving you a faster replacement of the original network at the cost of some approximation error.

  2. ML can be trained to find the inverse of a mapping. For example, it is common to have a network in Houdini that gives a different result depending on a set of settings. You may construct examples whose inputs are the outputs of this network and whose targets are the inputs to this network, as sketched below. This way, you may sometimes be able to train a model that approximates the settings that lead to a certain unseen output.
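As a rough sketch of the second use case, the following hypothetical numpy snippet builds such an inverse data set. Here `run_network` is a made-up stand-in for whatever procedural network you want to invert; the point is only that the examples' inputs are the network's outputs and the targets are the settings that produced them.

```python
# Hypothetical sketch of building an "inverse" data set with numpy.
# run_network stands in for a procedural Houdini network; the examples'
# inputs are the network's outputs and the targets are its settings.
import numpy as np

rng = np.random.default_rng(0)

def run_network(settings):
    # Placeholder for evaluating the procedural network at given settings.
    return np.tanh(settings @ np.array([[1.0, 0.5], [0.3, -0.2], [0.7, 0.1]]))

settings = rng.uniform(-1.0, 1.0, size=(1000, 3))  # sampled network settings
results = run_network(settings)                    # corresponding outputs

inputs, targets = results, settings  # swapped roles: learn outputs -> settings
```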

Regression can be applied in Houdini when you have a procedural network that can be encoded as a mapping from a fixed-dimensional space to another fixed-dimensional space. You need to choose a way to represent or approximate each input and each output. In the case of the ML Deformer H20.5 Content Library example, the input variables are the components of all the joint rotations and the output variables are PCA weights that represent a skin deformation.
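For illustration, the following numpy sketch shows the general idea of compressing deformations into a fixed number of PCA weights; it is not the ML Deformer's actual implementation, and all sizes are invented for the example.

```python
# Illustrative numpy sketch (not the ML Deformer's implementation): compress
# per-pose skin deformations into a fixed number of PCA weights, which then
# serve as the regression targets. All sizes are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
num_poses, num_points = 500, 1000
deformations = rng.standard_normal((num_poses, num_points * 3))

mean = deformations.mean(axis=0)
centered = deformations - mean

k = 32                                  # number of PCA components kept
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:k]                          # (k, num_points * 3)

weights = centered @ basis.T            # PCA weights: the regression targets
reconstructed = weights @ basis + mean  # deformation recovered from weights
```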

To see how your training is going, look at the log file generated by ML Regression Train. This log file has a column with the training loss (on the left). If early stopping is enabled, there is a separate column for the validation loss (on the right).
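A short script like the following can plot those columns. The exact layout of the log file is an assumption here (whitespace-separated columns with the training loss first), so check your actual file; the path "train.log" is a placeholder.

```python
# Hedged sketch for plotting the loss columns. The log layout is an
# assumption (whitespace-separated columns, training loss first, validation
# loss second when early stopping is on); "train.log" is a placeholder path.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("train.log", ndmin=2)
plt.plot(data[:, 0], label="training loss")
if data.shape[1] > 1:                  # validation column present
    plt.plot(data[:, 1], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```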

Ensuring sufficient capacity

The training goal is to obtain a model that generalizes well: you train it on a specific subset of the data set and then expect the trained model to give accurate results on inputs that were never seen during training. Before addressing generalization, ensure the model has enough capacity to accurately approximate the training data:

  • To verify this, you can run ML Regression Train with the regularization options disabled: set the weight decay to zero and turn off early stopping.

    • With these settings, if the model isn’t able to find a close approximation of the training data, then the model may not have sufficient representational capacity (underfitting). You can check this in the training loss column of the training log generated by ML Regression Train.

  • To increase the representational capacity, you can increase the number of hidden layers, increase the number of units per hidden layer, or both (see the sketch after this list).

    • If that doesn’t work, then the problem may be due to the way the inputs and outputs are represented (poor choice of hypothesis space).

    • It may also be that the data set is of poor quality. There may be intrinsic noise on the targets that is unrelated to the inputs. In that case, it is generally not possible to get an accurate match to the training data.

  • For some problems, the learning rate parameter on the ML Regression Train TOP may be set too high by default.

    • Lowering it generally extends the training time, but may result in a trained model with a lower loss (more accurate).
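As a concrete illustration of capacity, here is a minimal PyTorch sketch of a multilayer perceptron whose depth and width are configurable. It is a generic example, not necessarily the exact architecture that ML Regression Train builds, and all dimensions are made up.

```python
# Generic PyTorch sketch of a multilayer perceptron whose capacity is set by
# the number of hidden layers and the units per layer; not necessarily the
# exact architecture built by ML Regression Train.
import torch.nn as nn

def make_mlp(in_dim, out_dim, hidden_layers=2, hidden_units=128):
    layers, width = [], in_dim
    for _ in range(hidden_layers):
        layers += [nn.Linear(width, hidden_units), nn.ReLU()]
        width = hidden_units
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

# More layers/units give more representational capacity (and more to train).
model = make_mlp(in_dim=24, out_dim=32, hidden_layers=3, hidden_units=256)
```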

Generalization to unseen inputs

After ensuring the model has enough representational capacity for training, you can focus on how well the model generalizes to unseen examples.

  • Enable early stopping on the ML Regression Train and then see how this affects generalization. Consult the validation loss column of the training log generated by ML Regression Train for this.

    • If the validation loss is significantly higher than the training loss (overfitting), you may want to increase the number of data points in the data set until generalization improves enough.

    • If increasing the data set size is not possible or desired, you may alternatively increase the weight decay parameter on ML Regression Train. Increasing weight decay tends to increase the training loss while decreasing the validation loss (see the sketch after this list).

  • It is recommended to use the Wedge TOP to try training with various settings for weight decay to find the one that works best. Weight decay is the simplest possible type of regularization.
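Below is a generic PyTorch sketch of how these two regularizers typically fit into a training loop. It is an illustration only, not the actual script used by ML Regression Train; the model, data, weight decay, and patience values are all invented.

```python
# Generic PyTorch sketch of weight decay and early stopping in a training
# loop; an illustration only, not the script ML Regression Train runs.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
train_x, train_y = torch.randn(800, 8), torch.randn(800, 4)
val_x, val_y = torch.randn(200, 8), torch.randn(200, 4)

# Weight decay is passed straight to the optimizer.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 20, 0
for epoch in range(1000):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(val_x), val_y).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # validation loss stopped improving
            break
```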

ML Regression Train is intended to be a simple node that can serve as an example. It doesn’t incorporate every possible regularization approach. If you need to incorporate a more refined type of regularization into your training, you can use the existing PyTorch script used by ML Regression Train as a starting point and modify it. This script is located at $HHP/hutil/ml/regression.

Machine Learning without Deep Learning

ML Regression Train allows you to train a neural network with hidden layers. If the dimension of the input is low, you can use alternative ML techniques to approximate a function. An example is nearest-neighbor search: the labeled example whose input is closest to the query input is found, and the target component corresponding to that labeled example is returned as the output.
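For illustration, a minimal numpy version of this idea might look as follows. This is a sketch of nearest-neighbor lookup in general, not ML Regression Proximity's implementation, and the data is invented.

```python
# Minimal numpy sketch of proximity-based function approximation; an
# illustration of the idea, not ML Regression Proximity's implementation.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.uniform(-1.0, 1.0, size=(100, 2))        # labeled inputs
targets = np.sin(inputs).sum(axis=1, keepdims=True)   # labeled targets

def nearest_neighbor_predict(query):
    dists = np.linalg.norm(inputs - query, axis=1)
    return targets[np.argmin(dists)]  # target of the closest labeled example

print(nearest_neighbor_predict(np.array([0.2, -0.5])))
```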

ML Regression Proximity allows you to perform this type of proximity-based function approximation. No training is required for this. This node is also useful for troubleshooting your ML setup: if your data set preparation network is set up correctly, you should retrieve the matching target component when you supply an input component from your set of labeled examples.
