Anndl

Purpose

Predictions based on Artificial Deep Learning Neural Network (ANNDL) regression models.

Synopsis

anndl - Launches an Analysis window with ANN as the selected method.
[model] = anndl(x,y,options);
[pred] = anndl(x,model,options);
[valid] = anndl(x,y,model,options);

Please note that the recommended way to build and apply an ANNDL model from the command line is to use the Model Object. See the EVRIModel_Objects wiki page for details on building and applying models this way.
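
A minimal sketch of the Model Object workflow, assuming the standard EVRIModel_Objects conventions (xcal, ycal, and xnew are hypothetical datasets):

  m = evrimodel('anndl');   % create an empty ANNDL model object
  m.x = xcal;               % assign calibration X-block
  m.y = ycal;               % assign calibration Y-block
  m = m.calibrate;          % build the model
  pred = m.apply(xnew);     % apply the model to new data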

Description

Build an ANNDL model from input X- and Y-block data using the specified number of layers and layer nodes. Alternatively, if a model is passed in, ANNDL makes a Y prediction for an input test X-block. The ANNDL model contains quantities (weights, etc.) calculated from the calibration data, so when a model structure is passed in to ANNDL these weights do not need to be recalculated.

Two ANNDL implementations are available: 'sklearn' and 'tensorflow'.

'sklearn' is the ANNDL version used by default, but the user can set the option 'algorithm' = 'tensorflow' to use Tensorflow instead. The Scikit-Learn implementation is fast, while Tensorflow is slower but provides more customization of the network architecture. The two are compared in further detail below.
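
From the command line the backend is selected through the options structure. A minimal sketch, assuming the usual PLS_Toolbox convention of retrieving default options with anndl('options') (xcal and ycal are hypothetical calibration data):

  opts = anndl('options');          % default options (algorithm = 'sklearn')
  opts.algorithm = 'tensorflow';    % switch to the Tensorflow backend
  model = anndl(xcal, ycal, opts);  % calibrate the model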

Inputs

  • x = X-block (predictor block), class "double" or "dataset", containing numeric values.
  • y = Y-block (predicted block), class "double" or "dataset", containing numeric values.
  • model = previously generated model (when applying the model to new data).

Outputs

  • model = a standard model structure model with the following fields (see Standard Model Structure):
    • modeltype: 'ANNDL',
    • datasource: structure array with information about input data,
    • date: date of creation,
    • time: time of creation,
    • info: additional model information,
    • pred: 2-element cell array with
      • model predictions for each input block (when options.blockdetails='standard', x-block predictions are not saved and this will be an empty array)
    • detail: sub-structure with additional model details and results.
  • pred = a structure, similar to model, for the new data.
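
As an illustration, a sketch of applying an existing model and reading the prediction from the pred field (the cell index for the y-block prediction is an assumption based on the two-element layout described above):

  pred = anndl(xnew, model);  % apply an existing model to new X data
  yhat = pred.pred{2};        % assumed: y-block prediction is element 2 of the cell array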

Training Termination

The ANNDL is trained on a calibration dataset to minimize the prediction error, RMSEC. It is important not to overtrain, however, so some criterion for ending training is needed.

Sklearn's max_iter parameter is the maximum number of iterations for weight optimization. This number may not be reached, however, for a couple of reasons. One is that sklearn's early stopping is enabled: the sklearn method automatically sets 10% of the calibration data aside as validation data, and optimization stops if the validation score does not improve by at least tol (an adjustable parameter in PLS_Toolbox) over n_iter_no_change iterations (hard-set to 10). Accuracy on the calibration set can be increased by decreasing tol, but this leads to overfitting when cross-validating or predicting on the validation set.

Tensorflow training termination follows the same convention as the sklearn implementation, just under that software's respective parameter names. Termination occurs when either options.tf.epochs is reached or the rate of improvement does not exceed options.tf.min_delta over 20 epochs. Note these RMSE values refer to the internal preprocessed and scaled y values.
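
For example, a sketch of adjusting the sklearn stopping behavior through the option fields listed under Options below (the specific values are illustrative, not recommendations):

  opts = anndl('options');
  opts.sk.max_iter = 500;  % allow more weight-optimization iterations
  opts.sk.tol = 1e-5;      % stricter tolerance: lower RMSEC, but higher
                           % risk of overfitting, as noted above
  model = anndl(xcal, ycal, opts);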

Cross-validation

Cross-validation can be applied to ANNDL from either the ANNDL Analysis window or the command line. From the Analysis window, specify the cross-validation method in the usual way (clicking on the model icon's red check-mark, or the "Choose Cross-Validation" link in the flowchart). In the cross-validation window, the "Maximum Number of Nodes" specifies how many nodes in the first hidden layer (nhid1) to test over. Viewing RMSECV versus the number of nhid1 nodes (toolbar icon to the left of the Scores Plot) is useful for choosing the number of layer 1 nodes. From the command line, use the crossval method to add cross-validation information to an existing model.

Since these networks generally require large node sizes (unlike ANN), cross-validation is not done over every possible value from 1:nhid1, as this would take considerable time. Instead, a rule determines which nhid1 node sizes to test:

  • If nhid1 <= 10, cross-validation looping is done over [1:nhid1]
    • e.g. Let nhid1 = 8, nhid1 looping array will be [1:8]
  • If nhid1 > 10 and nhid1 <= 100, cross-validation looping is done over [1 2 3 5 nhid1] plus every multiple of 25 up to nhid1 (i.e., each value k <= nhid1 where mod(k,25) is 0)
    • e.g. Let nhid1 = 95, the nhid1 looping array will be [1 2 3 5 25 50 75 95]
  • If nhid1 > 100, looping is done over [10 20 30 50 100 nhid1] plus every multiple of 100 up to nhid1 (i.e., each value k <= nhid1 where mod(k,100) is 0)
    • e.g. Let nhid1 = 250, the nhid1 looping array will be [10 20 30 50 100 200 250]

Again, this rule avoids doing cross-validation over every possible value in 1:nhid1; the sketch below illustrates the resulting grid.
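
The rule above can be summarized with the following MATLAB sketch (illustrative only; this is not the toolbox's internal code):

  function grid = nhid1_grid(nhid1)
  % Return the nhid1 node sizes tested during ANNDL cross-validation
  if nhid1 <= 10
      grid = 1:nhid1;
  elseif nhid1 <= 100
      grid = unique([1 2 3 5, 25:25:nhid1, nhid1]);     % e.g. 95 -> [1 2 3 5 25 50 75 95]
  else
      grid = unique([10 20 30 50, 100:100:nhid1, nhid1]); % e.g. 250 -> [10 20 30 50 100 200 250]
  end
  end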

Options

options = a structure array with the following fields:

  • display : [ 'off' |{'on'}] Governs display
  • plots: [ {'none'} | 'final' ] governs plotting of results.
  • blockdetails : [ {'standard'} | 'all' ] extent of detail included in model. 'standard' keeps only y-block, 'all' keeps both x- and y- blocks.
  • waitbar : [ 'off' |{'auto'}| 'on' ] governs use of waitbar during analysis. 'auto' shows waitbar if delay will likely be longer than a reasonable waiting period.
  • algorithm : [{'sklearn'} | 'tensorflow'] ANN implementation to use.
  • preprocessing: {[] []} preprocessing structures for x and y blocks (see PREPROCESS).
  • compression: [{'none'} | 'pca' | 'pls' ] type of data compression to perform on the x-block prior to calculating or applying the ANNDL model. 'pca' uses a simple PCA model to compress the information; 'pls' uses a PLS model. Compression can make the ANNDL more stable and less prone to overfitting.
  • compressncomp: [1] number of latent variables (or principal components) to include in the compression model.
  • compressmd: [{'yes'} | 'no'] use Mahalanobis Distance correction.
  • cvi : M-element vector with integer elements allowing user-defined subsets. (cvi) is a vector with the same number of elements as x has rows, i.e., length(cvi) = size(x,1). Each cvi(i) is defined as:
cvi(i) = -2 the sample is always in the test set,
cvi(i) = -1 the sample is always in the calibration set,
cvi(i) = 0 the sample is never used, and
cvi(i) = 1,2,3... defines each test subset. For example, cvi = [-1 -1 1 2 1 2] keeps the first two samples in every calibration set and alternates the remaining four samples between two test subsets.
  • sk : structure representing the input parameters for when algorithm=sklearn
  • sk.activation : [ {'relu'} | 'tanh' | 'logistic' | 'identity' ] Type of activation function.
  • sk.solver : [ {'adam'} | 'lbfgs' | 'sgd' ] Solver for weight optimization. lbfgs does especially well for smaller datasets and converges faster.
  • sk.alpha : [ {'1.0000e-04'} ] L2 Penalty parameter.
  • sk.max_iter : [ {'200'} ] Maximum number of iterations for weight optimization.
  • sk.hidden_layer_sizes : [ {'100'} ] Vector of node sizes. The ith element represents the number of nodes in the ith hidden layer in the network.
  • sk.random_state : [ {'1'} ] Random seed number. Set this to a number for reproducibility.
  • sk.tol : [ {'1.0000e-04'} ] Tolerance for optimization.
  • sk.learning_rate_init : [ {'1.0000e-03'} ] Initial learning rate.
  • sk.batch_size : [ {'12'} ] Number of samples in each minibatch.
  • tf : structure representing the input parameters for when algorithm=tensorflow
  • tf.activation : [ {'relu'} | 'tanh' | 'sigmoid' | 'linear' ] Type of activation function.
  • tf.optimizer : [{'adam'} | 'adamax' | 'rmsprop' | 'sgd'] Solver for weight optimization.
  • tf.loss : [{'mean_squared_error'} 'mean_absolute_error' 'log_cosh'] Choice of loss function to be minimized.
  • tf.epochs : [ {'200'} ] Maximum number of iterations for weight optimization.
  • tf.hidden_layer : [ {struct('type','Dense','units',100)} ] Cell array of structs, where each struct represents a hidden layer in the network. The struct accepts 3 possible fields: 'type', 'units', and 'size'. These layers are further explained below.
  • tf.random_state : [ {'1'} ] Random seed number. Set this to a number for reproducibility.
  • tf.min_delta : [ {'1.0000e-04'} ] Tolerance for optimization.
  • tf.learning_rate : [ {'1.0000e-03'} ] Initial learning rate.
  • tf.batch_size : [ {'12'} ] Number of samples in each minibatch.
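
For example, a deeper Tensorflow architecture can be sketched by passing a cell array of layer structs via tf.hidden_layer (field names follow the option descriptions above; xcal and ycal are hypothetical):

  opts = anndl('options');
  opts.algorithm = 'tensorflow';
  opts.tf.activation = 'tanh';
  opts.tf.hidden_layer = {struct('type','Dense','units',200), ...
                          struct('type','Dense','units',50)};  % two Dense hidden layers
  model = anndl(xcal, ycal, opts);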

Additional information on the ‘tensorflow’ ANNDL implementation

PLS_Toolbox does not include the full slate of features that Tensorflow has to offer, but it includes more than enough to get off the ground building deep neural networks. Tensorflow offers a wide variety of layer types, loss functions, optimizers, and activation functions. The following list covers what has been adapted from Tensorflow thus far:

  • Layers (visit here for more info: [1])
  1. Dense (fully connected layer)
  2. Flatten (Takes weights from the previous layer and flattens to a 1-dimensional vector)
  3. Dropout (Randomly assign node values to 0.)
  4. BatchNormalization (Normalizes weights to have a mean output close to 0 and standard deviation close to 1)
  5. Conv[123]D (all three dimensions included)
  6. AveragePooling[123]D (all three dimensions included)
  7. MaxPooling[123]D (all three dimensions included)
  • Optimizers (visit here for more info: [2])
  1. Adam
  2. Adamax
  3. RMSProp
  4. SGD
  • Loss functions (visit here for more info: [3])
  1. Mean Square Error
  2. Mean Absolute Error
  3. Logcosh
  • Activation functions (visit here for more info: [4])
  1. Relu
  2. Tanh
  3. Sigmoid
  4. Linear
  • Convolutional Neural Networks (CNN), supported via the Conv and Pooling layers listed above


ANNDL and ANN

The two Python neural network implementations have similarities and differences with our ANN implementation. ANNDL offers the ability to build more than 2 hidden layers, unlike ANN. This can help where a more complex network architecture is needed for complex datasets. The node sizes in these neural networks should also be treated differently: in ANN it is advised to keep node sizes small and to avoid using a second hidden layer if possible, whereas initial testing by our staff has found that the Python neural networks in ANNDL do well with node sizes much larger than those of ANN. Not only can these ANNDL models perform comparably to ANN, but their speed scales well as node sizes change. Another advantage (and disadvantage) is the breadth of parameters to tinker with: while it is nice to have a wider variety of options to choose from, building the perfect Python neural network can be time-consuming.


Usage from ANNDL Analysis window

When using the ANNDL Analysis window, as in the ANN Analysis window, it is possible to specify a scan over a range of hidden layer nodes to use in the first hidden layer. This is enabled by setting the "Maximum Number of Nodes" value in the cross-validation window. This causes ANNDL models to be built for the range of hidden layer nodes up to the specified number, and the resulting RMSECV plotted versus the number of nodes is shown by clicking on the "Plot cross-validation results" icon in the ANNDL Analysis window's toolbar. This can be useful for deciding how many nodes to use in the first hidden layer. While cross-validating over a range of node sizes in the first hidden layer, the sizes of the remaining hidden layers stay fixed. Note that this plot is only advisory: the resulting model is built with the input parameter number of nodes, 'nhid', and its model.detail.rmsecv value relates to this number of nodes. It is important to check for the optimal number of nodes to use in the ANNDL, but this feature can greatly lengthen the time taken to build the ANNDL model and should be set = 1 once the number of hidden nodes is decided.

Summary of model building speed-up settings

From the Analysis window:

ANN in PLS_Toolbox or Solo version 8.2 and earlier can be very slow if you use cross-validation (CV). This is mostly because the CV settings window also specifies a test to find the optimal number of hidden layer 1 nodes, testing ANN models with 1, 2, ..., 20 nodes, each with CV. This is set by the top slider field "Maximum Number of Nodes L1". For example, if you want to build an ANN model with 4 layer 1 nodes (using the "ANN Settings" field) but leave the CV settings window's top slider set = 20, then you will actually build 20 models, each with CV, and save the RMSECV from each. This can be very slow, especially for the models with many nodes.

To make ANN perform faster, it is recommended that you drag this CV window's "Maximum Number of Nodes L1" slider to the left, setting it = 1, unless you really want to see the results of such a parameter search over the range specified by this slider. This is the default in PLS_Toolbox and Solo versions after version 8.2. The RMSECV versus number of Layer 1 nodes can be seen by clicking on the "Plot cross-validation results" icon (next to the Scores Plot icon).

Summary: To make ANNDL perform faster:

1. Move the top CV slider to the left, setting value = 1.

2. Turn CV off or use a small number of CV splits.

3. Choose to use a small number of L1 nodes in the ANNDL settings window.

4. Increase the batch size.

From the command line

1. Initially build the ANN without cross-validation so as to decide on values for learnrate and learncycles by examining where the minimum value of model.detail.ann.rmscviter occurs versus learncycles. Note this uses a single-split CV to estimate RMSECV when the ANN cross-validation is set to "None". It is inefficient to use a larger than necessary value for the option "learncycles".

2. Determine the number of hidden layer nodes to use by building a range of models with different numbers of nodes, nhid1, nhid2. If using the ANN Analysis window and the ANN has a single hidden layer, this can be done conveniently by using the "Maximum Number of Nodes L1" setting in the cross-validation settings window. It is best to use a simple cross-validation at this survey stage, with a small number of splits and iterations. A sketch of the equivalent command-line survey follows.
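
A hedged sketch of such a node-size survey from the command line (xcal and ycal are hypothetical; the rmsecv field location follows the model.detail.rmsecv reference above):

  opts = anndl('options');
  nodeSizes = [1 2 3 5 25 50 75 100];   % survey grid for nhid1
  models = cell(size(nodeSizes));
  for k = 1:numel(nodeSizes)
      opts.sk.hidden_layer_sizes = nodeSizes(k);  % single hidden layer of this size
      models{k} = anndl(xcal, ycal, opts);
  end
  % Compare fit statistics across models (e.g., model.detail.rmsecv after
  % adding cross-validation with the crossval method) to choose nhid1.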

See Also

anndlda, annda, analysis, crossval, lwr, modelselector, pls, pcr, preprocess, svm, EVRIModel_Objects