Hmac

Purpose

Hierarchical Model Automatic Classifier. A Modelselector model for classification. Creates a tree of PLSDA models calibrated on subclass data, following the ahimbu approach described in https://doi.org/10.1002/cem.3455. See Modelselector for more about Modelselector models. Build Modelselector models in the Hierarchical Model Builder interface.

Synopsis

[hmac] = Hmac();
[hmac] = hmac.setX(x);
[hmac] = hmac.setY(y);
[options] = hmac.getOptions;
[hmac] = hmac.setOptions(options);
[hmac] = hmac.calibrate;
modelselectorgui(hmac.model);

See Automatic Hierarchical Model Classification to use Hmac in the Hierarchical Model Builder interface.

Description

Build a Modelselector model via the ahimbu algorithm from an input dataset X (when class information is stored in X), or from inputs X and Y. Each node in the resulting Modelselector model is a PLSDA model calibrated on all, or more likely a subset, of the classes. The algorithm works by peeling off one or a few classes at a time and creating a PLSDA model for each split; it is complete when all classes are accounted for or perfect classification is reached.
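
The calls below sketch this workflow using only the methods shown in the Synopsis; x and y are assumed to already exist as a predictor block and a vector of sample class values (setY can be skipped when the class information is stored in x).

hmac = Hmac();          % create the Hmac object
hmac = hmac.setX(x);    % assign the X-block (predictor data)
hmac = hmac.setY(y);    % assign sample class values (omit if classes are stored in x)
hmac = hmac.calibrate;  % run the ahimbu algorithm: peel off classes, fit a PLSDA model per split
tree = hmac.model;      % the resulting Modelselector model, one PLSDA model per node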

Inputs

  • x = X-block (predictor block), class "double" or "dataset", containing numeric values (see the sketch after this list)
  • y = Y-block (optional), class "double", containing sample class values
  • options = an optional input options structure (see below)
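
A sketch of the two input forms above: class information can either be stored in a class set of the X-block DataSet object or passed separately as y. The x.class{1,1} assignment is assumed from standard DataSet object usage; confirm against the DataSet object documentation.

% Option A: classes stored in the X-block (selected via the classset option)
x = dataset(data);        % data: numeric matrix (samples x variables)
x.class{1,1} = classes;   % assumed DataSet syntax: sample classes in class set 1
hmac = Hmac();
hmac = hmac.setX(x);
hmac = hmac.calibrate;

% Option B: numeric X-block with classes supplied as a separate y input
hmac = Hmac();
hmac = hmac.setX(data);
hmac = hmac.setY(classes);
hmac = hmac.calibrate;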


Outputs

  • hmac = an object of class Hmac; its 'model' field contains the resulting Modelselector model.
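
A short sketch of working with the output. Applying the tree to new data is assumed to follow the usual PLS_Toolbox model-application pattern; check the Modelselector documentation for the exact call.

tree = hmac.model;        % Modelselector model: a tree of PLSDA node models
modelselectorgui(tree);   % browse the tree interactively, as in the Synopsis
% Predictions on new data are assumed to go through the Modelselector model;
% see the Modelselector documentation for how to apply it.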

Cross-validation

Cross-validation can be applied to each PLSDA node model. The CV settings are the same for every node; the default is venetian blinds with 10 splits and 1 sample per blind.

From the Hierarchical Model Builder interface, customize the CV settings by clicking on the Cross-Validation 'Set' button in the Hmac interface.
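
In code, the same settings can be changed through the cvi option described under Options below; the value shown first matches the default of venetian blinds, 10 splits, and 1 sample per blind.

opts = hmac.getOptions;        % retrieve the current options
opts.cvi = {'vet' 10 1};       % default: venetian blinds, 10 splits, 1 sample per blind
% opts.cvi = {'rnd' 5 3};      % e.g. random subsets, 5 splits, 3 iterations (standard crossval cell)
hmac = hmac.setOptions(opts);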

Preprocessing

Like the cross-validation settings, the X-block preprocessing settings are the same across all of the potential PLSDA node models in the final Modelselector model.
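
A hedged sketch, assuming the cvopts option exposes a preprocessing field in the same way as the standard crossval options (inspect the output of getOptions to confirm the actual field names):

opts = hmac.getOptions;
pp = preprocess('default','autoscale');   % autoscale preprocessing structure
opts.cvopts.preprocessing = {pp};         % assumed field: X-block preprocessing used at every node
hmac = hmac.setOptions(opts);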

Options

options = a structure array with the following fields (a usage sketch follows the list):

  • classset : [ {1} ] Indicates which class set in x to use when no y-block is provided.
  • maxlvs : [ {6} ] Maximum number of latent variables to use in crossval (see crossval).
  • cvopts : struct of cross-validation options, including the preprocessing applied at each node in the Modelselector model; see crossval for all available options.
  • cvi : [{'vet' 10 1}] Standard cross-validation cell (see crossval) defining the split method, number of splits, and number of iterations. This cross-validation is used when splitting each class or subclass in the ahimbu algorithm.
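
A usage sketch of these fields with the getOptions/setOptions methods from the Synopsis; the values shown are the defaults listed above.

opts = hmac.getOptions;     % retrieve the default options structure
opts.classset = 1;          % class set in x to use when no y-block is given
opts.maxlvs   = 6;          % maximum latent variables considered per PLSDA node model
hmac = hmac.setOptions(opts);
hmac = hmac.calibrate;      % rebuild with the updated options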


See Also

Modelselector, Hierarchical Model Builder, analysis, crossval, preprocess, EVRIModel_Objects