Model Exporter Reference Manual

__TOC__

==Introduction==

[[Model_Exporter]] converts models created within the [[Software_User_Guide|PLS_Toolbox or Solo]] chemometrics modeling environments into an interpretable format for use outside of these products. These exported models can be used with the included C# or Java interpreters or with a user-supplied interpreter to make predictions on new data.


Model_Exporter takes as input a standard model structure created in PLS_Toolbox or Solo and outputs the model into one of three formats: an [[#XML_File_Format|XML file]] (executable by a user-supplied external parser or by the Java or C# [[Model_Exporter Interpreter]] class provided with Model_Exporter), an [[#M-file_Format|m-file]] (executable in MATLAB – separately distributed by Mathworks, Inc. – without any additional toolboxes, or in LabVIEW with its MathScript add-on package), or a [[#TCL_File_Format|TCL file]] (executable in a Tcl interpreter or in the Symbion software package – by Symbion Systems, Inc.).


The exported model requires very few resources to be executed. Specifically, it requires floating-point numerical calculations, a small amount of memory, and the overhead resources required by the specific interpreter.


This documentation describes the use of Model_Exporter and of the exported [[#M-file_Format|M-file]] and [[#TCL_File_Format|TCL-file]] formats, and provides the information needed to design external [[#XML_File_Format|XML parsing engines]]. Model_Exporter includes a freely-distributable interpreter class with versions in C# and Java, as described on the [[Model_Exporter Interpreter]] page. In addition, an example interpreter engine is supplied for the PHP language (often used for web-page scripted predictions; see http://www.php.net for more information on PHP). Additional engines may be available - [mailto:helpdesk@eigenvector.com Contact Eigenvector Research, Inc.] for more information.


Release notes for the latest version can be found at http://wiki.eigenvector.com/index.php?title=Model_Exporter_Release_Notes


==System Requirements==


Model_Exporter can be executed from either the MATLAB computational environment ([http://mathworks.com Mathworks, Inc., Natick, MA]), or  [[Software User Guide|Solo]] (Eigenvector Research, Inc., Wenatchee, WA). Model_Exporter converts models created by PLS_Toolbox 3.5 or higher or Solo 4.0 or higher.


===Matlab-Based Exporter Requirements===


For execution of Model_Exporter within the MATLAB environment, the following is required:


:Compatible with any version of Matlab released within five years of this product's release. For example, Model_Exporter 3.3.0, released in 2016, is guaranteed to be compatible with any version of Matlab released within five years of that 2016 release date; in this case, Matlab 2011 is the oldest version of Matlab for which we fully guarantee compatibility.
:256 MB RAM (recommended – less may be required)


===Solo-Based Exporter Requirements===


For execution of Model_Exporter from within Solo, the following is recommended:


:Solo+Model_Exporter 4.1 or higher
:Operating system requirements as listed for the specified Solo version
:200 MB Disk Space (for installation; some models may require additional space)
:256 MB RAM (recommended – less may be required)


===Requirements for Using Exported Models===


The requirements to execute an exported model vary depending on the interpreter used, the number of variables in the modeled data, and the complexity of the model (i.e. the number of factors/components included in the model and the types of preprocessing used).


Memory requirements depend on the precision required for the application, the number of variables in the data and the total number of factors in the model. For example, a model working on 10,000 variables and 5 factors would require around 1MB for double-precision calculations and 500KB single-precision calculations.
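
As a rough illustration of where such an estimate comes from (this is a sketch, assuming the dominant stored constant is a variables-by-factors loadings-type matrix of double-precision values):

::<math>10\,000 \text{ variables} \times 5 \text{ factors} \times 8 \text{ bytes} \approx 400\ \text{KB}</math>

with the remainder of the footprint coming from the input vector, preprocessing constants (such as means and scales), and working copies. Single-precision storage roughly halves these figures.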


The software which executes the specific file formats may have additional requirements. See the file format description sections later in this manual for where to locate model execution details.


==Installation==


Model_Exporter is installed by adding the Model_Exporter folder to your Matlab path. Once installed, PLS_Toolbox will recognize the installation and enable the proper menu items. Solo+Model_Exporter has the folder pre-installed.


# Make the Model_Exporter folder your Current Folder in Matlab.
# Run the command 'setpath' from the command line.
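
As a minimal sketch of these two steps at the MATLAB command line (the folder path shown is hypothetical):

<pre>
cd('C:\EVRI\Model_Exporter')   % Step 1: make the Model_Exporter folder the Current Folder
setpath                        % Step 2: run setpath to add the folder to the MATLAB path
</pre>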


==Supported Methods==


Model_Exporter supports the following model types:


:PCA – Principal Components Analysis model
:PLS – Partial Least Squares regression model
:PLSDA – Partial Least Squares discriminant analysis model
:PCR – Principal Components Regression model
:CLS – Classical Least Squares Regression model
:SVM – Support Vector Machine Regression model
:SVMDA – Support Vector Machine Classification model
:ANN – Artificial Neural Network Regression model
:MLR – Multiple Linear Regression model


and preprocessing methods:


:Absolute value     
:Autoscale       
:Baseline (specified)
:Derivative (SavGol) 
:Detrend         
:ELS
:EPO                 
:GLS weighting   
:Log Decay Scaling
:Log10               
:MSC             
:Mean center
:Median center       
:Normalize       
:OSC
:Pareto Scaling     
:Poisson Scaling 
:SNV
:Smooth (SavGol)     
:Sqrt Mean Scale 
:Transmission to Absorbance
:Variance Scaling




Normalization and Baseline support windowing. Normalization supports type 1 (area) and type 2 (length) normalization, but does not support 'Inf' type normalization.


Model_Exporter does not support replacement of missing values (values must be measured for all variables).


==Exporting a Model==


===Exporting from PLS_Toolbox and MATLAB===


Model_Exporter is easily called from the MATLAB environment. After adding the Model_Exporter folder to the MATLAB path, a model can be exported by simply calling the exportmodel function, passing the model structure itself, and an optional input specifying the file name and type to which the exported model should be written. When filename is omitted, Model_Exporter will prompt for a filename, file type, and location.


    exportmodel(modelstructure,filename)


    exportmodel(modelstructure,filename, options)


The third parameter, options, allows specification of how excluded variables are handled, how numerical values are stored (text or binary), and whether the exported m-file is a script or a function. See below for further details of the available options.

Model_Exporter is also accessible from PLS_Toolbox through the Analysis GUI. With the model to export loaded into the Analysis GUI, go to the '''File > Export Model > To Predictor…''' menu and select the file type to export from the flyout menu.

===Exporting from Solo===
 
When installed with the stand-alone Solo software, a model is exported from the Analysis GUI. With the model to export loaded into the Analysis GUI, go to the '''File > Export Model > To Predictor…''' menu and select the file type to export from the flyout menu.
 
===Handling Excluded Variables===
 
When excluded variables are detected within a model, the user will be given two options for how to handle these variables.
 
# Compress Model – Model_Exporter will attempt to remove all references to excluded variables. The created predictor will expect values for only the included variables.
# Use Placeholders – Model_Exporter will create a predictor which expects values for all variables, excluded or included, although excluded values will be ignored.
 
The choice between these two methods depends on the environment in which the exported model is going to be used. If it is easier to always provide all variables to the predictor, then the “Use Placeholders” option is probably preferred. If, instead, only the included variables will be available (e.g. the excluded variables are not going to be measured), compressing the model is the correct approach.
 
In general, the two methods give identical numerical results with the sole exception of models which make use of smoothing and derivative preprocessing. These methods may give slightly different “edge effects” after compressing a model and validation of such models is encouraged.
 
In either case, the header information in the exported model will always reflect the number of variables expected and any labels or axisscale information for those variables.
 
===Storing Numerical Values as Binary===
 
With large numbers of variables, and with certain types of preprocessing (e.g. derivatives and smoothing), the numerical matrices needed to apply the model can become quite large, particularly when stored in the standard text format of an exported model. When the '''m-file format''' is selected as the output target, you have the choice to store the numerical values in one of three formats:
 
* Text in the script (Default)
* Binary data file in DOUBLE (64-bit) precision
* Binary data file in SINGLE (32-bit) precision
 
Text in the script is the default format to store numerical values and allows all the model information to be included in a single file (the text script.) The second two options instead store these values in a separate binary file as a simple stream of numerical values of the indicated precision. When the binary formats are selected, the script is written to automatically open the binary file and read in the values from there instead of parsing them out of the script.
 
:'''Notes:'''
:# This file format is currently only available for scripts exported in the m-file format. [mailto:helpdesk@eigenvector.com Contact Eigenvector Research] if you have interest in using a similar format for other export formats.
:# Single Precision Binary will reduce the accuracy of the predictions due to rounding error. The extent of error will depend greatly on the noise level of the data and the precision required by the model. Models exported with this precision should be validated with known samples to determine the effect of rounding on the predictions for the given model.
:# It is assumed that the binary file is in the "current working directory" (unless the script is edited to change the file location.)
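
As a rough sketch of what reading such a binary constants file involves (the file name is hypothetical, and the exported m-file already contains the equivalent code, so this is illustrative only):

<pre>
% Read a stream of values written in DOUBLE (64-bit) precision
fid  = fopen('mymodel_data.bin','r');   % hypothetical binary file in the current working directory
vals = fread(fid,inf,'double');         % use 'single' if the 32-bit format was selected
fclose(fid);
% vals is then reshaped into the constant matrices the model requires
</pre>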
 
==M-file Format==
 
The m-files output by [[Model Exporter]] are stand-alone. That is, they can be run by the MATLAB computational environment (available from Mathworks, Inc., http://www.mathworks.com) without any additional toolboxes or the LabVIEW environment (available from National Instruments, Inc., http://www.ni.com) with any MathScript-enabled package.
 
For maximum flexibility, an exported model is written as a script which expects only to find a variable named x in its workspace. This variable provides the input data to which the model should be applied. It is important to note that the variable x will be modified by the script and, thus, the caller should not expect the variable to remain unchanged. See "Creating Functions from Exported Models", below, for more information on how to isolate the script and call it as a function. (Those unfamiliar with MATLAB scripts and functions should read the MATLAB documentation describing these concepts and the associated "variable scope" documentation.)
 
The input variable x should be a vector, representing a single sample, and the output will be a prediction for this one sample.
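
As a minimal sketch of applying an exported script to one sample (the script name and the number of variables shown here are hypothetical; the expected length of x is documented in the exported file itself):

<pre>
x = [0.12 0.30 0.05 0.88];   % row vector: one measured value per expected variable
mymodel_exported             % run the exported script; note that it overwrites x
disp(scores)                 % prediction results are left as workspace variables
disp(T2)
disp(Q)
</pre>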
 
===Options===
When exporting from the Matlab environment to a .m file, the ''options'' parameter makes it possible to specify preferred behavior for these choices. The available options are:
* '''handleexcludes''': [ {'ask'} | 'ignore' | 'placeholders' ] Governs how excluded variables should be handled.
:'ignore' = attempt to remove all references to excluded variables. Only included values will be expected.
:'placeholders' = expect values for all variables, although excluded values will not be used by model.
:'ask' = prompt user for desired behavior.
 
* '''datastorageformat''': [ {'ask'} | 'text' | 'binarydouble' | 'binarysingle' ] Governs output format of numerical values.
:'text' = store numerical values as text in the script (the normal output mode).
:'binarydouble' = store as binary data file in DOUBLE precision.
:'binarysingle' = store as binary data file in SINGLE precision.
:'ask' = prompt user for desired format.
:Note: Single Precision Binary will reduce the accuracy of the predictions due to rounding error. Validate results against known samples if single precision is used.
:Note: Binary output formats provide a smaller memory footprint but require parsers that can execute binary file reads.
 
* '''creatematlabfunction''': [ {'no'} | 'yes' ] Governs whether the exported m-file is written as a Matlab script or as a function.
:'yes' outputs m-files with appropriate code to allow calls to the model application in a functional form.
:'no' outputs m-file in script form.
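
As a hedged example of exporting with explicit options from the command line (the model variable, file name, and the use of a plain structure for ''options'' are assumptions for illustration):

<pre>
options.handleexcludes       = 'ignore';        % compress the model; expect only included variables
options.datastorageformat    = 'binarydouble';  % store numeric constants in a separate binary file
options.creatematlabfunction = 'yes';           % export as a function rather than a script
exportmodel(model,'mymodel.m',options)
</pre>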
 
===Input Data===
 
The expected length (number of elements) and contents of the input x vector are defined in the comments and initial sections of the exported model script. The script, as exported, does not use this information to perform any validity testing on the input variable. This information is only provided to indicate to the user what type of data is expected.
 
The example below shows the part of an exported model which indicates the expected data size and associated context information. This particular model expects input data of ten variables as a row vector (as described by inputdata.size). The labels of these ten variables are specified in the string array inputdata.label. As there was no axisscale information in this particular data, the inputdata.axisscale value is empty.
 
inputdata.size = [ 1 10 ];
inputdata.axisscale = [ ];
inputdata.label = ['Fe';'Ti';'Ba';'Ca';'K ';'Mn';'Rb';'Sr';'Y ';'Zr'];
 
The user can make use of this information to assure the data being passed to the model is correct. Again, as written, the script provides no testing. Incorrect data sizes will be indicated by a runtime error when executing the script.
 
===Returned Results===
 
The results available from a model prediction will be present as variables in the script's workspace. The user is responsible for making use of these variables as needed. The following list specifies the supported results which may be of interest to the user.
 
:'''scores''' - Scores for each component as a row vector.
:'''T2''' - The Hotelling's T^2 as a scalar value.
:'''Q''' - The sum squared x residuals (Q value) as a scalar value.
:'''Tcon''' - Variable contributions to T2 as a row vector.
:'''Qcon''' - Q residuals contributions (x residuals) as a row vector.
:'''x''' - The preprocessed version of the input data.
:'''Xhat''' - Model estimate of the data as a row vector (in preprocessed units - comparable to the preprocessed '''x''').
 
All regression and PLSDA models return the following additional value:
 
:'''yhat''' - Model prediction for y (predicted y value) as a scalar value or vector.
 
PLSDA discriminant analysis models also return an additional value:
 
:'''probs''' - Model predicted probability of the input sample belonging to each class, where the classes are ordered as unique(y), as a vector. (y refers to the classes variable originally used in building the model).
 
SVM regression analysis models return values:
:'''yhat''' - Model prediction for y (predicted y value) as a scalar.
:'''nsvs''' - Number of support vectors used by the model, as a scalar.
 
SVMDA discriminant analysis (classification) models return values:
:'''probs''' - Model predicted "probability" of the input sample belonging to each class, where the classes are ordered as shown in classorder (below). Note that the probability reported here is '''not''' the same as the probability reported by the SVM algorithm (based on a maximum likelihood calculation). Instead, this is based on the classvotes reported below. The class votes are normalized to give the fraction of votes for each class. This fraction is raised to the power of 10, then normalized to unit area again. This gives a ROUGH estimate of probability where the class with the highest votes also gets the highest probability and the remaining classes are ranked in decreasing order. The log of the probability is roughly proportional to the number of class votes that would have to change to cause the assignment to change. A short sketch of this conversion is given after this list.
::NOTE: For historical reasons, the output '''prob''' will also contain the identical probabilities as '''probs'''.
:'''classvotes''' - Votes cast in favor of each class, as a vector. The class with most votes is the predicted class of the input sample.
:'''classorder''' - A vector of class numbers identifying which class each classvotes value is associated with. For example, if the second entry in classvotes has the largest value then the second value in classorder gives the winning class number. See the model's model.classification.classnums and model.classification.classids to translate between class numbers and class names. Ties between two or more classes are resolved by choosing the first.
:'''nsvs''' - Number of support vectors used by the model, as a scalar.
:'''df''' - Vector of decision function values for pairwise classifiers, as a vector. If there are N classes then there are N*(N-1)/2 pairwise classifiers used. The decision functions are in order: class 1-2, 1-3,...1-N, 2-3, 2-4, ...,2-N,..., (N-1)-N. The classvotes are based on the decision function values.
 
:Note these exported model results should be the same as results from SVMDA when using option probabilityestimates = 0 (even if the exported model was built using option probabilityestimates = 1). Thus the exported model's predictions should only be validated against SVMDA models built using probabilityestimates = 0.
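
The vote-to-probability conversion described for '''probs''' above can be sketched as follows (illustrative only; the exported script contains its own equivalent code):

<pre>
p     = classvotes ./ sum(classvotes);   % fraction of votes for each class
p     = p.^10;                           % raise to the power of 10
probs = p ./ sum(p);                     % renormalize to unit area
</pre>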
 
===Creating Functions from Exported Models===
 
Although the exported model is written as a script which would normally operate in the base workspace of MATLAB, the user can also wrap the script into a function by simply adding a standard function definition to the script file. A function wrapper keeps the input variable x from being modified outside the function. This approach tends to be safer than a script, but is not implemented by default in order to provide the widest flexibility to the user.
 
An example function line is provided in the exported model file (commented out) along with instructions for customization. In addition, there is an example block of code (also commented out by default) which will return “expected information” about x if the function is called without any inputs.
 
In general, the function definition requires only one input, x, and can output any of the variables which are present after the script's execution. An example would be:
 
  function [scores,Q,T2,Qcon,Tcon] = mymodel(x)
 
This function definition returns the vectors: scores, Qcon, and Tcon, as well as the scalar values: Q and T2 to the caller.
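
Assuming the wrapper above has been saved as mymodel.m (a hypothetical name), a caller could then invoke it as:

  [scores,Q,T2,Qcon,Tcon] = mymodel(x);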
 
Note, as discussed above, the user can have this conversion of the exported m-file from a script to a function applied automatically by specifying the exportmodel option '''creatematlabfunction''' = 'yes'.
 
==TCL File Format==
 
The tcl-files output by [[Model Exporter]] can be run by either a stand-alone Tcl parser (for example see the "Batteries Included" ActiveTcl Distribution http://www.tcl.tk/software/tcltk/ ) or by Symbion (available from Symbion Systems, Inc., http://www.gosymbion.com ). When run in a stand-alone Tcl parser, the La package for matrix support is required (available free from: http://www.hume.com/la/ )
 
For maximum flexibility, an exported model is written as a Tcl script which expects only to find a variable named x in its workspace. This variable provides the input data to which the model should be applied. It is important to note that the variable x will be modified by the script and, thus, the caller should not expect the variable to remain unchanged.
 
The input variable x should be a vector, representing a single sample, and the output will be a prediction for this one sample.
 
===Input Data===
 
The expected length (number of elements) and contents of the input x vector are defined in the comments and initial sections of the exported model script. The script, as exported, does not use this information to perform any validity testing on the input variable. This information is only provided to indicate to the user what type of data is expected.
 
The example below shows the part of an exported model which indicates the expected data size and associated context information. This particular model expects input data of ten variables as a row vector (as described by inputdata.size). The labels of these ten variables are specified in the string array inputdata.label. As there was no axisscale information in this particular data, the inputdata.axisscale value is empty.
 
# inputdata.size = [ 1 10 ];
# inputdata.axisscale = [ ];
# inputdata.label = ['Fe';'Ti';'Ba';'Ca';'K ';'Mn';'Rb';'Sr';'Y ';'Zr'];
 
The user can make use of this information to assure the data being passed to the model is correct. Again, such testing is not provided by the script as written. Incorrect data sizes will be indicated by a runtime error when executing the script.
 
===Returned Results===
 
The results available from a model prediction will be present as variables in the script's workspace. The user is responsible for making use of these variables as needed. The list of output variables is the same as those listed under the [[#Returned_Results|M-file format description]].
 
==XML File Format==
 
The input variable x should be a vector, representing a single sample, and the output will be a prediction for this one sample.
 
===Numerical Matrix Definitions===
 
The XML format utilizes custom tags to define various parts of the model. For some tags, the content is a vector or matrix of values. In these cases, a comma character delineates different column elements and semicolon indicates the end of a matrix row and the beginning of the next. All white space is ignored. If a given matrix contains only one row, it is described as a "row vector". A matrix with a single column is described as a "column vector". Orientation of such vectors is critical to the mathematical operations and must be parsed appropriately.
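
For example, under these rules the text "1,2,3;4,5,6" describes a matrix with two rows and three columns, while "1;2;3" describes a three-element column vector and "1,2,3" a three-element row vector.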
 
===XML Structure===
 
The XML file will consist of a top level <model> tag which will contain an <information> tag, an <inputdata> tag, and one or more step segments, each wrapped in a separate <step> tag.
 
'''<model>'''
:'''<information>'''  General information on the encoded model.
::'''<source>'''    Text description of file source (EVRI Model_Exporter).
::'''<modeltype>'''  Standard model method acronym (PCA, PLS, etc).
::'''<description>''' Text description of model including preprocessing, data size(s), and number of components. Each row of this multi-row string is delineated by <sr> (string row) tags.
::'''<datasource>''' Information block of modeled calibration data. <datasource> is a multi-cell table format. There will be one column of information for each block of data required by the given modeltype (e.g. PCA requires 1 block, PLS requires 2). Each <td> tag will contain a number of sub-fields describing the data used for the given block. Informational only, sub-fields may change.
::'''<outputs>''' Table-formatted (TR and TD wrapped) array of the names associated with the columns of the yhat output (if any). These can be used by the caller to assign string descriptions to each output. They are not used by the interpreter itself, however.
:'''</information>'''
:'''<inputdata>'''    Specific requirements for input data including the following information:
::'''<size>'''    Numeric class row vector describing the size expected for the input data (x). The first element of the vector gives the expected number of rows, the second is the expected number of columns.
::'''<axisscale>'''  Numeric class row vector providing the expected axisscale of the input values. The actual values stored in the axisscale vector are completely dependent on the application and the analytical method used and may be empty.
::'''<label>'''    Strings (delimited by <sr> sub-tags) defining the names of the variables expected in the input data (x). The names are dependent on the application and the analytical method used and may be empty.
:'''</inputdata>'''
:'''<step>'''      Repeated tag for each step required for making a prediction using this model. Will contain the following sub-fields:
::'''<sequence>'''  Numeric class single value indicating the order in which this step should be performed. The steps are generally included in the XML file in sequence-order (sequence 1 will be the first step in the file), but this field can be used to assure in-order processing of steps.
::'''<description>''' String class description of the step (informational only)
::'''<constants>'''  Contains information on constants required by this step. Each constant is defined as a sub-tag herein. The name of the constant is the sub-tag name and will contain a matrix (or vector) of values to use for the given constant. See below for more information.
::'''<script>'''    One or more rows of strings describing the mathematical operations to perform this step. When more than one mathematical operation is to be performed, each will be given in a separate string row <sr> tag; however, these row breaks can be ignored, since each mathematical operation is terminated with a semicolon.
:'''</step>'''
''(Additional <step> tags located here…)''
 
'''</model>'''
 
See the provided files "pcaexample.xml" and "plsexample.xml" for full examples of the XML structure.
 
==Requirements for XML Interpreters==
 
To execute each of the <step> segments contained in the XML file, an interpreter must be able to parse the defined constants into matrices and be able to execute the script commands. The following sections give the specifications for an interpreter.
 
For examples of interpreters, see the [[Model_Exporter Interpreter]] objects in folder: interpreters/MEInterpreter, or the PHP interpreter in interpreters/predict.php. These are all distributed with Model_Exporter and are ''freely-distributable without additional licensing''.
 
===Managing of Constants and Variables===
* The interpreter must  maintain a "workspace" of stored constants and variables in  which the matrices can be accessed by a variable name (specified by the  tag in which the given constant was read, for example:<pre>&lt;s class="numeric" size="[1,1]">4&lt;/s></pre>
:would define a constant "s" which was equal to the scalar value 4).
* Constants are NOT case sensitive and any interpreter must be written to consider the upper or lower case variables as the same.
* "Constants" are just pre-defined variables. Although every effort will be made to avoid changing these values, it is NOT a rule that these "constants" cannot be  changed – scripts may modify and overwrite these values. They are called  "constants" because they are initially defined by the model.
* The enclosing tag for the  constant will define the class of the constant (in this application,  constants will always be "numeric") and will also define the  size of the constant using the attribute 'size'. For example, <pre>&lt;s class="numeric" size="[1,5]"></pre> defines that the enclosed constant will be a row vector (1 row) of 5 elements (5 columns).
* Prior to the execution of  the script(s), the XML interpreter must place a variable named "x"  (lower-case) in their workspace. This variable must contain the data to which  the model should be applied. The value of "x" will be modified by the script so, following initial assignment, no alteration of this  variable should be done outside what is specified by the script.
* All constants/variables  must be retained for the entirety of a given step. In many cases, the  variables remaining in the workspace will contain results of interest to  the caller and, therefore, all workspace values should be retained. The variable "x" must always be present.
 
===Script Execution===
 
The following lists define the script commands which must be supported by the interpreter (scripts may contain only these commands). When applicable, the Matlab operator corresponding to the given function is given. Interpreters do not need to interpret these operators. They will never be used in any script and are provided here only for reference.
 
====Single Input Functions====
 
C = function(A); 
  abs            Absolute Value    Removal of sign of elements ( abs(A) )
  log10          log (base 10)      Base 10 logarithm of elements ( log10(A) )
  transpose      transpose array    Exchange rows for columns ( A' )
 
====Double Input Functions====
 
C = function(A,B);
    plus          Plus                              Addition of paired elements ( A+B )
    minus        Minus                            Subtraction of paired elements ( A-B )
    mtimes        Matrix multiply (dot product)    Dot product of matrices ( A*B )
    times        Array multiply                    Multiplication of paired elements ( A.*B )
    power        Array power                      Exponent using paired elements ( A.^B )
    rdivide      Right array divide                Division of paired elements ( A./B )
    cols          Index into columns of matrix      Select or replicate columns  ( A(:,B) )
    rows          Index into rows of matrix        Select or replicate rows    ( A(B,:) )
 
===Mathematical Operation Requirements===
 
* All mathematical operations are expected to be performed using signed, single precision numbers.
* With the exception of mtimes (dot product), all operations are "element-by-element". That is, the two matrices passed will be equal in size (see scalar exception below) and the mathematical operation is performed between each element of matrix A and its corresponding element in matrix B. The output matrix C is always the same size as A and B.
* Scalar Exception (except mtimes): A or B may be a scalar even if the other isn't. In this situation, the scalar input must be interpreted as an appropriately-sized matrix containing all the same value.
* mtimes (dot product) is performed using the standard linear-algebraic dot-product operation. In generic terms, the input matrix A will contain m rows and k columns, the input matrix B will contain k rows and n columns and the output matrix C will contain m rows and n columns. The following equation is used to calculate each element of the C matrix (loop for i = 1 to m and for j = 1 to n):
::<math>C_{i,j}=A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + A_{i,3}B_{3,j}  + ... + A_{i,k}B_{k,j}</math>
:Subscripts indicate the row and column indexing (respectively) into the  given array. When either A or B is a scalar, the mtimes operation should  be handled as a "times" operation. That is, the operation  becomes an element-by-element multiplication where each element of the  matrix input is multiplied by the scalar value and C is the same size as  the input matrix.
* cols and rows indexing operations should expect a row vector for B that may have repeated elements (which allows replication of a given row or column). For example, given a row vector for B of
::B = [1 1 1 2 2 2]
:passed into the cols operation, this would replicate column 1 three times then replicate column 2 three times giving a total of 6 columns in the output.
 
===Script Execution Requirements===
* The format for a single script command is: <pre> C = function(A,B);</pre> where function is one of the above functions, A and B are the pre-defined constants / variables to use as input to function, and C is the output. Input B will be omitted for functions which require only one input. Each command of the script will end in a semi-colon ";". All commands must be performed in the order in which they appear in the script. (A hypothetical example step is shown after this list.)
* The expected size, axisscale, and labels associated with x will be stored in the <sourcedata> tag (if any exist). These values can be used by an XML interpreter to verify the data being analyzed.
* Constants are NOT case sensitive and any interpreter must be written to consider the upper or lower case variables as the same.
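
As an illustration of this command format, a hypothetical mean-centering and projection step (the constant names are invented for illustration; actual exported scripts define their own constant names) might read:

<pre>
xmc = minus(x,mncon);
scores = mtimes(xmc,loadscon);
</pre>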
 
===Returned Results===


The results returned by a model prediction will be present as variables in the interpreter's workspace upon completion of the XML parsing. The returned results are the same as those listed for the [[#Returned_Results|M-file format]].


==Requirements for XML Writers==


The following rules are to be followed by the script creation algorithm of Model_Exporter. These rules may be of interest to script interpreters, but should not have any critical impact on interpreter design.


* Nesting of functions is not  allowed. Functions can only take variables or pre-defined constants as  input.
* NO iterative processes are  supported. All scripts must be straight-through executing (no control  structures such as "ifs", "while", etc are supported.)
* Missing data replacement  is not supported.
* As of version 1.0 of this product,  only variables or pre-defined constants may be used in a function. No  "in-line" constants may be used. For example:


    C = minus(A,1);


is invalid because the constant "1" has to be pre-defined. This command should instead be written where the "1" is pre-defined as a constant and the name of that constant is used.


* Variables are NOT case  sensitive and any interpreter must be written to consider the upper or  lower case variables as the same. Note, however, that the Matlab output of  this function will be case-sensitive code so the scripts should try to be  consistent in case, even if other interpreters won't care.