==Advanced Preprocessing: Multivariate Filtering==

===Introduction===


In some cases, there is insufficient selectivity in the variables to easily remove backgrounds or other signals which interfere with a multivariate model. In these cases, using multivariate filtering methods before model calibration may help simplify the end model. Multivariate filters identify some unwanted covariance structure (i.e., how variables change together) and remove these sources of variance from the data prior to calibration or prediction. In simple terms, these filters can be viewed as pattern filters: they remove certain patterns among the variables. The resulting data contain only those covariance patterns which passed through the filter and are, ideally, useful or interesting in the context of the model.


Identification of the patterns to filter can be based on a number of different criteria. The full discussion of multivariate filtering methods is outside the scope of this chapter, but it is worth noting that these methods can be very powerful for calibration transfer and instrument standardization problems, as well as for filtering out other differences between measurements which should otherwise be the same (e.g., differences in the same sample due to changes with time, or differences within a class of items being used in a classification problem).


One common method to identify the multivariate filter "target" uses the Y-block of a multivariate regression problem. This Y-block contains the quantitative (or qualitative) values for each sample and, theoretically, samples with the same value in the Y-block should have the same covariance structure (i.e., they should be similar in a multivariate fashion). A multivariate filter can be created which attempts to remove differences between samples with similar y-values. This filter should reduce the complexity of any regression model needed to predict these data. Put in mathematical terms, the multivariate filter removes signals in the X-block (measured responses) which are orthogonal to the Y-block (property of interest).
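
As a minimal illustration of this idea (a generic sketch in MATLAB, not the toolbox's implementation), the part of a mean-centered X-block that is orthogonal to a single y vector can be isolated by projecting X onto y and subtracting:

<pre>
% Minimal sketch: split a mean-centered X-block into the part correlated
% with y and the part orthogonal to y (generic MATLAB, for illustration).
Xmc   = X - ones(size(X,1),1)*mean(X,1);  % mean-center the X-block
ymc   = y - mean(y);                      % mean-center the y vector
Xy    = ymc*(ymc\Xmc);                    % rank-one part of X predictable from y
Xorth = Xmc - Xy;                         % part of X orthogonal to y ("clutter")
</pre>

Filters such as OSC and GLSW identify and remove the strongest patterns within variance like <code>Xorth</code>, rather than subtracting all of it, as described below.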


Three multivariate filtering methods are provided in the Preprocessing window: Orthogonal Signal Correction (OSC), Generalized Least Squares Weighting (GLSW), and External Parameter Orthogonalization (EPO), the last of which also encompasses Extended Mixture Model (EMM) filtering. In the context of the Preprocessing window, these methods require a Y-block and are thus only relevant in the context of regression models. Additionally, as of the current version of PLS_Toolbox, the graphical interface access to these functions only permits their use to orthogonalize to a Y-block, not for calibration transfer applications. From the command line, however, these functions can also be used for calibration transfer or other filtering tasks. For more information on these uses, please see the calibration transfer and instrument standardization chapter of this manual.


===OSC (Orthogonal Signal Correction)===

Orthogonal Signal Correction (Sjöblom et al., 1998) removes variance in the X-block which is orthogonal to the Y-block. Such variance is identified as some number of factors (described as components) of the X-block which have been made orthogonal to the Y-block. When applying this preprocessing to new data, the same directions are removed from the new data prior to applying the model.

The algorithm starts by identifying the first principal component (PC) of the X-block. Next, the loading is rotated to make the scores be orthogonal to the Y-block. This loading represents a feature which is not influenced by changes in the property of interest described in the Y-block. Once the rotation is complete, a PLS model is created which can predict these orthogonal scores from the X-block. The number of components in the PLS model is adjusted to achieve a given level of captured variance for the orthogonal scores. Finally, the weights, loadings, and predicted scores are used to remove the given orthogonal component, and are also set aside for use when applying OSC to a new unknown sample. This entire process can then be repeated on the "deflated" X-block (the X-block with the previously-identified orthogonal component removed) for any given number of components. Each cycle results in additional PLS weights and loadings being added to the total that will be used when applying to new data.
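
The following schematic sketch traces one cycle of this process in generic MATLAB code. It is a simplification for illustration only: a plain least-squares step stands in for the inner PLS model, and the iterative rotation and tolerance checks described above are omitted.

<pre>
% One simplified OSC deflation cycle (illustration only; the toolbox's
% osccalc adds iterative rotation and a variance-controlled inner PLS model).
[u,s,v] = svds(X,1);     % loading v of the first principal component of X
t  = X*v;                % scores along that loading
to = t - Y*(Y\t);        % rotate/orthogonalize the scores against the Y-block
w  = X\to;               % weights predicting the orthogonal scores from X
th = X*w;                % predicted orthogonal scores
p  = (X'*th)/(th'*th);   % loading of the orthogonal component
X  = X - th*p';          % deflate: remove the orthogonal component from X
% w and p are set aside so the same component can be removed from new data.
</pre>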


There are three settings for the OSC preprocessing method: number of components, number of iterations, and tolerance level. The number of components defines how many times the entire process will be performed. The number of iterations defines how many cycles will be used to rotate the initial PC loading to be as orthogonal to Y as possible. The tolerance level defines the percent variance that must be captured by the PLS model(s) of the orthogonalized scores.


In the Preprocessing window, this method allows for adjustment of the settings identified above. From the command line, this method is performed using the [[osccalc]] and [[oscapp]] functions.
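
A command-line session might look like the sketch below. The exact argument order shown is an assumption based on the settings described above; consult <code>help osccalc</code> in your installation for the authoritative calling convention.

<pre>
% Hedged sketch of command-line OSC (verify signatures with "help osccalc").
ncomp = 2; iter = 20; tol = 99.9;          % settings discussed above (assumed order)
[nx,nw,np] = osccalc(x,y,ncomp,iter,tol);  % filter the calibration X-block
% ...build the regression model on the filtered data nx...
newx_f = oscapp(newx,nw,np);               % apply the same filter to new data
</pre>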


===GLS Weighting and EPO===


Generalized Least Squares Weighting (GLSW) is a filter calculated from the differences between samples which should otherwise be similar. These differences are considered interferences or "clutter," and the filter attempts to down-weight (shrink) those interferences. A simplified version of GLSW, called External Parameter Orthogonalization (EPO), performs an orthogonalization (complete subtraction) of some number of significant patterns identified as clutter. A further simplification of EPO, in which the data are orthogonalized to all identified clutter patterns, emulates the Extended Mixture Model (EMM).


====Clutter Identification====


In the case of a classification problem, similar samples would be the members of a given class. Any variation within each class group (known as "within-class variance") can be considered clutter which makes the classification task harder. The goal of GLSW in this case is to remove as much of this within-class variance as possible without decreasing the separation between classes (the between-class variance).


In the case of a calibration transfer problem, similar samples would be data from the same samples measured on two different instruments or on the same instrument at two different points in time. The goal of GLSW is to down-weight the differences between the two instruments and, therefore, make them appear more similar. A regression model built from GLSW-filtered data can be used on either instrument after applying the filtering to any measured spectrum. Although this specific application of GLSW is not covered by this chapter, the description below gives the mathematical basis of this use.


GLSW can also be used prior to building a regression model in order to remove variance from the X-block which is mostly orthogonal to the Y-block. This application of GLSW is similar to OSC (see above), and such filtering can allow a regression model to achieve a required error of calibration and prediction using fewer latent variables. In this context, GLSW uses samples with similar Y-block values to identify the sources of variance to down-weight.


In all cases, the default algorithm for GLSW uses a single adjustable parameter, <math>\alpha</math>, which defines how strongly GLSW down-weights interferences. Adjusting <math>\alpha</math> towards larger values (typically above 0.001) decreases the effect of the filter. Smaller values of <math>\alpha</math> (typically 0.001 and below) apply more filtering.


====GLSW Algorithm====


The GLSW algorithm will be described here for the calibration transfer application (because it is simpler to visualize) and then the use of GLSW in classification and regression applications will be described. In all cases, the approach involves the calculation of a covariance matrix from the differences between similar samples. In the case of calibration transfer problems, this difference is defined as the numerical difference between the two groups of mean-centered transfer samples. Given two sample matrices, X1 and X2, the data are mean-centered and the difference calculated:


:<math>\mathbf{X}_{1,mc}=\mathbf{X}_{1}-\mathbf{1}\bar{\mathbf{x}}_{1}</math> <div align="right">(1)</div>


:<math>\mathbf{X}_{2,mc}=\mathbf{X}_{2}-\mathbf{1}\bar{\mathbf{x}}_{2}</math> <div align="right">(2)</div>


:<math>\mathbf{X}_{d}=\mathbf{X}_{2,mc}-\mathbf{X}_{1,mc}</math> <div align="right">(3)</div>




where '''1''' is a vector of ones equal in length to the number of rows in '''X<sub>1</sub>''', <math>\bar{\mathbf{x}}_1</math> is the mean of all rows of '''X<sub>1</sub>''', and <math>\bar{\mathbf{x}}_2</math> is the mean of all rows of '''X<sub>2</sub>'''. Note that this requires that '''X<sub>1</sub>''' and '''X<sub>2</sub>''' are arranged such that corresponding rows contain the same samples as measured on the two instruments.


The next step is to calculate the covariance matrix, C:


:<math>\mathbf{C}=\mathbf{X}_d^T\mathbf{X}_d</math> <div align="right">(4)</div>


followed by the singular-value decomposition of the matrix, which produces the left eigenvectors, '''V''', and the diagonal matrix of singular values, '''S''':


:<math>\mathbf{C}=\mathbf{V}\mathbf{S}^2\mathbf{V}^T</math> <div align="right">(5)</div>

Next, a weighted, ridged version of the singular values is calculated:
:<math>\mathbf{D}=\sqrt{\frac{\mathbf{S}^2}{\alpha}+\mathbf{1}_D}</math> <div align="right">(6)</div>
where '''1'''<sub>D</sub> is a diagonal matrix of ones of appropriate size and <math>\alpha</math> is the weighting parameter mentioned earlier. The scale of the weighting parameter depends on the scale of the variance in '''X'''<sub>d</sub>. Finally, the inverse of these weighted eigenvalues is used to calculate the filtering matrix:
:<math>\mathbf{G}=\mathbf{V}\mathbf{D}^{-1}\mathbf{V}^T</math> <div align="right">(7)</div>
 
This multivariate filtering matrix is used by simply projecting a sample onto it. The result of this projection is that correlations present in the original covariance matrix are down-weighted (to the extent defined by <math>\alpha</math>). The filtering matrix is applied both to the original calibration data prior to model calibration and to any future new data prior to application of the regression model.
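
Equations 1 through 7 amount to only a few lines of code. The sketch below implements them directly in generic MATLAB for the two-instrument case (it follows the equations rather than calling the toolbox's [[glsw]] function):

<pre>
% GLSW filter from paired transfer samples X1, X2 (rows matched by sample),
% following equations 1-7. alpha is the adjustable weighting parameter.
alpha = 0.02;
X1mc = X1 - ones(size(X1,1),1)*mean(X1,1);   % eq 1: mean-center X1
X2mc = X2 - ones(size(X2,1),1)*mean(X2,1);   % eq 2: mean-center X2
Xd   = X2mc - X1mc;                          % eq 3: difference matrix
C    = Xd'*Xd;                               % eq 4: covariance of differences
[V,S2] = svd(C);                             % eq 5: C = V*S^2*V'
D    = sqrt(diag(S2)./alpha + 1);            % eq 6: weighted, ridged values
G    = V*diag(1./D)*V';                      % eq 7: the filtering matrix
Xf   = X*G;                                  % filter calibration or new data
</pre>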
 
The choice of <math>\alpha</math> depends on the scale of the original values but also on how similar the interferences are to the net analyte signal. If the interferences are similar to the variance necessary for the analytical measurement, then <math>\alpha</math> will need to be higher in order to keep from removing analytically useful variance. However, a higher <math>\alpha</math> will decrease the extent to which interferences are down-weighted. In practice, values between 1 and 0.0001 are often used.
 
====Y-Gradient GLSW====
When using GLSW to filter out X-block variance which is orthogonal to a Y-block, a different approach is used to calculate the difference matrix,  '''X'''<sub>d</sub>. In this situation we have only one X-block, '''X''', of measured calibration samples, but we also have a Y-block, '''y''' (here defined only for a single column-vector), of reference measurements. To a first approximation, the Y-block can be considered a description of the similarity between samples. Samples with similar y values should have similar values in the X-block.
 
In order to identify the differences between samples with similar y values, the rows of the X- and Y-blocks are first sorted in order of increasing y value. This puts samples with similar values near each other in the matrix. Next, the difference between proximate samples is determined by calculating the derivative of each column of the X-block. These derivatives are calculated using a 5-point, first-order, Savitzky-Golay first derivative (note that a first-order polynomial derivative is essentially a block-average derivative, performing smoothing and differentiation simultaneously). This derivative yields a matrix, '''X'''<sub>d</sub>, in which each sample (row) is an average of the difference between it and the four samples most similar to it. A similar derivative is calculated for the sorted Y-block, yielding the vector '''y'''<sub>d</sub>, a measure of how different the y values are for each group of 5 samples.
 
At this point, '''X'''<sub>d</sub> could be used in equation 4 to calculate the covariance matrix of differences. However, some of the calculated differences (rows) may have been computed from groups of samples with significantly different y values. These rows contain features which are correlated to the Y-block and should not be removed by GLS. To avoid this, the individual rows of '''X'''<sub>d</sub> need to be re-weighted by converting the sorted Y-block differences into a diagonal re-weighting matrix, '''W''', in which the ''i''<sup>th</sup> diagonal element, ''w''<sub>i</sub>, is calculated from the rearranged equation
 
:<math>\log_2(w_i)=-\frac{\mathbf{y}_{d,i}}{s_{yd}}</math> <div align="right">(8)</div>
 
The value <math>\mathbf{y}_{d,i}</math> is the ''i''<sup>th</sup> element of the '''y'''<sub>d</sub> vector, and ''s''<sub>yd</sub> is the standard deviation of the y-value differences:
 
:<math>s_{yd}=\sqrt{\sum_{i=1}^m{\frac{(y_{d,i}-\bar{y}_d)^2}{m-1}}}</math> <div align="right">(9)</div>
 
 
The re-weighting matrix is then used along with '''X'''<sub>d</sub> to form the covariance matrix
 
:<math>\mathbf{C}=\mathbf{X}_d^T\mathbf{W}^{2}\mathbf{X}_d</math> <div align="right">(10)</div>
 
which is then used in equations 5 through 7 as described above.
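
A generic sketch of this construction is shown below; it uses the 5-point, first-order Savitzky-Golay coefficients directly and keeps only the interior rows, so the edge handling is a simplifying assumption rather than the toolbox's exact behavior.

<pre>
% Y-gradient GLSW clutter covariance (equations 8-10), generic MATLAB sketch.
[ys,idx] = sort(y);                   % sort samples by increasing y value
Xs = X(idx,:);
c  = [-2 -1 0 1 2]/10;                % 5-pt, 1st-order Savitzky-Golay 1st derivative
m  = size(Xs,1);
Xd = zeros(m-4,size(Xs,2)); yd = zeros(m-4,1);
for i = 3:m-2
  Xd(i-2,:) = c*Xs(i-2:i+2,:);        % averaged difference around sample i
  yd(i-2)   = c*ys(i-2:i+2);          % corresponding y-value difference
end
syd = std(yd);                        % eq 9: standard deviation of y differences
w   = 2.^(-yd./syd);                  % eq 8: down-weight rows with large y change
C   = Xd'*diag(w.^2)*Xd;              % eq 10: re-weighted covariance matrix
% C is then used in equations 5 through 7 exactly as before.
</pre>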
 
This approach is discussed in:
:B. M. Zorzetti, J. M. Shaver, J. J. Harynuk, "Estimation of the age of a weathered mixture of volatile organic compounds," Analytica Chimica Acta, '''694''', 31–37, 2011.
 
====External Parameter Orthogonalization (EPO)====
An alternative multivariate filter called External Parameter Orthogonalization (EPO) uses the same process as GLSW except that only a certain number of the eigenvectors calculated in equation 5 are kept, and the '''D''' matrix calculated in equation 6 is replaced with an identity matrix. The result is that '''X''' is "hard-orthogonalized" to the retained eigenvectors (those directions are completely removed) rather than simply "shrunk" along those directions as is done with GLSW.
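
In code, the only changes relative to the GLSW sketch above are that a chosen number of eigenvectors is kept and those directions are subtracted completely (a minimal sketch, with the number of clutter components ''k'' assumed to be chosen by the user):

<pre>
% EPO: hard-orthogonalize X to the first k eigenvectors of the clutter
% covariance C from equation 4 (minimal sketch).
k  = 2;                            % number of clutter components to remove
[V,S2] = svd(C);                   % eigenvectors of C, as in equation 5
Vk = V(:,1:k);                     % the k largest clutter directions
G  = eye(size(C,1)) - Vk*Vk';      % projection removing them completely
Xf = X*G;                          % X "hard-orthogonalized" to the clutter
% Keeping all significant eigenvectors reproduces the EMM case noted below.
</pre>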
 
If all of the calculated eigenvectors are used in an EPO filter, the method becomes equivalent to the Extended Mixture Model (EMM) method described in Martens and Naes 1989.
 
For a literature reference on EPO, see:
:J.-M. Roger, F. Chauchard, V. Bellon-Maurel, "EPO-PLS external parameter orthogonalisation of PLS: application to temperature-independent measurement of sugar content of intact fruits," Chemom. Intell. Lab. Syst., '''66''', 191–204 (2003).
 
====Settings and Command-line Usage====
 
In the Preprocessing window, the GLSW method has a [[Declutter_Settings_Window|Settings Window]] which allows for adjustment of the weighting parameter, <math>\alpha</math>; whether or not to include mean-centering ("ignore means"); whether to use '''EPO''' mode and select a given number of components to orthogonalize to; or whether to use '''EMM/ELS''' mode, in which the data are orthogonalized to all available components. From the command line, this method is performed using the [[glsw]] function, which also permits a number of other modes of application (including identification of "classes" of similar samples).


==Changes and Bug Fixes in Version 7.0.2==

===Bug Fixes and Enhancements===

{|
|----valign="top"
|'''[[analysis]]'''
|
* Allow split cal/val even when no cal is present
* Fix for error when loading old model with custom cross-validation (loaded cvi had only the INCLUDED samples listed; the new detail.cvi field contains both included and excluded samples, which is what crossval was expecting to get)
* Fix for missing "block" information when drilling down from summary contributions to full contributions in MPCA model
* Allow relative T and Q contributions in MPCA models
* Fix for multiway bug in calculating Q contributions
* Give warning when user attempts to change conf. limit on batch maturity model type that this has no effect on shown conf. limits
* Show used conf. limit in plot controls for Batch Maturity
|----valign="top"
|'''[[batchfold]]'''
'''[[bspcgui|Batch Processor]]'''
|
* If steps are disabled, ignore extraction by steps
* Remove forced removal of steps if Batch Maturity
* Add name to dataset
* Add per-batch linear axis scale
* Updates for alignment on BM and other
* Fix model saving. Fix cow options. Add 'none' option in alignment. Add better loading of model and settings. Fix tab enable on load of model.
* Fix for allowing no steps (data become all one step)
* Add new plotting style, apply to new data, and remove class 0 from batch list
* Always push data into the same Analysis window (if it is still open), otherwise use a new window
* If model or data is loaded, ask how to load data when pushed (calibration / validation)
* Add default alignment plus default method for BM and other
* Add "stacked" plotting on batch plot
* Update to drag patch behavior in linear view
* Fix for batch list selections; make default batch plot style = stack
* Remove unneeded batch selection now that Class 0 has been removed
|----valign="top"
|'''[[b3spline]]'''
|
* Fix error in display option handling
|----valign="top"
|'''[[batchmaturity]]'''
|
* Added asymmetric standard deviation as method to calculate confidence limits
* Added confidence limit algorithm (clalgorithm) option with default to asymmetric least squares (astd)
* Adjusted default confidence limit to 95% to match default in other level 2 functions
* Remove weighting applied to deviations when calculating the score limits using "percentile" method
* Don't calculate score limits when building raw model, as this would be done unnecessarily for 10 PCs and could be time consuming
|----valign="top"
|'''[[boxplot]]'''
|
* No "Extreme" outliers were plotted if there were no "Standard" outliers. This was the case for either upper or lower outliers, so upper (lower) extremes only plotted if there were upper (lower) standard outliers.
|----valign="top"
|'''[[browse]]'''
|
* Add message saying browse is initializing
|----valign="top"
|'''[[corrspecgui]]'''
|
* Fix typo in plot type
|----valign="top"
|'''[[summary]]'''
|
* Fix for error when all values of a given variable are excluded/missing
|----valign="top"
|'''[[experimentreadr]]'''
|
* Switch cal/val class numbers (so calibration is 0 and shows as black circles, and validation is 1 and shows as red triangles, as with scores plots)
* Handle case when all samples are converted to validation
|----valign="top"
|'''[[genalgplot]]'''
|
* Add drawnow to make sure some plots get updated when switching from the selection plot to the information plot
|----valign="top"
|'''[[modelcache]]'''
|
* Add new deletedates mode to modelcache
|----valign="top"
|'''[[mscorr]]'''
|
* Fix typo in error message
|----valign="top"
|'''[[parafac]]'''
|
* Fix for serious but rare bug in PARAFAC: for higher than three-way, the constraint in mode two was also imposed in mode three. The bug is only seen when those constraints are different; most of the time constraints would just be nonnegativity throughout, so the bug is unlikely to be seen.
|----valign="top"
|'''[[peakfind]]'''
|
* Don't do search for peaks if fewer than window*2 variables
|----valign="top"
|'''[[plotgui|Plot Controls]]'''
|
* Add separators above Bar and Mesh to make menu easier to read
* Add "enhanced surface" mode
* Better handling of duplication of data as needed for 3D plots (to avoid errors when plotting)
* Change settings on viewinterpolated so it will be available from the settings control button on the toolbar
* Fix for plotting scatter plots with n-way data in 3rd dimension (xdata is row vector instead of column vector)
* Don't reset 'PlotBoxAspectRatioMode', 'CameraViewAngleMode', or 'DataAspectRatioMode' in 2008b or later (seems to cause strange plot box resizing problems)
* Better position labels when rotated text is being used
* Add ability to use logical in search
|----valign="top"
|'''Adjust Axis Limits Interface'''
|
* Fix use with multiple axes and multiple figures. Fix bugs with initializing settings. Better handle restoring color.
* Fix for color of background when target figure has black (or dark gray) background (text was unreadable)
|----valign="top"
|'''[[plsda]]'''
|
* Treat "0" as unknown class only if input y has more than 2 unique values
|----valign="top"
|'''[[preprocess]]'''
|
* Add "Favorites" button to
: (a) move certain methods to the top of the preprocessing list, OR
: (b) create new aggregate methods from the current selection of multiple methods
* Add "Hide/Unhide" button to hide items you don't use often
* Add hidden support for font size changing
|----valign="top"
|'''[[splitcaltest]]'''
|
* Fix bug where splitcaltest does nothing (all samples remain as calibration) if input data is "short and wide" (as with nir_data, for example, with SVM, or when ncomp >= 10 for PCA, LWR, etc.)
* Remove requirement that the input data were acquired in a random order
* Initial demo added
|----valign="top"
|'''[[tconcalc]]'''
|
* Add support for tcon calculation from PCR and PLS models even when tconcalc is passed ONLY the prediction structure (as long as the necessary eigenvalue information is in the model details)
|----valign="top"
|'''[[trendtool]]'''
|
* Consider a "viewSpec" request for a spectrum beyond the highest numbered spectrum as a request for "the last" spectrum (e.g., "inf" will give the max)
* Add 'interpolation' as new property that trendtool can set on the trend view
* Add ability to access this through evrigui as a property: obj.setInterpolation(n)
* Add plottype surface and evrigui connection to modify it (setPlottype)
|----valign="top"
|'''[[EVRIGUI Objects]]'''
|
* Add fieldnames to EVRIGUI object to allow tab-completion of valid methods and properties
|----valign="top"
|'''[[EVRIModel Objects]]'''
|
* Rearrange logic when updating from old model version (generalize copying of fields from old model into new one)
* Add conrearrange as private method to re-arrange contributions into "used", "passed", or "full" forms (like with Solo_Predictor)
* Add "contributions" and "matchvarsmap" (hidden) properties
* Fix logic which assigns calibrate.options.plots and calibrate.options.display settings (also set in top-level)
* Add "matchvars" property to models as option to DISABLE call to matchvars during apply, xhat, and tcon/qcon calculations
* If user turns off model object, don't expect evrimodelversion field (use modelversion only) and automatically extract model contents. Now users can automatically down-grade models by simply calling:
setplspref('evrimodel','noobject',1)
: then loading the new model
|----valign="top"
|'''add3dlight'''
|
* Add "add3dlight" as new GUI utility to add 3D lighting effects for enhanced surface plots
|----valign="top"
|'''modelviewertool'''
|
* Fixed a bug in Tucker where the core was plotted as a loading in modelviewer when fitting, e.g., Tucker(X,[3 3 1])
|----valign="top"
|'''peakfindgui'''
|
* Allow for more or less adjustability in sensitivity depending on the number of variables
* Encode logic to handle non-integer values for found peak position (in case center-of-mass calculation is used and non-integer peak position values are returned)
|----valign="top"
|'''[[piconnectgui]]'''
|
* Better handling of errors thrown during initialization
|}