Maxautofactors

Purpose

Maximum / Principal Autocorrelation Factors.

Synopsis

[model] = maxautofactors(x,ncomp,options)

Description

In its default mode, MAXAUTOFACTORS uses a generalized eigenvalue decomposition to provide a model of the data (x) that captures maximum spatially-correlated variance. An approximate solution is used to stabilize and speed up the algorithm (see options.varcap).
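
The core computation can be illustrated with a rough MATLAB sketch. This is not the toolbox code: the unfolding, the row-only difference operator, and all variable names below are simplifying assumptions, and the PCA approximation controlled by options.varcap is omitted.

  % Conceptual MAF sketch for an M x N x P image cube (rows x columns x variables).
  [M,N,P] = size(x);                        % x assumed to be a double array here
  X  = reshape(x, M*N, P);                  % unfold: one row per pixel
  mn = mean(X,1);                           % mean spectrum
  Xc = X - repmat(mn, M*N, 1);              % mean-center

  % Spatial first differences (along image rows only, for brevity)
  dX = reshape(x(2:M,:,:) - x(1:M-1,:,:), (M-1)*N, P);

  Cnum = (Xc'*Xc)/(M*N - 1);                % "numerator" covariance (identity operator)
  Cden = (dX'*dX)/((M-1)*N - 1);            % "denominator" covariance (first difference)

  % Generalized eigenvalue problem: directions that maximize total variance
  % relative to difference variance are the most spatially correlated.
  [V,D] = eig(Cnum, Cden);
  [~,order] = sort(diag(D), 'descend');
  loads  = V(:, order(1:ncomp));            % one loading vector per column
  scores = Xc*loads;                        % one row of scores per pixel

In practice these covariance matrices can be poorly conditioned; per the description above, the toolbox stabilizes the problem by working from a PCA approximation of X (see options.varcap).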

Inputs

  • x = MxNxP image of class 'dataset' or 'double'.
  • ncomp = number of components (integer).

Outputs

The resulting scores (scores), loadings (loads), and mean spectrum (mn) can be used to reconstruct the mean-centered data matrix X_mn:

X_mn = scores*loads

The difference between PCA and MAF is that MAF extracts loadings which are highly correlated in the spatial dimension of an image. In addition, MAF always returns the entire set of components up to the rank of the data matrix. Input X is either an Image DataSet object or a three-way double array (the first two dimensions are spatial, the last is the variable dimension). Output ssq is an experimental sum-of-squares-captured table; because of the nature of the decomposition, this table is only approximate. It includes the component number (column 1), the estimated eigenvalue (column 2), and the estimated percent captured per variable and in total (columns 3 and 4, respectively).
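
As a rough illustration of this reconstruction, assuming scores, loads, and mn have already been extracted from the returned model (the exact model fields are not listed here), with one row of scores per pixel and one loading per row of loads:

  X_mn = scores*loads;                        % mean-centered, unfolded data (pixels x variables)
  X    = X_mn + repmat(mn, size(X_mn,1), 1);  % add the mean spectrum back
  ximg = reshape(X, M, N, P);                 % refold into the original M x N x P image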

Options

options = a structure array with the following fields (an example call is shown after the list):

  • display: [ 'off' | {'on'} ] governs level of display to command window.
  • plots: [ 'none' | {'final'} ] governs level of plotting.
  • algorithm: [ {'maf'} | 'paf' | 'mdf' | 'pdf' ]
if algorithm is 'maf' or 'paf', the numerator and denominator operators are the identity (I) and the first difference, respectively.
if algorithm is 'mdf' or 'pdf', the numerator and denominator operators are the first difference and the second difference, respectively.
  • varcap: [{0.999}] with 0 < varcap < 1, specifies the fraction of the variance of X to be captured when approximating the input X with a PCA model.
If (varcap) is an integer >= ncomp, it is used directly as the number of PCs; the minimum number is (ncomp).
  • smooth: [ ] smoothness penalty, based on the fraction of variance of the numerator (typical value might be 1e-3 to 0.05).
Smoothness is only available for MAF and MDF.
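
Below is a hedged example of setting a few of these fields and calling the function; whether a partially filled options structure is accepted, and how defaults are merged in, may depend on the toolbox version.

  % Example call: 5-component MAF model with display and plots turned off.
  options = struct('display','off', ...      % no command-window output
                   'plots','none', ...       % no final plots
                   'algorithm','maf', ...    % identity / first-difference operators
                   'varcap',0.999, ...       % variance captured by the PCA approximation
                   'smooth',1e-3);           % mild smoothness penalty (MAF and MDF only)
  model = maxautofactors(x, 5, options);     % x is an MxNxP image ('dataset' or 'double')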

See Also

mcr, parafac, pca