Signtest
Purpose
Pairwise sign test for evaluating residuals from two models.
Synopsis
- prob = signtest(err_1,err_2)
Description
Pairwise comparison between two sets of model residuals using the signs of the residuals. Output is the probability that model 2 (the model producing the second set of residuals) is better than model 1 (the model that produces the first set of residuals).
Adapted from: Edward V. Thomas, "Non-parametric statistical methods for multivariate calibration model selection and comparison", J. Chemometrics 2003; 17: 653–659. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cem.833
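The calculation can be sketched as follows. This is a minimal illustration, not the toolbox implementation; it assumes model #2 "wins" a sample when its absolute residual is smaller, that ties are discarded, and that binocdf (Statistics Toolbox) supplies the binomial CDF.

 % Hypothetical sketch of a paired sign test on two sets of residuals.
 % err_1 and err_2 are vectors of prediction errors of equal length.
 e1 = abs(err_1(:));
 e2 = abs(err_2(:));
 keep = e1 ~= e2;                  % discard tied residuals (assumption)
 n = sum(keep);                    % number of informative comparisons
 k = sum(e2(keep) < e1(keep));     % times model #2 has the smaller residual
 % Under the null hypothesis (neither model is better) the number of wins
 % follows a Binomial(n, 0.5) distribution; the output is the cumulative
 % probability of observing at most k wins.
 prob = binocdf(k, n, 0.5);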
Inputs
- err_1 = Prediction errors from model #1
- err_2 = Prediction errors from model #2
Outputs
- prob = Prob{number of times model #2 wins <= k}: the probability that model #2 is better than model #1.
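For example, with cross-validated residuals from two candidate models (the variable names below are hypothetical), the test might be run as:

 % err_pls and err_mlr are residual vectors of equal length from two models
 prob = signtest(err_pls, err_mlr);   % probability that the second model is better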