__What is a non-parametric test?__ A non-parametric test is a method of analysis that does not require the underlying data to follow a specific distribution. Because it makes fewer distributional assumptions, this type of test is, by design, more robust.

Many parametric tests require, as a prerequisite, that the underlying data meet certain distributional assumptions. In practice, however, violating these assumptions often has only a modest impact on the results, as many parametric tests are themselves reasonably robust.

Therefore, while I believe it is important to be familiar with non-parametric tests, I would typically recommend their parametric alternatives, which are more widely accepted and more familiar to most readers.

__Friedman Test__ The Friedman Test is a non-parametric alternative to the one-way repeated measures analysis of variance (ANOVA). Like many other non-parametric tests, it ranks the observations (here, within each subject) rather than working with their raw values, which makes it robust to departures from normality. In this respect it resembles the Kruskal-Wallis test. The method was originally derived by Milton Friedman, the American economist.
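To illustrate the ranking idea, below is a minimal R sketch that ranks a few illustrative scores within each subject and applies the classic (uncorrected) textbook formula. Note that R’s friedman.test() also adjusts for tied ranks, so its statistic can differ slightly from this version.

# Rank each subject's scores across the conditions, then build the statistic from the column rank sums #

scores <- matrix(c(7, 14, 9,
                   5, 12, 13,
                   16, 8, 3),
                 ncol = 3, byrow = TRUE)

ranks <- t(apply(scores, 1, rank))   # ranks within each row (subject)

rank_sums <- colSums(ranks)          # rank sum for each condition

n <- nrow(scores)
k <- ncol(scores)

# Classic Friedman statistic, approximately chi-squared with k - 1 degrees of freedom #
Q <- 12 / (n * k * (k + 1)) * sum(rank_sums^2) - 3 * n * (k + 1)
Q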

__Example__ Below is our sample data set (twelve subjects, each measured under three conditions; the same values are entered in the R code later in this article):
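| Subject | Measure1 | Measure2 | Measure3 |
|---------|----------|----------|----------|
| 1       | 7        | 14       | 9        |
| 2       | 5        | 12       | 13       |
| 3       | 16       | 8        | 3        |
| 4       | 3        | 17       | 6        |
| 5       | 19       | 18       | 2        |
| 6       | 10       | 7        | 16       |
| 7       | 16       | 16       | 15       |
| 8       | 9        | 10       | 7        |
| 9       | 10       | 16       | 13       |
| 10      | 18       | 9        | 17       |
| 11      | 6        | 10       | 9        |
| 12      | 12       | 8        | 13       |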

To begin, select **“Analyze”**, **“Nonparametric Tests”**, **“Legacy Dialogs”**, and then **“K Related Samples”**.

This populates the following screen:

Using the center arrow button, designate **“Measure1”**, **“Measure2”**, and **“Measure3”** as **“Test Variables”**. Make sure that the **“Friedman”** box is checked beneath **“Test Type”**.

After clicking **“OK”**, the following output should be produced:

Since the **“Asymp. Sig.”** value (the p-value) equals .667, we fail to reject the null hypothesis (.667 > .05). Therefore, we can conclude that the three conditions do not differ significantly.

To perform this test within R, we will use the following code:

# Create the data vectors to populate each group #

Measure1 <- c(7.00, 5.00, 16.00, 3.00, 19.00, 10.00, 16.00, 9.00, 10.00, 18.00, 6.00, 12.00)

Measure2 <- c(14.00, 12.00, 8.00, 17.00, 18.00, 7.00, 16.00, 10.00, 16.00, 9.00, 10.00, 8.00)

Measure3 <- c(9.00, 13.00, 3.00, 6.00, 2.00, 16.00, 15.00, 7.00, 13.00, 17.00, 9.00, 13.00)

# Create the data matrix necessary to perform the analysis #
# (friedman.test() expects rows to be subjects/blocks and columns to be the repeated conditions) #

results <- matrix(c(Measure1, Measure2, Measure3),
                  ncol = 3,
                  dimnames = list(1:12,
                                  c("Measure1", "Measure2", "Measure3")))

# Perform the analysis through the utilization of the Friedman Test #

friedman.test(results)

This produces the output:

*Friedman rank sum test*

data: results

Friedman chi-squared = 0.80851, df = 2, p-value = 0.6675

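As a side note, friedman.test() also accepts data in long format through its formula interface, friedman.test(y ~ groups | blocks). Below is a brief sketch reusing the Measure1, Measure2, and Measure3 vectors defined above; the variable names score, condition, and subject are just illustrative. It should reproduce the same statistic and p-value.

# Reshape the three measurement vectors into one long-format data frame #

long <- data.frame(score = c(Measure1, Measure2, Measure3),
                   condition = factor(rep(c("Measure1", "Measure2", "Measure3"), each = 12)),
                   subject = factor(rep(1:12, times = 3)))

# Perform the same analysis with the formula interface #

friedman.test(score ~ condition | subject, data = long)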

That’s it for this article, stay tuned for more. Never stop learning, Data Heads!
