Free Statistics

The Antidote of Irreproducible Research!

Author's title:

Author: *The author of this computation has been verified*
R Software Module: rwasp_pairs.wasp
Title produced by software: Kendall tau Correlation Matrix
Date of computation: Tue, 06 Dec 2016 19:45:56 +0100
Cite this page as follows:
Statistical Computations at FreeStatistics.org, Office for Research Development and Education, URL https://freestatistics.org/blog/index.php?v=date/2016/Dec/06/t1481050063szl055si7u0o7z7.htm/, Retrieved Sat, 04 May 2024 09:33:27 +0000
Statistical Computations at FreeStatistics.org, Office for Research Development and Education, URL https://freestatistics.org/blog/index.php?pk=297903, Retrieved Sat, 04 May 2024 09:33:27 +0000
Original text written by user:
IsPrivate? No (this computation is public)
User-defined keywords:
Estimated Impact: 72
Family? (F = Feedback message, R = changed R code, M = changed R Module, P = changed Parameters, D = changed Data)
- [Kendall tau Correlation Matrix] [] [2016-12-06 18:45:56] [a5a1109b2531d70fe6d77f4f71bfe676] [Current]
Dataseries X:
11	11	10
11	9	15
15	12	13
15	NA	13
13	NA	11
14	12	15
13	12	12
15	NA	15
10	NA	14
15	11	12
10	12	15
11	12	NA
16	15	15
17	13	14
14	12	11
13	11	11
10	NA	15
5	NA	NA
13	9	12
17	NA	11
18	11	12
17	NA	8
11	12	14
15	NA	14
12	NA	14
15	NA	12
15	12	14
12	12	13
19	14	14
13	NA	14
15	12	NA
13	9	14
10	13	15
10	NA	3
12	13	14
15	12	13
13	NA	12
18	12	12
15	12	14
9	12	13
14	NA	12
11	12	13
14	11	10
9	13	15
13	13	15
13	NA	5
12	NA	9
13	13	11
16	10	12
15	NA	NA
16	13	14
16	NA	15
13	NA	12
13	5	13
12	NA	15
11	10	12
13	NA	14
15	15	12
13	13	12
14	NA	10
13	12	11
15	13	13
14	13	13
14	11	13
13	NA	13
11	NA	12
14	12	10
17	12	12
15	13	10
15	14	13
13	NA	11
12	NA	15
14	NA	9
11	NA	10
14	NA	14
18	12	10
15	12	15
18	10	13
16	12	10
12	12	13
14	NA	15
14	NA	12
14	12	11
14	13	13
13	NA	15
12	14	11
13	10	14
12	12	14
13	NA	15
14	13	13
15	11	12
13	NA	12
14	12	15
17	NA	12
15	12	15
13	13	14
14	12	14
17	9	12
8	NA	15
15	12	15
10	NA	9
15	14	14
15	NA	15
14	11	15
15	NA	NA
18	NA	13
11	NA	12
19	NA	12
16	NA	15
17	12	14
18	NA	10
13	NA	11
10	NA	10
14	12	13
13	NA	11
12	9	14
13	13	13
12	NA	13
13	10	15
16	14	13
12	10	11
14	12	11
17	NA	14
14	11	15
12	NA	13
14	14	13
17	13	13
13	12	13
9	NA	11
14	NA	14
11	10	14
17	NA	13
15	12	15
8	NA	12
15	12	12
16	NA	12
17	15	13
11	NA	7
12	NA	12
15	12	14
10	12	15
13	10	15
17	12	12
17	12	13
16	NA	13
15	12	13
16	11	14
16	13	15
15	NA	13
16	NA	14
14	NA	12
17	13	13
14	11	9
12	10	11
15	9	13
14	NA	13
11	12	11
14	NA	10
13	NA	15
16	13	14
13	10	13
14	13	13
13	NA	15
13	NA	14
15	NA	15
13	NA	14
14	12	12
13	NA	13
12	12	11
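The three series are tab-separated and contain many NA values, which the module removes before plotting. A minimal sketch (the column names V1, V2, V3 are an assumption, matching the result tables below) of loading a few of the rows above and counting the missing values per series:

```r
# Sketch: read the first five tab-separated rows of the data above
# and count missing values per column (names V1..V3 are assumed)
txt <- "11\t11\t10
11\t9\t15
15\t12\t13
15\tNA\t13
13\tNA\t11"
d <- read.table(text = txt, sep = "\t", na.strings = "NA",
                col.names = c("V1", "V2", "V3"))
colSums(is.na(d))   # NA count per series
```

Listwise deletion (as in the module's `na.omit` step) keeps only observations complete in all three series; the per-pair tests below can instead use pairwise-complete cases.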




Summary of computational transaction
Raw Input: view raw input (R code)
Raw Output: view raw output of R engine
Computing time: 0 seconds
R Server: Big Analytics Cloud Computing Center

\begin{tabular}{lllllllll}
\hline
Summary of computational transaction \tabularnewline
Raw Input & view raw input (R code) \tabularnewline
Raw Output & view raw output of R engine \tabularnewline
Computing time & 0 seconds \tabularnewline
R Server & Big Analytics Cloud Computing Center \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=297903&T=0

[TABLE]
[ROW][C]Summary of computational transaction[/C][/ROW]
[ROW][C]Raw Input[/C][C]view raw input (R code)[/C][/ROW]
[ROW][C]Raw Output[/C][C]view raw output of R engine[/C][/ROW]
[ROW][C]Computing time[/C][C]0 seconds[/C][/ROW]
[ROW][C]R Server[/C][C]Big Analytics Cloud Computing Center[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=297903&T=0

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=297903&T=0








Correlations for all pairs of data series (method=kendall)
 	V1	V2	V3
V1	1	0.17	-0.043
V2	0.17	1	0.079
V3	-0.043	0.079	1

\begin{tabular}{lllllllll}
\hline
Correlations for all pairs of data series (method=kendall) \tabularnewline
  & V1 & V2 & V3 \tabularnewline
V1 & 1 & 0.17 & -0.043 \tabularnewline
V2 & 0.17 & 1 & 0.079 \tabularnewline
V3 & -0.043 & 0.079 & 1 \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=297903&T=1

[TABLE]
[ROW][C]Correlations for all pairs of data series (method=kendall)[/C][/ROW]
[ROW][C] [/C][C]V1[/C][C]V2[/C][C]V3[/C][/ROW]
[ROW][C]V1[/C][C]1[/C][C]0.17[/C][C]-0.043[/C][/ROW]
[ROW][C]V2[/C][C]0.17[/C][C]1[/C][C]0.079[/C][/ROW]
[ROW][C]V3[/C][C]-0.043[/C][C]0.079[/C][C]1[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=297903&T=1
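The matrix above can be reproduced with base R's `cor()`. A sketch using only the first five observations of the data above (so the numbers differ from the full-sample table); `use = "pairwise.complete.obs"` handles the NA entries pair by pair:

```r
# Sketch: Kendall correlation matrix on the first five rows of the
# data above; pairwise-complete observations handle the NA entries
m <- matrix(c(11, 11, 15, 15, 13,    # V1
              11,  9, 12, NA, NA,    # V2
              10, 15, 13, 13, 11),   # V3
            ncol = 3, dimnames = list(NULL, c("V1", "V2", "V3")))
k <- cor(m, method = "kendall", use = "pairwise.complete.obs")
round(k, 3)   # symmetric, with a unit diagonal
```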

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=297903&T=1








Correlations for all pairs of data series with p-values
pair	Pearson r	Spearman rho	Kendall tau
V1;V2	0.1993	0.2141	0.1703
p-value	(0.0503)	(0.0352)	(0.033)
V1;V3	-0.0802	-0.0533	-0.0427
p-value	(0.435)	(0.6042)	(0.5882)
V2;V3	0.0605	0.1026	0.0789
p-value	(0.5562)	(0.3173)	(0.3346)

\begin{tabular}{lllllllll}
\hline
Correlations for all pairs of data series with p-values \tabularnewline
pair & Pearson r & Spearman rho & Kendall tau \tabularnewline
V1;V2 & 0.1993 & 0.2141 & 0.1703 \tabularnewline
p-value & (0.0503) & (0.0352) & (0.033) \tabularnewline
V1;V3 & -0.0802 & -0.0533 & -0.0427 \tabularnewline
p-value & (0.435) & (0.6042) & (0.5882) \tabularnewline
V2;V3 & 0.0605 & 0.1026 & 0.0789 \tabularnewline
p-value & (0.5562) & (0.3173) & (0.3346) \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=297903&T=2

[TABLE]
[ROW][C]Correlations for all pairs of data series with p-values[/C][/ROW]
[ROW][C]pair[/C][C]Pearson r[/C][C]Spearman rho[/C][C]Kendall tau[/C][/ROW]
[ROW][C]V1;V2[/C][C]0.1993[/C][C]0.2141[/C][C]0.1703[/C][/ROW]
[ROW][C]p-value[/C][C](0.0503)[/C][C](0.0352)[/C][C](0.033)[/C][/ROW]
[ROW][C]V1;V3[/C][C]-0.0802[/C][C]-0.0533[/C][C]-0.0427[/C][/ROW]
[ROW][C]p-value[/C][C](0.435)[/C][C](0.6042)[/C][C](0.5882)[/C][/ROW]
[ROW][C]V2;V3[/C][C]0.0605[/C][C]0.1026[/C][C]0.0789[/C][/ROW]
[ROW][C]p-value[/C][C](0.5562)[/C][C](0.3173)[/C][C](0.3346)[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=297903&T=2
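Each pair of rows in the table comes from `cor.test()` under the three methods. A sketch on toy data (not the full series above); `exact = FALSE` is used because tied ranks preclude exact Spearman/Kendall p-values:

```r
# Sketch: estimate and p-value for one pair of series under the three
# methods reported above; x and y here are toy data, not the series
x <- c(11, 11, 15, 14, 13, 15, 10, 16, 17, 14)
y <- c(11,  9, 12, 12, 12, 11, 12, 15, 13, 12)
for (meth in c("pearson", "spearman", "kendall")) {
  ct <- cor.test(x, y, method = meth, exact = FALSE)
  cat(sprintf("%s: r = %.4f, p = (%.4f)\n",
              meth, ct$estimate, ct$p.value))
}
```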

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=297903&T=2








Meta Analysis of Correlation Tests
Number of significant by total number of Correlations
Type I error	Pearson r	Spearman rho	Kendall tau
0.01	0	0	0
0.02	0	0	0
0.03	0	0	0
0.04	0	0.33	0.33
0.05	0	0.33	0.33
0.06	0.33	0.33	0.33
0.07	0.33	0.33	0.33
0.08	0.33	0.33	0.33
0.09	0.33	0.33	0.33
0.1	0.33	0.33	0.33

\begin{tabular}{lllllllll}
\hline
Meta Analysis of Correlation Tests \tabularnewline
Number of significant by total number of Correlations \tabularnewline
Type I error & Pearson r & Spearman rho & Kendall tau \tabularnewline
0.01 & 0 & 0 & 0 \tabularnewline
0.02 & 0 & 0 & 0 \tabularnewline
0.03 & 0 & 0 & 0 \tabularnewline
0.04 & 0 & 0.33 & 0.33 \tabularnewline
0.05 & 0 & 0.33 & 0.33 \tabularnewline
0.06 & 0.33 & 0.33 & 0.33 \tabularnewline
0.07 & 0.33 & 0.33 & 0.33 \tabularnewline
0.08 & 0.33 & 0.33 & 0.33 \tabularnewline
0.09 & 0.33 & 0.33 & 0.33 \tabularnewline
0.1 & 0.33 & 0.33 & 0.33 \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=297903&T=3

[TABLE]
[ROW][C]Meta Analysis of Correlation Tests[/C][/ROW]
[ROW][C]Number of significant by total number of Correlations[/C][/ROW]
[ROW][C]Type I error[/C][C]Pearson r[/C][C]Spearman rho[/C][C]Kendall tau[/C][/ROW]
[ROW][C]0.01[/C][C]0[/C][C]0[/C][C]0[/C][/ROW]
[ROW][C]0.02[/C][C]0[/C][C]0[/C][C]0[/C][/ROW]
[ROW][C]0.03[/C][C]0[/C][C]0[/C][C]0[/C][/ROW]
[ROW][C]0.04[/C][C]0[/C][C]0.33[/C][C]0.33[/C][/ROW]
[ROW][C]0.05[/C][C]0[/C][C]0.33[/C][C]0.33[/C][/ROW]
[ROW][C]0.06[/C][C]0.33[/C][C]0.33[/C][C]0.33[/C][/ROW]
[ROW][C]0.07[/C][C]0.33[/C][C]0.33[/C][C]0.33[/C][/ROW]
[ROW][C]0.08[/C][C]0.33[/C][C]0.33[/C][C]0.33[/C][/ROW]
[ROW][C]0.09[/C][C]0.33[/C][C]0.33[/C][C]0.33[/C][/ROW]
[ROW][C]0.1[/C][C]0.33[/C][C]0.33[/C][C]0.33[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=297903&T=3
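The meta table counts, for each Type I error level from 0.01 to 0.10, the fraction of the three pairwise tests with p-value below that level. A sketch of the counting logic, using the p-values reported in the table above:

```r
# Sketch: rebuild the meta table from the p-values reported above
# (rows = method; columns = the pairs V1;V2, V1;V3, V2;V3)
p <- rbind(pearson  = c(0.0503, 0.4350, 0.5562),
           spearman = c(0.0352, 0.6042, 0.3173),
           kendall  = c(0.0330, 0.5882, 0.3346))
alphas <- (1:10) / 100
frac <- sapply(alphas, function(a) rowMeans(p < a))
colnames(frac) <- alphas
round(t(frac), 2)   # fraction of significant pairs per alpha level
```

For example, at alpha = 0.05 only the Spearman and Kendall tests for V1;V2 are significant, giving 1/3 = 0.33; the Pearson p-value (0.0503) only crosses the threshold at alpha = 0.06.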

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=297903&T=3




Parameters (Session):
par1 = TRUE ;
Parameters (R input):
par1 = kendall ;
R code (references can be found in the software module):
par1 <- 'kendall'
# Lower-panel function: displays the p-value of cor.test for each pair
# (note: the variable 'r' below holds the p-value, not the tau estimate)
panel.tau <- function(x, y, digits=2, prefix='', cex.cor)
{
usr <- par('usr'); on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
rr <- cor.test(x, y, method=par1)
r <- round(rr$p.value,2)
txt <- format(c(r, 0.123456789), digits=digits)[1]
txt <- paste(prefix, txt, sep='')
if(missing(cex.cor)) cex <- 0.5/strwidth(txt)
text(0.5, 0.5, txt, cex = cex)
}
# Diagonal-panel function: draws a scaled histogram of each series
panel.hist <- function(x, ...)
{
usr <- par('usr'); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col='grey', ...)
}
# The data objects x and y (one series per row of y) and the plot
# title 'main' are created by the module environment before this point
x <- na.omit(x)
y <- t(na.omit(t(y)))   # listwise deletion: drop observations with any NA
bitmap(file='test1.png')
pairs(t(y),diag.panel=panel.hist, upper.panel=panel.smooth, lower.panel=panel.tau, main=main)
dev.off()
load(file='createtable')   # loads the site's table.* helper functions
n <- length(y[,1])   # number of series
print(n)
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,paste('Correlations for all pairs of data series (method=',par1,')',sep=''),n+1,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,' ',header=TRUE)
for (i in 1:n) {
a<-table.element(a,dimnames(t(x))[[2]][i],header=TRUE)
}
a<-table.row.end(a)
for (i in 1:n) {
a<-table.row.start(a)
a<-table.element(a,dimnames(t(x))[[2]][i],header=TRUE)
for (j in 1:n) {
r <- cor.test(y[i,],y[j,],method=par1)
a<-table.element(a,round(r$estimate,3))
}
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable.tab')
ncorrs <- (n*n -n)/2   # number of distinct pairs
mycorrs <- array(0, dim=c(10,3))   # significance counts: 10 alpha levels x 3 methods
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Correlations for all pairs of data series with p-values',4,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'pair',1,TRUE)
a<-table.element(a,'Pearson r',1,TRUE)
a<-table.element(a,'Spearman rho',1,TRUE)
a<-table.element(a,'Kendall tau',1,TRUE)
a<-table.row.end(a)
cor.test(y[1,],y[2,],method=par1)   # printed to the raw output only
for (i in 1:(n-1))
{
for (j in (i+1):n)
{
a<-table.row.start(a)
dum <- paste(dimnames(t(x))[[2]][i],';',dimnames(t(x))[[2]][j],sep='')
a<-table.element(a,dum,header=TRUE)
rp <- cor.test(y[i,],y[j,],method='pearson')
a<-table.element(a,round(rp$estimate,4))
rs <- cor.test(y[i,],y[j,],method='spearman')
a<-table.element(a,round(rs$estimate,4))
rk <- cor.test(y[i,],y[j,],method='kendall')
a<-table.element(a,round(rk$estimate,4))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'p-value',header=T)
a<-table.element(a,paste('(',round(rp$p.value,4),')',sep=''))
a<-table.element(a,paste('(',round(rs$p.value,4),')',sep=''))
a<-table.element(a,paste('(',round(rk$p.value,4),')',sep=''))
a<-table.row.end(a)
for (iii in 1:10) {   # tally significance at alpha = 0.01, ..., 0.10
iiid100 <- iii / 100  # the Type I error level alpha
if (rp$p.value < iiid100) mycorrs[iii, 1] = mycorrs[iii, 1] + 1
if (rs$p.value < iiid100) mycorrs[iii, 2] = mycorrs[iii, 2] + 1
if (rk$p.value < iiid100) mycorrs[iii, 3] = mycorrs[iii, 3] + 1
}
}
}
a<-table.end(a)
table.save(a,file='mytable1.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Meta Analysis of Correlation Tests',4,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Number of significant by total number of Correlations',4,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Type I error',1,TRUE)
a<-table.element(a,'Pearson r',1,TRUE)
a<-table.element(a,'Spearman rho',1,TRUE)
a<-table.element(a,'Kendall tau',1,TRUE)
a<-table.row.end(a)
for (iii in 1:10) {   # one table row per alpha level
iiid100 <- iii / 100
a<-table.row.start(a)
a<-table.element(a,round(iiid100,2),header=T)
a<-table.element(a,round(mycorrs[iii,1]/ncorrs,2))
a<-table.element(a,round(mycorrs[iii,2]/ncorrs,2))
a<-table.element(a,round(mycorrs[iii,3]/ncorrs,2))
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable2.tab')
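The `table.start()`/`table.element()` helpers and the `load(file='createtable')` step are internal to the FreeStatistics.org server. A standalone sketch of the module's core computation, on illustrative random data (here the series sit in the columns of `y`, not the rows):

```r
# Standalone sketch of the module's core loop, without the
# site-specific table.* helpers; random illustrative data
set.seed(1)
y <- matrix(sample(5:19, 60, replace = TRUE), ncol = 3,
            dimnames = list(NULL, c("V1", "V2", "V3")))
print(round(cor(y, method = "kendall"), 3))        # tau matrix
for (i in 1:2) for (j in (i + 1):3) {              # all distinct pairs
  rk <- cor.test(y[, i], y[, j], method = "kendall", exact = FALSE)
  cat(sprintf("%s;%s  tau = %.4f  p = (%.4f)\n",
              colnames(y)[i], colnames(y)[j], rk$estimate, rk$p.value))
}
```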