
Model 4 - WZM & WZM<25j

*The author of this computation has been verified*
R Software Module: /rwasp_multipleregression.wasp
Title produced by software: Multiple Regression
Date of computation: Thu, 19 Nov 2009 14:57:32 -0700
 
Cite this page as follows:
Statistical Computations at FreeStatistics.org, Office for Research Development and Education, URL http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag.htm/, Retrieved Thu, 19 Nov 2009 22:59:03 +0100
 
BibTeX entries for LaTeX users:
@Manual{KEY,
    author = {{YOUR NAME}},
    publisher = {Office for Research Development and Education},
    title = {Statistical Computations at FreeStatistics.org, URL http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag.htm/},
    year = {2009},
}
@Manual{R,
    title = {R: A Language and Environment for Statistical Computing},
    author = {{R Development Core Team}},
    organization = {R Foundation for Statistical Computing},
    address = {Vienna, Austria},
    year = {2009},
    note = {{ISBN} 3-900051-07-0},
    url = {http://www.R-project.org},
}
 
Original text written by user:
 
IsPrivate?
No (this computation is public)
 
User-defined keywords:
 
Dataseries X:
6.5 15.8 6.8 7.5 8 8.2 6.6 15.8 6.5 6.8 7.5 8 7.6 23.2 6.6 6.5 6.8 7.5 8 23.2 7.6 6.6 6.5 6.8 8.1 23.2 8 7.6 6.6 6.5 7.7 20.9 8.1 8 7.6 6.6 7.5 20.9 7.7 8.1 8 7.6 7.6 20.9 7.5 7.7 8.1 8 7.8 19.8 7.6 7.5 7.7 8.1 7.8 19.8 7.8 7.6 7.5 7.7 7.8 19.8 7.8 7.8 7.6 7.5 7.5 20.6 7.8 7.8 7.8 7.6 7.5 20.6 7.5 7.8 7.8 7.8 7.1 20.6 7.5 7.5 7.8 7.8 7.5 21.1 7.1 7.5 7.5 7.8 7.5 21.1 7.5 7.1 7.5 7.5 7.6 21.1 7.5 7.5 7.1 7.5 7.7 22.4 7.6 7.5 7.5 7.1 7.7 22.4 7.7 7.6 7.5 7.5 7.9 22.4 7.7 7.7 7.6 7.5 8.1 20.5 7.9 7.7 7.7 7.6 8.2 20.5 8.1 7.9 7.7 7.7 8.2 20.5 8.2 8.1 7.9 7.7 8.2 18.4 8.2 8.2 8.1 7.9 7.9 18.4 8.2 8.2 8.2 8.1 7.3 18.4 7.9 8.2 8.2 8.2 6.9 17.6 7.3 7.9 8.2 8.2 6.6 17.6 6.9 7.3 7.9 8.2 6.7 17.6 6.6 6.9 7.3 7.9 6.9 18.5 6.7 6.6 6.9 7.3 7 18.5 6.9 6.7 6.6 6.9 7.1 18.5 7 6.9 6.7 6.6 7.2 17.3 7.1 7 6.9 6.7 7.1 17.3 7.2 7.1 7 6.9 6.9 17.3 7.1 7.2 7.1 7 7 16.2 6.9 7.1 7.2 7.1 6.8 16.2 7 6.9 7.1 7.2 6.4 16.2 6.8 7 6.9 7.1 6.7 18.5 6.4 6.8 7 6.9 6.6 18.5 6.7 6.4 6.8 7 6.4 18.5 6.6 6.7 6 etc...
 
Output produced by software:

Enter (or paste) a matrix (table) containing all data (time) series. Every column represents a different variable and must be delimited by a space or Tab. Every row represents a period in time (or category) and must be delimited by hard returns. The easiest way to enter data is to copy and paste a block of spreadsheet cells. Please do not use commas or spaces to separate groups of digits!
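For illustration only (not part of the original transaction), a minimal R sketch of how such a whitespace-delimited block can be read: every column becomes a variable and every line an observation. The column names Y and X are hypothetical stand-ins for the uploaded variables; the three rows reuse the first values of the data series above.

dat <- read.table(textConnection(
"6.5 15.8
6.6 15.8
7.6 23.2"), col.names = c('Y', 'X')) # hypothetical two-variable example
str(dat) # a data.frame with one row per time period and one column per variable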


Summary of computational transaction
Raw Input: view raw input (R code)
Raw Output: view raw output of R engine
Computing time: 3 seconds
R Server: 'Gwilym Jenkins' @ 72.249.127.135


Multiple Linear Regression - Estimated Regression Equation
Y[t] = + 0.0438440963714834 + 0.091191104880777X[t] + 1.10242811026445`Y(t-1)`[t] -0.486419144668766`Y(t-2)`[t] -0.188441799569958`Y(t-3)`[t] + 0.342364461886313`Y(t-4)`[t] -0.189339107617867M1[t] -0.248582058103744M2[t] + 0.00334296687708956M3[t] -0.53509934530751M4[t] -0.353680066285393M5[t] -0.188467403632904M6[t] -0.259620844371123M7[t] -0.0467034678873066M8[t] -0.0132662230148949M9[t] -0.113338664654363M10[t] -0.0864416950016453M11[t] + 0.00188780839695149t + e[t]
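As a rough guide only (the module's actual code, reproduced further below, builds the design matrix programmatically and calls lm(df)), the estimated equation corresponds to a model formula of roughly the following form, assuming a data frame df that already contains the lagged series Y(t-1)...Y(t-4), the monthly dummies M1...M11, and the trend column t:

fit <- lm(Y ~ X + `Y(t-1)` + `Y(t-2)` + `Y(t-3)` + `Y(t-4)` +
  M1 + M2 + M3 + M4 + M5 + M6 + M7 + M8 + M9 + M10 + M11 + t, data = df) # df is a hypothetical data frame with these columns
coef(fit) # the parameter estimates tabulated in the OLS output below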


Multiple Linear Regression - Ordinary Least Squares
Variable | Parameter | S.D. | T-STAT (H0: parameter = 0) | 2-tail p-value | 1-tail p-value
(Intercept) | 0.0438440963714834 | 0.401943 | 0.1091 | 0.913603 | 0.456802
X | 0.091191104880777 | 0.0215 | 4.2415 | 0.000103 | 5.2e-05
`Y(t-1)` | 1.10242811026445 | 0.156443 | 7.0468 | 0 | 0
`Y(t-2)` | -0.486419144668766 | 0.229403 | -2.1204 | 0.039281 | 0.019641
`Y(t-3)` | -0.188441799569958 | 0.221128 | -0.8522 | 0.398434 | 0.199217
`Y(t-4)` | 0.342364461886313 | 0.118904 | 2.8793 | 0.005982 | 0.002991
M1 | -0.189339107617867 | 0.106143 | -1.7838 | 0.080911 | 0.040456
M2 | -0.248582058103744 | 0.11166 | -2.2262 | 0.030829 | 0.015415
M3 | 0.00334296687708956 | 0.14087 | 0.0237 | 0.981168 | 0.490584
M4 | -0.53509934530751 | 0.126357 | -4.2348 | 0.000106 | 5.3e-05
M5 | -0.353680066285393 | 0.143484 | -2.4649 | 0.017413 | 0.008707
M6 | -0.188467403632904 | 0.119133 | -1.582 | 0.120358 | 0.060179
M7 | -0.259620844371123 | 0.121117 | -2.1436 | 0.037274 | 0.018637
M8 | -0.0467034678873066 | 0.121031 | -0.3859 | 0.701326 | 0.350663
M9 | -0.0132662230148949 | 0.117847 | -0.1126 | 0.91085 | 0.455425
M10 | -0.113338664654363 | 0.117581 | -0.9639 | 0.340021 | 0.170011
M11 | -0.0864416950016453 | 0.111035 | -0.7785 | 0.440173 | 0.220086
t | 0.00188780839695149 | 0.001441 | 1.3099 | 0.19659 | 0.098295


Multiple Linear Regression - Regression Statistics
Multiple R: 0.97556268486673
R-squared: 0.951722552104384
Adjusted R-squared: 0.934260496482565
F-TEST (value): 54.5023205008703
F-TEST (DF numerator): 17
F-TEST (DF denominator): 47
p-value: 0

Multiple Linear Regression - Residual Statistics
Residual Standard Deviation: 0.169643885658411
Sum Squared Residuals: 1.35261525324035
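These summary statistics are standard functions of the fitted model; a brief sketch of how each can be recovered in R from a fitted object named mylm (the object name used in the R code further below):

mysum <- summary(mylm)
sqrt(mysum$r.squared) # Multiple R
mysum$r.squared # R-squared
mysum$adj.r.squared # Adjusted R-squared
mysum$fstatistic # F value with numerator and denominator degrees of freedom
1 - pf(mysum$fstatistic[1], mysum$fstatistic[2], mysum$fstatistic[3]) # F-test p-value
mysum$sigma # Residual Standard Deviation
sum(residuals(mylm)^2) # Sum Squared Residuals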


Multiple Linear Regression - Actuals, Interpolation, and Residuals
Time or Index | Actuals | Interpolation (Forecast) | Residuals (Prediction Error)
1 | 6.5 | 6.44543400995743 | 0.054565990042572
2 | 6.6 | 6.42359184346504 | 0.176408156534963
3 | 7.6 | 7.56911443614346 | 0.03088556385654
4 | 8 | 7.90322354470395 | 0.0967764552960502
5 | 8.1 | 7.91952921303714 | 0.180470786962857
6 | 7.7 | 7.63835994263841 | 0.061640057361593
7 | 7.5 | 7.34646889378281 | 0.153531106217185
8 | 7.6 | 7.65345771927573 | -0.0534577192757292
9 | 7.8 | 7.90561236315305 | -0.105612363153049
10 | 7.8 | 7.88001401265601 | -0.0800140126560126
11 | 7.8 | 7.72419788943767 | 0.0758021105623299
12 | 7.5 | 7.88202836301553 | -0.382028363015528
13 | 7.5 | 7.43232152309254 | 0.0676784769074585
14 | 7.1 | 7.52089212440425 | -0.420892124404246
15 | 7.5 | 7.43586180598763 | 0.0641381940123728
16 | 7.5 | 7.43213686560737 | 0.0678631343926295
17 | 7.6 | 7.49625301498692 | 0.103746985013084
18 | 7.7 | 7.6798222288253 | 0.0201777711746977
19 | 7.7 | 7.80910327779813 | -0.109103277798129
20 | 7.9 | 7.95642236825502 | -0.0564223682550237
21 | 8.1 | 8.05436221053544 | 0.0456377894645638
22 | 8.2 | 8.11361581660069 | 0.0863841833993132
23 | 8.2 | 8.11767121682906 | 0.0823287831709441
24 | 8.2 | 7.99664201797441 | 0.203357982025585
25 | 7.9 | 7.85881943117377 | 0.0411805688262343
26 | 7.3 | 7.50497230219414 | -0.204972302194139
27 | 6.9 | 7.17030112890926 | -0.270301128909262
28 | 6.6 | 6.54115940768808 | 0.0588405923119164
29 | 6.7 | 6.5986614610714 | 0.101338538928596
30 | 6.9 | 6.97396052363681 | -0.0739605236368146
31 | 7 | 6.99612535399802 | 0.00387464600197746
32 | 7.1 | 7.10233600244859 | -0.00233600244859139
33 | 7.2 | 7.08638071269523 | 0.113619287304771
34 | 7.1 | 7.09942568843255 | 0.000574311567450727
35 | 6.9 | 6.98471800722053 | -0.0847180072205312
36 | 7 | 6.8162858538959 | 0.183714146104103
37 | 6.8 | 6.8894418207808 | -0.089441820780806
38 | 6.4 | 6.56641105589747 | -0.166411055897474
39 | 6.7 | 6.59895894299476 | 0.101041057005237
40 | 6.6 | 6.65962533625658 | -0.0596253362565784
41 | 6.4 | 6.59366769669929 | -0.193667696699291
42 | 6.3 | 6.1948257047995 | 0.105174295200500
43 | 6.2 | 6.23415460888843 | -0.0341546088884289
44 | 6.5 | 6.39081081093499 | 0.109189189065011
45 | 6.8 | 6.80147305177069 | -0.00147305177068493
46 | 6.8 | 6.87269884197524 | -0.0726988419752369
47 | 6.4 | 6.66478889056466 | -0.264788890564657
48 | 6.1 | 6.17594173879083 | -0.075941738790829
49 | 5.8 | 5.95503900292398 | -0.155039002923979
50 | 6.1 | 5.78825789098433 | 0.311742109015668
51 | 7.2 | 7.04017294817167 | 0.159827051828328
52 | 7.3 | 7.52418682357938 | -0.224186823579379
53 | 6.9 | 7.12343378445237 | -0.223433784452367
54 | 6.1 | 6.21303160009998 | -0.113031600099976
55 | 5.8 | 5.81414786553260 | -0.0141478655326054
56 | 6.2 | 6.19697309908567 | 0.00302690091433393
57 | 7.1 | 7.1521716618456 | -0.0521716618456001
58 | 7.7 | 7.63424564033551 | 0.0657543596644856
59 | 7.9 | 7.70862399594809 | 0.191376004051914
60 | 7.7 | 7.62910202632333 | 0.0708979736766684
61 | 7.4 | 7.31894421207148 | 0.0810557879285197
62 | 7.5 | 7.19587478305477 | 0.304125216945229
63 | 8 | 8.08559073779321 | -0.085590737793215
64 | 8.1 | 8.03966802216464 | 0.0603319778353614
65 | 8 | 7.96845482975288 | 0.0315451702471214


Goldfeld-Quandt test for Heteroskedasticity
breakpoint index | p-value (greater) | p-value (2-sided) | p-value (less)
21 | 0.490944935436138 | 0.981889870872276 | 0.509055064563862
22 | 0.509505400477714 | 0.980989199044572 | 0.490494599522286
23 | 0.493335967556963 | 0.986671935113927 | 0.506664032443037
24 | 0.692589578058237 | 0.614820843883527 | 0.307410421941763
25 | 0.625794627741329 | 0.748410744517342 | 0.374205372258671
26 | 0.624404583975842 | 0.751190832048316 | 0.375595416024158
27 | 0.847031689645292 | 0.305936620709416 | 0.152968310354708
28 | 0.78859442162503 | 0.422811156749941 | 0.211405578374970
29 | 0.747594051165028 | 0.504811897669944 | 0.252405948834972
30 | 0.733265991823236 | 0.533468016353527 | 0.266734008176764
31 | 0.650679780070438 | 0.698640439859124 | 0.349320219929562
32 | 0.558044015797843 | 0.883911968404315 | 0.441955984202157
33 | 0.499548354871492 | 0.999096709742983 | 0.500451645128508
34 | 0.3936543173521 | 0.7873086347042 | 0.6063456826479
35 | 0.324127149796449 | 0.648254299592897 | 0.675872850203551
36 | 0.342152412759102 | 0.684304825518204 | 0.657847587240898
37 | 0.258559134787522 | 0.517118269575043 | 0.741440865212478
38 | 0.456274835979265 | 0.91254967195853 | 0.543725164020735
39 | 0.410085051087323 | 0.820170102174646 | 0.589914948912677
40 | 0.397733682675406 | 0.795467365350813 | 0.602266317324593
41 | 0.340030359454569 | 0.680060718909138 | 0.659969640545431
42 | 0.301927583419318 | 0.603855166838636 | 0.698072416580682
43 | 0.188143087253542 | 0.376286174507084 | 0.811856912746458
44 | 0.268237520017897 | 0.536475040035794 | 0.731762479982103


Meta Analysis of Goldfeld-Quandt test for Heteroskedasticity
Description | # significant tests | % significant tests | OK/NOK
1% type I error level | 0 | 0 | OK
5% type I error level | 0 | 0 | OK
10% type I error level | 0 | 0 | OK
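For reference, a single Goldfeld-Quandt test at one breakpoint can be run with the lmtest package roughly as follows; the R code below loops this call over all admissible breakpoints and tabulates the resulting p-values. The breakpoint value 32 is an arbitrary illustration, and mylm is the fitted model object from that code.

library(lmtest)
gqtest(mylm, point = 32, alternative = 'two.sided')$p.value # 2-sided GQ p-value at breakpoint 32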
 
Charts produced by software:
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/10lxwz1258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/10lxwz1258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/126lx1258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/126lx1258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/2ukfb1258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/2ukfb1258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/3fvyl1258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/3fvyl1258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/44dm31258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/44dm31258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/591a81258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/591a81258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/6q97j1258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/6q97j1258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/764os1258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/764os1258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/8nue01258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/8nue01258667848.ps (open in new window)


http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/9jk481258667848.png (open in new window)
http://www.freestatistics.org/blog/date/2009/Nov/19/t12586679315bezu9vmz4kccag/9jk481258667848.ps (open in new window)


 
Parameters (Session):
par2 = Include Monthly Dummies ; par3 = Linear Trend ;
 
Parameters (R input):
par1 = 1 ; par2 = Include Monthly Dummies ; par3 = Linear Trend ;
 
R code (references can be found in the software module):
library(lattice)
library(lmtest)
n25 <- 25 #minimum number of obs. for Goldfeld-Quandt test
par1 <- as.numeric(par1) # column index of the dependent variable
x <- t(y) # transpose the uploaded series so that rows are observations and columns are variables
k <- length(x[1,]) # number of variables
n <- length(x[,1]) # number of observations
x1 <- cbind(x[,par1], x[,1:k!=par1]) # move the dependent variable into the first column
mycolnames <- c(colnames(x)[par1], colnames(x)[1:k!=par1])
colnames(x1) <- mycolnames
x <- x1
if (par3 == 'First Differences'){ # optionally replace every series by its first differences
x2 <- array(0, dim=c(n-1,k), dimnames=list(1:(n-1), paste('(1-B)',colnames(x),sep='')))
for (i in 1:(n-1)) { # loop over the n-1 differenced observations
for (j in 1:k) {
x2[i,j] <- x[i+1,j] - x[i,j]
}
}
x <- x2
}
if (par2 == 'Include Monthly Dummies'){
x2 <- array(0, dim=c(n,11), dimnames=list(1:n, paste('M', seq(1:11), sep =''))) # seasonal dummies M1..M11; the 12th period is the reference category
for (i in 1:11){
x2[seq(i,n,12),i] <- 1 # flag every 12th observation, starting at observation i
}
x <- cbind(x, x2)
}
if (par2 == 'Include Quarterly Dummies'){
x2 <- array(0, dim=c(n,3), dimnames=list(1:n, paste('Q', seq(1:3), sep =''))) # seasonal dummies Q1..Q3; the 4th period is the reference category
for (i in 1:3){
x2[seq(i,n,4),i] <- 1 # flag every 4th observation, starting at observation i
}
x <- cbind(x, x2)
}
k <- length(x[1,])
if (par3 == 'Linear Trend'){
x <- cbind(x, c(1:n)) # append a linear trend column 1..n
colnames(x)[k+1] <- 't'
}
x
k <- length(x[1,]) # recompute the number of columns after adding dummies and/or trend
df <- as.data.frame(x)
(mylm <- lm(df)) # regress the first column (dependent variable) on all remaining columns
(mysum <- summary(mylm))
if (n > n25) {
kp3 <- k + 3
nmkm3 <- n - k - 3
gqarr <- array(NA, dim=c(nmkm3-kp3+1,3))
numgqtests <- 0
numsignificant1 <- 0
numsignificant5 <- 0
numsignificant10 <- 0
for (mypoint in kp3:nmkm3) { # run the Goldfeld-Quandt test at every breakpoint from k+3 to n-k-3
j <- 0
numgqtests <- numgqtests + 1
for (myalt in c('greater', 'two.sided', 'less')) {
j <- j + 1
gqarr[mypoint-kp3+1,j] <- gqtest(mylm, point=mypoint, alternative=myalt)$p.value
}
if (gqarr[mypoint-kp3+1,2] < 0.01) numsignificant1 <- numsignificant1 + 1
if (gqarr[mypoint-kp3+1,2] < 0.05) numsignificant5 <- numsignificant5 + 1
if (gqarr[mypoint-kp3+1,2] < 0.10) numsignificant10 <- numsignificant10 + 1
}
gqarr
}
bitmap(file='test0.png')
plot(x[,1], type='l', main='Actuals and Interpolation', ylab='value of Actuals and Interpolation (dots)', xlab='time or index')
points(x[,1]-mysum$resid)
grid()
dev.off()
bitmap(file='test1.png')
plot(mysum$resid, type='b', pch=19, main='Residuals', ylab='value of Residuals', xlab='time or index')
grid()
dev.off()
bitmap(file='test2.png')
hist(mysum$resid, main='Residual Histogram', xlab='values of Residuals')
grid()
dev.off()
bitmap(file='test3.png')
densityplot(~mysum$resid,col='black',main='Residual Density Plot', xlab='values of Residuals')
dev.off()
bitmap(file='test4.png')
qqnorm(mysum$resid, main='Residual Normal Q-Q Plot')
qqline(mysum$resid)
grid()
dev.off()
(myerror <- as.ts(mysum$resid))
bitmap(file='test5.png')
dum <- cbind(lag(myerror,k=1),myerror)
dum
dum1 <- dum[2:length(myerror),]
dum1
z <- as.data.frame(dum1)
z
plot(z,main=paste('Residual Lag plot, lowess, and regression line'), ylab='values of Residuals', xlab='lagged values of Residuals')
lines(lowess(z))
abline(lm(z))
grid()
dev.off()
bitmap(file='test6.png')
acf(mysum$resid, lag.max=length(mysum$resid)/2, main='Residual Autocorrelation Function')
grid()
dev.off()
bitmap(file='test7.png')
pacf(mysum$resid, lag.max=length(mysum$resid)/2, main='Residual Partial Autocorrelation Function')
grid()
dev.off()
bitmap(file='test8.png')
opar <- par(mfrow = c(2,2), oma = c(0, 0, 1.1, 0))
plot(mylm, las = 1, sub='Residual Diagnostics')
par(opar)
dev.off()
if (n > n25) {
bitmap(file='test9.png')
plot(kp3:nmkm3,gqarr[,2], main='Goldfeld-Quandt test',ylab='2-sided p-value',xlab='breakpoint')
grid()
dev.off()
}
load(file='createtable') # load the helper functions (table.start, table.element, ...) used to build the output tables
a<-table.start()
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Estimated Regression Equation', 1, TRUE)
a<-table.row.end(a)
myeq <- colnames(x)[1]
myeq <- paste(myeq, '[t] = ', sep='')
for (i in 1:k){
if (mysum$coefficients[i,1] > 0) myeq <- paste(myeq, '+', '')
myeq <- paste(myeq, mysum$coefficients[i,1], sep=' ')
if (rownames(mysum$coefficients)[i] != '(Intercept)') {
myeq <- paste(myeq, rownames(mysum$coefficients)[i], sep='')
if (rownames(mysum$coefficients)[i] != 't') myeq <- paste(myeq, '[t]', sep='')
}
}
myeq <- paste(myeq, ' + e[t]')
a<-table.row.start(a)
a<-table.element(a, myeq)
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file='mytable1.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,hyperlink('http://www.xycoon.com/ols1.htm','Multiple Linear Regression - Ordinary Least Squares',''), 6, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Variable',header=TRUE)
a<-table.element(a,'Parameter',header=TRUE)
a<-table.element(a,'S.D.',header=TRUE)
a<-table.element(a,'T-STAT<br />H0: parameter = 0',header=TRUE)
a<-table.element(a,'2-tail p-value',header=TRUE)
a<-table.element(a,'1-tail p-value',header=TRUE)
a<-table.row.end(a)
for (i in 1:k){
a<-table.row.start(a)
a<-table.element(a,rownames(mysum$coefficients)[i],header=TRUE)
a<-table.element(a,mysum$coefficients[i,1])
a<-table.element(a, round(mysum$coefficients[i,2],6))
a<-table.element(a, round(mysum$coefficients[i,3],4))
a<-table.element(a, round(mysum$coefficients[i,4],6))
a<-table.element(a, round(mysum$coefficients[i,4]/2,6))
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable2.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Regression Statistics', 2, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Multiple R',1,TRUE)
a<-table.element(a, sqrt(mysum$r.squared))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'R-squared',1,TRUE)
a<-table.element(a, mysum$r.squared)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Adjusted R-squared',1,TRUE)
a<-table.element(a, mysum$adj.r.squared)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'F-TEST (value)',1,TRUE)
a<-table.element(a, mysum$fstatistic[1])
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'F-TEST (DF numerator)',1,TRUE)
a<-table.element(a, mysum$fstatistic[2])
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'F-TEST (DF denominator)',1,TRUE)
a<-table.element(a, mysum$fstatistic[3])
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'p-value',1,TRUE)
a<-table.element(a, 1-pf(mysum$fstatistic[1],mysum$fstatistic[2],mysum$fstatistic[3]))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Residual Statistics', 2, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Residual Standard Deviation',1,TRUE)
a<-table.element(a, mysum$sigma)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Sum Squared Residuals',1,TRUE)
a<-table.element(a, sum(myerror*myerror))
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file='mytable3.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Actuals, Interpolation, and Residuals', 4, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Time or Index', 1, TRUE)
a<-table.element(a, 'Actuals', 1, TRUE)
a<-table.element(a, 'Interpolation<br />Forecast', 1, TRUE)
a<-table.element(a, 'Residuals<br />Prediction Error', 1, TRUE)
a<-table.row.end(a)
for (i in 1:n) {
a<-table.row.start(a)
a<-table.element(a,i, 1, TRUE)
a<-table.element(a,x[i])
a<-table.element(a,x[i]-mysum$resid[i])
a<-table.element(a,mysum$resid[i])
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable4.tab')
if (n > n25) {
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Goldfeld-Quandt test for Heteroskedasticity',4,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'p-values',header=TRUE)
a<-table.element(a,'Alternative Hypothesis',3,header=TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'breakpoint index',header=TRUE)
a<-table.element(a,'greater',header=TRUE)
a<-table.element(a,'2-sided',header=TRUE)
a<-table.element(a,'less',header=TRUE)
a<-table.row.end(a)
for (mypoint in kp3:nmkm3) {
a<-table.row.start(a)
a<-table.element(a,mypoint,header=TRUE)
a<-table.element(a,gqarr[mypoint-kp3+1,1])
a<-table.element(a,gqarr[mypoint-kp3+1,2])
a<-table.element(a,gqarr[mypoint-kp3+1,3])
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable5.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Meta Analysis of Goldfeld-Quandt test for Heteroskedasticity',4,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Description',header=TRUE)
a<-table.element(a,'# significant tests',header=TRUE)
a<-table.element(a,'% significant tests',header=TRUE)
a<-table.element(a,'OK/NOK',header=TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'1% type I error level',header=TRUE)
a<-table.element(a,numsignificant1)
a<-table.element(a,numsignificant1/numgqtests)
if (numsignificant1/numgqtests < 0.01) dum <- 'OK' else dum <- 'NOK'
a<-table.element(a,dum)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'5% type I error level',header=TRUE)
a<-table.element(a,numsignificant5)
a<-table.element(a,numsignificant5/numgqtests)
if (numsignificant5/numgqtests < 0.05) dum <- 'OK' else dum <- 'NOK'
a<-table.element(a,dum)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'10% type I error level',header=TRUE)
a<-table.element(a,numsignificant10)
a<-table.element(a,numsignificant10/numgqtests)
if (numsignificant10/numgqtests < 0.1) dum <- 'OK' else dum <- 'NOK'
a<-table.element(a,dum)
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file='mytable6.tab')
}
 





Copyright


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.

Software written by Ed van Stee & Patrick Wessa

