
Author's title: (none given)
Author: (the author of this computation has been verified)
R Software Module: rwasp_multipleregression.wasp
Title produced by software: Multiple Regression
Date of computation: Sun, 10 Jan 2016 16:58:26 +0000
Cite this page as follows:
Statistical Computations at FreeStatistics.org, Office for Research Development and Education, URL https://freestatistics.org/blog/index.php?v=date/2016/Jan/10/t1452445126p8d638zbitvx7iq.htm/, Retrieved Sun, 05 May 2024 06:12:43 +0000
Statistical Computations at FreeStatistics.org, Office for Research Development and Education, URL https://freestatistics.org/blog/index.php?pk=287972, Retrieved Sun, 05 May 2024 06:12:43 +0000

Original text written by user: (none)
IsPrivate? No (this computation is public)
User-defined keywords: (none)
Estimated Impact: 82
Family (F = Feedback message, R = changed R code, M = changed R Module, P = changed Parameters, D = changed Data):
-  [Multiple Regression] [] [2016-01-10 16:58:26] [b8b7836a0ee13e4c7bab4c9c2704ddd6] [Current]
Dataseries X (columns, in order: HIV_Risk, Homicides, Prop_Population_on_Farms):
46.4 68.5 0.4
45.7 87.8 0.61
45.3 115.8 0.53
38.6 106.8 0.53
37.2 71.6 0.53
35 60.2 0.37
34 118.7 0.3
28.3 33.7 0.19
24.7 27.2 0.12
24.7 62 0.2
24.4 24.9 0.19
22.7 22.9 0.12
22.3 65.7 0.53
21.7 21.6 0.14
21.6 32.4 0.34
21.3 108.7 0.69
21.2 38.6 0.49
20.8 46.7 0.42
20.3 56.5 0.48
18.9 44.4 0.25
18.8 47.4 0.52
18.6 21.7 0.19
18 55.7 0.44
17.6 27.1 0.24
17 28.5 0.16
16.7 41.6 0.1
15.9 44.6 0.15
15.3 26.1 0.05
15 18.7 0.24
14.8 49.1 0.22




Summary of computational transaction
Raw Input: view raw input (R code)
Raw Output: view raw output of R engine
Computing time: 5 seconds
R Server: 'Sir Ronald Aylmer Fisher' @ fisher.wessa.net

\begin{tabular}{lllllllll}
\hline
Summary of computational transaction \tabularnewline
Raw Input & view raw input (R code)  \tabularnewline
Raw Output & view raw output of R engine  \tabularnewline
Computing time & 5 seconds \tabularnewline
R Server & 'Sir Ronald Aylmer Fisher' @ fisher.wessa.net \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=287972&T=0

[TABLE]
[ROW][C]Summary of computational transaction[/C][/ROW]
[ROW][C]Raw Input[/C][C]view raw input (R code) [/C][/ROW]
[ROW][C]Raw Output[/C][C]view raw output of R engine [/C][/ROW]
[ROW][C]Computing time[/C][C]5 seconds[/C][/ROW]
[ROW][C]R Server[/C][C]'Sir Ronald Aylmer Fisher' @ fisher.wessa.net[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=287972&T=0

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=287972&T=0









Multiple Linear Regression - Estimated Regression Equation
HIV_Risk[t] = 13.2352 + 0.207932 Homicides[t] + 1.86966 Prop_Population_on_Farms[t] + e[t]
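For readers who want to reproduce this equation outside the FreeStatistics engine, the following is a minimal sketch in base R. It assumes that the three columns of Dataseries X correspond to HIV_Risk, Homicides and Prop_Population_on_Farms in that order (the variable names are taken from the equation above; the names `df` and `fit` are chosen here for illustration).

# Minimal reproduction sketch (assumes the three columns of Dataseries X are
# HIV_Risk, Homicides and Prop_Population_on_Farms, in that order)
df <- data.frame(
  HIV_Risk = c(46.4, 45.7, 45.3, 38.6, 37.2, 35, 34, 28.3, 24.7, 24.7,
               24.4, 22.7, 22.3, 21.7, 21.6, 21.3, 21.2, 20.8, 20.3, 18.9,
               18.8, 18.6, 18, 17.6, 17, 16.7, 15.9, 15.3, 15, 14.8),
  Homicides = c(68.5, 87.8, 115.8, 106.8, 71.6, 60.2, 118.7, 33.7, 27.2, 62,
                24.9, 22.9, 65.7, 21.6, 32.4, 108.7, 38.6, 46.7, 56.5, 44.4,
                47.4, 21.7, 55.7, 27.1, 28.5, 41.6, 44.6, 26.1, 18.7, 49.1),
  Prop_Population_on_Farms = c(0.4, 0.61, 0.53, 0.53, 0.53, 0.37, 0.3, 0.19,
                               0.12, 0.2, 0.19, 0.12, 0.53, 0.14, 0.34, 0.69,
                               0.49, 0.42, 0.48, 0.25, 0.52, 0.19, 0.44, 0.24,
                               0.16, 0.1, 0.15, 0.05, 0.24, 0.22)
)
fit <- lm(HIV_Risk ~ Homicides + Prop_Population_on_Farms, data = df)
coef(fit)  # should reproduce the estimates 13.2352, 0.207932 and 1.86966 reported above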

\begin{tabular}{lllllllll}
\hline
Multiple Linear Regression - Estimated Regression Equation \tabularnewline
HIV_Risk[t] =  +  13.2352 +  0.207932Homicides[t] +  1.86966Prop_Population_on_Farms[t]  + e[t] \tabularnewline
 \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=287972&T=1

[TABLE]
[ROW][C]Multiple Linear Regression - Estimated Regression Equation[/C][/ROW]
[ROW][C]HIV_Risk[t] =  +  13.2352 +  0.207932Homicides[t] +  1.86966Prop_Population_on_Farms[t]  + e[t][/C][/ROW]
[ROW][C][/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=287972&T=1

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=287972&T=1









Multiple Linear Regression - Ordinary Least Squares
Variable                   Parameter  S.D.     T-STAT (H0: parameter = 0)  2-tail p-value  1-tail p-value
(Intercept)                +13.23     3.025    +4.375                      0.0001632       8.16e-05
Homicides                  +0.2079    0.06506  +3.196                      0.003534        0.001767
Prop_Population_on_Farms   +1.87      10.85    +0.1723                     0.8645          0.4323
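The 2-tail p-values above are the ones returned by summary() for a fitted lm model; the 1-tail values are simply half of them, which is how the module's own R code (see the listing at the bottom of this page) derives them. A minimal sketch, assuming the `fit` object from the earlier snippet:

# Coefficient table with a 1-tail p-value column added (assumes 'fit' from above)
tab <- summary(fit)$coefficients              # Estimate, Std. Error, t value, Pr(>|t|)
tab <- cbind(tab, `1-tail p` = tab[, 4] / 2)  # one-sided p-value = two-sided / 2
round(tab, 6)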

\begin{tabular}{lllllllll}
\hline
Multiple Linear Regression - Ordinary Least Squares \tabularnewline
Variable & Parameter & S.D. & T-STAT (H0: parameter = 0) & 2-tail p-value & 1-tail p-value \tabularnewline
(Intercept) & +13.23 &  3.025 & +4.3750e+00 &  0.0001632 &  8.16e-05 \tabularnewline
Homicides & +0.2079 &  0.06506 & +3.1960e+00 &  0.003534 &  0.001767 \tabularnewline
Prop_Population_on_Farms & +1.87 &  10.85 & +1.7230e-01 &  0.8645 &  0.4323 \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=287972&T=2

[TABLE]
[ROW][C]Multiple Linear Regression - Ordinary Least Squares[/C][/ROW]
[ROW][C]Variable[/C][C]Parameter[/C][C]S.D.[/C][C]T-STAT (H0: parameter = 0)[/C][C]2-tail p-value[/C][C]1-tail p-value[/C][/ROW]
[ROW][C](Intercept)[/C][C]+13.23[/C][C] 3.025[/C][C]+4.3750e+00[/C][C] 0.0001632[/C][C] 8.16e-05[/C][/ROW]
[ROW][C]Homicides[/C][C]+0.2079[/C][C] 0.06506[/C][C]+3.1960e+00[/C][C] 0.003534[/C][C] 0.001767[/C][/ROW]
[ROW][C]Prop_Population_on_Farms[/C][C]+1.87[/C][C] 10.85[/C][C]+1.7230e-01[/C][C] 0.8645[/C][C] 0.4323[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=287972&T=2

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=287972&T=2









Multiple Linear Regression - Regression Statistics
Multiple R: 0.6648
R-squared: 0.4419
Adjusted R-squared: 0.4006
F-TEST (value): 10.69
F-TEST (DF numerator): 2
F-TEST (DF denominator): 27
p-value: 0.0003804

Multiple Linear Regression - Residual Statistics
Residual Standard Deviation: 7.415
Sum Squared Residuals: 1485
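All of these summary statistics can be read off the summary() object of the fitted model; the overall p-value is computed from the F statistic and its two degrees of freedom, exactly as in the module's R code. A minimal sketch, again assuming `fit` from the first snippet:

# Regression and residual statistics (assumes 'fit' from above)
s <- summary(fit)
sqrt(s$r.squared)         # Multiple R
s$r.squared               # R-squared
s$adj.r.squared           # Adjusted R-squared
s$fstatistic              # F value, DF numerator, DF denominator
1 - pf(s$fstatistic[1], s$fstatistic[2], s$fstatistic[3])  # overall p-value
s$sigma                   # residual standard deviation
sum(residuals(fit)^2)     # sum of squared residuals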

\begin{tabular}{lllllllll}
\hline
Multiple Linear Regression - Regression Statistics \tabularnewline
Multiple R &  0.6648 \tabularnewline
R-squared &  0.4419 \tabularnewline
Adjusted R-squared &  0.4006 \tabularnewline
F-TEST (value) &  10.69 \tabularnewline
F-TEST (DF numerator) & 2 \tabularnewline
F-TEST (DF denominator) & 27 \tabularnewline
p-value &  0.0003804 \tabularnewline
Multiple Linear Regression - Residual Statistics \tabularnewline
Residual Standard Deviation &  7.415 \tabularnewline
Sum Squared Residuals &  1485 \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=287972&T=3

[TABLE]
[ROW][C]Multiple Linear Regression - Regression Statistics[/C][/ROW]
[ROW][C]Multiple R[/C][C] 0.6648[/C][/ROW]
[ROW][C]R-squared[/C][C] 0.4419[/C][/ROW]
[ROW][C]Adjusted R-squared[/C][C] 0.4006[/C][/ROW]
[ROW][C]F-TEST (value)[/C][C] 10.69[/C][/ROW]
[ROW][C]F-TEST (DF numerator)[/C][C]2[/C][/ROW]
[ROW][C]F-TEST (DF denominator)[/C][C]27[/C][/ROW]
[ROW][C]p-value[/C][C] 0.0003804[/C][/ROW]
[ROW][C]Multiple Linear Regression - Residual Statistics[/C][/ROW]
[ROW][C]Residual Standard Deviation[/C][C] 7.415[/C][/ROW]
[ROW][C]Sum Squared Residuals[/C][C] 1485[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=287972&T=3

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=287972&T=3









Multiple Linear Regression - Actuals, Interpolation, and Residuals
Time or Index  Actuals  Interpolation (Forecast)  Residuals (Prediction Error)
 1             46.4     28.23                      18.17
 2             45.7     32.63                      13.07
 3             45.3     38.3                        6.995
 4             38.6     36.43                       2.167
 5             37.2     29.11                       8.086
 6             35       26.44                       8.556
 7             34       38.48                      -4.478
 8             28.3     20.6                        7.702
 9             24.7     19.12                       5.585
10             24.7     26.5                       -1.801
11             24.4     18.77                       5.632
12             22.7     18.22                       4.479
13             22.3     27.89                      -5.587
14             21.7     17.99                       3.712
15             21.6     20.61                       0.9922
16             21.3     37.13                     -15.83
17             21.2     22.18                      -0.9775
18             20.8     23.73                      -2.931
19             20.3     25.88                      -5.581
20             18.9     22.93                      -4.035
21             18.8     24.06                      -5.263
22             18.6     18.1                        0.4975
23             18       25.64                      -7.64
24             17.6     19.32                      -1.719
25             17       19.46                      -2.46
26             16.7     22.07                      -5.372
27             15.9     22.79                      -6.889
28             15.3     18.76                      -3.456
29             15       17.57                      -2.572
30             14.8     23.86                      -9.056
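In this table, "Interpolation" is the in-sample fitted value and "Residuals" is the prediction error (actual minus fitted). A minimal sketch, assuming `fit` and `df` from the first snippet:

# Actuals, fitted values ("Interpolation") and residuals (assumes 'fit' and 'df' from above)
data.frame(Index     = seq_len(nrow(df)),
           Actuals   = df$HIV_Risk,
           Fitted    = fitted(fit),
           Residuals = residuals(fit))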

\begin{tabular}{lllllllll}
\hline
Multiple Linear Regression - Actuals, Interpolation, and Residuals \tabularnewline
Time or Index & Actuals & Interpolation (Forecast) & Residuals (Prediction Error) \tabularnewline
1 &  46.4 &  28.23 &  18.17 \tabularnewline
2 &  45.7 &  32.63 &  13.07 \tabularnewline
3 &  45.3 &  38.3 &  6.995 \tabularnewline
4 &  38.6 &  36.43 &  2.167 \tabularnewline
5 &  37.2 &  29.11 &  8.086 \tabularnewline
6 &  35 &  26.44 &  8.556 \tabularnewline
7 &  34 &  38.48 & -4.478 \tabularnewline
8 &  28.3 &  20.6 &  7.702 \tabularnewline
9 &  24.7 &  19.12 &  5.585 \tabularnewline
10 &  24.7 &  26.5 & -1.801 \tabularnewline
11 &  24.4 &  18.77 &  5.632 \tabularnewline
12 &  22.7 &  18.22 &  4.479 \tabularnewline
13 &  22.3 &  27.89 & -5.587 \tabularnewline
14 &  21.7 &  17.99 &  3.712 \tabularnewline
15 &  21.6 &  20.61 &  0.9922 \tabularnewline
16 &  21.3 &  37.13 & -15.83 \tabularnewline
17 &  21.2 &  22.18 & -0.9775 \tabularnewline
18 &  20.8 &  23.73 & -2.931 \tabularnewline
19 &  20.3 &  25.88 & -5.581 \tabularnewline
20 &  18.9 &  22.93 & -4.035 \tabularnewline
21 &  18.8 &  24.06 & -5.263 \tabularnewline
22 &  18.6 &  18.1 &  0.4975 \tabularnewline
23 &  18 &  25.64 & -7.64 \tabularnewline
24 &  17.6 &  19.32 & -1.719 \tabularnewline
25 &  17 &  19.46 & -2.46 \tabularnewline
26 &  16.7 &  22.07 & -5.372 \tabularnewline
27 &  15.9 &  22.79 & -6.889 \tabularnewline
28 &  15.3 &  18.76 & -3.456 \tabularnewline
29 &  15 &  17.57 & -2.572 \tabularnewline
30 &  14.8 &  23.86 & -9.056 \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=287972&T=4

[TABLE]
[ROW][C]Multiple Linear Regression - Actuals, Interpolation, and Residuals[/C][/ROW]
[ROW][C]Time or Index[/C][C]Actuals[/C][C]Interpolation (Forecast)[/C][C]Residuals (Prediction Error)[/C][/ROW]
[ROW][C]1[/C][C] 46.4[/C][C] 28.23[/C][C] 18.17[/C][/ROW]
[ROW][C]2[/C][C] 45.7[/C][C] 32.63[/C][C] 13.07[/C][/ROW]
[ROW][C]3[/C][C] 45.3[/C][C] 38.3[/C][C] 6.995[/C][/ROW]
[ROW][C]4[/C][C] 38.6[/C][C] 36.43[/C][C] 2.167[/C][/ROW]
[ROW][C]5[/C][C] 37.2[/C][C] 29.11[/C][C] 8.086[/C][/ROW]
[ROW][C]6[/C][C] 35[/C][C] 26.44[/C][C] 8.556[/C][/ROW]
[ROW][C]7[/C][C] 34[/C][C] 38.48[/C][C]-4.478[/C][/ROW]
[ROW][C]8[/C][C] 28.3[/C][C] 20.6[/C][C] 7.702[/C][/ROW]
[ROW][C]9[/C][C] 24.7[/C][C] 19.12[/C][C] 5.585[/C][/ROW]
[ROW][C]10[/C][C] 24.7[/C][C] 26.5[/C][C]-1.801[/C][/ROW]
[ROW][C]11[/C][C] 24.4[/C][C] 18.77[/C][C] 5.632[/C][/ROW]
[ROW][C]12[/C][C] 22.7[/C][C] 18.22[/C][C] 4.479[/C][/ROW]
[ROW][C]13[/C][C] 22.3[/C][C] 27.89[/C][C]-5.587[/C][/ROW]
[ROW][C]14[/C][C] 21.7[/C][C] 17.99[/C][C] 3.712[/C][/ROW]
[ROW][C]15[/C][C] 21.6[/C][C] 20.61[/C][C] 0.9922[/C][/ROW]
[ROW][C]16[/C][C] 21.3[/C][C] 37.13[/C][C]-15.83[/C][/ROW]
[ROW][C]17[/C][C] 21.2[/C][C] 22.18[/C][C]-0.9775[/C][/ROW]
[ROW][C]18[/C][C] 20.8[/C][C] 23.73[/C][C]-2.931[/C][/ROW]
[ROW][C]19[/C][C] 20.3[/C][C] 25.88[/C][C]-5.581[/C][/ROW]
[ROW][C]20[/C][C] 18.9[/C][C] 22.93[/C][C]-4.035[/C][/ROW]
[ROW][C]21[/C][C] 18.8[/C][C] 24.06[/C][C]-5.263[/C][/ROW]
[ROW][C]22[/C][C] 18.6[/C][C] 18.1[/C][C] 0.4975[/C][/ROW]
[ROW][C]23[/C][C] 18[/C][C] 25.64[/C][C]-7.64[/C][/ROW]
[ROW][C]24[/C][C] 17.6[/C][C] 19.32[/C][C]-1.719[/C][/ROW]
[ROW][C]25[/C][C] 17[/C][C] 19.46[/C][C]-2.46[/C][/ROW]
[ROW][C]26[/C][C] 16.7[/C][C] 22.07[/C][C]-5.372[/C][/ROW]
[ROW][C]27[/C][C] 15.9[/C][C] 22.79[/C][C]-6.889[/C][/ROW]
[ROW][C]28[/C][C] 15.3[/C][C] 18.76[/C][C]-3.456[/C][/ROW]
[ROW][C]29[/C][C] 15[/C][C] 17.57[/C][C]-2.572[/C][/ROW]
[ROW][C]30[/C][C] 14.8[/C][C] 23.86[/C][C]-9.056[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=287972&T=4

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=287972&T=4









Goldfeld-Quandt test for Heteroskedasticity
                  p-values (by Alternative Hypothesis)
breakpoint index  greater   2-sided     less
 6                0.8724    0.2553      0.1276
 7                0.8919    0.2163      0.1081
 8                0.9294    0.1411      0.07055
 9                0.9323    0.1354      0.0677
10                0.9722    0.05566     0.02783
11                0.9872    0.02561     0.01281
12                0.9951    0.009716    0.004858
13                1         9.897e-05   4.948e-05
14                1         2.17e-05    1.085e-05
15                1         1.131e-05   5.656e-06
16                1         2.573e-06   1.287e-06
17                1         8.386e-06   4.193e-06
18                1         1.613e-05   8.063e-06
19                1         4.259e-05   2.13e-05
20                1         7.793e-05   3.896e-05
21                0.9998    0.0004209   0.0002104
22                0.9996    0.0008149   0.0004074
23                0.9982    0.003525    0.001762
24                0.9959    0.008275    0.004138
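Each row of this table is a separate Goldfeld-Quandt test with a different breakpoint; the module loops over all admissible breakpoints and records the p-value for the 'greater', 'two.sided' and 'less' alternatives (see the gqtest() loop in the R code below). A minimal sketch using lmtest::gqtest, assuming `fit` from the first snippet and the breakpoint range 6 through 24 shown above:

library(lmtest)  # provides gqtest()

# Goldfeld-Quandt p-values over a range of breakpoints (assumes 'fit' from above)
breakpoints <- 6:24
gq <- t(sapply(breakpoints, function(bp)
  sapply(c("greater", "two.sided", "less"), function(alt)
    gqtest(fit, point = bp, alternative = alt)$p.value)))
rownames(gq) <- breakpoints
gq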

\begin{tabular}{lllllllll}
\hline
Goldfeld-Quandt test for Heteroskedasticity \tabularnewline
p-values & Alternative Hypothesis \tabularnewline
breakpoint index & greater & 2-sided & less \tabularnewline
6 &  0.8724 &  0.2553 &  0.1276 \tabularnewline
7 &  0.8919 &  0.2163 &  0.1081 \tabularnewline
8 &  0.9294 &  0.1411 &  0.07055 \tabularnewline
9 &  0.9323 &  0.1354 &  0.0677 \tabularnewline
10 &  0.9722 &  0.05566 &  0.02783 \tabularnewline
11 &  0.9872 &  0.02561 &  0.01281 \tabularnewline
12 &  0.9951 &  0.009716 &  0.004858 \tabularnewline
13 &  1 &  9.897e-05 &  4.948e-05 \tabularnewline
14 &  1 &  2.17e-05 &  1.085e-05 \tabularnewline
15 &  1 &  1.131e-05 &  5.656e-06 \tabularnewline
16 &  1 &  2.573e-06 &  1.287e-06 \tabularnewline
17 &  1 &  8.386e-06 &  4.193e-06 \tabularnewline
18 &  1 &  1.613e-05 &  8.063e-06 \tabularnewline
19 &  1 &  4.259e-05 &  2.13e-05 \tabularnewline
20 &  1 &  7.793e-05 &  3.896e-05 \tabularnewline
21 &  0.9998 &  0.0004209 &  0.0002104 \tabularnewline
22 &  0.9996 &  0.0008149 &  0.0004074 \tabularnewline
23 &  0.9982 &  0.003525 &  0.001762 \tabularnewline
24 &  0.9959 &  0.008275 &  0.004138 \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=287972&T=5

[TABLE]
[ROW][C]Goldfeld-Quandt test for Heteroskedasticity[/C][/ROW]
[ROW][C]p-values[/C][C]Alternative Hypothesis[/C][/ROW]
[ROW][C]breakpoint index[/C][C]greater[/C][C]2-sided[/C][C]less[/C][/ROW]
[ROW][C]6[/C][C] 0.8724[/C][C] 0.2553[/C][C] 0.1276[/C][/ROW]
[ROW][C]7[/C][C] 0.8919[/C][C] 0.2163[/C][C] 0.1081[/C][/ROW]
[ROW][C]8[/C][C] 0.9294[/C][C] 0.1411[/C][C] 0.07055[/C][/ROW]
[ROW][C]9[/C][C] 0.9323[/C][C] 0.1354[/C][C] 0.0677[/C][/ROW]
[ROW][C]10[/C][C] 0.9722[/C][C] 0.05566[/C][C] 0.02783[/C][/ROW]
[ROW][C]11[/C][C] 0.9872[/C][C] 0.02561[/C][C] 0.01281[/C][/ROW]
[ROW][C]12[/C][C] 0.9951[/C][C] 0.009716[/C][C] 0.004858[/C][/ROW]
[ROW][C]13[/C][C] 1[/C][C] 9.897e-05[/C][C] 4.948e-05[/C][/ROW]
[ROW][C]14[/C][C] 1[/C][C] 2.17e-05[/C][C] 1.085e-05[/C][/ROW]
[ROW][C]15[/C][C] 1[/C][C] 1.131e-05[/C][C] 5.656e-06[/C][/ROW]
[ROW][C]16[/C][C] 1[/C][C] 2.573e-06[/C][C] 1.287e-06[/C][/ROW]
[ROW][C]17[/C][C] 1[/C][C] 8.386e-06[/C][C] 4.193e-06[/C][/ROW]
[ROW][C]18[/C][C] 1[/C][C] 1.613e-05[/C][C] 8.063e-06[/C][/ROW]
[ROW][C]19[/C][C] 1[/C][C] 4.259e-05[/C][C] 2.13e-05[/C][/ROW]
[ROW][C]20[/C][C] 1[/C][C] 7.793e-05[/C][C] 3.896e-05[/C][/ROW]
[ROW][C]21[/C][C] 0.9998[/C][C] 0.0004209[/C][C] 0.0002104[/C][/ROW]
[ROW][C]22[/C][C] 0.9996[/C][C] 0.0008149[/C][C] 0.0004074[/C][/ROW]
[ROW][C]23[/C][C] 0.9982[/C][C] 0.003525[/C][C] 0.001762[/C][/ROW]
[ROW][C]24[/C][C] 0.9959[/C][C] 0.008275[/C][C] 0.004138[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=287972&T=5

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=287972&T=5









Meta Analysis of Goldfeld-Quandt test for Heteroskedasticity
Description              # significant tests  % significant tests  OK/NOK
1% type I error level    13                   0.6842               NOK
5% type I error level    14                   0.736842             NOK
10% type I error level   15                   0.789474             NOK
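The meta-analysis simply counts how many of the two-sided Goldfeld-Quandt p-values fall below each significance level; the module flags NOK when the resulting proportion is not below the level itself. A minimal sketch, assuming the `gq` matrix from the previous snippet:

# Share of significant two-sided GQ tests at each level (assumes 'gq' from above)
p2 <- gq[, "two.sided"]
for (alpha in c(0.01, 0.05, 0.10)) {
  n_sig <- sum(p2 < alpha)          # number of significant tests
  frac  <- n_sig / length(p2)       # proportion of significant tests
  cat(alpha, ":", n_sig, "significant,", round(frac, 6),
      if (frac < alpha) "OK" else "NOK", "\n")
}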

\begin{tabular}{lllllllll}
\hline
Meta Analysis of Goldfeld-Quandt test for Heteroskedasticity \tabularnewline
Description & # significant tests & % significant tests & OK/NOK \tabularnewline
1% type I error level & 13 &  0.6842 & NOK \tabularnewline
5% type I error level & 14 & 0.736842 & NOK \tabularnewline
10% type I error level & 15 & 0.789474 & NOK \tabularnewline
\hline
\end{tabular}
%Source: https://freestatistics.org/blog/index.php?pk=287972&T=6

[TABLE]
[ROW][C]Meta Analysis of Goldfeld-Quandt test for Heteroskedasticity[/C][/ROW]
[ROW][C]Description[/C][C]# significant tests[/C][C]% significant tests[/C][C]OK/NOK[/C][/ROW]
[ROW][C]1% type I error level[/C][C]13[/C][C] 0.6842[/C][C]NOK[/C][/ROW]
[ROW][C]5% type I error level[/C][C]14[/C][C]0.736842[/C][C]NOK[/C][/ROW]
[ROW][C]10% type I error level[/C][C]15[/C][C]0.789474[/C][C]NOK[/C][/ROW]
[/TABLE]
Source: https://freestatistics.org/blog/index.php?pk=287972&T=6

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=287972&T=6





Parameters (Session):
par1 = 1 ; par2 = 2 ; par3 = Exact Pearson Chi-Squared by Simulation ;
Parameters (R input):
par1 = 1 ; par2 = Do not include Seasonal Dummies ; par3 = No Linear Trend ; par4 = 0 ; par5 = 0 ;
R code (references can be found in the software module):
library(lattice)
library(lmtest)
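# Note: the data matrix 'y' and the parameters par1..par5 are not defined here;
# they are assumed to be supplied by the calling R module (see the parameter
# listings above) before this script runs.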
n25 <- 25 #minimum number of obs. for Goldfeld-Quandt test
mywarning <- ''
par1 <- as.numeric(par1)
if(is.na(par1)) {
par1 <- 1
mywarning = 'Warning: you did not specify the column number of the endogenous series! The first column was selected by default.'
}
if (par4=='') par4 <- 0
par4 <- as.numeric(par4)
if (par5=='') par5 <- 0
par5 <- as.numeric(par5)
x <- na.omit(t(y))
k <- length(x[1,])
n <- length(x[,1])
x1 <- cbind(x[,par1], x[,1:k!=par1])
mycolnames <- c(colnames(x)[par1], colnames(x)[1:k!=par1])
colnames(x1) <- mycolnames #colnames(x)[par1]
x <- x1
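# Optional transformations of the data, selected through par3:
# first differences, seasonal differences (s=12), or both.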
if (par3 == 'First Differences'){
(n <- n -1)
x2 <- array(0, dim=c(n,k), dimnames=list(1:n, paste('(1-B)',colnames(x),sep='')))
for (i in 1:n) {
for (j in 1:k) {
x2[i,j] <- x[i+1,j] - x[i,j]
}
}
x <- x2
}
if (par3 == 'Seasonal Differences (s=12)'){
(n <- n - 12)
x2 <- array(0, dim=c(n,k), dimnames=list(1:n, paste('(1-B12)',colnames(x),sep='')))
for (i in 1:n) {
for (j in 1:k) {
x2[i,j] <- x[i+12,j] - x[i,j]
}
}
x <- x2
}
if (par3 == 'First and Seasonal Differences (s=12)'){
(n <- n -1)
x2 <- array(0, dim=c(n,k), dimnames=list(1:n, paste('(1-B)',colnames(x),sep='')))
for (i in 1:n) {
for (j in 1:k) {
x2[i,j] <- x[i+1,j] - x[i,j]
}
}
x <- x2
(n <- n - 12)
x2 <- array(0, dim=c(n,k), dimnames=list(1:n, paste('(1-B12)',colnames(x),sep='')))
for (i in 1:n) {
for (j in 1:k) {
x2[i,j] <- x[i+12,j] - x[i,j]
}
}
x <- x2
}
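# Optionally append lagged copies of the endogenous variable:
# par4 non-seasonal lags and par5 seasonal (s=12) lags.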
if(par4 > 0) {
x2 <- array(0, dim=c(n-par4,par4), dimnames=list(1:(n-par4), paste(colnames(x)[par1],'(t-',1:par4,')',sep='')))
for (i in 1:(n-par4)) {
for (j in 1:par4) {
x2[i,j] <- x[i+par4-j,par1]
}
}
x <- cbind(x[(par4+1):n,], x2)
n <- n - par4
}
if(par5 > 0) {
x2 <- array(0, dim=c(n-par5*12,par5), dimnames=list(1:(n-par5*12), paste(colnames(x)[par1],'(t-',1:par5,'s)',sep='')))
for (i in 1:(n-par5*12)) {
for (j in 1:par5) {
x2[i,j] <- x[i+par5*12-j*12,par1]
}
}
x <- cbind(x[(par5*12+1):n,], x2)
n <- n - par5*12
}
if (par2 == 'Include Monthly Dummies'){
x2 <- array(0, dim=c(n,11), dimnames=list(1:n, paste('M', seq(1:11), sep ='')))
for (i in 1:11){
x2[seq(i,n,12),i] <- 1
}
x <- cbind(x, x2)
}
if (par2 == 'Include Quarterly Dummies'){
x2 <- array(0, dim=c(n,3), dimnames=list(1:n, paste('Q', seq(1:3), sep ='')))
for (i in 1:3){
x2[seq(i,n,4),i] <- 1
}
x <- cbind(x, x2)
}
(k <- length(x[n,]))
if (par3 == 'Linear Trend'){
x <- cbind(x, c(1:n))
colnames(x)[k+1] <- 't'
}
x
(k <- length(x[n,]))
head(x)
df <- as.data.frame(x)
(mylm <- lm(df))
(mysum <- summary(mylm))
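# Goldfeld-Quandt test for heteroskedasticity at every admissible breakpoint
# (only computed when the number of observations exceeds n25 = 25).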
if (n > n25) {
kp3 <- k + 3
nmkm3 <- n - k - 3
gqarr <- array(NA, dim=c(nmkm3-kp3+1,3))
numgqtests <- 0
numsignificant1 <- 0
numsignificant5 <- 0
numsignificant10 <- 0
for (mypoint in kp3:nmkm3) {
j <- 0
numgqtests <- numgqtests + 1
for (myalt in c('greater', 'two.sided', 'less')) {
j <- j + 1
gqarr[mypoint-kp3+1,j] <- gqtest(mylm, point=mypoint, alternative=myalt)$p.value
}
if (gqarr[mypoint-kp3+1,2] < 0.01) numsignificant1 <- numsignificant1 + 1
if (gqarr[mypoint-kp3+1,2] < 0.05) numsignificant5 <- numsignificant5 + 1
if (gqarr[mypoint-kp3+1,2] < 0.10) numsignificant10 <- numsignificant10 + 1
}
gqarr
}
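# Diagnostic plots: actuals and interpolation, residuals, histogram, density,
# normal Q-Q, residual lag plot, ACF/PACF, and standard lm() diagnostics.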
bitmap(file='test0.png')
plot(x[,1], type='l', main='Actuals and Interpolation', ylab='value of Actuals and Interpolation (dots)', xlab='time or index')
points(x[,1]-mysum$resid)
grid()
dev.off()
bitmap(file='test1.png')
plot(mysum$resid, type='b', pch=19, main='Residuals', ylab='value of Residuals', xlab='time or index')
grid()
dev.off()
bitmap(file='test2.png')
hist(mysum$resid, main='Residual Histogram', xlab='values of Residuals')
grid()
dev.off()
bitmap(file='test3.png')
densityplot(~mysum$resid,col='black',main='Residual Density Plot', xlab='values of Residuals')
dev.off()
bitmap(file='test4.png')
qqnorm(mysum$resid, main='Residual Normal Q-Q Plot')
qqline(mysum$resid)
grid()
dev.off()
(myerror <- as.ts(mysum$resid))
bitmap(file='test5.png')
dum <- cbind(lag(myerror,k=1),myerror)
dum
dum1 <- dum[2:length(myerror),]
dum1
z <- as.data.frame(dum1)
z
plot(z,main=paste('Residual Lag plot, lowess, and regression line'), ylab='values of Residuals', xlab='lagged values of Residuals')
lines(lowess(z))
abline(lm(z))
grid()
dev.off()
bitmap(file='test6.png')
acf(mysum$resid, lag.max=length(mysum$resid)/2, main='Residual Autocorrelation Function')
grid()
dev.off()
bitmap(file='test7.png')
pacf(mysum$resid, lag.max=length(mysum$resid)/2, main='Residual Partial Autocorrelation Function')
grid()
dev.off()
bitmap(file='test8.png')
opar <- par(mfrow = c(2,2), oma = c(0, 0, 1.1, 0))
plot(mylm, las = 1, sub='Residual Diagnostics')
par(opar)
dev.off()
if (n > n25) {
bitmap(file='test9.png')
plot(kp3:nmkm3,gqarr[,2], main='Goldfeld-Quandt test',ylab='2-sided p-value',xlab='breakpoint')
grid()
dev.off()
}
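# Build the HTML output tables; the table.* helper functions are loaded
# below from the 'createtable' file.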
load(file='createtable')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Estimated Regression Equation', 1, TRUE)
a<-table.row.end(a)
myeq <- colnames(x)[1]
myeq <- paste(myeq, '[t] = ', sep='')
for (i in 1:k){
if (mysum$coefficients[i,1] > 0) myeq <- paste(myeq, '+', '')
myeq <- paste(myeq, signif(mysum$coefficients[i,1],6), sep=' ')
if (rownames(mysum$coefficients)[i] != '(Intercept)') {
myeq <- paste(myeq, rownames(mysum$coefficients)[i], sep='')
if (rownames(mysum$coefficients)[i] != 't') myeq <- paste(myeq, '[t]', sep='')
}
}
myeq <- paste(myeq, ' + e[t]')
a<-table.row.start(a)
a<-table.element(a, myeq)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, mywarning)
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file='mytable1.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,hyperlink('ols1.htm','Multiple Linear Regression - Ordinary Least Squares',''), 6, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Variable',header=TRUE)
a<-table.element(a,'Parameter',header=TRUE)
a<-table.element(a,'S.D.',header=TRUE)
a<-table.element(a,'T-STAT
H0: parameter = 0',header=TRUE)
a<-table.element(a,'2-tail p-value',header=TRUE)
a<-table.element(a,'1-tail p-value',header=TRUE)
a<-table.row.end(a)
for (i in 1:k){
a<-table.row.start(a)
a<-table.element(a,rownames(mysum$coefficients)[i],header=TRUE)
a<-table.element(a,formatC(signif(mysum$coefficients[i,1],5),format='g',flag='+'))
a<-table.element(a,formatC(signif(mysum$coefficients[i,2],5),format='g',flag=' '))
a<-table.element(a,formatC(signif(mysum$coefficients[i,3],4),format='e',flag='+'))
a<-table.element(a,formatC(signif(mysum$coefficients[i,4],4),format='g',flag=' '))
a<-table.element(a,formatC(signif(mysum$coefficients[i,4]/2,4),format='g',flag=' '))
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable2.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Regression Statistics', 2, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Multiple R',1,TRUE)
a<-table.element(a,formatC(signif(sqrt(mysum$r.squared),6),format='g',flag=' '))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'R-squared',1,TRUE)
a<-table.element(a,formatC(signif(mysum$r.squared,6),format='g',flag=' '))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Adjusted R-squared',1,TRUE)
a<-table.element(a,formatC(signif(mysum$adj.r.squared,6),format='g',flag=' '))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'F-TEST (value)',1,TRUE)
a<-table.element(a,formatC(signif(mysum$fstatistic[1],6),format='g',flag=' '))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'F-TEST (DF numerator)',1,TRUE)
a<-table.element(a, signif(mysum$fstatistic[2],6))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'F-TEST (DF denominator)',1,TRUE)
a<-table.element(a, signif(mysum$fstatistic[3],6))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'p-value',1,TRUE)
a<-table.element(a,formatC(signif(1-pf(mysum$fstatistic[1],mysum$fstatistic[2],mysum$fstatistic[3]),6),format='g',flag=' '))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Residual Statistics', 2, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Residual Standard Deviation',1,TRUE)
a<-table.element(a,formatC(signif(mysum$sigma,6),format='g',flag=' '))
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Sum Squared Residuals',1,TRUE)
a<-table.element(a,formatC(signif(sum(myerror*myerror),6),format='g',flag=' '))
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file='mytable3.tab')
if(n < 200) {
a<-table.start()
a<-table.row.start(a)
a<-table.element(a, 'Multiple Linear Regression - Actuals, Interpolation, and Residuals', 4, TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a, 'Time or Index', 1, TRUE)
a<-table.element(a, 'Actuals', 1, TRUE)
a<-table.element(a, 'Interpolation
Forecast', 1, TRUE)
a<-table.element(a, 'Residuals
Prediction Error', 1, TRUE)
a<-table.row.end(a)
for (i in 1:n) {
a<-table.row.start(a)
a<-table.element(a,i, 1, TRUE)
a<-table.element(a,formatC(signif(x[i],6),format='g',flag=' '))
a<-table.element(a,formatC(signif(x[i]-mysum$resid[i],6),format='g',flag=' '))
a<-table.element(a,formatC(signif(mysum$resid[i],6),format='g',flag=' '))
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable4.tab')
if (n > n25) {
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Goldfeld-Quandt test for Heteroskedasticity',4,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'p-values',header=TRUE)
a<-table.element(a,'Alternative Hypothesis',3,header=TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'breakpoint index',header=TRUE)
a<-table.element(a,'greater',header=TRUE)
a<-table.element(a,'2-sided',header=TRUE)
a<-table.element(a,'less',header=TRUE)
a<-table.row.end(a)
for (mypoint in kp3:nmkm3) {
a<-table.row.start(a)
a<-table.element(a,mypoint,header=TRUE)
a<-table.element(a,formatC(signif(gqarr[mypoint-kp3+1,1],6),format='g',flag=' '))
a<-table.element(a,formatC(signif(gqarr[mypoint-kp3+1,2],6),format='g',flag=' '))
a<-table.element(a,formatC(signif(gqarr[mypoint-kp3+1,3],6),format='g',flag=' '))
a<-table.row.end(a)
}
a<-table.end(a)
table.save(a,file='mytable5.tab')
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Meta Analysis of Goldfeld-Quandt test for Heteroskedasticity',4,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Description',header=TRUE)
a<-table.element(a,'# significant tests',header=TRUE)
a<-table.element(a,'% significant tests',header=TRUE)
a<-table.element(a,'OK/NOK',header=TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'1% type I error level',header=TRUE)
a<-table.element(a,signif(numsignificant1,6))
a<-table.element(a,formatC(signif(numsignificant1/numgqtests,6),format='g',flag=' '))
if (numsignificant1/numgqtests < 0.01) dum <- 'OK' else dum <- 'NOK'
a<-table.element(a,dum)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'5% type I error level',header=TRUE)
a<-table.element(a,signif(numsignificant5,6))
a<-table.element(a,signif(numsignificant5/numgqtests,6))
if (numsignificant5/numgqtests < 0.05) dum <- 'OK' else dum <- 'NOK'
a<-table.element(a,dum)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'10% type I error level',header=TRUE)
a<-table.element(a,signif(numsignificant10,6))
a<-table.element(a,signif(numsignificant10/numgqtests,6))
if (numsignificant10/numgqtests < 0.1) dum <- 'OK' else dum <- 'NOK'
a<-table.element(a,dum)
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file='mytable6.tab')
}
}