


 Type 1, 2 errors driving me crazy 

Joined: 01 Mar 2013
Posts: 14
I thought I had this straight, but I keep getting caught out. Does anyone have a definitive rule for how Type I and Type II errors move?
Here's what I have so far:
n increases -> F-stat increases -> Type I decreases -> Type II increases
n increases -> t-stat decreases -> Type I increases -> Type II decreases
alpha increases -> t-stat decreases -> Type I increases -> Type II decreases
From the above we can conclude that when t decreases, Type I increases (and F is the opposite). But if we move to Reading 11, on the effects of serial correlation and heteroskedasticity, they say:
t, F increase -> Type I decreases -> Type II increases
Who's wrong and who's right? Please help!
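I even tried brute-forcing it with a quick Monte Carlo (my own Python sketch, all numbers arbitrary), and it only half-matches my chains above:

Code:
# Type I / Type II rates for a one-sample t-test, by simulation.
# H0: mean = 0. Type I = rejecting when the true mean really is 0;
# Type II = failing to reject when the true mean is actually 0.3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = 5000

def error_rates(n, alpha, true_mean=0.3):
    rej_h0 = rej_h1 = 0
    for _ in range(trials):
        x0 = rng.normal(0.0, 1.0, n)        # data generated under H0
        x1 = rng.normal(true_mean, 1.0, n)  # data generated under H1
        rej_h0 += stats.ttest_1samp(x0, 0.0).pvalue < alpha
        rej_h1 += stats.ttest_1samp(x1, 0.0).pvalue < alpha
    return rej_h0 / trials, 1 - rej_h1 / trials  # (Type I, Type II)

for n in (20, 80):
    for alpha in (0.01, 0.05):
        t1, t2 = error_rates(n, alpha)
        print(f"n={n:2d} alpha={alpha:.2f}  Type I ~{t1:.3f}  Type II ~{t2:.3f}")

As far as I can tell from runs like that, Type I just tracks alpha regardless of n, while Type II falls when either n or alpha rises.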


08 Jul 2015

Joined: 01 Mar 2013
Posts: 20
My interpretation (caution: I'm learning as I go here and haven't had any formal training):
Your first three conditional statements refer to the statistics that bound your confidence intervals or define your rejection region: one-sided for F-tests, one- or two-sided for t-statistics. An increase in sample size improves the power of both tests. For F, proportionally less of the variation will now be left to the residuals. For t, a larger n should increase your calculated t, because the standard error (the sample standard deviation divided by the square root of n) shrinks as n grows, and t moves in the exact opposite direction to the standard error. Setting an arbitrarily larger alpha widens the rejection region (the critical value shrinks), so rejecting the null hypothesis isn't such a high bar anymore, and we should expect it to happen more often, all else equal. More alpha means more power but, unfortunately, more false alarms; more n means more power without raising the false-alarm rate.
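To make that n effect concrete (a toy sketch with made-up values: hold the sample mean and sample standard deviation fixed and vary only n):

Code:
# How the standard error and t-statistic move as n grows, holding the
# sample mean and sample standard deviation fixed (illustrative values).
import math

sample_mean, h0_mean, sample_sd = 0.5, 0.0, 2.0
for n in (25, 100, 400):
    se = sample_sd / math.sqrt(n)      # s / sqrt(n) shrinks as n grows
    t = (sample_mean - h0_mean) / se   # so the t-statistic grows
    print(f"n={n:3d}  se={se:.3f}  t={t:.2f}")

Quadrupling n halves the standard error and doubles t, which is why more data means more power at a fixed alpha.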
Now, to your calculated or observed statistics, which come from the empirical data: with serial/autocorrelation (residuals that bend or distort the OLS output in a pattern) or heteroscedasticity (the spread of the residuals changes from observation to observation), a few points:
1. Autocorrelated residuals violate the Gauss-Markov assumptions. We assumed the OLS output, and thus our slope term (beta), was more precise than it really is; that means our reported standard error is too small (there's more error than our vanilla regression model shows), and that smaller denominator makes our calculated t-statistic too big. It follows that, armed with this false confidence, detecting problems that don't exist (Type I errors) happens more often, not less.
2. Heteroscedasticity is common, and your straight regression line will still typically work for the most part: if the residuals are sometimes clustered on the line and sometimes far from it, but on average an equal distance from it, your OLS line (and hopefully your predictions) will still be useful. At least your data's (and your forecast's) tendency is to remain straight, whereas with autocorrelation it bends over time. Even so, the t-statistics will be off, since the reported standard error could be bigger or smaller than the true one, and the F-statistic could likewise be too big or too small. There's a sketch of point 1 below.
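To see point 1 in action, here's a rough simulation (again my own sketch, parameters arbitrary): the true slope is zero, the errors follow an AR(1) process, and we use the plain iid-assuming OLS standard error, so every rejection is a false alarm:

Code:
# False-alarm (Type I) rate for the OLS slope t-test when residuals are
# AR(1) serially correlated but we use the naive iid standard error.
# True slope is 0 and the nominal test size is 5%; rho etc. are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, rho, trials = 100, 0.8, 2000
x = np.linspace(0.0, 1.0, n)
crit = stats.t.ppf(0.975, df=n - 2)   # two-sided 5% critical value

false_alarms = 0
for _ in range(trials):
    e = np.zeros(n)                    # AR(1) errors:
    shocks = rng.normal(size=n)        #   e_t = rho * e_{t-1} + noise
    for t in range(1, n):
        e[t] = rho * e[t - 1] + shocks[t]
    y = e                              # true model: y = 0 * x + error
    b, a = np.polyfit(x, y, 1)         # OLS slope b, intercept a
    resid = y - (a + b * x)
    s2 = resid @ resid / (n - 2)       # naive residual variance
    se_b = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
    false_alarms += abs(b / se_b) > crit

print(f"Type I rate ~ {false_alarms / trials:.3f} vs nominal 0.050")

The rejection rate should come out well above the nominal 5%, which is exactly the Type I inflation you get from an understated standard error.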
In short: inflated t and F statistics mean more rejections, so more Type I errors and fewer Type II errors. I'd double-check the exact wording of Reading 11, because as quoted the direction looks flipped. Hope my musings helped.


08 Jul 2015

Joined: 12 Jul 2015
Posts: 6
I also ran into this problem. Can anyone help? Thank you!


22 Jul 2015

