Logistic Regression Significance Checking

mikaelgirum · 21 slides · Jul 20, 2024

About This Presentation

statistics


Slide Content

Testing for the Significance of the Coefficients

After estimating the coefficients of the logistic regression model using the maximum likelihood method, the next step is to assess the statistical significance of the independent variables. This involves formulating and testing a statistical hypothesis to determine whether the predictors in the model are significantly related to the outcome variable. The general approach for performing this test is applicable across different types of models, with the specific details varying based on the particular model being used.

The null hypothesis typically states that the coefficient for a given independent variable is equal to zero, meaning that the variable is not significantly associated with the outcome after accounting for the other predictors in the model. The alternative hypothesis is that the coefficient is not equal to zero, indicating that the variable has a significant effect on the outcome. To test this hypothesis, we can use the Wald statistic, which is the ratio of the estimated coefficient to its standard error.

Under the null hypothesis, the Wald statistic follows a standard normal distribution, allowing us to compute a p-value and assess the statistical significance of the coefficient. In addition to the Wald statistic, other test statistics, such as the likelihood ratio test and the score test, can also be used to evaluate the significance of the coefficients in a logistic regression model.
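
As an illustration of these tests, the sketch below fits a logistic regression to simulated data with statsmodels (both the library choice and the data are assumptions made for the example, not part of the slides) and computes the Wald statistic, its two-sided p-value, and the likelihood ratio test against the intercept-only model.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

# Simulate a binary outcome from a known logit so the example is self-contained.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.2 * x))))

# Fit the logistic regression by maximum likelihood.
X = sm.add_constant(x)                      # adds the intercept column
fit = sm.Logit(y, X).fit(disp=False)

# Wald statistic: estimated coefficient divided by its standard error.
beta_hat = fit.params[1]
se_hat = fit.bse[1]
wald_z = beta_hat / se_hat
p_wald = 2 * norm.sf(abs(wald_z))           # two-sided p-value from the standard normal

# statsmodels reports the same Wald z-based p-value directly.
print(wald_z, p_wald, fit.pvalues[1])

# Likelihood ratio test against the intercept-only model.
print(fit.llr, fit.llr_pvalue)
```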

Dichotomous Independent Variables in Logistic Regression

This section provides a detailed explanation of how to interpret logistic regression coefficients when the independent variable is nominal-scaled and dichotomous (i.e., measured at two levels). When the independent variable, x, is coded as either 0 or 1, the difference in the logit for a subject with x = 1 and x = 0 is simply the value of the regression coefficient, β1. This can be shown through a straightforward algebraic derivation: g(1) − g(0) = (β0 + β1 × 1) − (β0 + β1 × 0) = (β0 + β1) − β0 = β1.

The key steps in this process are: identifying the coding of the dichotomous independent variable, calculating the difference in the logit for x = 1 and x = 0, recognizing that this difference equals the regression coefficient β1, and interpreting the coefficient accordingly.
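
The identity g(1) − g(0) = β1 can be checked numerically. The following sketch, again using statsmodels and simulated data purely for illustration, fits a model with a 0/1-coded covariate and confirms that the difference in fitted logits equals the estimated coefficient.

```python
import numpy as np
import statsmodels.api as sm

# Simulated dichotomous covariate coded 0/1 and a binary outcome.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=300)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x))))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
b0, b1 = fit.params

g1 = b0 + b1 * 1      # logit when x = 1
g0 = b0 + b1 * 0      # logit when x = 0
print(g1 - g0, b1)    # the logit difference equals the coefficient beta1
```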

The interpretation of logistic regression coefficients when the independent variable is nominal-scaled and dichotomous (i.e., measured at two levels) involves a multi-step process. First, we identify the two values of the covariate to be compared, typically coded as 0 and 1. Next, we substitute these values into the equation for the logit and calculate the difference between the logit when x = 1 and the logit when x = 0. As shown above, for a dichotomous covariate coded 0 and 1, this difference is equal to the regression coefficient β1. This logit difference represents the change in the log of the odds associated with a one-unit change in the covariate.

The Odds Ratio as a Measure of Association

The odds ratio is a powerful measure of association in logistic regression, particularly when the independent variable is dichotomous. It represents the ratio of the odds of the outcome being present when x = 1 to the odds when x = 0. Importantly, for a logistic regression model with a dichotomous independent variable coded 0 and 1, the odds ratio is simply the exponentiated value of the regression coefficient, OR = exp(β1). This relationship between the odds ratio and the logistic regression coefficient is crucial for interpreting the effect of a dichotomous covariate. By exponentiating the logit difference, we obtain a meaningful and easily interpretable measure of the magnitude and direction of the association between the independent variable and the outcome of interest. For example, an OR = 2 would indicate that the odds of the outcome are twice as high for those with x = 1 compared to those with x = 0.
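
As a hedged illustration (simulated data and statsmodels are assumptions for the example), the sketch below shows that for a 0/1-coded covariate the exponentiated coefficient matches the odds ratio computed directly from the 2x2 table of outcome by exposure.

```python
import numpy as np
import statsmodels.api as sm

# Simulated 0/1 covariate and binary outcome.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=500)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * x))))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
or_model = np.exp(fit.params[1])                 # OR = exp(beta1)

# Cross-product (2x2 table) odds ratio computed directly from the data.
p1, p0 = y[x == 1].mean(), y[x == 0].mean()
or_table = (p1 / (1 - p1)) / (p0 / (1 - p0))

print(or_model, or_table)                        # agree up to optimizer precision
```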

Confidence Interval Estimation for the Odds Ratio

The estimator of the odds ratio, denoted OR̂, tends to have a highly skewed distribution, especially with small sample sizes, because the range of the odds ratio is between 0 and ∞, with the null value being 1. For this reason, inference is usually based on the sampling distribution of the log-odds ratio, ln(OR̂) = β̂1, which is more likely to follow a normal distribution even with smaller sample sizes. This allows for the construction of confidence intervals using the familiar formula: exp[β̂1 ± z1−α/2 × SE(β̂1)].
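
The sketch below applies that formula with hypothetical values for β̂1 and SE(β̂1); the numbers are placeholders chosen only to demonstrate the calculation.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical estimates for beta1 and its standard error (illustrative numbers only).
b1, se_b1 = 0.7, 0.25
z = norm.ppf(0.975)                               # z_{1 - alpha/2} for a 95% interval

or_hat = np.exp(b1)
ci = np.exp(b1 - z * se_b1), np.exp(b1 + z * se_b1)
print(or_hat, ci)     # endpoints are computed on the log scale, then exponentiated
```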

Odds Ratio Estimates by Software Packages

Many statistical software packages automatically provide point and confidence interval estimates based on the exponentiation of each coefficient in a fitted logistic regression model. However, these automatic estimates are accurate estimates of the odds ratio only in a few special cases, such as when the independent variable is dichotomous, coded as 0 and 1, and not involved in any interactions with other variables.

Consider the effect that coding has on computing the estimator of the odds ratio. The estimator is given by OR̂ = exp(β̂1), and this is correct as long as the independent variable is coded as 0 and 1. More generally, if the covariate is coded with values a and b, the odds ratio comparing b to a is OR̂ = exp((b − a)β̂1). The choice of coding therefore affects both the interpretation and the calculation of the confidence interval for the odds ratio.

The Effect of Coding on the Confidence Intervals

The choice of coding for a dichotomous independent variable can have a significant impact on the interpretation and calculation of the confidence interval for the odds ratio. When the covariate is coded as 0 and 1, the estimator of the odds ratio is simply the exponentiated value of the regression coefficient, OR̂ = exp(β̂1). However, this straightforward relationship does not hold for other coding schemes.

The choice of coding (i.e., the values a and b) can influence the magnitude and interpretation of the odds ratio, as well as the calculation of the confidence interval. For example, if the covariate is coded as -1 and 1, the correct odds ratio estimator is OR̂ = exp(2β̂1), which is the square of the naively exponentiated coefficient exp(β̂1) that software would report under that coding. It is important to note that software packages may not always provide the correct odds ratio estimates and confidence intervals, especially when the coding of the independent variable is not the standard 0 and 1. Therefore, it is crucial for the researcher to understand the four-step process and apply it correctly, regardless of the coding scheme used, to ensure accurate interpretation of the odds ratio and its associated uncertainty.
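
The coding effect can be demonstrated by fitting the same simulated data twice, once with 0/1 coding and once with -1/+1 coding (the data and the use of statsmodels are assumptions for the example). The naively exponentiated coefficient from the -1/+1 fit is not the odds ratio; multiplying the coefficient by (b − a) = 2 before exponentiating recovers it.

```python
import numpy as np
import statsmodels.api as sm

# The same simulated data fitted twice: covariate coded 0/1 versus -1/+1.
rng = np.random.default_rng(3)
x01 = rng.integers(0, 2, size=500)                # coded 0 and 1
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.9 * x01))))
xpm = 2 * x01 - 1                                 # same groups recoded as -1 and +1

b01 = sm.Logit(y, sm.add_constant(x01)).fit(disp=False).params[1]
bpm = sm.Logit(y, sm.add_constant(xpm)).fit(disp=False).params[1]

print(np.exp(b01))        # correct odds ratio under 0/1 coding
print(np.exp(bpm))        # naive exponentiation under -1/+1 coding: only the square root of the OR
print(np.exp(2 * bpm))    # exp((b - a) * beta1) with b - a = 2 recovers the odds ratio
```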

Polychotomous Independent Variable

When the independent variable in a logistic regression model is nominal-scaled and has more than two levels (a polychotomous variable), the interpretation of the regression coefficients and the odds ratio becomes more complex. Unlike the case of a dichotomous covariate, where the odds ratio represents the comparison of a single group to a reference group, with a polychotomous variable we need to make multiple comparisons to fully characterize the effect. The key steps for handling a polychotomous independent variable are: 1) identify the reference group, 2) create a set of design variables to represent the remaining categories, and 3) interpret the regression coefficients and odds ratios for each comparison to the reference group. This allows us to evaluate how the odds of the outcome differ across the various levels of the nominal variable.

Careful attention must be paid to the interpretation, as the odds ratios now represent the change in odds for each category relative to the chosen reference. The choice of reference group can impact the magnitude and direction of the odds ratios, so it is important to select the most meaningful or clinically relevant reference for the research question at hand.
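
A minimal sketch of reference-cell (dummy) coding for a hypothetical three-level covariate is shown below; the level labels, reference choice, and data are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical three-level nominal covariate with level "A" as the reference group.
rng = np.random.default_rng(4)
group = pd.Series(rng.choice(["A", "B", "C"], size=600))
logit = -1.0 + 0.6 * (group == "B") + 1.1 * (group == "C")
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Design (dummy) variables for the non-reference levels only.
design = pd.get_dummies(group)[["B", "C"]].astype(float)
X = sm.add_constant(design)

fit = sm.Logit(y, X).fit(disp=False)
print(np.exp(fit.params[["B", "C"]]))   # odds ratios: each level versus the reference "A"
```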

Assessing the Fit of the Model

Model-building techniques, such as hypothesis tests comparing nested models, are not true assessments of model fit. These methods merely compare the fitted values of different models, not the actual fit of a single model to the data. To properly evaluate the goodness of fit, we must consider both summary measures of the distance between the observed outcomes (y) and the model-estimated outcomes (ŷ), and a thorough examination of the individual contributions of each data point to these summary measures. The key criteria for assessing model fit are: (1) the summary measures of distance between y and ŷ should be small, and (2) the contribution of each individual pair (yi, ŷi) should be unsystematic and small relative to the error structure of the model. This means that a comprehensive model fit assessment involves both global and local evaluations of the discrepancy between the observed and predicted outcomes.

Model Fit Assessment Techniques

The key components of a comprehensive model fit assessment approach are:
- Computation and evaluation of overall measures of fit, such as the Pearson chi-square statistic, the deviance, or the sum-of-squares, which provide a global indication of how well the model fits the data.
- Examination of the individual components of the summary statistics, often through graphical techniques, to identify any systematic patterns or outliers that may indicate areas where the model is not adequately fitting the data.
- Comparison of the observed and fitted values, not in relation to a smaller model, but in an absolute sense, treating the fitted model as a representation of the best possible (saturated) model.

Measures of Goodness of Fit

When assessing the fit of a model, it is important to consider various summary measures of goodness of fit. These statistics provide a global indication of how well the model fits the data, but they do not necessarily reveal information about the individual contributions of each data point. While a small value for these measures can be a good sign, it does not rule out the possibility of substantial deviations from fit for some observations. Conversely, a large value is a clear indication that the model is not adequately capturing the underlying relationships in the data. An important consideration when using these summary measures is the effect the fitted model has on the degrees of freedom available for the assessment.

The term covariate pattern refers to the unique combinations of covariate values observed in the data. The number of covariate patterns can affect the degrees of freedom used in the goodness-of-fit calculations, because the assessment is based on fitted values determined by the covariate patterns present in the data, which may be far fewer than the total number of observations. In logistic regression, where the outcome variable is binary, the summary measures of model fit differ from those used in linear regression. Unlike linear regression, where the residual is simply the difference between the observed and fitted values (y − ŷ), the calculation of residuals in logistic regression must account for the fact that the fitted values represent estimated probabilities rather than continuous outcomes.

The three key summary measures used to assess the goodness of fit in logistic regression are the Pearson chi-square statistic, the deviance, and the sum-of-squares. For a particular covariate pattern j, with m_j subjects and y_j observed outcomes, the Pearson residual is calculated as:

r(y_j, π̂_j) = (y_j − m_j π̂_j) / √( m_j π̂_j (1 − π̂_j) )

The Pearson chi-square statistic is then the sum of the squares of these Pearson residuals across all J covariate patterns:

X² = Σ_{j=1}^{J} [r(y_j, π̂_j)]²

The deviance residual and the sum-of-squares residual are two alternative measures that also capture the discrepancy between the observed and fitted values. These summary statistics, when considered alongside the examination of individual residuals, provide a comprehensive assessment of the model's goodness of fit.
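
The sketch below, using simulated data with a single discrete covariate so that covariate patterns are easy to group, computes the Pearson residual for each pattern and sums their squares to obtain the Pearson chi-square statistic; the data and grouping approach are illustrative assumptions, not the slides' own example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data with one discrete covariate, giving a small number of covariate patterns.
rng = np.random.default_rng(5)
x = rng.integers(0, 5, size=400)                  # five covariate patterns
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.4 * x))))

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=False)

# Aggregate by covariate pattern j: m_j subjects, y_j successes, fitted probability pi_j.
df = pd.DataFrame({"x": x, "y": y, "pi": fit.predict(X)})
g = df.groupby("x").agg(m=("y", "size"), y=("y", "sum"), pi=("pi", "first"))

# Pearson residual per pattern and the Pearson chi-square statistic.
r = (g["y"] - g["m"] * g["pi"]) / np.sqrt(g["m"] * g["pi"] * (1 - g["pi"]))
X2 = (r ** 2).sum()
print(r.round(3).to_dict(), X2)
```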