Grant Examines Impact of Drug Use Among Women Living with HIV
Congratulations to Methodology Center researchers Stephanie Lanza, Runze Li, and Jingyun (Michael) Yang, who have been awarded a supplement to the Women's Interagency HIV Study (WIHS) by the National Institute on Drug Abuse (NIDA). As treatment of HIV has become more successful, researchers are turning their attention to the factors that affect the quality and length of life of individuals living with HIV. This project, "Joint Modeling of the Effects of Substance Use on Changes in CD4 and Survival Time of Women," will combine survival models with time-varying effect models to examine how alcohol, tobacco, and other drug use affects the health and survival of women with HIV.
Read about time-varying effect models.
Does It Really Matter How I Code My Data?
Although textbooks cover the point routinely, researchers sometimes forget that how a categorical variable is coded determines the interpretation of its beta coefficient in regression analyses. In a new technical report, "Effect Coding Versus Dummy Coding in Analysis of Data From Factorial Experiments," Methodology Center researchers Kari Kugler, Jessica Trail, John Dziak, and Linda Collins explain how effect coding and dummy coding lead to different interpretations of the coefficients in an ANOVA.
Download the tech report.
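A minimal sketch can make the distinction concrete. The example below is not taken from the tech report; it uses invented cell means for a single two-level factor and NumPy least squares to show how the same data yield different coefficient interpretations under the two coding schemes.

```python
import numpy as np

# Invented data for illustration: mean outcome is 10 when the factor is
# "off" and 14 when it is "on" (two observations per cell).
y = np.array([10.0, 10.0, 14.0, 14.0])

# Dummy coding: 0 = off (reference group), 1 = on.
X_dummy = np.column_stack([np.ones(4), [0, 0, 1, 1]])
b_dummy = np.linalg.lstsq(X_dummy, y, rcond=None)[0]
# Intercept = mean of the reference group (10); slope = difference
# between group means (4). So b_dummy is approximately [10, 4].

# Effect coding: -1 = off, +1 = on.
X_effect = np.column_stack([np.ones(4), [-1, -1, 1, 1]])
b_effect = np.linalg.lstsq(X_effect, y, rcond=None)[0]
# Intercept = grand mean (12); slope = each group's deviation from the
# grand mean, i.e., half the group difference (2). So b_effect is
# approximately [12, 2].
```

The fitted model is identical either way; only the meaning of each beta changes, which is exactly why the choice matters when interpreting main effects and interactions in a factorial ANOVA.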
Now Accepting Applications: Postdoc in Prevention and Methodology
The Prevention and Methodology Training (PAMT) program has an opening for a postdoctoral fellow. PAMT, a joint effort between The Prevention Research Center and The Methodology Center, cross-trains graduate and postdoctoral researchers as prevention scientists and methodologists. Through this National Institute on Drug Abuse (NIDA)-funded program, prevention researchers are trained in the latest and most innovative research methods, and methodologists gain an understanding of the realities and challenges facing prevention efforts in real-world settings. If you are interested in PAMT or if you know someone with strong methodological skills and a passion for prevention science, take a moment to review the application requirements.
Read more about PAMT.
Ask a Methodologist: AIC vs BIC
Dear Methodology Center,
I was recently performing a latent class analysis (LCA) and, as is fairly common, I had trouble interpreting the fit statistics. The BIC indicated a 3-class model; the AIC indicated a 5-class model. How do I interpret model fit when the penalized-likelihood information criteria do not point me to a single model?
Stymied by Fit Statistics
Many LCA users have questions like this. As you note, AIC and BIC are both penalized-likelihood criteria. They are often used to compare non-nested models (something ordinary likelihood-ratio tests cannot do) and to help answer the fundamental question, “How many classes should there be in the model I select?”
Despite the differences in their theoretical derivations and motivations, their primary difference in practice is the size of the penalty: BIC penalizes model complexity more heavily. Its penalty per parameter is ln(n) rather than AIC's 2, so BIC's penalty is larger whenever n is at least 8. Consequently, when the two criteria disagree, AIC will indicate a model with more latent classes than BIC. In practice, AIC always has some chance of choosing too large a model, regardless of n. BIC has very little chance of choosing too many classes if n is sufficient, but it has a larger chance than AIC, for any given n, of choosing too few classes.
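The disagreement in your letter is easy to reproduce numerically. The sketch below uses invented log-likelihoods and parameter counts (consistent with an LCA of seven binary items, where a c-class model has (c - 1) + 7c free parameters) to show AIC and BIC pointing to different models; the numbers are hypothetical, not from any real analysis.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: -2*logL + 2*k."""
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2*logL + k*ln(n)."""
    return -2 * loglik + k * math.log(n)

# Hypothetical (log-likelihood, number of parameters) for 3- and 5-class
# solutions of an LCA with seven binary items; n is the sample size.
models = {3: (-2905.0, 23), 5: (-2880.0, 39)}
n = 500

for classes, (ll, k) in models.items():
    print(f"{classes}-class: AIC = {aic(ll, k):.1f}, BIC = {bic(ll, k, n):.1f}")

# With these numbers, AIC is lower for the 5-class model, while BIC's
# heavier per-parameter penalty (ln(500) ≈ 6.2 vs. 2) makes it lower for
# the 3-class model: exactly the disagreement described in the letter.
```

When this happens, a reasonable approach is to treat the two criteria as bracketing a range of candidate models and to weigh interpretability and class sizes alongside fit.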