Search Results

Source: Multivariate Behavioral Research
Resulting in 14 citations.
1. Chen, Jinsong
A Generalized Partially Confirmatory Factor Analysis Framework with Mixed Bayesian Lasso Methods
Multivariate Behavioral Research published online (18 May 2021): DOI: 10.1080/00273171.2021.1925520.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2021.1925520
Cohort(s): NLSY97
Publisher: Taylor & Francis
Keyword(s): Bayesian; Educational Aspirations/Expectations; Modeling; Peers/Peer influence/Peer relations; Schooling; Statistical Analysis

This research extends the partially confirmatory approach to accommodate mixed types of data and missingness in a unified framework that can address a wide range of the confirmatory-exploratory continuum in factor analysis. A mix of Bayesian adaptive and covariance Lasso procedures was developed to estimate model parameters and regularize the loading structure and local dependence simultaneously. Several model variants were offered with different constraints for identification. The less constrained variant can achieve a sufficient condition for the more powerful variant, although loading estimates associated with local dependence can be inflated. Parameter recovery was satisfactory, but the information on local dependence was partially lost with categorical data or missingness. A real-life example illustrated how the models can be used to obtain a more discernible loading pattern and to identify items that do not measure what they are supposed to measure. The proposed methodology has been implemented in the R package LAWBL.
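For readers unfamiliar with the machinery the abstract refers to, a generic Bayesian adaptive Lasso places an item-specific Laplace (double-exponential) prior on each candidate loading so that small cross-loadings are shrunk toward zero, while a covariance (graphical) Lasso applies the same kind of shrinkage to the off-diagonal elements of the residual precision matrix to regularize local dependence. The following is a sketch of that standard formulation, with notation assumed here rather than taken from the article:

\[
p(\lambda_{jk} \mid \gamma_{jk}) = \frac{\gamma_{jk}}{2}\exp\bigl(-\gamma_{jk}\lvert\lambda_{jk}\rvert\bigr),
\qquad
p(\boldsymbol{\Omega}) \propto \exp\Bigl(-\rho \sum_{j<j'} \lvert\omega_{jj'}\rvert\Bigr) \quad \text{for } \boldsymbol{\Omega} \succ 0,
\]

where \lambda_{jk} is the loading of item j on factor k, \gamma_{jk} is its adaptive shrinkage parameter, \Omega = (\omega_{jj'}) is the residual precision matrix whose off-diagonal entries encode local dependence, and \rho controls the amount of shrinkage.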
Bibliography Citation
Chen, Jinsong. "A Generalized Partially Confirmatory Factor Analysis Framework with Mixed Bayesian Lasso Methods." Multivariate Behavioral Research published online (18 May 2021): DOI: 10.1080/00273171.2021.1925520.
2. Choi, Ji Yeh
Kyung, Minjung
Hwang, Heungsun
Park, Ju-Hyun
Bayesian Extended Redundancy Analysis: A Bayesian Approach to Component-based Regression with Dimension Reduction
Multivariate Behavioral Research 55 (2020): 30-48.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2019.1598837
Cohort(s): Children of the NLSY79
Publisher: Taylor & Francis
Keyword(s): Bayesian; Markov chain / Markov model; Monte Carlo; Peabody Individual Achievement Test (PIAT- Math); Peabody Individual Achievement Test (PIAT- Reading); Peabody Picture Vocabulary Test (PPVT)

Extended redundancy analysis (ERA) combines linear regression with dimension reduction to explore the directional relationships between multiple sets of predictors and outcome variables in a parsimonious manner. It aims to extract a component from each set of predictors in such a way that it accounts for the maximum variance of the outcome variables. In this article, we extend ERA into the Bayesian framework, called Bayesian ERA (BERA). The advantages of BERA are threefold. First, BERA enables statistical inferences to be made from samples drawn from the joint posterior distribution of parameters obtained with a Markov chain Monte Carlo algorithm. As such, it does not require any resampling method, which ordinary (frequentist) ERA otherwise needs in order to test the statistical significance of parameter estimates. Second, it formally incorporates relevant information obtained from previous research into analyses by specifying informative power prior distributions. Third, BERA handles missing data by implementing multiple imputation using a Markov chain Monte Carlo algorithm, avoiding the potential bias of parameter estimates due to missing data. We assess the performance of BERA through simulation studies and apply BERA to real data regarding academic achievement.
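As background, the core component regression structure that BERA recasts in Bayesian form extracts one component per predictor set and regresses the outcomes on those components. The statement below is a generic sketch of that structure, with notation assumed here rather than taken from the article:

\[
\mathbf{f}_k = \mathbf{X}_k \mathbf{w}_k, \qquad
\mathbf{Y} = \sum_{k=1}^{K} \mathbf{f}_k \mathbf{a}_k^{\top} + \mathbf{E},
\]

where X_k is the k-th set of predictors, w_k the component weights (typically normalized so the component f_k has unit variance), a_k the loadings relating the component to the outcome variables in Y, and E the residuals. BERA places priors on the weights and loadings and draws from their joint posterior with MCMC, which is what removes the need for resampling-based significance tests.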
Bibliography Citation
Choi, Ji Yeh, Minjung Kyung, Heungsun Hwang and Ju-Hyun Park. "Bayesian Extended Redundancy Analysis: A Bayesian Approach to Component-based Regression with Dimension Reduction." Multivariate Behavioral Research 55 (2020): 30-48.
3. Choi, Ji Yeh
Seo, Juwon
Copula-Based Redundancy Analysis
Multivariate Behavioral Research published online (26 July 2021): DOI: 10.1080/00273171.2021.1941729.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2021.1941729
Cohort(s): Children of the NLSY79
Publisher: Taylor & Francis
Keyword(s): Behavior Problems Index (BPI); Children, Academic Development; Home Observation for Measurement of Environment (HOME); Modeling; Peabody Individual Achievement Test (PIAT- Math); Peabody Individual Achievement Test (PIAT- Reading); Statistical Analysis

Extended Redundancy Analysis (ERA) has recently been developed and widely applied to investigate component regression models. In this paper, we propose Copula-based Redundancy Analysis (CRA) to improve the performance of regression-based ERA. Our simulation results indicate that CRA is significantly superior to regression-based ERA. We also discuss how to modify CRA to accommodate models with discrete, censored, or truncated outcome variables, or a combination thereof, where ERA cannot be employed. For applications, we provide two empirical analyses: one on academic achievement and one on drug use and health.
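The copula idea underlying CRA can be stated compactly via Sklar's theorem: any joint distribution can be written as a copula applied to the marginal distributions, so the dependence structure can be modeled separately from the margins, which is what allows discrete, censored, or truncated outcomes to be handled. This is the generic representation, not the article's specific estimator:

\[
H(y_1, \ldots, y_p) = C\bigl(F_1(y_1), \ldots, F_p(y_p)\bigr),
\]

where F_1, ..., F_p are the marginal distribution functions and C is the copula that captures their dependence.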
Bibliography Citation
Choi, Ji Yeh and Juwon Seo. "Copula-Based Redundancy Analysis." Multivariate Behavioral Research published online (26 July 2021): DOI: 10.1080/00273171.2021.1941729.
4. Grimm, Kevin J.
Jacobucci, Ross
Reliable Trees: Reliability Informed Recursive Partitioning for Psychological Data
Multivariate Behavioral Research published online (16 April 2020): DOI: 10.1080/00273171.2020.1751028.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2020.1751028
Cohort(s): NLSY79 Young Adult
Publisher: Taylor & Francis
Keyword(s): Depression (see also CESD); Health, Mental/Psychological; Monte Carlo; Statistical Analysis

Recursive partitioning, also known as decision trees and classification and regression trees (CART), is a machine learning procedure that has gained traction in the behavioral sciences because of its ability to search for nonlinear and interactive effects, and produce interpretable predictive models. The recursive partitioning algorithm is greedy--searching for the variable and the splitting value that maximize outcome homogeneity. Thus, the algorithm can be overly sensitive to chance associations in the data, particularly in small samples. In an effort to limit chance associations, we propose and evaluate a reliability-based cost function for recursive partitioning. The reliability-based cost function increases the likelihood of selecting variables that are more reliable, which should have more consistent associations with the outcome of interest. Two reliability-based cost functions are proposed, evaluated through simulation, and compared to the CART algorithm. Results indicate that reliability-based cost functions can be beneficial, particularly with smaller samples and when more reliable variables are important to the prediction, but they can overlook important associations between the outcome and lower-reliability predictors. The use of these cost functions was illustrated using data on depression and suicidal ideation from the National Longitudinal Survey of Youth.
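To make the idea of a reliability-based cost function concrete, the sketch below down-weights a candidate split's variance reduction by the splitting variable's reliability, so an unreliable predictor must deliver a larger raw improvement to be selected. The function name and the simple multiplicative weighting are illustrative assumptions, not the exact cost functions proposed in the article.

import numpy as np

def reliability_weighted_gain(y, x, split_value, reliability):
    # Hypothetical reliability-informed split criterion (illustration only):
    # variance reduction from splitting y on x at split_value, multiplied by
    # the predictor's reliability in [0, 1].
    left, right = y[x <= split_value], y[x > split_value]
    if left.size == 0 or right.size == 0:
        return 0.0
    sse_parent = np.sum((y - y.mean()) ** 2)
    sse_children = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
    return reliability * (sse_parent - sse_children)

# Example: the same raw split is valued less when the predictor is less reliable.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * (x > 0) + rng.normal(size=200)
print(reliability_weighted_gain(y, x, 0.0, reliability=0.9))
print(reliability_weighted_gain(y, x, 0.0, reliability=0.5))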
Bibliography Citation
Grimm, Kevin J. and Ross Jacobucci. "Reliable Trees: Reliability Informed Recursive Partitioning for Psychological Data." Multivariate Behavioral Research published online (16 April 2020): DOI: 10.1080/00273171.2020.1751028.
5. Hasl, Andrea
Voelkle, Manuel
Kretschmann, Julia
Richter, Dirk
Brunner, Martin
A Dynamic Structural Equation Approach to Modeling Wage Dynamics and Cumulative Advantage across the Lifespan
Multivariate Behavioral Research published online (7 February 2022): DOI: 10.1080/00273171.2022.2029339.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2022.2029339
Cohort(s): NLSY79
Publisher: Taylor & Francis
Keyword(s): Educational Attainment; Intelligence; Modeling, Structural Equation; Research Methodology; Wage Dynamics; Wage Growth

Wages and wage dynamics directly affect individuals' and families' daily lives. In this article, we show how major theoretical branches of research on wages and inequality--that is, cumulative advantage (CA), human capital theory, and the lifespan perspective--can be integrated into a coherent statistical framework and analyzed with multilevel dynamic structural equation modeling (DSEM). This opens up a new way to empirically investigate the mechanisms that drive growing inequality over time. We demonstrate the new approach by making use of longitudinal, representative U.S. data (NLSY-79). Analyses revealed fundamental between-person differences in both initial wages and autoregressive wage growth rates across the lifespan. Only 0.5% of the sample experienced a "strict" CA and unbounded wage growth, whereas most individuals revealed logarithmic wage growth over time. Adolescent intelligence and adult educational levels explained substantial heterogeneity in both parameters. We discuss how DSEM may help researchers study CA processes and related developmental dynamics, and we highlight the extensions and limitations of the DSEM framework.
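The multilevel DSEM the authors describe can be sketched, in generic form, as a within-person autoregressive wage process whose person-specific parameters are predicted at the between-person level by adolescent intelligence and adult education. The notation below is assumed for illustration and is not taken verbatim from the article:

\[
w_{it} = \alpha_i + \phi_i\,(w_{i,t-1} - \alpha_i) + \varepsilon_{it}, \qquad
\alpha_i = \gamma_{00} + \gamma_{01}\,\mathrm{IQ}_i + \gamma_{02}\,\mathrm{Educ}_i + u_{0i}, \qquad
\phi_i = \gamma_{10} + \gamma_{11}\,\mathrm{IQ}_i + \gamma_{12}\,\mathrm{Educ}_i + u_{1i},
\]

where w_{it} is person i's wage at occasion t, \alpha_i a person-specific wage level, \phi_i a person-specific autoregressive growth parameter (values at or above 1 correspond to the unbounded, "strict" cumulative-advantage pattern), and u_{0i}, u_{1i} between-person random effects.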
Bibliography Citation
Hasl, Andrea, Manuel Voelkle, Julia Kretschmann, Dirk Richter and Martin Brunner. "A Dynamic Structural Equation Approach to Modeling Wage Dynamics and Cumulative Advantage across the Lifespan." Multivariate Behavioral Research published online (7 February 2022): DOI: 10.1080/00273171.2022.2029339.
6. Jeon, Saebom
Seo, Tae Seok
Anthony, James C.
Chung, Hwan
Latent Class Analysis for Repeatedly Measured Multiple Latent Class Variables
Multivariate Behavioral Research published online (25 November 2020): DOI: 10.1080/00273171.2020.1848515.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2020.1848515
Cohort(s): NLSY97
Publisher: Taylor & Francis
Keyword(s): Alcohol Use; Drug Use; Modeling, Latent Class Analysis/Latent Transition Analysis; Statistical Analysis

Research on stage-sequential shifts across multiple latent classes can be challenging, in part because it may not be possible to observe the particular stage-sequential pattern of a single latent class variable directly. In addition, one latent class variable may affect or be affected by other latent class variables, and the associations among multiple latent class variables are not likely to be directly observed either. To address this difficulty, we propose a multivariate latent class analysis for longitudinal data, joint latent class profile analysis (JLCPA), which provides a principle for the systematic identification not only of associations among multiple discrete latent variables but also of sequential patterns of those associations. We also propose a recursive formula for the EM algorithm to overcome the computational burden in estimating the model parameters, and our simulation study shows that the proposed algorithm is much faster in computing estimates than the standard EM method. In this work, we apply JLCPA to data from the National Longitudinal Survey of Youth 1997 in order to investigate the multiple drug-taking behavior of early-onset drinkers from adolescence, through young adulthood, to adulthood.
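Schematically, a joint latent class profile model links the occasion-specific latent class variables through a higher-order profile variable, so that both the associations among the class variables and their sequential patterns are captured. The factorization below is a deliberately simplified sketch (one class variable per occasion, notation assumed here); the article's model additionally allows multiple latent class variables per occasion, covariates, and the recursive EM formula:

\[
P(\mathbf{y}_i) = \sum_{u} P(U_i = u) \prod_{t=1}^{T} \sum_{c} P(C_{it} = c \mid U_i = u) \prod_{j} P(y_{itj} \mid C_{it} = c),
\]

where U_i is person i's joint latent class profile, C_{it} the latent class at occasion t, and y_{itj} the observed substance-use indicators.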
Bibliography Citation
Jeon, Saebom, Tae Seok Seo, James C. Anthony and Hwan Chung. "Latent Class Analysis for Repeatedly Measured Multiple Latent Class Variables." Multivariate Behavioral Research published online (25 November 2020): DOI: 10.1080/00273171.2020.1848515.
7. Lu, Zhenqiu Laura
Zhang, Zhiyong
Lubke, Gitta H.
Bayesian Inference for Growth Mixture Models with Latent Class Dependent Missing Data
Multivariate Behavioral Research 46,4 (2011): 567-597.
Also: http://www.tandfonline.com/doi/abs/10.1080/00273171.2011.589261
Cohort(s): NLSY97
Publisher: Taylor & Francis
Keyword(s): Bayesian; Missing Data/Imputation; Modeling, Growth Curve/Latent Trajectory Analysis

Growth mixture models (GMMs) with nonignorable missing data have drawn increasing attention in research communities but have not been fully studied. The goal of this article is to propose and to evaluate a Bayesian method to estimate the GMMs with latent class dependent missing data. An extended GMM is first presented in which class probabilities depend on some observed explanatory variables and data missingness depends on both the explanatory variables and a latent class variable. A full Bayesian method is then proposed to estimate the model. Through the data augmentation method, conditional posterior distributions for all model parameters and missing data are obtained. A Gibbs sampling procedure is then used to generate Markov chains of model parameters for statistical inference. The application of the model and the method is first demonstrated through the analysis of mathematical ability growth data from the National Longitudinal Survey of Youth 1997 (Bureau of Labor Statistics, U.S. Department of Labor, 1997). A simulation study considering 3 main factors (the sample size, the class probability, and the missing data mechanism) is then conducted and the results show that the proposed Bayesian estimation approach performs very well under the studied conditions. Finally, some implications of this study, including the misspecified missingness mechanism, the sample size, the sensitivity of the model, the number of latent classes, the model comparison, and the future directions of the approach, are discussed.
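The distinguishing feature of the latent-class-dependent (nonignorable) missingness mechanism is that the probability of a missing response depends on both observed covariates and the unobserved class membership. A generic logistic specification, with notation assumed for illustration, is:

\[
\operatorname{logit} P(m_{it} = 1 \mid c_i = k, \mathbf{x}_i) = \gamma_{0k} + \boldsymbol{\gamma}_k^{\top} \mathbf{x}_i,
\]

where m_{it} indicates whether person i's outcome at occasion t is missing, c_i is the latent class, and \mathbf{x}_i are the explanatory variables; probit links are also commonly used in this literature.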
Bibliography Citation
Lu, Zhenqiu Laura, Zhiyong Zhang and Gitta H. Lubke. "Bayesian Inference for Growth Mixture Models with Latent Class Dependent Missing Data." Multivariate Behavioral Research 46,4 (2011): 567-597.
8. Malone, Patrick S.
Lamis, Dorian A.
Masyn, Katherine E.
Northrup, Thomas F.
A Dual-Process Discrete-Time Survival Analysis Model: Application to the Gateway Drug Hypothesis
Multivariate Behavioral Research 45,5 (2010): 790-805.
Also: http://www.informaworld.com/smpp/content~db=all~content=a929458147~frm=abslink
Cohort(s): NLSY97
Publisher: Taylor & Francis
Keyword(s): Drug Use; Modeling; Statistical Analysis; Time Theory

The gateway drug model is a popular conceptualization of a progression most substance users are hypothesized to follow as they try different legal and illegal drugs. Most forms of the gateway hypothesis hold that 'softer' drugs lead to 'harder,' illicit drugs. However, the gateway hypothesis has been notably difficult to test directly--that is, to test as competing hypotheses in a single model whether licit drug use might lead to illicit drug use or the reverse. This article presents a novel statistical technique, dual-process discrete-time survival analysis, which enables this comparison. This method uses mixture-modeling software to estimate 2 concurrent time-to-event processes and their effects on each other. Using this method, support for the gateway hypothesis in the National Longitudinal Survey of Youth, 1997, was weak. However, this article was not designed as a strong test of causal direction but more as a technical demonstration, and it suffered from certain technological limitations. Both these limitations and future directions are discussed.
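In standard discrete-time survival analysis the hazard is the conditional probability of onset at time t given no earlier onset, modeled with a logistic link; the dual-process version estimates two such processes jointly and lets each process's earlier onset affect the other's hazard. A generic sketch with assumed notation, not the article's exact parameterization:

\[
h^{A}_{it} = P(T^{A}_i = t \mid T^{A}_i \ge t), \qquad
\operatorname{logit} h^{A}_{it} = \alpha^{A}_t + \beta_{B \to A}\, d^{B}_{i,t-1},
\]

with the analogous equation for process B, where d^{B}_{i,t-1} indicates whether onset of the other process (e.g., licit drug use) has already occurred by time t-1, so \beta_{B \to A} and \beta_{A \to B} carry the competing gateway hypotheses.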
Bibliography Citation
Malone, Patrick S., Dorian A. Lamis, Katherine E. Masyn and Thomas F. Northrup. "A Dual-Process Discrete-Time Survival Analysis Model: Application to the Gateway Drug Hypothesis." Multivariate Behavioral Research 45,5 (2010): 790-805.
9. O'Keefe, Patrick
Rodgers, Joseph Lee
Double Decomposition of Level-1 Variables in Multilevel Models: An Analysis of the Flynn Effect in the NLSY Data
Multivariate Behavioral Research 52,5 (2017): 630-647.
Also: http://www.tandfonline.com/doi/full/10.1080/00273171.2017.1354758
Cohort(s): Children of the NLSY79
Publisher: Taylor & Francis
Keyword(s): Flynn Effect; I.Q.; Modeling, Multilevel

This paper introduces an extension of cluster mean centering (also called group mean centering) for multilevel models, which we call "double decomposition (DD)." This centering method separates between-level variance, as in cluster mean centering, but also decomposes within-level variance of the same variable. This process retains the benefits of cluster mean centering but allows context variables derived from lower-level variables, other than the cluster mean, to be incorporated into the model. A brief simulation study is presented, demonstrating the potential advantage of (or even necessity for) DD in certain circumstances. Several applications to multilevel analysis are discussed. Finally, an empirical demonstration examining the Flynn effect, our motivating example, is presented. The use of DD in the analysis provides a novel method to narrow the field of plausible causal hypotheses regarding the Flynn effect, in line with suggestions by a number of researchers.
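As a reference point, ordinary cluster (group) mean centering splits a level-1 variable into a between-cluster part and a within-cluster deviation; double decomposition then further decomposes the within-cluster part, which is what lets context variables other than the cluster mean enter the model. Only the familiar first step is shown below, in assumed notation; the specific within-level decomposition is developed in the article:

\[
x_{ij} = \bar{x}_{\cdot j} + (x_{ij} - \bar{x}_{\cdot j}),
\]

where x_{ij} is the level-1 score for unit i in cluster j and \bar{x}_{\cdot j} is the cluster mean, giving the between and within components respectively.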
Bibliography Citation
O'Keefe, Patrick and Joseph Lee Rodgers. "Double Decomposition of Level-1 Variables in Multilevel Models: An Analysis of the Flynn Effect in the NLSY Data." Multivariate Behavioral Research 52,5 (2017): 630-647.
10. O'Keefe, Patrick
Rodgers, Joseph Lee
The Corrosive Influence of the Flynn Effect on Age Normed Tests
Multivariate Behavioral Research 54,1 (2019): 155.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2018.1562322
Cohort(s): Children of the NLSY79
Publisher: Taylor & Francis
Keyword(s): Children, Academic Development; Cognitive Ability; Flynn Effect; I.Q.; Test Scores/Test theory/IRT

This project provides empirical evidence for a built-in FE [Flynn Effect] in age-normed tests. Using the National Longitudinal Survey of Youth-Children dataset (the NLSYC) and a variety of multilevel models, we: (1) show a within-person effect, with individuals scoring higher over time, and (2) do not find evidence for practice effects. Previous work with this sample (O'Keefe & Rodgers, 2017) suggests the within-person effect is not the FE itself. The NLSYC is well-suited to the task because it includes a known FE and longitudinal data. We conclude that there may be an artificial FE built into ability instruments because of this norming bias.
Bibliography Citation
O'Keefe, Patrick and Joseph Lee Rodgers. "The Corrosive Influence of the Flynn Effect on Age Normed Tests." Multivariate Behavioral Research 54,1 (2019): 155.
11. O'Rourke, Holly P.
Fine, Kimberly L.
Grimm, Kevin J.
MacKinnon, David P.
The Importance of Time Metric Precision When Implementing Bivariate Latent Change Score Models
Multivariate Behavioral Research published online (1 February 2021): DOI: 10.1080/00273171.2021.1874261.
Also: https://www.tandfonline.com/doi/full/10.1080/00273171.2021.1874261
Cohort(s): Children of the NLSY79
Publisher: Taylor & Francis
Keyword(s): Modeling; Peabody Individual Achievement Test (PIAT- Math); Peabody Individual Achievement Test (PIAT- Reading); Statistical Analysis; Test Scores/Test theory/IRT

The literature on latent change score (LCS) models does not discuss the importance of using a precise time metric when structuring the data. This study examined the influence of time metric precision on model estimation, model interpretation, and parameter estimate accuracy in bivariate LCS (BLCS) models through simulation. Longitudinal data were generated to mimic a panel study in which assessments took place during a given time window, with variation in start time and measurement lag. The data were analyzed using a precise time metric, where variation in time was accounted for, and then analyzed using a coarse time metric indicating only that the assessment took place during the time window. Results indicated that models estimated using the coarse time metric resulted in biased parameter estimates as well as larger standard errors and larger variances and covariances for intercept and slope. In particular, the coupling parameter estimates--which are unique to BLCS models--were biased, with larger standard errors. An illustrative example of longitudinal bivariate relations between math and reading achievement in a nationally representative survey of children is then used to demonstrate how results and conclusions differ when using time metrics of varying precision. Implications and future directions are discussed.
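For context, a bivariate latent change score model expresses each variable's latent change as a function of a constant-change factor, its own previous level (proportional change), and the previous level of the other variable (coupling); it is the coupling parameters, \gamma below, that the simulation found most sensitive to a coarse time metric. A generic sketch with assumed notation:

\[
\Delta y_{it} = \alpha_y\, s_{yi} + \beta_y\, y_{i,t-1} + \gamma_{yx}\, x_{i,t-1}, \qquad
\Delta x_{it} = \alpha_x\, s_{xi} + \beta_x\, x_{i,t-1} + \gamma_{xy}\, y_{i,t-1},
\]

where \Delta y_{it} = y_{it} - y_{i,t-1}, s_{yi} and s_{xi} are constant-change (slope) factors, \beta the proportional-change parameters, and \gamma the cross-variable coupling parameters.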
Bibliography Citation
O'Rourke, Holly P., Kimberly L. Fine, Kevin J. Grimm and David P. MacKinnon. "The Importance of Time Metric Precision When Implementing Bivariate Latent Change Score Models." Multivariate Behavioral Research published online (1 February 2021): DOI: 10.1080/00273171.2021.1874261.
12. Ou, Lu
Chow, Sy-Miin
Ji, Linying
Molenaar, Peter C.M.
(Re)evaluating the Implications of the Autoregressive Latent Trajectory Model Through Likelihood Ratio Tests of Its Initial Conditions
Multivariate Behavioral Research 52,2 (2017): 178-199.
Also: http://www.tandfonline.com/doi/full/10.1080/00273171.2016.1259980
Cohort(s): NLSY79
Publisher: Taylor & Francis
Keyword(s): Family Income; Modeling, Growth Curve/Latent Trajectory Analysis; Monte Carlo

The autoregressive latent trajectory (ALT) model synthesizes the autoregressive model and the latent growth curve model. The ALT model is flexible enough to produce a variety of discrepant model-implied change trajectories. While some researchers consider this a virtue, others have cautioned that this may confound interpretations of the model's parameters. In this article, we show that some--but not all--of these interpretational difficulties may be clarified mathematically and tested explicitly via likelihood ratio tests (LRTs) imposed on the initial conditions of the model. We show analytically the nested relations among three variants of the ALT model and the constraints needed to establish equivalences. A Monte Carlo simulation study indicated that LRTs, particularly when used in combination with information criterion measures, can allow researchers to test targeted hypotheses about the functional forms of the change process under study. We further demonstrate when and how such tests may justifiably be used to facilitate our understanding of the underlying process of change using a subsample (N = 3,995) of longitudinal family income data from the National Longitudinal Survey of Youth.
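The ALT model augments a latent growth curve with an autoregression on the previous observation, and the interpretational issues the authors examine concern how the first observation (the initial condition) is specified. A generic form with assumed notation:

\[
y_{it} = \eta_{0i} + \lambda_t\, \eta_{1i} + \rho\, y_{i,t-1} + \varepsilon_{it}, \qquad t = 2, \ldots, T,
\]

where \eta_{0i} and \eta_{1i} are the latent intercept and slope, \lambda_t the growth loadings, and \rho the autoregressive parameter; the likelihood ratio tests compare nested ways of treating y_{i1} (e.g., predetermined versus constrained to the growth process).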
Bibliography Citation
Ou, Lu, Sy-Miin Chow, Linying Ji and Peter C.M. Molenaar. "(Re)evaluating the Implications of the Autoregressive Latent Trajectory Model Through Likelihood Ratio Tests of Its Initial Conditions." Multivariate Behavioral Research 52,2 (2017): 178-199.
13. Tong, Xin
Zhang, Zhiyong
Diagnostics of Robust Growth Curve Modeling Using Student's t Distribution
Multivariate Behavioral Research 47,4 (2012): 493-518.
Also: http://www.tandfonline.com/doi/full/10.1080/00273171.2012.692614
Cohort(s): NLSY97
Publisher: Taylor & Francis
Keyword(s): Modeling, Growth Curve/Latent Trajectory Analysis; Peabody Individual Achievement Test (PIAT- Math)

Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions of both random effects and intraindividual measurement errors are proposed and evaluated. The methods include (a) distribution checking based on individual growth curve analysis; (b) distribution comparison based on the Deviance Information Criterion; and (c) post hoc checking of degrees-of-freedom estimates for t distributions. The performance of the methods is compared through simulation studies. When the sample size is reasonably large, the method of post hoc checking of degrees-of-freedom estimates works best. A web interface is developed to ease the use of the 3 methods. Application of the 3 methods is illustrated through growth curve analysis of mathematical ability development using data on the Peabody Individual Achievement Test Mathematics assessment from the National Longitudinal Survey of Youth 1997 Cohort (Bureau of Labor Statistics, U.S. Department of Labor, 2005).
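The robust growth curve models compared here replace normal distributions with Student's t distributions for the random effects and/or the intraindividual measurement errors; a generic linear growth specification of this kind, with notation assumed for illustration, is:

\[
y_{it} = \eta_{0i} + \eta_{1i}\, t + \varepsilon_{it}, \qquad
\boldsymbol{\eta}_i \sim t_{\nu_1}(\boldsymbol{\mu}, \boldsymbol{\Sigma}), \qquad
\varepsilon_{it} \sim t_{\nu_2}(0, \sigma^2),
\]

where small estimated degrees of freedom (\nu) indicate heavy tails, which is what the post hoc check of the degrees-of-freedom estimates exploits.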
Bibliography Citation
Tong, Xin and Zhiyong Zhang. "Diagnostics of Robust Growth Curve Modeling Using Student's t Distribution." Multivariate Behavioral Research 47,4 (2012): 493-518.
14. Tong, Xin
Zhang, Zhiyong
Outlying Observation Diagnostics in Growth Curve Modeling
Multivariate Behavioral Research 52,6 (2017): 768-788.
Also: http://www.tandfonline.com/doi/full/10.1080/00273171.2017.1374824
Cohort(s): NLSY97
Publisher: Taylor & Francis
Keyword(s): Modeling, Growth Curve/Latent Trajectory Analysis; Monte Carlo; Peabody Individual Achievement Test (PIAT- Math); Statistical Analysis

Growth curve models are widely used for investigating growth and change phenomena. Many studies in the social and behavioral sciences have demonstrated that data without any outlying observations are the exception rather than the rule, especially for data collected longitudinally. Ignoring the existence of outlying observations may lead to inaccurate or even incorrect statistical inferences. Therefore, it is crucial to identify outlying observations in growth curve modeling. This study comparatively evaluates six methods of outlying observation diagnostics through a Monte Carlo simulation study on a linear growth curve model, varying sample size, number of measurement occasions, and the proportion, geometry, and type of outlying observations. It is suggested that the greatest chance of success in detecting outlying observations comes from using multiple methods, comparing their results, and making a decision based on the research purpose. A real data analysis example is also provided to illustrate the application of the six outlying observation diagnostic methods.
Bibliography Citation
Tong, Xin and Zhiyong Zhang. "Outlying Observation Diagnostics in Growth Curve Modeling." Multivariate Behavioral Research 52,6 (2017): 768-788.