Search Results

Source: American Association for Public Opinion Research
Resulting in 6 citations.
1. Black, Dan A.
Datta, Rupa
Krishnamurty, Parvati
Mode Effects and Item Nonresponse: Evidence from CPS and NLSY Income Questions
Presented: Anaheim, CA, American Association for Public Opinion Research, Sixty-Second Annual Conference, May 2007.
Also: http://poq.oxfordjournals.org/content/71/3/E485.full
Cohort(s): NLSY79
Publisher: American Association for Public Opinion Research
Keyword(s): Current Population Survey (CPS) / CPS-Fertility Supplement; Income; Interviewing Method; Nonresponse

Permission to reprint the abstract has not been received from the publisher.

Bibliography Citation
Black, Dan A., Rupa Datta and Parvati Krishnamurty. "Mode Effects and Item Nonresponse: Evidence from CPS and NLSY Income Questions." Presented: Anaheim, CA, American Association for Public Opinion Research, Sixty-Second Annual Conference, May 2007.
2. Branden, Laura
Pergamit, Michael R.
Response Error in Reporting Starting Wages
Presented: Danvers, MA, American Association for Public Opinion Research Annual Conference, May 1994.
Cohort(s): NLS General, NLSY79
Publisher: American Association for Public Opinion Research
Keyword(s): Data Quality/Consistency; Human Capital; Job Tenure; Panel Study of Income Dynamics (PSID); Wage Determination; Wages

Permission to reprint the abstract has not been received from the publisher.

Human capital models in labor economics emphasize, among other things, the returns to tenure on a job. While longitudinal data improve these measures compared with cross-sectional data, no household data set contains complete wage profiles for an individual. Generally, the available data consist of a series of contemporaneous wage observations gathered at infrequent intervals, usually once each year. This is the standard in the Panel Study of Income Dynamics (PSID) and in the various National Longitudinal Surveys, the primary longitudinal data sets in labor economics. Since we are unlikely to observe a person exactly when they begin a job, we must ask for their starting wage retrospectively. Retrospective questions tax people's memories in different ways depending on the nature of the information to be retrieved, how it is stored in memory, the length of recall required, the salience of the event, and so on. Starting wages should be among the most easily recalled wages other than the current wage, because the starting wage is connected with a specific event: beginning work for a given employer. Individuals' reports of starting wages are therefore probably the most accurate of any wage reports other than their current wage. In this paper, we use the National Longitudinal Survey of Youth (NLSY), taking advantage of a skip-pattern error that resulted in re-asking most of the sample about the starting wage for their employer at two consecutive interviews. Because we never know the true starting wage, this paper examines the consistency between the two answers given at two different interviews, roughly one year apart.
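
As a rough illustration of the kind of consistency check the abstract describes (a minimal sketch in Python, not the authors' code; the field names and the 5% tolerance are assumptions):

    # Compare two retrospective reports of the same starting wage,
    # given roughly one year apart. All data here are hypothetical.
    import math

    reports = [
        {"wage_t1": 5.00, "wage_t2": 5.00},
        {"wage_t1": 6.50, "wage_t2": 7.00},
        {"wage_t1": 4.25, "wage_t2": 4.35},
    ]

    TOLERANCE = 0.05  # treat reports within ~5% (log scale) as consistent

    def log_gap(r):
        # Absolute difference in log wages: a scale-free error measure.
        return abs(math.log(r["wage_t2"]) - math.log(r["wage_t1"]))

    consistent = sum(1 for r in reports if log_gap(r) <= TOLERANCE)
    print(f"{consistent} of {len(reports)} report pairs consistent")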
Bibliography Citation
Branden, Laura and Michael R. Pergamit. "Response Error in Reporting Starting Wages." Presented: Danvers, MA, American Association for Public Opinion Research Annual Conference, May 1994.
3. Ernst, Michelle
Pergamit, Michael R.
Data Quality and the Use of Standardized Child Assessments in Survey Research
Presented: Miami Beach, FL, Annual Meeting of the American Association for Public Opinion Research, May 2005.
Cohort(s): Children of the NLSY79
Publisher: American Association for Public Opinion Research
Keyword(s): Panel Study of Income Dynamics (PSID); Peabody Individual Achievement Test (PIAT-Math); Peabody Individual Achievement Test (PIAT-Reading); Peabody Picture Vocabulary Test (PPVT); Testing Conditions; Tests and Testing

Permission to reprint the abstract has not been received from the publisher.

Beginning with the Children of the National Longitudinal Survey of Youth/1979 cohort (CNLSY79) in 1986, large-scale surveys began to incorporate standardized assessments. Formerly used only in clinical settings or schools, these assessments are now administered in a household setting by lay field interviewers, which raises data quality concerns. Standardized child assessments have rigid administration protocols, and deviation from procedure can greatly affect a child's response. Furthermore, administrative complexity varies across assessments: some follow a very simple and straightforward protocol, while others rely much more on the skills of the person administering them. It is hypothesized that an administratively complex assessment with strong published psychometric properties may not maintain those properties when administered by interviewers in large-scale studies. This paper proposes examining the published psychometrics for three assessments (the Woodcock-Johnson, the PPVT, and the PIAT) and comparing them with each assessment's reliability and validity within single longitudinal studies (the NLSY79 and the PSID). By using multiple years of assessment data from the Children of the NLSY79 (PPVT/PIAT) and the Panel Study of Income Dynamics Child Supplement (Woodcock-Johnson), we have access to a large number of assessments conducted by a large number of interviewers. We can compare distributions between interviewers as well as look at the same interviewer over time. These two data sets provide a rich source of data that allows us to examine many differences in administration, and to examine how the psychometric properties of different assessments stand up in a large-scale survey as a function of the complexity of the assessment. If interviewer variability is greater in administrations of the complex tests, this argues for greater consideration of administrative procedures when choosing assessments for large-scale survey research.
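
One way to make "interviewer variability" concrete (a minimal sketch, not from the paper; interviewer IDs and scores are invented) is the intraclass correlation from a one-way ANOVA, which estimates the share of score variance attributable to the interviewer:

    # Group assessment scores by interviewer and estimate ICC(1).
    from collections import defaultdict

    data = [("A", 98), ("A", 103), ("A", 95),
            ("B", 110), ("B", 112), ("B", 108),
            ("C", 101), ("C", 99), ("C", 104)]

    by_interviewer = defaultdict(list)
    for interviewer, score in data:
        by_interviewer[interviewer].append(score)

    groups = list(by_interviewer.values())
    n, k = sum(len(g) for g in groups), len(groups)
    grand_mean = sum(sum(g) for g in groups) / n

    ss_between = sum(len(g) * (sum(g)/len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g)/len(g)) ** 2 for g in groups for x in g)
    ms_between, ms_within = ss_between / (k - 1), ss_within / (n - k)

    m = n / k  # assessments per interviewer (balanced in this toy example)
    icc = (ms_between - ms_within) / (ms_between + (m - 1) * ms_within)
    print(f"ICC (variance share due to interviewer): {icc:.2f}")

A higher ICC for the complex assessments than for the simple ones would be the pattern the abstract anticipates.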
Bibliography Citation
Ernst, Michelle and Michael R. Pergamit. "Data Quality and the Use of Standardized Child Assessments in Survey Research." Presented: Miami Beach, FL, Annual Meeting of the American Association for Public Opinion Research, May 2005.
4. Krishnamurty, Parvati
Daquilanea, Jodie
Fennell, Kyle
Long-Term Effects of Incentives: Results from the NLSY97
Presented: Hollywood, FL, American Association for Public Opinion Research, 64th Annual Conference, May 2009.
Also: http://www3.norc.org/Publications/Long-Term+Effects+of+Incentives+-+Results+from+the+NLSY97.htm
Cohort(s): NLSY97
Publisher: American Association for Public Opinion Research
Keyword(s): Attrition; Disadvantaged, Economically; Interviewing Method

Permission to reprint the abstract has not been received from the publisher.

Bibliography Citation
Krishnamurty, Parvati, Jodie Daquilanea and Kyle Fennell. "Long-Term Effects of Incentives: Results from the NLSY97." Presented: Hollywood, FL, American Association for Public Opinion Research, 64th Annual Conference, May 2009.
5. Pedlow, Steven
O'Muircheartaigh, Colm
Combining Samples versus Cumulating Cases: A Comparison of Two Weighting Strategies in NLSY97
Presented: New York, NY, Annual Meeting of the American Statistical Association, August 11-15, 2002.
Cohort(s): NLSY97
Publisher: American Association for Public Opinion Research
Keyword(s): Longitudinal Surveys; Sample Selection; Sampling Weights/Weighting; Statistical Analysis

Permission to reprint the abstract has not been received from the publisher.

Also presented: Portland, OR, American Association for Public Opinion Research (AAPOR) 55th Annual Conference, May 2000.

AAPOR Session E (Impact of Telephone Sampling Design on Sample Efficiency and Bias), Friday, 5/19/2000.

1. Introduction. The National Longitudinal Survey of Youth 1997 (NLSY97) is the latest in a series of surveys sponsored by the U.S. Department of Labor (DoL) to examine issues surrounding youth entry into the work force and subsequent transitions in and out of the work force. The NLSY97 is following a cohort of approximately 9,000 youths who completed an interview in 1997 (the base year). These youths were between 12 and 16 years of age as of December 31, 1996, and are being interviewed annually using a mix of core questions asked annually and varying subject modules. We will compare two different weighting strategies for the first three rounds of NLSY97 data.

In order to improve the precision of estimates for minority youths, the overall study design for NLSY97 included a large oversample of Hispanic youths and non-Hispanic black youths. The design resulted in one large screening sample of over 90,000 housing units to generate youth participants for NLSY97. These housing units were drawn from two independent area-probability samples: (1) a cross-sectional (CX) sample designed to represent the various segments of the eligible population in their proper population proportions, and (2) a supplemental (SU) sample designed to produce, in the most statistically efficient way, the required oversamples of Hispanic youths and non-Hispanic black youths. This paper's main concern is with the construction of sampling weights for estimating population characteristics using both samples together. The paper gives more detailed descriptions of the
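
For intuition, one standard way to combine two independent probability samples (a minimal sketch, not necessarily the strategy the paper adopts; all weights below are invented) is composite weighting, scaling each sample's base weights by a factor proportional to its effective sample size:

    # Composite weighting for two independent samples covering one domain.
    def effective_n(weights):
        # Kish effective sample size: (sum w)^2 / sum w^2.
        return sum(weights) ** 2 / sum(w * w for w in weights)

    cx_weights = [1200.0, 950.0, 1100.0, 1300.0]       # cross-sectional (CX)
    su_weights = [400.0, 420.0, 380.0, 450.0, 410.0]   # supplemental (SU)

    n_cx, n_su = effective_n(cx_weights), effective_n(su_weights)
    lam = n_cx / (n_cx + n_su)  # compositing factor for the CX sample

    combined = ([w * lam for w in cx_weights] +
                [w * (1 - lam) for w in su_weights])
    print(f"lambda = {lam:.3f}, total combined weight = {sum(combined):,.0f}")

The "cumulating cases" alternative would, roughly, weight each case by its overall probability of selection into either sample; the sketch above illustrates only the compositing side of the comparison.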

Bibliography Citation
Pedlow, Steven and Colm O'Muircheartaigh. "Combining Samples versus Cumulating Cases: A Comparison of Two Weighting Strategies in NLSY97." Presented: New York, NY, Annual Meeting of the American Statistical Association, August 11-15, 2002.
6. Wang, Yongyi
Krishnamurty, Parvati
Interview Mode Effects in NLSY97 Round 4 and Round 5
Presented: Phoenix, AZ, American Association for Public Opinion Research Annual Meeting, May 2004.
Cohort(s): NLSY97
Publisher: American Association for Public Opinion Research
Keyword(s): Crime; Data Quality/Consistency; Drug Use; Interviewing Method; Self-Reporting; Sexual Behavior; Smoking (see Cigarette Use)

Permission to reprint the abstract has not been received from the publisher.

The incidence of telephone interviewing has been increasing in successive rounds of NLSY97. There are concerns about the accuracy of responses to sensitive questions when the interview is conducted by telephone rather than self-administered as part of an in-person interview. This study explores the impact of interview mode on respondents' willingness to reveal sensitive information in NLSY97 rounds 4 and 5. The dependent measures for this study include sexual behavior, smoking, drug use, destroying, stealing, attacking, and arrest. Within each round, controlling for differences in demographic characteristics, respondents tend to underreport negative behaviors on most SAQ items when interviews are conducted by telephone. They are also less willing to respond to these sensitive questions, resulting in more missing data. We also linked the two rounds by looking at how individual respondents answered the same questions in round 4 and round 5. The results show that for respondents who did not switch interview mode across rounds, the distributions of response differences do not differ much, regardless of whether the interviews were conducted consistently in person or by phone. For respondents who did switch modes, the distributions of response differences are significantly different for some sensitive items, depending on whether the switch was from in-person to phone or from phone to in-person. This evidence also supports the existence of interview mode effects.
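
As a concrete illustration of testing for a mode difference on one sensitive item (a sketch, not the authors' analysis; all counts are invented), a two-proportion z-test comparing reporting rates by mode:

    # Does the share reporting a sensitive behavior differ by mode?
    import math

    reported_phone, n_phone = 120, 800       # telephone interviews
    reported_person, n_person = 210, 1100    # in-person (SAQ) interviews

    p1, p2 = reported_phone / n_phone, reported_person / n_person
    p_pool = (reported_phone + reported_person) / (n_phone + n_person)
    se = math.sqrt(p_pool * (1 - p_pool) * (1/n_phone + 1/n_person))
    z = (p1 - p2) / se  # |z| > 1.96 suggests a mode effect at the 5% level

    print(f"phone {p1:.3f} vs in-person {p2:.3f}, z = {z:.2f}")

The actual study additionally controls for demographics and exploits within-person mode switches across rounds.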
Bibliography Citation
Wang, Yongyi and Parvati Krishnamurty. "Interview Mode Effects in NLSY97 Round 4 and Round 5." Presented: Phoenix, AZ, American Association for Public Opinion Research Annual Meeting, May 2004.