Using Administrative Data to

Examine the Efficacy of Welfare-to-Work Programs


Peter Mueser

Sharon Ryan

Melinda Thielbar


Department of Economics

University of Missouri-Columbia


September 1998


Address correspondence to: Peter Mueser/Sharon Ryan, Department of Economics, University of Missouri, Columbia, MO 65211

E-mail: mueser@econ.missouri.edu, ryan@econ.missouri.edu

Using Administrative Data to

Examine the Efficacy of Welfare-to-Work Programs



Peter Mueser, Sharon Ryan, and Melinda Thielbar



Department of Economics

University of Missouri-Columbia



Abstract



This paper investigates the feasibility of measuring the efficacy of programs designed to move welfare recipients into jobs using administrative records on individual program participation linked with state records on individual earnings.

Like many states, Missouri collects quarterly earnings for all individuals covered by its unemployment insurance system. We have linked this information to administrative records listing participants in the FUTURES program, Missouri's welfare-to-work program developed under federal JOBS legislation. We are able to distinguish those forced to participate in the program from those who chose to participate. In addition, since a waiting list was used to allocate individuals to the program during the period of our study, those on the waiting list who were not chosen to receive services may serve as a control group.

Our analysis indicates that volunteers and required participants differ in ways consistent with administrative rules. The process by which individuals were selected from the waiting list also reflected such rules. Selection did not appear to favor those whose characteristics would suggest employment success.

Estimates of program impact on participant earnings underscore the importance of controlling for a variety of characteristics, in particular prior earnings and employment experience, and of distinguishing volunteers from those required to participate. Ordinary least squares estimates of program impact are modest but positive and statistically significant, and they are generally consistent with expectations. Instrumental variables estimates are similar, suggesting that individual differences in willingness to remain on the waiting list are not an important source of bias in estimated impact.

Recent years have seen a flurry of studies evaluating government training programs. In large part, this work has been fed by a substantial body of data based on experimental designs. Earlier attempts to evaluate programs had been criticized because they failed to recognize the critical role that self-selection could play in biasing estimated effects. In contrast, an experimental design, with subjects randomly allocated to receive the program "treatment," can eliminate the bias caused by selection of program participants.(1)

Unfortunately, randomization is often not feasible. One common approach is to find a group with similar characteristics to serve as a control, possibly selecting from a large population using individual demographic characteristics and labor market experience to match those who have participated in the program (Johnson and Stromsdorfer, 1990). Such an approach suffers from the difficulty that even when many individual characteristics are available, those who choose to participate in the program may differ in unmeasured ways from the selected group. LaLonde (1986) concluded that program impact estimates based on nonexperimental comparison groups did not correspond with those based on random assignment designs. Although an extensive literature has attempted to develop models and matching methods that can be used to obtain program impact estimates using nonexperimental data (Bassi, 1984; Ashenfelter and Card, 1985; Heckman and Robb, 1985; Heckman and Hotz, 1989; Heckman, Ichimura and Todd, 1997), no definitive solution has emerged.

Bell et al. (1995) suggest a nonexperimental approach based on using as a comparison group applicants to a program who have been denied access by staff. Such "screened out" individuals differ in systematic ways from those provided access, reflecting staff judgments about the likely success applicants will experience following participation. While the screened out sample is clearly inappropriate as a simple control group, if all individual factors used in the screening process are available to the investigator, including these in the prediction of the outcome can allow consistent estimates of program effect.

The analysis here investigates whether waiting lists used to allocate potential participants to a job readiness and training program may be used to identify the impact of the program. The approach is similar to that of Bell et al. in that it creates an internal control group whose labor market performance is, in effect, compared with program participants. It shares with that approach the important advantage that motivational factors that draw individuals to a program will be similar for those on the waiting list and those who are chosen from the waiting list. On the other hand, whereas Bell et al. had available a rating assigned by staff indicating expected applicant success, we have no such rating in our data. We do, however, have a variety of demographic characteristics, as well as measures of labor market and welfare experience, that would have been available to staff in selecting applicants from the waiting list.

Our use of a comparison group based on a waiting list removes two important sources of bias that have been identified in the literature, those due to geographical mismatch and lack of comparability between data sources (Friedlander and Robins, 1995; Heckman, Ichimura and Todd, 1997). Those on the waiting list who are not offered services reside in the same geographical areas within the state as those chosen from the waiting list, eliminating labor market differences. Also, our comparison group data are obtained from the same sources as our treatment group, eliminating inconsistencies due to variation in survey instruments or data processing. More generally, the vagaries of selection from the waiting list assure that there is substantial overlap in the characteristics of those offered program services and those left on the waiting list. In contrast, in many natural environments, those who do not participate differ dramatically in their characteristics from those in the program, which makes statistical correction for such differences difficult (Heckman, Ichimura, Smith and Todd, 1996).

Our study is unusual in that it includes both those who volunteer to participate in a job readiness and training program and those who are required to participate. Theory suggests that the impact of a job assistance program may differ for these groups. Those AFDC recipients who are required to participate face a reduction in benefits or related sanctions if they fail to undertake specified training activities. If these requirements constitute a burden, some individuals may take actions that remove this requirement, possibly discontinuing both AFDC participation and training activities. The result may be either higher or lower labor market involvement than in the absence of the program, but, in either case, this impact will be due, in part, to incentives created by program demands and sanctions, not the value of training or related services (Friedlander, Greenberg, and Robins, 1997). The impact of the program for volunteers and required participants may also differ because of unmeasured factors tracing to differential selection into the two groups.

It must be recognized that our methods share the shortcomings implicit in most evaluations of training programs. For both voluntary and mandatory participants, the measure of impact we obtain is for the policy that makes the program available or required for certain individuals, that is, we examine the impact of an offer of services. As Heckman, Hohmann, Khoo, and Smith (1997) note, such policy impact is not a clean measure of the efficacy of training or other services a program provides. Some of the individuals who are offered services never actually respond to the offer, and some who initially respond drop out before they receive any meaningful services. Equally important, those who are denied program services may receive similar or related training through other channels. This reduces differences between those who receive an offer of service and those who do not in terms of the services actually received, with the result that our measure of impact will understate the impact of services received. This understatement will be particularly large where there are good available substitutes for services offered by the program.(2)



Data Structure and Basic Definitions

Our focus will be on Missouri's FUTURES program, adopted under federal JOBS legislation, which provides various services designed to aid and encourage employment of individuals receiving AFDC.(3) Although the program first began operation in 1990, it was not available on a statewide basis until 1992. Until the mid 1990s, scarce training slots were allocated based on waiting lists, which were generated through the AFDC program and passed on to FUTURES staff. An AFDC recipient was either exempt (because of a physical inability to work, the presence of young children in the home, or other factors) or mandatory (formally required to participate in FUTURES). Any AFDC recipient could also volunteer to participate in the program. FUTURES staff chose individuals from two lists, a "mandatory" and a "voluntary" list. Those who were mandatory and had volunteered appeared on both lists. When a slot became available, an individual could be chosen from either list, with the criteria for the choice between lists, and among those on a list, being idiosyncratic across various state offices. While the number of individuals on waiting lists, in aggregate, exceeded those in the program during this period, a substantial number of individuals spent less than one month on a waiting list.

Our data are in the form of a file for each month indicating whether an individual was on a waiting list, was receiving FUTURES services, or had been called by program staff and offered FUTURES services but had not yet responded. The file also indicates whether the individual was designated as voluntary, mandatory, or both, during that month. A small proportion of those on our files were receiving services but were not identified as either voluntary or mandatory. For the most part, these were individuals in special programs whose placement into FUTURES differed from that of other participants; our analysis will omit them.

The analysis focuses on women who had a date of entry into the FUTURES system October 1992-September 1995.(4) We coded date of entry of an individual into FUTURES based on month of first appearance on our files, omitting any individual who appeared on the file for September 1992 in order to avoid including those whose contact extended from an earlier period.(5) We coded individuals into three categories on the basis of how they were classified the first time they appeared on our files: voluntary only, mandatory only, or both voluntary and mandatory. Those cases for which the first appearance on the file indicated that they were being offered services were coded as having spent zero time on the relevant waiting list. For those individuals that we first observed on a waiting list (i.e., who had not yet been offered services), we counted time on the waiting list as the number of months they were on that list until they were offered services (meaning they were either called by program staff or were observed receiving services) or they ceased to appear on that waiting list.(6)
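The coding rules just described can be sketched as follows. The file layout, column names, and status codes below are hypothetical, simplified stand-ins for the actual FUTURES monthly files, used only to make the logic concrete.

```python
import pandas as pd

# Hypothetical monthly status file: one row per person per month on file.
# Status codes are illustrative, not the actual FUTURES codes.
records = pd.DataFrame({
    "id":     [1, 1, 1, 2, 3, 3],
    "month":  ["1992-10", "1992-11", "1992-12",
               "1992-10", "1992-11", "1992-12"],
    "status": ["waiting", "waiting", "offered",
               "offered", "waiting", "waiting"],
})

def code_entry_and_wait(df):
    """Date of entry = month of first appearance; time on the waiting list =
    months observed waiting before an offer (zero if the first record is
    already an offer); cases never offered are simply censored."""
    rows = []
    for pid, g in df.sort_values("month").groupby("id"):
        entry = g["month"].iloc[0]
        offered = (g["status"] == "offered").any()
        if g["status"].iloc[0] == "offered":
            wait = 0  # offered services with no time on the waiting list
        else:
            wait = int((g["status"] == "waiting").sum())
        rows.append({"id": pid, "entry": entry,
                     "wait": wait, "offered": offered})
    return pd.DataFrame(rows)

summary = code_entry_and_wait(records)
```

Here person 2 illustrates a zero-wait offer, while person 3 illustrates a censored spell on the list with no observed offer.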

In addition to sex, we have available from FUTURES or AFDC files the standard demographic characteristics: age, education and race. We have information about the family, including the number of children in the household and the age of the youngest child. We also know the number of months the applicant had been on AFDC at the time of first appearance on our file. Finally, we have employer reports of total quarterly earnings for all employees covered by Missouri's unemployment insurance program. For each individual, we have calculated total earnings indicated in these reports in the four quarters prior to the quarter in which the individual appeared on our files. Earnings data are available also for the follow-up period. While these earnings data have the obvious shortcoming of omitting earnings from employment not covered by the unemployment insurance system(7) or earnings from employment outside Missouri, the earnings data are otherwise quite reliable.
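Linking the UI earnings reports to the program file amounts to a merge on person identifier followed by a windowed sum. The sketch below uses hypothetical column names and a simple integer quarter index rather than the actual Missouri record layout.

```python
import pandas as pd

# Hypothetical UI wage records (one row per employee-quarter with covered
# earnings) and program-entry quarters, keyed on a common identifier.
wages = pd.DataFrame({
    "id":       [1, 1, 1, 1, 1, 2],
    "quarter":  [0, 1, 2, 3, 4, 3],     # quarters on a common integer scale
    "earnings": [1000.0, 1200.0, 0.0, 800.0, 2000.0, 500.0],
})
entry = pd.DataFrame({"id": [1, 2], "entry_quarter": [4, 4]})

# Total earnings in the four quarters *before* the entry quarter; people
# with no covered employment in the window get zero, not a missing value.
merged = wages.merge(entry, on="id")
window = merged[(merged["quarter"] >= merged["entry_quarter"] - 4)
                & (merged["quarter"] < merged["entry_quarter"])]
prior_earnings = (window.groupby("id")["earnings"].sum()
                  .reindex(entry["id"], fill_value=0.0))
```

The `reindex` with `fill_value=0.0` matters: individuals absent from the wage records are genuine zeros under the coverage caveats noted in the text, not missing data.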



Voluntary and Mandatory: The Interaction of Bureaucratic Rules and Participant Choice

As noted above, individuals on our files either volunteered to participate in the FUTURES program or were required to participate. At the time of our study, regulations classified AFDC recipients as mandatory FUTURES participants unless they met exemption criteria, the most important being the presence of a child age three or less, or a child under age six if appropriate child care was not available. For both volunteers and mandatory individuals, most spent some time on the waiting list, and only a minority in either group were ultimately offered FUTURES services.

Table 1 provides statistics which allow us to observe how volunteers differ from those who were mandatory. The sample consists of all individuals who volunteered or were first classified as mandatory for FUTURES in the period October 1992-September 1995. The most important difference is in the period of time for which the two groups had been receiving AFDC. Those listed as mandatory were in large part new AFDC applicants. Rows 5 and 6 show that three quarters of those in the mandatory classification had received AFDC for one month or less, while only about a quarter of those listed as voluntary were in this category. In large part, this may reflect how we selected individuals from the FUTURES files. Those who had received AFDC for extended periods may have been classified as mandatory participants prior to October 1992 and were therefore omitted from our study by construction. Among long-time AFDC recipients, only those who experienced a change in circumstances during our study period--e.g., the aging of a child--would appear on our file as mandatory FUTURES participants. In contrast, any new AFDC applicant who failed to meet exemption criteria would be classified as mandatory.

The selection criterion explains why volunteers differ so dramatically in the age of the youngest child. Rows 8 and 9 show that among mandatory participants, fewer than 7 percent had a youngest child under age two, and only 22 percent had a child under age four. On the other hand, among volunteers, 44 percent had a child under age two, while more than three quarters had a child under age four. This points up an interesting contrast between the standards imposed for required participation and the characteristics of those who volunteered: A substantial number of those who were exempt because they had young children chose to volunteer for services. Statistics for those who were required to participate and who also volunteered (right columns) are very similar to those who were required to participate and did not volunteer.

The volunteers were much more likely to be nonwhite(8) than were required participants (row 2). On the other hand, those mandatory participants who also volunteered were less likely to be nonwhite than other mandatory participants, suggesting that it is the application of the mandatory criterion that causes the racial difference between the mandatory and voluntary samples. Nonwhites are likely to have longer spells on AFDC than whites, so that the low proportion of nonwhites among mandatory participants may well reflect the fact that our mandatory sample is largely drawn among new AFDC applicants.

One measure that summarizes important differences between samples is average prior earnings, the sum of earnings in the four quarters prior to appearance on our waiting list or participation in the program, based on employer reports to the Missouri unemployment insurance system. Row 15 shows that volunteers and mandatory participants were about equally likely to have earnings, but row 14 indicates that average earnings for volunteers were only half those of mandatory individuals. This difference, which clearly results primarily from the fact that volunteers are much more likely to have extensive AFDC histories, suggests that volunteers do not necessarily have more promising employment prospects than mandatory individuals.

Finally, row 16 shows that nearly half of all voluntary cases in our file were ultimately offered FUTURES services, whereas fewer than one-fifth of those who were only on the mandatory list were offered services. Apparently, a large number of individuals were formally required to participate in the FUTURES program but remained on the waiting list without ever receiving an offer of service. About a third of the individuals who were required to participate but also volunteered were ultimately offered services.(9)

The differences in the probability of being offered services for those on the voluntary and mandatory lists could be partly due to federal restrictions on the use of FUTURES program funds. According to federal JOBS legislation, in order to be eligible for continued federal funding, 55 percent of FUTURES funds must be spent on AFDC recipients who fall into one of three federal target groups:

1) Households that have received AFDC 36 of the last 60 months.

2) Heads of household under age 24 who lack a high school degree or GED, or who have little labor market experience.

3) Households that will lose AFDC eligibility in two years because the youngest child is 16 years or older.

Given these constraints on program staff, it is not surprising that volunteers in our sample, who are more likely to be long-term AFDC recipients, were also more likely to be chosen for FUTURES.

Federal regulations also require that 20 percent of the FUTURES mandatory population must be served. The actual proportion of the mandatory participants in our study who were offered services was about 23 percent, suggesting that this constraint may have been binding. Conditions of the offer of service are explored more fully in the next section.



Access to Service: The Waiting List and Determinants of Service Offer

Table 2 presents the proportions of individuals in our sample who were offered FUTURES services over the period in question, classified by the length of time on the waiting list. For those who were offered services, length of time on the waiting list varied substantially. About one in five of those who were ultimately offered services received the offer without any time on the waiting list. A substantial portion of those who were offered services waited several months. The fourth column indicates the proportion of individuals who were offered services after waiting more than a year, while the fifth column identifies those who were offered services after some interruption in their time on the waiting list. The sixth column identifies those who did not receive an offer of service during the period of our study. Of those not offered services, some remained on a waiting list for extended periods, while others dropped off after relatively short periods. In tabulations not listed here, we found that the proportion receiving an offer of services was higher in later years of our study, reflecting expansion in the program relative to the population eligible to receive services.

The waiting list serves the function of providing access to the program for some individuals while limiting the access of others. Tables 3 and 4 provide means and standard deviations of several variables, dividing the sample by whether the individual received an offer of service, separately for the voluntary and mandatory populations. There are numerous differences between those receiving services and those not, but, in general, they are smaller than differences between volunteers and mandatory participants. For both voluntary and mandatory cases, those receiving an offer of service have, on average, spent longer on AFDC and have lower prior earnings.

Observed differences reflect both self-selection by individuals and selection off the waiting list by agency employees. Among those who were not selected immediately to receive services, a substantial portion left the waiting list after some period of time. For both volunteers and mandatory individuals, the most common reason to drop off the waiting list was leaving AFDC. Mandatory individuals could also leave the list if their status changed so that they qualified for an exemption (e.g., birth of a child), or if they got part-time jobs. Since those who remained longer on the waiting list had a greater chance of being offered services, those offered services are likely to differ for reasons of self-selection.

In order to identify the impact of agency selection, as distinct from individual self-selection, we have run analyses that predict selection from the waiting list, controlling for time on the waiting list at that point. As noted above, the process by which individuals were chosen from the waiting list was not fully specified, and undoubtedly varied across local offices. A simple "first-come, first-served" standard was frequently applied, but there was substantial uncertainty about how an initial ordering was established, especially among individuals placed on the waiting list in a given month. Much of the variation in chance of selection from the waiting list was due to differences in the number of available slots, relative to the demand, across offices.

Tables 5 and 6 present coefficients for logistic regressions that show how selection from the waiting list occurred for volunteers and mandatory participants. Coefficient estimates suggest that FUTURES program staff selected heavily on individual characteristics for the first and second months on the waiting list but less for those who had been on the waiting list for longer periods. It appears that FUTURES workers were looking for certain criteria for first selection into the program and, after the pool of individuals with those characteristics had been exhausted, chose more or less at random.

The voluntary list shows this pattern most strongly. As the reader can see from Table 5, region within the state and year of entry are statistically significant through all four initial months on the waiting list. This is not surprising since program resources relative to the AFDC population are different across the state. However, the likelihood of being chosen as a volunteer without spending any time on a waiting list is heavily dependent on individual characteristics such as the age of the youngest child, education, and age of the AFDC recipient. Given that young AFDC recipients with little labor market experience fall into one of the three federal target groups, it is not surprising that age and education would be considered when admitting applicants. This is also true for length of AFDC receipt. Another characteristic that significantly affects the likelihood of being chosen into FUTURES is school attendance or involvement in a training program upon entering AFDC (In School). One possibility is that these recipients may only apply for FUTURES supportive services or for FUTURES to pay tuition or other training-related costs. If this is true, these recipients would be among the easiest and least expensive to serve, and an AFDC or FUTURES caseworker would want to make an effort to encourage a self-initiated effort at training.(10)

After one month on the voluntary waiting list, the only characteristics (other than region of service) that are consistently significant are the measures of time spent on AFDC prior to volunteering. This is probably because FUTURES staff continued to look at these characteristics in order to meet federal guidelines.

Examination of the pseudo-R2 for the selection from the mandatory list (Table 6) suggests that, especially in the early months on the list, selection is much better predicted than is selection from the voluntary list (Table 5). In addition, the probability of selection appears less dependent on region within the state and more dependent on individual characteristics, even for longer periods of waiting, than is selection from the voluntary list. The difference between mandatory and voluntary selection may be due, in part, to FUTURES workers being more selective when they must "force" unwilling participants into job training programs because of the federal requirement that FUTURES serve 20 percent of the mandatory population. Those characteristics that are most consistently significant are race and age of the youngest child.
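A minimal version of the selection logits can be sketched with simulated data. The single "long AFDC spell" regressor and its coefficient below are invented for illustration and are not taken from Tables 5 and 6; the point is only to show the estimator and the McFadden pseudo-R2 reported there.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton-Raphson maximum likelihood for a logistic regression;
    X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

def mcfadden_r2(X, y, beta):
    """McFadden's pseudo-R2: 1 - logL(model) / logL(intercept only)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    p0 = y.mean()
    ll0 = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    return 1.0 - ll / ll0

# Simulated selection: the chance of an offer rises with a long-spell dummy,
# mirroring the heavier service offers to long-term AFDC recipients.
rng = np.random.default_rng(0)
n = 2000
long_afdc = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), long_afdc])
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * long_afdc)))
y = (rng.random(n) < p_true).astype(float)

beta = fit_logit(X, y)
r2 = mcfadden_r2(X, y, beta)
```

With enough observations the fitted slope recovers the simulated value, and the pseudo-R2 behaves as the goodness-of-fit measure used in Tables 5 and 6.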



Program Impact: Ordinary Least Squares Regression Estimates

The outcome measure that we will consider is annualized earnings in the first half of 1997. Since the period of our study is from October 1992 through September 1995, those offered services would have had from one to four years to complete the program. The simplest approach to examining the impact of the FUTURES program would compare earnings of those who were offered services with those who were not. The first regression reported in Table 7 provides estimates based on this comparison. Dummy variables identify individuals who were on the mandatory list, and those on both mandatory and voluntary lists; the omitted category is volunteers. Three dummy variables indicate whether the individual received an offer of services for each waiting list category.

The intercept indicates that the average earnings for volunteers who were not mandatory and received no offer of services was $2474. Coefficients for the dummies identifying other listings indicate that individuals who were on the mandatory waiting list or on both lists and were not offered services received slightly higher earnings than did volunteers, but the differences are not statistically significant. Volunteers who received an offer of services obtained earnings that were $144 higher than those who did not, a difference that is statistically significant. Coefficients for the variables indicating an offer of service for the other waiting list categories show that those who were required to participate in FUTURES earned $149 less if they were offered services, which is also statistically significant. Those who were on both lists earned $127 more if they were offered services, but this difference is not statistically significant.

These coefficients combine program impact with any differences that are due to composition effects associated with receipt of an offer of services. In the remaining equations presented in Table 7, we examine the impact of controlling for additional factors. One obvious question is whether some of the observed differences in outcomes are due to geographic variation in labor market opportunities that are associated with an offer of services. In the second equation, we have controlled for the seven service regions, and, within these, we have identified additional geographic divisions, distinguishing county groups by population density and similarity of socioeconomic structure. Altogether, the state is divided into 14 regions, identified by 13 dummies. These controls for region alter estimates of program impact substantially. Volunteers who are also required to participate now display a positive and significant effect of a service offer, while the estimated negative impact of an offer on mandatory participants is no longer statistically significant.

We anticipate that earnings will very likely differ according to the year in which individuals appeared on the waiting list. By design, individuals are receiving AFDC when they appear on the waiting list, and, for many, this identifies a period of temporary economic distress. With time, many of these individuals would obtain employment in the natural course of events. This means that a greater gap between first appearance on our file and post-program measured earnings will be associated with higher average earnings. Since the chance of selection into the program differs by year, controls for year are necessary to determine program impact. Equation (3) introduces controls for year of entry (1992 is the omitted category). As expected, those who entered the list in more recent years have substantially lower wages in 1997. Estimates of program effect change very little, however.

The fourth equation presented in Table 7 adds controls for age, race and educational attainment. These measures have estimated coefficients that are as expected. The program impact for volunteers declines by about a quarter, while other changes in estimates of program impact are smaller. The next estimation equation controls for number of children in the household, the age of the youngest child, and the marital circumstance of the individual at the time she first began receiving AFDC. As expected, those with more children and younger children earn less, and those who were coded as never having lived with the father of their children (in contrast to those whose marriages dissolved) earn less. Changes in estimates of program impact are minimal.

Equation (6) adds controls for time on AFDC as well as measures of labor market experience. Those with no time or one month on AFDC appear to have lower earnings than those with greater time on AFDC. Although additional months on AFDC are associated with a small decline in earnings, the estimated impact is not statistically significant. The "work experience" variable indicates the number of months the individual worked in the year prior to beginning AFDC, while a dummy variable identifies the large proportion who did not work at all. As expected, more work is associated with higher post-program earnings. Also included is reported earnings in the year prior to entry onto the list, which is entered as a quadratic and as a dummy variable identifying those with no earnings. The measures of prior earnings clearly capture factors that are critical in predicting post-program earnings.

These labor market controls increase estimated program impacts substantially. In equation (6), the effect of an offer of service is statistically significant in all cases, varying from $120 for mandatory participants to over $300 for mandatory participants who also volunteered. Changes in estimates reflect the fact that those offered services generally had less labor market experience and lower earnings than those who were not offered services. In large part, this is due to the agency selection by time on AFDC: Offers of service were much more likely to go to long-term AFDC recipients, whose labor market histories were less promising.
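The role of the prior-earnings controls can be illustrated with a simulated version of equation (6). All data-generating numbers below are invented; only the direction of the composition effect, with offers going disproportionately to those with weak earnings histories, mirrors the text.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via the normal equations; X includes an intercept."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Simulated annualized earnings with a true offer effect of $200, where
# offers disproportionately go to people with low prior earnings.
rng = np.random.default_rng(1)
n = 5000
prior = rng.exponential(2000.0, n)
offer = (rng.random(n) < 1.0 / (1.0 + prior / 2000.0)).astype(float)
y = 2500.0 + 200.0 * offer + 0.5 * prior + rng.normal(0.0, 1000.0, n)

b_raw = ols(np.column_stack([np.ones(n), offer]), y)
b_ctl = ols(np.column_stack([np.ones(n), offer, prior]), y)
# b_raw[1] understates the effect; b_ctl[1] recovers it once prior
# earnings, which are negatively associated with an offer, are controlled.
```

The uncontrolled coefficient is pulled well below the true effect, and adding the prior-earnings regressor moves it up, which is the same direction of change the labor market controls produce in equation (6).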

Tests for interactions in the impacts of independent variables. The analysis of Table 7 has assumed that the control variables have similar effects on volunteers and mandatory participants, and that controls have the same impacts for every year and across all regions of the state. Table 8 presents estimation equations that include interactions to examine whether estimates of program impact are sensitive to these assumptions.

Equation (2) enters interactions that allow every variable controlled in the previous analysis to have different impacts for the three waiting list classifications. An F-test shows that the additional 72 variables improve the fit significantly, but changes in measures of program effect are modest. Compared to equation (1), which repeats the final equation in the previous table, we see that the estimated impact of the program for voluntary recipients increases by almost 20 percent, while impacts for the other waiting list categories decline by less than 10 percent. Program effects are essentially the same in equation (3), which presents estimates for an equation which controls only the subset of interactions that are statistically significant.

Equation (4) presents program effect estimates controlling for interactions involving the seven main service delivery regions with other variables in the equation, while equation (5) controls only those interactions among these that are statistically significant. Changes in estimated effects due to inclusion of these interactions are small.

Finally, equations (6) and (7) include interactions involving dummies for year of entry into the program. We might anticipate that measured variables would have different effects depending on year of entry, especially given that the time between variable measurement and post-program earnings differs by year. While some interaction effects are, in fact, statistically significant, estimates of program impact are not substantially affected.

In sum, it appears that interaction effects involving independent variables have little substantive impact on estimates of program impact. None of the estimates in equations that control for interactions differs by as much as a standard error from those that ignore such effects.



Program Impact: Instrumental Variables

Although we have controlled for all measured characteristics that are available, if individuals who are offered services differ in systematic but unmeasured ways that would help to predict their ultimate earnings, estimates based on ordinary least squares will be inconsistent. As we have noted, one advantage of our approach is that the administrative selection and self-selection implicit in the voluntary/mandatory classification are controlled in our analysis. However, the offer of service clearly is not random within waiting list classification. The likelihood of ultimately receiving an offer of service can be traced to two types of factors.

The first is the selection made by FUTURES staff from waiting lists. Our analysis above shows that this selection is strongly associated with several measured characteristics, reflecting various constraints imposed by federal regulations. The second source of variation is an individual's choice of how long to remain on the waiting list. Most of those who appear on our files but were not offered services dropped off the waiting list after relatively short periods of time, generally just a few months. Those who remained on the waiting list for longer periods would be more likely to receive services.

While both of these sources of variation in the likelihood of an offer of service will cause difficulties for estimation, we hypothesize that staff choices are less tied to unmeasured personal characteristics. If this is the case, selection from the waiting list may usefully serve as an instrument for program participation. Heckman (1997) has argued that, in the presence of heterogeneity in the benefits individuals anticipate from participation, instrumental variable methods may not work, even where the instrument appears exogenous. In our case, however, we show that using a measure based on the waiting list as an instrumental variable does provide meaningful estimates if it is independent of unmeasured factors predicting post-program earnings.

For individual i specify that Zi is a binary variable indicating whether the individual is chosen from the waiting list after spending t months on the list. Condition the analysis on the individual being available to be selected, meaning that she has been on the waiting list at least t months. Assume Zi is independent of individuals' unmeasured characteristics. Among those not chosen (Zi=0), some ultimately obtain an offer of services, while others do not. Although a delay in treatment could have an effect on its impact, in our sample the additional wait is usually just a few months. We therefore make the assumption that the expected outcome is the same whether the individual is selected to be offered treatment at time t or after a more extended time on the waiting list.

Given these assumptions, it is possible to show that the treatment effect for individual i with characteristics X, who is offered services at time t but would not otherwise be offered services, can be written as:(11)

Δ(X,t) = [E(Yi | Zi=1, Xi=X) - E(Yi | Zi=0, Xi=X)] / [1 - Pr(offered services | Zi=0, Xi=X)]

where the numerator indicates the difference in expected earnings between those selected to be offered services at time t and those who are not, and the denominator is the difference between the chance of being offered services for those offered services at time t (i.e., unity) and the chance of being offered services for those who are not offered services at time t. This is the population analog to the standard instrumental variables estimator. So long as Zi is independent of unmeasured factors influencing the final outcome, it can be used as an instrument for the offer of service.
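The ratio described above has a direct sample analog: divide the difference in mean earnings between those selected at time t and those not selected by the corresponding difference in the probability of ever receiving an offer. A sketch with simulated data, in which the 30 percent later-offer rate and the $200 effect are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# z = 1 if selected from the waiting list at time t (assumed as good as
# random given measured characteristics); values below are hypothetical.
z = rng.integers(0, 2, n)
k = 0.30   # share of the non-selected who are ultimately offered services
offered = np.where(z == 1, 1, (rng.random(n) < k).astype(int))

# Later earnings, with a built-in $200 effect of receiving an offer.
y = 2000.0 + 200.0 * offered + rng.normal(0.0, 800.0, n)

num = y[z == 1].mean() - y[z == 0].mean()              # earnings difference
den = offered[z == 1].mean() - offered[z == 0].mean()  # roughly 1 - k
late = num / den   # effect for those offered only because selected at t
```

The ratio recovers the effect for the marginal group, those who receive an offer only because they were selected at time t, which is the local average treatment effect discussed below.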

The estimate described above is what Imbens and Angrist (1994) label a local average treatment effect (LATE). It must be stressed that this estimate of program impact is for individuals who received an offer of service because they were selected at time t but who would not otherwise have received an offer. The estimate does not identify the program effect for the inframarginal participant, one who would have received an offer of service after time t, reflecting, in part, willingness to remain on the waiting list additional periods.

It is clear that our data provide multiple waiting list variables, each of which can be used to obtain a conceptually distinct measure of program impact, corresponding to populations spending various times on the waiting list. Table 9 presents estimates of program impact using this method. The equation used to obtain estimates reported in each column corresponds to the final estimation equation reported in Table 7, but with the three variables indicating an offer of service treated as endogenous. Variables indicating selection from the waiting list at the specified time serve as instruments that just identify the estimated impact. For example, in the first equation, the three variables that identify selection without any time on the waiting list (for volunteers, mandatory participants, and mandatory participants who volunteered) are the instrumental variables that identify program impacts.
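The just-identified estimation described here can be carried out as standard two-stage least squares. The sketch below simplifies to a single endogenous offer variable and one selection instrument rather than the paper's three of each; all names, magnitudes, and the built-in $100 effect are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000

x = rng.normal(size=n)                     # a measured characteristic
u = rng.normal(size=n)                     # unmeasured earnings potential
z = rng.integers(0, 2, n).astype(float)    # selection from the waiting list
# The offer depends on the instrument and, problematically, on u.
offer = ((0.8 * z + 0.5 * u + rng.normal(size=n)) > 0.6).astype(float)
# Hypothetical $100 offer effect; u raises earnings directly as well.
y = 100.0 * offer + 50.0 * x + 300.0 * u + rng.normal(0.0, 100.0, n)

W = np.column_stack([np.ones(n), x])       # exogenous controls
Z = np.column_stack([W, z])                # instruments (just identified)
X = np.column_stack([W, offer])            # regressors, offer endogenous

# 2SLS: replace the endogenous column with its projection on Z, then OLS.
Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_iv = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
# beta_ols[2] overstates the effect; beta_iv[2] is near the built-in 100.
```

Because selection (z) shifts the offer but is independent of the unmeasured factor, the 2SLS coefficient on the offer is consistent while the OLS coefficient absorbs the correlation between offers and unmeasured earnings potential.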

Each column therefore presents program impact estimated for a different subpopulation. In contrast to ordinary least squares estimates, which, in effect, draw on a comparison of all individuals who are offered FUTURES services with observationally similar individuals who are not, each of these estimates is based on the small number of individuals selected from the waiting list at a given point in time who would not otherwise have been offered services.

For volunteers, the estimated program impact based on these estimates is positive in each case, and is larger than the comparable ordinary least squares estimate in four out of the five cases. It is statistically significant in only one case, although it approaches statistical significance in another. For mandatory participants, the estimated impact is also generally larger than that of the ordinary least squares estimate, but again is only significant in one case. A similar generalization applies for the third group, mandatory participants who also volunteered, although for these individuals, one of the estimates is much larger than the ordinary least squares estimates, and one of the estimates is negative although small in absolute value. While differences between estimates reported in the five columns are substantial, they are clearly dwarfed by sampling error.(12)

While it is clear that sampling error precludes any strong inferences, we may conclude that these estimates provide little basis to infer that the ordinary least squares estimates are seriously misleading. If the instruments we have constructed are appropriate--meaning that selection from the waiting list is independent of unmeasured factors that influence earnings--these estimates support the conclusion that those who receive a service offer experience a modest increment in their post-program earnings.



Comparison with Experimental Estimates of Program Impact

Beginning in the 1980s, federal waivers allowed states to design more complex welfare-work programs. Federal rules, however, generally required an evaluation protocol as well, resulting in a number of evaluations involving random assignment of a control group. Data from these experiments have been evaluated by a large number of researchers during the past twenty years. A review of this literature shows that our results are in line with what other researchers have found using experimental methods, supporting the view that our method of choosing a comparison group from those on the waiting list for FUTURES services is successful.

Most evaluations find that program participants experience modest but statistically significant gains in employment (see for example Bane and Ellwood, 1994; Blank, 1997; Couch, 1992; Gueron, 1996; Friedlander and Hamilton, 1996; and Friedlander and Robins, 1995). As Blank (1994) reports, "Results (from program evaluations involving random assignment) showed clear positive results in terms of employment gains for program participants. The cost-benefit evaluations indicated that states more than covered costs of the programs through lower AFDC costs" (p. 186). In general, the modest increases in annual earnings were due to increased hours worked by program participants rather than to any increase in wage rate. The impact on wage rates has been found to be negative as often as positive (O'Neill and O'Neill, 1997; Couch, 1992).

For the purposes of comparing our results with prior studies, we focus on estimates of annual earnings gains. In each case, we report estimates based on studies employing experimental designs. In all the studies, many of those in the treatment group did not receive services, often because they dropped out early in the program. Hence, these estimates, like ours, identify the impact of an initial offer of program services. Since the distinction between mandatory and voluntary participation may be important, we consider estimates for the two kinds of programs separately.

Gueron and Pauly (1991) review evaluations of a variety of mandatory programs which were undertaken as demonstrations in the 1980s. If we examine the seven "broad-coverage" programs which provided services similar to Missouri FUTURES, we find average annual gains in earnings, adjusted for inflation, ranging from $15 to $738, with a mean of $396. However, four of these programs provided less intensive services than the FUTURES program, primarily aid in job search. If we consider studies of the three programs that provided both job search assistance and more extensive training, the range of effects is from $317 to $738, with a mean of $597.

Since these are demonstration programs, we might expect impacts to differ from those of the federally mandated programs set up under JOBS legislation. Friedlander, Greenberg and Robins (1997) report on four studies which obtained experimental estimates of JOBS program effects. Impacts on annual earnings range from $91 to $1183, with a mean of $459. Our estimate, indicating a program effect of $120 for mandatory FUTURES participants, is somewhat lower than the average of the experimental studies, although it is in the range of observed estimates.

We know of no experimental evaluation that focuses on voluntary participants in JOBS programs, but evaluations have been done of several voluntary demonstration programs designed to move AFDC participants into the labor market. Friedlander, Greenberg and Robins (1997) report on four programs with estimated impacts on annual earnings varying from $215 to $3873, with a mean effect across the four programs of $1372. These numbers are generally higher than our estimates, which are $152 for FUTURES volunteers who were not mandatory and $301 for those on both the mandatory and voluntary list. Each of these programs differed in structure, and none offered the same combination of services as FUTURES. Based on cost per participant, three of these programs were substantially more expensive than FUTURES, suggesting that the greater impacts may reflect more intensive services.



Conclusion

The analysis here attempts to determine whether administrative data on participants in Missouri's welfare-to-work program, linked with individual earnings data provided by employers, can be used to estimate program impact. In addition to providing information on individuals who were offered services, the data include information on individuals who were on waiting lists but who never received an offer of service. Since those receiving an offer and those not receiving an offer are similar in important respects--both comprise AFDC recipients who were placed on the list in anticipation of participation in the program--it is natural to consider program impact by comparing these groups. The data also distinguish individuals by whether they volunteered to participate in the program or whether they were required to participate, allowing examination of the role this distinction may play.

We have first focused on the process by which individuals became identified as volunteers or required participants, and how they were selected from the waiting list to receive an offer of service. Our analysis shows that differences in individual characteristics observed across these groups are explicable in terms of the constraints faced by program administrators. For example, long-term AFDC recipients were much more likely to be chosen to receive services, consistent with federal requirements that such individuals be served. Overall, the selection criteria did not appear to favor those whose characteristics would suggest employment success.

Estimates of program impact underscore the importance of controlling for a variety of characteristics, but there is no evidence that interactions among year, waiting list classification, or region substantially influence estimated impact. Ordinary least squares estimates of program impact were modest but positive, and broadly consistent with impact estimates in studies of similar programs based on experimental design. Instrumental variables estimates were similar, suggesting that individual differences in willingness to remain on the waiting list were not an important source of bias in estimated impact.

In summary, our analysis shows that where administrative data include waiting list information, it is feasible to develop estimates of the impact of welfare-to-work programs.

References





Ashenfelter, Orley and Card, David. "Using the Longitudinal Structure of Earnings to Estimate the Effect of Training Programs." Review of Economics and Statistics, November 1985, 67(4), pp. 648-60.



Bane, Mary Jo and Ellwood, David T. Welfare Realities: From rhetoric to reform. Cambridge, Mass.: Harvard University Press, 1994.



Bassi, Laurie J. "Estimating the Effect of Training Programs With Non-Random Selection." Review of Economics and Statistics, February 1984, 66(1), pp. 36-43.



Bell, Stephen H.; Orr, Larry L.; Blomquist, John D. and Cain, Glen G. Program applicants as a comparison group in evaluating training programs. Kalamazoo, Mich.: W.E. Upjohn Institute, 1995.



Blank, Rebecca M. "The Employment Strategy: Public Policies to Increase Work and Earnings," in Sheldon H. Danziger, Gary D. Sandefur and Daniel H. Weinberg, eds., Confronting poverty: Prescriptions for change. New York: Russell Sage Foundation, 1994, pp. 168-204.



Blank, Rebecca M. It takes a nation: A new agenda for fighting poverty. New York: Russell Sage and Princeton University Press, 1997.



Bloom, Howard S.; Orr, Larry L.; Bell, Stephen H.; Cave, George; Doolittle, Fred; Lin, Winston and Bos, Johannes M. "The Benefits and Costs of JTPA Title II-A Programs: Key Findings from the National Job Training Partnership Act Study." Journal of Human Resources, Summer 1997, 32(3), pp. 549-76.



Couch, Kenneth A. "New Evidence On the Long-Term Effects of Employment Training Programs." Journal of Labor Economics, October 1992, 10(4), pp. 380-88.



Friedlander, Daniel and Hamilton, Gayle. "The Impact of a Continuous Participation Obligation in a Welfare Employment Program." Journal of Human Resources, Fall 1996, 31(4), pp. 734-56.



Friedlander, Daniel and Robins, Philip K. "Evaluating Program Evaluations: New Evidence On Commonly Used Nonexperimental Methods." American Economic Review, September 1995, 85(4), pp. 923-37.



Friedlander, Daniel; Greenberg, David H.; Robins, Philip K. and Caldwell, Bruce. "Evaluating Government Training Programs for the Economically Disadvantaged." Journal of Economic Literature, December 1997, 35(4), pp. 1809-55.



Gueron, Judith M. "A Research Context for Welfare Reform." Journal of Policy Analysis and Management, Fall 1996, 15(4), pp. 547-61.



Gueron, Judith M. and Pauly, Edward. From welfare to work. New York: Russell Sage, 1991.



Heckman, James J. "Instrumental Variables: A Study of Implicit Behavioral Assumptions in One Widely Used Estimator." Journal of Human Resources, Summer 1997, 32(3), pp. 441-62.



Heckman, James J. and Hotz, V. Joseph. "Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs." Journal of the American Statistical Association, December 1989, 84, pp. 862-74.



Heckman, James J.; Ichimura, Hidehiko and Todd, Petra E. "Matching As an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme." Review of Economic Studies, 1997, 64, pp. 605-54.



Heckman, James J. and Robb, Richard Jr. "Alternative Methods for Evaluating the Impact of Interventions: An Overview." Journal of Econometrics, October/November 1985, 30(1/2), pp. 239-67.



Heckman, James J.; Ichimura, Hidehiko; Smith, Jeffrey and Todd, Petra. "Sources of Selection Bias in Evaluating Social Programs: An Interpretation of Conventional Measures and Evidence On the Effectiveness of Matching As a Program Evaluation Method." Proceedings of the National Academy of Sciences, November 1996, 93, pp. 13416-20.



Heckman, James J.; Hohmann, Neil; Khoo, Michael and Smith, Jeffrey. "Substitution and Drop Out Bias in Social Experiments: A Study of an Influential Experiment." Unpublished paper, 1997.



Imbens, Guido and Angrist, Joshua. "Identification and Estimation of Local Average Treatment Effects." Econometrica, March 1994, 62(2), pp. 467-75.



Johnson, Terry R. and Stromsdorfer, Ernest. "Evaluating Net Program Impact," in Ann Bonar Blalock, ed., Evaluating social programs at the state and local level: The JTPA evaluation design project. Kalamazoo, Mich.: W.E. Upjohn Institute, 1990.



Koon, Richard L. Welfare reform: Helping the least fortunate become less dependent. New York: Garland, 1997.



LaLonde, Robert J. "Evaluating the Econometric Evaluations of Training Programs With Experimental Data." American Economic Review, September 1986, 76(4), pp. 604-20.



O'Neill, Dave M. and O'Neill, June Ellenoff. Lessons for welfare reform: An analysis of the AFDC caseload and past welfare-to-work programs. Kalamazoo, Mich.: W.E. Upjohn Institute, 1997.

Appendix: Using Selection as an Instrument

We show here that our measure of selection from the waiting list can be used as an instrument if it is independent of unmeasured factors influencing post-program earnings. The estimate we obtain has been called a local average treatment effect (LATE) by Imbens and Angrist (1994).

Specify that the outcome of interest is a function of program participation and individual characteristics, where we allow the impact of characteristics to depend on participation:

Yi = (1 - Pi)(Xiβ0 + Ui0) + Pi(Xiβ1 + Ui1),

where β0 and β1 are coefficient vectors. Yi is the relevant outcome for individual i (earnings in our study); Xi is a vector of measured characteristics; Pi is a binary variable equal to zero if the individual does not receive the treatment and one if she does; and Ui0 and Ui1 are independent errors, capturing, in part, unmeasured individual characteristics. Receipt of treatment can be viewed as a function of individual measured characteristics and unmeasured factors, as well as the vagaries of the waiting list:

Pi = P(Xi, Zi, ei),

Zi is equal to zero if an individual is not chosen for receipt of services at a particular time, and is equal to one if the individual is chosen. ei is an error term that is person-specific.(13)

The measure of obvious interest is

Δ(X) = E(Yi1 - Yi0 | Pi=1, Xi=X),

where Yi1 and Yi0 denote the outcomes individual i would obtain with and without the treatment. Here Pi=1 implies that the individual chooses to participate. The expression identifies the value of the program for those who participate. In effect, it requires, at least implicitly, a measure of the earnings that would have been obtained by participants had they been precluded from participation. An instrumental variables solution, using Zi as an instrument for Pi, can estimate this difference as long as Zi is independent of Ui0 and Ui1, and Pi is a function of Zi.

Heckman (1997) has argued that terms used as instruments seldom satisfy these formal requirements, even when it might appear that they are fully exogenous. For example, he argues that use of the draft lottery as an instrument to predict military service produces inconsistent estimates of the effects of military service on later earnings. The reason is that those who decide to join the military despite little risk of draft (i.e., those with high numbers in the lottery) will do so because their expected benefits from military service are larger than their observed characteristics would imply, inducing a dependence between risk of being drafted and expected gain among those who enter the military. This difficulty would appear to preclude the use of most instruments, even those that appear exogenous; the exception would be the case where a proportion of all applicants were randomly denied access to the program.

Our waiting list variable is subject to Heckman's criticism. Our maintained assumption is that the decision at one point in time to offer services to an individual is essentially random, conditional on measured individual characteristics. Those who are not offered the service at such a point, however, may remain on the waiting list, and many ultimately do receive an offer of service. Among this latter group, those who are offered services will be those who were most willing to remain on the waiting list, a group for whom the service is most valuable. This implies that Zi will not be independent of Ui1-Ui0, suggesting that Zi is not a good instrument. We show next that this conclusion is somewhat misleading. The waiting list variable can be used to identify effects of the program we examine, where those effects must be understood to apply to a specific group, not all those offered services.

Let us define the waiting list variable we use in greater detail. For individual i, Zi is a binary variable indicating whether the individual is chosen from the waiting list after spending t months on the list. Condition the analysis on the individual being available to be selected, meaning that she has been on the waiting list at least t months. Among those not chosen, some ultimately obtain services, while others do not. These two groups are defined as:

C1(X): individuals with characteristics X who are not chosen at time t (Zi=0) but who ultimately receive an offer of services;
C0(X): individuals with characteristics X who are not chosen at time t (Zi=0) and who never receive an offer of services.

The population of individuals chosen at time t to receive an offer of services can also be divided into two groups, where group membership is defined by whether they would ultimately have been offered services if they had not been chosen at time t. Let us designate these groups as:

T1(X): individuals with characteristics X who are chosen at time t (Zi=1) and who would ultimately have been offered services even if not chosen at time t;
T0(X): individuals with characteristics X who are chosen at time t (Zi=1) but who would not otherwise have been offered services.

Let us define the expected outcome measures, taken to be earnings at a later date,

YC1(X) = E(Yi | i in C1(X)), YC0(X) = E(Yi | i in C0(X)),
YT1(X) = E(Yi | i in T1(X)), YT0(X) = E(Yi | i in T0(X)).

If Zi is independent of individuals' unmeasured characteristics then C1(X) and T1(X) display the same conditional population distribution for Ui0, and they display the same conditional population distribution for Ui1. They also identify individuals who are offered the same set of services, the treatment group receiving the offer immediately and the control group after some wait. Although a delay in treatment could have an effect on its impact, in our sample the additional wait is usually just a few months. We therefore make the assumption that the expected outcome is the same whether the individual is selected to be offered treatment at time t or after a more extended time on the waiting list, meaning that YC1(X)=YT1(X).

Designate k(X) as the proportion of individuals in the population with characteristic X who would be offered the service if no offers were made at time t. If we take the expected difference between the group offered services at time t, and those who are not, we can write

E(Yi | Zi=1, Xi=X) - E(Yi | Zi=0, Xi=X) = [k(X)YT1(X) + (1-k(X))YT0(X)] - [k(X)YC1(X) + (1-k(X))YC0(X)] = (1-k(X))[YT0(X) - YC0(X)],

which we can write as:

YT0(X) - YC0(X) = [E(Yi | Zi=1, Xi=X) - E(Yi | Zi=0, Xi=X)] / (1 - k(X)).

This is the population analog to the standard instrumental variables estimator. In short, so long as Zi is independent of unmeasured factors influencing the final outcome, it can be used as an instrument for the program.
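The mixing argument above can be checked numerically: the difference in expected outcomes between those chosen at time t and those not chosen is a k(X)-weighted combination of the group means, and dividing by (1 - k(X)) recovers the effect for those who would not otherwise have been offered services. The values below are hypothetical illustrations.

```python
# All numbers are hypothetical illustrations of the identity in the text.
k = 0.3          # share offered services even if not chosen at time t
y_t1 = 2600.0    # chosen at t, would have been offered later anyway
y_c1 = 2600.0    # not chosen at t, ultimately offered (same expectation)
y_t0 = 2400.0    # offered only because chosen at time t
y_c0 = 2200.0    # never offered services

# Each arm of the comparison mixes the two groups with weight k.
diff = (k * y_t1 + (1 - k) * y_t0) - (k * y_c1 + (1 - k) * y_c0)
effect = diff / (1 - k)   # recovers y_t0 - y_c0, the $200 marginal effect
```

Because the ultimately-offered groups on the two sides have equal expected outcomes, their contributions cancel, and only the marginal group's difference survives.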





Data Appendix: Variable Definitions



Age Age in years, rounded to the nearest whole number, at the time of entry onto waiting list or receipt of service.
Race (nonwhite) 0 if coded as "white";

1 otherwise.

Years of Education Highest grade completed at start of current AFDC spell.
High School Grad 1 if Years of Education is greater than or equal to 12;

0 otherwise.

Years of College Years of Education - 12 if Years of Education >12;

0 otherwise.

In School 1 if attending school or other training at start of current AFDC spell;

0 otherwise.

Months on AFDC Months between start of current AFDC spell and entry onto waiting list, in whole months.
Zero Months 1 if no months on AFDC;

0 if one or more.

One Month 1 if one month on AFDC;

0 otherwise.

Number of Children Number of children in household at time of entry onto waiting list.
Age of Youngest Child Age in years of youngest child in household, rounded to the nearest whole number, at time of entry onto waiting list.
Age<2 1 if Age of Youngest Child coded 0 or 1;

0 otherwise.

Age 2-3 1 if Age of Youngest Child coded 2 or 3;

0 otherwise.

Age 4-6 1 if Age of Youngest Child coded 4, 5 or 6;

0 otherwise.

Father Never in Home Reason for starting current spell on AFDC indicated as "father never lived in home." (Primary alternative reasons: divorced, separated, lost job.)
Work Experience Prior to AFDC Months employed during year prior to start of current AFDC spell, coded to the nearest whole number.
No Work 1 if Work Experience Prior to AFDC is 0;

0 otherwise.

Received Offer of Service Received an offer of service during the study period, identified by a code indicating that individual had been called by a FUTURES staff member but had not yet responded, or that they had received services.
Earnings in Previous Year Earnings in the four quarters prior to the quarter of entry onto the waiting list, based on Missouri quarterly earnings data.
No Earnings 1 if Earnings in Previous Year is zero;

0 otherwise.

Future Earnings Annualized earnings, measured in dollars, for the first half of 1997, based on Missouri quarterly earnings data.
Region 1 31 counties in northwest Missouri, adjoining but not including Jackson County (Kansas City). Includes one metropolitan area (St. Joseph.)
Subregion 1A Counties adjacent to Jackson County, and the county defining the St. Joseph metropolitan area.
Subregion 1B Remainder of Region 1
Region 2 25 counties in northeastern and central Missouri, adjoining but not including St. Louis County and St. Louis City. Includes one metropolitan area (Columbia).
Subregion 2A Counties adjacent to St. Louis City or St. Louis County.
Subregion 2B 3 largely urban counties in central Missouri, including the cities of Columbia and Jefferson City.
Subregion 2C Remainder of Region 2.
Region 3 29 counties in southeast Missouri. Includes no metropolitan areas.
Subregion 3A 5 counties in Missouri's "bootheel." This area contains relatively dense rural settlement.
Subregion 3B Remainder of Region 3.
Region 4 33 counties in southwest Missouri. Includes two metropolitan areas (Springfield, Joplin).
Subregion 4A County defining the Springfield metropolitan area.
Subregion 4B County defining the Joplin metropolitan area.
Subregion 4C 8 counties in the southern part of Region 4.
Subregion 4D Remainder of Region 4.
Region 5 Jackson County. Includes Kansas City.
Region 6 St. Louis City.
Region 7 St. Louis County.
Enter List 1992 First appeared on the waiting list (or was offered services) as indicated in files for October, November, or December 1992.
Enter List 1993 First appeared on the waiting list (or was offered services) as indicated in any of the 12 monthly files for 1993.
Enter List 1994 First appeared on the waiting list (or was offered services) as indicated in any of the 12 monthly files for 1994.
Enter List 1995 First appeared on the waiting list (or was offered services) as indicated in the monthly files for January-September, 1995.





Acknowledgments

Research reported here was undertaken in connection with a University of Missouri-Columbia project funded by the State of Missouri Coordinating Board for Higher Education, the Departments of Economic Development, Elementary and Secondary Education, Labor and Industrial Relations, and Social Services, under the direction of the Missouri Training and Employment Council. We wish to thank Richard L. Koon, Robert LaLonde, David Mandy, Seth Sanders, and Ken Troske for helpful comments. The opinions and interpretations expressed in this paper are those of the authors and do not necessarily represent those of any agency of the State of Missouri.

Table 1: Means and Standard Deviations for Individuals on the FUTURES Waiting List Classified by Waiting List Status: Individuals Who Volunteered to Participate, Individuals Who Were Required to Participate, and Individuals Who Were Required to Participate Who Also Volunteered


Voluntary Only


Required Only
Required Participants Who Also Volunteered
Variable Mean Std Dev Mean Std Dev Mean Std Dev
1. Age 25.89 5.88 31.65 7.04 30.74 6.72
2. Race (Nonwhite) 0.48 0.50 0.28 0.45 0.20 0.40
3. Education (years) 11.40 1.39 11.33 1.61 11.47 1.61
4. Months on AFDC 21.19 24.96 8.87 20.46 6.31 16.28
5. Zero months 0.11 0.31 0.40 0.49 0.42 0.49
6. One month 0.15 0.35 0.37 0.48 0.39 0.49
7. Number of Children 1.35 0.76 1.32 0.69 1.27 0.61
8. Youngest Child Age<2 0.44 0.50 0.07 0.25 0.08 0.26
9. Youngest Child Age 2-3 0.33 0.47 0.16 0.36 0.15 0.36
10. Youngest Child Age 4-6 0.12 0.33 0.34 0.47 0.36 0.48
11. Father Never in Home 0.39 0.49 0.20 0.40 0.21 0.41
12. Work Exper. prior to AFDC 3.14 3.94 4.06 4.49 4.39 4.51
13. No Work 0.48 0.50 0.43 0.50 0.39 0.49
14. Earnings in Previous Year 1470.55 2644.53 2816.46 4470.22 2700.34 4281.93
15. No Earnings 0.45 0.50 0.45 0.50 0.44 0.50
16. Received Offer of Service 0.49 0.50 0.20 0.40 0.35 0.48
17. Region 1 0.07 0.26 0.13 0.34 0.12 0.33
18. Region 2 0.13 0.34 0.13 0.34 0.16 0.37
19. Region 3 0.11 0.31 0.18 0.39 0.18 0.39
20. Region 4 0.14 0.35 0.22 0.42 0.31 0.46
21. Region 5 0.21 0.40 0.12 0.33 0.10 0.30
22. Region 6 0.18 0.38 0.14 0.35 0.06 0.25
23. Region 7 0.16 0.36 0.07 0.26 0.05 0.23
24. Enter List 1992 0.12 0.32 0.11 0.32 0.15 0.35
25. Enter List 1993 0.38 0.49 0.35 0.48 0.36 0.48
26. Enter List 1994 0.30 0.46 0.32 0.47 0.28 0.45
27. Enter List 1995 0.20 0.40 0.22 0.41 0.22 0.41
N 23158 25453 5387


Key: See data appendix.















Table 2: Distribution of Individuals by Time on FUTURES Waiting List, Waiting List Classification (Voluntary/Mandatory) and Offer of FUTURES Services.
                                           Offered Services Within Study Period
                                 N     No Wait   Wait 1 to   Wait 13 or    After          Never Offered   Total
                                                 12 Months   More Months   Interruption   Services
Voluntary Only                23158    6.6%      30.8%       2.9%          8.6%           51.2%           100.0%
Required Only                 25453    3.4%      7.9%        1.9%          6.4%           80.4%           100.0%
Required Participants Who
  Also Volunteered             5387    7.1%      18.8%       3.4%          5.5%           65.3%           100.0%












Table 3: Means and Standard Deviations: Individuals Who Volunteered to Participate in FUTURES and Who Were Not Required to Participate
No Offer of Service Offer of Service
Variable Mean Std Dev Mean Std Dev
Age 26.07 5.99 25.69 5.76
Race (Nonwhite) 0.44 0.49 0.51 0.49
Education (years) 11.38 1.42 11.42 1.35
Months on AFDC 19.28 24.30 23.18 25.47
Zero months 0.14 0.34 0.07 0.26
One month 0.17 0.37 0.11 0.32
Number of Children 1.33 0.74 1.37 0.78
Youngest Child Age < 2 0.42 0.49 0.45 0.49
Youngest Child Age 2-3 0.32 0.46 0.32 0.46
Youngest Child Age 4-6 0.12 0.33 0.11 0.32
Father Never in Home 0.37 0.48 0.40 0.49
Work Experience Prior to AFDC 3.25 3.95 3.02 3.92
No Work 0.46 0.49 0.49 0.50
Earnings in Previous Year 1632.62 2861.04 1300.74 2385.13
No Earnings 0.43 0.49 0.47 0.49
Region 1 0.07 0.25 0.07 0.26
Region 2 0.16 0.36 0.10 0.30
Region 3 0.10 0.30 0.10 0.31
Region 4 0.15 0.36 0.13 0.34
Region 5 0.21 0.41 0.19 0.39
Region 6 0.13 0.33 0.22 0.41
Region 7 0.15 0.36 0.15 0.36
Enter List 1992 0.12 0.32 0.11 0.31
Enter List 1993 0.40 0.49 0.36 0.48
Enter List 1994 0.25 0.43 0.35 0.47
Enter List 1995 0.22 0.41 0.16 0.37
Future Earnings 2473.99 3458.17 2618.43 3441.80
N 11849 11309


Key: See data appendix.

Table 4: Means and Standard Deviations: Individuals Who Were Required to Participate in FUTURES and Who Did Not Volunteer.
No Offer of Service Offer of Service
Variable Mean Std Dev Mean Std Dev
Age 32.07 7.02 29.90 6.85
Race (Nonwhite) 0.31 0.46 0.18 0.39
Education (years) 11.36 1.61 11.24 1.61
Months on AFDC 8.32 19.98 11.16 22.21
Zero months 0.41 0.49 0.35 0.48
One month 0.38 0.49 0.32 0.47
Number of Children 1.30 0.67 1.38 0.77
Youngest Child Age < 2 0.04 0.20 0.18 0.38
Youngest Child Age 2-3 0.14 0.35 0.21 0.41
Youngest Child Age 4-6 0.34 0.47 0.31 0.46
Father Never in Home 0.22 0.41 0.14 0.35
Work Experience Prior to AFDC 4.15 4.51 3.70 4.36
No Work 0.43 0.49 0.46 0.50
Earnings in Previous Year 2947.00 4622.75 2279.02 3730.49
No Earnings 0.44 0.50 0.47 0.50
Region 1 0.12 0.33 0.16 0.37
Region 2 0.13 0.33 0.15 0.36
Region 3 0.17 0.37 0.24 0.43
Region 4 0.22 0.41 0.24 0.43
Region 5 0.14 0.35 0.06 0.24
Region 6 0.15 0.36 0.10 0.31
Region 7 0.08 0.27 0.04 0.20
Enter List 1992 0.12 0.32 0.09 0.29
Enter List 1993 0.36 0.48 0.31 0.46
Enter List 1994 0.32 0.47 0.33 0.47
Enter List 1995 0.21 0.41 0.27 0.44
Future Earnings 2487.61 3627.91 2338.16 3282.27
N 20479 4974


Key: See data appendix.

Table 5: Logistic Regression of Selection from the Waiting List Predicted by Individual Characteristics and Length of Time on Waiting List: Voluntary Participants Who Were Not Required to Participate.
Waiting Time:                   No Wait               One Month             Two Months            Three Months          Four Months
Variable                        B (Std Error)         B (Std Error)         B (Std Error)         B (Std Error)         B (Std Error)
Intercept                       -2.32 (0.28)          -3.01 (0.28)          -3.42 (0.28)          -3.42 (0.33)          -3.40 (0.41)
Age                             -0.05* (0.006)        -0.003 (0.005)        0.00028 (0.005)       0.006 (0.006)         0.009 (0.007)
Race (Nonwhite)                 -0.16* (0.07)         -0.05 (0.07)          -0.12 (0.07)          0.15 (0.09)           0.01 (0.11)
Education                       0.12* (0.02)          0.03 (0.02)           0.04* (0.02)          0.04 (0.02)           0.04* (0.03)
In School                       0.74* (0.07)          0.16* (0.08)          -0.13 (0.09)          0.17 (0.10)           0.16 (0.12)
Zero Months on AFDC             -1.95* (0.16)         -0.43* (0.10)         -0.51* (0.10)         -0.70* (0.12)         -0.73 (0.15)
One Month on AFDC               -0.75* (0.09)         -0.26* (0.08)         -0.26* (0.08)         -0.49* (0.10)         -0.34 (0.12)
Number of Children              0.07 (0.03)           -0.02 (0.03)          0.03 (0.03)           -0.06 (0.04)          -0.001 (0.04)
Youngest Child < 2              0.11* (0.05)          0.002 (0.05)          0.07 (0.05)           0.15* (0.06)          0.02 (0.08)
Father Never in Home            -0.10 (0.05)          -0.000 (0.05)         0.04 (0.05)           0.03 (0.06)           -0.03 (0.08)
No Work in Year Prior to AFDC   -0.03 (0.05)          0.04 (0.05)           0.02 (0.05)           -0.02 (0.06)          0.007 (0.08)
Earnings in Previous Year       -0.00003* (0.000013)  0.000003 (0.000011)   0.000008 (0.000012)   0.0000021 (0.000014)  -0.0000087 (0.000018)
Region 1                        0.89* (0.11)          0.07 (0.13)           -0.51* (0.14)         -0.91* (0.17)         -1.37* (0.22)
Region 2                        -0.20* (0.11)         -0.25* (0.11)         -0.87* (0.12)         -1.07* (0.13)         -1.05* (0.16)
Region 3                        0.50* (0.10)          0.32* (0.11)          -0.44* (0.12)         -0.73* (0.14)         -0.82* (0.17)
Region 4                        -0.19 (0.12)          -0.29* (0.12)         -0.34* (0.11)         -0.66* (0.13)         -1.07* (0.17)
Region 5                        -0.28 (0.09)          0.24* (0.08)          -0.35* (0.08)         -0.60* (0.09)         -0.67* (0.11)
Region 7                        -0.76* (0.11)         -0.19* (0.09)         -0.03 (0.07)          -1.13* (0.10)         -1.39* (0.14)
Enter List 1993                 -0.44* (0.08)         -0.28* (0.10)         0.19 (0.12)           0.44* (0.13)          0.38* (0.15)
Enter List 1994                 -0.63* (0.09)         0.46* (0.09)          1.59* (0.11)          1.68* (0.13)          1.61* (0.14)
Enter List 1995                 0.59* (0.08)          1.11* (0.10)          1.67* (0.12)          1.60* (0.14)          1.72* (0.17)
Pseudo R2                       0.096                 0.043                 0.081                 0.092                 0.096
Average Selection Probability   0.066                 0.075                 0.100                 0.086                 0.068
N                               23158                 19992                 16768                 13414                 10713



Key: See data appendix.

Table 6: Logistic Regressions of Selection from the Waiting List Predicted by Individual Characteristics, by Length of Time on Waiting List: Mandatory Participants Who Did Not Volunteer
Waiting Time:                   No Wait               One Month               Two Months            Three Months          Four Months
Variable                        B (Std Error)         B (Std Error)           B (Std Error)         B (Std Error)         B (Std Error)
Intercept                       -4.32 (0.49)          -5.94 (0.48)            -6.21 (0.62)          -5.42 (0.73)          -6.20 (0.94)
Age                             -0.057* (0.007)       -0.017 (0.006)          -0.012 (0.008)        -0.008 (0.011)        -0.02 (0.01)
Race (Nonwhite)                 -0.76* (0.14)         -0.45* (0.14)           -0.73* (0.20)         -0.63* (0.25)         -0.29 (0.28)
Education                       0.09* (0.02)          0.01 (0.02)             0.07* (0.03)          0.04 (0.04)           -0.008 (0.052)
In School                       0.36* (0.15)          0.25 (0.16)             -0.03 (0.24)          0.02 (0.30)           -1.10 (0.58)
Zero Months on AFDC             -1.99* (0.11)         0.24* (0.11)            0.02 (0.14)           -0.06 (0.18)          -0.30 (0.20)
One Month on AFDC               -1.05* (0.09)         0.07 (0.11)             -0.05 (0.15)          -0.25 (0.19)          -0.23 (0.20)
Number of Children              0.25* (0.05)          0.21* (0.05)            0.09 (0.07)           0.03 (0.10)           0.12 (0.10)
Youngest Child < 2              1.81* (0.09)          1.81* (0.11)            1.81* (0.15)          1.56* (0.21)          1.53* (0.24)
Father Never in Home            -1.22* (0.14)         -0.93* (0.14)           -0.95* (0.20)         -0.40 (0.21)          -0.26 (0.22)
No Work in Year Prior to AFDC   0.13 (0.08)           -0.21* (0.09)           -0.14 (0.12)          -0.16 (0.15)          -0.31 (0.18)
Earnings in Previous Year       -0.00004* (0.000013)  -0.00000805 (0.000011)  0.000016 (0.000014)   0.00001 (0.00002)     -0.00000612 (0.000026)
Region 1                        0.07 (0.17)           1.44* (0.24)            0.69* (0.27)          0.61 (0.34)           1.79* (0.49)
Region 2                        -0.19 (0.17)          1.26* (0.23)            0.43 (0.27)           0.79* (0.33)          1.31* (0.50)
Region 3                        0.06 (0.16)           1.44* (0.23)            0.71 (0.26)           0.63 (0.33)           1.73* (0.48)
Region 4                        -0.38* (0.17)         0.86* (0.24)            0.25 (0.27)           0.23 (0.34)           1.57* (0.48)
Region 5                        -1.52* (0.25)         -0.26 (0.28)            -0.26 (0.30)          0.26 (0.34)           0.43 (0.54)
Region 7                        0.01 (0.21)           0.44 (0.28)             -0.40 (0.38)          -1.48* (0.75)         0.85 (0.56)
Enter List 1993                 -0.08 (0.36)          -0.11 (0.28)            0.41 (0.33)           -0.06 (0.30)          0.90 (0.47)
Enter List 1994                 1.54* (0.33)          1.83* (0.25)            1.89* (0.31)          1.08* (0.27)          2.10* (0.46)
Enter List 1995                 3.72* (0.32)          2.78* (0.25)            2.29* (0.32)          1.18* (0.30)          1.95* (0.49)
Pseudo R2                       0.384                 0.208                   0.131                 0.077                 0.102
Average Selection Probability   0.034                 0.033                   0.019                 0.019                 0.011
N                               24596                 21700                   18635                 16490                 14759

Key: See data appendix.
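The selection models in Tables 5 and 6 are logistic regressions of an offer indicator on observed characteristics, fit separately at each waiting time. The mechanics can be illustrated with a minimal Newton-Raphson logit on synthetic data; the variable names, sample size, and coefficient values below are invented for illustration and are not drawn from the FUTURES files.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton-Raphson fit of a logistic regression.
    Returns coefficients and standard errors from the inverse Hessian."""
    X = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))         # predicted selection probabilities
        W = p * (1 - p)                          # logit weights
        H = X.T @ (X * W[:, None])               # observed information matrix
        b += np.linalg.solve(H, X.T @ (y - p))   # Newton step on the score
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return b, se

rng = np.random.default_rng(0)
n = 20_000
educ = rng.normal(11.4, 1.4, n)                  # hypothetical years of education
zero_afdc = rng.binomial(1, 0.3, n)              # hypothetical "zero months on AFDC" flag
# Illustrative selection rule: education raises the odds of an offer,
# recent non-receipt of AFDC lowers them.
true_logit = -4.0 + 0.12 * (educ - 11.4) - 1.0 * zero_afdc
offered = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

b, se = fit_logit(np.column_stack([educ - 11.4, zero_afdc]), offered)
print(b)   # roughly recovers [-4.0, 0.12, -1.0]
```

Selection probabilities at each waiting time (as in the "Average Selection Probability" rows) correspond to the fitted values `p` averaged over the remaining waiting-list sample.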

Table 7: Ordinary Least Squares Estimates of Program Impact Based on Equations Predicting Post-Program Earnings Controlling for Individual Characteristics
Variable                                     (1)                (2)                (3)                (4)                (5)                (6)
Intercept                                    2473.98 (32.36)    2598.86 (49.23)    2841.04 (63.47)    -377.44 (299.87)   255.08 (329.10)    722.52 (322.74)
Effect of Program Offer for:
  Volunteers Who Were Not Mandatory          144.44* (46.30)    138.83* (46.18)    133.89* (46.26)    98.72* (45.44)     104.82* (45.42)    152.38* (44.41)
  Mandatory Participants, Did Not Volunteer  -149.45* (55.68)   -44.70 (55.58)     -12.00 (55.57)     16.94 (54.78)      49.15 (55.21)      120.35* (54.00)
  Mandatory Participants Who Also
    Volunteered                              127.41 (100.82)    259.64* (100.20)   268.18* (100.07)   216.95* (98.32)    231.35* (98.28)    300.98* (95.91)
Mandatory                                    13.62 (40.65)      176.02* (40.93)    184.55* (40.94)    164.36* (42.88)    78.42 (46.03)      -96.56* (47.60)
Mandatory Participants Who Volunteered       79.41 (67.63)      281.18* (67.67)    278.65* (67.59)    181.58* (67.61)    97.38 (69.63)      -75.52 (70.13)
Enter List 1993                                                                    -108.59* (50.33)   -106.22* (49.42)   -102.86* (49.38)   -134.10* (48.15)
Enter List 1994                                                                    -306.96* (51.49)   -297.31* (50.56)   -292.44* (50.53)   -312.36* (49.29)
Enter List 1995                                                                    -560.36* (54.74)   -561.34* (53.77)   -555.61* (53.78)   -688.40* (52.52)
Age                                                                                                   98.62* (14.45)     90.92* (15.92)     72.87* (15.61)
Age Squared                                                                                           -1.61* (0.22)      -1.62* (0.23)      -1.26* (0.23)
Race (Nonwhite)                                                                                       193.79* (43.08)    232.52* (43.71)    194.33* (42.82)
Years of Education                                                                                    94.93* (20.07)     95.10* (20.07)     53.13* (19.58)
High School Grad.                                                                                     766.35* (52.35)    758.98* (52.38)    622.40* (51.18)
Years of College                                                                                      300.01* (31.26)    301.56* (31.24)    300.20* (30.47)
In School                                                                                             871.35* (55.33)    859.23* (55.35)    793.89* (54.01)
14 Regions/Subregions                                           Included           Included           Included           Included           Included
Number of Children                                                                                                       -98.90* (22.01)    -16.05 (21.72)
Youngest Child Age < 2                                                                                                   -210.20* (59.19)   -95.54 (58.21)
Youngest Child Age 2-3                                                                                                   -345.01* (51.91)   -249.15* (50.96)
Youngest Child Age 4-6                                                                                                   -155.41* (44.21)   -57.68 (43.24)
Father Never in Home                                                                                                     -155.57* (35.14)   -109.42* (34.28)
Months on AFDC                                                                                                                              -1.24* (0.86)
Zero Months                                                                                                                                 -224.22* (48.77)
One Month                                                                                                                                   -144.03* (47.43)
Work Experience Prior to AFDC                                                                                                               43.22* (5.78)
No Work Experience                                                                                                                          -93.46 (47.56)
Earnings in Previous Year                                                                                                                   0.160* (0.008)
Earnings Squared                                                                                                                            -0.00000021 (0.00000039)
No Earnings                                                                                                                                 -257.47* (38.37)
R2                                           0.0005             0.0174             0.0203             0.0561             0.0577             0.1061

Note: Entries are coefficients (B) with standard errors in parentheses; blank cells indicate the variable was not included in that specification.



Table 8: Ordinary Least Squares Estimates of Program Impact Based on Equations Predicting Post-Program Earnings Controlling for Individual Characteristics and Interactions
                                             (1)                (2)                (3)                (4)                (5)                (6)                (7)
Effect of Program Offer for:
  Volunteers Who Were Not Mandatory          152.38* (44.41)    178.18* (45.16)    176.22* (44.49)    161.30* (44.46)    157.86* (44.34)    160.25* (44.69)    154.25* (44.43)
  Mandatory Participants, Did Not Volunteer  120.35* (54.00)    108.05* (55.35)    121.51* (54.15)    112.59* (53.96)    114.99* (53.92)    119.21* (54.11)    117.93* (53.98)
  Mandatory Participants Who Volunteered     300.98* (95.91)    267.46* (100.05)   265.64* (97.22)    294.61* (95.79)    295.58* (95.74)    288.63* (95.95)    290.23* (95.87)
Individual and Region Controls               Included in all models
Interactions of All Other Variables with
  Waiting List Status                        (2) Included; (3) Included if significant
Interactions of Region with Selected
  Individual Controls1                       (4) Included if significant; (5) Included
Interactions of Year of Entry with
  Selected Individual and Region Controls2   (6) Included; (7) Included if significant
R2                                           0.1061             0.1098             0.1086             0.1107             0.1100             0.1076             0.1071

Note: Entries are coefficients (B) with standard errors in parentheses.



1Interactions of seven region dummies with Age, Race, Years of Education, Number of Children, Months on AFDC, Work Experience Prior to AFDC, and Earnings in Previous Year.

2Interactions of Year of Entry dummies with Age, Race, Years of Education, Number of Children, Months on AFDC, Work Experience Prior to AFDC, and Earnings in Previous Year.


Table 9: Instrumental Variables Estimates of Program Impact Based on Equations Predicting Post-Program Earnings, Controlling for Individual Characteristics

Effect of Program Offer for:                 No Wait            One Month          Two Months         Three Months       Four Months
  Volunteers Who Were Not Mandatory          290.15 (162.65)    636.17* (160.64)   10.16 (149.88)     302.22 (171.51)    277.67 (205.43)
  Mandatory Participants, Did Not Volunteer  62.15 (152.44)     358.71* (154.87)   300.79 (211.76)    429.61 (275.48)    286.44 (314.22)
  Mandatory Participants Who Volunteered     916.22* (257.22)   543.18 (295.73)    395.05 (325.16)    -11.19 (447.08)    198.43 (494.20)
R2                                           0.1058             0.1028             0.1006             0.0988             0.1006

Note: Entries are coefficients (B) with standard errors in parentheses.
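The logic behind instrumental variables estimates of this kind can be seen in a just-identified two-stage least squares (2SLS) sketch, where an offer indicator instruments for actual participation. The data-generating story below is entirely synthetic (the confounder, effect size, and selection rule are invented for illustration); it is not the paper's estimator or data, only a demonstration of why IV can remove self-selection bias that OLS cannot.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

ability = rng.normal(0, 1, n)                  # unobserved confounder
z = rng.binomial(1, 0.5, n)                    # instrument: offer of services
# Participation depends on the offer and on unobserved ability (self-selection).
p_take = np.clip(0.2 + 0.5 * z + 0.05 * ability, 0, 1)
d = rng.binomial(1, p_take)
# Earnings: illustrative true program effect of 300; ability also raises earnings.
y = 2000 + 300 * d + 800 * ability + rng.normal(0, 1000, n)

X = np.column_stack([np.ones(n), d])           # regressors: intercept, participation
Z = np.column_stack([np.ones(n), z])           # instruments: intercept, offer

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# Just-identified 2SLS reduces to (Z'X)^-1 Z'y.
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)

print(beta_ols[1], beta_iv[1])  # OLS overstates the effect; IV is close to 300
```

Because participation is correlated with the unobserved earnings determinant, the OLS coefficient absorbs part of the ability effect, while the offer-based IV estimate does not. The larger standard errors in Table 9 relative to Table 7 reflect the usual efficiency cost of instrumenting.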

1. See Friedlander, Greenberg and Robins (1997) for a review of studies evaluating training programs.

2. Bloom et al. (1997) point out that if receipt of service can be fully measured and it can be assumed that dropouts obtain no benefits from the program, adjustments can be made to determine service impact. Our data allow us to determine whether individuals offered services actually participated during the three years of our study. However, some of those who were denied access to the program in this period undoubtedly obtained the same or similar services through other channels or obtained services through the program after the period of our study. Given this difficulty, we have not attempted any adjustment of our estimates.

3. See Koon (1997) for an examination of Missouri's FUTURES program focusing on the first two years of implementation.

4. Women made up four-fifths of the individuals on our files for whom the voluntary/mandatory classification was available.

5. This means that a small number of individuals who first appear in our files after September 1992 may have had contact with the FUTURES system prior to that time.

6. Most of those who ceased to appear on the waiting list disappeared from our FUTURES file, and therefore had no further contact with FUTURES in our study period. A small portion shifted status (e.g., an individual originally coded as a volunteer only was reclassified as mandatory), and so could have been offered services later.

7. Coverage is similar to that in other states. Among the most important omissions are U.S. Postal Service workers, employees in religious and some nonprofit organizations, railroad employees, college students working for their colleges, military employees, and the self-employed.

8. Approximately 97 percent of nonwhites in our sample are identified as black, with the remainder in other racial groups or not identified.

9. The rates, in part, reflect the high levels of turnover in the AFDC population. See Gueron and Pauly (1991) for a discussion of factors affecting participation rates in employment programs.

10. Of course, such selection, insofar as it is not captured in measured variables, would bias estimated program impacts if selected individuals were more likely to have promising employment prospects.

11. The detailed derivation is shown in the appendix.

12. We also estimated effects using separate equations for each year of entry and each waiting list classification. Estimates vary substantially among subgroups, but, given the importance of sampling error, no patterns emerge.

13. Note that ei depends in part on whether the individual is selected at a future time.