DRAFT

March 6, 1997


Monetary Incentive Response Effects in Contingent Valuation Mail Surveys

William J. Wheeler

Jeffrey K. Lazo

Matthew T. Heberling

Ann N. Fisher

Donald J. Epp

Lazo is with the Department of Energy, Environmental, and Mineral Economics, Penn State University. The other authors are with the Department of Agricultural Economics and Rural Sociology, Penn State University.

Funding provided by U.S. Environmental Protection Agency Cooperative Agreement CR 824369-01.

The authors appreciate the comments of Warren Fisher and Jim Shortle on this manuscript.

A shorter version of this paper was submitted for presentation at the NAREA meetings.


Monetary Incentive Response Effects in Contingent Valuation Mail Surveys


Abstract

Monetary incentives are one approach for increasing response rates in contingent valuation surveys. We present the results of a case study designed to assess the effect of incentives on response rates and respondent behavior. We compare response rates and quality of answers for five incentive levels. Including incentives increased the response rate, decreased item non-response rates, but had no effect on stated willingness-to-pay.

1. Introduction

The contingent valuation method (CVM) is increasingly accepted for obtaining economic values of commodities for which a market may not exist (Mitchell and Carson 1989; Arrow et al. 1993). CVM often is applied to environmental commodities or public goods for which no other valuation method is available (Freeman 1993). To provide useful information, CVM studies must elicit values under a set of conditions consistent with economic theory. Foremost, the responses received must be valid and reliable statements of individuals' values. The values obtained must also be representative of the relevant population if they are to be useful for policy makers or for natural resource damage assessment (Kopp and Smith 1993).

In general, surveys are used in CVM studies. There is considerable debate about acceptable methods of administering CVM surveys, but many consider mail surveys a cost-effective method of contacting a diverse population. For mail CVM surveys to be acceptable, responses must be representative of the population and invariant to the method of survey administration. Adequate response rates and completion of the survey instrument are important steps in meeting these criteria. Monetary incentives have been used in CVM surveys to increase response rates (see Mitchell and Carson 1989; Schulze et al. 1994).

This paper describes a case study designed to examine how monetary incentives affect response rates and survey completion. It tests the effect of monetary incentives on three categories: overall response rates, item non-response rates, and response quality. Section 2 discusses recent concerns about response rates to mail surveys. Section 3 reviews previous research on the effects of monetary incentives in mail surveys. Section 4 discusses optimal incentive levels from a benefit-cost perspective. Section 5 describes the survey and empirical results of the monetary incentives. Section 6 presents conclusions and suggests future research.

2. Concerns about Low Response Rates

Contingent valuation researchers share concerns expressed in the wider survey literature about low response rates. There is also a concern that response rates have been declining over time (Bradburn 1992; Panel on Incomplete Data 1983; Goyder 1987). Although the evidence for this decline is mixed (Smith 1994), it seems clear that achieving a given response rate now requires using more resources than was the case a few years ago (Davis and Smith 1992; Spaeth 1992).

Low response rates would not be a problem if non-respondents were spread randomly among the population of interest. But low response rates often reflect bias, in that some subgroups are more likely to respond than others (Groves 1989). This can invalidate a survey if respondents are not representative of the target population (Mitchell and Carson 1989; Arrow et al. 1993; Desvousges et al. 1992). To the extent that response rates have been a concern in the CVM literature, the issue has been not response rates themselves but this non-response bias (Edwards and Anderson 1987; Loomis 1987; Dalecki et al. 1993; Mattsson and Li 1994).

There are several tools to increase response rates. Perhaps the most common is to use follow-ups, such as repeat mailings to non-respondents in mail surveys (Dillman 1978). Another simple method is to give the respondent an incentive to encourage completion of the survey. Incentives can be monetary, non-monetary (e.g. pens), or charitable contributions for each returned questionnaire. The incentive can be included when the survey is mailed or it can be contingent upon completion and return of the survey.

A large body of literature establishes that incentives can increase survey response rates substantially. Incentives also can reduce the item non-response rate for individual questions and increase the completeness of open-ended answers. Monetary (cash or check) incentives may be more effective than non-monetary incentives (Yammarino et al. 1991). Furthermore, cash is probably easier to administer and more generalizable.

While it is desirable to have more interviews completed and more complete interviews, it is not desirable for incentives to bias the responses given by survey participants. Therefore, it is also necessary to check that responses to CVM surveys are invariant to incentive treatments.

Our discussion concentrates on mail surveys because mail surveys are likely to be the administrative mode of choice for many CVM surveys. In-person interviews are likely to be very expensive for a representative sample, and less costly telephone interviews may not accommodate visual aids or detailed informational materials.

3. Previous Research on Monetary Incentives

Response Rates and Item Non-Response

Higher cash values of the incentives are correlated with higher response rates in nearly all available studies (Church 1993; Yu and Cooper 1983). However, results from Armstrong (1975), Fox, Crask and Kim (1988), Church (1993), James and Bolstein (1992), and Yammarino et al. (1991) suggest that this effect diminishes as the size of the incentive increases. Including incentives with the questionnaire itself is far more effective than promising that an incentive will be sent when the completed questionnaire is received by the researcher (Yu and Cooper 1983; Church 1993; Hopkins and Gullickson 1992).

Most tests of incentives have used amounts of $1 and smaller, with results showing an increase in response rate as the incentive increases (Furse and Stewart 1982; Hubbard and Little 1988a, 1988b; James and Bolstein 1990). Two studies (James and Bolstein 1990; Pino 1986) found no significant difference between $1 and $2. Mizes et al. (1984) found no significant difference between $1 and $5. Yammarino et al. (1991) suggest that the impacts of incentive size are "situation specific." James and Bolstein (1992) test a wider range of incentives, showing significant increases in response rates as the incentive is raised from $1 to $5 and to $20 (but not for $10 or $40). They say this might be partly because of a relatively small sample with only 150 subjects in each incentive treatment. They also sample a group (owners of small construction subcontracting companies) that might differ from other target groups.

Considerably less attention has been paid to the effect of incentives on item non-response. For CVM researchers, item non-response to valuation or demographic questions could mean that a returned survey is unusable. Therefore, the potential impact on item non-response could be very important when choosing an incentive amount. Hansen (1980) found that incentives did not have an effect on the proportion of close-ended questions completed but had a detrimental effect on the completeness of open-ended questions. James and Bolstein (1990) discovered that incentives led to more complete open-ended questions and more comments on the questionnaire but had no effect on item non-response. Berk et al. (1987) report that incentives decreased item non-response.

Response Quality

Theories differ as to why participants respond to surveys and how the use of incentives might affect responses (Dillman 1978; Hansen 1980). These theories do not agree on what, if any, effect incentives have on response quality. If incentives increase response rates and decrease item non-response, it is entirely possible that they will alter the distribution of responses. Some of this is expected, because some participants respond when incentives are present but not otherwise. If these respondents are demographically (or in some other way) different from respondents who participate with or without incentives, the distribution of responses will change. This change in distribution is desirable if it increases the representativeness of the responses by decreasing non-response bias. If, however, incentives cause respondents to alter their responses from what those responses would have been without incentives, then another bias may be introduced. One hypothesis is that incentives induce a kind of social desirability bias, making some respondents behave in a manner that they believe will please the designers of the survey. If this hypothesis is true, respondents to CVM surveys might give higher willingness-to-pay (WTP) amounts than they would without incentives. Unfortunately, there is very little evidence with which to answer these questions.

In a pair of studies, James and Bolstein (1990, 1992) found little evidence that incentives cause differences in responses. In their first (1990) study they found that incentives result in somewhat more requests for information. In their second study, they found slight demographic differences between treatments but no differences to closed-ended questions. Biner and Barton (1990) find no effect across their 2x2 design of cover letter tone and incentive level, although their evidence may not be as clear-cut as they argue.

4. Optimal Incentive Size from a Benefit Cost Perspective

Many considerations go into determining optimal incentives in mail surveys. Ultimately these are defined by the purpose of the survey. The objective of the survey administrator is to maximize the net benefit of the survey. The net benefit of a survey is the total benefit of the survey, which is a function of the responses, minus the total cost of the responses. Writing I for the incentive level,

NB(I) = B(R(I)) - C(I)

where B is the total benefit of the survey, R(I) is the response rate induced by incentive level I, and C(I) is the total cost.
Focusing on the incentive level, the total benefit of the survey is a function of the response rate, which in turn is a function of the incentive level. The total cost of the survey is also a function of the incentive level. Ceteris paribus, the total value of the survey will be maximized by choosing the appropriate incentive level. On the margin, the net value of an additional response is the marginal value of an additional response minus the marginal cost of an additional response. The optimal incentive level is the solution to

(dB/dR) (dR/dI) = dC/dI                                (1)
(Term A) (Term B) (Term C)
The optimal incentive level depends on three components. Term A is the marginal benefit of responses, which includes many considerations. These may include the overall improvement in data provided by an additional response, such as reduction in errors and improved data analysis. This includes increasing total returned surveys as well as reducing item non-response. As adequate response rates are necessary to ensure representativeness of the population, the benefits of increased responses depend on the importance of obtaining a representative sample. For instance, the minimum required response rate of 70% suggested by the NOAA panel (Arrow et al. 1993) implies that increasing the response rate from 69% to 70% prevents the value of a survey from being zero.

Term B is the increase in response rate and quality induced by increasing the incentive level. Determining this involves understanding how potential respondents react to the level of the incentive. Two components are often identified in individuals' reactions to an incentive. First may be an implication that the incentive indicates the importance of the survey or the good will of the surveyor; thus even a minimal incentive may induce increased responses. This may be tested by examining response rates with and without incentives (regardless of incentive size). Second is the feeling that the incentive "pays" the respondent for his or her time to complete the survey. This suggests that the incentive must compensate the respondent for some appropriate value of time. If the value of time is important, then the "quality" of responses may be a function of the incentive size. This may be tested by examining item non-response or by examining whether respondents' characteristics differ as incentive size increases (e.g., higher income individuals may be more likely to respond to larger incentives).

Term C is the marginal cost of the incentive level. The total cost of a survey involves all of the printing, mailing, labor, and administration costs as well as data entry and processing. Depending on how the survey is administered many of these costs are essentially fixed costs and thus the primary marginal cost is the size of the incentive.

For a survey mailed to 1000 households, the cost of increasing the incentive level by one dollar is $1000. If incentives reduce the need for follow-up mailings, additional costs are lower. Dividing both sides of Equation 1 by Term B, we derive the optimal incentive level by equating the marginal benefit of a response to the marginal cost of a response:

dB/dR = (dC/dI) / (dR/dI)
Using incentives to induce responses, the marginal cost of a response may be quite high. For instance, suppose the researcher has a survey to be mailed to 1000 households. If increasing response rates by 1% (10 additional responses) requires incentives to be $1 larger ($1000 total increase in incentives), the marginal cost of each additional response is $100 just in incentives.
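The arithmetic in this example can be sketched as a small helper function (the numbers below are the hypothetical figures from the text, not survey results):

```python
# Incentive-only marginal cost of one additional response, as in the text's
# example: 1000 surveys, a $1 larger incentive, and a 1-point rate gain.

def marginal_cost_per_response(n_mailed, incentive_increase, response_rate_gain):
    """Incentive cost of one additional response.

    n_mailed           -- number of surveys mailed
    incentive_increase -- extra dollars of incentive per survey
    response_rate_gain -- resulting rise in the response rate (fraction)
    """
    extra_cost = n_mailed * incentive_increase        # total added incentive outlay
    extra_responses = n_mailed * response_rate_gain   # additional completed surveys
    return extra_cost / extra_responses

print(marginal_cost_per_response(1000, 1.0, 0.01))  # 100.0 dollars per response
```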

5. Survey Design and Results

This contingent valuation exercise used a mail survey to collect information on recreational anglers' values for changes in fish populations and habitat. The sample was drawn from counties in the Susquehanna River Basin (SRB) in Pennsylvania of people who had previously indicated that they or their spouse were interested in fishing. Following a modified Total Design Method (TDM) (Dillman 1978), 600 questionnaires were mailed on March 7, 1996. A postcard follow-up was sent March 18 to 255 anglers who had not yet returned a questionnaire. A second follow-up with another questionnaire was sent April 1 to 137 people. A third and final follow-up was sent April 22, 1996 to the remaining 64 nonrespondents. Our methodology differs from the TDM because we did not send the third follow-up by registered mail. The questionnaires were randomly assigned to five groups of 120: a no-incentive control group and treatments of $1, $2, $5, and $10.

Of the 600 names in our sample, only one questionnaire was returned with a bad address, leaving 599 potential respondents to be used in our response rate calculations. Of these, 554 returned the questionnaire, for an overall response rate of 92.5 percent. Then, 156 questionnaires were eliminated from parts of the analysis because the respondents were non-anglers or anglers who do not fish in the SRB. The 398 respondents who indicated that they fish in the SRB are the basis for our analysis of item response completeness and quality.

Response Rates

Incentives could affect both response rate and speed. The initial mailing occurred on a Wednesday and the follow-up mailings on Mondays according to a schedule consistent with the TDM. Responses received within three days of a follow-up mailing were classified as belonging to the prior mailing. These mailings are designated as the initial mailing, the postcard mailing and the 2nd and 3rd follow-ups. Table 1 reports the response rates for the sample of 599, broken down by incentive and by mailing. There is a clear difference between using no incentive and using any of the given incentives. However, the effect of increasing incentives is unclear.

The results of statistical tests for the differences between treatments are given in Table 2. The statistical tests were performed using nonparametric methods for differences between rates given in Fleiss (1981). Although there are alternative analytical methods that treat the raw data as counts, rather than rates, the method for rates is used here. This makes the results more useful for thinking about other applications, including replications of this study, where sample sizes might vary dramatically. This is especially probable in mail surveys, where the number of bad addresses cannot be foreseen. The test statistic is distributed chi-square, and large values of the statistic indicate significant differences between treatment groups. The first column gives the chi-square and corresponding p-value for a test of a difference in response rates between no incentive and the group of positive incentive treatments. The second column gives the results of a grouped test for a difference across all of the positive incentives.

The results indicate, at convincing levels of significance, that at each level of follow-up there is a statistical difference between no incentive and some incentive. However, there is no statistical significance between using $1, $2, $5, or $10 (column 2).
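The pattern of these tests can be approximated with a standard chi-square test of homogeneity on the response counts. This is a sketch, not the exact Fleiss (1981) rate procedure used in the paper, and the counts below are recovered by rounding the final-mailing rates in Table 1, so the statistics will only roughly match Table 2:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Final response rates and group sizes from Table 1 (last row of each column).
groups = ["$0", "$1", "$2", "$5", "$10"]
n = np.array([119, 120, 120, 120, 120])
rates = np.array([0.798, 0.942, 0.933, 0.983, 0.967])
responded = np.round(rates * n)
table = np.vstack([responded, n - responded])   # rows: responded / did not

# Test 1: no incentive vs. any incentive (pool the four incentive groups).
pooled = np.column_stack([table[:, 0], table[:, 1:].sum(axis=1)])
chi2_a, p_a, _, _ = chi2_contingency(pooled)

# Test 2: homogeneity across the four positive-incentive levels only.
chi2_b, p_b, _, _ = chi2_contingency(table[:, 1:])

print(f"no vs. some incentive: chi2 = {chi2_a:.1f}, p = {p_a:.4f}")
print(f"within incentives:     chi2 = {chi2_b:.2f}, p = {p_b:.2f}")
```

As in the paper, the first test is highly significant while the second is not.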

As a further test, regressions are performed on the final response rates. Since these are group means of proportions, grouped logit regressions are estimated (Greene 1997). If Ri is the response rate for group i, then the regression function is

ln[Ri / (1 - Ri)] = xi'β + εi
where xi is a vector of explanatory variables. This regression can be estimated using a two-step procedure. We tested two models: one with a constant term and an Incentive variable (equal to the amount of the monetary incentive) as a slope variable (Model A); the other with these two variables plus IncenDUM, a dummy variable equal to one if an incentive was included (Model B). We are therefore able to test how much the presence of incentives alone increases the response rate and how much increasing the incentive amount continues to increase the response rate. The results are presented in Table 3. The coefficients are not directly interpretable, but the significance levels are (standard errors are computed using the formula in Maddala [1983]). In Models A and B, the Incentive variable is significant. However, in Model B, the IncenDUM variable is not significant. These results indicate, in contrast to the nonparametric tests, that increasing incentive amounts does have an effect on response rates. The predicted response rates from each model are compared to the observed response rates in Table 4. Based on Table 4 and the adjusted R2s (from Table 3), Model B appears to have a slightly better fit.
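The two-step estimator can be sketched as weighted least squares on the sampled log-odds, with each group weighted by the inverse variance of its log-odds, n_i R_i (1 - R_i) (Maddala 1983; Greene 1997). The rates below are the final response rates from Table 1; the exact specification behind Table 3 may differ, so treat the resulting numbers as illustrative of Model A rather than a reproduction of it:

```python
import numpy as np

# Final response rates and group sizes from Table 1.
n = np.array([119.0, 120.0, 120.0, 120.0, 120.0])
R = np.array([0.798, 0.942, 0.933, 0.983, 0.967])
incentive = np.array([0.0, 1.0, 2.0, 5.0, 10.0])

y = np.log(R / (1.0 - R))                   # observed log-odds per group
X = np.column_stack([np.ones_like(incentive), incentive])
w = n * R * (1.0 - R)                       # inverse-variance WLS weights

# Weighted least squares: solve (X'WX) beta = X'Wy.
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"constant = {beta[0]:.2f}, Incentive slope = {beta[1]:.3f}")
```

The estimated slope is positive and of the same rough magnitude as the Incentive coefficient reported for Model A.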

Item Non-Response Rates

Because there are branching questions in the survey, not every respondent answers the same number of questions. Therefore, the item non-response rate, rather than the number of item non-responses, is the variable of interest. Table 5 reports the mean item non-response for the returned surveys. The mean item non-response rate declines with increasing incentives, with the exception, again, of the $2 incentive. Fitted regressions for item non-response, on both the individual data and the means (using weighted least squares, with group size as the weights), are presented in Table 6. A White (1980) heteroskedasticity-consistent covariance matrix is used to calculate the t-statistics. The two models used in the previous section were re-estimated for this analysis. Model 1 indicates that a no-incentive treatment would have an 11 percent item non-response rate; each additional dollar in incentives reduces the item non-response rate by 0.22 percent. Model 2 is similar, and the IncenDUM variable is not significant. The regressions on means have almost identical coefficient values, but with higher goodness-of-fit statistics (this is to be expected [Greene 1997]). The regression on means "smooths out" variance from the wide range of item non-response rates for each incentive level. The very close coefficients between the two regressions show that the individual data regressions are not affected by outlier item non-response rates.
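The White (1980) robust covariance calculation can be sketched as follows. The individual-level survey data are not reproduced here, so the item non-response rates below are simulated, using the Model 1 coefficients (0.11 intercept, -0.0022 slope) as the data-generating values:

```python
import numpy as np

# Simulated stand-in for the individual-data regression: item non-response
# rate on incentive amount, 20 hypothetical respondents per treatment.
rng = np.random.default_rng(0)
incentive = np.repeat([0.0, 1.0, 2.0, 5.0, 10.0], 20)
nonresp = 0.11 - 0.0022 * incentive + rng.normal(0.0, 0.03, incentive.size)

X = np.column_stack([np.ones_like(incentive), incentive])
beta = np.linalg.lstsq(X, nonresp, rcond=None)[0]   # OLS coefficients
e = nonresp - X @ beta                              # residuals

# White (HC0) sandwich estimator: (X'X)^-1 [sum e_i^2 x_i x_i'] (X'X)^-1.
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (e ** 2)[:, None])
V = XtX_inv @ meat @ XtX_inv
se = np.sqrt(np.diag(V))

print("coefficients:", beta)
print("robust standard errors:", se)
```

The robust standard errors replace the usual OLS ones in the t-statistics without changing the coefficient estimates.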

Response Quality

One test of response effects is to compare the responses (across incentives) to a battery of scale questions about individuals' level of concern for various social issues. If incentives cause respondents to give socially desirable answers, those receiving larger incentives will express more concern for these issues than will those receiving smaller or no incentives. Setting up the analysis as a two-way comparison of expressed concern versus incentive, the appropriate test statistic is Somers' d, which treats one of the variables (in this case, incentive amount) as an independent variable while the other variable (expressed concern) is treated as dependent (Agresti 1990). Somers' d ranges from -1 to 1, and a value of 0 represents no relation. The scale for the questions goes from 1 (a lot of concern) to 5 (no need for concern). If higher incentives increase expressed concern, the sign of the test statistic d will therefore be negative. Table 7 reports the results of this test.
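Somers' d can be computed directly from the incentive-by-concern contingency table, counting concordant pairs, discordant pairs, and pairs tied on the dependent variable only (Agresti 1990). The implementation below is a generic sketch, and the example counts are hypothetical, not the survey's:

```python
import numpy as np

def somers_d(table):
    """Somers' d for an r x c table: rows = independent variable (incentive),
    columns = dependent variable (concern category). d = (C - D)/(C + D + Ty),
    where Ty counts pairs tied on the dependent variable but not the rows."""
    t = np.asarray(table, dtype=float)
    r, c = t.shape
    C = D = tie_y = 0.0
    for i in range(r):
        for j in range(c):
            nij = t[i, j]
            C += nij * t[i + 1:, j + 1:].sum()   # concordant pairs
            D += nij * t[i + 1:, :j].sum()       # discordant pairs
            tie_y += nij * t[i + 1:, j].sum()    # tied on columns only
    return (C - D) / (C + D + tie_y)

# Hypothetical counts: rows are the five incentive levels ($0 ... $10),
# columns the concern scale 1 ("a lot") ... 5 ("no need for concern").
concern_by_incentive = np.array([
    [10, 25, 35, 20, 10],
    [12, 27, 34, 18,  9],
    [14, 28, 33, 17,  8],
    [15, 30, 32, 15,  8],
    [17, 31, 30, 14,  8],
])
d_example = somers_d(concern_by_incentive)
print(f"Somers' d = {d_example:.3f}")
```

A perfectly monotone table gives d = 1 (or -1), so the helper can be sanity-checked on extreme cases before use.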

All of the calculated values are small (with absolute value <0.1) and negative. The sign indicates that higher incentives lead to higher expressed concern. For six of the ten variables, this effect is statistically significant at the 10 percent level. This indicates that respondents were, to some extent, exhibiting minor response effects because of the incentive (assuming no differences between groups). To assess the effect of incentives on the makeup of the obtained sample, the same test was applied to the (categorical) income and education questions in the survey. These results are reported in Table 8. The tests are not statistically significant, indicating that the size of incentive did not result in demographically different subsamples.

To investigate whether incentives affect WTP, we use a regression context to estimate bid equations. Variable definitions are given in Table 9. If incentives affect WTP, other things being equal, then a variable for incentive amount will have a significant and positive coefficient.

Table 10 reports the Tobit regression results for WTP with two outlier bids (of $200 and $1000) excluded. Model One includes the sociodemographic variables and scenario characteristics used for estimating individuals' values (Heberling et al. 1996). Also included is the variable Incentive to test for the effect of monetary incentives on response quality. Model Two adds interactive terms to check whether income or education affects responsiveness to monetary incentives. The main variables of interest are at the bottom of Table 10. Incentive is not significant in either model, indicating that we fail to reject the null hypothesis that stated WTP is invariant to monetary incentives. Neither interaction term in Model Two is significant. This further supports the claim that incentives in CVM surveys can increase response rates without inducing changes in WTP-based value estimates.
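The Tobit bid equation cannot be re-estimated without the survey data, but the estimator itself can be sketched by maximum likelihood on simulated data in which the true Incentive effect is zero, mirroring the reported null result. All variable names and parameter values here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated bid data: WTP is left-censored at zero, income matters,
# and the incentive amount has no true effect on the latent bid.
rng = np.random.default_rng(1)
n = 400
incentive = rng.choice([0.0, 1.0, 2.0, 5.0, 10.0], size=n)
income = rng.normal(40.0, 10.0, n)                     # hypothetical, $1000s
latent = -5.0 + 0.3 * income + 0.0 * incentive + rng.normal(0.0, 10.0, n)
wtp = np.maximum(latent, 0.0)                          # observed, censored at 0

X = np.column_stack([np.ones(n), income, incentive])

def neg_loglik(params):
    """Negative Tobit log-likelihood: censored observations contribute
    P(latent <= 0); uncensored ones contribute the normal density."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    cens = wtp == 0.0
    ll = np.where(cens,
                  norm.logcdf(-xb / sigma),
                  norm.logpdf((wtp - xb) / sigma) - log_sigma)
    return -ll.sum()

# Start from OLS coefficients and the sample standard deviation.
start = np.concatenate([np.linalg.lstsq(X, wtp, rcond=None)[0],
                        [np.log(wtp.std())]])
res = minimize(neg_loglik, start, method="BFGS")
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
print("Incentive coefficient:", beta_hat[2])
```

With a zero true effect, the estimated Incentive coefficient should be statistically indistinguishable from zero, which is the pattern reported for Table 10.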

6. Conclusions

We found that use of a monetary incentive in a mail survey can significantly improve response rates, although the amount of the incentive did not seem to matter. While incentives may induce modest bias in the concern questions, they did not affect stated preferences. These results suggest that monetary incentives should be considered as a tool for improving CVM surveys. The question of optimal incentive size is left unanswered, as the marginal benefits of a response are not considered here. The fact that incentives over $1 have little additional effect suggests using the smallest incentive that induces the incentive response effect. However, this is, in part, because of the high overall response rates.

Further research should look at the impact of incentives in other types of CVM surveys, specifically those with less-specialized samples. Even the no-incentive treatment had a response rate of nearly 80 percent, which is quite high compared to other CVM mail surveys. This high response rate may be attributable to the salience of the topic to our target sample. Topics that are salient to a sample will induce higher response rates, and this increase in response will be more pronounced for specialized samples (Brown et al. 1989). If we anticipate that effort to gain a response has a diminishing marginal effect, then giving increasing incentives to a high-salience sample might not be expected to have much of an effect. Increasing incentive amounts may have a stronger effect on a general population sample, or on a targeted sample to whom the topic is not as salient. It is ironic that obtaining a lower response rate might have led us to more clear-cut conclusions. However, it seems clear that monetary incentives improve response rates without influencing stated WTP.

References

Agresti, Alan. 1990. Categorical Data Analysis. New York: John Wiley.

Armstrong, J. Scott. 1975. "Monetary Incentives in Mail Surveys." Public Opinion Quarterly 39: 111-116.

Arrow, Kenneth, Robert Solow, Paul R. Portney, Edward E. Leamer, Roy Radner, and Howard Schuman. 1993. "Report of the NOAA Panel on Contingent Valuation." Federal Register 58, no. 10 (January 15): 4602-4614.

Berk, Marc L., Nancy A. Mathiowetz, Edward P. Ward, and Andrew A. White. 1987. "The Effect of Prepaid and Promised Incentives: Results of a Controlled Experiment." Journal of Official Statistics 3: 449-457.

Biner, Paul M., and Deborah L. Barton. 1990. "Justifying the Enclosure of Monetary Incentives in Mail Survey Cover Letters." Psychology and Marketing 7: 153-162.

Bradburn, Norman M. 1992. "Presidential Address: A Response to the Nonresponse Problem." Public Opinion Quarterly 56: 391-397.

Brown, Tommy L., Daniel J. Decker, and Nancy A. Connelly. 1989. "Response to Mail Surveys on Resource-Based Recreation Topics: A Behavioral Model and an Empirical Analysis." Leisure Sciences 11: 99-110.

Church, Allan H. 1993. "Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis." Public Opinion Quarterly 57: 62-79.

Dalecki, Michael G., John C. Whitehead, and Glenn C. Blomquist. 1993. "Sample Non-response Bias and Aggregate Benefits in Contingent Valuation: An Examination of Early, Late and Non-respondents." Journal of Environmental Management 38: 133-143.

Davis, James A., and Tom W. Smith. 1992. The NORC General Social Survey: A User's Guide. Newbury Park, CA: Sage Publications.

Desvousges, William H., F. Reed Johnson, Richard W. Dunford, K. Nicole Wilson, H. Spencer Banzhaf, and Kristin J. Stettler. 1992. "Using CV to Measure Nonuse Damages: An Assessment of Validity and Reliability." Recommendations to the NOAA Contingent Valuation Panel. Research Triangle Institute Project Number 5367.

Dillman, Don. 1978. Mail and Telephone Surveys: The Total Design Method. New York: Wiley.

Edwards, Steven F., and Glen D. Anderson. 1987. "Overlooked Biases in Contingent Valuation Surveys: Some Considerations." Land Economics 63: 168-178.

Fleiss, Joseph L. 1981. Statistical Methods for Rates and Proportions, 2nd Ed. New York: John Wiley and Sons.

Fox, Richard J., Melvin R. Crask, and Jonghoon Kim. 1988. "Mail Survey Response Rate: A Meta-Analysis of Selected Techniques for Inducing Response." Public Opinion Quarterly 52: 467-491.

Freeman, A. Myrick III. 1993. The Measurement of Environmental and Resource Values: Theory and Methods. Washington, D.C.: Resources for the Future.

Furse, David H., and David W. Stewart. 1982. "Monetary Incentives versus Promised Contributions to Charity: New Evidence on Mail Survey Response." Journal of Marketing Research 19: 375-380.

Goyder, John C. 1982. "Further Evidence on Factors Affecting Response Rates to Mailed Questionnaires." American Sociological Review 47: 550-553.

________. 1987. The Silent Minority: Nonrespondents on Sample Surveys. Cambridge, UK: Polity Press.

Greene, William H. 1997. Econometric Analysis. 3rd Ed. Upper Saddle River, NJ: Prentice Hall.

Gripp, Sharon I., A. E. Luloff, and Robert D. Yonkers. 1994. "Reporting Response Rates for Telephone Surveys Used in Agricultural Economics Research." Agricultural and Resource Economics Review 23: 200-206.

Groves, Robert M. 1989. Survey Errors and Survey Costs. New York: John Wiley.

Hansen, Robert A. 1980. "A Self-Perception Interpretation of the Effect of Monetary and Nonmonetary Incentives on Mail Survey Respondent Behavior." Journal of Marketing Research 17: 77-83.

Heberlein, Thomas A., and Robert Baumgartner. 1978. "Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature." American Sociological Review 43: 447-462.

Heberling, Matthew T. 1997. Estimating the Economic Effects of Global Climate Change on Recreational Anglers. Master's Thesis, Department of Agricultural Economics and Rural Sociology, Penn State University.

Heberling, Matthew T., Ann Fisher, James Shortle, Donald Epp, Jeff Lazo, and Will Wheeler. 1996. "Pennsylvania Anglers and Climate Change." Draft report to U.S. EPA.

Hopkins, Kenneth D., and Arlen R. Gullickson. 1992. "Response Rates in Survey Research: A Meta-Analysis of the Effects of Monetary Gratuities." Journal of Experimental Education 61, no. 1: 52-62.

Hubbard, Raymond, and Eldon L. Little. 1988a. "Research Note: Cash Prizes and Mail Survey Response Rates: A Threshold Analysis." Journal of the Academy of Marketing Science 16: 42-44.

________. 1988b. "Promised Contributions to Charity and Mail Survey Responses: Replication with Extension." Public Opinion Quarterly 52: 223-230.

James, Jeannine M., and Richard Bolstein. 1990. "The Effect of Monetary Incentives and Follow-Up Mailings on the Response Rate and Response Quality in Mail Surveys." Public Opinion Quarterly 54: 346-361.

James, Jeannine M., and Richard Bolstein. 1992. "Large Monetary Incentives and Their Effect on Mail Survey Response Rates." Public Opinion Quarterly 56: 442-453.

Kanuk, Leslie, and Conrad Berenson. 1975. "Mail Surveys and Response Rates: A Literature Review." Journal of Marketing Research 12: 440-453.

Kopp, Raymond J., and V. Kerry Smith, eds. 1993. Valuing Natural Assets: The Economics of Natural Resource Damage Assessment. Washington, DC: Resources for the Future.

Laughland, Andrew S., Wesley N. Musser, and Lynn M. Musser. 1994. "An Experiment in Contingent Valuation and Social Desirability." Agricultural and Resource Economics Review 23: 29-36.

Linsky, Arnold S. 1975. "Stimulating Responses to Mailed Questionnaires: A Review." Public Opinion Quarterly 38: 82-101.

Loomis, John B. 1987. "Expanding Contingent Value Sample Estimates to Aggregate Benefit Estimates: Current Practices and Proposed Solutions." Land Economics 63: 396-405.

Maddala, G. S. 1983. Limited-Dependent and Qualitative Variables in Econometrics. Cambridge, UK: Cambridge University Press.

Mattsson, Leif, and Chuan-Zhong Li. 1994. "Sample Nonresponse in a Mail Contingent Valuation Survey: An Empirical Test of the Effect of Value Inference." Journal of Leisure Research 26: 182-188.

Mitchell, Robert Cameron, and Richard T. Carson. 1989. Using Surveys to Value Public Goods: The Contingent Valuation Method. Washington, D.C.: Resources for the Future.

Mizes, J. Scott, E. Louis Fleece, and Cindy Roos. 1984. "Incentives for Increasing Return Rates: Magnitude Levels, Response Bias, and Format." Public Opinion Quarterly 48: 794-800.

Panel on Incomplete Data. 1983. Incomplete Data in Sample Surveys. Vol. 1, Report and Case Studies. New York: Academic Press.

Pino, Warren. 1986. "Incentives Work When You Follow Up on a Survey." Marketing News 20: 25.

Schulze, William, Gary McClelland, and Jeffrey Lazo. 1994. "Methodological Issues in Using Contingent Valuation to Measure Non-Use Values." Paper Prepared for DOE/EPA Workshop on Using Contingent Valuation to Measure Non-Market Values.

Smith, Tom W. 1994. "Trends in Non-Response Rates." International Journal of Public Opinion Research 7: 158-171.

Spaeth, Mary A. 1992. "Response Rates at Academic Survey Organizations." Survey Research 23, no. 3/4: 18-20.

White, Halbert. 1980. "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity." Econometrica 48: 817-838.

Yammarino, Francis J., Steven J. Skinner, and Terry L. Childers. 1991. "Understanding Mail Survey Response Behavior: A Meta-Analysis." Public Opinion Quarterly 55: 613-639.

Yu, Julie, and Harris Cooper. 1983. "A Quantitative Review of Research Design Effects on Response Rates to Questionnaires." Journal of Marketing Research 20: 36-44.

Table 1. Cumulative Response Rate, by Incentive and Mailing (percentage).

                       $0     $1     $2     $5    $10
1st mailing          36.1   71.7   67.5   73.3   79.2
Postcard             50.4   85.8   83.3   90.8   92.5
2nd followup         77.3   92.5   91.7   95.8   96.7
3rd followup         79.8   94.2   93.3   98.3   96.7
Total deliverable     119    120    120    120    120

Table 2. Analysis of Differences between Response Rates.

                $0 vs. other incentives    Within other incentives
                χ2        p-value          χ2        p-value
1st Mailing     15.1      0.005            3.16      0.53
Postcard        38.5      <0.001           5.51      0.24
1st Followup    23.6      <0.001           3.71      0.45
2nd Followup    26.1      <0.001           4.35      0.36
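As an illustration of the kind of test reported in Table 2, the sketch below computes a Pearson chi-square statistic for the contrast between the $0 group and the pooled incentive groups at the final mailing. The response counts are reconstructed from the rates and "Total Deliverable" row of Table 1, so they are approximate, and no continuity correction is applied; the statistic therefore will not match Table 2 exactly, but it points to the same conclusion (the no-incentive response rate differs at p < 0.001).

```python
# Pearson chi-square test: $0 group vs. pooled incentive groups,
# cumulative response at the final mailing. Counts are reconstructed
# from the response rates in Table 1, so they are approximate.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (rows = groups, columns = responded / did not respond)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# $0 group: 79.8% of 119 deliverable surveys returned (~95 of 119).
# Incentive groups pooled: 94.2%, 93.3%, 98.3%, 96.7% of 120 each
# (~459 of 480 returned).
no_incentive = [95, 119 - 95]      # [responded, did not respond]
incentive = [459, 480 - 459]

chi2 = chi_square_2x2([no_incentive, incentive])
print(f"chi-square = {chi2:.1f}")  # well above the 3.84 critical value
```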

Table 3. Effect of Incentives on Final Response Rate, Grouped Logit Regressions (n=5).

              Model A                 Model B
              Coefficient   t-stat    Coefficient   t-stat
Constant      2.14          4.3***    1.38          3.0***
IncenDUM      ---           ---       1.41          1.0
Incen         0.203         3.0***    0.098         3.3***
Adj. R2       .93                     .96

* Significant at p=.10 or better
** Significant at p=.05 or better
*** Significant at p=.01 or better
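A grouped logit such as Model A can be estimated in several ways; one standard approach for grouped observations is weighted least squares on the empirical log-odds, with weights n(i)p(i)(1-p(i)). The sketch below illustrates that approach only; the paper does not state which estimator was used, and the response counts here are reconstructed from Table 1, so the estimates only approximate the Model A coefficients.

```python
import numpy as np

# Weighted-least-squares grouped logit: regress the empirical log-odds
# of response on the incentive level, weighting each group by
# n_i * p_i * (1 - p_i). Counts reconstructed from Table 1 (approximate).
incentive = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
deliverable = np.array([119, 120, 120, 120, 120])
returned = np.array([95, 113, 112, 118, 116])  # ~ final response counts

p = returned / deliverable
log_odds = np.log(p / (1 - p))        # empirical logit for each group
w = deliverable * p * (1 - p)         # grouped-logit weights

# Weighted least squares: beta = (X'WX)^-1 X'W y
X = np.column_stack([np.ones_like(incentive), incentive])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_odds)

print(f"constant = {beta[0]:.2f}, incentive slope = {beta[1]:.3f}")
```

With these reconstructed counts the incentive slope comes out near the 0.203 reported for Model A, and the positive sign matches the finding that incentives raise the final response rate.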

Table 4. Final Response Rates, Predicted vs. Actual (percentage).

Incentive   Observed   Predicted, Model A   Predicted, Model B
None        79.8       89.4                 79.8
$1          94.2       91.2                 94.7
$2          93.3       92.7                 95.1
$5          98.3       95.9                 96.3
$10         96.7       98.5                 97.7

Table 5. Mean Item Non-Response, by Incentive (percentage).

  $0     $1     $2     $5    $10
11.7   10.1   11.0    9.8    9.0

Table 6. Effect of Incentives on Item Non-Response Rate (n=398).

Individual Data
              Coefficient   t-stat    Coefficient   t-stat
Constant      0.11          24.1***   0.12          13.1***
IncenDUM      ---           ---       -0.0094       -.91
Incen         -0.0022       -3.3***   -0.0018       -2.4**
Adj. R2       .02                     .02

Means Data
              Coefficient   t-stat    Coefficient   t-stat
Constant      0.11          28.4***   0.12          13.1***
IncenDUM      ---           ---       -0.0095       -2.4
Incen         -0.0022       -4.5**    -0.0018       -3.8*
Adj. R2       .66                     .71

* Significant at p=.10 or better
** Significant at p=.05 or better
*** Significant at p=.01 or better


Table 7. Effect of Incentives on Level of Concern (n=398).

Question                Value of d   Std. Error   p-value
Pollution               -0.002       0.035        0.47
Education               -0.057       0.037        0.06
Global Warming          -0.042       0.041        0.15
Endangered Species      -0.086       0.038        0.01
Budget Deficit          -0.065       0.039        0.05
Protecting Habitat      -0.012       0.036        0.37
Drunk Driving           -0.039       0.035        0.13
Race Relations          -0.076       0.040        0.03
Industrial Facilities   -0.006       0.041        0.07
Fishing Quality         -0.065       0.036        0.04

Table 8. Effect of Incentives on Demographic Variables (n=398).

Question    Value of d   Std. Error   p-value
Education   0.006        0.040        0.44
Income      0.008        0.042        0.42

Table 9. Variable Definitions.

Variable       Definition
Constant       Constant term
Satisfaction   Scale of satisfaction with fishing
Importance     Scale of importance of fishing
Concern        Scale of concern about quality of fishing
Trout Trips    Number of trips taken to fish for trout
ATB            Dummy = 1 if angler fishes exclusively for "anything that bites"
Warm           Dummy = 1 if angler fishes exclusively for warm-water fish
Cold           Dummy = 1 if angler fishes exclusively for cold-water fish
Change         Dummy = 1 if scenario specifies a large change in conditions
Male           Dummy = 1 if angler is male
Travel         Dummy = 1 if scenario specifies distant travel to avoid change in conditions
School         Categorical level of education
Age            Age of respondent
Income         Income assigned to be midpoint of stated income category
Incentive      Level of monetary incentive

Table 10. Bid Equations, Willingness-to-Pay (n=398).

                 Model One              Model Two
                 Coefficient  t-stat    Coefficient  t-stat
Constant         -11.3        -1.85*    -12.6        -1.92*
Satisfaction     -0.24        -0.49     -0.28        -0.57
Importance       0.41         0.63      0.43         0.67
Concern          0.66         0.83      0.65         0.82
Trout Trips      0.12         2.66***   0.12         2.65***
ATB              -5.88        -1.90*    -6.11        -1.96**
Warm             -3.80        -1.35     -4.06        -1.43
Cold             -1.08        -0.37     -1.42        -0.48
ATB*Change       11.3         2.43**    11.6         2.49**
Warm*Change      8.40         1.91*     8.68         1.96**
Cold*Change      5.22         1.19      5.61         1.27
Change           -6.20        -1.60     -6.53        -1.68*
Male             2.13         0.79      2.30         0.84
Travel           -1.26        -1.03     -1.17        -0.96
School           0.56         1.78*     0.61         1.29
Age              -.004        -0.52     -.004        -0.54
Income           0.13         3.82*     0.16         3.15***
Incentive        0.14         0.83      0.49         0.31
Incen*Income     ---          ---       -.007        0.44
Incen*School     ---          ---       -.011        0.90

Log-likelihood   -764                   -758
Pseudo-R2        .176                   .173

Key:
* Significant at better than the 0.10 level
** Significant at better than the 0.05 level
*** Significant at better than the 0.01 level