International Journal for Quality in Health Care 1998; Volume 10, Number 6: pp. 521–530
Anticipating market demand: tracking
enrollee satisfaction and health over time
HARRIS ALLEN
Harris Allen Associates, Boston, MA, USA
Abstract
Objective. To assess guidelines, set by the National Committee for Quality Assurance, for the Health Plan Employer Data
and Information Set (HEDIS) 1999 CAHPS 2.0H Survey (formerly the HEDIS 1999 Consumer Survey) in the light of
users’ needs to monitor health plan performance over time, monitor sick enrollees, and prioritize determinants (drivers) of
enrollee experience.
Design. A two-wave, cross-sectional/longitudinal panel design, consisting of national surveys mailed to employees of three
major USA corporations in 1993 and 1995.
Study participants. Samples included employees selected to represent 23 major managed care and indemnity plans in five
regions of the USA. In 1993, 14 587 employees responded and in 1995 9018 employees responded (response rates: 51 and
52%). The longitudinal panel sample included 5729 employees who completed both surveys and stayed in the same plan
for both years.
Study measures. The main 1993 and 1995 surveys consisted of 154 and 116 items, respectively. Panel survey content
assessed care delivery, plan administration, functional status, well being, and chronic disease.
Results. CAHPS 2.0H’s point-in-time, cross-sectional design was unable to detect selection bias and led to an inaccurate
view of change in performance. CAHPS 2.0H’s use of aggregate samples masked key differences between healthy and sick
enrollees; e.g. the sick became less satisfied over time. The association-based, statistical techniques that many survey users
will employ to prioritize the ‘drivers’ of enrollee experience in the absence of CAHPS 2.0H guidelines yielded a less efficient
account of change than the multi-method/multi-trait approach developed for this project.
Conclusion. Consumer experience of plan performance is best understood when the separate contributions of longitudinal
membership and movement in and out of plans are clarified, changes in health are identified, changes for sick and healthy
enrollees are compared, and plan performance on satisfaction criteria is probed to give confirmation and detail. Changes to
the CAHPS 2.0H approach in HEDIS 1999 will facilitate user application of these principles.
Key words: Consumer Assessment of Health Plans Survey (CAHPS), consumer experience, health plan performance, health
care surveys, Health Plan Employer Data and Information Set (HEDIS)
Use of outcomes data to compare health plans, networks, and providers has reached critical mass in the USA marketplace. Few would argue that momentum for results-based comparisons – whether conducted on an external basis across competing entities or on an internal basis by entities seeking outside referents – has achieved a point of no return. Much debate probably lies ahead on how to define and conduct these comparisons, how to interpret and act on results, etc. Yet, the pivotal role that they play in the search for value will continue to grow, as users become more familiar with the uniquely relevant information they provide.

This paper focuses on one source for providing outcomes data, the consumer, and recent developments for eliciting data therefrom. The principal medium for this purpose, consumer surveys, is undergoing a fundamental shift in the USA. This shift towards a publicly sanctioned greater consistency across surveys and how they are implemented is most evident at the level of health plans. Broad support
Address correspondence to Harris Allen, Harris Allen Associates, 20 Park Plaza, Suite 482, Boston, MA 02116, USA. Tel:
+1 617 350 5523. Fax: +1 617 426 9236. E-mail: harris-allen@msn.com
© 1998 International Society for Quality in Health Care and Oxford University Press

has emerged for two initiatives: Health Plan Employer Data and Information Set (HEDIS) [1], sponsored by the National Committee for Quality Assurance (NCQA), and Consumer Assessment of Health Plans Survey (CAHPS) [2], sponsored by the Agency for Health Care Policy Research. HEDIS and CAHPS developers have recently endeavored to resolve differences between these initiatives and NCQA is now presenting a unified offering to users, CAHPS 2.0H [3].

Considerable experience has by now been gained with the use of consumer surveys to evaluate plan performance, under the aegis of prior versions of HEDIS and other report card initiatives [4–8]. This paper proceeds from the premise that lessons have emerged from this experience, about how a survey approach can and should be fashioned to maximize fit with user demand [9,10], which merit examination in light of CAHPS 2.0H. How does this offering’s survey instrument and design fare when held up against these lessons? This question is examined by posing the lessons as analytic issues against which CAHPS 2.0H guidelines are assessed, using results from the first reported consumer survey of plan performance over time [11].

Giving shape to user demand

Although each of the principal user groups – health plans, consumers, and purchasers – has unique interests for consumer survey data, all three share a growing demand for the best information that can be derived from results on each of the following issues. Each issue has corresponding prerequisites for survey approach, and can be illustrated by questions that one of the three user groups is raising out of the needs its constituents bring to survey data.

First, can consumer experience of plan performance change over time? If so, how? For example, health plans are increasingly developing empirical benchmarks for performance, both for self-appraisal and to represent their performance to current and potential customers. As they implement these benchmarks, they need to know if they are improving over time, relative to themselves and to their competitors. The capacity of plans to make accurate links across successive survey administrations is paramount in this regard.

Second, what is the experience of sick enrollees, and how does this compare across plans? Hibbard et al. have suggested that for many individual consumers, an acculturation process is needed before they can make effective use of comparison data when shopping for plans [12]. As this occurs, many consumers will start to ask one or both of two questions: what happens when people in my plan become sick and how does the experience of sick enrollees compare in my plan versus my other plan options? Proper procedures for identifying and monitoring the sick will be key to any project’s attempts to address these questions.

Third, what drives (determines) consumer experience of plan performance? Which aspects of performance most affect what consumers say and do about their plan? Purchasers, too, are turning to benchmarks, not only to inform plan-related buying decisions but also to maximize the value of plan offerings for their covered populations by encouraging plans to improve. As they endeavor to characterize performance and to negotiate improvement expectations, purchasers need to be able to identify a short list of primary drivers. Which variables are likely to mean the most to the insured? Which will yield the best return on investment for quality improvement? Credible driver detection techniques for prioritizing among the many variables often available in collected data sets are an essential element of this exercise.

Examination of CAHPS 2.0H’s approach to these issues needs to take into account survey design and content. Each issue is mainly design-oriented, yet content concerns still apply in each case. Implications for content are therefore probed below to facilitate treatment of design issues.

Applying CAHPS 2.0H

Discussion of CAHPS 2.0H needs to be undertaken in light of guidelines that the NCQA and CAHPS investigators explicitly offer and how these guidelines are likely to be applied by users. To answer their questions, many users are likely to do what they can within limits set by the guidelines. What the guidelines do not say – what they default to in the absence of explicit prescription – may be as important as what they do say.

Monitoring over time

NCQA has for several years now called for periodic administration of the HEDIS survey. CAHPS 2.0H maintains this guideline. It asks plans to produce a new random sample for each administration and, in so far as analysis and reporting are concerned, contains no provision for linking data across administrations. For this point-in-time framework, monitoring performance over time defaults to a sequence of single administration/cross-sectional surveys. The accuracy of this design for over-time assessment is at issue.

Monitoring the sick

CAHPS 2.0H relies on random samples of entire memberships when representing health plans. No guidelines are offered for identifying or monitoring healthy and sick enrollees. Do random samples adequately represent the experience of the sick? Growing public concern about this group in health management organizations (HMOs), reflected in patients’ ‘bill of rights’ legislation now appearing across the USA [13], emphasizes the need to scrutinize this issue.

Driver detection

As with earlier versions, CAHPS 2.0H is silent on offering ways to prioritize the relative importance of variables in collected data sets. Left to their own devices, users to date (that is, those implementing empirical methods for this purpose) have often used correlation and multiple regression analyses where plan satisfaction is either associated with or regressed on competing candidates in single administration,
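The default that "Monitoring over time" describes, a sequence of unlinked cross-sectional samples, can be illustrated with a small Python sketch. All membership counts and satisfaction scores below are invented; the point is only the mechanism, namely that selective disenrollment of less satisfied members can make successive cross-sections suggest improvement while the panel of stayers actually declines.

```python
# Hypothetical illustration (not the paper's data): selective
# disenrollment makes unlinked cross-sectional samples mislead.

def mean(xs):
    return sum(xs) / len(xs)

# Wave 1 membership: satisfaction on a 0-100 scale.
stayers_w1 = [80.0] * 60   # members who will remain enrolled
leavers_w1 = [65.0] * 40   # less satisfied members who will disenroll

wave1_cross_section = mean(stayers_w1 + leavers_w1)   # 74.0

# Between waves, stayers decline slightly and the leavers are
# replaced by new entrants who resemble the more satisfied stayers.
stayers_w2 = [s - 2.0 for s in stayers_w1]            # panel drops to 78
joiners_w2 = [79.0] * 40

wave2_cross_section = mean(stayers_w2 + joiners_w2)   # 78.4

cross_sectional_change = wave2_cross_section - wave1_cross_section  # +4.4
longitudinal_change = mean(stayers_w2) - mean(stayers_w1)           # -2.0

print(f"cross-sectional: {cross_sectional_change:+.1f}, "
      f"longitudinal: {longitudinal_change:+.1f}")
```

Because nothing links the two waves, a user of the point-in-time design sees only the +4.4 shift; the −2.0 experienced by continuing enrollees is invisible without panel tracking.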

Table 1 1993 demographic and health characteristics of key study samples

                                     1993      1995      1995          1995
Variable                             overall   sampled   respondents   non-respondents
Demographics
  Mean age (years)                   40.6      40.6      41.3          39.
  Female (%)                         37.2      37.0      37.0          37.
  White (%)                          85.9      85.7      87.4          82.
Health
  Overall physical health¹          53.2      53.2      53.3          53.
  Overall mental health¹            50.4      50.5      50.5          50.
  Four or more chronic conditions²  21.5      21.6      21.7          21.
Total n                              14 587    9294      5729          3565

¹ Average score, normed to the 1990 USA general population using a T score distribution with a mean of 50 and standard deviation of 10.
² Percent self-reporting four or more conditions from a checklist of 16 chronic conditions that are primarily physical (i.e. non-mental) in orientation.
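The norming described in footnote 1 above, rescaling raw scores so that the 1990 USA general population has mean 50 and standard deviation 10, is the standard T-score transform. A minimal sketch follows; the reference mean and SD used in the example are hypothetical placeholders, not the study's actual population norms.

```python
def t_score(raw, ref_mean, ref_sd):
    """Rescale a raw score so the reference population has mean 50, SD 10."""
    return 50.0 + 10.0 * (raw - ref_mean) / ref_sd

# Hypothetical example: a raw score of 55 against a reference population
# with mean 47 and SD 8 lands exactly one SD above the norm.
print(t_score(55.0, ref_mean=47.0, ref_sd=8.0))  # 60.0
```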

Key findings

Table 1 describes sample demographic and health characteristics at the start of the evaluation period by giving 1993 scores for four groups: the overall 1993 sample; the 1995 target sample; 1995 responders; and 1995 non-responders. As shown, 1995 responders reported health and percent female scores in 1993 that were virtually identical to the corresponding 1993 scores for each of the other groups. The 1995 responders group was somewhat older and more likely to be white, yet these differences were correctable by weighting. These data rule out the possibility that initial demographic and health differences between the 1993 sample and those who were sampled, who responded, and who did not respond in 1995 were a factor in these tests of CAHPS 2.0H.

Monitoring over time

Figure 1 Over-time changes in plan satisfaction: cross-sectional view.

Figure 1 gives the cross-sectional view (i.e. the CAHPS 2.0H analog) of change in plan satisfaction. It shows that IPA and PPGP enrollee satisfaction was relatively high in 1993 and remained favorable in 1995. In both years, more than four out of five responders in these two plan types said they were satisfied. POS enrollees were somewhat less likely to be satisfied, with roughly three out of four responders reporting satisfaction. Cross-sectional comparisons, however, imply

downward shifts for all three managed care plan types during the 2 years, with the highest rated plan type in 1993, IPA, recording a significant drop. In contrast, indemnity enrollees reported sharply lower ratings in 1993, but a major significant gain in 1995. Fifteen percent more enrollees indicated satisfaction with indemnity plans in 1995 than in 1993.

The longitudinal view of change given in Figure 2 provides a strikingly different picture. As with the cross-sectional samples, the IPA and PPGP longitudinal samples recorded relatively high 1993 scores and the POS sample recorded a lower score. However, the longitudinal indemnity sample scored 15% higher than its cross-sectional counterpart and, contrary to the cross-sectional comparisons, all four plan types in the longitudinal sample reported 2–5% declines. For indemnity, this trend is in sharp contrast with the double-digit gain implied by the difference between the 1993 and 1995 cross-sectional samples.

Figure 2 Over-time change in plan satisfaction: longitudinal view.

Further analysis identified substantial shifts in enrollment that explain the differences in cross-sectional and longitudinal results. Table 2 details employee shifts in and out of indemnity that were notable for their size relative to the size of the entire plan type sample and for their marked differences in satisfaction.

Table 2 Selection patterns for indemnity

                               1993                       1995
Group                          Percent satisfied¹  n     Percent satisfied  n
Stayed in sample               84.2               253    83.7               253
Switched
  To another plan type         74.4               497
  From another plan type                                  76.5               17
Lost from sample               73.0               1511
New to sample                                             85.4               578

¹ Percent somewhat + very + completely satisfied.

Those who stayed with indemnity were, on average, 10% more satisfied than either those who voluntarily switched to a managed care plan type or who were lost from the 1993 sample. On the other hand, the 1995 satisfaction level of new entrants to the indemnity sample in 1995 was actually 2% higher than the level for those who had stayed in the indemnity sample. Almost all new entrants came from the pool of employees who had been with indemnity since 1993, sampled to make up for the loss due to the high rate of disenrollment from this plan type during the 2 years.

The loss of less satisfied enrollees who switched out of their plan was not unique to indemnity, as both IPA and PPGP samples showed a similar trend (data not shown). What was unique to indemnity was the proportionate size of this loss. Each of the corporate sponsors had promoted a managed care strategy that had succeeded in persuading many employees to switch to a HMO. Less satisfied switchers shifted principally to managed care plans, with a very small percentage going to indemnity. The net result was selection patterns underlying trends in satisfaction, in most cases having no clear relationship to actual plan performance, that worked very much to indemnity’s favor at the expense of the managed care plans, particularly IPAs.

Comparison of the four groups shown in Table 1 revealed another dynamic at work: non-response bias. The 1993 satisfaction levels of those who responded 2 years later averaged 4% more than those who did not respond. Responders in 1995 recorded 84% satisfaction in 1993 whereas 1995 non-responders recorded 80% satisfaction in 1993. This trend showed a positive relationship between satisfaction and response, with more satisfied responders more likely to return a completed survey. It suggests that follow-up non-responders were more likely to have experienced negative transitions in satisfaction from 1993 to 1995 than follow-up responders.
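The selection story can be checked roughly by pooling the Table 2 entries by hand. The sketch below assumes the listed subgroups exhaust each year's indemnity sample, which the preview does not confirm, so it illustrates the logic of the decomposition rather than reproducing the paper's published estimate.

```python
# Rough decomposition using the Table 2 entries for indemnity.

def weighted_mean(groups):
    """groups: iterable of (percent_satisfied, n) pairs."""
    total_n = sum(n for _, n in groups)
    return sum(pct * n for pct, n in groups) / total_n

# 1993 cross-section: stayers, later switchers-out, and those lost.
sat_1993 = weighted_mean([(84.2, 253), (74.4, 497), (73.0, 1511)])
# 1995 cross-section: stayers, new entrants, and switchers-in.
sat_1995 = weighted_mean([(83.7, 253), (85.4, 578), (76.5, 17)])

cross_sectional_change = sat_1995 - sat_1993   # roughly +10 points
longitudinal_change = 83.7 - 84.2              # stayers alone: -0.5

print(f"cross-sectional: {cross_sectional_change:+.1f}, "
      f"stayers: {longitudinal_change:+.1f}")
```

Under these assumptions the apparent double-digit cross-sectional gain arises almost entirely from who left and who entered the sample; the stayers themselves barely moved.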

Figure 4 Overall plan satisfaction by health transitions: 1993–1995.

category scheme: (0) no statistical evidence (P > 0.05); (1) statistically significant evidence (P < 0.05); (2) compelling statistical evidence (P < 0.0001 and correlation of 0.2–0.4 or equivalent); (3) most compelling statistical evidence (P < 0.0001 and correlation of greater than 0.2 or equivalent and correlation larger than any other correlation).

The top panel gives results when we used correlation and multiple regression analyses conducted on the 1993 and 1995 cross-sectional data sets – the analog for the approaches that most users of CAHPS 2.0H will employ in the absence of guidance otherwise. As shown, correlations on 1993 data found support for coverage, but were not discriminating otherwise. Multiple regression techniques yielded mixed evidence seconding coverage and supporting the nomination of quality. Results in both cases assigned less importance to the access, choice of physician, customer service, and cost domains.

As shown in the bottom panel of Table 3, the additional approaches used in our study yielded a different picture. Strong, unambiguous support for assigning the quality domain highest priority came from three methods: the direct question asking responders to make choices; the longitudinal correlations predicting change in satisfaction; and longitudinal regressions predicting satisfaction changes. In addition, costs alone provided compelling evidence when predicting actual disenrollment. Combined, the evidence from our multiple approaches led us to assign high priority to the quality, coverage, and costs domains, and lower priority to access, physician choice, and customer service.

Table 3 Tests of major drivers of overall plan satisfaction

Type of analysis                            Access  Quality  Physician  Coverage  Service  Costs
                                                             choice
Approaches available to users of CAHPS 2.0H guidelines
  Cross-sectional correlation: 1993         2       2        2          3         2        2
  Cross-sectional regression: 1993          0       2        1          3         1        2
  Cross-sectional regression: 1995          1       3        1          2         1        1
Additional multi-method/multi-trait approaches
  Respondents’ values                       2       3        2          2         0        1
  Prediction of actual disenrollment        0       0        0          1         0        3
  Longitudinal correlation                  2       3        2          2         2        2
    (predicting change in satisfaction)
  Longitudinal regression                   1       3        1          2         1        2
    (predicting change in satisfaction)

Entries: 0=no evidence (P > 0.05); 1=statistically significant evidence (P < 0.05); 2=compelling evidence (P < 0.01); 3=most compelling evidence (P < 0.001).
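The 0–3 grading rule stated in the note to Table 3 can be transcribed directly; this is a literal restatement of that note, not an extension of it.

```python
def evidence_grade(p_value):
    """Map a P value to the Table 3 entry: 0=no evidence (P>0.05),
    1=significant (P<0.05), 2=compelling (P<0.01),
    3=most compelling (P<0.001)."""
    if p_value < 0.001:
        return 3
    if p_value < 0.01:
        return 2
    if p_value < 0.05:
        return 1
    return 0

print([evidence_grade(p) for p in (0.20, 0.03, 0.005, 0.0001)])  # [0, 1, 2, 3]
```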

Table 4 Drivers of satisfaction by plan type: 1993 to 1995

                               Change (1993–1995)
Driver variable¹               IPA²    PPGP³   POS⁴    INDEM⁵
Thoroughness of treatment      −1.2    –⁶      –       +2.8
Outcomes of care               −3.6    −2.9    –       –
Coverage                       –       –       −2.8    –
Premium (employee share)       −3.2    –       −2.4    −5.7

¹ All variables have a 5-point excellent/poor response scale and are scored positively on a 0–100 point scale.
² IPA, Independent Practice Association.
³ PPGP, Pre-Paid Group Practice.
⁴ POS, Point-of-Service.
⁵ INDEM, Indemnity.
⁶ All – entries indicate non-significant results.

To select individual items within these domains, our analyses capitalized on the re-survey strategy, including use of cross-lagged correlations that removed bias attributable to individual rating differences. Our final short list of primary drivers included perceived thoroughness of treatment, self-rated outcomes of medical care, range of coverage, and premiums (the employee share). In this regard, other aspects of quality, for example interpersonal communication and time spent with physician, like access, choice, and customer service, showed positive trends during the 2-year period. Each driver making our final short list, in contrast, recorded decreases when dominating the trend in plan satisfaction. That some aspects of performance got better and others worse increased confidence that no unaccounted for method effect confounded this exercise.

The added value that resulted from the additional three components in our MMMT strategy becomes apparent when the 1993–1995 trend for each of these drivers is examined by plan type. The numerical entries in Table 4 report statistically significant estimates of change on the longitudinal sample; no entry indicates non-significant change. The configuration that point-in-time, association-based statistical techniques suggested was most influential – coverage and to a lesser extent quality – helped to explain change for POS. The quality element of this configuration was also useful for each of the other plan types, but coverage was not. In contrast, the configuration emerging from the MMMT strategy – quality, coverage, and costs, with the highest emphasis on quality – was useful in explaining change for all four plan types. For IPAs, the decrease in satisfaction was fueled mostly by decreases in outcomes, thoroughness of treatment, and employee premiums. For PPGPs, decreases in outcomes and, to a lesser extent, thoroughness were the major contributors. A modestly significant decrease in coverage was at work among POS plans, whereas sharp decreases in premiums for indemnity outweighed an increase in thoroughness. Applying the decision rules employed here, many CAHPS 2.0H users would probably have missed the key role that quality and especially costs played in these changes.

Comment

CAHPS 2.0H makes several noteworthy contributions. It brings the HEDIS and CAHPS initiatives into closer alignment. It includes new content that, the vetting process of the CAHPS focus groups suggests, taps concerns that are high on the list of consumers in terms that are salient and meaningful. These contributions represent progress that virtually all parties concerned would say is needed. They are made in addition to NCQA’s compelling track record for advancing routine survey comparisons of health plans as a standard for the field.

As outcomes measures gain currency in the marketplace, however, design issues – how data are collected, analyzed, and reported – are becoming critical [17]. Lessons supported by the results here suggest that consumer experience of plan performance is best understood when: (i) separate contributions of longitudinal members and selection in and out of plans are clarified; (ii) changes in health are identified; (iii) changes for sick and healthy enrollees are compared; and (iv) performance on specific satisfaction criteria is probed to give confirmation and detail.

Our tests of the CAHPS 2.0H approach have found it wanting. Its point-in-time, cross-sectional design will not enable users to account for selection bias. Their interpretation of change over time will be susceptible to serious errors. CAHPS 2.0H’s use of simple random samples will not sufficiently represent entire covered populations. Users may miss key differences between sick and healthy enrollees. With no guidance otherwise, many users will rely on point-in-time, association-based statistics to prioritize drivers. Important causal influences may be missed.

Given marketplace developments, several changes to CAHPS 2.0H merit review. Based on published work, these changes focus on survey design but also have implications for content.

Monitoring over time

Since migration across plans is likely to continue in USA health care, CAHPS 2.0H needs to work toward a design that blends longitudinal sampling into the current cross-sectional design. Users need to generalize to enrollee populations at various points in time; hence the cross-sectional design. But they also need to keep track of the stayers and switchers; hence the need for longitudinal sampling and tracking.

Routine implementation of blended cross-sectional/longitudinal designs in the marketplace will be no small feat. Our work offers one model, but others are needed because purchasers, not plans, supplied our sampling frames. A key issue is the capacity of users to re-survey disenrollees once they have left a plan. Momentum is building with several
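One way to read the blended design argued for here is as two overlapping samples per wave: a fresh random cross-section for point-in-time estimates, plus a fixed panel that is re-surveyed in full, disenrollees included. The bookkeeping might be sketched as follows; member IDs, counts, and the sampling routine are all invented for illustration and are not the authors' procedure.

```python
import random

random.seed(0)
members_w1 = [f"m{i}" for i in range(1000)]        # wave-1 enrollees

cross_section_w1 = random.sample(members_w1, 200)  # point-in-time sample
panel = random.sample(members_w1, 150)             # fixed longitudinal panel

# Between waves, some members disenroll and new members join.
disenrolled = set(random.sample(members_w1, 300))
members_w2 = [m for m in members_w1 if m not in disenrolled]
members_w2 += [f"n{i}" for i in range(250)]        # new entrants

cross_section_w2 = random.sample(members_w2, 200)  # fresh wave-2 sample

# The panel is re-surveyed in full -- stayers AND disenrollees -- so
# selection in and out of the plan can be separated from real change.
panel_stayers = [m for m in panel if m not in disenrolled]
panel_leavers = [m for m in panel if m in disenrolled]

print(len(panel_stayers) + len(panel_leavers))  # 150: nobody leaves tracking
```

The practical obstacle the text raises, reaching disenrollees after they leave a plan, corresponds to the `panel_leavers` group: the design only works if those members can still be surveyed.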

6. Minnesota Health Data Institute. 1995 Consumer Survey Project. St Paul, MN: Minnesota Health Data Institute, 1995.
7. The HMO Group. The HMO Group Member Satisfaction Survey. New Brunswick, NJ: The HMO Group, 1996.
8. Allen HM, Darling H, McNeill D, Bastien F. Employee Health Care Value Survey: Round 1. Health Affairs 1994; 13: 25–41.
9. Allen HM. Toward the intelligent use of health care consumer surveys. Man Care Quart 1995; 3: 10–21.
10. Allen HM. Reading the tea leaves: anticipating market demand for patient/consumer surveys in health care. In Robinson L., ed., 1998–99 Guide to Patient Satisfaction Survey Instruments. Washington, DC: Atlantic Information Services, 1998, pp. ix–xiii.
11. Allen HM, Rogers WH. Consumer Health Plan Value Survey: Round two. Health Affairs 1997; 16: 156–166.
12. Hibbard J. Strategies to support the use of outcomes measures by purchasers and consumers: new dilemmas and challenges. Using Outcomes Data to Compare Plans, Networks, and Providers. Detroit, MI, 22–24 April 1998.
13. Kilborn P. Voters’ anger at H.M.O. plays as hot political issue. New York Times, 17 May 1998, A1.
14. Allen HM. The employee health care value survey. In Collecting Information from Health Care Consumers: A Resource Manual of Tested Questionnaires and Practical Advice. Gaithersburg, MD: Aspen Publishers, 1996, pp. 8-131–8-163.
15. Dillman D. Mail and Telephone Surveys: The Total Design Method. New York, NY: J. Wiley and Sons, 1978.
16. Ware JE, Kosinski M, Keller S. How to Score the SF-12 Physical and Mental Health Summary Scales. Boston, MA: The Health Institute, 1995.
17. Cleary PD, Lubalin J, Hays RD et al. Debating survey approaches. Allen HM, Rogers WH. Debating survey approaches: the authors respond. Health Affairs 1998; 17: 265–268.
18. Foundation for Accountability. Health Status under Age 65 Measurement Guide: Draft 1.0. Portland, OR: Foundation for Accountability, 1998.
19. Allen HM. A core set of survey items on health plan consumer satisfaction: a recommendation to the National Committee for Quality Assurance. In Annual Member Health Care Survey Manual: Version 1.0. Washington DC: National Committee for Quality Assurance, 1995.

Accepted for publication 23 July 1998