Summary of EPA survey’s performance-evaluating questions

Editor’s note: This piece is a memo I prepared for the county commissioners in my then-capacity as technical consultant on EPA matters. It was dated October 25, 2001.



The EPA conducted a public opinion survey in the Coeur d’Alene River Basin this past summer. It was a postal (mail-out) survey involving two subsamples: (a) all members of the EPA’s mailing lists in the region (returned sample, N=334); and (b) a so-called “random” sample of the general population, drawn from a database of community addresses (returned sample, N=142). Additionally, a small number of citizens who were not sent questionnaires by the EPA nevertheless obtained and returned completed forms (returned sample, N=17).

By most standards, the scientific quality of the survey must be regarded as quite poor: the brief questionnaire harbored a number of design weaknesses, the response rate was too low to allow for confident population projections, and the sampling universe was ill-defined. Hence, any analysis of this survey must be regarded with caution.

The questionnaire asked respondents to identify where they lived, offering four response categories: Mullan to Kingston; Cataldo to Harrison; Coeur d’Alene and Post Falls; and, finally, Spokane Valley, City of Spokane, and Western Spokane County. Here I focus only on the results for the “Mullan to Kingston” respondents, that is, respondents from the Silver Valley (N=88).

The questionnaire included a series of a dozen questions, each asking the respondent to evaluate a specific aspect or dimension of local EPA performance. I’ve focused on these questions in particular because they provide a telling picture of public perceptions.

A six-point response scale was provided for each evaluative question, running from response #1 (named “Very Bad” performance) to response #6 (“Very Good” performance). Intermediate response numbers (2-5) were not given specific word equivalencies; however, it will be convenient to supply such names for ease of reference. I’ve used the following labels to refer to the six numbered responses:
  1. Very Bad (VB)
  2. Bad (B)
  3. A Bit Bad (ABB)
  4. A Bit Good (ABG)
  5. Good (G)
  6. Very Good (VG)

Table 1 (below) summarizes the responses of Silver Valley respondents to these twelve questions (see Appendix for the full wording of the questions). The table is organized, top row to bottom row, from most negative to most positive popular response (based on the percent answering “Very Bad”). Response percentages were calculated on the base of the total number of respondents who actually answered a question; hence the “Ns” (the bases on which percentages were calculated) vary slightly from question to question.


The table’s left-most column lists the 12 performance-evaluating questions. The seven columns to the right, labeled (C1) to (C7), offer:

  • (C1): percent of respondents answering “Very Bad” to this question
  • (C2): percent answering “Very Good” to this question
  • (C3): ratio of “Very Bad” responses to “Very Good” responses (note: ratios calculated on unrounded data)
  • (C4): percent answering “Very Bad” or “Bad”
  • (C5): percent answering  “Very Good”  or “Good”
  • (C6): ratio of total “Very Bad”-plus-“Bad” responses to total “Very Good”-plus-“Good” responses (note: ratios calculated on unrounded data)
  • (C7): median response (that is, the response category indicated by the respondent in the exact middle of the response distribution) and Ns (number of respondents answering this question)
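For concreteness, the seven summary columns can be reconstructed from a question’s raw response counts. The following minimal Python sketch shows the arithmetic; the counts used here are hypothetical, for illustration only, and are not the survey’s actual data.

```python
# Compute Table 1-style summary columns (C1-C7) from raw response counts.
# The counts below are HYPOTHETICAL, for illustration only.
counts = {"VB": 30, "B": 12, "ABB": 10, "ABG": 8, "G": 5, "VG": 5}
order = ["VB", "B", "ABB", "ABG", "G", "VG"]  # scale from 1 to 6

n = sum(counts.values())                       # part of C7: N answering this question
pct = {k: 100 * counts[k] / n for k in order}  # percentages on the answered base

c1 = pct["VB"]                                 # C1: % "Very Bad"
c2 = pct["VG"]                                 # C2: % "Very Good"
c3 = counts["VB"] / counts["VG"]               # C3: VB:VG ratio (unrounded counts)
c4 = pct["VB"] + pct["B"]                      # C4: % VB or B
c5 = pct["VG"] + pct["G"]                      # C5: % VG or G
c6 = (counts["VB"] + counts["B"]) / (counts["VG"] + counts["G"])  # C6

# C7: median response -- the category of the respondent in the exact
# middle of the response distribution.
middle = (n + 1) / 2
cum = 0
for cat in order:
    cum += counts[cat]
    if cum >= middle:
        median_cat = cat
        break
```

With these hypothetical counts (N=70), the sketch yields a VB:VG ratio of 6.0:1 and a median in the “Bad” category, mirroring the kind of profile reported for the more negative questions below.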

Table  1’s main results may be summarized as  follows:

  • Of the twelve questions asked, only one, relating to COURTEOUSNESS (2h), garnered, on balance, a positive response from Silver Valley respondents. Regarding COURTEOUSNESS, the median response fell in the ABG (“A Bit Good”) category, and the ratio of VB (“Very Bad”) to VG (“Very Good”) responses tilted slightly in the “Very Good,” or positive, direction.
  • All of the remaining eleven evaluative dimensions garnered negative evaluations of EPA performance.
  • TRUST (2d) was the issue that garnered the greatest frequency of negative response from Silver Valley respondents. Regarding TRUST, the median response fell in the “Very Bad” category; fully 52% of respondents (Rs) checked the VB response, and a total of 64% checked either the VB or the B (“Bad”) response. The ratio of VB to VG responses was 8.8:1 favoring VB; the ratio of combined VB-and-B responses to combined VG-and-G responses was 5.4:1 in the VB-and-B direction. Clearly, negative responses strongly predominated over favorable responses regarding TRUST.
  • Negative responses outweighed favorable ones regarding the next five evaluative dimensions listed in Table 1. The median response fell in the “Bad” (B) category for: RESPONSIVENESS (2g) (with 46% of the sample responding “Very Bad” and 63% “Very Bad” or “Bad”), FAIRNESS (2i) (41% VB, 60% VB or B), USING YOUR INPUT (2j) (39% VB, 62% VB or B), UNDERSTANDING (2f) (39% VB, 57% VB or B), and, finally, ACCURACY  (2b) (34% VB, 53% VB  or B).
  • Still negative, though somewhat less so, were the sample’s responses to the next five evaluative dimensions listed in Table 1. The median response fell in the “A Bit Bad” (ABB) category for: UNDERSTANDABLE (2c) (32% VB, 49% VB or B), EXPLAINING DECISIONS (2k) (22% VB, 40% VB or B), WELCOMING (2e) (21% VB, 40% VB or B), CLEANING UP THE SITE (2l) (21% VB, 41% VB or B), and, finally, RELEVANT INFO (2a) (21% VB, 43% VB or B).
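The reported ratios and percentages for TRUST (2d) can be cross-checked with simple arithmetic: dividing the negative-response percentages by the reported ratios implies the corresponding positive-response percentages. This is a back-of-envelope check only; since the memo notes the ratios were calculated on unrounded data, the implied figures are approximate.

```python
# Back-solve the implied positive-response percentages for TRUST (2d)
# from the figures reported in the text. Approximate, since the
# published ratios were computed on unrounded data.
vb = 52.0          # % "Very Bad" (reported)
vb_b = 64.0        # % "Very Bad" or "Bad" (reported)

vg = vb / 8.8      # implied % "Very Good", from the 8.8:1 VB:VG ratio
vg_g = vb_b / 5.4  # implied % "Very Good" or "Good", from the 5.4:1 ratio

print(round(vg, 1), round(vg_g, 1))  # prints: 5.9 11.9
```

In other words, only roughly 6% of Silver Valley respondents rated the EPA “Very Good” on TRUST, and only about 12% rated it “Very Good” or “Good.”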

Several additional questions were asked in the EPA questionnaire (a subsequent TAM may address these). The dozen interrogatives examined in this memo, however, represent the key performance-evaluating dimensions offered in the questionnaire, and thus merited this brief examination.

The survey’s various methodological weaknesses reduce the value of the findings described. Nevertheless, the contour of public sentiment these findings suggest is notable for its apparent negativity.


Appendix

  2. How do you rate EPA at each of the following?

a. Providing the information you need
b. Giving you accurate information
c. Making the information easy to understand
d. Earning your trust
e. Making it easy to get involved
f. Understanding your concerns
g. Responding to your concerns
h. Treating you courteously
i. Having a fair decision-making process
j. Using your input
k. Explaining decisions
l. Cleaning up the site

