TEMPLATE FOR USING SURVEYS AND (SOMETIMES) SAMPLING
TO EVALUATE PRACTICE EFFECTIVENESS

A. The time period I want to survey:

B. My definition of a client:

C. How I plan to conduct the survey (face-to-face, phone, or written questionnaire, whether mailed or handed out):

D. How I plan to protect respondents' anonymity or confidentiality to maximize the likelihood of honest
responses and a good return rate:

E. Plans for the first follow-up (to get an adequate return rate):

F. The survey instrument I selected or created (please attach it):

G. This instrument was developed by _____________(author's name).

H. I acquired it from (Please circle your answer!)

-the public domain (Yes or No)
-by special permission (Yes or No)
-by purchasing it (Yes or No)

 

I. Why I believe the survey instrument is reliable: Hint! Argue that the questions are worded simply
and that they ask clients about information they actually know (and do not have to guess at).

J. Why I believe the survey instrument is valid: Hint! Argue that professionals reviewed the items
and agreed that they measure client satisfaction.

K. From among the following demographic items, I added item(s) # ______ to the survey.

1. Your Gender:

1. male
2. female

2. Your Race:

1. white
2. African-American
3. Hispanic
4. Asian
5. Native American
6. Other: _____________

3. What is your religious affiliation?

1. Protestant
2. Catholic
3. Jewish
4. Muslim
5. Buddhist
6. Other
7. None

4. What is your marital status?

1. married
2. divorced
3. widowed
4. single

5. What is the highest educational level you’ve reached?

1. Grade school
2. Middle school
3. High school
4. Junior college
5. College degree
6. Higher education beyond college

6. Do you presently work?

1. YES
2. NO

IF YES,

7. About how many hours do you work per week? _______

IF NO, PLEASE SKIP TO THE NEXT QUESTION!

8. What other sources of income do you have?

1. disability
2. social security
3. retirement pension
4. other

9. What is your combined family income approximately?

1. less than $20,000
2. $20,000 to $29,999
3. $30,000 to $39,999
4. $40,000 to $49,999
5. $50,000 or more

10. What was your age at your last birthday? _____

L. My supervisor approved the survey on ___________ (date) and here is his/her signature:

M. My field liaison approved the survey on __________ (date) and here is his/her signature:

N. Decisions I made about which clients I would survey.

Sampling was not used. All clients were surveyed. ____ (check if true)

OR

Because of a large caseload (>100), I sampled clients in the following manner:


(Did you use a probability sampling design or a non-probability design? Please explain!)

The number of cases in the sampling frame (the size of my case load for the specified time period) was:

The number of cases that I selected to survey was:

O. Findings and Conclusions

Frequency distributions and percentage tables to show the findings:

Optional statistics that I measured, e.g., means, medians, standard deviations, measures of association.

Conclusions and recommendations, based on the data collected:

____________________________________________________________________

REFRESHER NOTES ON USING SURVEYS AND (SOMETIMES)
SAMPLING TO EVALUATE PRACTICE EFFECTIVENESS

Surveys

Written surveys, face-to-face interviews, and phone interviews are all examples of survey research in which clients have the chance to report on their satisfaction with the services provided. Whenever possible, it is best to choose a survey instrument that has been tested in previous research and shown to be reliable and valid. Furthermore, existing surveys sometimes have normative data showing the usual range of client responses, so you can compare your findings to a norm. A number of client satisfaction survey instruments are available. However, some of these instruments lack important demographic questions, which you should add so that you can describe your survey respondents in general terms. In the template section, you can select items you would like to include from among the sample demographic questions, e.g., the client's age, religion, race, gender, marital status, education level, and income. Depending on your field setting, some questions will be more important than others.

If your agency has been using a survey instrument, please review it with your field faculty educator (supervisor) and your field liaison. Make sure that the statements or questions are worded simply, are unbiased, and will generate the information you want to know. Does the survey include important demographic questions so that you can find out whether the respondents are representative of the clients you serve? Make sure any modifications you want to make have been accepted by your field faculty educator and field liaison.

If you are creating your own survey instrument (or making many changes to an existing one), ask your field educator and field liaison to critically review it. Are the words simple enough to understand? For what age level are they written? Are there any words that are vague or can mean different things to people depending on differences in social/cultural upbringing? Have you accidentally asked two questions in one? (e.g., Do you agree that the information given to you was clear and important to your needs?) Are any of your questions phrased with a bias? Do any of your questions provide vague findings? (e.g., Can this agency do more to assist individuals like yourself?)

When creating your own survey, it's advisable to pretest it. Choose a few people similar in age, educational and social background to the clients you want to survey, and ask them to read through and explain out loud their understanding of what each question is asking and any confusion they might have about the items. Improve upon the items, if warranted.

Be careful how you word questions, statements, and directions. Select items that will give you the information you want. Do not duplicate other items. And ask only questions that respondents know enough about to answer without guessing.

Ideally, when using a newly created survey instrument, you should pretest it for reliability and validity. One reliability test is to give a number of people the questions on two occasions about 10 days apart. Their answers should be very similar; when you compute a reliability coefficient for this test-retest procedure, it should reach +.80 or higher. One type of validity test asks knowledgeable professionals to indicate what each question seems to be measuring. If a large majority of these professionals agree that the questions are evaluating client satisfaction, then the items are said to have face validity. There are other, more complex empirical tests for validity. For these very reasons, it is much better to use an existing survey instrument that has already been pretested for reliability and validity.

Deciding whom you will survey is a matter of sampling design.

Not only do you need to carefully choose the survey instrument, but you need to select the clients who will complete the survey. You shouldn't just choose the clients that you like.

First, you will need to define what you consider to be a client. For example, if someone goes through intake but never returns for service, is this someone who will be counted as a client? Your survey results will probably be more positive if you decide that a client must have attended 4 or more counseling sessions to be considered in the sampling frame (list of all the elements that could be selected in the sample), than if you count anyone who went through the intake interview as a client. This is because the clients who appreciate the program are usually those who stay longer whereas dissatisfied and disinclined clients drop out. Develop reasonable and specific criteria that you can follow which operationally define your clients. For example, you can decide that any person who completed an intake interview, was assigned to you, and was seen by you at least once (in or out of the office) for professional purposes would count as a client.
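An operational definition like the one above (completed intake, assigned to you, and seen at least once) can be written as an explicit filter over your case records. This Python sketch uses hypothetical records; the field names are illustrative only:

```python
# Hypothetical case records; the field names are illustrative only.
cases = [
    {"name": "A", "completed_intake": True,  "sessions_attended": 3},
    {"name": "B", "completed_intake": True,  "sessions_attended": 0},
    {"name": "C", "completed_intake": False, "sessions_attended": 0},
    {"name": "D", "completed_intake": True,  "sessions_attended": 1},
]

def is_client(case):
    """Operational definition: completed intake AND seen at least once."""
    return case["completed_intake"] and case["sessions_attended"] >= 1

sampling_frame = [c for c in cases if is_client(c)]
print([c["name"] for c in sampling_frame])  # cases A and D meet the criteria
```

Writing the criteria down this explicitly (even if you never run the code) forces you to decide every borderline case the same way.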

Once you have defined what constitutes a client, you can form a list of all the clients that you have seen in the time period you've chosen to study. For example, a reasonable time frame could be the 4 month period from Dec. 1st through March 31st. This would allow you time to collect and analyze the surveys during April so that you can present the material by the end of the semester.

If you have fewer than 100 clients, it is easier and more accurate to survey all of them. (Of course, even if you survey all your clients, not everyone will complete the survey.) If this is the case, you can skip the next paragraph about sampling designs.

If your list of clients is very large (>100), then it is only practical to sample from the entire list.

Consider using a probability sampling design, e.g., simple random sampling, systematic sampling, or even stratified sampling. These designs are more representative, so you can estimate, within a confidence interval, how your entire caseload would respond. With simple random sampling (SRS) and systematic sampling, each of your cases should be given a number. Then decide on the size of your sample; let's use 40 as a reasonable sample size for a caseload of 100. That means you will select 40 out of 100, so each client has a 4 in 10 chance of being chosen. For SRS, give each client a number, throw all 100 numbers into a hat, and draw 40 without looking. For systematic sampling, start at a random spot on your list (this is your first case to survey) and then take every second or third case down the list (the sampling interval is the caseload divided by the sample size, here 100/40 = 2.5) until you have selected 40 cases; when you reach the bottom of the list, treat it as a circle and continue from the top. There are other probability and non-probability sampling designs (review books on sampling methodology, class notes, and consult with knowledgeable professionals) that can help you decide which clients to survey.
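The two hand procedures described above can also be sketched in Python's standard library; the caseload of 100 numbered cases and the sample size of 40 mirror the example in the text:

```python
import random

caseload = list(range(1, 101))   # 100 numbered cases, as in the example
n = 40                           # desired sample size

# Simple random sampling (SRS): the "numbers in a hat" procedure;
# every case has an equal chance of selection.
srs_sample = random.sample(caseload, n)

# Circular systematic sampling: random start, then step through the
# list at a fixed interval (100 / 40 = 2.5 positions), wrapping back
# to the top of the list when you pass the bottom.
start = random.randrange(len(caseload))
interval = len(caseload) / n
systematic_sample = [caseload[int(start + i * interval) % len(caseload)]
                     for i in range(n)]

print(sorted(srs_sample))
print(sorted(systematic_sample))
```

Either procedure yields 40 distinct cases; SRS is simpler to defend, while systematic sampling is often easier to carry out on a printed case list.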

After deciding on the survey and on the process of either sampling or surveying every case, your next step is to formulate a confidential, or even better, anonymous procedure in which clients feel comfortable providing honest responses without worrying that you will be able to identify them. It is better if someone else distributes and collects the survey. If you use a mail survey, make sure to provide a stamped, return-addressed envelope and have the results sent to a third-party evaluator. Do not have respondents place their names on the completed survey. To monitor who did or did not return the survey, you can include a stamped, return-addressed postcard that the respondent is instructed to mail when s/he sends back the survey. Two weeks later, mail another survey and a return-addressed, stamped envelope to anyone who has not responded. Calling people to remind them of the importance of responding to the survey will also increase your return rate. You need a return rate of at least 50% to consider the findings valid.
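The return-rate arithmetic is simple but worth making explicit. A minimal Python sketch, assuming a hypothetical mailing of 40 surveys with 24 returned:

```python
surveys_sent = 40       # hypothetical number mailed out
surveys_returned = 24   # hypothetical count after the follow-up mailing

return_rate = surveys_returned / surveys_sent
print(f"return rate: {return_rate:.0%}")

# The guideline from the text: at least 50% of surveys returned
# before the findings can be considered valid.
if return_rate < 0.50:
    print("below the 50% guideline; consider another follow-up")
else:
    print("meets the 50% guideline")
```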

As soon as a survey is returned, you can begin to tally (count) the data by hand, place it on a computer scan form, or enter it into a computer file. The minimum data analysis should include, for each question or item, frequency distributions (how many people selected answer 1, answer 2, and so on) and percentages (the percentage of respondents who chose each answer). You may also want to calculate the mean or median and the standard deviation for some important variables. The data analysis should be described in words and through tables or graphs. More complicated statistics can provide very interesting information, such as an analysis of the relationship between one variable and another or between sub-categories of a variable. For example, are single-parent families more or less satisfied with your agency's parent skills training than dual-parent families? For help with data analysis, consult a research statistics text, e.g., R. W. Weinbach & R. M. Grinnell's Statistics for Social Workers or D. M. Pilcher's Data Analysis for the Helping Professions, and one of the research instructors in the department.
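The minimum analysis described above (frequencies, percentages, and optional means and medians) can be sketched with Python's standard library; the answer codes below are hypothetical responses to a single item:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical responses to one survey item (answer codes 1-4).
responses = [1, 2, 2, 3, 1, 2, 4, 2, 3, 2]

freq = Counter(responses)
total = len(responses)

# Frequency distribution and percentage table for this item.
print("answer  frequency  percent")
for answer in sorted(freq):
    pct = 100 * freq[answer] / total
    print(f"{answer:>6}  {freq[answer]:>9}  {pct:>6.1f}%")

# Optional summary statistics for the same item.
print("mean:", mean(responses), "median:", median(responses))
```

You would repeat the same tally for each item, then present the tables alongside a short written summary.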