Tuesday, December 10, 2013

Tips: Pre-test - Post-test assessment


Pre-test – post-test is a quantitative technique used to measure the difference between responses to the same set of questions before a program or service and then again afterwards. The difference between the responses, when statistically compared, represents the effect of the program or service.
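As a quick illustration of what "statistically compared" can mean in practice, a paired t-test on each respondent's pre/post difference is a common choice when the same people answer both times. The scores below are entirely hypothetical; this is a minimal sketch using only Python's standard library:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical pre- and post-program scores (1-5 scale) for ten respondents.
pre  = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]
post = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]

# Paired t-test: work with each person's gain, not the raw group means.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
d_bar = mean(diffs)          # average gain
sd = stdev(diffs)            # sample std. dev. of the gains
t = d_bar / (sd / sqrt(n))   # t statistic with n - 1 degrees of freedom

print(f"mean gain = {d_bar:.2f}, t({n - 1}) = {t:.2f}")
# → mean gain = 1.10, t(9) = 6.13
```

The resulting t statistic would then be checked against a t distribution with n − 1 degrees of freedom to decide whether the gain is statistically significant; as the validity issues below make clear, significance alone does not prove the program caused the gain.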
Advantages:
  • Useful for assimilating data on varied experiences into a prescribed set of categories, such as learning outcomes
  • Relatively easy to create and administer
  • Particularly useful for measuring self-assessment and information recall
  • Can provide complementary data to qualitative assessments
Disadvantages:
  • Yes, you need to use statistics to make appropriate inferences
  • Often ineffective at measuring skill-based learning
  • Validity issues
o The act of taking the pre-test tends to influence responses on the post-test
o Socially desirable behaviors tend to be over-reported
o Socially undesirable behaviors tend to be under-reported
o Normal maturation
o Extraneous, “outside” experiences
o Individual motivation
o Questioning in general is vulnerable to language bias, cultural differences, and perceived threat
  • Often requires complementary qualitative data to inform the researcher’s interpretation and to illuminate possible language, bias, and validity problems
Tips:
  • Determine whether the question is likely to be threatening
o Is there any perception of a right vs. wrong answer?
o Is there any reference or relationship to a medical or psychological condition?
o Is there any reference or relationship to deeply personal behaviors (taxes, drinking, sexual intercourse, etc.)?
  • Make the question specific:
o What exactly is that behavior?
o Who exactly are you asking about?
o What time period exactly are you asking about?
o “What exactly do you mean by…?” Respondents will often interpret broad terms or ideas, such as “any,” “sometimes,” or “regularly,” much more narrowly than the researcher
  • Closed-ended questions are better for non-threatening behaviors; open-ended questions are better for questions that may be threatening
  • When using closed-ended questions about behavior, ensure all possible alternative answers are included; otherwise, responses lumped into an “other” category will be underreported.
  • Use aided recall whenever possible, but avoid overly long lists. If the exhaustive list is too long, restrict it to a generous list of the most likely responses
  • Time is relative – singular experiences like childbirth or marriage will likely be remembered with more clarity, while less significant experiences, like leadership behaviors, will not. A month or less is a suitable recall period for less significant experiences; a year or less is suitable for significant experiences
  • Respondents usually estimate regular behavior, and it can sometimes be better to ask about exceptions to the behavior.
  • Use diaries if the assessment requires a significant amount of memory recall.
  • Use language that is common to those who are taking the assessment
  • Short questions are better for non-threatening questions; long questions are better for questions that may be threatening.
Helpful techniques when asking threatening questions:
  • Long open-ended questions usually provide more accurate data
  • Ask respondents about others’ behaviors instead of their own
  • Desensitize the respondent by asking related but unnecessary, more threatening questions first (e.g., about drug use before alcohol use when assessing alcohol use)
  • Alternate, to some degree, threatening and innocuous questions
  • Self-administered, computer-aided assessments tend to reduce the level of perceived threat and increase the accuracy of the data
  • The use of the respondent’s vernacular may increase the accuracy of the data when assessing socially undesirable behavior (e.g., “hooking up” as compared to “unplanned sexual intercourse”)
  • Ask “Have you ever…?” before you ask “Do you currently…?”
  • Respondents tend to report undesirable behaviors more accurately in diaries
  • Avoid asking the same question more than once as a reliability check. Doing so tends to irritate respondents and suggests the question is more important than others
  • At the end, assess to what degree the questions were perceived as threatening
  • Avoid euphemisms or disarming language; it may have the opposite effect of alerting the respondent to a threatening question
Twists on the standard written pre-test post-test technique:
  • Diaries – structured to record specified behaviors and assessed prior to and after the program or service
  • Role play – structured role play scenario, observed and assessed for specified behaviors, then repeated after the program or service
  • Control groups – Control groups can help identify or eliminate validity issues and thus determine whether or not the program or service had any impact
o Using a control group where the program or intervention is not experienced can help validate the findings of the test group.
o Using an additional control group where there is no pre-test, but the group does experience the program or service and completes the post-test helps illuminate the effect of the pre-test on the post-test responses.
o And using a fourth control group where there is only the post-test can further help validate the actual impact of the program or service.
  • Informants – instead of using self-assessment, randomly and anonymously assign individuals to observe and report on specified behaviors of another specific individual both prior to the program or service and then again afterwards. It might limit bias if the observers did not experience the program or service.
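Taken together with the original test group, the three control groups above form the classic Solomon four-group design. A minimal sketch of how the post-test group means might be compared, using entirely hypothetical scores:

```python
from statistics import mean

# Hypothetical post-test scores for the Solomon four-group design:
#   g1: pre-test + program + post-test   (the standard test group)
#   g2: pre-test + post-test, no program
#   g3: program + post-test, no pre-test
#   g4: post-test only
post = {
    "g1": [4.2, 4.5, 4.1, 4.4],
    "g2": [3.1, 3.0, 3.3, 3.2],
    "g3": [4.0, 4.3, 4.2, 4.1],
    "g4": [3.0, 3.1, 2.9, 3.0],
}
m = {g: mean(scores) for g, scores in post.items()}

# Program effect, with and without pre-test exposure. If these two
# numbers agree, the pre-test is not inflating the apparent effect.
effect_with_pre = m["g1"] - m["g2"]
effect_no_pre = m["g3"] - m["g4"]

# Pre-test sensitization: how much does merely taking the pre-test
# shift post-test scores, averaged across program and no-program groups?
pretest_effect = (m["g1"] + m["g2"]) / 2 - (m["g3"] + m["g4"]) / 2

print(f"effect with pre-test: {effect_with_pre:.2f}")
print(f"effect without pre-test: {effect_no_pre:.2f}")
print(f"pre-test sensitization: {pretest_effect:.2f}")
```

In practice these comparisons would be run through an ANOVA rather than eyeballed, but the arithmetic above shows what each control group buys you: g2 isolates maturation and outside experience, g3 and g4 isolate the effect of the pre-test itself.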
This information is taken and adapted from:
Bordens, K. S. & Abbott, B. B. (1991). Research design and methods: A process approach. (2nd Ed.). Mountain View, CA: Mayfield Publishing Company.
Bradburn, N., Sudman, S., & Wansink, B. (2004). Asking questions: The definitive guide to questionnaire design – for market research, political polls, and social and health questionnaires. (Rev. Ed.). San Francisco, CA: Jossey-Bass.
Crowl, T. K. (1996). Fundamentals of educational research. (2nd Ed.). Brown and Benchmark Publishers.
Huba, M. E. & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Needham Heights, MA: Allyn & Bacon.
Patton, M. Q. (1990). Qualitative evaluation and research methods. (2nd Ed.). Newbury Park, CA: Sage.
Upcraft, M. L. & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco, CA: Jossey-Bass.
