Thursday, December 26, 2013

To assess academic dishonesty or not?

In the Summer 2013 issue of New Directions for Student Services (Selected Contemporary Assessment Issues), the well-known student affairs leader and scholar Dr. Greg Blimling reflects on assessment in student affairs and notes an early interest of his in assessing academic dishonesty.  He concludes that quantifying the amount of academic dishonesty on campus is not worth the political and PR-related consequences.
He writes, "The politics of assessment have taught me that assessment works best when student affairs is part of an institutional effort and not apart from it, and that it is not worth the political price of having some information" (p. 13).
I was initiated into the world of student affairs under Dr. Blimling at Appalachian State University, so I naturally look up to him.  Most of what Blimling says in his chapter resonates with me.

  • Most of us welcome assessment when it yields data we can actually use
  • We're overburdened with administrative and bureaucratic tasks and are reluctant to take on assessment work that doesn't truly benefit us
  • Assessment is not about collecting data, but rather about collecting actionable data that can be used to effect measurable improvements
  • Assessment works better when we're working with colleagues in academic programs and institutional assessment
Not assessing academic dishonesty really bothers me, though.  It undermines the core of our mission: student learning.  The willful blindness here is akin to a medical professional not diagnosing symptoms out of fear of bad news or the inability to treat.  I can't imagine any legitimate medical professional doing this.  Why do we?
The perception of threat and fear impacts our ability to reason, and clearly the threat of losing one's job, and its impact on one's career and/or family, is a serious threat.  We often overestimate that kind of threat, though, because its emotional significance makes it more readily available in our thoughts.
I believe it is time we acknowledge the fact that our thinking is not as objective or independent as we would like to believe, especially when fear or threat is involved. 

I do not mean this to judge Dr. Blimling or any others who have made this decision.  But I do think we as a field need to speak out against it... because it harms students.  One of our most cherished ethical tenets, "do no harm," was first proposed by Kitchener in 1985 and is now embedded in ACPA's Ethical Principles & Standards.  Allowing students to cheat through their courses directly and indirectly diminishes their learning, which we are responsible to facilitate.  We should know the degree to which our students are able to bypass the established standards of learning for our institution.
Students' diminished capacity likely impacts their achievement and opportunity later in life.  Employers say graduates lack job skills (including interpersonal skills, problem-solving skills, etc.) at a time when 93% of employers say many of those and similar skills are more important than a graduate's major, and over a third of college students fail to improve higher-level thinking skills after four years.  The fact that nearly half of graduates are underemployed, working in jobs that do not require a college degree, may be the end result of this.
The students are, of course, responsible as well, but our willful blindness is an enabling behavior, and admittedly, I take the high road in saying this is something we are responsible for. The cognitive development required for ethical reasoning is not fully developed in many of our students. And I suspect the aversion to assessing academic dishonesty contributes to this era of criticism and distrust higher education faces today.  But I wonder, could assessing it be one of the things that helps us out of it?


Tuesday, December 24, 2013

Note the following:

  • Ed O'Bannon and the antitrust lawsuit against the NCAA, which could drastically change the college experience of student athletes
  • Dominique Raymond, VP of Complete College America - leading influencer of "performance based funding" measures in several states, including Oregon.
  • Mark Schneider, VP American Institutes for Research - known for perseverance around the "unit record system" - still influences efforts around national metrics and is focused on employment and earnings based metrics to evaluate institutions
  • Caroline Hoxby and Sarah Turner - researchers who've studied ways to overcome the "undermatching" phenomenon

Sunday, December 22, 2013

Four reasons (out of many, probably) why this is important for education in general:
  • It may open doors for similar devices to help hearing- and speech-impaired individuals, and persons with conditions that impact communication, interact in a broader set of circumstances
  • It may open doors for individuals requiring motorized assistance vehicles or digital devices to control them sans any physically manipulated interface.
  • It may replace the mouse, keyboard AND gestures!  Imagine scrolling through your iPad or turning on the Xbox Kinect simply by thinking…
  • Edit (12/23/13) - It may also allow for the thought-based control of prosthetics

Saturday, December 21, 2013

"If colleges don't change the way they do business, students will…"  Looking back at an article from 2010 following TIAA-CREF's Higher Education Leadership Conference - how well have their predictions turned out?


  • Students will change the way colleges do business - partially true.  Fairly large numbers of students have started seeking credit for prior learning, such as military experience, MOOCs, etc.  But several institutions began offering it on their own, too.
  • The landscape/paradigm will change and not in favor of traditional colleges and universities - somewhat true.  The landscape is clearly changing and becoming more difficult for institutions to function as they traditionally did.  It is not cataclysmic, yet, though.
  • Disconnects between policy makers and institutions would lead to unclear paths and benchmarks - mostly true, I think.  Performance based funding, more budget cuts, etc. are leading to unclear and sometimes dysfunctional relationships and funding systems.
  • Higher education funding is the next bubble - Unclear for now, but recognition that it may be the next bubble has been growing among those both in and outside higher education.
  • Outdated systems of teaching and preparing students will lead to a decline in US's competitive position - Unclear, but it is clear that the US has been losing ground in several benchmarks, including global economic markers, degree attainment markers, etc.

Tuesday, December 10, 2013

Tips: Pre-test - Post-test assessment

Pre-test – post-test is a quantitative technique used to measure the difference between responses to the same set of questions before a program or service and again afterwards. The difference between the responses, when statistically compared, provides an estimate of the effect of the program or service.
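As a minimal sketch of the "statistically compared" step, a paired t-test on the pre/post differences is the standard starting point. The scores below are hypothetical 1–5 self-ratings from the same eight students, invented purely for illustration:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical 1-5 self-ratings from the same eight students
pre  = [2, 3, 2, 4, 3, 2, 3, 4]
post = [4, 4, 3, 5, 4, 4, 4, 5]

diffs = [b - a for a, b in zip(pre, post)]   # post minus pre, paired by student
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))   # paired t statistic
df = n - 1

print(f"t({df}) = {t:.2f}")                  # prints: t(7) = 7.64
```

With df = 7, the two-tailed critical value at α = .05 is about 2.365, so a t this large would be judged statistically significant; a real analysis should also report an effect size, not just significance.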
  • Useful to assimilate data on variable experiences into a prescribed set of categories, such as learning outcomes.
  • Relatively easy to create and administer
  • Particularly useful for measuring self-assessment and information recall
  • Can provide complementary data to qualitative assessments
  • Yes, you need to use statistics in order to make appropriate inferences
  • Often ineffective at measuring skill-based learning
  • Validity issues
o The act of taking the pre-test tends to influence responses on the post-test
o Socially desirable behaviors tend to be over-reported
o Socially undesirable behaviors tend to be under-reported
o Normal maturation
o Extraneous, “outside” experiences
o Individual motivation
o Questioning in general is vulnerable to language bias, cultural differences, and perceived threat
  • Often requires complementary qualitative data to inform the researcher’s interpretation of the data and illuminate possible language, bias, and validity problems
  • Determine whether the question is likely to be threatening
o Is there any perception of a right vs. wrong answer?
o Is there any reference or relationship to a medical or psychological condition?
o Is there any reference or relationship to deeply personal behaviors (taxes, drinking, sexual intercourse, etc.)?
  • Make the question specific:
o What exactly is that behavior?
o Who exactly are you asking about?
o When exactly are you talking about?
o “What exactly do you mean by…?” Respondents will often interpret broad terms or ideas, such as “any,” “sometimes,” or “regularly,” much more narrowly than the researcher
  • Closed-ended questions are better for non-threatening behaviors; open-ended questions are better for questions that may be threatening
  • When using closed-ended questions regarding behavior, ensure all possible alternative answers are included; otherwise, responses lumped into an “other” category will be underreported.
  • Use aided recall whenever possible, but avoid terribly long lists. If the exhaustive list is too long, restrict to a liberal list of the most likely responses
  • Time is relative – Singular experiences like childbirth, marriage, etc. will likely be remembered with more clarity, while less significant experiences, like leadership behaviors, will not. A month or less is a suitable period of time for less significant experiences. A year or less is suitable for significant experiences
  • Respondents usually estimate regular behavior, and it can sometimes be better to ask about exceptions to the behavior.
  • Use diaries if the assessment requires a significant amount of memory recall.
  • Use language that is common to those who are taking the assessment
  • Short questions are better for non-threatening questions; long questions are better for questions that may be threatening.
Helpful techniques when asking threatening questions
  • Long open-ended questions usually provide more accurate data
  • Ask respondents about others’ behaviors, instead of their own
  • Desensitize the respondent by asking related but unnecessary and more threatening questions first (e.g., about drug use before alcohol use when assessing alcohol use)
  • Alternate, to some degree, threatening and innocuous questions
  • Self-administered, computer-aided assessments tend to reduce the level of perceived threat and increase the accuracy of the data
  • The use of the respondent’s vernacular may increase the accuracy of the data when assessing socially undesirable behavior (e.g., “hooking up” as compared to “unplanned sexual intercourse”)
  • Ask “have you ever?” before you ask “do you currently?”
  • Respondents tend to report undesirable behaviors more accurately in diaries
  • Avoid asking the same question more than once to help confirm reliability. Doing so tends to irritate respondents and suggests the question is more important than others
  • At the end, assess to what degree the questions were perceived as threatening
  • Avoid euphemisms or disarming language, as it may have the opposite effect and actually alert the reader to a threatening question
Twists on the standard written pre-test post-test technique:
  • Diaries – structured to record specified behaviors and assessed prior to and after the program or service
  • Role play – structured role play scenario, observed and assessed for specified behaviors, then repeated after the program or service
  • Control groups – Control groups can help identify or eliminate validity issues and thus determine whether or not the program or service had any impact
o Using a control group where the program or intervention is not experienced can help validate the findings of the test group.
o Using an additional control group where there is no pre-test, but the group does experience the program or service and completes the post-test helps illuminate the effect of the pre-test on the post-test responses.
o And using a fourth control group where there is only the post-test, can further help to validate the actual impact of the program or service.
  • Informants – instead of using self assessment, randomly and anonymously assign individuals to observe and report on specified behaviors of another specific individual both prior to the program or service and then again afterwards. It might limit bias if the observers did not experience the program or service.
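The three control-group variations listed above correspond to the classic Solomon four-group design. A sketch of the comparisons, using hypothetical post-test means (the group labels and numbers are illustrative, not from any real study):

```python
# Solomon four-group design: which groups take the pre-test, which
# experience the program, and their (hypothetical) post-test means
groups = {
    "G1": {"pretest": True,  "program": True,  "post_mean": 4.2},
    "G2": {"pretest": True,  "program": False, "post_mean": 3.1},
    "G3": {"pretest": False, "program": True,  "post_mean": 4.0},
    "G4": {"pretest": False, "program": False, "post_mean": 3.0},
}

m = {g: v["post_mean"] for g, v in groups.items()}

# Program effect, averaged over pre-tested and non-pre-tested groups
program_effect = ((m["G1"] - m["G2"]) + (m["G3"] - m["G4"])) / 2

# Pre-test sensitization: how much taking the pre-test alone shifts post-test scores
pretest_effect = ((m["G1"] - m["G3"]) + (m["G2"] - m["G4"])) / 2

print(f"program effect = {program_effect:.2f}, pretest effect = {pretest_effect:.2f}")
# prints: program effect = 1.05, pretest effect = 0.15
```

A pretest effect that is large relative to the program effect would be the red flag that the pre-test itself is influencing post-test responses, which is exactly what the second and third control groups are designed to detect.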
This information is taken and adapted from:
Bordens, K. S., & Abbott, B. B. (1991). Research design and methods: A process approach (2nd ed.). Mountain View, CA: Mayfield Publishing Company.
Bradburn, N., Sudman, S., & Wansink, B. (2004). Asking questions: The definitive guide to questionnaire design – for market research, political polls, and social and health questionnaires (Rev. ed.). San Francisco, CA: Jossey-Bass.
Crowl, T. K. (1996). Fundamentals of educational research (2nd ed.). Brown and Benchmark Publishers.
Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Needham Heights, MA: Allyn & Bacon.
Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.
Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco, CA: Jossey-Bass.

Thursday, May 3, 2012

Tracking Attendance at Co-Curricular Activities

Jeff Lail (@jefflail), a Coordinator of Programs in the Office of Campus Activities and Programs at the University of North Carolina at Greensboro, posted a video describing an initiative he is involved in at UNCG regarding tracking attendance at University co-curricular events as a step towards developing means to measure longitudinal growth of students.
Jeff asks for feedback on the program, and in spirit of doing so, I’d like to offer a few thoughts.  
It’s possible to view Jeff’s video and make some assumptions that are potentially significant; I’d like to mention two of them.
Jeff comments that the learning student affairs aims to facilitate takes place longitudinally, and thus is not conducive to testing as in the classroom.  Jeff’s point is valid and accurate, yet it is a little incomplete; Student Affairs does attempt to facilitate more pervasive improvements in developmental domains, such as identity, values, citizenship, interpersonal relationships, self-management, etc. 
Here is where I see the first assumption.  It unintentionally implies that academic learning is not longitudinal.  It very much is longitudinal, though.  Generally speaking, the fact that the academic domains of knowledge are fairly well defined and there is a structured, progressive curriculum affords the convenience of testing.  But there are clear, longitudinal goals in terms of what skills and knowledge a student should achieve by the time they graduate. 
These involve skills related to cognitive complexity and critical thinking, scientific inquiry, information literacy, communication, etc. as they are applied within that particular field.  All of these are just as complex and intertwined as those that we look at in Student Affairs.  Indeed, they are intertwined with each other as well!  One cannot understand their identity without engaging a sophisticated degree of cognitive complexity, and attempts to reflect on one’s identity, purpose, values, emotions, etc. are applications of complex cognitive thinking.
Thus, juxtaposing academic and student affairs learning as occurring on different timelines is more misrepresentative than I suspect Jeff intended it to be.  They are more significantly different in that academic learning involves more domain-specific knowledge and skills, while student affairs learning involves more domain-general knowledge and skills.
The second, a flip side to the first, is that the domains of learning student affairs attends to cannot be deconstructed within an organized and progressive curriculum.  While I certainly agree that this practice is not mainstream in our field, that doesn’t make it inaccurate.  Indeed, there are movements within our field to develop more structured and rigorous curricula and assessment methods.  One example is the Residential Curriculum Institute and the various implementations of Learning Reconsidered and Learning Reconsidered 2.  There is even the fresh face of the New Leadership Alliance for Student Learning and Accountability aligning institutions together that are committed to improving assessment at all levels of the institution – including Student Affairs – in order to develop ways of improving student learning.
We can and need to reconsider our work with this lens.
There are some significant privacy concerns regarding the process in which the program is implemented.
Jeff explains the program gives any student who is running a program a card-reading device that obtains student ID numbers using an iPad app.  This is absolutely important data to collect, and I would offer that I think every institution should strive to track attendance at every possible event.  I’ll explain why later.  But the fact that the program provides this to any student running an event means that students are essentially gaining access to other students’ ID numbers, which are protected under FERPA.
There is a knee-jerk argument that by swiping their card, the student is essentially giving permission for the student coordinating the swiping to have access to the ID number.  That argument falls apart quickly though.
  • The student did not give consent in writing
  • The student was not made aware of exactly how that data would be used
  • The student collecting the data was not a University Official
  • The student collecting the data had no educational purpose that would warrant the school granting them access to it
Jeff tweeted to me that UNCG’s IT department approved it, and I am a bit surprised by that.  I suspect there is more context to this that I am not privy to, because UNCG’s University Counsel states the university’s policy pretty clearly.
A final thought on this particular concern, aside from whether or not permission is received.  As professionals I think the question, Is giving students such open access to student ID numbers appropriate (ethically or professionally)?, is a very important one to ask.  I am not saying that my response to that question is “right,” but I am saying it’s the right question to ask.
Systematic and extensive tracking of attendance and involvement data is crucial to our field’s ability to show demonstrable impact.
The meta-analysis research by Blimling (1989, 1999) and Pascarella & Terenzini (2005) that is well known in our field was unable to show any consistent impact of student affairs efforts on student learning and success.
In other words, we have essentially been irrelevant.  It pains me to say that, but that is the conclusion of the research.  I know we can point to individual students and show tremendous and inspiring exceptions to that.  I can point to a number of individual students and show that… but here is the problem…
Out of the thousands of students I have interacted with in my 15-plus years, I can point to probably fewer than 100 for whom I can effectively and unequivocally demonstrate how I/we impacted them.  When you do the math, it’s no wonder that research has not been able to show our efforts consistently have an impact.
But some efforts have been shown to have an impact.  The National Wabash Studies are a good example, and as Salisbury and Goodman (2009) point out, the Wabash Studies illuminate that it is not the activity that students are engaged in, but rather the quality of the design of that activity and the quality of the interactions within it, that makes experiences learningful.
Thus I see 3 critical needs in order for us to start to show a consistent and positive impact:
  1. We need to focus as much on understanding Learning Theory as (and perhaps more than) on Developmental Theory.  Our efforts to design developmental experiences are often ineffectual because we lack critical knowledge from Cognition and Instructional Design in how to consistently effect meaningful learning.
    • As I embarked on my journey into Learning theory and Educational Psychology, this was one of the most difficult admissions I had to make to myself.  We are not the experts on Student Learning as we are taught and/or like to believe.
    • We need to utilize this knowledge to reconsider our work in the context of a flexible, relevant, and longitudinal curriculum conducive to our context and the affordances and constraints it offers
  2. We need to dramatically increase our competence in assessment so that we know what data to collect, how and when to collect it, what inferences we can make from it, and how to act on it.
    • The UNCG tracking program collected the data but I didn’t get a sense of how it was being used to illuminate the impact of student affairs efforts on student learning.  I got the impression it was hard for them to glean any evidence of learning from the data.
    • That is not surprising; attendance numbers, quantity of programs, etc. are descriptive statistics, and we cannot responsibly or effectively make inferences from them about learning or development.  Any conclusions like that are based on assumptions and leaps of logic with no supporting evidence.
  3. We need to systematically track student attendance at events, patronage of services, etc.  But not because those numbers demonstrate anything except operational metrics.  We need that data for better statistics!
    • We need to utilize regression models to determine what behaviors of attendance, involvement, etc. are most correlated with student success on our individual campus. 
    • Understanding which departments, which programs, etc. show greater correlations essentially identifies “high impact” programs and services that we can focus our energy and resources on.  (I do not mean that to suggest we base elimination decisions strictly on that data, though, in case anyone reads it that way)
    •  It may be able to identify common pathways students take within student affairs that contribute to or deter students from succeeding.  Path models of student success would be a very innovative application of that particular methodology.  And it would require much more sophisticated attendance tracking than most of us do presently.  Jeff and UNCG could be the champions that finally make something like that happen, though!
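As a toy illustration of the regression idea in point 3, the simplest starting place is a point-biserial correlation between attendance counts and a binary success outcome; the per-student records below are entirely hypothetical. A real analysis would use regression (or path) models with controls for entering characteristics:

```python
from math import sqrt

# Hypothetical per-student records: (co-curricular events attended, retained next year)
records = [(0, 0), (1, 0), (2, 1), (3, 0), (5, 1), (6, 1), (8, 1), (10, 1)]
attend  = [a for a, _ in records]
retain  = [r for _, r in records]

def pearson(x, y):
    """Pearson correlation; with a binary y this is the point-biserial r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx  = sqrt(sum((a - mx) ** 2 for a in x))
    sy  = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(attend, retain)
print(f"r = {r:.2f}")   # prints: r = 0.72
```

A positive r here only flags attendance as worth modeling further on that campus; it does not establish causation, which is why the "high impact" label should come from fuller models, not a single correlation.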
Those are my thoughts.  Jeff, and others, I hope they are thought provoking if not helpful.

Friday, July 15, 2011

Bl g e r's B war : G v ng On y H lf of th St ry

Recently I saw a link on LinkedIn sharing a blog post by Dr. Rey Junco, who blogs about college students' use of technology and its impact on their experience and performance.  This particular entry highlighted 2007 data from the Higher Education Research Institute and a much smaller 2008 study by Heiberger and Harper.

Junco highlighted that students who reported higher levels of social media use also reported higher levels of involvement in student clubs and organizations.  Comments on his blog were generally favorable with a few asking critical questions about "how is student involvement defined" and "selection bias."  Others though were seemingly blindly accepting; one such response was, "I Love this chart! it is a great way to show non SM users what a positive effect SM can have on student engagement."

Junco did provide links to the sources, and it's clear not all of the commentators viewed them; however, Junco only shared part of the story (as did Heiberger and Harper).  There's more, and it's important.

The HERI study also showed that students who reported higher levels of social media use also reported higher levels of partying and greater frequency and amount regarding their drinking habits.  On top of that, they reported greater difficulty managing their time and developing effective study habits.

This is why it is important to try to view the whole picture.  If we were to look at just the involvement data, we would naturally conclude that social media use is correlated with positive student behaviors, and possibly act in ways to increase use of social media (disregarding that correlation is not causation).  However, if you look at the broader picture, social media use is also correlated with undesirable behaviors that should lead us to respond more cautiously and thoughtfully.  Any information related to student involvement has to include its relationship to academic performance, for if a student who is thoroughly engaged is unable to study and learn effectively enough to maintain sufficient grades, their involvement has cost them dearly and we have done them a disservice.