Psychological Factors Affect Student Survey Participation

As students’ survey fatigue grows, so do internal and external demands for outcomes assessment. In addition, the Education Department’s Office for Civil Rights has strongly and publicly recommended that institutions survey students about sexual assault.

A study at a southern flagship university found that three factors tend to motivate students to complete surveys:

  1. the view that surveys in general can cause change
  2. the perception that frequent surveying is simply a part of the university experience
  3. trust that survey participation will actually lead to improvements at their school

The study found that students’ attitudes toward surveys varied. Some students described survey participation as interesting, while others described it as an annoyance. Despite these differences, the same three factors emerged.

Researchers interviewed eight full-time undergraduate students who had lived in university housing for at least one academic year. (The housing requirement was included because residential students are typically easier to reach than commuter students and are more likely to have been surveyed frequently by the university.)

William K. Tschepikow, assistant to the president of the University of Georgia and its director of student affairs assessment and staff development at the time of the study, reports the study’s findings in “Why Don’t Our Students Respond? Understanding Declining Participation in Survey Research among College Students” in the November 2012 Journal of Student Affairs Research and Practice.

Tschepikow participated in an email interview about the study’s findings and their implications for increasing participation rates.

What do you think accounts for the broad range in students’ attitudes about being surveyed?

Tschepikow: My study suggested that the difference may be accounted for—at least to a certain extent—by psychological factors rather than organizational ones. For example, some participants who acknowledged a propensity for survey research located this preference within a broader intellectual curiosity.

In other words, these participants found the survey process fascinating in general and therefore chose to participate. You may recall one participant stating, “I think it’s interesting to see what people are researching. I’m nerdy.”

As you note in your question, not all respondents perceived the survey process this way. One participant felt harassed by the number of solicitations she received. Again, psychological rather than organizational factors are probably at play here. I believe future research in this area should more deeply explore the effect of these factors on survey participation.

Students said that if they “know” nothing will come from a survey, they’re less likely to participate. What survey administration missteps might lead to this belief? What might signal to students that their survey responses will indeed accomplish something?

Tschepikow: I think the biggest misstep is failing to communicate to students on a regular basis how survey results have led to concrete and positive changes (big and small) in the educational environment—of course, “concrete” and “positive” are subjectively defined.

This misstep is often the result of poor planning. For example, an institution may elect to administer a survey without first identifying specific issues, questions, or problems to which the survey might be responsive.

When a survey is not responsive to a set of guiding questions—grounded in the educational environment—it is unlikely that the results from that survey will lead to meaningful action by administrators. In consequence, the results may end up merely collecting dust on a shelf, as it were.

In short, avoid the tendency to administer a survey for the sake of administering a survey. In addition, when surveys are administered, officials should develop strategies for effectively communicating the results and any related action to students.

For example, administrators may decide to post on the institution’s home page three ways that information collected through a particular survey was used to improve a program or service. Systematic, regular communications like these will, in effect, function as signals over time that the institution values, needs, and uses feedback from students.

To take this example one step further, during a subsequent survey administration, officials may include in invitations to participate references to improvements resulting from previous surveys. For example, an email invitation might state, “The University of X highly values feedback from students. Your participation in this survey two years ago led to A, B, and C. We are administering this survey again in an effort to improve the curricula in our residential learning communities.” Again, this type of communication can act like a signal to students that their feedback is a critical part of improvement processes on campus.

How can we convince students that the subject of a survey is salient to them?

Tschepikow: It begins with choosing the right sample. Going back to the example used above regarding learning communities, the sample for this survey should include—perhaps exclusively—those students for whom the residential learning communities are naturally salient, i.e., members of the communities. While this may seem like a basic design element, it can be overlooked.

Administrators can also identify in invitations to participate specific connections between the student’s educational experience at the institution and the content of the survey. For example, a solicitation might state, “We are administering this survey to improve the curricula in our residential learning communities. As a member of one of these communities, you can offer particularly valuable feedback.”

Keeping in mind the signals referenced above, one might continue with, “Participation in this survey two years ago by students in our learning communities led to A, B, and C.” Again, this is a minor design element, but it may go a long way in drawing a connection between the purpose of a survey and students’ experience on campus.

It seems from your study that increasing response rates is as much about building an environment of trust and communication as it is about administering a specific study well. If that’s an accurate take on your findings, what can institutions do to build this environment?

Tschepikow: I would agree with your summary of my findings. Of course, in many respects the strategy should differ from institution to institution, based on organizational dynamics such as culture, size, student demographics, and mission. So, my first thought is that any effort to build this environment should begin with a thoughtful examination of the institution as a distinct social collective. Literature on organizational change may be helpful in this endeavor.

My findings also suggest that trust, in this context, is the belief that survey data will be used to engender positive change in the educational environment. In order for this type of trust to develop, administrators must actually use data collected from surveys in important decision-making processes and communicate their use directly to students.

Of course, this explanation is far too simplistic to be instructive. It appears that trust and communication are inextricably and dynamically linked. Trust is fostered through regular communication, as noted throughout this interview, and communication is more effective when trust is present.

A challenge to this logic arises when one considers that multiple surveys may be administered by multiple units at any given point in time. This reality is what makes coordination across the institution so critical to building an institution-wide environment of trust and communication. In practical terms, coordination can include the timing of various surveys, sampling, communication strategies, branding, and other elements.

The article recommends that institutions establish campus-wide survey policies and that these policies include expectations for providing direct feedback to students. What would some of these expectations look like? Could you refer us to policies that you find serve as useful examples?

Tschepikow: I would take a look at Duke’s. I would also take a look at Northwestern’s.
