Facts and Stats

We’re in data collection mode in the Wellness Education Office, participating in the American College Health Association’s National College Health Assessment (NCHA). It asks about a wide variety of health behaviors, attitudes, and experiences – everything from wearing a helmet when you ride a bicycle to feeling overwhelmed and exhausted, from cold/flu/sore throat to alcohol consumption. From this survey we get data like this:

[chart]

And this:

[chart]

So… what do numbers like these really mean, and how accurate are they?

Well, it’s important to remember that these statistics don’t tell us what students DO; they tell us what students REPORT doing. And the two are not necessarily the same thing. People might try to “fake good” or “fake bad,” they might give answers that they feel are expected of them or “socially appropriate,” or they might just plain not remember how much they weigh, how much they smoked or drank, or how often they felt overwhelmed in the last two weeks, 30 days, or 12 months.

A particular problem with drinking is the whole concept of “a drink.” As a college health educator, when I hear “a drink,” I think of 0.6 oz of ethanol, which is the amount of alcohol in 12 ounces of beer, 5 ounces of wine, or 1.5 ounces of 80-proof liquor. But what does a student think “a drink” means? Is a mixed drink with 3 shots in it “a drink”… or is it 3 drinks? Technically it’s 3, but will a student report it that way?
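For the curious, the arithmetic behind a “standard drink” is simple enough to spell out in a few lines of Python. This is just an illustrative sketch – the function name and the example beverages are mine, but the 0.6 oz figure and the beer/wine/liquor equivalents are the ones mentioned above:

```python
# One US standard drink = 0.6 fl oz of pure ethanol.
STANDARD_DRINK_OZ = 0.6

def standard_drinks(volume_oz: float, abv: float) -> float:
    """Convert a beverage's volume (fl oz) and alcohol by volume (0-1) into standard drinks."""
    return volume_oz * abv / STANDARD_DRINK_OZ

print(round(standard_drinks(12, 0.05), 2))       # 12 oz of 5% beer            -> ~1.0
print(round(standard_drinks(5, 0.12), 2))        # 5 oz of 12% wine            -> ~1.0
print(round(standard_drinks(1.5, 0.40), 2))      # 1.5 oz of 80-proof liquor   -> ~1.0
print(round(standard_drinks(3 * 1.5, 0.40), 2))  # a mixed drink with 3 shots  -> ~3.0
```

Same glass, three “drinks” – which is exactly why self-reported drink counts are slippery.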

Another thing to be cautious about is known as “nonresponse bias.” These data tell us about the people who completed the survey, but what do they tell us about all those people who DIDN’T complete the survey? In 2011, only about 20% of the students who were invited to take the survey actually did. Can we generalize to the whole population based on the responses of the students who took the time to complete a 20-30 minute survey?
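To see why that matters, here is a purely hypothetical sketch. The 40% rate, the response probabilities, and the behavior itself are all invented numbers, chosen only so the overall response rate lands near 20%; the point is just that when the people who skip the survey differ from the people who take it, the survey estimate drifts away from the truth:

```python
import random

random.seed(0)

# Hypothetical: 40% of students engage in some behavior, but those students
# are less likely to answer the survey than everyone else (15% vs. 25%).
N = 10_000
engages = [random.random() < 0.40 for _ in range(N)]
responds = [random.random() < (0.15 if e else 0.25) for e in engages]

responders = [e for e, r in zip(engages, responds) if r]

print(f"Overall response rate:      {sum(responds) / N:.1%}")                  # about 21%
print(f"True rate in population:    {sum(engages) / N:.1%}")                   # about 40%
print(f"Rate among responders only: {sum(responders) / len(responders):.1%}")  # noticeably lower
```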

So you need to take these data with enough grains of salt that you might almost start to worry about your sodium intake.

But.

There is reason to believe that people are as truthful as they can be, and there are ways to use the data that help to minimize the shortcomings of survey research.

For example, the survey asks questions like, “Has XYZ health issue (infectious disease, sleep, alcohol, gambling, whatever) interfered with your academics in the past 12 months?” And then it also asks for a person’s overall GPA. Sure, people might misreport their drinking or their GPA, but if you’re looking at the RELATIONSHIP between two variables, the exact values matter less than whether people report them consistently. So, even though students may not report that alcohol has interfered with their academics, it turns out there is a clear, roughly linear relationship between frequency of heavy drinking and GPA: the more a person drinks, the lower their GPA, on average. (No surprise there.) As long as we assume that heavy drinkers don’t report their GPAs very differently from non-heavy drinkers, we don’t need to worry overmuch about how accurate those numbers are.
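Here is a toy illustration of that last point. Every number in it – the drinking-days range, the made-up GPA model, the size of the “everyone rounds up” fudge – is invented; it only shows that if every student inflates their GPA by the same amount, the correlation with heavy-drinking frequency doesn’t change:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Invented data: GPA tends to drop as days of heavy drinking per month go up.
n = 500
heavy_days = [random.randint(0, 10) for _ in range(n)]
true_gpa = [max(0.0, min(4.0, 3.6 - 0.08 * d + random.gauss(0, 0.3))) for d in heavy_days]

# Suppose everyone inflates their GPA by the same 0.2 points when self-reporting
# (ignoring the 4.0 cap for simplicity).
reported_gpa = [g + 0.2 for g in true_gpa]

print(statistics.correlation(heavy_days, true_gpa))      # negative correlation
print(statistics.correlation(heavy_days, reported_gpa))  # the same value
```

The specific numbers don’t matter; what matters is that a consistent reporting bias shifts the values without changing the relationship between them.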

So the data allow us to cross-check students’ responses and reduce the noise and reporting error that are inevitable in survey research. For all its faults, NCHA data is the best thing we’ve got for getting a POPULATION-LEVEL VIEW of students’ health behaviors, attitudes, and experiences. Quantitative data gives us a superficial but comprehensive overview of the topics it surveys. Like a 100-level course, it’s an introduction; once you get to the 400-level classes, you find out that it’s really much more complicated than that, but you can’t understand the 400-level stuff before you master the basics.

Is there a place in evaluation research for one-student-at-a-time anecdotal evidence? Absolutely, and we collect qualitative data. Individual student narratives tell us a lot about the kinds of paths students might take, the kinds of experiences they might have, and they richly illuminate the data. But just as what’s true at the population level isn’t necessarily true for a given individual in that population, what’s true for an individual isn’t necessarily true for anyone else in the population. We can get the most efficient and informative snapshot or cross-section of students’ health behaviors, attitudes, and experiences when we use the broad-stroke instruments wisely and cautiously.

Anyway. If you get an email from me that says, “Smith College Health Assessment – please participate!” I hope you’ll consider actually participating. It’ll help make the data more reliable, and more reliable data means more reliable wellness- and health-related programming on campus.
