Snowflakes attend college. Why? One might assume to learn. Hmmm …
I have been teaching college classes since 2003. Since my goal is not to be a complete ass, I try to improve my teaching. Aside from my own perceptions, how do I know if I am doing a good job teaching college?
Well, it turns out that’s the $64,000 question that nobody seriously asks. What? Yes, no college administrator in America asks how well their faculty are doing at teaching students. Why? I have some ideas I will share, but you really need to ask them – and there are a lot of them to ask (see “New Analysis Shows Problematic Boom In Higher Ed Administrators”).
Now, I am not suggesting that college administrators do not care about learning outcomes, but I am asserting that they do not necessarily care to try to get at the answer. Why? Too bad you asked, because I have no real idea. I work as a lecturer, and I know the administration spends some effort spying on faculty by slinking around and querying students about their opinions of various faculty and courses, but I have observed no systematic effort to measure learning outcomes. They do administer the now nearly universal and obligatory course evaluations. Course evaluations are a series of questions given to the students in a class to find out what they thought of the course and the instructor. They vary somewhat from school to school, but not by much. They usually ask rating questions of the form “on a scale of 1 to 5, 5 being the highest, 1 being the lowest.” The responses to these surveys are then used to evaluate the instructor’s effectiveness.
Now, one might be tempted to ask a few questions about this process. Where to start? First, who designed these surveys? Were they put together scientifically by experts in survey methods, question design, or behavioral psychology? Given that I have taught at six different colleges and universities and the questions are nearly always the same, I’m guessing the answer is “no.” In fact, the situation reminds me of the old joke: “How many college professors does it take to change a light bulb? Change? What’s that?” Second, aside from the questions themselves, do twenty-year-old college students really have the experience to evaluate college courses? I would say “no.” These kids know no more about teaching because they take classes than I know about car repair because I drive a car. Also, anecdotally, I have received low scores from students with comments like “how can you take him seriously, the way he dresses,” and once a student gave me the lowest possible score on every question because he missed an exam due to illness. My policy (clearly stated in the syllabus and pointed out in class) was that I would drop the lowest exam score, so if you missed an exam, you got a zero, which would then be dropped. He was convinced that he would have done well on that exam (even though he never scored above 65% on any other exam, and the class average on that exam was 10% lower than on any of the others). So, it is clear that students often don’t like something about a class, but they cannot articulate just what that is, and the format of these surveys is not designed to elicit that information. What value is my opinion of you and your efforts if you don’t know why I hold that opinion?
In addition, it is clearly possible to manipulate the results of those surveys (more on that in a minute). Once I had a student who spent a considerable amount of time writing out very detailed ways I could try to improve the course (I used to spend some time trying to get students to provide meaningful feedback, rather than “this guy sucks” or “great instructor!”). When I read his suggestions, I decided to implement many of them in the next course. How did that turn out? It didn’t work at all. Why? In retrospect, he was a lot smarter than average, he was motivated, and he put substantial effort into the course. Looking back, I realized that all of his suggestions would probably have worked quite well for a class of students like him, but they were completely ineffective in a class with a realistic distribution of students. I also used to put a question on my exams asking students to suggest how to improve the course, and I made sure to offer substantial extra points for particularly good suggestions. I did get a couple of “speak more slowly” types of suggestions, but the suggestion I got most often was “give more homework.” I honestly have trouble getting them to do the homework I actually do assign, so I’m guessing they were trying to tell me what they thought I wanted to hear. Why not? College students, even the very brightest, have no teaching experience, so they really are clueless when it comes to providing a meaningful evaluation.
Now, I do feel that course evaluations can potentially offer some helpful information. Like the canary in the mine, consistently low scores can be an indication that there is some problem with a particular instructor. Such low scores (I’m not even sure how low is too low) demonstrate that an instructor is particularly unpopular without providing much of an indication as to why. Being unpopular does not necessarily correlate with poor learning outcomes, but it might, so it should prompt further evaluation. Scores that are consistently very high might also suggest a problem. I worked with another instructor, I’ll call him Kip, who was widely popular. Kip’s course evaluation scores were almost always near 5/5, students fought to enroll in his courses, his comments on instructor-rating websites were uniformly glowing, and I once ran into one of Kip’s former students at a local restaurant; when he found out I taught economics, he couldn’t say enough good things about Kip. “Best teacher I ever had!” “What course?” I asked. Well, he couldn’t remember that, but Kip was awesome.
I have taken a lot of college courses over the years, and I have had some instructors I liked better than others, but I never thought any one of them was a significantly better instructor than the rest. I had a math professor as an undergrad that I really enjoyed. I took nearly every course he taught, whether I was interested in the particular subject or not, but I didn’t really think he was the “best teacher ever.” In fact, looking back, they all seem to have been about the same. So, why is Kip so widely popular? Is he that much better at teaching? Being curious, I decided to start asking students to describe how Kip conducted his classes. From my inquiries I discovered why Kip was so popular. First, Kip taught the more difficult classes – the “mathy,” problem-based courses that give most students trouble. Second, Kip’s classes are all “chalk and talk.”
I wrote a paper (we’ll be presenting it at the Eastern Economic Association conference in NYC in February) that examined the marginal rates of substitution between in-class and out-of-class learning activities. The analysis found that students are around three times more productive doing out-of-class learning activities. There are a number of reasons for this, but the end result was my determination to try to make in-class time more productive for student learning. To do this, I started organizing in-class extra-credit activities that students would do in small groups during class. One day of the week I would lecture; the other we would do one of these activities. After a few activities I surveyed the class and asked which they preferred, lecture or the extra-credit activities. Now, I have about 500 students across three sections, so these activities were a lot of extra work for me, but, hey, if they helped performance … . Students preferred lectures, two to one. Why not? As I lecture I look out and see students napping, chatting, watching videos, texting their friends, looking at social media. Lecture is easy; it is the REM-cycle equivalent of teaching and learning, but students think they are accomplishing something positive. So, Kip had the right idea: chalk and talk.
Third, Kip’s classes are “easy.” At the end of the day, a bright student has to study about 20% of the course content to get an A. Kip tells students exactly what will be covered on the exam (often he gives them the exact problems that will appear and works those problems in class, though other questions require that students understand what’s going on in the problems, so Kip does get a grade distribution). Finally, Kip’s students genuinely believe they are learning. Are they really? Does it matter? I used to teach mathematical economics, and students struggled greatly. Part of the problem was that when I did the problems on the board, I made it look easy (of course it was easy for me), but when they went to do the exam problems, they struggled; therefore, in their minds, they weren’t learning, and I was obviously a poor instructor (okay, at least not a great one). Kip, on the other hand, would tell them exactly which problems were going to be on the exam and then work those problems out before the exam. Students would then either learn those problems or at least memorize the steps, parrot them back on the exam, and receive their grade based on the other questions Kip asked. Brilliant!
Two more Kip anecdotes, and then I’ll let you decide what you think of Kip’s methods. I asked one of Kip’s former students about her class with Kip. “Oh, he’s great!” Why? Did you work really hard? Doesn’t Kip tell you exactly what is going to be on the exam? “Yeah, and there was one guy who never came to class – I don’t think he even owned the text – but he did the homework problems, always showed up for the exam review classes, and got an A.” Once, Kip and I had a student in common. She struggled greatly with Kip’s class because (a) she was really a slacker, and (b) she was convinced she couldn’t do math. She came to me before Kip’s final exam and showed me one of the problems that was going to be on it. Of course she hadn’t bothered to go to the exam review (she really was a slacker that semester), so she wanted me to show her the steps to solving that problem. I know she didn’t learn a thing, but she ended up with a B+ in the class.
So, you decide. Increased learning outcomes, or popularity?
I will leave you with a couple of observations. I surveyed my 500 students and asked them about their primary motivation for the course. It was multiple choice, so I gave them responses like “learn the material,” “parental expectations,” etc. It turns out that 85% chose “get a high grade.” That suggests, to me at least, that we have convinced college students that high grades are perfectly correlated with learning: if I get an A, I learned the material. (I have to share a further observation: curving or scaling grades does not count. Students want a high grade, but a difficult exam that results in an average score of 40%, subsequently curved or scaled upward, does not count. The high grade has to come without any scaling, or the students are quite unhappy, because they then do not believe they have actually learned anything.) The last observation is this: there are very few university administrators who actually understand statistics. They do not understand sample size, they do not understand means relative to variances, and they do not understand distribution theory. When I worked in industry, it became apparent over time that people in leadership could not distinguish a realistic number from any other number, so if you could show them a number that met their expectations, that number was viewed as correct. The same is true in higher education. I end by recounting an anecdote I heard but did not witness. The bureaucrats were having a “come to Jesus” meeting where they were discussing course evaluations. The person in charge of deciding what those scores mean made the following comment: “We want all instructors to score above the average.” Now, ignoring the absurdity of that remark, it is clear that college administrators want instructors to teach to be popular, but they haven’t the slightest ability to explain how their measure is relevant.
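For readers who want the absurdity spelled out: the "all above the average" goal fails by simple arithmetic, since for any set of scores at least one score must sit at or below the mean. A quick sketch (the evaluation scores below are invented for illustration, not real data):

```python
# Hypothetical 1-to-5 course evaluation scores for five instructors.
scores = [4.8, 4.2, 3.9, 4.5, 3.1]

mean = sum(scores) / len(scores)

# No matter what the scores are, at least one of them is <= the mean,
# so "all instructors above the average" is impossible unless every
# instructor receives the exact same score.
at_or_below = [s for s in scores if s <= mean]
assert len(at_or_below) >= 1  # holds for ANY nonempty list of scores

print(f"mean = {mean:.2f}, at or below the mean: {at_or_below}")
```

The only way out is for the distribution to collapse to a single point, in which case "above average" is still unattainable.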
Well played, Kip! Well played!