I recently read a great opinion article on Wired titled “Why Living in the Present Is a Disorder”. There was enough inspiration there for an entire series (which is on my “to do” list); however, the thing that really struck me every time I read the article was this quote from Douglas Rushkoff:
Even now I can’t quite describe the degree of resonance this evokes, both in my experiences as a student as well as those as a teacher. I’ve had this growing feeling of dissatisfaction and lack of alignment with the direction of education; these words got me thinking about some of the failures of education all over again. Although I’ve taught some at the primary level, most of my experience has been at the college level. Because of that I’m going to limit my comments to higher education.
In my experience higher education suffers from two basic problems, each of which reinforces and supports the other:
- An emphasis on grades instead of ideas
- A focus on preparing students for particular jobs
Why an emphasis on grades sucks for higher education
Don’t get me wrong; I think grades are important. However, there’s a big difference between summative and formative assessment that students, parents, and most unfortunately many professors don’t understand. Formative assessment is designed to give the student feedback on how they are progressing in their understanding; it should be designed to help the student learn to self-assess. In other words, formative assessment focuses on what I like to call the “Big Ideas” of a subject; I’ve found that formative assessments are energizing and life-giving, regardless of whether I’m on the receiving or giving end. They require engagement, discussion, thought, and reflection. Summative assessment, on the other hand, is a statement of essence — a judgement of intrinsic quality. The grading scale, regardless of whether it runs from ‘A+’ to ‘F-‘ or from ‘100 pts’ to ‘0 pts’, is a stamp applied to a given work, or even a given student. (How often have we directly or indirectly referred to the ‘A’ students versus the ‘C’ students?) It is a label that students carry forward, one that defines who they are and who they can be.
What happens in many college classrooms, however, ends up being predominantly summative assessment. Course syllabi require a certain number of assignments, each of which will receive a letter grade or some number of points. At the end of the semester the professor applies some arcane rule, often mixed with a liberal dose of what from a student’s point of view appears to be prejudice, to determine final grades from all of these assignment grades. In other words, no formative assessments.
Sometimes professors will add some feedback or comments. Unfortunately, these rarely invite the student into a discussion or reflection about the ideas (which is the important part of formative assessment). Instead, the feedback serves as yet another slap on the wrist — “Don’t do this or else.” Some professors are willing to talk such comments over during office hours, but this puts the burden of formation onto the student. Others aren’t even willing to discuss things then. After all, what professor likes facing a student who’s going to complain about the grade they got? Nobody likes a grade grubber.
What choice do students have, however? They know that the material itself isn’t important; it’s how they look at the end. Employers don’t check to see if students understand the material, or even know what they’re talking about — it’s the degree and the grades that matter. Why else would so many people be willing to fake degrees and lie about their credentials when the consequences of being caught are often so high? They know what really matters, and it’s not the ability to use the knowledge and skills. It’s about landing the job.
Why a focus on job preparation sucks for higher education
This leads us into the second problem. Higher education used to be focused more on mastering skills and obtaining knowledge; businesses recognized the value that resulted from the process. Unfortunately, the success of the liberal arts education has also been its major weakness. People equate a higher degree with higher income without necessarily understanding why there might be (or, more appropriately, might have once been) a correlation between the two. Over time, the focus moved from “intellectual expansion and creative expression” to “make more money when I get out”; from there, it’s a really short trip to “make sure I can get <this job>”. Employers have been all too happy to go along with this; the new minimum requirement for many jobs is a college degree, regardless of whether or not the position actually requires those skills. Unfortunately, by buying into the “college grades as a measure of job preparation” paradigm, they’ve ended up creating a desert of actual ability in a sea of diplomas. The appearance of mastery has triumphed over actual mastery.
While many universities and colleges have been trying to fight this trend, most of them have ended up inadvertently contributing to it. Instead of pushing back against the expectations of students, business, and the public by creating an internal set of measurable performance outcomes that align with the original mission, they also buy into the basic idea that their mission is “job preparation”. The school then sets off on a quixotic quest to measure student preparedness — a metric that would be difficult to obtain even with full cooperation from the business community. Instead, it’s easier to fall back upon a simpler stamp of approval — grades — and defend those to the death. Students know that their grades — not their ideas, knowledge, or skills — will be what determines future success. Why the hell wouldn’t any serious student grade grub for all they’re worth? It’ll pay off so much more than actually learning the material! Professors are then pressed to produce certain grade distributions to fit the school’s metrics. Under pressure from both the school and the students, professors end up focusing almost exclusively on creating a “fair” grading system to the exclusion of developing an environment for discussion, growth, and reflection. It’s all about determining the grades.
So what’s the solution?
Where do we go from here? How do we break this cycle of grades and job preparation, and return to (or establish anew) an economy of skill? Not only does the current system have a lot of inertia, but many people profit from it. As the old joke says, “What do you call a medical student who graduates at the bottom of his class? ‘Doctor’.”
If appearance is what’s important, then the most prestigious universities, their current students, and former students have the most to lose from any changes. Reputation gets schools more money (in the form of tuition, research grants, and potentially government funding), their professors better salaries, and their graduates better-paying jobs. (If you doubt this, consider who would be considered the better doctor — one who graduated from the program at Johns Hopkins (the Wikipedia article on Johns Hopkins notes that its medical degrees have been ranked “one of the five most prestigious degrees in the world”), or one who graduated from the program at West Virginia University School of Medicine?)
Unfortunately, I don’t have a quick and easy solution to this problem. The issue isn’t just about the behavior of professors or the performance of students; it’s also about the expectations of hiring managers, business leaders, and the public. I do think, however, that a great place to start is in the workplace. Creative applications that showcase the actual use of skills and abilities should carry more weight at businesses; likewise, more human resources departments should take a page from Google’s (apparently discontinued) practice of asking applicants insanely hard questions to see how they respond. Although the article calls these “embarrassing”, I found it telling that many of the problems basically tested one of two things:
- a. the applicant’s ability and willingness to even attempt to solve a seemingly impossible problem (examples: the first, second, fourth, sixth, eleventh, and fifteenth)
- b. the applicant’s history of solving algorithmic problems common in software development (examples: the third, seventh, eighth, tenth, twelfth, and thirteenth)
In addition, one dealt specifically with communication skills (the fourteenth). In other words, of fifteen example problems, all but two asked the interviewees to demonstrate specific skills that directly related to job performance. (The fifth and the ninth are very hard for me to justify; the eleventh has issues — in that there’s a “right” answer — but could still be used with some care.)
What do you think? Is there a way to make higher education suck less? Is there a problem with putting so much emphasis on grades and job preparation? Should we as a society be focused more on ability than certification? Let me know your thoughts!