--Originally published at nodes
Campus Technology published an article last week about a biomed course that saw mixed results from flipped instruction. The full research article is open access (CC-BY 4.0) and available to read. I've read and annotated it, and I'm going to distill a couple of points from both the published report and the CT article.
The authors state right up front that there "were no statistically significant differences in examination scores or students' assessment of the course between 2015 (traditional) and 2016 (flipped)." Campus Technology (and other publications) often latch on to the grade implications rather than qualitative student feedback on the efficacy of flipping. To the researchers' credit, they do recognize the higher retention and application that students reported on feedback surveys.
The biggest red flag for me was in the definition of flipping. As Robert Talbert regularly points out, many research articles limit flipping to "video at home, discussion in class." This article elaborates on the at-home experience in the methods section. From the article,
Students were introduced to new material each week by completing assigned readings from textbooks and journal articles, then by watching recorded lectures given by faculty experts at MSPH on one of 10 core epidemiology topics. Next, students completed short online graded assessments of their understanding of the new concepts presented in these media based on the Just-in-Time Teaching (JiTT) pedagogy…
Students could also submit questions to instructors prior to the in-person meeting, to be addressed at the start of the session. The article also notes that doctoral students and instructors monitored questions via email and office hours between in-person meetings.
So, students watched a lecture (with no discussion of the format, length, or content of that lecture), read some articles, and then began to apply the material in preparation for the in-person session. More on this later.
Students reported confidence in their learning and their ability to apply the material, with a slight (though not statistically significant) increase in the flipped cohort (84.1%) over the traditional cohort (80.6%).
Campus Technology’s Interpretation
The opening sentence proclaims:
A study at Columbia University's Mailman School of Public Health found that in a health science course following the flipped classroom model, there were no statistically significant differences in test scores or students' assessments of their course, compared to a traditional lecture course.
CT does not note that the study took place over two years (two different groups of students), though it does report positive impacts from students' freedom to watch lectures whenever they wanted (improved flexibility). CT also included an insightful quote from one of the authors about the lack of time to process information in a traditional setting (in the traditional design, discussion came immediately after the lecture), while noting that flipping doesn't allow for "[direct engagement] with the lecturers."
The Bigger Picture
The research study and the ensuing report highlight two things for me:
- Grades are often the motivating factor when flipped classrooms are studied, which limits discussion of student impact, and
- course design is treated as negligible when studies are conducted or reported.
Students reported higher satisfaction with the class because of its flexibility and because they felt more confident with the material. Time to process information matters: students were better able to contribute to discussions after having time to think through the lecture. But all the CT article focused on was the grade. It isn't a secret that few practitioners (K-12 or higher ed) actually read the reports unless they're actively planning their own study. News outlets and blogs have a responsibility to include gains beyond the final exam score.
How did students grow beyond the test? What improvements did instructors see in the cohort? These are important factors that should be included in follow-up interviews if not in the research report itself. The researchers did have the six instructors fill out surveys, but those results were not reported alongside the student feedback.
Secondly, course design is critical if we want to improve student performance. Several of the citations were quite old (early-to-mid 2000s) and in a similar vein, looking at student exam scores rather than course design and teaching methodology (granted, several of the cited articles were paywalled, so I couldn't do a full evaluation of each).
If we simply bottle courses and reverse the time of interaction, why would we expect student improvement on exams? This article shows the course is consistent, if nothing else, with no change in student exam performance. How would results have changed if students had explored material before the lecture, as in Ramsey Musallam's or Dan Meyer's work? How would students have benefited from interactive items at the beginning of the discussion period rather than a rehash of the lecture from the instructor?
While the research makes some interesting points, it is far from conclusive on the efficacy of flipping. The authors make concessions at the end, but we need to continue to push the discussion away from any particular technology solution and start analyzing our instructional methods as the real turning point in student learning.
Featured image is "Lecture Hall, Chairs" flickr photo by Dustpuppy72, shared under a Creative Commons (BY-NC-ND) license
This is by far one of the most frustrating aspects of flipped teaching "assessment"! We flip teach in hopes that students can begin to reach higher-order cognitive skills such as application, analysis, evaluation, and creation... but then we assess only their ability to recall, maybe their ability to apply a memorized equation, and then we are shocked when there is "no significant difference in performance"! Students are excellent at memorizing facts and at using an equation regardless of the pedagogy... it's how they perform at the higher-order skills we need to focus on.
And that is where we should be assessing and where you see the difference! We have a paper coming out in CBE-lifesciences soon (just accepted last week) that demonstrates exactly this.
This is great to hear, Kate. I don't know where you're based, but in the US, most higher ed lacks formal instructional training. Talking about "formative assessment" draws blanks, and we have to draw out the connections between learning and performing. I understand requirements differ elsewhere, but it's an instructional gap that limits insight into student learning in these studies.
I’m glad to hear you have an article accepted. Be sure to send it to Kelly at the FLN (or blog it and syndicate!) so people can get their eyes on it. I’d love to read it as well.