--Originally published at FLN – matthewtmoore
How do I collect data that I can be confident reflects the true level of effectiveness of flipped learning?
As a general rule I am a skeptic. I approach change slowly. I have read @RobertTalbert on the Flipped Learning Network blogs and the FLN Slack. However, since the very early stages of my flipped journey I have been looking to quantify and empirically justify the advantages of flipped learning with data drawn from my own classroom. This is not an “eggs are good for you” – “eggs will kill you” thing, and I am not even going to pass it off as “when I started flipping the research didn’t exist”; I just have a personality that insists on seeing it for myself.
At the end of my first semester flipping a pre-calculus class I gave a PAPER survey to my classes, asking primarily for opinions and collecting basic facts. I have found high school students to be brutally honest, particularly when their responses can in no way impact their grades. I was not looking for a great statistical proof but mere opinions from my first crop of guinea pigs, who suffered through my most basic attempt at what would now be considered flip 101. Additionally, I was working in a “Bring Your Own Device” (BYOD) environment, so access to digital content was a primary interest.
The short version of the results is best summarized as 70% liking flipped “better”, 19% “about the same”, and 11% “worse”. Only 12% had minor trouble getting access, while no student reported regular trouble accessing class content. When I compared student grade performance from the first-semester traditional format to the second-semester flipped format, only 15% showed any decrease in overall grade. While that may not sound positive, understand that while the first semester of pre-calculus consisted of extensions of Algebra II topics, the second semester consisted primarily of “first view” material like trigonometric identities and proofs, along with polar topics including polar complex numbers. Having looked at my gradebook data in Pre-Calc over a number of years, I grew to anticipate a third of the students making some negative grade movement as they dealt with material they were seeing for the first time, so cutting that in half was a win in my book.
Data now…the opinion part
I still use the same survey questions five years later, and I have compared the data year over year. I have expanded the survey to include information on homework time, the time of day students report watching videos, and other questions that reflect changes in the group-space format. The survey is now a Google Form that students complete on their Chromebooks, as my school has gone 1:1. I also now give the survey twice a year, near the end of each semester. I never intended my survey to be used for anything except guiding my classroom, but I have found some interesting trends.
The first and most interesting trend is that the percentage of students that like the flipped classroom “better” than the traditional classroom has dropped.
Most of that decline has moved to the “same as traditional” option, but in all honesty there has been a small jump in the “worse” category. This is where I rely on the free-response questions to give insight into the multiple choice. In my pre-calc classes I have high performers who tend to also be hyper-scheduled, and their dislike of the course format seems to center on either my expectation that something is done in the individual space every day, or the fact that the group space is for extension of learning, not learning by osmosis and repetition. Conversely, the areas students report as their favorite parts are the work with friends at the whiteboards and other projects done with friends. You see, my group space was initially homework time, but over time I have reduced “homework” significantly, and much of the practice and extension is done in class as a group product or group presentation. The significantly reduced “homework” problems I give are simply “preparation for presentation”. I expect that, given the online instruction and the scaffolding done in class previously, what I send home becomes the basis for moving forward the next day as students share and present their work. At this point I hear some readers suggesting mastery learning is the answer to this problem, and I am coming to agree. My challenge is the community expectation reflected in the other camp of detractors, who believe learning is a passive activity that must happen entirely in the classroom so it does not impede softball, dance, basketball, and the like.
On the bright side
Longitudinal data shows that despite an increase in protest, the percentage of students who watch the videos and complete the quiz, Edpuzzle, practice problems, and other preparation pieces has remained very high, with nearly all students reporting watching “almost all of the videos” or more. This is borne out when I examine the data from the check methods listed above. Additionally, when I study the amount of time “spent on math outside of class” (video or preparatory problems), 75% spend less than 45 minutes and 38% spend less than 30 minutes. Following the fall survey we discussed the results and talked about how to use that time most efficiently and their resources most effectively, to reduce even those numbers.
Are opinions data?
From the beginning, the data has been both informative and entertaining, as what some students enjoy most about the format is what other students enjoy least. What some students find the most helpful is exactly the element that others find the “biggest waste of time”. Understanding that much of the data I have collected is the opinions of 15- to 17-year-olds and is therefore taken with a grain of salt, I do value the opinions both positive and negative, as there is truth in both. I also make it a point to share the aggregated results of any data I gather and tell students that I respect their opinion enough to take class time to share the results and make changes I feel are educationally appropriate based on their input. I find the biggest revelation for some students is that, according to their own data, not “everybody feels” this or that way about a topic; there is a diversity of opinion. I believe that is a lesson worth half a day’s group space.
Data now…the hard numbers
The best numbers I have compare the overall semester grade to the semester exam grade. The short explanation is that the semester grade is the total of all grades given throughout the semester, including daily work, summative quizzes, chapter assessments, etc. The community expectation at my high school is that nearly every piece of work gets a numerical value (don’t get me started). Needless to say, this creates a certain level of grade inflation based on “effort”, “participation”, “completion”, and so on. In some classes the grade carrot works well; in others it is nothing but a numbers game. The semester exam, on the other hand, is a single cumulative test. I have found that there is a built-in drop from the average “overall grade” to the average “exam grade” of approximately 10%, or one letter grade depending on the scoring system.
To me, this drop indicates the gap in learning retention from short-term performance to long-term mastery. True, an individual student can cram, work hard, and match or beat their overall grade on the exam, but when averaging over 90 students the gap was very consistent. In my first full year of flip the gap fell to 7%. It then fell to 0.3% by year three, then leveled off at approximately 2.5%. With a recent increase in the rigor of the exam the gap has returned to 5%, but I could not justify using the same exam longer at a lower level of rigor. The 5% gap now also includes not only high-performing Pre-Calculus classes but also my Algebra I and Geometry classes, which consist of average to somewhat below-average mathematics performers. I feel that if I can minimize the drop in retention from short-term to long-term learning, then my flipped classroom is truly accomplishing something positive for students.
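For readers who want to try this with their own gradebook, the gap calculation is simple enough to sketch. This is a minimal example, assuming a list of (overall grade, exam grade) percentage pairs exported from a gradebook; the student numbers below are made up for illustration, not my actual data:

```python
# Minimal sketch: average drop (in percentage points) from overall
# semester grade to semester exam grade, across a class.
# The records below are hypothetical, not real student data.

def average_gap(records):
    """Mean of (overall - exam) across all (overall, exam) pairs."""
    gaps = [overall - exam for overall, exam in records]
    return sum(gaps) / len(gaps)

# (overall grade %, exam grade %) for a handful of made-up students
students = [(92, 84), (85, 78), (78, 66), (88, 80), (95, 91)]

print(f"average gap: {average_gap(students):.1f} points")  # → 7.8 points
```

Averaging the per-student gap (rather than comparing class averages) keeps one outlier crammer from hiding a broad retention drop, which is the quantity of interest here.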
What else can I measure?
I need your help. What else can I measure? Although I believe the flipped learning method results in better learning for students and is far more engaging for me, I still feel I am missing something. If flipped learning is as effective as it feels subjectively, there should be a better way to measure it objectively. I have tried many other comparisons across years of classes, pre- and post-flip, but I just don’t feel the statistics are clean enough. I have found that, especially in my special-education-heavy class, there are so many confounding behavioral and societal factors that objective study is difficult. I am open to suggestions; please comment below, tweet me @matthew_t_moore, or email me email@example.com.