Educational research is essential for understanding how to improve educational outcomes and experiences for everyone. To that end, quantitative and qualitative analyses can help researchers determine what goes into effective teaching. Unfortunately, quantitative and qualitative classroom studies are also notoriously time-intensive and laborious. And, of course, the bigger the study, the longer it takes.
Or does it?
With Deepgram’s help, researchers at Stanford’s Graduate School of Education have embraced natural language processing and machine learning to elegantly speed up research and make it more useful for teachers and students.
The impact of MOOCs
In recent years, teachers have increasingly turned to MOOCs (massive open online courses) as a way to “take responsibility for their own professional learning” and learn about new tools and techniques. However, it’s difficult to quantify the impact of these professional development resources on teachers and their students. Are MOOCs actually useful for teacher education?
To answer this question, a team of researchers at Stanford is asking K-12 science teachers in a professional development MOOC to audio-record their own teaching for a couple of months. This data will enable the research team to ask:
- When do teachers have opportunities to use tools and techniques presented in the MOOC?
- How do they fit these tools and techniques into their own repertoires of professional practices?
The answers to such questions could indicate what sort of impact MOOCs do (and don’t) have as a medium for teacher education, and could help improve the design of future MOOCs to make them more useful for teachers.
An automated solution to a tedious task
Attempting to answer those questions presents a time-consuming obstacle: transcribing hundreds of hours of classroom audio for analysis. Traditionally this task would be passed on to students for manual transcription or outsourced to a transcription service. However, this process has its disadvantages. As Quentin Sedlacek, a PhD student and one of the principal investigators on the study, explains:
“We really want to be able to share our findings with the same teachers who are participating in the study. They’re helping us out, and we want to help them. But if it takes us six or seven months to complete our research, the school year’s practically over. Teachers won’t have time to actually use our findings, at least not in time to benefit the specific cohort of students who participated in the study.”
However, advances in computational methods and machine learning have the potential to resolve this conundrum. Aaron Alvero, a PhD student and member of the research team, has experience using such methods to quickly uncover hidden patterns in large corpora of educational data. To apply them to this project, however, the team first needed a way to rapidly convert their audio recordings into reliable text transcriptions.
Enter Klint Kanopka, another PhD student and research team member with expertise in natural language processing and computational text analysis. Kanopka believed he could find an automated solution to what he describes as the “miserable task” of transcribing and analyzing months’ worth of audio recordings.
After evaluating a number of automatic speech recognition services, Kanopka said, “Deepgram became a clear winner for what we wanted to do.” Given the sheer quantity of recordings his team hopes to collect, they needed a consistently accurate tool that could speed up the process of transcription.
“Google dumped out a big, disgusting JSON file with quality that wasn’t good enough. CMU Sphinx leaned heavily on a language model and totally distorted entire paragraphs. Nuance Dragon Dictation outputted unformatted blocks of text with big chunks where it didn’t transcribe anything.”
“For our use case, Deepgram was first accuracy-wise and produced, by far, the easiest transcriptions to work with.”
Automating processes and saving money
Kanopka found that using Deepgram sped up research in a number of ways. Contrasted with Google, which was “the hardest of any of them to actually get up and running,” and Nuance Dragon, which was tied to a single machine, he said “Deepgram is really easy to use.” Using both Google Cloud and PocketSphinx required him to write programs to interface with the APIs, but Deepgram worked right out of the box. That gives the team the freedom to interact with Deepgram’s in-browser and API options without hassle, from any location.
Whereas CMU Sphinx or Nuance Dragon took up to several hours to transcribe each file on his local machine, Deepgram’s cloud platform allows him to batch upload multiple files and be done in under a minute.
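The batch workflow described above can be sketched roughly as follows. This is a minimal illustration, assuming Deepgram’s public prerecorded-audio REST endpoint (`https://api.deepgram.com/v1/listen`) with token-based authentication; the API key and the recording URLs are placeholders, not details from the study, and the requests are built but not sent.

```python
# Hypothetical sketch: queue up one transcription request per classroom
# recording against Deepgram's prerecorded-audio endpoint.
import json
from urllib import request

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"

def build_transcription_request(api_key: str, audio_url: str) -> request.Request:
    """Construct (but do not send) an HTTP request asking Deepgram to
    transcribe an audio file hosted at audio_url."""
    body = json.dumps({"url": audio_url}).encode("utf-8")
    return request.Request(
        DEEPGRAM_URL,
        data=body,
        headers={
            "Authorization": f"Token {api_key}",  # placeholder key
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder recording locations; one request per file.
recordings = [
    "https://example.org/recordings/class1.wav",
    "https://example.org/recordings/class2.wav",
]
requests_to_send = [
    build_transcription_request("YOUR_API_KEY", url) for url in recordings
]
```

Each request could then be dispatched with `urllib.request.urlopen` (or any HTTP client), and the JSON responses collected for analysis, which is what makes batching many files far faster than transcribing them one at a time on a local machine.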
By choosing Deepgram, Kanopka not only significantly shortens transcription time but also reduces costs.
“Saving us a bunch of money is a huge consideration because it makes it possible to recruit more teachers for the study.”
As researchers at one of the top education schools in the country, Kanopka and his team have the potential to play a big role in how teachers secure the tools to improve their students’ educations. With Deepgram, they can automate research processes and maintain focus on their goal of pushing education forward.
“Finding technologies that make our research more seamless and easier to implement is incredibly exciting. It won’t be long before we’re able to conduct classroom research and do analysis fast enough to share findings with teachers in the same school year, with the same set of students.
“Being able to use something like real-time speech recognition in a classroom setting will be a game-changer for research. We’ll be using tools like this more and more as we move forward.”
– Quentin Sedlacek, Stanford PhD Student and Researcher