9/15/2019
Evaluating EdTech: A Few Good Reasons You Need More Than One Good Study
In the digital age, purchasing decisions big and small tend to result from a great deal of research. After all, we spend hours researching potential new cars, upcoming vacations, and even places to eat this weekend, painstakingly scrutinizing reviews until we feel we’ve gathered enough insight to make an informed decision about how to spend our money. So, in an age when standards are so high, why should edtech be any different?
The edtech landscape expands greatly each year and shows no sign of slowing down. As the already countless product options multiply, it becomes harder to decide where to spend our valuable dollars. With billions continuing to be poured into the industry year after year, it’s surprising how many edtech products lack substantial supporting evidence as they go to market.
The fact of the matter is that edtech publishers have historically relied on achieving that one piece of “gold-standard” evidence, typically in the form of a randomized controlled trial (RCT), to push their product to potential buyers. However, in a piece from the MIND Research Institute, Chief Data Science Officer Andrew Coulson broke down why one good study simply isn’t enough. "The biggest problem with relying solely on fully experimental RCT studies in evaluating edtech programs is their rarity," Coulson explained. "In order to meet the requirements of full experiments, these studies take years of planning and often years of analysis before publication."
This tedious process results in a number of challenges, including the following:
- The product has changed: RCTs do not keep pace with rapidly changing technology. In the time it takes to complete a single RCT, the product has often undergone revisions and is out of date.
- All school districts are not the same: RCTs are typically conducted within a single school district, but with so many differences from district to district, how can you be sure that the results will suit the needs of your students?
- Different states have different assessments: In an RCT, one state test is used as the sole form of assessment. However, because assessments vary from state to state, it’s difficult to gauge the trial results' validity against the metrics used by your own state’s assessment.
- Grade levels are limited: If an RCT covers a specific grade level band (which they often do), you can’t know how the results will transfer to your intended grade level.
Making strides in speed
The limitations of evaluating edtech with RCTs haven't gone unnoticed. Indeed, in recent years, numerous entities have brought to the forefront the need for evaluation tools that move as rapidly as technology itself. In 2016, the Office of Educational Technology debuted the Ed Tech Rapid Cycle Evaluation Coach, a resource designed to help educators determine the effectiveness of the tools they are currently using and to guide them through purchasing new technology that will help them meet or exceed their goals and deliver measurable results.
Along similar lines, the International Society for Technology in Education (ISTE) launched the EdTech Advisor, a community-driven review platform, in 2018. “Choosing the right tools and apps to use in the classroom is far too important of a decision to be made without accurate, trustworthy data that takes into consideration the context and circumstances in which the tool will be used,” said Richard Culatta, CEO of ISTE. This type of platform provides educators with direct, timely peer-to-peer feedback, a form of comparatively instant gratification sorely needed in an area so long reliant on studies that take years to complete and yield immediately outdated results.
EdCredible is another review platform that aims to bring edtech evaluation up to speed. According to EdCredible, “Educators are burdened with an antiquated marketing, sales, and procurement process” that often includes politically motivated purchasing decisions and pilot testing that is not actually relevant to prospective customers' circumstances. All reviews on EdCredible are unbiased and written by end-users, namely teachers, administrators, and other education professionals. As an added bonus, the service is free for teachers to use.
Try, try again: Repeatability is key
While the old saying “quality over quantity” suggests one of these elements must be overlooked in favor of the other, the truth is that educators can and should have both. In the time it takes to complete one highly involved RCT, it's possible to conduct a larger number of smaller studies that use more up-to-date data and draw from larger and more diverse pools of participants representing several districts.
In the MIND Research Institute piece, Coulson emphasized the importance of repeatability, stating that “even the ‘gold-standard’ results of a single study in the social sciences have very often failed to be replicable. In fact, published studies are allowed a 5% chance of drawing a false conclusion; how does one know that one ‘gold-standard’ study was not itself a cherry-picked or fluke result?”
Just as you wouldn’t make the decision to buy a new car based on one review, you shouldn't choose edtech using one good study—regardless of how shiny and appealing the results might seem. Ultimately, increasing the number of studies and improving upon the snail’s pace of RCTs will save educators valuable time and resources, reducing the trial-and-error process along with the potential to return to square one after seemingly promising new tech turns out to be a wasted investment. “Less is more” is no longer the case when it comes to evaluating edtech offerings; in fact, more is more in terms of more timely studies with more consistent results.