Common Core assessment in a Snap
Building the backbone of Edmodo's new assessment platform from the ground up.
In the summer of 2013, Edmodo began the design and development of Snapshot, a standalone assessment product that would eventually have a huge influence on the data model that linked students, teachers, educational content, and learning outcomes together. We had big plans for how data from Snapshot could influence the way our teachers interact with students in their classrooms. There was just one problem: there was a lot we didn’t know about how teachers and students interact with the education standards that shape their curricula and teaching strategies.
Kicking things off
After months of studying the newly formed Common Core education standards that were being digested and turned into lesson plans all across the country, our lead product manager, Kevin, came to me with some ideas of what the minimum viable version of Snapshot could look like. While our grand vision included many things that aren’t even possible today, the initial version came with a few goals in mind.
- Teachers should be able to see how their students are performing on Common Core education standards without having to wait until their state-mandated tests at the end of the year.
- Data should be available in real time.
- The assessments should be fast and painless, encouraging teachers to assess early and often and have plenty of time to act on the data.
- At a glance, teachers should be able to see where their students stand relative to each other.
In Balsamiq, that looked something like this…
Building an MVP
When coming up with an entirely new product, it can be pretty easy to get out of control. One tiny add-on may seem harmless on paper but could introduce weeks of unexpected engineering time, more user testing, more support documentation, and tons of other potential “surprises.” With that in mind, it was very important that we stripped our initial ideas around Snapshot down to their core and built just enough to start testing. In our case, this meant:
- Ability to view Common Core standards
- Ability to “create a snapshot” made up of multiple standards
- Ability for students to take a snapshot assigned to them
- Ability for teachers to see how their students performed
It was ugly, but it looked something like this…
That was enough to get started as we began to polish our visual design and line up some user research and testing.
Polished “beta” design
Finding out what we didn’t know
At this point, a lot of our product discussions were turning into arguments over feature priority, with nobody having anything to go on other than gut instinct. We needed to actually get out into the field and learn more. We lined up interviews with a dozen teachers across America and headed into their classrooms to see how they did their jobs. We came armed with a long list of topics that we wanted to cover:
- How do educational standards affect the teaching methodologies you wish to employ?
- How do you decide the order of the topics that you teach?
- Do you track student progress towards standards throughout the year? If so, how do you go about doing this?
- How do you go about translating the standards into teachable units?
- Where do you discover the education standards that you are expected to teach to?
and some questions about the students…
- Do your students have any insight into the standards that they are supposed to know?
- Do they know they exist?
- Do they care?
- Do they know how much more they need to know to achieve mastery?
- When and how do students find out how they performed on a standardized test?
and many more. Once we started talking to teachers, it didn’t take long to realize that some of our questions were completely silly and ignorant (Are you familiar with the codes associated with the Common Core standards?), while other topics would never have been touched upon were it not for impromptu conversational detours. With hours of video and audio and pages upon pages of notes, we were ready to digest our findings and start making some changes to the product.
User testing an early version of Snapshot
Findings and fixings
So, we messed up. Actually, it wasn’t too bad. Teachers loved what we were presenting to them and were excited to use it in their classrooms. There were a ton of minor things to address: redesigning drop-down menus, adjusting certain sections for readability and visual hierarchy, and tweaking copy to align more closely with a teacher’s working vocabulary. However, we walked away with one big red flag.
“If I don’t know why my student got a question wrong, I have no way of knowing what the best way to teach them is going to be. They may have answered incorrectly because the question included a vocabulary word they weren’t familiar with, not because they didn’t understand the concept. These are the kind of things I need to know.”
This sentiment was nearly unanimous among our test participants. It seems painfully obvious now, but at the time we were so caught up in providing aggregate data that we didn’t understand the ramifications of cutting out the granularity that teachers desired. We went back to the drawing board and came up with solutions that let teachers dig deep into the student results page while still maintaining the high-level view that’s necessary for reviewing hundreds of data points in a timely manner.
At the time of this writing, students have answered millions of questions with Snapshot and the product continues to evolve into something that teachers can use as part of their daily routine.
A ton of great people worked with me on this project but special thanks to Kevin for leading the project, Jake for flying to Chicago with me in the middle of December for teacher interviews, and Fahrettin for taking the lead on UI design.
To infinity (or at least content recommendations) and beyond!