Meet the updated Placement Test
The Happy Numbers Content Team continuously works not only on new product development but also on fine-tuning existing features and resources. To do this well, we collect and analyze anonymized data on student activity on our platform. Our recent focus has been the Placement Test, which we administer automatically at the beginning of grades K-5 to create an individual learning pathway for each student. Here are some of the conclusions we’ve arrived at:
- To ensure significant math growth for our students, it is imperative to accurately determine each student’s zone of proximal development.
- Since the Placement Test consists of just 20 questions, each question must be perfectly targeted to contribute to successful student placement.
- Since the Placement Test is our first touchpoint with a student, our goal is to ensure that their learning journey with Happy Numbers starts smoothly.
The evolution of our assessment
Always striving to be better
In developing Happy Numbers features, we always use anonymized data that helps our research team to monitor the quality of our content. Along with this, we carefully study user reviews and adjust our plans based on the information we get.
For example, through statistical analysis combined with your feedback, we recently discovered that since 2019 about 9% of students haven’t been placed correctly, for a variety of reasons.
Let’s get into more detail here, starting with data from Grade 1 in 2019. The chart below shows the percentage of students at each possible score (correct answers out of 20) on the Placement Test. The majority of students answered 13 to 16 questions correctly, and the distribution is shifted toward higher scores, while, ideally, we would expect a roughly normal (bell-shaped) distribution.
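The chart described above can be reproduced in a few lines. Here is a minimal sketch (the score data below is hypothetical, for illustration only) of how the percentage of students at each possible score might be computed:

```python
from collections import Counter

def score_distribution(scores, max_score=20):
    """Percentage of students at each possible score (0..max_score)."""
    counts = Counter(scores)
    total = len(scores)
    return {s: round(100 * counts.get(s, 0) / total, 1)
            for s in range(max_score + 1)}

# Hypothetical sample: most students score between 13 and 16 out of 20
sample = [13, 14, 14, 15, 15, 15, 16, 16, 12, 18]
dist = score_distribution(sample)
```

A dictionary like this maps directly onto a bar chart, making a shift away from a bell shape easy to spot.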
To achieve more normally distributed assessment results, we decided to fine-tune the test questions (especially in the second part of the assessment) for a more accurate prediction of each student’s zone of proximal development. As a result, we expect to place students in the curriculum more accurately.
How did we analyze data for the K-1 Placement Test?
As the first step, we decided to focus on grades K and 1. Our Analytics Team primarily examined the following metrics:
- Percentage of correct answers for each test question (questions with more than 90% or less than 25% accuracy were excluded, as they don’t help differentiate students’ levels);
- Test question alignment with Happy Numbers curriculum and State Standards;
- Correlations between student performance (accuracy rate, time spent), test result, and placement score;
- Distribution of the number of correct answers;
- Distribution of Placement Points (where a student was placed in grade-level curriculum after the Placement Test).
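The first metric above, filtering out questions that nearly everyone or almost no one answers correctly, can be sketched in a few lines. This is an illustrative sketch, not the team’s actual analysis pipeline, and the item names and accuracy values are hypothetical:

```python
def filter_informative_items(item_accuracy, low=0.25, high=0.90):
    """Keep items whose accuracy rate falls within [low, high].

    Items almost everyone answers correctly (> high) or almost no one
    answers correctly (< low) don't help differentiate student levels.
    """
    return {item: acc for item, acc in item_accuracy.items()
            if low <= acc <= high}

# Hypothetical per-question accuracy rates
accuracy = {"Q1": 0.97, "Q2": 0.55, "Q3": 0.12, "Q4": 0.88}
kept = filter_informative_items(accuracy)  # Q1 and Q3 are excluded
```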
After carefully analyzing the metrics above together with MetaMetrics®, a leader in assessment development and the creator of the Lexile® and Quantile® scales, we’ve updated the existing tests for both grade levels: Kindergarten and Grade 1. And today, we are happy to share some of the results with you!
Analyzing the test results of the 2020-2021 school year, we discovered a very interesting trend. Our youngest students (K-1) appeared to score far higher than in previous years. While 15% of students scored a perfect 20 out of 20 on the test in the 2019-20 school year, a whopping 53% did so in 2020-21!
Unfortunately, the explanation we received after inquiring with teachers was not as exciting. Due to COVID-19, many students learned remotely, working on the platform not in the classroom but next to their parents. And apparently, parents helped them solve the test questions, not realizing how important it is for a child to do their own work. To fix this, we added a banner to the Placement Test explaining to parents why it’s important not to help the child at this point.
Evidence of successful improvements
For those who didn’t know, our Placement Test is adaptive: it draws from a pool of 40 multiple-choice questions, but each student solves only 20 of them, with the questions chosen based on the accuracy of their previous answers. As a result of our improvement work, 35 of the 80 test items (across both grade levels) have been replaced.
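To give a feel for how an adaptive test works: the article doesn’t disclose the actual branching logic, so the sketch below is purely illustrative, using a hypothetical binary-search-style rule where a correct answer moves the student to harder items and an incorrect one to easier items:

```python
def run_adaptive_test(item_pool, answer_fn, num_questions=20):
    """Illustrative adaptive test loop (not Happy Numbers' actual logic).

    item_pool: list of items sorted from easiest to hardest.
    answer_fn(item) -> bool: whether the student answers correctly.
    Returns the number of correct answers out of num_questions.
    """
    index = len(item_pool) // 2   # start at medium difficulty
    step = len(item_pool) // 4    # shrink the jump as we learn more
    correct = 0
    for _ in range(num_questions):
        is_correct = answer_fn(item_pool[index])
        correct += int(is_correct)
        # Move up after a correct answer, down after an incorrect one,
        # staying within the bounds of the pool.
        index = index + step if is_correct else index - step
        index = max(0, min(len(item_pool) - 1, index))
        step = max(1, step // 2)
    return correct
```

With a pool of 40 items and 20 questions asked, each student sees only a subset of the pool, which is why 80 items exist across the two grade-level tests even though any one student answers just 20.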
- More complex test items for higher-level K and Grade 1 students were added;
- Some user experience issues were eliminated (sometimes students struggled to correctly interpret pictures and tasks);
- A banner for parents was added;
- Test items on geometry and data were added to improve testing accuracy.
Results of the improvements
You can find the student cohort data at the end of the article.
After seeing how successful the K and 1 test improvements have been, we've agreed to work with MetaMetrics for this school year to redesign the remaining items and develop brand new Mid-Year tests. Stay tuned to know how our other updates improve your students' experience and ensure math growth!
Cohort K-2019: students who took the test from August 1st to September 22nd, 2019
Number of students: 9,753
Cohort K-2021: students who took the test from August 1st to September 22nd, 2021
Number of students: 21,794
Cohort G1-2019: students who took the test from August 1st to September 22nd, 2019
Number of students: 16,508
Cohort G1-2021: students who took the test from August 1st to September 22nd, 2021
Number of students: 34,617