The present disclosure is directed to a system and method for updating a probability of concept mastery and/or readiness before a summative assessment is administered and which particularly considers how the probability changes over time. The system finds application in school districts and school systems that can apply, inter alia, outcome-based instruction and can evaluate performance at the end of a curriculum using a standards-based test or summative assessment. However, there is no limitation made herein to the type of data sets applied to the disclosed algorithms.
Currently, many school districts and education systems (“school system”) have adopted outcome-based programs (“standardized learning goals”), which focus on a concept that a student is required to master before advancing to the next topic, course or grade level. In these programs, the curriculum is developed to reach a known outcome and the success can be measured by a summative assessment, such as, for example, a standards-based test. Oftentimes, a school system operates under the guidelines of a state government, which sets the outcome (a.k.a. “standards”). The state establishes standards, develops state assessments, and rates schools based on the educational results. On a smaller, more localized scale, the school board can develop the standards and rate teachers based on the results of their students.
In other words, some school systems have created a structure of accountability based on the students' performance on the summative assessment, particularly rewarding teachers and administrators based on the results. Therefore, a market continues to develop for educational assessments which assist school systems and teachers (collectively referred to as “users”) in identifying the strengths and weaknesses of the students receiving instruction. With this information, a school system can concentrate its resources on the concepts which students struggle with, thus increasing the likelihood that the students will achieve a higher score on the summative assessment.
In American schools, inter alia, a number of tests are administered throughout an academic year to assess students' mastery of topics and concepts. These tests assess whether the students learned the topic after receiving classroom instruction on the topic. The test results can be used to develop targeted instruction of a concept—in the time remaining—before the summative assessment is administered. FIG. 1 is a flowchart showing the typical pattern followed in the PRIOR ART, using an academic school year as an illustrative example only. The method starts at S10. A first test (such as a formative assessment, diagnostic test, etc.) can be used to assess the student's existing knowledge before the teacher starts teaching the topic at S12. Although a score is generated for the first test at S14, this information is not considered in any future determinations regarding concept mastery. Additional tests can be administered and received throughout the year at S16, such as routine quizzes, end-of-chapter tests, quarterly exams, etc., the results of which can be used to identify any gaps in the students' knowledge at S18. Again, however, the test results are not considered in any future determinations regarding mastery. In the case of a summative assessment, a final diagnostic test can be administered and received at S20 shortly prior to the summative assessment. The results of the final test are computed at S22 and may be used to identify any problem areas that can be addressed in the remaining time. Generally, at S24, only the results of the final test are used to compute a probability that the student has mastered the concept and to estimate the student's performance on the summative assessment, which is to follow. The method ends at S26.
The summative assessment measures the mastery of a concept at a specific point in time. The score on the summative assessment is the measure of mastery. One problem with using summative assessments as an indicator of mastery is that the duration between the assessments can be long, and the benefits of instruction or gaps in knowledge may not be spotted until it is too late. While the teacher can diagnose potential problems via a formative assessment or intermittent tests, such as a mid-term or final exam, the results are rarely aggregated at the classroom level to help plan the outcome-based curriculum. Therefore, an approach is desired for generating a prediction of mastery leading into the summative assessment.
In computer-based assessments and adaptive assessments, the exact order of questions on the assessment is known. By understanding how a student performed on one question in a known sequence, the computer-based approach can determine which question to ask next (in response to the student missing a concept-related question(s)) and whether the test should end (in response to the student answering a predetermined number of concept-related questions correctly). Bayesian techniques are often used in this scenario to update a broad measure of mastery. One problem with this approach is that it updates the probability of mastery by applying the answer of each assessment question to Bayes' theorem, one answer at a time. In other words, the existing technique treats each answer as independent of the others and requires the probability of mastery to be computed for each assessment item. An approach is desired that makes no assumptions about the order in which concepts are tested on one or more assessments. An approach is desired that combines all answers, particularly using a score generated for all answers covering a concept, to predict the probability of mastery while also accounting for slips and guesses.
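The per-item Bayesian update criticized above can be sketched as follows. This is a minimal illustration, not the disclosed method: the function name, the prior, and the slip/guess values are illustrative assumptions, and the loop shows how the existing technique revises the probability one answer at a time.

```python
def update_mastery(p_mastery, correct, slip=0.1, guess=0.2):
    """Return P(mastery | answer) from P(mastery) via Bayes' theorem.

    slip:  probability a student who has mastered the concept
           nonetheless answers incorrectly.
    guess: probability a student who has not mastered the concept
           answers correctly by guessing.
    Parameter values are illustrative only.
    """
    if correct:
        p_given_mastery = 1.0 - slip      # master answers correctly
        p_given_no_mastery = guess        # non-master guesses correctly
    else:
        p_given_mastery = slip            # master slips
        p_given_no_mastery = 1.0 - guess  # non-master misses
    numerator = p_given_mastery * p_mastery
    evidence = numerator + p_given_no_mastery * (1.0 - p_mastery)
    return numerator / evidence

# The existing technique applies the update one answer at a time,
# treating each answer as independent of the others.
p = 0.5  # illustrative prior probability of mastery
for answer in [True, True, False, True]:
    p = update_mastery(p, answer)
```

Because each answer is folded in independently, the result depends on per-item parameters rather than on an aggregate score over all concept-related answers, which is the limitation the desired approach addresses.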