What if you could speed up your scoring process without compromising on quality? AI Scoring, the latest innovation from our Learnosity AI Labs team, is designed to save your subject matter experts (SMEs) time, deliver faster learner feedback, and keep scores consistent across multiple scorers, all while preserving the essential role of human oversight.
AI Scoring applies your custom scoring criteria via rubrics and adapts to a wide range of use cases and complexity levels, from certification programs to high-stakes L&D. It analyzes long-form written responses and automatically generates intelligent scoring suggestions. Your SMEs and scorers can then review each AI-generated score, make adjustments, and submit the final result. This seamless workflow blends the speed and consistency of AI with human expertise, allowing organizations to scale their learning programs more easily than ever before.
A dive into how it works
AI Scoring evaluates responses across multiple dimensions using analytic rubrics created in the Rubric Manager and applied through the Scoring Tool. Once enabled, AI generates scoring suggestions and feedback, which scorers can review, adjust, and submit. Watch our walkthrough video for a deep dive into how to make the most of AI Scoring:
Talk to us
Discover how AI Scoring could support your program today.
Benefits & capabilities
- Saves SME time by automating the scoring process while still keeping them in control
- Helps keep scoring consistent and fair across multiple scorers
- Enables faster learner feedback for better learning outcomes
- Scores long-form written responses
- Makes skill-based testing truly scalable via audio & video responses (coming soon)
FAQ
How do I get access to AI Scoring?
If you’re already a customer, you can contact your account manager to request activation of Analytic rubrics and AI Scoring in your customer area. If you are not yet a Questionmark customer, please use the ‘Talk to us’ button to arrange a chat with us.
What AI technology powers AI Scoring?
We engineer on top of existing public large language models (LLMs). This allows us to optimize for grading use cases while still benefiting from the rapid pace of improvement in the generative AI industry. We are not tied to a specific LLM. We use prompting, ensembling (for example, evaluating different elements of a response may involve more than one LLM call), and a variety of proprietary techniques to achieve our high accuracy.
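To make the ensembling idea concrete, here is a minimal sketch of what per-dimension ensembling could look like. Everything below is a hypothetical illustration, not Questionmark’s actual implementation: `call_llm` is a stand-in for a real model call, and the prompt format and median aggregation are assumptions.

```python
from statistics import median

def call_llm(prompt: str) -> int:
    """Hypothetical stand-in for a call to a public LLM.
    A real system would send the prompt to a model API and
    parse the rubric level out of the reply."""
    return 2  # placeholder: always answers with the level scored 2

def score_response(response: str, rubric: dict[str, str], runs: int = 3) -> dict[str, int]:
    """Score one response against each rubric dimension separately,
    querying the LLM several times per dimension and taking the
    median of the answers (a simple form of ensembling)."""
    suggestions: dict[str, int] = {}
    for dimension, criteria in rubric.items():
        prompt = (
            f"Rubric dimension: {dimension}\n"
            f"Criteria: {criteria}\n"
            f"Response: {response}\n"
            "Reply with the matching rubric level as an integer score."
        )
        votes = [call_llm(prompt) for _ in range(runs)]
        suggestions[dimension] = int(median(votes))
    return suggestions
```

In a scheme like this, each dimension gets its own prompt and its own set of model calls, which is one way “more than one call to an LLM” per response can arise.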
How do analytic rubrics work in the Rubric Editor?
The Rubric Editor lets you create and manage analytic rubrics, which define how responses are scored across multiple dimensions using a consistent scale. You can add rubric levels such as Very Good, Good, Poor, and Incomplete, and define the criteria for each level. Each dimension in the rubric automatically uses these levels, ensuring consistent scoring.
To get started, create or edit an Essay or File Upload question in the Standard Item Bank and set the Scoring style to Analytic rubric. A scoring instructions field, rubric levels, and dimensions will appear in the question editor. You can add new levels or dimensions as needed, and edit descriptions and scores to match your program’s requirements. The maximum score for a question is calculated automatically based on the highest rubric level and the number of dimensions.
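As a worked example only (the dimension names below are made up, not product defaults), a rubric with those four levels and three dimensions can be modeled as simple data, and the automatic maximum-score rule then reduces to one multiplication:

```python
# Hypothetical analytic rubric: one shared level scale, applied per dimension.
levels = {"Very Good": 3, "Good": 2, "Poor": 1, "Incomplete": 0}
dimensions = ["Clarity", "Accuracy", "Depth of analysis"]

# Maximum question score = highest rubric level score x number of dimensions.
max_score = max(levels.values()) * len(dimensions)
print(max_score)  # 3 x 3 = 9
```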
Once your rubric is set up, it integrates with the Scoring Tool. Graders can use the rubric to review responses, apply consistent scoring, and, if AI Scoring is enabled, receive AI-generated suggestions that they can adjust before submitting final scores. This workflow saves time, ensures accuracy, and keeps graders in control.
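The review step is easy to picture in code as well. A hypothetical sketch, reusing the rubric above: the grader starts from the AI’s per-dimension suggestions, overrides whichever ones they disagree with, and only the merged result is submitted.

```python
def review_scores(ai_suggestions: dict[str, int], overrides: dict[str, int]) -> dict[str, int]:
    """Merge a grader's overrides into the AI-suggested scores.
    Dimensions the grader does not touch keep the AI suggestion."""
    return {dim: overrides.get(dim, score) for dim, score in ai_suggestions.items()}

ai = {"Clarity": 2, "Accuracy": 3, "Depth of analysis": 1}
final = review_scores(ai, {"Depth of analysis": 2})  # grader raises one dimension
print(final, "total:", sum(final.values()))  # total: 7
```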
Will my data be used to train public AI models?
No, your data won’t be used to train public models. Our AI service providers do not use any of the information provided to us or generated by us to train any model.
Is my data stored or used to train the LLMs?
No. AI Scoring ensures that your intellectual property remains protected. The data used during scoring is not stored or used to train any of the LLMs integrated in our solutions.
Who holds copyright over the scores and feedback AI Scoring generates?
AI Scoring is built with data security and IP ownership in mind. Neither Questionmark nor the LLMs used claim copyright over source material or the AI-generated scores and feedback. Customers retain ownership of the scoring output in line with the applicable legal jurisdiction.
Who owns the scoring feedback and results?
Customers retain full ownership of the scoring feedback and results produced via AI Scoring, subject to any applicable legal frameworks.
Is there a limit on response length?
There is a limit of 25,000 characters per response.
Can AI-generated scores be reviewed and edited?
Yes, AI scores can be reviewed and modified using the Scoring Tool before submission.
When a score is edited, the AI highlight is removed, and the updated score appears in blue to indicate human modification.
Do spelling mistakes or typos affect scoring?
Not by default. Spelling or typos will only impact scoring if the rubric explicitly includes them as a trait, such as a row for grammar, mechanics, or conventions. If no such criterion is defined, the AI model typically ignores minor errors and focuses on the overall quality and meaning of the response.
What should I check if AI scoring suggestions aren’t appearing?
First, ensure Analytic rubrics are enabled for your customer area. Then confirm the assessment is using an Analytic rubric.
Does AI Scoring replace human scorers?
No, AI Scoring is an assistive tool. All final scores require human review and submission to ensure accuracy and fairness.