
AI Scoring

Score at scale with AI that saves time, ensures consistency, and keeps you in control.

Combine AI efficiency with human judgment

Rubric-aligned and adaptable to a range of use cases and levels of complexity, AI Scoring generates intelligent scoring suggestions for long-form written responses based on your specific criteria. Scorers can then review each AI-generated score, make adjustments as needed, and submit the final results. By combining the speed and consistency of AI with human judgment, AI Scoring enables faster learner feedback, reduces SME workload, and paves the way for more scalable learning programs.

AI Scoring is the latest innovation from our Learnosity AI Labs team, automating your scoring process while keeping you fully in control.

Reduce SME bottlenecks

With automated, intelligent AI Scoring, reduce time spent scoring manually while keeping your SMEs fully in control. For organizations like yours, this means a more efficient scoring workflow, so you can scale your learning programs without overloading your staff or sacrificing quality.

Deliver faster feedback

Harness faster feedback that empowers learners to take control of their performance. By accelerating this feedback loop with AI, your organization can support better outcomes and deliver a richer learning experience for every candidate.

Keep scoring consistent

Support consistency across multiple scorers by automating the initial score and giving SMEs a standard baseline to work from. With AI Scoring, you can ensure every learner is scored to the same high standard, no matter how large your learning programs grow.

Tailor scoring to your program

AI Scoring is built around your program’s specific criteria, ensuring results are high-quality and aligned with your learning objectives. By creating scoring rubrics that define how each question should be evaluated, AI Scoring can generate scoring suggestions accurately and automatically. This takes the heavy lifting out of scoring large volumes of assessments, while still leaving scorers with the final say on every score. Find out more about how AI Scoring works here.

Coming soon: Create skill-based assessments using advanced assessment options to verify employee competency

Unlock real-world skills testing at scale

Scaling L&D programs for highly regulated or skill-based industries has become far more attainable in recent years. New testing capabilities mean that previously resource-heavy and costly assessment formats are now easier to deliver at scale. Scoring them, however, has remained extremely difficult. Until now.

From simple short responses to complex audio and video responses, AI now makes it possible to score constructed responses at scale while keeping you in control of the final score. Grow your learning programs and deliver feedback faster with a cost-effective and efficient solution that combines AI efficiency with human oversight.

FAQ

How do I get access to AI Scoring?

If you’re already a customer, contact your account manager to request activation of Analytic rubrics and AI Scoring in your customer area. If you are not yet a Questionmark customer, please hit the ‘Talk to us’ button to arrange a chat with us.

What AI models does AI Scoring use?

We engineer on top of existing public large language models (LLMs). This allows us to optimize for grading use cases while still benefiting from the rapid pace of improvement in the generative AI industry. We are not tied to a specific LLM. We use prompting, ensembling (i.e. for evaluating different elements of a response, there may be more than one call to an LLM), and a variety of proprietary techniques to achieve our high accuracy.
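As a rough illustration of that ensembling idea, the sketch below gives each element of a response its own LLM call and repeats each call for a simple majority vote. This is a minimal sketch only, not Questionmark’s actual pipeline; call_llm, the prompt format, and the vote count are hypothetical placeholders.

```python
from statistics import mode

def call_llm(prompt: str) -> int:
    """Hypothetical placeholder for a call to a public LLM that returns a
    parsed integer score; the real prompts and techniques are proprietary."""
    raise NotImplementedError

def score_element(response: str, element: str, criteria: str, votes: int = 3) -> int:
    """Ensembling step: repeat the same scoring call and take the majority vote."""
    prompt = (
        f"Score the response on '{element}'.\n"
        f"Criteria: {criteria}\n"
        f"Response: {response}\n"
        "Reply with a single integer."
    )
    return mode(call_llm(prompt) for _ in range(votes))

def score_response(response: str, rubric: dict[str, str]) -> dict[str, int]:
    """One suggested score per rubric element, each from its own LLM call(s)."""
    return {
        element: score_element(response, element, criteria)
        for element, criteria in rubric.items()
    }
```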

What is the Rubric Editor and how do analytic rubrics work?

The Rubric Editor lets you create and manage analytic rubrics, which define how responses are scored across multiple dimensions using a consistent scale. You can add rubric levels such as Very Good, Good, Poor, and Incomplete, and define the criteria for each level. Each dimension in the rubric automatically uses these levels, ensuring consistent scoring.

To get started, create or edit an Essay or File Upload question in the Standard Item Bank and set the Scoring style to Analytic rubric. A scoring instructions field, rubric levels, and dimensions will appear in the question editor. You can add new levels or dimensions as needed, and edit descriptions and scores to match your program’s requirements. The maximum score for a question is calculated automatically based on the highest rubric level and the number of dimensions.
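To make the automatic max-score rule concrete, here is a minimal sketch of an analytic rubric as a data structure. The names (RubricLevel, AnalyticRubric) and example levels are hypothetical, not Questionmark’s actual schema; the point is the calculation: highest rubric level score multiplied by the number of dimensions.

```python
from dataclasses import dataclass

@dataclass
class RubricLevel:
    name: str    # e.g. "Very Good"
    score: int   # points a dimension earns at this level

@dataclass
class AnalyticRubric:
    levels: list[RubricLevel]   # shared by every dimension
    dimensions: list[str]       # each dimension is scored against the same levels

    def max_score(self) -> int:
        # Highest rubric level score times the number of dimensions.
        return max(level.score for level in self.levels) * len(self.dimensions)

rubric = AnalyticRubric(
    levels=[RubricLevel("Very Good", 3), RubricLevel("Good", 2),
            RubricLevel("Poor", 1), RubricLevel("Incomplete", 0)],
    dimensions=["Clarity", "Accuracy", "Completeness"],
)
print(rubric.max_score())  # 3 points x 3 dimensions = 9
```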

Once your rubric is set up, it integrates with the Scoring Tool. Graders can use the rubric to review responses, apply consistent scoring, and, if AI Scoring is enabled, receive AI-generated suggestions that they can adjust before submitting final scores. This workflow saves time, ensures accuracy, and keeps graders in control.
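As a sketch of that human-in-the-loop step, under assumed types that are not the Scoring Tool’s real API, an AI suggestion keeps its AI flag until a grader edits it, at which point the score is recorded as human-modified:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Score:
    value: int
    ai_generated: bool  # True while the score is still the unedited AI suggestion

def review(suggestion: Score, override: Optional[int] = None) -> Score:
    """Grader review: accept the AI suggestion or replace it before submitting."""
    if override is not None and override != suggestion.value:
        # An edited score loses its AI flag and is recorded as human-modified.
        return Score(value=override, ai_generated=False)
    return suggestion

final = review(Score(value=7, ai_generated=True), override=8)
print(final)  # Score(value=8, ai_generated=False)
```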

Will my data be used to train public AI models?

No, your data won’t be used to train public models. Our AI service providers do not use any of the information provided to us or generated by us to train any model.

Is my content stored or used to train the underlying LLMs?

No. AI Scoring ensures that your intellectual property remains protected. The data used during scoring is not stored or used to train any of the LLMs integrated in our solutions.

Who owns the copyright to AI-generated scores and feedback?

AI Scoring is built with data security and IP ownership in mind. Neither Questionmark nor the LLMs used claim copyright over source material or the AI-generated scores and feedback. Customers retain ownership of the scoring output in line with the applicable legal jurisdiction.

Who owns the scoring feedback and results?

Customers retain full ownership of the scoring feedback and results produced via AI Scoring, subject to any applicable legal frameworks.

Is there a limit on response length?

There is a limit of 25,000 characters per response.

Can AI-generated scores be edited?

Yes, AI scores can be reviewed and modified using the Scoring Tool before submission.

How can I tell whether a score has been edited by a human?

When a score is edited, the AI highlight is removed, and the updated score appears in blue to indicate human modification.

Do spelling mistakes or typos affect scoring?

Not by default. Spelling or typos will only impact scoring if the rubric explicitly includes them as a trait, such as a row for grammar, mechanics, or conventions. If no such criterion is defined, the AI model typically ignores minor errors and focuses on the overall quality and meaning of the response.

What should I check if AI scores aren’t appearing?

First, ensure Analytic rubrics are enabled for your customer area. Then confirm the assessment is using an Analytic rubric.

Can AI Scoring submit final scores automatically?

No. AI Scoring is an assistive tool. All final scores require human review and submission to ensure accuracy and fairness.

Responsible AI you can trust

Secure

At no time does OpenAI store or “learn” from the prompts entered by you or your authors.

Unified

AI tools seamlessly integrate with the overall assessment creation experience.

Human-Controlled

Discover time-saving, AI-enhanced test creation that keeps you in the driving seat.

Scale scoring with innovative AI

Leverage human-driven AI Scoring that empowers you to:

1. Unlock scoring at scale
2. Save SME time
3. Deliver faster candidate feedback
4. Keep your human scorers in control