Collaborative Authoring and Item Management
Questionmark enables learning professionals, instructional designers, educators, testing professionals and subject matter experts to author questions and organize them into surveys, quizzes, tests and exams. Browser-based and SME authoring tools enable learning, assessment and education professionals to streamline the question authoring and content harvesting process.
Questionmark’s Authoring module provides comprehensive authoring and item banking capabilities, enabling assessment, learning, education and HR professionals to: author and import items; manage items within a collaborative yet secure environment; organize items into tests, exams and quizzes; and publish assessments for delivery.
The authoring module is browser-based and compatible with the most popular web browsers, such as Chrome, Firefox, Safari and Internet Explorer.
Questionmark’s authoring environment provides a secure, collaborative workspace with intuitive, browser-based interfaces for creating and managing items. Authors can:
- Author and edit questions/items
- Include stimulus such as images, audio or video
- Insert mathematical formulas using a built-in editor
- Choose from 20 different item types
- Organize questions by topic and sub-topic folders
- Define feedback at question and topic levels
- Define and apply metatags
- Review and try out questions
- Define sophisticated scoring algorithms
- Search item banks by keyword, author, metatags, stem text, choice text and more
Questionmark’s item banking capabilities provide a secure, collaborative environment for test authors and administrators to keep exam content up to date and ready for deployment in production assessments. Item banking from Questionmark enables credentialing organizations and test publishers to:
- Organize items hierarchically and by metatag to align with skills and competencies
- Track changes to items and roll back to previous versions
- Maintain item version histories for auditing and defensibility
- Use advanced item search features to identify items that may need updating
- Collaborate securely in creating, reviewing and selecting exam questions
- Import items from a broad range of formats: CSV (.csv); Questionmark Qpack (.qpack4); LXR* Test 6.1 Merge (.LXRMerge, .mrg); Blackboard Upload (.txt); Blackboard* 6.x, 8.x and 9.x Pools (.zip); Moodle (.xml)
- Classify, categorize and structure item storage for enhanced automation and scalability in test form creation and maintenance
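To illustrate the spreadsheet-style import route, the fragment below shows the general shape a CSV item file might take. The column layout here is invented for this sketch and is not Questionmark’s actual import specification; consult the product documentation for the required columns.

```
question_type,topic,stem,choice_1,choice_2,choice_3,correct,points
MC,"Safety/PPE","Which item must be worn at all times?","Hard hat","Sunglasses","Scarf",1,1
TF,"Safety/Signage","A red sign always indicates danger.","True","False",,2,1
```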
Questionmark provides a powerful set of tools for organizing your items into tests, exams, quizzes or surveys. Authors may opt to create simple forms, selecting individual items to be included in the assessment. Or, they may opt to create assessments that pull items at random that meet specific selection criteria, and then define sophisticated branching within the assessment based on how the participant answers questions or performs on certain groups or blocks of questions. A few of the assessment authoring capabilities include:
- Implement assessment and topic scoring schemes
- Select questions by metatag or topic
- Define topic requisites to be met as part of an overall cut score
- Randomize ordering of questions and choices
- Require delivery of high-stakes exams via Questionmark Secure
- Define outcomes based on score-bands and topic-level scores
- Create automatic emails with assessment results to participants and/or other stakeholders
- Specify varying levels of item, topic and assessment feedback provided to participants
- Set time limits for assessments (and override time limits if required for certain candidates/participants)
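The outcome logic described above (score bands combined with topic-level requisites) can be sketched in a few lines. The band boundaries, topic names and outcome labels below are illustrative assumptions, not Questionmark defaults.

```python
# Illustrative sketch: pick an outcome from the overall score band, but
# fail the attempt if any required topic falls below its minimum
# (a simple form of the "topic requisites" idea). All thresholds and
# labels here are hypothetical.

def select_outcome(total_pct: float, topic_pcts: dict[str, float],
                   topic_minimum: float = 60.0) -> str:
    """Return an outcome label for a completed assessment attempt."""
    if any(pct < topic_minimum for pct in topic_pcts.values()):
        return "Fail - topic requisite not met"
    if total_pct >= 80.0:
        return "Pass with distinction"
    if total_pct >= 60.0:
        return "Pass"
    return "Fail"

print(select_outcome(85.0, {"Safety": 90.0, "Procedures": 75.0}))
# -> Pass with distinction
```

In a real deployment, each outcome would additionally trigger the feedback text, result emails or branching defined by the assessment author.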
Maintaining an easily accessible “Item History” that documents edits made and reviews performed can be crucial to defending the fairness of your assessment development processes. Questionmark records each version of an item, as well as comments, as items are changed. Item version history can also be helpful during the review process to understand what edits were made, when the edits were made and who made the edits. You can compare two revisions side-by-side to examine what changes were made -- and you can use the “rollback” function to discard edits and roll back to a previous version of an item.
Involving subject matter experts (SMEs) in the item authoring process can be vital to ensuring valid content. Within Questionmark’s authoring tools, you can provide SMEs access to just the content – specific folders of items -- that they need to work on… no more, no less. You can assign varying levels of permissions to authors and reviewers based on what their role will be in the process.
Once an assessment has been authored and is ready to be delivered to participants, the author or administrator may “publish” the assessment and its items. This workflow helps protect “live” assessments and items from inadvertent changes, providing administrators with control over how and when updates occur.
Many organizations need to deliver assessments to participants all over the world -- and need the ability to deliver assessments in many different languages. Questionmark's Translation Management System provides translation management features and project management capabilities to manage the process of localizing items and assessments.
Questionmark offers a wide variety of question types, including:
- Drag-and-Drop: the participant clicks and drags up to ten images into position. The feedback and score depend upon the final position of the images.
- Essay question: The participant answers by typing a response. Questionmark’s Scoring Tool enables grading essay questions within assessments by using customized rubrics. You may define what is right or wrong in advance by entering a list of acceptable answers or print out a report of the responses for manual grading. The logic can also allow scoring based on the presence or absence of keywords or key phrases. This question type is also used to solicit opinions or suggestions on a particular subject.
- Job Task Analysis: Job Task Analysis (JTA) surveys are used to analyze what tasks within a job role are most important. They are often used to construct and validate certification programs, to ensure that the questions being asked are relevant to the job.
- Explanation screens: insert text or graphics for the participant to view prior to answering a series of questions.
- File Upload: participants are often required to complete an assignment which requires them to create a document in the form of a computer file. Question authors can use File Upload questions to enable participants to upload their document files.
- Fill-in-the-blank: the participant is presented with a statement where one or more words are missing and fills in the missing words. The score can be determined by checking each blank against a list of acceptable words, and responses can be checked for misspelled words.
- Hotspot: a participant clicks on a picture to indicate their choice. Depending upon their choice, certain feedback and grades will be assigned. A graphics editor is provided to simplify specifying the choice areas.
- Knowledge Matrix: this question type presents several multiple-choice questions together where the participant selects one choice for each statement or question presented. This question type is used to cross-relate responses from a single item.
- Likert scale: the participant selects one of several options such as "strongly agree" through "strongly disagree" that are weighted with numbers to aid analysis of the results.
- Matching: two series of statements/words are presented and the participant must match items from one list to items within the other list.
- Multiple choice: the participant selects one choice from up to 40 possible answers. There is no limit to the length of each answer.
- Multiple Response (Right/Wrong answer): similar to multiple choice except the participant is not limited to choosing one response; he/she can select none, one or more of the choices offered. In the “Right/Wrong” version of this question type, there are only two potential outcomes – the participant must select the exact combination of correct choices in order to achieve the ‘correct’ outcome.
- Multiple Response (Score per choice): similar to Multiple Response (Right/Wrong) but instead of an “all or nothing,” dichotomous scoring algorithm, the “score per choice” approach enables more potential outcomes: participants may achieve maximum points by selecting all of the correct choices, or varying levels of “partial credit.” Question authors may also define a maximum number of choice selections by the participant, which can make this question type particularly useful for surveys. For example, if you wanted to ask survey respondents “What are your top 3 preferences of the following choices,” the question can be configured to prevent participants from selecting more than 3 choices.
- Numeric questions: a participant is prompted to enter a numeric value, and this may be scored as one value for an exact answer and another score if the response is within a range.
- Pull-Down List (selection question): a series of statements are presented, and the participant matches each statement with an entry from a pull-down list.
- Ranking (Rank in Order): a list of choices must be ranked numerically with duplicate matches not allowed.
- Select-a-blank: the participant is presented with a statement where a word is missing; the participant selects the missing word from a pull-down list to indicate their answer.
- Survey Matrix: This question type enables you to include multiple rows of Likert questions within a table with column headers included.
- True/False: the participant selects "true" or "false" in response to the question.
- Text Match: the participant types in a single word or a few words to indicate their answer. You define right or wrong words or phrases in advance by entering a list of acceptable answers. The grading logic can also allow scoring based on the presence or absence of keywords or key phrases and check for misspellings.
- Yes/No: the participant selects "Yes" or "No" in response to the question.
Questionmark's authoring module provides multilingual interfaces available in more than 30 languages:
- Arabic (العربية)
- Bulgarian (Български)
- Catalan (Català)
- Chinese, Simplified (简体中文)
- Chinese, Traditional (繁體中文)
- Croatian (Hrvatski)
- Czech (Čeština)
- Danish (Dansk)
- Dutch (Nederlands)
- Finnish (Suomi)
- French (Français)
- German (Deutsch)
- Greek (Ελληνικά)
- Hebrew (עברית)
- Hindi (हिन्दी)
- Hungarian (Magyar)
- Indonesian (Bahasa Indonesia)
- Italian (Italiano)
- Japanese (日本語)
- Korean (한국어)
- Norwegian Bokmål (Bokmål)
- Persian (فارسی)
- Polish (Polski)
- Portuguese, Portugal (Português)
- Romanian (Română)
- Russian (Русский)
- Serbian (Српски)
- Slovenian (Slovenščina)
- Spanish (Español)
- Swedish (Svenska)
- Thai (ภาษาไทย)
- Turkish (Türkçe)
- Ukrainian (Українська)
- Urdu (اردو)
- Vietnamese (Tiếng Việt)