Glossary | Questionmark



ability / trait parameter

In item response theory (IRT), a theoretical value indicating the level of a participant on the ability or trait measured by the test; analogous to the concept of true score in classical test theory.
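
As an illustration (not a Questionmark feature), the ability parameter is commonly written as theta, and the widely used two-parameter logistic (2PL) IRT model relates it to the probability of a correct response; the function name and defaults below are hypothetical:

```python
import math

def p_correct(theta, a=1.0, b=0.0):
    """Probability of a correct response for a participant with ability
    theta under the 2PL IRT model, where a is the item discrimination
    and b the item difficulty (illustrative defaults)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A participant whose ability equals the item difficulty has a 50% chance
# of answering correctly, regardless of the discrimination parameter.
```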

ability testing

The use of standardized tests to evaluate the current performance of a person in some defined domain of cognitive, psychomotor, or physical functioning.

absolute score interpretation

The level of an individual's or group's competence in some defined criterion domain as inferred from the test score.


accommodation

A reasonable modification in an assessment instrument or its administration made to compensate for the effects of a qualified disability without altering the purpose of the assessment instrument.


accountability

Responsibility of a certification board, advisory committee, or other sponsor of a certification program to its stakeholders to demonstrate the efficacy and fairness of certification policies, procedures and assessment instruments.


A status awarded by a certification agency to a candidate that has demonstrated compliance with the standards set forth in the certification program.


acculturation

The process whereby individuals from one culture adopt the characteristics and values of another culture with which they have come in contact.

achievement levels / proficiency levels

Descriptions of student or adult competency in a particular subject area, usually defined as ordered categories on a continuum, often labeled from "basic" to "advanced," that constitute broad ranges for classifying performance.

adaptive testing

A sequential form of individual testing in which successive items in the test are chosen based primarily on the psychometric properties and content of the items and the participant's response to previous items.

adjusted validity / reliability coefficient

A validity or reliability coefficient -- most often, a product-moment correlation -- that has been adjusted to offset the effects of differences in score variability resulting from different populations.  See restriction of range or variability.


ADL

In 1997, the US Department of Defense and the White House Office of Science and Technology Policy launched the Advanced Distributed Learning (ADL) initiative. From the outset, ADL has targeted Web-based education, and its work is coordinated with other organizations such as IEEE, IMS and AICC. This joint work produced the Sharable Courseware Object Reference Model (SCORM), which includes a reference model for sharable educational software objects, a runtime environment and a content aggregation model.

administrative independence

An organizational structure for the governance of a certification program that ensures control over the essential certification and re-certification decisions without being subject to approval by or undue influence from any other body.  See Autonomy.

advisory committee

A group of individuals appointed or elected to recommend and implement policy related to certification program operation.

age equivalent

The chronological age, within a given population, for which a given score is the average score.  Thus, if children ten years and six months of age have a median score of 17 on a test, the score 17 is said to have an age equivalent of 10-6.


agent

A resource that acts or has the power to act or play a significant role in the lifecycle of a credential.


AICC

The Aviation Industry CBT Committee (AICC) is the natural response to the educational standardization challenge from one of the largest users of educational software. Its activities are targeted, among others, at the definition of software and hardware requirements for student computers, needed peripherals, multimedia formats for course contents, and user interface properties.


ALIC

The Advanced Learning Infrastructure Consortium (ALIC) is a collaborative effort of the Japanese government and industry and academic professionals working in the e-learning field.

alternate forms

Two or more versions of a test that are considered interchangeable, in that they measure the same constructs, are intended for the same purposes, and are administered using the same directions.  Alternate forms is a generic term used to refer to any of three categories.  Parallel forms have equal raw score means, equal standard deviations, and equal correlations with other measures for any given population.  Equivalent forms do not have the statistical similarity of parallel forms, but the dissimilarities in raw score statistics are compensated for in the conversions to derived scores or in form-specific norm tables.   Comparable forms are highly similar in content but the degree of statistical similarity has not been demonstrated.

analytic scoring procedure

A procedure in which the judgment of each critical dimension of performance is undertaken separately and the resultant values are combined for an overall score.  In some instances, scores on the separate dimensions may also be used in interpreting performance.

anchor test

A common set of items administered with each of two or more different forms of a test for the purpose of equating the scores of these forms.

answer key

The key that describes the scoring scenario for a question or test.


appeal

Request by applicant, candidate or certified person for reconsideration of any adverse decision made by the certification body related to her/his desired certification status.


applicant

An individual who declares interest in earning a credential offered by a certification program, usually through a request for information and the submission of materials. See Candidate.


ARIADNE

The Alliance of Remote Instructional Authoring and Distribution Networks for Europe (ARIADNE) was part of the European Commission's fourth framework program. The main working fields of this alliance include computer networks for education and learning, methodologies for the development, management and reuse of educational contents, syllabus definition for computer based training, and educational metadata.


assessment

A method of obtaining evidence from tests, examinations, questionnaires, surveys and collateral sources used to draw inferences about the abilities, qualities, performance or outcome of a person or action.

assessment instrument

The methods for determining if candidates possess the necessary knowledge and/or skills related to the purpose of the certification.

Assessment Profile

A resource that describes the key characteristics of an assessment for a credential. Characteristics described include, but are not limited to, processes for assessment development, maintenance, selection and evaluation as well as assessment examples.

Associate Degree

An award level that normally requires at least 2 but less than 4 years of full-time equivalent college-level work.

attention assessment

The process of collecting data and making an appraisal of a person's ability to focus on the relevant stimuli in a situation.  The assessment may be directed at mechanisms involved in arousal, sustained attention, selective attention and vigilance, or limitation in the capacity to attend to incoming information.

Audience Level

Class of levels in an academic or training progression of the typical person seeking the credential.

authoring system

A generic name for one or more computer programs that allow a user to author and edit items (i.e. questions, choices, correct answers, scoring scenarios and outcomes) and maintain test definitions (i.e. how items are delivered within a test).

automated narrative report

A programmed, computer-generated interpretation of an examinee's test scores or test score profile, based on the level of each score and the interrelationships among the scores, and on empirical data and/or expert judgment.


autonomy

Control over all essential certification and re-certification decisions without being subject to approval by or undue influence from any other body. See Administrative Independence.


Bachelor's Degree

An award level that normally requires at least 4 but not more than 5 years of full-time equivalent college-level work.


badge

A recognition designed to be displayed as a marker of accomplishment, activity, achievement, skill, interest, association, or identity.


battery

A set of tests standardized on the same population, so that norm-referenced scores on the several tests can be compared or used in combination for decision making.


bias

In a statistical bias context, a systematic error in a test score.  In a fairness bias context, bias may refer to the inappropriateness of content in the assessment instrument, in terms of its irrelevance, overemphasis, exclusion, under-representation, or construct-irrelevant components in test scores.  Fairness bias usually favors one group of participants over another.  In an eligibility bias context, bias refers to the inappropriateness or irrelevance of requirements for certification or re-certification if they are not reasonable prerequisites for competence in a profession, occupation, role or for product use and support.  See Fairness.


bilingualism

The characteristic of being relatively proficient in two languages.


blueprint

A document that contains information about the assessment, including its stakeholders, the intended candidates, eligibility, job analysis, the conditions under which the assessment must be conducted, content domains, and other information to ensure that assessments are valid, equivalent and unbiased.

bubble sheets

Paper forms that contain printed circles (i.e. bubbles), and other guide marks, to prompt a participant to fill in the form for later scanning by an optical mark reader. 



CAA

Computer Assisted Assessment.  A common term used to describe the use of computers to support assessments.


CAL

Computer Aided Learning.  A common term used to describe the use of computers to support learning.


calibration

The process of setting the test score scale, including mean, standard deviation, and possibly shape of score distribution, so that scores on a scale have the same relative meaning as scores on a related scale.


candidate

An individual who has met the eligibility qualifications for, but has not yet earned, a credential awarded through a certification program, or a person that participates in a test, assessment or exam by answering questions. See Applicant.


CAT

Computer Adaptive Testing.  A method by which a computer selects the range of questions to be asked based on the performance of the participant on previous questions.


CBA

Computer Based Assessment.  A common term used to describe the use of computers to deliver, mark, score, and analyze assessments.


CBL

Computer Based Learning.  A common term used to describe the use of computers to support learning.


CEN

The European Committee for Standardization (Comité Européen de Normalisation, CEN).


CEN/ISSS

The European Committee for Standardization (Comité Européen de Normalisation, CEN) hosts the Information Society Standardization System (ISSS) subcommittee.


CEN/ISSS/LT

Educational standardization activities at CEN ISSS take place within the Learning Technologies Workshop (CEN/ISSS/LT). The main efforts are devoted to reuse and interoperation for educational resources, educational collaboration, metadata for educational contents, and learning process quality, all with the European cultural diversity in mind.


certificant

An individual who has earned a credential awarded through a certification program.


certificate

A written statement or document from the certification agency confirming the competence of an individual.


certification

A process, often voluntary, by which individuals who have demonstrated the level of knowledge and skill required in the profession, occupation, role or the competent use or support of a product, are identified to the public and other stakeholders.   See also licensing, credentialing.

certification agency

The organizational or administrative unit that sponsors a certification program.  See also licensing, credentialing.

certification board

A group of individuals appointed or elected to govern one or more certification programs as well as the certification agency, and responsible for all certification decision making, including governance.

certification body

The organizational or administrative unit that sponsors a certification program and maintains certification records.  See Registration Body

certification committee

A group of individuals appointed or elected to recommend and implement policy related to certification program operation.

certification process

All activities by which a body establishes that a person fulfils specified competence requirements, including application, evaluation, decision on certification, surveillance and recertification, use of certificates and logos/marks.

certification processing

The process of matching an individual's accomplishments against the requirements for a certification program, and awarding certifications when all requirements have been met.

certification program

The standards, policies, procedures, assessment instruments and related products and activities through which individuals are publicly identified as qualified in a profession, occupation, role or for the competent use or support of a product.

certification scheme

Specific certification requirements related to specified categories of persons to which the same particular standards and rules, and the same procedures apply.

certification system

Set of procedures and resources for carrying out the certification process as per a certification scheme, leading to the issue of a certificate of competence including maintenance.


choice

One of the possible responses that a participant might select.  Choices contain the correct answer(s) and distracters.

class mean

The average score for all participants in a class for a particular test.

class standard deviation

The standard deviation of the scores achieved by participants within a class for a particular test.

classical test theory

The view that an individual's observed score on a test is the sum of a true score component for the participant, plus an independent measurement error component.  A few simple premises about these components lead to important relationships among validity, reliability, and other test score statistics.
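
The premises above can be sketched with a small simulation (illustrative only; all numbers and names are hypothetical): observed scores are generated as true score plus independent error, and reliability emerges as the share of observed-score variance due to true scores.

```python
import random

random.seed(42)

# Simulate classical test theory: each observed score is a true score
# plus an independent error component.
true_scores = [random.gauss(50, 10) for _ in range(1000)]
errors = [random.gauss(0, 5) for _ in range(1000)]
observed = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability is the share of observed-score variance due to true scores,
# in expectation 100 / (100 + 25) = 0.8 for these simulation parameters.
reliability = variance(true_scores) / variance(observed)
```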

classification accuracy

The degree to which neither false positive nor false negative categorizations and diagnoses occur when a test is used to classify an individual or event.  See sensitivity and specificity.


coaching

Planned short-term instructional activities in which prospective participants participate prior to the test administration for the primary purpose of increasing their test scores.  Coaching typically includes simple practice, instruction on test-taking strategies, and so forth.  Activities that approximate the instruction provided by regular school curricula or training programs are not typically referred to as coaching.

coefficient alpha

An internal consistency reliability coefficient based on the number of parts into which the test is partitioned (e.g., items, subtests, or raters), the interrelationships of the parts, and the total test score variance.  Also called Cronbach's alpha, and, for dichotomous items, KR 20.
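
The relationship described above can be computed directly. A minimal sketch (the function name is illustrative, not part of any product API), using the standard formula alpha = k/(k-1) * (1 - sum of item variances / total score variance):

```python
def cronbach_alpha(scores):
    """Coefficient (Cronbach's) alpha for a score matrix: one row per
    participant, each row a list of k item scores."""
    k = len(scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four participants, three dichotomous items (a KR-20 case):
# cronbach_alpha([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]) -> 0.75
```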


commentary

Comments, remarks and observations that clarify terms, provide examples of practice that help explain a standard, or offer suggestions regarding evidence that must be documented to demonstrate compliance.


competency

An assertion of measurable or observable knowledge, skills and abilities.

Competency Framework

The classification of structured sets of resources designed for use as value vocabulary terms for description and classification in the credentialing context.

composite score

A score that combines several scores by a specified formula.

computer assisted assessment

A common term used to describe the use of computers to support assessments. 

computer based assessment

A common term used to describe the use of computers to deliver, mark, score, and analyze assessments. 

computer based mastery test

An adaptive test administered by computer that indicates whether or not the participant has mastered a certain domain.  The test is not designed to provide scores indicating degree of mastery, but only whether the test performance was above or below some specified level.

computerized adaptive test

A method by which a computer selects the range of questions to be asked based on the performance of the participant on previous questions.  See adaptive test.
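
A deliberately naive selection rule can illustrate the idea (real CAT engines use IRT-based information functions; the function name and data here are hypothetical): pick the unadministered item whose difficulty lies closest to the current ability estimate.

```python
def next_item(item_difficulties, ability_estimate, administered):
    """Naive adaptive-selection sketch: choose the unadministered item
    whose difficulty is closest to the current ability estimate."""
    candidates = [i for i in range(len(item_difficulties))
                  if i not in administered]
    return min(candidates,
               key=lambda i: abs(item_difficulties[i] - ability_estimate))

# With difficulties [-2.0, -0.5, 0.4, 1.5] and an ability estimate of 0.3,
# item 2 (difficulty 0.4) is selected next.
```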

Condition Profile

A description of the conditions to which the credential is subject during its lifecycle including the requirements to attain the credential.

conditional measurement error variance

The variance of measurement efforts that affect the scores of examinees at a specified test score level; the square of the conditional standard error of measurement.

conditional standard error of measurement

The standard deviation of measurement errors that affect the scores of examinees at a specified test score level.

confidence interval

An interval between two values on a score scale within which, with specified probability, a score or parameter of interest lies.  The term is also used in these standards to designate Bayesian credibility intervals that define the probability that the unknown parameter falls in the specified interval.
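
A common way to form such an interval around an observed test score uses the standard error of measurement (SEM); this is a sketch under normal-error assumptions, with an illustrative function name:

```python
def score_confidence_interval(observed_score, sem, z=1.96):
    """Approximate confidence interval for a true score: the observed
    score plus or minus z standard errors of measurement
    (z = 1.96 gives roughly 95% coverage under normal-error assumptions)."""
    return (observed_score - z * sem, observed_score + z * sem)

# An observed score of 72 with SEM 3.0 gives roughly (66.12, 77.88).
low, high = score_confidence_interval(72.0, 3.0)
```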

construct domain

The set of interrelated attributes (e.g., behaviors, attitudes, values) which are included under a construct label.  A test typically samples from this construct domain.

construct equivalent

The extent to which the construct measured by one test is essentially the same as the construct measured by another test.  Also, the degree to which a construct measured by a test in one cultural or linguistic group is comparable to the construct measured by the same test in a different cultural or linguistic group.

construct irrelevance

The extent to which test scores are influenced by factors that are irrelevant to the construct that the test is intended to measure.  Such extraneous factors distort the meaning of test scores from what is implied in the proposed interpretation.

constructed response item

An exercise for which examinees must create their own responses or products rather than choose a response from an enumerated set.

construct underrepresentation

The extent to which a test fails to capture important aspects of the construct that the test is intended to measure.  In this situation, the meaning of test scores is narrower than the proposed interpretation implies.

content domain

The set of organized categories, characterizing subject matter, under which behaviors, knowledge, skills, abilities, attitudes and other characteristics may be represented in the specifications for an assessment instrument, and by which items are classified.

content standard

A statement of the theoretical concept or characteristic that a test is designed to measure.

continuing competence

The ability to provide service at specified levels of knowledge and skill, not only at the time of initial certification but throughout an individual’s professional career. See Re-certification and Continuing Education.

continuing education

Activities, often short courses, that certified professionals engage in to receive credit for the purpose of maintaining continuing competence and renewing certification. See Re-certification and Continuing Competence.

convergent evidence

Evidence based on the relationship between test scores and other measures of the same construct.

corrective scoring

A calculation used to offset the effects of guessing in objective tests.
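
One common correction (often called formula scoring; the function name here is illustrative) subtracts a fraction of the wrong answers so that random guessing yields zero expected gain:

```python
def corrected_score(num_right, num_wrong, choices_per_item):
    """Formula scoring: R - W/(k-1), where k is the number of choices per
    item.  Omitted items are neither rewarded nor penalized; under purely
    random guessing the penalty cancels the expected gain from guessing."""
    return num_right - num_wrong / (choices_per_item - 1)

# 30 right and 8 wrong on 4-choice items: 30 - 8/3, about 27.33.
```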

Cost Profile

The type and nature of direct costs one would incur if one were to pursue attaining a credential including background checks, standalone assessments, tuition fees, application fees, and learning resource fees.


Course

A description of an educational experience or event which may be offered as distinct instances at different times and places, or through different media or modes of study, which aims to build knowledge, competence or ability of learners.


credential

A qualification, achievement, personal or organizational quality or aspect of an identity typically used to indicate suitability. Examples include (but are not limited to) government ID, home address, university degree.

Credential Alignment Object

Alignment to a credentialing framework used to describe the alignment of a resource in the credentialing environment to frameworks such as concept schemes, specialized controlled lists, and competencies.

credential consumer

An entity that requests a credential for processing. 

credential curator

A program, such as a storage vault or personal verifiable claim wallet, that stores and protects access to a holder's credentials and verifiable claims.

Credential Organization

An organization that plays one or more key roles in the lifecycle of a credential.

credential verification

The process that cryptographically demonstrates the authenticity of a credential. 


credentialing

Granting, by some authority, a credential to a person -- such as a certificate, license, or diploma -- that signifies a certain level of competence in some domain of knowledge or activity.

criterion domain

See construct domain:  the construct domain of a variable used as a criterion.

criterion-referenced score interpretation

A score interpretation that does not depend upon the score's rank within, or relationship to, the distribution of scores for other examinees.   Examples of criterion-referenced interpretations include comparison to cut scores, interpretations based on expectancy tables, and domain-referenced score interpretations.

criterion-referenced test

A test that allows its users to make score interpretations in relation to a functional performance level, as distinguished from those interpretations that are made in relation to the performance of others.  See also domain-referenced test.


cross-validation

A procedure in which an empirically derived scoring system or set of weights from one sample is applied to a second sample in order to investigate the stability of prediction of the scoring system or weights.

cut score

A specified point on a score scale, such that scores at or above that point are interpreted differently from scores below that point.  Sometimes there is only one cut score, dividing the range of possible scores into "passing" and "failing" or "mastery" and "nonmastery" regions.  Sometimes two or more cut scores may be used to define three or more score categories, as in establishing performance standards.  See also, performance standards.
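
Classification against one or more cut scores can be sketched as follows (the function name and category labels are illustrative; "at or above" a cut score places a score in the higher category):

```python
import bisect

def classify(score, cut_scores, labels):
    """Map a score to a category using ascending cut_scores.
    len(labels) must equal len(cut_scores) + 1; scores at or above a cut
    score fall in the higher category."""
    return labels[bisect.bisect_right(cut_scores, score)]

# Two cut scores define three performance categories:
# classify(55, [50, 80], ["basic", "proficient", "advanced"]) -> "proficient"
```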


data controller

The entity that determines the purposes, conditions and means of the processing of personal data.

data erasure

Also known as the 'Right to be Forgotten', it entitles the data subject to have the data controller erase his/her personal data, cease further dissemination of the data, and potentially have third parties cease processing of the data.

data processor

The person or organization that processes data on behalf of the data controller, e.g. a cloud service provider.

data protection authority

A national authority tasked with the protection of data and privacy, as well as the monitoring and enforcement of data protection regulations.

Data Protection Officer

An expert on data privacy who works to ensure that an entity is adhering to the policies and procedures set forth in the GDPR (see GDPR for more info).

data subject

A person whose personal data is processed by a controller or processor.


database

A collection of information/data, often organized within tables, within a computer's mass storage system.  Databases are structured in a way to provide for rapid search and retrieval by computer software.  The following databases are used by testing systems: item, test definition, scheduling and results.


DCMI

The Dublin Core Metadata Initiative (DCMI) is an open forum engaged in the development of interoperable online metadata standards that support a broad range of purposes and business models. DCMI's activities include consensus-driven working groups, global workshops, conferences, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices.


degree

An academic credential granted upon completion of a program or course of study, typically over multiple years at postsecondary education institutions.

delivery channel

One or more testing centers, usually managed by a delivery provider (i.e. an organization that provides candidate scheduling services, computers, proctoring services, and the space in which to conduct a computerized test).

delivery provider

An organization that provides candidate scheduling services, computers, proctoring services, and the space in which  to conduct a computerized test.

derived score

A score to which raw scores are converted by numerical transformation (e.g., conversion of raw scores to percentile ranks or standard scores).
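
Two common derivations, a standard (z) score and a percentile rank, can be sketched as follows (function names are illustrative, and percentile rank has several competing definitions; the "at or below" variant is used here):

```python
def z_score(raw, mean, sd):
    """Standard score: how many standard deviations a raw score lies
    from the group mean."""
    return (raw - mean) / sd

def percentile_rank(raw, all_scores):
    """Percentage of scores in the reference group at or below this raw
    score (one common definition among several)."""
    return 100.0 * sum(1 for s in all_scores if s <= raw) / len(all_scores)

# A raw score of 60 in a group with mean 50 and sd 10 is one standard
# deviation above the mean: z_score(60, 50, 10) -> 1.0
```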

diagnostic and intervention decisions

Decisions based upon inferences derived from psychological test scores as part of an assessment of an individual.  See also intervention.

diagnostic assessment

Assesses knowledge, skills, behaviors, understandings and/or attitudes to determine gaps and potentially provide a diagnosis and prescription of learning and/or other activities.

differential item functioning

A statistical property of a test item in which different groups of participants have different rates of correct item response, conditional upon total test score or equivalent measure.


difficulty

A statistical property, sometimes known as facility, indicating the level of a question, from 0.0 to 1.0.  Calculated as the average score for the question divided by the maximum achievable score.  A facility of 0.0 means that the question is very hard (no-one got it right) and 1.0 means that it is very easy (no-one got it wrong).  A value of 0.5 is generally considered ideal.
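
The calculation described above is a one-liner (the function name is illustrative):

```python
def facility(question_scores, max_score):
    """Facility (difficulty) index: mean score on the question divided by
    the maximum achievable score.  0.0 = very hard, 1.0 = very easy."""
    return sum(question_scores) / len(question_scores) / max_score

# Ten participants on a 1-mark question, six correct: facility = 0.6
```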

digital badge

A recognition designed to be displayed as a marker of accomplishment, activity, achievement, skill, interest, association, or identity offered in digital form.


A formal, published process for the enforcement of standards governing the professional behavior (i.e., ethics) of certificants.

discriminant evidence

Evidence based on the relationship between test scores and measures of different constructs.


discrimination

Discrimination refers to the formula used for calculating the potential of a question to distinguish between stronger and weaker students: the statistical correlation of the question score and the test score, from -1.0 to +1.0.  A high correlation (close to +1.0) means that the question is measuring the same thing as the test.  A low correlation means that there is little relationship between participants getting the question right and getting a good score in the test.  A negative correlation indicates that participants getting the question right generally get a bad overall test score.
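
The correlation described above can be computed with the standard Pearson formula (a sketch; the function name is illustrative):

```python
import math

def discrimination(question_scores, test_scores):
    """Pearson correlation between per-participant question scores and
    total test scores, ranging from -1.0 to +1.0."""
    n = len(question_scores)
    mq = sum(question_scores) / n
    mt = sum(test_scores) / n
    cov = sum((q - mq) * (t - mt)
              for q, t in zip(question_scores, test_scores))
    sq = math.sqrt(sum((q - mq) ** 2 for q in question_scores))
    st = math.sqrt(sum((t - mt) ** 2 for t in test_scores))
    return cov / (sq * st)

# Stronger candidates (higher test scores) getting the item right gives a
# positive value; the reverse pattern gives a negative value.
```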


distracter

One of the choices that a participant may select that is not the correct answer.

Doctoral Degree

The highest award level to be earned for postsecondary study.


documentation

The body of literature (e.g., test manuals, manual supplements, research reports, publications, user's guides, etc.) made available by publishers and test authors to support test use.

domain sampling

The process of selecting test items to represent a specified universe of performance.

domain-referenced test

A test that allows users to estimate the amount of a specified content domain that an individual has learned.  Domains may be based on sets of instructional objectives, for example.  See also criterion-referenced tests and content-related evidence of validity.

drag-and-drop question

A response style where the participant indicates their selection by using a mouse or pointing device to drag and drop graphic elements that illustrate their choice(s).

Duration Profile

A resource describing the time-related aspects of a resource including exact, minimum, and maximum timeframes of an activity.


Earnings Profile

A resource that describes earning and related statistical information for a given credential.


EdNA

Education Network Australia (EdNA) is targeted at promoting the Internet as a supporting tool for computer-based learning among the Australian educational community, from students to content providers.

eligibility requirements

Published criteria, often benchmarks for education, training and experience, with which applicants must demonstrate compliance in order to qualify for the certification.

empirical keying

The strategy of using empirical relationships between individual test items and the criterion of interest as the basis for test scoring.

Employment Outcome Profile

The employment outcomes and related statistical information for a given credential.

encrypted data

Data that is protected through technological measures to ensure that the data is only accessible/readable by those with specified access.


A thing with distinct and independent existence such as a person, organization, concept, or device.

equated forms

Two or more test forms constructed to the same explicit content and statistical specifications and administered under identical procedures (alternate forms); through statistical adjustments, the scores on the alternate forms have been placed on a common scale.


equating

A statistical process used to convert scores on two or more alternate forms of an assessment instrument to a common score for purposes of comparability and equivalence.
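
One simple member of this family is linear (mean-sigma) equating, which maps a score on form X to the form-Y scale by matching standardized positions; a sketch with an illustrative function name:

```python
def linear_equate(score_x, mean_x, sd_x, mean_y, sd_y):
    """Linear (mean-sigma) equating: place score_x on the form-Y scale so
    that it sits the same number of standard deviations from the mean."""
    return mean_y + sd_y * (score_x - mean_x) / sd_x

# A score of 60 on form X (mean 50, sd 10) maps to 65 on form Y
# (mean 55, sd 10): one standard deviation above the mean on both forms.
```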

equivalent forms

See alternate forms.

error of measurement

The difference between an observed score and the corresponding true score or proficiency.  See also standard error of measurement and true score.
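
Under classical test theory the standard error of measurement can be estimated from the score spread and the test's reliability, SEM = SD * sqrt(1 - reliability); a sketch with an illustrative function name:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical test theory estimate: SEM = SD * sqrt(1 - reliability),
    the typical size of the error component in an observed score."""
    return sd * math.sqrt(1.0 - reliability)

# SD = 10 and reliability = 0.91 give an SEM of 3.0.
```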

essay response

A response style where the participant enters an essay in response to the stimulus.

essential element

A statement that is directly related to the blueprint and specifies what a certification program must do to fulfill the requirements of the blueprint.


evaluation

The process that assesses a person’s achievements (fulfillment of the requirements of the scheme) and/or the effectiveness of learning experiences.


examination

A method or procedure used to assess an individual's knowledge, skills and abilities.  Such procedures may involve written or oral responses, or observation of the candidate performing tasks.


examiner

A person deemed by the certifying agency to possess the relevant technical and personal qualifications to conduct an examination as part of the certification process.

Learning Resource

An entity that is used as part of an educational activity (e.g. a textbook), that describes the educational activity (e.g. a lesson plan), or that records it (e.g. an audio or video recording of a lesson).



facility

A statistical property, sometimes known as difficulty, indicating the level of a question, from 0.0 to 1.0.  Calculated as the average score for the question divided by the maximum achievable score.  A facility of 0.0 means that the question is very hard (no-one got it right) and 1.0 means that it is very easy (no-one got it wrong).  A value of 0.5 is generally considered ideal.


factor

In measurement theory, a statistically derived, hypothetical dimension that accounts for part of the intercorrelations among tests.  Strictly, the term refers to a statistical dimension defined by a factor analysis, but it is also commonly used to denote the psychological construct associated with the dimension.   Single-factor tests presumably assess only one construct; multi-factor tests measure two or more constructs.

factor analysis

Any of several statistical methods of analyzing the intercorrelations or covariances among variables by constructing hypothetical factors, which are fewer in number than the original variables.  The analysis indicates how much of the variation in scores on each original measure can be accounted for by each of the hypothetical factors.

factorial structure

The set of factors obtained in a factor analysis.


fairness

The principle that all applicants and candidates will be treated in an equitable manner throughout the entire certification process.  See Bias.

false negative

In classification or selection, an error in which an individual is assessed or predicted not to meet the criteria for inclusion in a particular group but in truth does  (or would) meet these criteria.  See sensitivity and specificity.

false positive

In classification or selection, an error in which an individual is assessed or predicted to meet the criteria for inclusion in a particular group but in truth does not (or would not) meet these criteria.   See sensitivity and specificity.
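How false negatives and false positives feed into the cross-referenced sensitivity and specificity indices can be sketched as follows (the counts are hypothetical):

```python
def classification_rates(tp, fp, tn, fn):
    """Sensitivity: proportion of true cases detected (false negatives
    lower it).  Specificity: proportion of non-cases correctly rejected
    (false positives lower it)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# 80 true positives, 10 false positives, 90 true negatives, 20 false negatives.
print(classification_rates(80, 10, 90, 20))  # (0.8, 0.9)
```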


Feedback is the term used when stimulus is provided to a participant according to their responses within an assessment.  Feedback is normally provided at the item, topic, and/or assessment level.

field test

A test administration used to check the adequacy of testing procedures, generally including test administration, test responding, test scoring, and test reporting.  A field test is generally more extensive than a pilot test.   See pilot test.


A response style where the participant completes a phrase by entering a word, words or a number.


An indicator attached to a test score, a test item, or other entity to indicate a special status.  A flagged test score generally signifies a score obtained in a modified, non-standard test administration.  A flagged test item signifies an item with undesirable characteristics, such as excessive differential item functioning.

focus group

An evaluation activity comprising a semi-structured discussion with a group of people.  Focus groups, comprising stakeholders, are used to inform test designers of the significance of each topic to be administered within a certification exam.

formative assessment

Used to strengthen memory recall by practice, to correct misconceptions, and to promote confidence in one's knowledge.

frequency analysis

Frequency analysis measures the number of times a particular distracter, or combination of distracters, was selected by a group of participants.
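A minimal sketch of a distracter frequency count using the Python standard library (the response letters are made up):

```python
from collections import Counter

# Hypothetical selections made by a group of participants on one item.
responses = ["A", "C", "B", "C", "C", "D", "A", "C", "B", "C"]
frequency = Counter(responses)
print(frequency.most_common())  # [('C', 5), ('A', 2), ('B', 2), ('D', 1)]
```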

functional equivalence

The degree to which similar activities or behaviors have the same functions in different cultural or linguistic groups.


gain score

The difference between the score on a test and the score on an earlier administration of the same or an equivalent test.


General Data Protection Regulation (GDPR) is a new law that will apply to every organization that handles data about people who live or work in Europe from 25 May 2018. The regulation is designed to give individuals better control over their personal data and to harmonize rules across the EU and beyond. The sanctions for non-compliance under GDPR are significantly higher than under previous legislation and can be up to €20 million or 4% of global annual turnover (whichever is greater).


Project GEM, Gateway to Educational Materials, provides a unified framework for the publication and location of educational resources available through the Internet. This project was born in 1997 as a special project within ERIC Clearinghouse on Information & Technology.

General Education Development (GED)

A credential awarded by examination that demonstrates that an individual has acquired secondary school-level academic skills.

generalizability coefficient

An index formed as the ratio of (a) the sum of variances that are considered components of test score variance in the setting under study to (b) the foregoing sum plus the weighted sum of variances attributable to various error sources in this setting.  Such indices, which arise from the application of generalizability theory, are typically interpreted in the same manner as reliability coefficients.
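In its simplest single-facet form, the ratio described above reduces to universe-score variance over universe-score variance plus error variance; a sketch (the variance figures are invented, and the unweighted sum is a simplification of the weighted sum in the definition):

```python
def generalizability_coefficient(universe_variance, error_variances):
    """Ratio of variance attributable to the objects of measurement to
    that variance plus the summed error variances (simplified,
    unweighted version of the index described above)."""
    return universe_variance / (universe_variance + sum(error_variances))

print(generalizability_coefficient(9.0, [2.0, 1.0]))  # 0.75
```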

generalizability theory

An extension of  classical reliability theory and methodology in which analysis of variance is used to estimate variance components that indicate the magnitude of errors from specified sources.  The analysis is used to evaluate the generalizability of scores beyond the specific sample of items, persons, and observational conditions that were studied.

grade equivalent score

The school grade level for which a given score is the real or estimated median or mean.

graphical hotspot question

A response style where the participant indicates their selection by using a mouse or pointing device on a graphic display.


high-stakes test

A test whose results have important, direct consequences for examinees, programs, or institutions tested.


An entity that is in control of a particular credential. Typically a holder's identity is also the primary subject of the information in a credential. A holder is often the entity that initiates the transmission of a credential.

Holders Profile

The count and related statistical information of holders of a given credential.

holistic scoring

A method of obtaining a score on a test, or a test item, that results from an overall judgment of performance using specified criteria.

hotspot response

A response style where the participant indicates their selection by using a mouse or pointing device on a graphic display.


Identifier Value

Where a formal identification system exists, recommended practice is to use a string - an alphanumeric identifier value - conforming to that system.


A set of information that can be used to identify a particular entity such as a person, organization, concept, or device. An entity may have multiple identities associated with it. 


The Learning Technologies Standardization Committee from the IEEE covers practically all aspects related to computer-based education. Its main objective is to develop technical standards, recommended practices and guidelines for software components, tools, technologies and design methods to facilitate the development, implementation, maintenance and interoperation of educational systems.


The IMS is a member funded global consortium that develops and promotes the use of specifications for online learning resources, systems, products, and services.

informed consent

The written agreement of a person, or that person's legal custodian, for some procedure to be performed on or by the individual, such as taking a test.

Instructional Program Classification

The identification and classification of an ordered listing of instructional programs. For example Council of International Programs USA (CIPUSA).

intelligence test

A psychological or educational test designed to measure intellectual processes in accord with some evidence-based theory of intelligence.

interested party(ies)

The various individuals and groups with an interest in the quality, governance, and operation of a certification program, such as the public, employers, customers, clients, third party payers, etc.  See Stakeholders.

internal consistency coefficient

An index of the reliability of test scores derived from the statistical interrelationships of responses among item responses or scores on separate parts of a test.
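One widely used internal consistency coefficient is Cronbach's alpha; a self-contained sketch (the score matrix is illustrative):

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one row per participant, one column per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores[0])
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Two perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha([[0, 0], [1, 1], [0, 0], [1, 1]]))  # 1.0
```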

internal structure

In test analysis, the factorial structure of item responses.   (See factorial structure)

inter-rater agreement

The consistency of rater judgments of the work or performance of people; sometimes referred to as inter-rater reliability, although the typical index of agreement does not reflect variation in the performance of participants from one sample or occasion to another.

intervention planning

The activity of a practitioner that involves the development of a treatment protocol.


A questionnaire or checklist, usually in the form of a self-report, that elicits information about an individual's personal opinions, interests, attitudes, preferences, personality characteristics, motivations, and typical reactions to situations and problems.


An individual who supervises a written examination/test to maintain  a fair and consistent testing environment, but takes no part in the examination process.  See Proctor. 


The International Organization for Standardization.

ISO 27001

ISO 27001 is the recognised international standard for Information Security Management Systems, requiring that a company demonstrates a systematic approach to managing sensitive information and ensuring data security. Compliance with ISO 27001 provides independent third-party assurance by a licensed certification firm that the data center meets specified information security requirements.

ISO 9001

ISO 9001 is a widely implemented Quality Management System standard for providing assurance about an organisation's ability to satisfy quality requirements. Certification against ISO 9001 provides independent third-party assurance by a licensed certification firm that the data center meets specified requirements.


The joint committee of the International Organization for Standardization and the International Electrotechnical Commission.


The 36th subcommittee of the first joint committee of the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC JTC1 SC36) was launched in 1999 to cover all aspects of standardization in the field of learning technologies. Its focus is on interoperability, not only at the technical level, but also taking into account social and cultural issues.


An entity that creates a credential and associates it with a particular holder. 


A general term referring to an individual problem, question, choices, correct answer, scoring scenarios and outcomes used within a test.

item analysis

The process of studying the responses to questions delivered in the pilot study or prototype in order to select the best questions in terms of facility and discrimination.
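A classical way to quantify discrimination during item analysis compares item performance in the top- and bottom-scoring groups; a sketch (the 27% group fraction is a common convention, and the sample data are invented):

```python
def discrimination(item_scores, total_scores, fraction=0.27):
    """Upper-lower discrimination index: the item's facility in the
    top-scoring group minus its facility in the bottom-scoring group."""
    n = max(1, round(len(total_scores) * fraction))
    ranked = sorted(zip(total_scores, item_scores))
    lower = [item for _, item in ranked[:n]]
    upper = [item for _, item in ranked[-n:]]
    return sum(upper) / n - sum(lower) / n

totals = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
item = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # answered correctly only by high scorers
print(discrimination(item, totals))  # 1.0
```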

item bank

The system by which test items are maintained, stored and classified to facilitate item review, item development and examination assembly.

item characteristic curve

A function relating the probability of a certain item response, usually a correct response, to the level of the attribute measured by the item.   Also called item response curve.

item pool

The aggregate of items from which a test or test scale's items are selected during test development, or the total set of items from which a particular test is selected for a participant during adaptive testing.

item prompt

The question, stimulus, or instructions that direct the efforts of examinees in formulating their responses to a constructed-response exercise.

item response theory (IRT)

A theory of test performance that emphasizes the relationship between mean item score (P) and level (θ) of the ability or trait measured by the item.   In the case of an item scored 0 (incorrect response) or 1 (correct response), the mean item score equals the proportion of correct responses.   In most applications, the mathematical function relating P to θ is assumed to be a logistic function that closely resembles the cumulative normal distribution.
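The logistic function mentioned above is often written in its three-parameter form; a sketch (the parameter names a, b, c follow common IRT usage for discrimination, difficulty, and guessing, and are not defined in this glossary):

```python
import math

def icc(theta, a=1.0, b=0.0, c=0.0):
    """Three-parameter logistic item characteristic curve:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

print(icc(0.0))  # 0.5: a participant whose ability equals the difficulty b
```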

item type or format

The structure of a problem that stimulates a candidate to respond within an assessment instrument (i.e. drag-and-drop, essay, fill-in-the-blank, hot-spot, multiple choice, multiple-response, numeric, open-ended, selection, short answer).


job analysis

Any of several methods used singly or in combination to identify the tasks performed on a job or the knowledge, skills, abilities, and other personal characteristics relevant to job performance.

job task analysis

See job analysis

Journeyman Certificate

A credential awarded to skilled workers on successful completion of an apprenticeship in industry trades and professions.

JSR 168

Java Specification Request 168 (JSR 168) defines a standard interface that addresses the areas of content aggregation, personalization, presentation, and security for portlets implemented for the Java platform and defines the contract between a portlet and its container.



An element of an item that details the correct choice(s) to allow the item to be graded correctly.


learning outcomes

The intended product from the process of learning.


A credential awarded by a government agency that constitutes legal authority to do a specific job and/or utilize a specific item, system or infrastructure. Such credentials are typically earned through some combination of degree or certificate attainment, certifications, assessments, work experience, and/or fees, and are time-limited and must be renewed periodically. See also certification, credentialing.

likert scale

See lykert scale.

local evidence

Evidence (usually related to reliability or validity) collected for a specific set of participants in a single institution or at a specific location.

local norms

Norms by which test scores are referred to a specific, limited reference population, (locale, organization, or institution); local norms are not intended as representative of populations beyond that setting.

local setting

The organization or institution where a test is used.

low-stakes test

A test whose results have only minor or indirect consequences for examinees, programs, or institutions tested.

lykert scale

A method of prompting a respondent to express their opinion on a statement being presented.  Likert scales are often 4-point scales (strongly agree, agree, disagree, strongly disagree) or 5-point scales (strongly agree, agree, neutral, disagree, strongly disagree), but sometimes offer as many as 10 potential choices.


mandated tests

Tests that are administered because of a mandate from an external authority.

Master Certificate

A credential awarded upon demonstration through apprenticeship of the highest level of skills and performance in industry trades and professions.

Master's Degree

An award level that requires the successful completion of a program of study of at least the full-time equivalent of 1 but not more than 2 academic years of work beyond the bachelor's degree.

mastery test

A test designed to indicate that the participant has or has not mastered some domain of knowledge or skill.  Mastery is generally indicated by a passing score or cut score.  See cut score.

matrix sampling

A measurement format in which a large set of test items is organized into a number of relatively short item sets, each of which is randomly assigned to a subsample of participants, thereby avoiding the need to administer all items to all examinees in a program evaluation.


The arithmetic average of a set of scores, i.e. the sum of the scores divided by the number of scores.

measurement error variance

That portion of the observed score variance attributable to one or more sources of measurement error; the square of the standard error of measurement.   [2-Feldt]
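The relationship between this quantity and reliability can be sketched as follows (the figures are illustrative):

```python
def measurement_error_variance(score_sd, reliability):
    """Square of the standard error of measurement:
    SEM^2 = observed score variance * (1 - reliability)."""
    return (score_sd ** 2) * (1 - reliability)

# A test with SD 10 and reliability 0.91 has error variance 9 (SEM 3).
print(round(measurement_error_variance(10, 0.91), 2))  # 9.0
```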


 A narrowly focused credential that attests to achievement of a specific knowledge, skill, or competency. 

moderator variable

In regression analysis, a variable that serves to explain, at least in part, the correlation of two other variables.

multi-factor test

An instrument that measures two or more constructs which are less than perfectly correlated.


Graphics, animation, audio, and video presented by a computer.

multiple choice

A response style where the participant selects one choice from several to indicate their opinion as to the correct answer.

multiple response

A response style where the participant selects more than one choice from several to indicate their opinion as to the correct answers.  Multiple response questions have answer keys that describe various combinations of choices being right or wrong, with different possible outcomes for the different combinations of selections.



Classification or description of inferred central nervous system status on the basis of neuropsychological assessment.

neuropsychological assessment

A specialized type of psychological assessment designed to generate hypotheses and inferences about normal or pathological processes affecting the central nervous system and the resulting psychological and behavioral functions or dysfunctions.

normalized standard score

A derived test score in which a numerical transformation has been chosen so that the score distribution closely approximates a normal distribution, for some specific population.

norm-referenced test interpretation

A score interpretation based on a comparison of a participant's performance to the performance of other people in a specified reference population.


Statistics or tabular data that summarize the distribution of test performance for one or more specified groups, such as participants of various ages or grades.  Norms are usually designed to represent some larger population, such as participants throughout the country.  The group of examinees represented by the norms is referred to as the reference population.

numeric response

A response style where the participant enters a number to indicate their choice.


objective testing

A style of testing that measures the participant's knowledge of objective facts, the correct answers to which are known in advance.

Occupation Classification

The identification and classification of an ordered listing of occupations. For example the Standard Occupational Classification (SOC) system in the U.S. and the European Skills/Competences, Qualifications and Occupations (ESCO).


Optical Character Recognition.  A method whereby a computer can recognize text and other marks that have been scanned.

Offer Action

An action by an authoritative agent offering access to a resource.


Optical Mark Reader.  A device that scans paper forms (normally bubble sheets) and recognizes the marks made on the form.

Open Badge

An Open Badge is a visual symbol containing verifiable claims of achievement, affiliation or authorization in accordance with the Open Badges specification. Open Badges enable individuals to share verifiable records of their learning.

operational use

The actual use of a test, after initial test development has been completed, to inform an interpretation, decision, or action based, in part, upon test scores.


The event that will occur after a question or questions have been answered (e.g. the item is scored or feedback is provided).

outcome evaluation

The activity of a practitioner that evaluates the efficacy of an intervention.


parallel forms

See alternate forms.


A person that participates in a testing, assessment or survey process by answering questions.

participant mean

The mean of the percentage scores achieved by candidates.  Used to determine the validity of choices within an item by examining the choices selected by the higher- and/or lower-scoring candidates.


The score on a test below which a given percentage of scores fall.

percentile rank

The percentage of scores in a specified distribution that fall below the point at which a given score lies.
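The computation can be sketched directly from the definition, strictly counting scores below the given score (the distribution is invented):

```python
def percentile_rank(score, distribution):
    """Percentage of scores in the distribution falling below the given score."""
    below = sum(1 for s in distribution if s < score)
    return 100 * below / len(distribution)

print(percentile_rank(75, [30, 40, 50, 60, 70, 75, 80, 90, 95, 100]))  # 50.0
```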

performance assessments

Product- and behavior-based measurements based on settings designed to emulate real-life contexts or conditions in which specific knowledge or skills are actually applied.

performance domain

The set of organized categories characterizing a role or job under which tasks and associated knowledge and/or skills may be represented in the job analysis.

performance standard

An objective definition of a certain level of performance in some domain in terms of a cut score or a range of scores on the score scale of a test measuring proficiency in that domain.  Also, sometimes, a statement or description of a set of operational tasks exemplifying a level of performance associated with a more general content standard; the statement may be used to guide judgments about the location of a cut score on a score scale.

personal data

Any information related to a person or ‘Data Subject’, that can be used to directly or indirectly identify the person. Examples include a home address, a photo, an email address, bank details, posts on social networking websites, medical information, computer’s IP address etc.

personality assessment

Assessment which analyzes personality traits in order to predict behaviors; examples include the Myers-Briggs. Such assessments sometimes give information directly to the participant and sometimes require an expert to review and interpret the results.

pilot test

A test administered to a representative sample of participants solely for the purpose of determining the properties of the test.  See field test.


The principles, plan or procedures established by an agency, institution, or government, generally with the intent of reaching a long-term goal.


A portal is a Web-based application that provides personalization, single sign-on, and content aggregation from different sources and hosts the presentation layer of information systems.

portfolio assessments

Systematic collections of educational or work products that are typically collected over time.


A portlet is a Web component, usually managed by a container, that processes requests and generates dynamic content. Portals use portlets as pluggable user interface components to provide a presentation layer to information systems.

practice analysis

See Job Analysis


In the context of psychological or neuropsychological assessment, an appropriately qualified interpreter of psychological test results and relevant collateral information.

precision of measurement

A general term that refers to the reliability of a measure, or its sensitivity to measurement error.

predictor domain

The construct domain of a construct used as a predictor.   See construct domain.


A diagnostic assessment before a specific learning activity. Used to create intrigue, to set a benchmark for comparison with a post course test, as a pre-requisite or route to an appropriate learning activity, and to provide instructors and mentors information on the student's abilities.

privacy impact assessment

A tool used to identify and reduce the privacy risks of entities by analyzing the personal data that are processed and the policies in place to protect the data.


An individual who supervises a written examination/test to maintain  a fair and consistent testing environment, but takes no part in the examination process.  See Invigilator. 

Professional Doctorate

A doctoral degree conferred upon completion of a program providing the knowledge and skills for the recognition, credential, or license required for professional practice.

program evaluation

The collection of systematic evidence to determine the extent to which a planned set of procedures obtains particular effects.

program norms

See user norms.


PROmoting Multimedia access to Education and Training in EUropean Society is another European initiative that brings together more than 400 institutions involved in computer-based education.

proposed interpretation

A summary, or a set of illustrations, of the intended meaning of test scores, based on the construct(s) or concept(s) the test is designed to measure.


A security measure organizations can apply to personal data by which the most identifying fields within a data record are replaced by one or more artificial identifiers, or pseudonyms. This results in personal data no longer being attributed to a specific data subject without the use of additional information. Said additional data stays separate to ensure non-attribution.


Formalization or classification of functional mental health status based on psychological assessment.  See neuropsychodiagnosis.

psychological assessment

Clinical psychological assessments which ask questions and then determine a psychological profile. These are typically validated against a norm population.

psychological testing

Any procedure that involves the use of tests or inventories to assess particular psychological constructs of an individual.


Properties of the items and test such as the distribution of item difficulty and discrimination indices.

psychometric analysis

The analysis of the items and test, such as the distribution of item difficulty and discrimination indices.


A qualified person who analyses the psychometrics of a test or item.

public member

A representative of the consumers of services provided by a defined certificant population, serving on the governing body of a certification program. 


To release and make public, in hardcopy, electronic, or web-based formats, an assessment by publishing from the development system to the production or release system to make it widely available.


QA Credential Organization

A quality assurance organization that plays one or more key roles in the lifecycle of a resource.

qml or question markup language

A data markup language proposed as a standard to aid the interchange of item (question) data between different authoring and delivery tools.


A specification produced by the IMS consortium to specify how assessments,  sections, items and results might be exchanged using an XML binding between assessment authoring,  delivery and reporting systems. QTI stands for Question and Test Interoperability. 


A formal process where an individual is recognized for having been through a certification process or provided evidence of attributes, education, training and/or work experience.

Quality Assurance Credential

A credential assuring that an organization, program, or awarded credential meets prescribed requirements and may include development and administration of qualifying examinations.

question banks

The system by which test items (questions, choices, feedback and keys) are maintained, stored and classified to facilitate item review, item development and examination assembly.


One or more questions presented and answered together.


random error

Any unsystematic error; a quantity (often observed indirectly) that appears to have no relationship to any other variable.


A method of picking assessment items and presenting them in no particular order to reduce the likelihood of cheating.

raw score

The unadjusted score on a test, often determined by counting the number of correct answers, but more generally a sum or other combination of item scores.  In item response theory, the estimate of participant proficiency, usually symbolized θ̂, is considered a raw score.


Requirements and procedures established as part of a certification program that certificants must meet in order to ensure continuing competence and renew their certificate. See Continuing Competence and Continuing Education.

reference population

The population of participants represented by test norms.   The sample on which the test norms are based must permit accurate estimation of the test score distribution for the reference population.  The reference population may be defined in terms of examinee age, grade, or clinical status at time of testing, or other characteristics.

registration body

The organizational or administrative unit that sponsors a certification program and maintains certification records.  See Certification Body

regression coefficient

A multiplier of an independent variable in a linear equation relating a dependent variable to a set of independent variables.  The coefficient is said to be standardized or unstandardized according to whether the variable it multiplies has been scaled to a standard deviation of 1.0 or has some other standard deviation.
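The standardized form can be obtained from the unstandardized coefficient and the two standard deviations; a sketch (the numbers are invented):

```python
def standardized_coefficient(b, sd_x, sd_y):
    """Convert an unstandardized regression coefficient b into a
    standardized (beta) coefficient: beta = b * sd_x / sd_y."""
    return b * sd_x / sd_y

print(standardized_coefficient(2.0, 3.0, 6.0))  # 1.0
```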

Regulate Action

An action by an independent, neutral, and authoritative agent enforcing the legal requirements of a resource.

relative score interpretations

The meaning of the score for an individual, or the average score for a definable group, derived from the rank of the score or average within one or more reference distributions of scores.


The degree to which the scores of every individual are consistent over repeated applications of a measurement procedure and hence are dependable and repeatable; the degree to which scores are free of errors of measurement.

reliability coefficient

A unit-free index that reflects the degree to which scores are free of measurement error.  The index resembles (or is) a product-moment correlation.  In classical test theory, the term represents the ratio of true score variance to observed score variance for a particular examinee population.  The conditions under which the coefficient is estimated may involve variation in test forms, measurement occasions, raters, scorers, or clinicians, and may entail multiple examinee products or performance.   These and other variations in conditions give rise to qualifying adjectives, such as alternate-forms reliability, internal-consistency reliability, test-retest reliability, etc.

Renewal Profile

The conditions and methods by which a credential must be renewed by its holder.


A person that participates in a survey process by answering questions

response bias

A participant's tendency to respond in a particular way or style to items on a test (e.g., personality inventories) that yields systematic, construct-irrelevant error in test scores.

response process

A component, usually hypothetical, of a cognitive account of some behavior, such as making an item response.

restriction in range or variability

Reduction in the observed score variance of an examinee sample, compared to the variance of the entire examinee population, as a consequence of constraints on the process of sampling examinees.  See adjusted reliability/validity coefficient.

Revocation Profile

The conditions and methods by which a credential can be removed from a holder.


Safe Harbor

The Safe Harbor Framework, run by the U.S. Government Department of Commerce (in consultation with the European Commission and the Federal Data Protection and Information Commissioner of Switzerland), enables companies to certify that they are compliant with the stringent needs of the European Union and Switzerland for data security.


A selection of a specified number of entities, called sampling units (participants, items, etc.), from a larger specified set of possible entities, called the population.  A random sample is a selection according to a random process, with the selection of each entity in no way dependent on the selection of other entities.   A stratified random sample is a set of random samples, each of a specified size, from several different sets, which are viewed as strata of the population.

scale score

See derived score.


The process of creating a scale score.  Scaling may enhance test score interpretation by placing scores from different tests or test forms onto a common scale or by producing scale scores designed to support criterion-referenced or norm-referenced score interpretations.

scheduling system

The generic name for one or more computer programs that allows a user to track candidate appointments.  Scheduling systems may also provide bill collection information, testing center resource scheduling and candidate demographics.


Any specific number resulting from the assessment of an individual; a generic term applied for convenience to such diverse measures as test scores, estimates of latent variables, production counts, absence records, course grades, ratings, and so forth.

scoring formula

The formula by which the raw score on a test is obtained.   The simplest scoring formula is "raw score equals number correct."   Other formulas differentially weight item responses, sometimes in an attempt to correct for guessing or non-response, by assigning zero weights to non-responses and negative weights to incorrect responses.
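The correction-for-guessing variant mentioned above can be sketched as follows (the choice count and item counts are illustrative; omitted items carry zero weight, so they simply do not enter the formula):

```python
def formula_score(num_correct, num_incorrect, choices_per_item=4):
    """Correction-for-guessing scoring formula: R - W / (k - 1),
    where k is the number of choices per item."""
    return num_correct - num_incorrect / (choices_per_item - 1)

# 30 right, 6 wrong, 4 omitted on 4-choice items: 30 - 6/3 = 28.
print(formula_score(30, 6))  # 28.0
```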

Scoring Method

A system of classifying the different methods of assigning merit (usually expressed numerically).

scoring protocol

The established criteria, including rules, principles, and illustrations, used in scoring responses to individual items and clusters of items.   The term usually refers to the scoring procedures for assessment tasks that do not provide enumerated responses from which test-takers make a choice.

scoring rubric

The principles, rules, and standards used in scoring an examinee performance, product, or constructed response to a test item.  Scoring rubrics vary in the degree of judgment entailed, in the number of distinct score levels defined, in the latitude given scorers for assigning intermediate or fractional score values, and in other ways.


Sharable Courseware Object Reference Model includes a reference model for educational sharable software objects, a runtime environment and a content aggregation model.

screening test

A test that is used to make broad categorizations of examinees as a first step in selection decisions or diagnostic processes.


A purpose for testing that results in the acceptance or rejection of applicants for a particular educational or employment opportunity.

selection response

A response style where the participant selects from a pull-down list.

self-assessment

A process by which an assessment instrument is self-administered for the specific purpose of providing performance feedback, diagnosis and prescription recommendations rather than a pass/fail decision.

sensitivity

In classification of disorders, the proportion of cases in which a disorder is detected when it is in fact present.

sequence response

A response style where the participant orders a list of objects or text to formulate their response. 

Spearman-Brown formula

A formula derived within classical test theory that projects the reliability of a shortened or lengthened test from the reliability of a test of specified length.
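The projection is a single formula, rho_new = n·rho / (1 + (n − 1)·rho), where n is the factor by which the test length changes; a minimal sketch:

```python
def spearman_brown(reliability, length_factor):
    # Projected reliability when test length is multiplied by length_factor.
    n, rho = length_factor, reliability
    return n * rho / (1 + (n - 1) * rho)

# Doubling a test whose reliability is .70:
projected = spearman_brown(0.70, 2)  # 1.4 / 1.7, about 0.82
```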

specificity

In classification of disorders, the proportion of cases for which a diagnosis of disorder is rejected when rejection is warranted.
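Both proportions (sensitivity above, specificity here) can be read off a 2×2 classification table; a sketch with hypothetical counts:

```python
def sensitivity(true_pos, false_neg):
    # Proportion of actual cases that are detected.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Proportion of non-cases that are correctly rejected.
    return true_neg / (true_neg + false_pos)

# 90 of 100 actual cases detected; 180 of 200 non-cases rejected:
sens = sensitivity(90, 10)   # 0.9
spec = specificity(180, 20)  # 0.9
```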

speed test

A test in which performance is measured primarily or exclusively by the time to perform a specified task, or the number of tasks performed in a given time, such as tests of typing speed and reading speed.

speededness

A test characteristic, dictated by the test's time limits, that results in a test-taker's score being dependent on the rate at which work is performed as well as the correctness of the responses.

split-halves reliability coefficient

An internal consistency coefficient obtained by using half the items on the test to yield one score and the other half of the items to yield a second, independent score.  The correlation between the scores on these two half-tests, adjusted via the Spearman-Brown Formula, provides an estimate of the alternate-form reliability of the total test.

sponsoring organization

An agency that offers a certification program and awards credentials (free-standing board, professional association, certification committee, advisory committee, or entity within a business or industry).

SSAE 16

Previously known as SAS 70 Type II, SSAE 16 (Statement on Standards for Attestation Engagements No. 16) is an auditing standard developed by the American Institute of Certified Public Accountants (AICPA) for service providers that wish to demonstrate a high level of control effectiveness to independent auditors. SSAE 16 audits provide independent third-party assurance by a licensed Certified Public Accounting firm as to whether control activities are suitably designed to meet specified control objectives.

stakeholders

The various groups with an interest in the quality, governance, and operation of a certification program, such as the public, employers, customers, clients, and third-party payers.  See interested party(ies).

standard deviation

A statistical measure of the spread of results. The higher the standard deviation, the greater the spread of data. 
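For illustration (hypothetical scores), the standard library computes both the population and sample versions:

```python
from statistics import pstdev, stdev

scores = [70, 75, 80, 85, 90]
sd_pop = pstdev(scores)    # population SD (divide by N): sqrt(50)
sd_sample = stdev(scores)  # sample SD (divide by N - 1): sqrt(62.5)
```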

standard error of measurement

The standard deviation of the distribution of errors of measurement that is associated with the test scores for a specified group of participants.
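Under classical test theory the SEM is commonly estimated from the score standard deviation and the test's reliability, SEM = SD · sqrt(1 − r); a sketch with hypothetical values:

```python
import math

def standard_error_of_measurement(sd, reliability):
    # Classical estimate: SEM = SD * sqrt(1 - reliability).
    return sd * math.sqrt(1 - reliability)

# Score SD of 10 and reliability .91 give an SEM of 3 score points:
sem = standard_error_of_measurement(10, 0.91)
```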

standard score

A type of derived score such that the distribution of these scores for a specified population has convenient, known values for the mean and standard deviation.
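For example, z-scores (mean 0, SD 1) and T-scores (mean 50, SD 10) are common standard-score scales; a sketch:

```python
def z_score(x, mean, sd):
    # Standard score on a scale with mean 0 and SD 1.
    return (x - mean) / sd

def t_score(x, mean, sd):
    # Derived scale with mean 50 and SD 10.
    return 50 + 10 * z_score(x, mean, sd)

z = z_score(65, 50, 10)  # 1.5
t = t_score(65, 50, 10)  # 65.0
```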

standardization

In test administration, maintaining a constant testing environment, and conducting the test according to detailed rules and specifications, so that testing conditions are the same for all participants.  In statistical analysis, transforming a variable so that its standard deviation is 1.0 for some specified population or sample. In scoring, ensuring that candidate responses are judged using predefined criteria in order to provide a consistent basis for evaluating all candidates.

standards-based assessment

Assessments intended to represent systematically described content and performance standards.

stem

An element of an item, normally the actual question, that provides the stimulus that conveys a prompt to the participant to understand and answer the question. 

stratified coefficient alpha

A modification of coefficient alpha that renders it appropriate for a multi-factor test by defining the total score as the composite of scores on single-factor part-tests.

summative assessment

An assessment whose main purpose is to measure or certify knowledge, skills, and aptitudes (KSAs).

systematic error

A score component (often observed indirectly), not related to the test performance, that appears to be related to some salient variable or subgrouping of cases in an analysis.  See bias.


Task Profile

A profile describing the required or recommended tasks to be performed by a holder of, or applicant for, a credential assertion.

technical manual

A publication prepared by test authors and publishers to provide technical and psychometric information on a test.

technical report

A summary of psychometric procedures and their results as implemented in the assessment instruments used in a certification program, often addressing such issues as content validity, item writing, test assembly, reliability analysis, cut score development, scoring and equating.

Temporal Coverage

The temporal coverage of a CreativeWork indicates the period that the content applies to (i.e. the period it describes), expressed either as a date or time or as a textual string in ISO 8601 time interval format.
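For illustration (hypothetical values), an ISO 8601 time interval is written as a start and an end separated by a slash:

```python
# Hypothetical temporalCoverage values:
coverage_year = "2012"                       # a single year
coverage_interval = "2012-01-01/2012-12-31"  # ISO 8601 start/end interval
```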

test center

A facility that provides computers and proctoring services in which to conduct tests.

test center administration system

The generic name for one or more computer programs used by a test center to administer tests to candidates.  This may include, but is not limited to, starting tests, stopping tests and communicating item, test and results data back and forth.

test developer

The person(s) or agency responsible for the construction of a test and for the documentation regarding its technical quality for an intended purpose.

test development

The process through which a test is planned, constructed, evaluated and modified, including consideration of content, format, administration, scoring, item properties, scaling, and technical quality for its intended purpose.

test development system

A generic name for one or more computer programs that allow a user to author and edit items (i.e. questions, choices, correct answers, scoring scenarios and outcomes) and maintain test definitions (i.e. how items are delivered within a test).

test documents

Publications such as test manuals, technical manuals, user's guides, specimen sets, directions for test administrators and scorers, and previews for participants that provide the information necessary to evaluate the appropriateness and technical adequacy of a test for its intended purpose.

test driver

A generic name for one or more computer programs that display test items on a computer screen, collect and score candidates' responses, and store the results.

test equivalence

The property that examinees taking one version of a test do not have a relative advantage over those taking another version.

test manual

A publication prepared by test developers and publishers to provide information on test administration, scoring, and interpretation and to provide technical data on test characteristics and procedures that are used in test development and in evaluating the technical quality of the test scores.  See user's guide.

test modification

Changes made in the content and/or administration procedure of a test in order to accommodate participants who are unable to take the original test under standard test conditions.

test score information function

A mathematical function relating level of an ability or latent trait, as defined under item response theory (IRT), to the reciprocal of the conditional measurement error variance.
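As a sketch under one common IRT model (the two-parameter logistic; the parameter values here are illustrative), each item contributes information a²P(θ)(1 − P(θ)), and the test information is the sum over items:

```python
import math

def p_2pl(theta, a, b):
    # Probability of a correct response under the 2PL model
    # (a = discrimination, b = difficulty).
    return 1 / (1 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information for one 2PL item: a^2 * P * (1 - P).
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1 - p)

# Information peaks where theta equals the item difficulty:
info = item_information(0.0, a=1.5, b=0.0)  # 1.5**2 * 0.25 = 0.5625
```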

test specification

A framework that specifies the proportion of items that assess each content and process/skill area; the format of items, responses, and scoring protocols and procedures; and the desired psychometric properties of the items and test such as the distribution of item difficulty and discrimination indices.

test sponsor

The person(s) or agency responsible for the choice and administration of a test, the interpretation of test scores produced in a given context, and for any decisions or actions that are based, in part, on test scores.

test taker

The person taking a test.

test user

The person(s) or agency responsible for the choice and administration of a test, the interpretation of test scores produced in a given context, and for any decisions or actions that are based, in part, on test scores.

test-retest coefficient

A reliability coefficient obtained by administering the same test a second time to the same group after a time interval and correlating the two sets of scores.

topic

The subject matter of a question.

translational equivalence

The degree to which the content of the translated version of a test is linguistically comparable to that of the original test.

true score

In classical test theory, the average of the scores that would be earned by an individual on an unlimited number of perfectly parallel forms of the same test.  In item response theory, the error-free value of participant proficiency, usually symbolized by θ.


undue influence

Control of decision making over essential certification policy and procedures by stakeholders or other groups outside the autonomous governance structure of a certification program.

universe score

In its most common usage, the true score of an examinee that hypothetically holds for an entire domain of items, the complete population of raters, or any other facet of the measurement setting that represents a source of random error of measurement.

user norms

Descriptive statistics (including percentile ranks) for a sample of participants that does not represent a well defined reference population, for example, all persons tested during a certain period of time.  Also called program norms.

user's guide

A publication prepared by the test authors and publishers to provide information on a test's purpose, appropriate uses, proper administration, scoring procedures, normative data, interpretation of results, and case studies.  See test manual.

utility

The relative value of an outcome with respect to a set of other possible outcomes.  Hence test utility refers to an evaluation, often in cost-benefit form, of the relative value of using a test vs. not using it, of using a test in one manner vs. another, or of using one test vs. another test.

validation

The process of investigation by which the validity of the proposed interpretation of test scores is evaluated.

validity

The degree to which accumulated evidence and theory supports specific interpretations of test scores and/or all the other components of a certification program (e.g., education, experience and assessment instruments).

validity argument

An explicit scientific justification of the degree to which accumulated evidence and theory supports the proposed interpretation(s) of test scores.

variance components

In testing, variances accruing from the separate constituent sources that are assumed to contribute to the overall variance of observed scores.   Such variances, estimated by methods of the analysis of variance, often reflect situation, location, time, test form, rater, and related effects.

Verification Service Profile

A resource describing the means by which someone can verify whether a credential has been attained by a person.

Verizon Cybertrust

Verizon Cybertrust is a provider of risk management products and services; its certification is widely used as a standard for secure IT and Internet operations.

virtual learning environment

A system that assists organizations in managing learning.

virtual reality

A computer simulation that models natural environments.

VLE

Virtual Learning Environment is a system that assists organizations in managing learning.

vocational assessment

A specialized type of psychological assessment designed to generate hypotheses and inferences about interests, work needs and values, career development, vocational maturity and indecision.


web-based assessments

Assessments delivered via the Internet, or an Intranet, in which the items reside on a server and are packaged with HTML to allow a participant to respond using a browser. 

weighted scoring

A method of scoring a test in which the number of points awarded for a correct (or diagnostically relevant) response is not the same for all items in the test.  In some cases, the scoring formula awards more points for one response to an item than for another.
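A minimal sketch (the item keys and weights are hypothetical):

```python
def weighted_score(responses, weights):
    """Sum the points assigned to each selected response.

    `weights` maps (item, choice) -> points; responses not in the
    table score zero.
    """
    return sum(weights.get((item, choice), 0)
               for item, choice in responses.items())

# Item 2 is worth 2 points; choice "b" on item 3 earns partial credit.
weights = {(1, "a"): 1, (2, "c"): 2, (3, "a"): 1, (3, "b"): 0.5}
total = weighted_score({1: "a", 2: "c", 3: "b"}, weights)  # 1 + 2 + 0.5 = 3.5
```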

word response

A response style where the participant enters a word to indicate their choice.

WSRP

Web Services for Remote Portlets (WSRP) is a Web services standard that defines a standardized interface between a portal and a portlet container service. WSRP enables interoperability between a WSRP-enabled container and any WSRP-compliant portal. Its definitions include a Web Services Description Language (WSDL) interface description for WSRP services. The standard also provides Markup Fragment Rules for markup generated by WSRP services.