Level 1, 2 and 3 Assessments at The Regence Group

Background

The Regence Group is the largest affiliation of health-care plans in the Pacific Northwest/Mountain State region. It includes Regence BlueShield of Idaho, Regence BlueCross BlueShield of Oregon, Regence BlueCross BlueShield of Utah and Regence BlueShield (in Washington). Collectively, these four Plans serve nearly 3 million people in four states, with more than $6.5 billion in combined revenue as of January 2004.

Gathering Training Evaluation Data

In 2003, Organizational Development (OD) at Regence gathered training evaluation data for an information technology skills training program delivered to Regence Information Technology Services (RITS), using Questionmark online assessments. This training evaluation was required as part of a workforce and capacity-building training program administered by the Oregon Department of Community Colleges and Workforce Development (CCWD).

The CCWD requested that training participants and their supervisors assess the training both immediately afterwards and several weeks later. The evaluation tool, developed by the American Society for Training & Development (ASTD), was based on Kirkpatrick’s four-level model, so responses about training, learning and performance could be compared with benchmark measures at Regence and at other organizations. Using Questionmark’s browser-based assessment forms and a SQL Server database, OD conducted online evaluations of 17 classes, which instructors delivered in a classroom setting. The training evaluations spanned eight months and produced a total of 490 online assessments.

Engagement Process

Several assessment authors in OD at Regence build, deliver and report on training evaluations for internal clients such as RITS. They follow this engagement process with internal clients:
  1. Collaborate on a timeframe with mutual roles and responsibilities;
  2. Develop and import the question wording, choices, outcomes, and scoring into the authoring tools;
  3. Select a template for page layout (images, instructions, questions, submit button and optional jump-block questions);
  4. Test the evaluation in the development environment with the client;
  5. Gain client approval and move the evaluation into our production environment for distribution;
  6. Create an assessment schedule for participants;
  7. Turn the settings on or off for logon (anonymous, or username and password), a limit of one attempt, and a time limit;
  8. Notify the participants about the schedule with a link to their online evaluation; and,
  9. At the conclusion of the session, pull the respondent data out of the database management system for analysis and reporting by participant, class and instructor (a sketch of such an extraction follows this list).
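
To illustrate step 9, here is a minimal sketch in Python of pulling respondent data from a SQL Server database. The connection details, table name and column names are assumptions made for the example, not the actual Questionmark schema:

  # Minimal sketch of step 9: pull respondent data from SQL Server for
  # reporting by participant, class and instructor. The table and column
  # names are hypothetical, not the actual Questionmark schema.
  import pyodbc

  conn = pyodbc.connect(
      "DRIVER={ODBC Driver 17 for SQL Server};"
      "SERVER=assessment-db;DATABASE=Evaluations;Trusted_Connection=yes;"
  )

  query = """
      SELECT participant_id, class_name, instructor, question_id, response
      FROM assessment_results
      WHERE assessment_name = ?
      ORDER BY class_name, instructor, participant_id
  """

  with conn:
      for row in conn.cursor().execute(query, "RITS Part A"):
          print(row.participant_id, row.class_name, row.instructor,
                row.question_id, row.response)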

Online Evaluation of Levels 1, 2 and 3

OD and RITS Professional Development established a timeline for the information technology skills classes, with both immediate and follow-up evaluations. For the training assessments of Levels 1, 2 and 3, they adopted the ASTD evaluation tool so they could benchmark training evaluation data. Supplemental questions produced records for reporting evaluation data by participant, class and instructor for both initial and follow-up training evaluations.

Immediately after each class, participants launched a “Part A” online assessment covering Levels 1 and 2. Part A questions used a 1-5 scale to measure reactions to statements about these categories:

  • administration and logistics (prerequisites, facilities and equipment);
  • content (understood the objectives, the objectives were met);
  • design (method of delivery, materials, length of class time, organization);
  • instruction (satisfaction with instructor);
  • perceived impact (knowledge and skills increased; applicability to the current job; applicability in preparing the participant for other jobs in the company); and,
  • overall satisfaction with the class.

Several weeks after each class, OD distributed a “Part B” assessment in which participants provided their names and then answered questions about their:

  • use of skills from training (opportunity to use the training, actual use of the training);
  • confidence in ability to perform (extent of increase in confidence resulting from this training);
  • barriers and enablers of transfer (training accurately reflected the job, access to necessary resources to apply the training, extent of coaching and other assistance); and,
  • measures of impact (percentage changes in production and performance).

In addition to the online Part B follow-up data from participants, RITS gathered follow-up training evaluations from participants’ supervisors, as called for by the ASTD evaluation tool.
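
To make the two-part record structure concrete, here is a minimal sketch in Python of how one participant’s evaluation might be represented and summarized. The field names, category keys and sample scores are illustrative assumptions, not the ASTD tool’s actual data format:

  # Illustrative record for one participant's two-part evaluation.
  # Field names, category keys and scores are assumptions for this
  # sketch, not the ASTD tool's actual data format.
  from dataclasses import dataclass, field
  from statistics import mean

  @dataclass
  class Evaluation:
      participant: str
      class_name: str
      instructor: str
      part_a: dict = field(default_factory=dict)  # Levels 1-2, 1-5 scale
      part_b: dict = field(default_factory=dict)  # Level 3, 1-5 scale

      def part_a_average(self):
          """Mean reaction/learning score recorded right after class."""
          return mean(self.part_a.values())

  ev = Evaluation(
      participant="P-042",
      class_name="Intro to SQL",
      instructor="Instructor A",
      part_a={"content": 5, "design": 4, "instruction": 5, "impact": 4},
      part_b={"use_of_skills": 4, "confidence": 5},
  )
  print(ev.part_a_average())  # 4.5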

Results

Employees and supervisors evaluated the RITS training very positively in the Questionmark assessments. In terms of Kirkpatrick’s model, there was strong consensus that the training was satisfying (Level 1), effective (Level 2) and applicable (Level 3) in improving participants’ performance. The standardized ASTD questions, based on Kirkpatrick’s model, yielded standardized data for benchmark comparisons by instructors, training managers and budget analysts. Running two-part assessments enabled the team to cover Levels 1, 2 and 3 without administering a separate evaluation for each level. Using assessment software to gather and organize the relevant training measurements proved efficient and cost-effective.
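
As a final illustration, here is a minimal sketch of the kind of benchmark summary that standardized scores make possible, grouping mean Part A scores by class and instructor. The records and names are invented for the example, not actual Regence results:

  # Illustrative benchmark summary: mean Part A score per class and
  # instructor. Records are invented, not actual Regence results.
  from collections import defaultdict
  from statistics import mean

  # Each record: (class_name, instructor, overall Part A score, 1-5 scale).
  records = [
      ("Intro to SQL", "Instructor A", 4.5),
      ("Intro to SQL", "Instructor A", 4.0),
      ("Project Management Basics", "Instructor B", 4.8),
  ]

  groups = defaultdict(list)
  for class_name, instructor, score in records:
      groups[(class_name, instructor)].append(score)

  for (class_name, instructor), scores in sorted(groups.items()):
      print(f"{class_name} / {instructor}: "
            f"mean {mean(scores):.2f} (n={len(scores)})")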