Ep. 39 – Assessment Luminary, John Weiner, Lifelong Learner Holdings

14 Aug 2023

Host John Kleeman is joined by John Weiner, an expert in the assessment industry with four decades of experience. Until recently, he was Chief Science Officer at PSI, and he now holds the same role at PSI’s parent company, Lifelong Learner Holdings.

They explore the evolution of assessments from paper-based to technology-driven, highlighting shifts in workplace demands and educational needs. John shares his thoughts on fairness and its multiple dimensions, along with the challenges in measuring and ensuring it. The conversation dives into the impact of generative AI on assessments and the importance of critical thinking.

He also shares his predictions of the future of the workforce as technology reshapes roles. Tune in for insights into the changing landscape of assessments and the exciting possibilities that lie ahead!

Full Transcript

John Kleeman:

Hello everyone, and welcome to Unlocking the Potential of Assessments, the show that delves into creating, delivering, and reporting on fair and reliable assessments. In each episode, we chat with assessment luminaries, influencers, subject matter experts, and customers to discover and examine the latest and best practice guidance for all things assessment. I’m your host, John Kleeman, founder of Questionmark and EVP of Industry Relations and Business Development at Learnosity, the assessment technology company.

Today, I’m really pleased to welcome John Weiner. John Weiner has an extensive background in the science and industry of assessment. Until recently, he was Chief Science Officer at PSI, and he now has the same role at PSI’s parent company, Lifelong Learner Holdings. In a career spanning four decades in the assessment industry, John has been very influential in the development and use of assessments, including making over 140 presentations at international conferences. John is active in professional associations such as ATP, SIOP, ICE, CLEAR, ITC, and NCME. He’s been chair of ATP and recently served as co-chair and editor of the Guidelines for Technology-Based Assessment. Welcome, John. Really, really pleased to have you here.

John Weiner:

John, thank you so much. It’s a pleasure to be here with you today.

John Kleeman:

So the question I ask everybody is, how did you get into the world of assessment?

John Weiner:

Well, you mentioned the four decades, it’s been a long road, right? Some people think I came in on a covered wagon, which isn’t true, but I did start quite a while ago. The landscape was quite different then, as you can imagine. But my interest, like many people’s, started in the field of psychology. I studied psychology at university, and I won’t go into all the details, but as an undergraduate I was really surveying every type: like many people, I was interested in clinical and counselling, but also research and experimental psychology, perception, social psychology. It’s quite a vast world. Psychology is a wonderful, wonderful field.

It was when I went to graduate school that I gravitated towards research and experimental design and quantitative psychology. Many of the methods that might come under the heading of machine learning today, I was involved in during grad school. So in some ways, it’s interesting to see things come a bit full circle there. But I discovered then the world of measurement: measuring people, measuring their characteristics, attitudes, personality, capabilities, and so forth. And something clicked there where I could see this perfect application of psychometrics and quantitative methods to an applied problem, to help people and to help organizations. So that’s been my home ever since.

John Kleeman:

And I think you joined PSI quite a long time ago, you must be one of the earliest employees of PSI, at least, who’s still with the company.

John Weiner:

Well, yes, there have been many versions of PSI. I joined PSI out of grad school. One of the professors I was working with was doing a fair amount of consulting, and one of his clients was Bill Rue at PSI. That was how I was introduced to PSI in the 1980s, when I was still in grad school. Then after finishing, I got a job at PSI in Los Angeles. So I’m what you’d call a boomerang employee, really. I worked there in the early 80s for a few years, then left to work for a commission in California that is responsible for all of the training and selection standards for law enforcement in the state.

And so I was focused on managing research projects there for reading and writing tests, psychological testing, and so forth. Then I returned to PSI in 1998. PSI began as what I’d call a small consulting firm that developed some instruments and tools as part of its practice. By the time I returned, PSI had moved towards becoming a product-oriented organization that offered consulting: developing assessment tools for selection.

John Kleeman:

So I’d like to move on to talk about some of the periods of transition and change in the assessment industry, maybe a bit about fairness and the technology-based assessment guidelines. But first of all, tell me about Lifelong Learner Holdings, because I don’t think many people know who they are.

John Weiner:

Okay. Yeah. Well, that’s my home right now. Lifelong Learner Holdings is the parent company for PSI and the talent assessment company that spun off from PSI called Talogy. So we’re the corporate head of the whole operation.

John Kleeman:

And what do you do on a day-to-day basis, either as the company or yourself, however you want to answer it?

John Weiner:

Well, I have what I’d call a dream job, really. I provide advisory services on projects, and I explore opportunities for strategic partnerships and other ventures that the company might want to engage in.

John Kleeman:

Sounds good. Sounds good.

John Weiner:

And of course, I’m very heavily involved in thought leadership activity at conferences and engaging in sessions like this with people like you, John.

John Kleeman:

No, indeed. Now, 140 presentations at conferences must be good, and I’ve had the pleasure of doing a few jointly with you, which I greatly enjoyed.

John Weiner:

Likewise.

John Kleeman:

So talk about how the assessment industry has changed over these four decades. What have been the big changes, and what does that look like for the future? But start perhaps with what the changes have been.

John Weiner:

John, how long is this session? How much time do we have? I’m just kidding. Just kidding. But obviously, a lot has changed over the decades. If I were going to think of the mile markers along the way: in the first segment, let’s say in the 90s, and I’ll be overly general here, we saw a really interesting shift from paper-based assessment to computer-based testing. At the time, everyone felt this was a big deal, to go to CBT, and it did have a lot of implications in terms of automating many of the processes in testing that were manual.

But in hindsight, that was really just putting our toes in the water, so to speak. Shortly after that, in the late 90s and early 2000s, we saw a shift to internet-based testing. So, we went from local computer-based testing to the internet. That really opened the floodgates for online assessment, especially in the talent assessment industry. One of the interesting things that happened was the shift to un-proctored internet-based testing: testing anytime, anywhere around the world, without direct supervision, which was kind of revolutionary and is still a predominant model for what I’d call non-high-stakes testing. There have been other trends too.

John Kleeman:

We see quite a lot of that as well in slightly different spaces. But you’re talking about the recruitment space there, the un-proctored internet testing?

John Weiner:

Yeah.

John Kleeman:

And how does it work?

John Weiner:

Well, I’d say what’s happened there is that testing became integrated with recruitment. Web-based recruiting platforms incorporated online testing as part of the automated workflow, if you will, for bringing candidates into an organization, aiming for a frictionless process that’s very time efficient. So potential candidates can explore job opportunities, fill out applications, and proceed to take an assessment on the same website without delay.

John Kleeman:

And is that mostly personality-type tests or is it mostly mental ability type tests, or a bit of both? Or…

John Weiner:

I’d say the focus is on personality and attitude-type assessments, background, and job fit. Less so for what I’d call cognitive assessments, although there are cognitive tests that are used un-proctored, some gamified, in various formats of course. So there are some cognitive tests, and that opens up a lot of interesting concerns.

But if we look at maybe the other side of the assessment industry, let’s say high-stakes credentialing assessment or admissions testing for universities, those, of course, are proctored and supervised, and that model hasn’t come into play there. But if we fast-forward another decade, online proctored testing has become very common in credentialing, licensing, and even some educational and admissions testing.

So, those were major changes I think we’ve seen, landmark events. Of course, there are other things I would mention. One, of course, has been what many people call the digitizing or digitalization of information, and process automation, which has allowed everything, including assessments and learning, to become automated in ways that have really changed how those processes are approached. Big data sets are used now to integrate learning and assessment throughout the lifecycle, and it’s really changing approaches to assessment and education.

John Kleeman:

And looking at the past, how do you see the future going? I mean what are the key things happening now?

John Weiner:

Well, some mega trends that I think we’re seeing right now: work is changing. The workplace has changed. We have virtualized work, and that’s changed the requirements for workers. So it’s not just that work is virtual; many of the job tasks that were performed by people have become automated and can be performed by software. You could pick any industry, finance or investment or marketing, and this is the case.

So it’s changing the kinds of competencies that are becoming important in the workplace, and in education we’re seeing the same trend: what has been called the 21st century skills movement has really taken root. So there’s more focus on non-technical skills in education than ever. That’s where the future is going: the era of the knowledge worker is changing, right? And knowledge-based education is changing quite a bit. So what do we mean by 21st century skills, you might ask?

John Kleeman:

I might. I was about to. Go ahead.

John Weiner:

I would say the C words: communication, collaboration, creativity, and critical thinking are the big ones, especially critical thinking, and we can talk more about that.

John Kleeman:

So I think people understand how you can test critical thinking, but how do you test collaboration?

John Weiner:

That is the challenge, and scalable ways to do this are kind of the wave of the future. Within the educational realm, of course, there are tools and platforms that enable this today. Whether it scales depends on what we’re aiming for. As part of learning, it’s very doable today: there are platforms available that support working as part of a team to collaborate on a project, being evaluated on that, and learning from it, both in a formative way and in a summative way.

John Kleeman:

So, I mean I might be putting words into your mouth here, but are you essentially saying that over the last few decades, the technology we use for testing and the way testing happens have changed a lot, but what might happen in the future is more that the kinds of tests will change and what we’re testing will change?

John Weiner:

Absolutely. I agree with what you’re saying, and that’s what I am saying. And we haven’t talked about generative AI at all, but we will, I’m sure, in this conversation, because every conversation is required to talk about it.

John Kleeman:

Yeah. So, I mean, let’s jump into that. How do you see generative AI, or other kinds of AI, impacting assessment?

John Weiner:

Well, this is where critical thinking becomes very important, because now many of the tasks that people would’ve been required to do, like drafting an essay, writing a report, coming up with ideas, can be performed by generative AI, and you can do that today. In fact, I saw recently that one of the universities in Arizona is allowing students to use ChatGPT to write their entrance essay. But critical thinking is needed by humans now more than ever, to be able to evaluate what comes out of these generative AI programs: are the outputs good? Are they rational? Are they reasonable? And to make decisions at that level. So that kind of thinking has become elevated.

John Kleeman:

Thank you. So, I’d like to make sure that we touch on a couple of things in this podcast. One is I’d like you to talk about the technology-based assessment guidelines, which I’m sure a good number of your 140 presentations have been about, but also about fairness, because I know you’re a real expert on fairness, and I’d love our listeners to hear about that. So why don’t you briefly explain what the technology-based assessment guidelines are and why they’re important? And then we’ll move on to fairness.

John Weiner:

Okay. Well, the TBA guidelines, the technology-based assessment guidelines, were developed and published just last year, late last year. Those were in response to a need for best practices and guidance for assessing organizations, individuals who use tests, and really all the stakeholders in the assessment lifecycle. And there really weren’t any such guidelines; there hadn’t been for quite some time. There are standards in place that are very general and deal with measurement: the standards for educational and psychological testing, which remain the touchstone, focus mostly on measurement issues.

And then of course, the Association of Test Publishers had published computer-based testing guidelines in the early 2000s, and the International Test Commission had guidelines for internet-based testing, but it’s been a couple of decades since any guidance was put out, and think about all the changes we’ve been talking about in this discussion. The underlying concerns are the same, we care about validity and reliability and fairness and measurement, but with technology and its real infusion into testing, what’s changed is the threats to good measurement, to ensuring validity and reliability and so forth. So, providing best practices to organizations in some of these areas, we think, is filling an important need.

John Kleeman:

Yeah. So, I would strongly recommend these technology-based assessment guidelines, which John co-edited with Steve Sireci of the ITC. They’re available online: if you search Google for technology-based assessment guidelines, you’ll find them on the ATP website or the ITC website. So, let’s move on to fairness. And just to set the scene: high-stakes assessments are really important. They give life chances to people, so they need to be fair; employment tests need to be fair. What would you say, I mean, what is fairness in an assessment context?

John Weiner:

Yeah, this is kind of a loaded question, isn’t it? Fairness has almost always been a concern for as long as there has been what I’d call high-stakes testing, so summative testing. And the reason this has been a longstanding concern and hasn’t really been solved, the reason it’s still a concern, is that there are so many different perspectives and definitions of what constitutes fairness, right? So from a purely technical standpoint, technical meaning measurement science, the focus is on making sure that assessments are equally valid for different subgroups, and making sure that they’re equally reliable, so that they’re equally accurate in measuring someone’s capabilities or their personality or their fit for a role.

If you were to look at the technology-based assessment guidelines or the standards for educational and psychological testing that I was mentioning a moment ago, they do provide outlines of what that means. So, what does it mean for assessments to be equally valid? You can technically and empirically investigate that: if we mean, do test scores predict performance on a job equally accurately across groups, we can look at that, and in fact that’s what psychologists have been doing. The challenge is that other groups might not look at just those two areas, validity and reliability.

I’d say maybe on the other end of the continuum is the question: are the outcomes the same? So if there’s a passing score used on a test, do different subgroups pass the test at equal rates? Some people define fairness as exactly that, so if subgroups don’t have the same passing rates, the test is considered unfair. That’s where it becomes challenging in some of these discussions, because there could be many reasons that groups perform differently on assessments, and maybe the underlying reasons need to be examined. I don’t mean to downplay the concept of equal outcomes, like equal passing rates, because I think it’s important to consider, and to consider why.
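
To make these two fairness lenses concrete, here is a minimal Python sketch with invented data (the candidates, groups, scores, and threshold are hypothetical, not anything John describes). It checks the equal-outcomes view via subgroup pass rates, using the widely cited four-fifths heuristic from the US Uniform Guidelines as a flag, and the equal-validity view via per-group correlations between test scores and later job performance.

```python
# Illustrative sketch only: hypothetical data, not a method from the episode.
from statistics import correlation

# (group, test_score, passed, later_job_performance)
candidates = [
    ("A", 82, True, 4.1), ("A", 74, True, 3.8), ("A", 58, False, 2.9),
    ("A", 91, True, 4.5),
    ("B", 77, True, 3.9), ("B", 63, False, 3.1), ("B", 69, False, 3.4),
    ("B", 88, True, 4.3),
]

def rows(group):
    return [c for c in candidates if c[0] == group]

# Equal outcomes: compare subgroup pass rates. A ratio below 0.8 is the
# "four-fifths" heuristic often used to flag potential adverse impact.
rates = {g: sum(c[2] for c in rows(g)) / len(rows(g)) for g in ("A", "B")}
impact_ratio = min(rates.values()) / max(rates.values())
print(f"pass rates: {rates}, impact ratio: {impact_ratio:.2f}, "
      f"flagged: {impact_ratio < 0.8}")

# Equal validity: does the test predict performance similarly in each group?
validity = {g: correlation([c[1] for c in rows(g)],
                           [c[3] for c in rows(g)]) for g in ("A", "B")}
print({g: round(v, 2) for g, v in validity.items()})
```

On this toy data the pass-rate ratio is about 0.67, so the equal-outcomes lens would flag the test even though the per-group validity coefficients look similar, which is exactly the tension John describes between the two definitions.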

John Kleeman:

But I mean, can some of those reasons be whether the questions are written taking account of all the different cultural subgroups and so on? I mean, the people who argue for social justice, do they have some weight behind their arguments?

John Weiner:

Absolutely, they do, in the sense that test developers need to take those into account. So when content is developed for an assessment, and of course it depends what kind of assessment we’re talking about, but in general, when content is developed, it should be inclusive in terms of who develops the content. And if there are choices to make in terms of scenarios for questions or things of that nature, it’s very important to make sure that there’s fair representation of groups and so forth. So, yes, there are things that can and should be done from a technical standpoint.

John Kleeman:

And do you have a view about the current controversy over admissions testing in the United States, where admissions tests are being used less because people are concerned about the fairness and equity issues around them?

John Weiner:

Well, I do. I certainly have some exposure to that issue. I live in California, for example, and the University of California system has made admissions tests, like the SAT or the ACT, optional, or is not using them at all in many programs, and they are studying alternatives. I think it’s good, and scientists think it’s good, to question results and methods, but the challenge is going to be, “Well, what is the alternative?” My prediction is that there are opportunities to look at additional competencies beyond those that are currently assessed, and I think we’re likely to see some additional competencies folded into what the traditional assessments have been measuring.

John Kleeman:

Maybe some of these 21st century skills you were talking about, like creativity, collaboration, critical thinking.

John Weiner:

Yes.

John Kleeman:

And what about fairness in workplace assessments? I know there’s quite widespread use of personality tests within the workplace, and also mental ability tests. I mean, are they generally fair and valid for making decisions with? A bit of a broad question, but…

John Weiner:

I think it’s a good question that opens the door for a lot of discussion, and I’ll try to keep it brief. I’ll just say that it depends how we define fairness. Let’s say we take the position that we want to see equal or comparable passing rates for different subgroups: racial, ethnic, gender, age, and so forth. What the research shows is that personality assessments tend to be more neutral. They tend not to manifest differences between groups, which is interesting.

Cognitive ability tests, measures of pure reasoning or reading comprehension, for example, verbal tests, tend to manifest bigger group differences. They have historically. So that has typically been what I’d call the lightning rod for a lot of the controversy: assessments that focus heavily on cognitive ability tend to manifest those differences, attract criticism, and raise questions. But can they be fair? Just to close that off, I would say cognitive tests can be fair even if they manifest differences. If we’re asking whether they are equally valid, they can be shown to be equally valid, but it’s a social question more than a technical question in many cases.

John Kleeman:

And what kind of advice would you give people listening to the podcast who want to have fair tests? So if they’re creating either high stakes tests or employment tests or educational tests, what sort of things should they be thinking about to make those tests fair?

John Weiner:

Okay. Well, that’s a great question, of course. My recommendation would be to start by considering a broad range of definitions of fairness. So even before developing or adopting an assessment, let’s agree on how we’re going to identify fairness, or the different kinds of fairness, and potential bias, which could be part of that. And let’s then operationalize that: how we’re going to examine the composition of assessment tools, not just the type of assessment, but how it’s scored and used in decision-making. There are certainly strategies that can be put in place to help enhance fairness, mitigate bias, and ensure that tests achieve their purpose of providing valid and reliable assessment.

John Kleeman:

And how do you see things changing in the future? We talked a little bit about generative AI. Do you think that’s going to be the big change, or is it going to be more that we transform assessment in terms of what we assess? Or is it going to be both? I mean, from your experience of the past, how do you see things going in the future?

John Weiner:

Yeah. Well, people who try to predict the future, or write about predicting the future, say that we tend to overestimate how much change is going to happen in the near term, in a couple of years, and wildly underestimate how much change will happen in the slightly longer term, maybe 10 years from now. So depending on what we mean by the future, I’m sure I’ll either overestimate or probably underestimate it. But I do think this whole business with generative AI, the ability to generate content, is a big game changer. This is a seismic shift in the workplace and in education.

And so what will change, as I was alluding to before, is what roles humans will take in the workplace. What will we be asked to do? I think just about everyone is going to be asked to use some version of generative AI; a chatbot, ChatGPT, or some version of that is going to become part of the job. So we are going to have to rely more on how we engage with a really super-intelligent device to perform activities that we would have been performing ourselves. I see critical thinking as near the top of the list for that.

I do think other human characteristics are also going to be elevated, such as the ethical use of AI and ethical decision-making, because these tools can be used in very powerful ways that need to be considered. So, long answer made much shorter: the skillsets that people need are going to change quite a bit, and they’re going to be the 21st century skills that I mentioned, plus other skills that need to be added to that skillset.

John Kleeman:

That’s a very interesting way of thinking about it. And essentially, assessment will need to transform for that. So, how would you sum up where you see things going in the future?

John Weiner:

Yeah, well John, it’s interesting. We started our conversation with me joking that I came into this industry a long time ago on a covered wagon, and here we are, in Wild West territory again with a lot of the AI developments, especially generative AI, which everyone is talking about now and which will play a very big role in transforming work and learning. To be honest, we don’t know how, and that’s why it’s a Wild West. A lot of it is up for grabs, but that’s what makes this an exciting time to be in the industry, John.

John Kleeman:

Thank you, John. Really appreciate hearing your thoughts and I hope that our listeners do as well. And thank you, listeners.

John Weiner:

Well, thank you for inviting me.

John Kleeman:

Well, it was a pleasure. Thank you, everyone, for listening to us today. We appreciate your support. And don’t forget, if you’ve enjoyed this podcast, why not follow us on your favorite listening platform? Also, please reach out to me directly at john.kleeman@learnosity.com with any questions or comments, or if you’d like to keep the conversation going, and I’m sure John Weiner would also welcome you reaching out to him if you’re interested in a dialogue. Please check out the www.questionmark.com and learnosity.com websites for more information or to register for our webinars. Thanks again, and please tune in for another exciting podcast discussion we’ll be releasing shortly.
