We had the privilege of hosting Matt Wingfield, a renowned expert in the field of assessments and Chairman of the e-Assessment Association. Speaking with our host, John Kleeman, Matt shares valuable insights into the world of e-assessment and its implications for education, certification and professional development.
From an overview of the e-Assessment Association and its mission of promoting and advocating for high-quality e-assessment practices worldwide, to the growing importance of e-assessment across sectors, he highlights its ability to enhance learning outcomes, improve efficiency and provide reliable and timely results. While mindful of the potential concerns related to e-assessment, including issues of security, fairness and accessibility, Matt emphasizes the importance of a robust infrastructure, data protection and ethical considerations to ensure a trustworthy and inclusive assessment environment.
He delves into the future of e-assessment, exploring the potential of artificial intelligence and machine learning in enhancing assessment practices. Matt envisions a future where e-assessment plays a central role in continuous learning and professional development, enabling individuals to showcase their skills and knowledge throughout their careers.
Hello, everyone, and welcome to Unlocking the Potential of Assessments, the show that delves into creating, delivering, and reporting on fair and reliable assessments. In each episode, we chat with assessment luminaries, influencers, subject matter experts, and customers, to discover and examine the latest in best practice guidance for all things assessment.
I’m your host, John Kleeman. I’m the founder of Questionmark and EVP of Industry Relations and Business Development at Learnosity, the assessment technology company. And today, I’m really pleased to welcome Matt Wingfield, who’s a genuine assessment expert, having been chair of the e-Assessment Association from 2012, and then its CEO. And he’s also a consultant and a speaker on e-assessment and very, very knowledgeable, as we’ll see in a moment, on the assessment market. Welcome, Matt.
Hi, John. Thank you very much for having me.
Very pleased to have you. And can I start with a question I always ask people, how did you get into assessments?
Well, really and truthfully, it was by accident. I started my career as a teacher, as an elementary school teacher, moved into the world of EdTech and started working for an EdTech company that was primarily focused on supporting the learning aspect of education in primary schools, colleges, universities, and work-based learning. And then migrated with them into the assessment space and headed up a division for them, providing assessment solutions to their customers.
And that, as I say, really brought me into the assessment world. So, that was probably around 2000 when I first became actively involved in that assessment community, so some 23 years ago, which is quite scary when I think back at the length of time that represents.
And I think before moving into assessment, you were in e-portfolios, which are a kind of assessment. Do you want to tell us a little bit about that?
Yeah. Yeah. So, it’s interesting and I think maybe as a slightly roundabout way of answering that question, I think there’s often a view that assessment and learning are somehow disconnected from each other, at least at a policy level that’s quite often the case. And I think that’s a complete false-ism. Education is founded on the concept of learning as a continuum, and that assessment is there to support the learner through that journey and help them understand and their teacher to understand where they are as they go through that journey. And that’s where the power of e-portfolio comes in, in my mind. And it was e-portfolio, you are right, that brought me into this space. So, the company I worked for created an evidence-based assessment platform, which was in other words, an e-portfolio. So, in basic terms, an e-portfolio is an online repository, it doesn’t actually have to be online, but a digital repository of student learning evidence that shows their journey over time and how that’s developed and how they’ve collaborated with other people and built up a range of knowledge and skills that they can demonstrate.
And what the company I used to work for, which was called Digital Assess, did was take that concept, create an online repository, but then give both the learner and teachers the ability to tag the evidence that that learner put into their portfolio against assessment criteria, so that they could demonstrate skills and knowledge that they'd attained against a particular qualification or a particular aspect that they were being assessed against.
And I say that’s what brought me into the world of assessment, but from a slightly different angle, which for me highlighted that separation between learning and assessment and how they’re often treated as being different things when actually they should be working in harmony together.
And I think even before that, you started work life as a primary school teacher, elementary school teacher?
I did, yes. Yeah, I’d always liked the idea of working with children and I found certainly working as a primary school teacher, elementary school teacher, very, very rewarding. The age group I was working with was the sort of 10 to 11-year-olds, which are just at that really nice age where they’re still eager and hungry to learn, but they do thrive on your support in terms of that journey, which means it’s a very rewarding process as a teacher.
But to be honest, paperwork became an increasingly big part of a teacher’s life, it’s even worse these days. And I just found that that started to erode the benefit that I could take away personally in terms of working with the children to help them advance. It was all about ticking the boxes and making sure that we’d covered those parts of the curriculum.
So, I actually was headhunted into working for an organization that ran study centers where school groups could visit and use what was then high-tech IT equipment. Looking back on it, it was very old tech, big, monolithic desktop computers, but with some data logging and some other stuff going on that allowed them to interact with the environment around them in the study center.
So, I set up and ran a study center for this organization in North Norfolk, which was great, but it was 24/7, 7 days a week, and I only lasted a year before I got a little burnt out and was then headhunted by the organization that provided a lot of the educational software to that center, to run their business development function and work with both companies and educational institutions that they were working with, with the software that they provided. Which was very much focused around creativity and problem-solving and open-ended demonstration of capability, which was a really interesting and fun area to be in. And I thoroughly enjoyed it and stayed with that organization for 20 years.
I think it’s a very impressive organization, and I think it used to be very UK focused, but I think it’s now much more European and global-focused, which is good. Do you want to just share the URL of the e-Assessment Association? And I think the conference is in the first week of June and I think this podcast will go out before then, and last year it did sell out, but there might still be seats available. It was a very good conference last year.
Yeah, no, thank you, John. That’s appreciated. Yes, so the URL for the association is a simple one. It’s e-assessment.com. It’s completely free to become an individual member. So, if this is an area that you are interested in, this whole use of technology and supporting assessment and testing, then please do join. Takes a couple of minutes and as I say, it doesn’t cost anything, brings you into that broader community.
The conference is something we hold every year. It’s become a bit of a go-to place now to understand what’s happening in the market and to hear some really good examples and case studies of best practice use of technology and assessments. It takes place in Central London, this year at the Park Plaza Hotel on the southern embankment in Central London, just downstream a little bit from the Houses of Parliament. A really nice location. 6th and 7th of June.
And it isn’t quite sold out yet. I think we’re about 70% capacity at the moment. But if you are interested in coming along and participating in that conference, then you can buy tickets through the website, go to the same website, e-assessment.com, and there’s a tab on the top line for the conference that will take you to the conference website where you can book your tickets.
So, you've seen the e-assessment industry for well over a decade, leading the e-Assessment Association and obviously before that working in other sectors. What kind of trends have you seen change the industry in the last 10 to 15 years?
I think when I first became involved in the association and the industry particularly, there was a huge focus on testing knowledge, testing the ability for a student to regurgitate knowledge that they have learned in whatever context that is. And I think that has changed as people’s understanding of what we should be assessing has matured. Maybe that’s too strong because I think it’s been a collective decision to look at richer forms of assessment by the sector as a whole or by the community, the education community as a whole. It’s not just that there was an ignorance about that at the beginning.
But certainly, there’s been a maturing of people’s understanding about how technology can help you assess things that are difficult to assess on a piece of paper or necessarily just sitting in front of someone in a scalable way. It’s difficult to send an assessor out to watch everybody doing all of the things that they’re doing in a broader context. Well, we can use technology to scale that assessor and to make it possible to go and watch them do stuff and observe them do stuff.
So, I think that the overall trend I’ve seen is that people’s understanding of what technology can do has matured significantly, and their trust in the ability of technology to be able to do that has also matured significantly. And I think in no small way, the pandemic’s really helped with that journey because pre-pandemic, there was lots of really good technology out there that could support assessment, and a real breadth of different assessment approaches. But people were always nervous about making the jump from the pen and paper that they know won’t let them down, to a piece of technology that was somehow unknown or uncertain.
And the reality is, actually, if you compare those two modes, yes, there's a fundamental physical difference, but the reliability of the technology now compares very favorably with pen and paper, if not better, because bits of paper can get lost far more easily than digital evidence can.
And so, that maturity of understanding has been, I suspect, my biggest observation. And people at a policy level and at an exam and test awarding level, the maturity of their confidence in technology has really grown. There were always some trailblazers in this space. So, if I look back to 2000, when I first joined the assessment community, people like City & Guilds who are a large vocational awarding body based in the UK but operating internationally, were already delivering onscreen digital assessments. And their journey has just increased in its richness as they’ve gone through those intervening 23 years. Whereas, other similar organizations have only just joined that e-assessment path, primarily as a result of the pandemic and the huge impact that that had on everyday learning and assessment practice.
But generally speaking, as a collective now, so much more confidence in technology, so much more recognition of the fact that technology can help us step back and think about what it is that we want to assess and then find ways to support that process of assessing that kind of mature evidence like creativity and problem-solving and critical thinking.
And so, the latest technology is obviously generative AI and ChatGPT. How do you see that impacting the world of assessment?
Well, if anything, I think it’s going to help the trend and support the trend that I was just talking about, even more. There are two schools of thought here, aren’t there, around generative AI and its ability to help the student? Help in inverted commas in some people’s minds. The first camp is, “Let’s just ignore it and let’s ban it.” And there’ve been quite a few universities and a few states in the US that have done that. They’ve just said, “Students aren’t allowed to use ChatGPT.”
Or, there’s the, “Let’s embrace it,” camp, “and let’s think about how we can use this to our advantage.” And I’m very much in the latter. I think the former is folly, because it’s very difficult to shut something like that off, and even if you do, it’s counterproductive because the students are going to have to use that technology once they leave that formal education setting and they need to have the skills to use that wisely.
And I think that’s the key thing, it’s about technology doesn’t stand still, technology evolves. We can’t stop that evolution and our learners of today need to be properly equipped to deal with that technology when they come out into the world or when they advance further in their career. So, actually embracing some of this new technology, I think makes a lot of sense.
But the biggest implication it has from an assessment perspective is that, of course, students can cheat using ChatGPT, in the same way if we think about it, that they could already cheat with Google or they could already cheat with Wikipedia. And actually, what these tools mean is that we have to rethink what it is that we really need to assess in those learners and we have to think about the way in which we’re going to do that. Is that just a multiple-choice question test?
Well, I would argue that that creates challenges, because it's easily cheated, and, test security aside, that kind of knowledge recall is in some ways redundant, because these tools allow us to access that kind of knowledge. Are we going to ask them just to write an essay? Well, again, that's got problems: ChatGPT can write you an essay, and okay, it's not perfect, but it does a pretty good job.
So, actually what we need to try and do is build an assessment landscape that uses a mixture of different modalities to assess a student’s capability. So yes, there’s some multiple choice to test that they can still recall the knowledge, which is important certainly in some professions, particularly. Yes, we want them to write an extended response, but let’s also get them to demonstrate that they can perform a skill or that they can interact with a client or a potential client, and they have the skills in which they can undertake a professional discussion so that we build a balanced view of their capabilities.
And let’s not leave it all until the last minute when they’re sitting in an exam hall for three hours when they’re trying to cram all of that assessment into that one piece. Let’s take stock of their formative learning journey all the way through. So, again, going back to that e-portfolio kind of stuff that I was talking about earlier on because then we’ll be able to spot much more easily if there are sudden spikes in the student’s capability because they’ve suddenly started using ChatGPT to get all of the answers because we’ll have a much fuller picture.
And even more revolutionary, God forbid we have another pandemic, but if we have something that interferes with our assessment regime, like a pandemic, we’ve also got formative evidence that we can fall back on as an evidence of their capability, and both the student and their teachers understand where they are on that journey. So, again, it’s not all leaving it to the cliff edge at the end where we do one big, summative assessment. We’re spreading that load, which helps attack that malpractice risk, but also helps mitigate against big, catastrophic events that disrupt our single summative event taking place.
So, that’s very interesting. You said that in the beginning of your career you were involved in assessing creativity and you implied that we can use technology to assess things like creativity or problem-solving or whatever, but creativity is a really quite hard thing to assess. How can you use technology or indeed, other means, to assess creativity?
It is definitely a very hard thing to assess, and I don’t think technology provides necessarily the solution to it, but I think that what it does is allows you to, like I was just describing, I guess, take a longer view of a student’s capability over an extended period of time. And that extended period of time might just be a week, it might be several weeks, it might be several months, but that gives you the opportunity to allow the student to interact with other people on a project, or to demonstrate that they can solve a problem by actually physically solving a problem.
So, we're not restricted to what they can do in 30 minutes or an hour or whatever in an exam; we're actually able to use the technology. I remember being involved in a project that was all about capturing creativity, problem-solving and critical thinking, run by Goldsmiths, University of London. And one of the professors there was talking about the fact that what we want the technology to do is to hoover up all of that evidence that naturally occurs as students are learning and demonstrating that they can work together and that they can solve problems, and then present that in a way that teachers can easily understand, to see where they are collaborating, where they are solving problems, and how that matches against rubrics.
We solved that problem through this Goldsmiths’ project called the E-scape Project, by using an e-portfolio. The technology allowed us to capture snapshots of their performance all the way through a period of time, to then be able to dissect it around how they were interacting with other people and how they were solving problems.
So, I think to go back to your original question, those skills are difficult to assess, but one of the reasons they are difficult to assess is that it's very difficult to get a clear picture of a student's, or multiple students', capabilities in those areas, because we try to capture it in an hour sitting in front of a computer asking them some questions.
And organizations like PISA try to do this; they say that they can assess creativity by asking students a series of multiple-choice and short-response questions. I'd contest that, because I think that's slightly false. It's a constructed creativity. I think the real nugget is the creativity and the problem-solving that students undertake as part of their natural learning. And if we can use technology to capture insight into that journey, then we end up with a much richer view of their capabilities in those areas.
So, essentially if you just do a conventional test, then you get a very narrow view of people's creativity, or indeed their problem-solving, because it doesn't necessarily come on demand; you might need to think about it, or sleep on it, or other things.
Yeah, because just what you’ve said, and also of course you’ve got to throw into the mix that the person may not be on their best performance on the day that you are testing them. So again, if you are able to use technology to take a longer view of that, then you’ve got a more reasonable chance of capturing them at their best or at least a good representation of their best during that extended period. Rather than saying, “Right, on Tuesday at 3:30, I want you to be creative or I want you to solve a problem.”
And actually, when you think of it in that context, that’s quite a silly thing to be suggesting, but yet we do it and we do it so much in our high stakes testing that says, “On this day, at this particular hour, you’ve got to be your best possible self in order to demonstrate to us that you are capable of these things.” And yes, some people can do that, but quite a lot of people can’t. So, using technology to get a broader spectrum view of a person’s capability, I think is the real answer.
Oh, I love that. Thank you. Thank you. So, I think as well as leading the e-Assessment Association, in fact probably more of your time is spent on the consulting. What kind of consulting do you do?
Yeah. So, no, it is more and more now. So, I left the world of being employed in 2017. It was quite a hard and nerve-wracking thing to do, having been employed by the same organization for 20 years. But I branched out into providing independent consultancy. Fundamentally, I suppose you can break the consultancy down into two groups. The first group is focused on supporting EdTech and e-assessment vendors, suppliers of technology, around building their strategies, supporting their business development processes for breaking into new markets and new sectors within the markets. And that’s an area that I know really well, because that was the role that I was doing in my former employed life in business development.
And I’ve worked with a wide range of organizations in that capacity, from large assessment organizations like RM, through to small, innovative EdTech providers like Sparx, who provide an AI-based mathematics learning tool. And that’s been really interesting. And that’s one of the things I really relish about the consultancy piece is there’s always something different to look at and be involved in.
And then the second group of people that I support are assessment providers, so exam awarding bodies, test providers. And that's been really interesting. I fell into it a bit by accident when I was approached by one of the UK's largest awarding bodies to help them with a procurement project, where they were trying to select a new assessment platform provider, using my experience and my externality, I suppose, on the process.
And that was really interesting because I’ve spent my whole career selling to those organizations and suddenly I find myself on the other side of the fence and supporting the procurement process. At the moment, I think I’ve got six or seven awarding organization clients where I’m supporting them not just on procurement, but also on how to maximize the use of the technology they’ve already got, in deploying effective assessments. It’s a fascinating part of the work that I do and one that I really enjoy.
So, look, we always like to share some good practice on this podcast. Could you perhaps, first of all, maybe share some good practice for tech, e-assessment vendors, looking to do new things and then perhaps exam organizations or test publishers or others, what advice would you give from the different angles? Let’s start with the vendors. What advice would you give vendors?
Yeah, and I think there is some commonality across the two groups, by the way. And I always think none of this is rocket science; it's all obvious when you step back and think about it. But that's the problem for many people: you get so focused on the day-to-day that it's very difficult to step back. And so, particularly from a supplier perspective, I think it's important to be focused on what you are good at doing and not try to be all things to all people. And that goes for the technology itself: if you are really, really good at delivering multiple-choice questions at large scale, then focus on that, grow that, and make it the best thing that you do, and be the best in the market at doing it. Because the e-assessment community is now more and more crowded, particularly in the UK, with vendors trying to sell their solutions. So, identify what you are good at doing and do more of that. Don't try to be all things to all people.
And the other temptation, which is quite … Mistake is probably too hard a word, but quite an easy trap to fall into. When you’ve got something particularly innovative, it’s very easy to try and sell that to everybody and to find new use cases in different markets, because often new technologies take a while to embed in a market. So, don’t give up on the core markets that you know that there’s synergy with, with your innovative technology. Keep focused on that. Don’t get caught being pulled off at a tangent to a completely different sector because someone’s shown some interest in your technology. Keep true to your course. I think that’s really my overarching advice. Decide what you’re good at and really focus that and pool all of your resources onto that thing.
And supposing you’re an examining organization looking to work with suppliers or starting a new digital project. What advice would you give them?
I think it’s similar but slightly different. I think it is about focusing on what you are good at doing still, but in the assessment context, it’s about deciding what it is that you really want to assess and then how best to go about doing that. So, there’s a temptation and it’s a, again, very easy trap to fall into, to keep doing the same thing and just using technology to try and make it more efficient. And that’s definitely a positive thing. If we can make things quicker, if we can make it easier for the staff of an awarding body to get through processes more quickly, then they’ve got more capacity to do more stuff.
But are we inadvertently missing the opportunity to use new technologies or different approaches to assess different things which actually carry more monetary value, if you like, for the learner in the 21st century? And I think a really good example, if you don’t mind me quoting a specific example here, is something that an awarding organization that I’ve done quite a bit of work for now called NEBOSH, who are a UK-based but internationally working organization who focus in the health and safety sector.
The pandemic caused a real problem for them, because most of their assessments were delivered on a sessional basis, where everybody went into a test center or logged on at exactly the same time and did an assessment. And the pandemic put paid to that; it was very difficult for them to carry on with that particular model. It made them and their regulator, which in this case was the Scottish Qualifications Authority, sit down and think about what it was that they were trying to assess the candidates for. And it was refreshing that the regulator was involved in this discussion.
And actually, they realized that they weren't necessarily interested in making sure that the candidates could recall all of the information on cue and answer a bank of multiple-choice questions. Actually, what they wanted them to do was take the knowledge that they had and apply it in their own workplace scenario. So, they shifted their assessment model from a closed-book assessment of knowledge to an open-book application-of-knowledge context. And in doing so, they have started to assess things in a much more meaningful way for their candidates.
And interestingly, and I’m hoping this is a validation for that kind of thinking, their candidate numbers have just gone through the roof as a result of that, because not only are the assessments more accessible and they’re on-demand, so people can sit them more or less when they want to, but they’re also focused on demonstrating capability which translates directly into an ability to do the job better.
So, there’s a progression there for the candidate themselves and, “I want to do this NEBOSH qualification because it’s going to demonstrate to my boss or to my next employer that I can do the job that I’ve been hired to do.” And that’s really important. So, I think it’s step back, think about what it is you’re trying to assess and then think about how you are going to assess that with the technology that you have at your disposal.
Thank you. I think that’s great advice for most people, if not everybody listening to this podcast. So, thank you, Matt, and thank you very much to our audience, thank you for listening to us today. We really appreciate your support.
And don’t forget, if you’ve enjoyed this podcast, why not follow us through your favorite listening platform. Please reach out to me directly at firstname.lastname@example.org with any questions. You’ll also find Matt on LinkedIn, and I’m sure you’re open to some more consulting contracts, at least when the current one is finished, Matt?
Of course, yes.
But please keep the conversation going. You can also visit the Questionmark website at questionmark.com, to register for any of our many best practice webinars. And thank you again, and please tune in for another exciting podcast we’ll be releasing next month.