Well, it’s been a busy year so far! Yardstick Assessment Strategies (YAS) has been out and about meeting many of you at conferences, making presentations, acquiring companies ;), and most importantly of all, helping your learners achieve competence by providing industry-leading testing products and services.
In many of my previous blogs, I’ve talked about innovation that has the potential to change the quality and scale of testing, from virtual proctoring to automatic item generation. I’ve been thinking about this topic for quite a while, inspired by the work that’s going on in the industry. This blog is about much of that work, based on our recent participation at the Association of Test Publishers (ATP) 2018 conference in San Antonio.
I thought one interesting way to put a finger on the pulse of the testing industry, specifically from the perspective of innovation, was to look at all the presentations in the program at the ATP conference, and summarize them by topic. Then, I’d curate the list a bit, based on the frequency of mention and the potential for impact on the industry. Here’s what I came up with, in alphabetical order.
Over the next year, YAS will be focusing on each one of these topics, perhaps with the occasional guest contributor, diving deeply into the issues that define them and providing you with a perspective on how each could affect the professions and credentials that are important to you. For the remainder of this blog, I will offer a brief description of each, and why its impact could be significant.
AI / Machine Learning is everywhere these days, from self-driving cars to influencing elections. Could AI have a significant impact on assessment? And what exactly is AI, anyway?
I’ve heard it said, correctly, that no amount of downstream effect in exam development can make up for deficiencies in the Competency Profile or Job Task Analysis. What is current best practice in CP development / JTA, and what opportunities does a well-developed CP / JTA enable for your organization?
In its best version, formative testing is psychometrics in the service of the learner. How could formative testing be constructed to best meet learner needs? How can content be developed that will help learners gain certification, but not undermine security requirements of the certification body?
Games maximize engagement, but do they really make for sound assessment? What kinds of tasks actually define gamification, and what kinds of testing applications are games appropriate for?
Innovative items hold the potential to expand the range of testable competencies, or to increase the alignment of assessment tasks with the targets of inference. In this light, what kinds of tasks show the most promise? And how do the psychometrics work for Innovative Items?
Many advantages for testing, including improved test security, larger item banks, and better construct alignment, become possible when more quality content is available. Item Modeling and AIG allow for the rapid development of aligned content, and the industry is just starting to grasp what that could mean for testing.
Micro-credentialing is a response to the relative inflexibility of larger, one-size-fits-all credentials, favouring instead ‘bite-size’ certifications on smaller skill sets. What are the challenges and opportunities that this new model of credentialing represents? How could it apply to licensure & certification?
Like innovative items, PBAs and Sims allow for the assessment of more complex competencies. The challenge with PBA is cost, resource intensity, and the difficulty of getting the psychometrics right. New organizations and technologies (e.g., Virtual Reality) have entered the mainstream and made their way into testing conversations and applications. Can these technologies make PBA more accessible, especially for smaller organizations?
It is no exaggeration to say that soft skill assessment is an Achilles heel when it comes to ensuring public safety. It is often easier to ensure a candidate has the technical skills to perform their job than it is to ensure they can, for example, communicate effectively or act in an ethical manner. We’ll talk about how best to integrate soft skill assessment into a variety of testing environments and organization sizes.
Test Security is a perennial issue for the industry. What are the most promising approaches to detecting cheaters and preventing item harvesting? Or, are there more proactive approaches to maintaining item security and therefore the validity of decisions following from your test program?
I’ve written a fair bit about Virtual or Remote Proctoring in the past year. One of the most exciting developments in this area is the potential for VP to be used in high-stakes applications, reducing costs and improving accessibility. I hope to interview an organization that currently uses VP in such a program and has obtained certification for it.
So that’s what’s on the agenda for the upcoming year! Are you excited yet? Stay tuned for a more detailed timeline of topics, and feel free to suggest future topics of discussion for this blog, and for future YAS webinars and podcasts!