Compliance testing is critical. Assessments provide evidence of employee awareness and competence to internal stakeholders and regulators.
They can also be used for pre-testing, personalising the training, and improving the training content over time. Here we explore some of the strategies Skillcast uses to create robust, high-quality assessments for corporate compliance training.
If you’re a learning developer, trainer, or subject matter expert involved in planning online and in-person evaluations, then you should find this interesting.
Test application rather than recall of knowledge
Most compliance assessments rely on multiple-choice questions (MCQs). Despite their shortcomings, MCQs are easy to write, easy to replicate as variants, and easy to score and aggregate results from.
However, to test knowledge and competence with a good degree of reliability (i.e. the consistency with which the assessment measures competence), you need to write questions of high quality and consistency, pitched at a level of difficulty fit for purpose.
Testing comprehension and recall of factual knowledge, e.g. types of fire extinguishers or penalties for fraud, may be possible with simple MCQs.
However, compliance assessments should be more about testing "how employees will act" than "what they know" - testing application rather than basic knowledge.
For this reason, compliance assessments should feature scenario-based questions where learners are required to analyse, critique and make decisions.
A typical scenario-based question starts with a case statement or vignette that can include a detailed background, even a separate document that the employee must study. It can also have a realistic dialogue between the protagonists (in plain text or video format) that the learner is required to consider. A policy document. An order form. You get the picture.
The vignette can serve as the basis for asking one or more MCQs, each with its own question stem and response choices. These choices should be written as decisions or critiques that the learner is asked to make.
Writing and quality testing such scenario-based questions can be time-consuming and expensive, but this can be offset by creating variants of the questions for randomisation.
Multiple-choice question best practices
- Each question must be related to a learning objective in the course
- The question stem should state clearly the problem and only test a single idea
- The details of the scenario should be kept in the question stem, and the response choices should be kept short
- The distractors must be plausible, but the correct choice should be unambiguously the best answer
- The distractors should incorporate common compliance errors that people are known to make, rather than being obviously wrong
- The length of the response choices should be similar - if the correct choice is longer than the distractors in some questions, it must be shorter in others
- All response choices, correct choice and distractors alike, should follow on grammatically and logically from the question stem
- The question stem should be worded positively - if it's necessary to use a negative word like "not", it must be underlined or capitalised
- There should be no double negatives, e.g. a question stem featuring a "not" or "except" and one or more choices also featuring a negative
- Ideally, "All of the above" and "None of the above" should be avoided
- You can use dialogue as response choices, too - to make conversations, advice, and decisions sound more natural
- The position of the correct choice should vary randomly (in online assessments, the options can be jumbled each time the question is asked)
- The language should not use humour or colloquial terms that leave non-native speakers at a disadvantage
Compliance assessment best practices
Compliance assessments typically consist of a bank of MCQs from which a certain number are randomly selected for each test. The size of the question bank varies, but a 3:1 ratio (e.g. a bank of 30 questions for an assessment where ten questions are asked) is generally considered to be sufficient.
However, this random sampling approach is flawed and results in a low degree of reliability. When questions are picked from a single pool, there is a risk of the test including multiple questions on certain topics (e.g. gift-giving) but no questions on other topics (e.g. PEPs) covered in the course. Moreover, since the questions on some topics are likely to be less difficult than others, the overall difficulty of each test will vary.
The best approach to addressing this shortcoming is to structure the assessment into multiple question banks - each of which aligns with a single action point of the course. Action points are analogous to learning objectives but distinct: whereas the learning objectives list what people will learn, the action points list what the company wants them to do (or not do) after being trained.
The MCQs (preferably scenario-based) in a given question bank must test only the corresponding action point and be of the same level of difficulty. The test is then composed by randomly selecting a set number of questions (preferably one) from each question bank, as in the sketch below. This ensures that each instance of the assessment is consistent, and each learner is tested on all the points that matter for compliance.
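To make this concrete, here is a minimal sketch in Python (the bank names and question IDs are illustrative, not Skillcast's data model) contrasting per-action-point selection with the flawed single-pool approach:

```python
import random

# Hypothetical question banks, one per action point (illustrative data).
# Each bank holds interchangeable questions of equal difficulty that all
# test the same action point.
ACTION_POINT_BANKS = {
    "gifts_and_hospitality": ["G1", "G2", "G3"],
    "facilitation_payments": ["F1", "F2", "F3"],
    "third_party_due_diligence": ["T1", "T2", "T3"],
}

def compose_test(banks, per_bank=1, seed=None):
    """Draw a fixed number of questions from every bank, so each test
    instance covers all action points at a consistent difficulty."""
    rng = random.Random(seed)
    return [q for bank in banks.values() for q in rng.sample(bank, per_bank)]

def compose_test_single_pool(banks, n_questions):
    """The flawed alternative: sampling from one pooled bank can
    over-sample one topic and miss another entirely."""
    pool = [q for bank in banks.values() for q in bank]
    return random.sample(pool, n_questions)

print(compose_test(ACTION_POINT_BANKS))  # one question per action point
```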
Best practices for true-false questions
True/false questions are often regarded as inferior to multiple-choice questions. This reputation is unjustified, as numerous research studies have shown that the reliability scores of multiple true-false (MTF) questions are higher than those of MCQs.
An MTF consists of a single vignette with a question stem followed by several items, each of which the employee must evaluate as true or false. The items may be presented together or one at a time - the latter is more accessible when the assessment is online.
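For illustration, an MTF question can be represented as a simple data structure (a hypothetical sketch - the field names and the scenario are our own, not a Skillcast format):

```python
from dataclasses import dataclass

@dataclass
class MTFQuestion:
    """One multiple true-false question: a vignette and stem shared by
    several statements, each of which the learner marks true or false."""
    vignette: str                  # the scenario the learner must consider
    stem: str                      # the question posed about the scenario
    items: list[tuple[str, bool]]  # (statement, correct answer) pairs

example = MTFQuestion(
    vignette="A supplier offers your colleague two tickets to a sold-out match.",
    stem="Which of the following statements are true under the gifts policy?",
    items=[
        ("The tickets may be accepted without any approval.", False),
        ("The offer must be recorded in the gifts register.", True),
        ("Declining the offer requires no further action.", False),
    ],
)
```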
To compare the MTF format with MCQs, consider the example of a course on Bribery Prevention. Let's assume that this course has five action points and an assessment with two MCQs for each action point. Take one of these action points - say, one related to the company's gifts and hospitality procedures. This would be tested with two MCQs, each with four response choices. The probability of an employee with no competence selecting the correct answer at random is 25% (it would be higher if the learner had enough partial understanding to eliminate one or more of the distractors). So the probability of this employee answering both questions correctly is 6.25%.
Extrapolating this, if there were 1,000 employees in the company with poor or no competence in gifts and hospitality procedures, around 62 of them would nevertheless pass the assessment on this action point.
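For readers who want to check these figures, here is the arithmetic as a short calculation:

```python
# Probability of guessing one four-option MCQ correctly
p_mcq = 1 / 4                # 25%

# Probability of guessing both MCQs on the action point correctly
p_pass = p_mcq ** 2          # 0.0625, i.e. 6.25%

# Expected false positives among 1,000 employees with no competence
print(round(1000 * p_pass))  # ~62
```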
This level of false-positive error may be tolerable if the assessment comes after the course and if the purpose of the intervention is to raise awareness rather than assess competence. However, it would be intolerable if the assessment were being used for pre-testing.
The purpose of pre-testing is to enable those who demonstrate competence on an action point to skip parts or the whole of the training content related to that action point. For companies to fulfil their compliance training obligations, they need to be able to demonstrate that the pre-test is robust.
In the above example, with a 6.25% false-positive error rate, around 62 out of every 1,000 employees with insufficient competence would be able to skip the content on gifts and hospitality procedures via a pre-test composed of two MCQs on this point.
To reduce this false-positive error rate, we use the MTF format. Each MCQ with four options can be replaced with an MTF with three, or even four, items, with no appreciable difference in the assessment duration or user experience. The probability of an employee with no competence guessing any single item correctly is 50%, so the probability of guessing all the items across two MTFs with three items each is 1.56% (this falls to 0.4% if the MTFs have four items each). This four-fold reduction in the false-positive error rate makes MTF the format of choice for pre-testing.
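The same arithmetic, extended to compare the two formats:

```python
# MCQ baseline: two four-option questions on the action point
p_mcq_pass = 0.25 ** 2   # 6.25%

# MTF alternative: every true/false item is a 50% guess
p_mtf3 = 0.5 ** (2 * 3)  # two MTFs x three items = 1.56%
p_mtf4 = 0.5 ** (2 * 4)  # two MTFs x four items  = 0.39%

print(f"{p_mcq_pass:.2%}  {p_mtf3:.2%}  {p_mtf4:.2%}")
print(f"reduction: {p_mcq_pass / p_mtf3:.0f}x")  # 4x
```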
Enhancing assessments using confidence levels
To reduce false-positive errors further, we take a gamification approach to pre-testing, in which learners are invited to play a game for points (and optionally compete for places on a corporate leaderboard). In this game, we can allow learners to bet on their answers to some or all questions to earn additional points.
The game dynamics can be fine-tuned with a variety of settings, including negative marking and the value of the bet. Irrespective of the points, this format of pre-testing adds a valuable new dimension - the confidence that each learner has in their responses. Using this confidence level alongside the MTF score can drive down false-positive errors further and improve the reliability of the testing.
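One possible scoring rule - a hypothetical sketch of our own, since the exact game mechanics are not specified here - is to let the learner stake points on an answer, with the stake doubling as a confidence signal:

```python
def score_item(correct: bool, stake: int, base_points: int = 10,
               negative_marking: bool = True) -> int:
    """Score one item in the betting game (illustrative rule, not
    Skillcast's actual mechanics).

    The learner earns base points plus their stake for a correct
    answer; with negative marking, a wrong answer costs the stake.
    """
    if correct:
        return base_points + stake
    return -stake if negative_marking else 0

# A confident wrong answer is more telling than a hedged one:
print(score_item(correct=False, stake=20))  # -20: high confidence, wrong
print(score_item(correct=False, stake=0))   # 0: no points staked, none lost
```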
This article was originally published by T-CNews Online, an independent resource for people development and people regulation personnel within the UK financial services industry.