In many assessment situations, an overall score does not tell students enough about what they are doing well and where they are struggling. Cognitive diagnostic models (CDMs) assess whether examinees have the skills needed to answer test questions, so that they can receive finer-grained feedback on their strengths and weaknesses. This means basing pass/fail decisions on words rather than on numbers alone. Carlos F. Collares, assistant professor at Maastricht University (the Netherlands) and psychometrician at the European Board of Medical Assessors (EBMA), argues for moving beyond unidimensionality to embrace a more qualitative use of modern psychometrics.
Madrid - July 28, 2022. Cognitive diagnostic modelling in healthcare professions education: an eye-opener is a recent paper published in Advances in Health Sciences Education in which Prof Collares tries to demystify the so-called ‘post-psychometric era’. His point is that the literature confirms that scores are not effective in preventing or detecting critical learning gaps in time. Prof Collares, who is a member of the advisory council of Practicum Script, states that “the inclusion of feedback based on CDM in test results may provide an essential tool for learners, teachers and other institutional stakeholders, allowing them to address specific gaps in cognitive attributes, and going beyond the remediation of content-wise knowledge gaps only.”
Also referred to as diagnostic classification modelling, CDM focuses on multiple latent variables, which are assumed to be discrete. Instead of assuming that a single dimension or factor is being measured, CDM uses a restricted form of latent class analysis to assign test-takers to qualitative categories, determining for each of a set of attributes whether the examinee has mastered it; the result is a rich multidimensional profile of which attributes they possess and which they do not. These attributes could be different psychoeducational constructs, but they are often used to represent specific competencies of examinees. In this way, CDM provides more granular evidence and has potential for guiding teaching and learning decisions in the classroom.
In short, Prof Collares explains, “the latent variables chosen to measure constructs of interest are no longer continuous, as in item response models, but categorical.” Moreover, CDM estimates the relationship between the examinees’ cognitive attributes (any cognitive process, skill, or competency deemed necessary to solve the problem correctly) and the attributes required to solve each test item. In addition, CDM allows any item to reflect more than one latent variable concurrently. The interactions between latent variables can be modelled flexibly, which makes CDM a feasible approach for capturing the multidimensionality within items with a higher level of professional authenticity.
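As a purely illustrative sketch of this classification idea (a hypothetical Q-matrix of item-attribute requirements and the simple DINA rule, not the specific models or data used in the paper or in Practicum Script), the code below shows how a response pattern can be mapped to an attribute mastery profile rather than a single score:

```python
# Illustrative sketch only: hypothetical Q-matrix and the simple DINA rule
# ("an item is answered correctly only if every attribute it requires is
# mastered"), not the specific models or data reported in the paper.
import itertools
import numpy as np

# Q-matrix: rows = items, columns = cognitive attributes (here A, B, C).
# Q[j, k] = 1 means item j requires attribute k.
Q = np.array([
    [1, 0, 0],  # item 1 requires attribute A only
    [1, 1, 0],  # item 2 requires attributes A and B
    [0, 1, 1],  # item 3 requires attributes B and C
    [0, 0, 1],  # item 4 requires attribute C only
])

def ideal_response(profile, Q):
    """Deterministic response pattern implied by a mastery profile (ignoring slip/guess noise)."""
    return np.all(Q <= profile, axis=1).astype(int)

# Observed responses of one examinee (1 = correct, 0 = incorrect).
responses = np.array([1, 1, 0, 0])

# Enumerate all 2^K mastery profiles and keep the one whose implied
# responses agree most closely with what was observed.
K = Q.shape[1]
best = max(
    (np.array(p) for p in itertools.product([0, 1], repeat=K)),
    key=lambda p: int(np.sum(ideal_response(p, Q) == responses)),
)
print("Most consistent mastery profile (A, B, C):", best)  # -> [1 1 0]: masters A and B, not C
```

In real applications, slip and guess parameters are estimated and profiles are assigned probabilistically; the enumeration above only conveys how a profile, rather than a score, becomes the outcome of interest.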
According to Prof Collares, “criticisms about psychometric paradigms currently used in healthcare professions education include claims of reductionism, objectification, and poor compliance with assumptions.” Nevertheless, probably the most crucial limitation comes from learners’ difficulty in interpreting numerical scores and the detrimental impact scores can have on them. For example, a good score can be awarded even when the correct answer is reached through an erroneous pathway or the wrong cognitive process, while a score of zero on a binary scale says nothing about the degree of misunderstanding behind the wrong answer.
As in programmatic assessment, an approach in which routine information about the participant’s competence and progress is continually collected and analyzed to allow high-stakes decisions at the end of a training phase, CDM differs from older psychometric paradigms by providing meaningful diagnostic feedback. It classifies test-takers according to their mastery or non-mastery of a large number of specific latent categorical attributes. As a result, adaptive learning for formative purposes becomes possible and tutoring becomes more intelligent.
But how would students pass in an assessment framework based on cognitive diagnosis? In Prof Collares’ opinion, “a pass qualification would likely be given according to the demonstrated competencies or cognitive attributes and not simply as a function of the obtained score.” The number of false positives and false negatives on an assessment could then be better estimated and even lowered. Alternatively, the scores associated with the latent class in which all attributes of interest have been demonstrated could inform a cognitive diagnosis-based cut score. “My first attempts at using CDM for standard setting strongly suggest that programmatic assessment is not a luxury but a necessity,” the psychometrician emphasizes.
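As a minimal illustration of that contrast (the attribute names, cut score and mastery rule below are invented, not Prof Collares’ actual standard-setting procedure), the following sketch compares a score-based pass decision with an attribute-based one:

```python
# Illustrative only: invented attribute names, cut score and mastery rule,
# not an actual CDM-based standard-setting procedure.
CUT_SCORE = 6  # traditional numeric standard on a hypothetical 10-item test

examinees = {
    # name: (total score, estimated attribute mastery profile from a CDM)
    "Examinee 1": (7, {"data gathering": True, "hypothesis generation": True, "management": True}),
    "Examinee 2": (7, {"data gathering": True, "hypothesis generation": False, "management": True}),
}

for name, (score, profile) in examinees.items():
    passes_by_score = score >= CUT_SCORE
    passes_by_attributes = all(profile.values())  # pass only if every attribute is mastered
    print(f"{name}: score-based pass = {passes_by_score}, attribute-based pass = {passes_by_attributes}")
```

Examinee 2 would pass on the numeric cut but not on the attribute-based rule; this is the kind of false positive that cognitive diagnosis-informed standard setting could expose.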
Applicability to Practicum Script
CDM allows measuring the different cognitive processes involved in clinical reasoning. “In the article,” Prof Collares says, “I talk about two types of reasoning, inductive and deductive; the point is that most tests put the focus on deductive reasoning, and questions that require inductive reasoning can be dysfunctional in traditional unidimensional psychometric analyses.” Such analyses also ignore the phenomenon of task-specificity. Practicum Script, by contrast, provides a wide variety of exercises that stimulate both types of reasoning, as well as a third element: heuristics. The simulator offers a comprehensive evaluation of all stages of clinical reasoning, each of them decisive for the future practitioner.
In this regard, for Prof Collares, “in the assessment of heuristics available in the knowledge application exercises of Practicum Script, it becomes very clear that competence is specific, not generic, confirming the task-specificity phenomenon that was first observed more than 40 years ago.” In his words, “instead of ignoring this phenomenon, CDM allowed us to embrace it in Practicum Script, so I am very proud of what we were able to achieve with the use of CDM in the partnership between Practicum Script and EBMA.”
Understanding knowledge structures and enabling a detailed measurement of complex cognitive attributes should have clear implications and serve as a rational basis for instructional design, formative assessment, and the improvement of education in general. Otherwise, improvement remains largely a trial-and-error process. Prof Collares considers that “a competency-based curriculum should emphasize what learners are expected to do rather than mainly what they are expected to know”. In principle, such a curriculum is learner-centred and adaptive to the changing needs of students, teachers, and society.
Reference
Collares, C. F. Cognitive diagnostic modelling in healthcare professions education: an eye-opener. Advances in Health Sciences Education. 2022; 27: 427-440.