All in the Mind
What is actually going on when the decision-making process goes awry?
I fell into the field of medical error research after a cognitive psychology PhD. For my thesis, I investigated how operators identify, or fail to identify, faults in industrial process control systems – for example, the contribution of human error to the Three Mile Island accident. Towards the end of my PhD, the medical error field became very prominent, and I ended up working on a project exploring the decision making of oral surgeons regarding the removal of asymptomatic third molars. That was at the School of Dentistry at Cardiff University.
Today, my specific interest and area of expertise is in the field of diagnostic error. Accurate diagnosis is the heart of medicine – it’s what makes a doctor an expert. I’ve done most of my research with UK family doctors, so it’s possible that my findings apply mainly to them; I suspect, however, that much will also be applicable to hospital doctors and other healthcare professionals. After all, our minds all work in the same way.
One characteristic common to most human decision-making is that once we think we've identified the right cause, our minds may not remain open to other possibilities. We elicit hypotheses by reference to cases we know of, by considering the selection of patients that we've seen or that our colleagues have told us about. And once we have mentally structured the problem in a specific way, it can become very difficult to restructure it, to think of other solutions. I and others have found that the starting hypothesis is very important to the ultimate outcome of the diagnostic process – that is, for the actual diagnosis and subsequent treatment decisions. For example, one of our recent studies showed that, if doctors had not explicitly considered the possibility of cancer at the start of the diagnostic exercise, they were much less likely to diagnose it at the end of the consultation and refer the patient to a specialist.
So there’s good evidence to suggest that this initial hypothesis-generation stage, right at the start, is very important for the final outcome of the diagnostic process. If we have the wrong hypothesis in mind to begin with, we may subsequently elicit the wrong information, and in addition we may not appropriately account for all observed information. The net effect may be that we only confirm what we are already thinking. Therefore, if we want to improve diagnostic decision-making, we should focus on supporting this initial stage – subsequent interventions may be too late.
One mechanism by which an initial, incorrect hypothesis persists throughout the diagnostic investigation is pre-decisional information distortion. This may result in a bias towards collecting the wrong information, but mainly it leads people to change the value of new information in an attempt to support their existing leading hypothesis. It occurs because the human mind seeks consistency; we like to have coherence between our hypothesis and the data we observe, and one way we do this is by altering the meaning of these data to fit our hypothesis.
So, to summarize, there are some generic mental processes at play. First of all, from the start, we are continuously trying to elicit causes behind what we observe – usually on the basis of whatever little information is available. If we see someone in the street behaving strangely, we immediately generate hypotheses about why this might be – and it’s exactly the same in a diagnostic situation. Secondly, pre- or post-decisional information distortion is a widespread characteristic of human judgement. And the difficulty in restructuring a mental representation – in setting aside the first hypothesis to think of other causes – is another generic characteristic of decision-making. That’s why I believe we should make diagnostic decision support systems available very early in the process, before healthcare professionals formulate and start testing their own hypotheses. Having an external system that interrupts you early on, before you go down the wrong path, might be a fruitful approach.
In fact, this is something that we're working on at the moment – a decision support system that slots in at the start of the diagnostic process. There are many commercial diagnostic support systems, and they are all based on the healthcare professional inputting as much data as possible, from which the system generates a list of possible diagnoses. So these systems only come into play after the doctor has collected a lot of information, by which time a favored hypothesis has already been generated. By then we may be missing the boat: the user will already be biased towards a particular diagnosis, and so may not have asked all the right questions, or may not have correctly interpreted the answers – and the information fed into the support system may therefore be incorrect. As I said, it's hard for people to change their minds late in the process. For all these reasons, we believe that existing decision support systems come too late, and so we have developed a prototype diagnostic support system that intervenes at an early stage. At present it's aimed at improving diagnostic accuracy in family clinics, but we can see it having applicability in other clinical fields too, such as emergency medicine. Hopefully we'll get funding to continue developing this system.
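To make the contrast with conventional systems concrete, here is a toy sketch in Python of what "intervening before hypothesis generation" could look like: the clinician enters only the presenting complaint, and the system immediately surfaces a broad differential – including serious possibilities such as cancer – before any data gathering begins. The symptom-to-differential table, function names, and prompts are all invented for illustration; this is not the authors' prototype or a clinical knowledge base.

```python
# Toy illustration of an "early intervention" diagnostic prompt.
# All symptom/condition mappings below are invented examples, not
# clinical guidance and not the actual prototype described above.

EARLY_DIFFERENTIALS = {
    "rectal bleeding": ["haemorrhoids", "anal fissure", "colorectal cancer"],
    "persistent cough": ["post-viral cough", "asthma", "lung cancer"],
}

def early_prompt(presenting_complaint: str) -> list[str]:
    """Return a deliberately broad differential for the presenting
    complaint, shown BEFORE the clinician starts collecting data,
    so that serious possibilities are considered from the outset."""
    return EARLY_DIFFERENTIALS.get(presenting_complaint.strip().lower(), [])

# The prompt fires at the very start of the consultation,
# before the clinician has committed to a leading hypothesis.
for condition in early_prompt("Rectal bleeding"):
    print(f"Have you explicitly considered: {condition}?")
```

The design point is simply the ordering: a conventional system would ask for the full set of findings first, whereas here the broad differential is displayed before any findings are entered, which is what distinguishes the early-stage approach described above.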
The idea is to get healthcare professionals to be more open-minded – to reverse the intuitive way of going about diagnosing and get physicians to be more analytical right from the start. But really we need to be thinking about how we can facilitate this kind of thinking at medical school, rather than trying to persuade experienced doctors to change the way they think late in their career!