MeReC Bulletin

Volume 22 Number 1
August 2011

MeReC Bulletins are correct at the time of publication. Have you checked for updates?
See our MeReC Rapid reviews and e-learning materials, or search for further information on NHS Evidence.

Making decisions better

The link between evidence and decision making
How do we make decisions?
Dual process theory — System 1 and System 2
The development of clinical expertise
The cognitive miser effect
Cognitive and affective biases
Making decisions better
Conclusion

Summary

  • There is much variation in the implementation of the best available evidence into clinical practice. These gaps between evidence and practice are often a result of multiple individual decisions.
  • Health care practitioners need to be good decision makers, yet decision making is rarely discussed during undergraduate or postgraduate training.
  • When making a decision, there is so much potentially relevant information available, it is impossible to know or process it all (so called ‘bounded rationality’). Usually, a limited amount of information is selected to reach a sufficiently satisfactory decision, a process known as satisficing.
  • There are two key processes used in decision making: System 1 and System 2. System 1 involves fast, intuitive decisions; System 2 is a deliberate analytical approach, used to locate information which is not instantly recalled. Human beings prefer to use System 1 processing as it is less effortful than System 2.
  • In clinical practice, gaps between evidence and practice can occur when a clinician develops a pattern of knowledge, which is then relied on for decisions using System 1 processing, without the activation of a System 2 check when needed.
  • The ability to process information and make good decisions may be influenced by a number of cognitive biases, of which the decision maker may be unaware.
  • Interventions to encourage appropriate use of System 1 and System 2 processing have been shown to improve clinical decision making.
  • Increased understanding of decision making processes and common sources of error should help clinical decision makers to minimise avoidable mistakes, and increase the proportion of decisions that are better.


The aim of evidence-based medicine (EBM) is to ensure that decision making in health care incorporates the best available evidence. However, evidence should be used judiciously, taking into account both clinical expertise and the needs and wishes of individual patients.1

After more than twenty years, the EBM movement has well-developed systems and processes. There are high quality syntheses of evidence which cover many areas of clinical practice. The Cochrane Library, the National Institute for Health and Clinical Excellence (NICE) and many other guideline producers across the globe use methodological approaches specifically designed to minimise biases in the data. Worldwide, clinicians are taught about EBM at undergraduate and postgraduate levels.2 However, even in the United Kingdom where the EBM culture is strong, its formal incorporation into the undergraduate curricula of medical schools is variable.3

Linked with the development of EBM has been a movement exploring implementation — how to incorporate the findings of high quality research into routine clinical practice more effectively. Despite more than twenty years of work, much remains to be done and many variations in care can be identified.4,5 Sometimes there is over-implementation — for example, the rapid uptake of long-acting insulin analogues for people with type 2 diabetes mellitus, when their clinical effectiveness and cost effectiveness compared with NPH insulin are highly questionable.6 Sometimes there is under-implementation — for example, in patients with heart failure admitted to hospital, beta blockers used at target doses have been shown to reduce mortality by about one third in the first year after admission. However, beta blockers are prescribed for only 60% of eligible patients in England and Wales, and where they are prescribed, 66% of patients receive less than 50% of the target dose.7 Many gaps between evidence and practice remain, and even when multi-faceted implementation programmes are in place (using, for example, education, audit, data feedback and financial incentives) it is difficult to predict the uptake of evidence.8,9,10

These gaps are the result of multiple individual decisions. We expect health care practitioners to be good decision makers, yet the evidence on decision making — and how it might be improved — is rarely discussed in mainstream undergraduate or postgraduate curricula. The patient safety data alone indicate that decision making is not always optimal.11 This bulletin provides a brief introduction and overview of the evidence on decision making, and relates it to the use of evidence to guide practice in health care.


The link between evidence and decision making

Decision making is important across a wide range of human activity. Modern research programmes go back to the 1930s and have largely been conducted in the fields of economics and cognitive psychology.12

All decision making is an uncertain enterprise. Mistakes are inevitable even in the best of circumstances, especially when judged with the benefit of hindsight. But even in uncertain practice, some decisions are clearly better than others. Avoiding common mistakes would increase the proportion of decisions that are better, so learning about common sources of error ought to enable the recognition of errors and help develop strategies to minimise avoidable mistakes. The EQUIP study, for example, looked at prescribing errors made by foundation trainees.13 The study found that a complex mixture of factors was involved (including miscommunication, lack of a safety culture, inadequate prescribing training and support, and stressful working conditions). The report led to the development of a core drug list to support prescribing training interventions to reduce prescribing errors.14

Whilst the EBM movement’s core statement includes a paragraph on clinical decision making, it gives few details and just two references from the large volume of research on decision making in medicine.15 Arguably, this is an omission, because clinical decision making and evidence-based practice are closely related, and may be considered to be inter-dependent. Both reject the culture of regarding authority and status as an automatic guide; both emphasise the importance of honouring ethical obligations to individuals and to populations; and both involve a spirit of inquiry.16


How do we make decisions?

In 1978 Herbert Simon won the first of two Nobel Prizes awarded in this field, for describing ‘bounded rationality’.17 Put simply, this concept states that there is so much potentially relevant information available to a decision maker that it is impossible for the human brain to know or process it all. This is counter to what we would wish to be the case. Either as clinicians or as patients, we would expect high quality decisions to be made as a result of the consideration and weighing of large amounts of scientific data, a well-controlled process carefully honed by years of training, experience and dedication.

However, it has repeatedly been demonstrated that human beings truncate large volumes of information. Imagine a patient presenting with a headache. Making a diagnosis might depend upon knowing the anatomy of the structures capable of causing the pain, their physiological interdependence, cellular biochemistry, the potential pathophysiological processes and so on — to say nothing of human behaviour and its response to pain and illness. The consultation skills to elicit and weigh key symptoms accurately and succinctly, and the examination skills to elicit the presence or absence of key signs are also required. Case memories (or illness scripts) of similar patients seen before may be recalled, and compared to the current patient to help formulate a decision.18 The volume of information and its complexity is enormous, and yet this task is performed thousands of times a day by thousands of clinicians across the NHS. Clinicians often use shortcuts within this complex process, to create a plausible judgement that quickly comes to mind, a process known as satisficing.17

Even more remarkably, the diagnostic decision process is then followed by the decision on management. This involves another potentially huge volume of complex information, this time evidence-based data — the product of much research. Even when this is summarised into high quality syntheses, the volume is still vast. Consider this analysis of the information required to manage 18 patients admitted in one 24 hour on call period in a UK hospital. The patients had a total of 44 diagnoses. The on call physician, if referring only to the relevant guidelines produced by NICE, the UK Royal Colleges and major societies, would need to have read 3679 pages. At two minutes per page, the admitting physician would need to read for 122 hours, and correctly remember and apply all of that information, for a single 24 hour on call period.19
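
As a back-of-envelope check, the arithmetic behind that figure is set out below in a short Python sketch (the page count and assumed reading speed are those quoted in the analysis):

    # Back-of-envelope check of the guideline reading burden described above.
    pages = 3679                              # guideline pages for one on call take
    minutes_per_page = 2                      # assumed reading speed
    total_minutes = pages * minutes_per_page  # 7358 minutes
    print(f"just over {total_minutes // 60} hours of reading")  # just over 122 hours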


Dual process theory — System 1 and System 2

Figure 1. Picture of glass test in a young person with meningococcal septicaemia

Look at Figure 1 above. What emergency treatment does this seriously ill young person need? Most health care professionals have little difficulty in determining that the presumed diagnosis is meningococcal septicaemia, and that part of the emergency response is parenteral antibiotics. Both the diagnostic decision and the management decision are made very quickly by experienced clinicians. However, in the same clinical setting, the process of diagnostic reasoning by medical students might have to be purposeful and take longer.

Whilst it is emotionally and intellectually appealing to hope that health care professionals can acquire and process all of the high quality evidence relating to diagnosis and management instantly just when required, the volume of information is completely unmanageable. On most occasions and in most settings human beings make decisions based on a much faster and less intensive process. This brings us to our second Nobel Prize in this field of research, awarded to Daniel Kahneman in 2002.20

The detailed approach to decision making involving lots of conscious, deliberative effort is called System 2 processing. The quicker and less effortful approach is called System 1, and the over-arching theory which unifies many theories of decision making is called dual process theory (see Figure 2).21,22

Figure 2. Schematic model for diagnostic decision making
Reprinted from: Croskerry P. Context is everything or how could I have been that stupid? Healthc Q. 2009;12:e171-6.

A clinician faced with a new consultation will either quickly recognise the constellation of symptoms and signs using pattern recognition and be able to make a diagnosis, or not. If the pattern is recognised, System 1 operates; if not, System 2 processing is required. Imagine a 28 year old Indian lady who consults with a two month history of exertional chest pain when pushing her baby’s buggy. She has a past history of type 2 diabetes, hypothyroidism and a BMI of 34.6.23 If this patient were a 58 year old man, System 1 processing would probably lead most physicians effortlessly (through pattern recognition) to a diagnosis of ischaemic chest pain. In this case, however, the key clinical features (exertional chest pain + young woman + post partum + cardiovascular risk factors) do not fit a well-recognised pattern, and require some analytical thinking within System 2. The doctors caring for this lady employed System 2 processing and arranged a number of investigations to inform their clinical decisions, leading to the discovery of a critical stenosis in her left anterior descending coronary artery.

If a diagnosis becomes obvious during the analytical process (in this case, for example, if the patient’s resting or exercise ECG had clearly shown ischaemic changes) then the decision making moves into System 1, and importantly the model includes the option of a System 1 decision moving into a more purposeful check by System 2.
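
The diagnostic flow just described is, in essence, a simple algorithm. The Python sketch below is only an illustrative rendering of the logic shown in Figure 2, not a formal statement of Croskerry’s model; the function names and the toy pattern bank are invented for illustration.

    # A minimal sketch of the dual process flow in Figure 2.
    # All names and the toy pattern bank below are invented for illustration.

    KNOWN_PATTERNS = {
        ("fever", "non-blanching rash"): "meningococcal septicaemia",
    }

    def system_1(features):
        # Fast, intuitive pattern recognition against stored illness scripts.
        return KNOWN_PATTERNS.get(tuple(sorted(features)))

    def system_2(features):
        # Slow, deliberate analysis: further history, examination, investigations.
        return "working diagnosis after analysis of: " + ", ".join(features)

    def system_2_check(diagnosis, features):
        # Optional System 2 check on a System 1 decision: re-examine the
        # features that do not quite fit (placeholder only).
        return diagnosis

    def diagnose(features, check=False):
        diagnosis = system_1(features)      # pattern recognised? use System 1
        if diagnosis is None:               # no pattern: fall back to System 2
            diagnosis = system_2(features)
        elif check:                         # the model allows a System 2 check
            diagnosis = system_2_check(diagnosis, features)
        return diagnosis

    print(diagnose(["fever", "non-blanching rash"]))        # System 1 route
    print(diagnose(["exertional chest pain", "age 28"]))    # System 2 route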

This decision making process takes place in the diagnostic phase of a consultation and is then repeated by another System 1 or System 2 process in generating management options — creating the model of health care decision making which could be termed dual-dual process theory.

Neither System 1 nor System 2 should be regarded as ‘good’ or ‘bad’. System 1 decision making can provide life-saving decisions very quickly (as in the case of recognising a meningococcal rash). System 2 decision making can locate information which enables a decision to be made when System 1 is incapable of doing so. However, System 2 processing takes more time, and this may not be consistent with the pace required of clinical practice. Gaps between evidence and practice occur when a clinician develops a pattern of knowledge, which is then relied on for decisions using System 1 processing, without the activation of a System 2 check. Imagine a consultation with a teenager who has frequent attacks of coloured, circular, flashing visual aura lasting seconds to minutes, sometimes occurring in clusters, and followed by a headache with vomiting. If only System 1 processing is used (teenager + visual aura + headache + vomiting) to reach a diagnosis of migraine, without activating System 2 to analyse the parts of the story that don’t quite fit (coloured + circular + brief + clustered), then the diagnosis of occipital lobe seizures could be missed.24

In addition, even if there were System 2 input, the powerful nature of System 1 creates a large number of cognitive biases which make it difficult for new data to be accepted, even if of high quality (Table 1).


The development of clinical expertise

The development of any new skill can be illustrated by the four stage conscious competence model (Figure 3). The origins of this model are obscure, but the US organisation Gordon Training International has played a major role in defining it and promoting its use.25 Dual process theory can be applied to this well known model of learning to explain how expertise develops.

Figure 3. The four stage conscious competence model

Imagine someone who has never driven a car, diagnosed appendicitis or calculated the dose of gentamicin for a patient with impaired renal function. If they have never performed such an important and complex task before, they will know they cannot do it — a state termed in this model ‘consciously incompetent’. It requires purposeful and conscious learning, using System 2 processes, to gain the knowledge and skills to carry out the task. Once the individual has mastered it, they may be able to pass an assessment of competence set at the end of the learning period. However, at this stage, they can only perform the task adequately if, cognitively, they address it with their utmost attention and concentration. The individual is still in System 2, but they are now ‘consciously competent’.

With further practice some of the actions required become automated and, at least for part of the time and part of the task, the individual can operate on ‘automatic pilot’: for example, not having to refer to a drug formulary when prescribing gentamicin. System 2 learning has become embedded into a System 1 process, and they are now ‘unconsciously competent’. However, if they stay in System 1, errors may creep in and they may become ‘unconsciously incompetent’: using the same example, remembering a dose calculation incorrectly. With an effortful, conscious System 2 assessment, they can realise this, become ‘consciously incompetent’ once more, and rectify their actions, again using System 2 as a check on System 1. Some educationalists argue that there is a fifth stage in this model, whereby the ‘unconsciously competent’ System 1 practitioner can ‘toggle’ into System 2, perform an internal assessment of their current activity and, using reflection or metacognition, correct it if required, without re-entering formal assessment and learning.26

This is a useful model — it fits well with the research describing the development of medical expertise, which also occurs in four phases. In the first phase, basic scientific education is followed by the learner’s development of ‘elaborated causal networks’. Initial learning focuses mainly on facts and the relationships between facts, often covering the traditional medical biosciences (anatomy, biochemistry, physiology, cellular biology, and so on). Whether a formal problem-based approach is used or not, most modern undergraduate health sciences curricula introduce pathophysiology and relate learning to patients, because learning in context is associated with greater retention of knowledge. This is purposeful and effortful learning — almost pure System 2 processing.

As contact with patients increases during the second half of the undergraduate programme, the focus of learning moves from understanding health and disease to diagnosis and management. The learning from phase one is combined in this second phase into ‘abridged networks’. Information becomes reformatted, automated or forgotten. Knowing stage 3 of the Krebs cycle is usually not required when faced with the challenge of diagnosing a patient with abdominal or chest pain, whereas the symptoms and signs associated with, for example, myocardial infarction or peptic ulcer are essential knowledge that, with repeated use, becomes automated. This second stage represents System 2 learning becoming automated and systematised into System 1.

Based on repeated interactions with patients, which are subsequently analysed and assessed, clinicians develop a bank of knowledge (case memories or illness scripts) sufficient to diagnose and treat most common conditions.18 This third phase, the process of developing expertise, rests largely on moving from System 2 into System 1 decision making as patterns of diagnosis and management become embedded. Activating all the potentially useful information, right back to the basic sciences, would be incompatible with the capacity of the human brain and the timeframe of a consultation, and would undoubtedly be unhelpful. Satisficing is the norm, but System 2 is still activated relatively frequently when a pattern reliable enough to act upon is not recognised, often in high risk and complex situations.

Expert clinicians may compare a new patient with previous similar cases they have seen, and then (usually without recognising the cognitive processes involved) move effortlessly from System 1 into System 2 to diagnose and treat a particularly challenging case. Nevertheless, data from a variety of environments demonstrates that human beings prefer to use System 1 processing whenever possible.22


The cognitive miser effect

Consider the decision to replace a family car. Where does the information come from to inform that decision? It’s unlikely that many people would go to lengthy technical documents from, say, the Society of Automotive Engineers describing the research evidence on the types of laminated safety glass for windscreens, the optimal design in terms of ride quality and durability of different shock absorbers and so on. The length of time required to review, remember and recall data of such volume and complexity is beyond most people, and it is impossible to acquire all of the available information within the timeframe available before the decision must be made. Of course there is no guarantee that the studying of such data in great detail would necessarily lead to a better decision. So the search for data is necessarily truncated, and most people seek out summaries of data produced by consumer-focused magazines, take a short test drive and seek advice from trusted colleagues before committing themselves to the next few years of motoring. This approach is termed search-satisficing and it is a characteristic of human decision making in many situations.

An ethnographic study in general practice described decisions being heavily influenced by ‘mindlines’ — tacit guidelines shaped by interactions between practice members and other trusted colleagues (including pharmaceutical representatives), early training and personal experience.27 This can again be seen as a process which creates a pattern, this time relating to management options for common conditions. Once the pattern is established, System 1 processing leads rapidly to the selection of the preferred management options for the target condition. The System 1 processes are powerful, and have great utility. Moving into a System 2 process requires a conscious and deliberate effort.

The qualitative data describing what happens in real-world decision making in healthcare mirrors this approach and supports the hypothesis that System 1 processing dominates behaviour. If you ask doctors how often they need to search for an answer to manage a consultation, most say about once a week. But if each consultation is observed and subsequently discussed, a clinical question is identified for every two or three patients seen.28,29 Many consultations are therefore managed without the activation of System 2 required to identify uncertainties; unrecognised clinical questions are suppressed, truncating the information used to manage the consultation.


Cognitive and affective biases

Affect is inseparable from thinking, and emotional intelligence is intrinsically linked to our ability to process information and make good decisions.30 Stress and fatigue, temperament, circadian disturbances associated with shift work, family problems, marital discord, divorce, loss of a loved one, ill health, and other factors would all be expected to result in temporary or prolonged disturbances of affect and in turn decision making.

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations. There is now an empirical body of research which has demonstrated a large number of cognitive biases in decision making, many of which are relevant to decision making in healthcare (Table 1).31

Table 1. Cognitive biases in clinical practice
Anchoring bias
Definition: Undue emphasis is given to an early salient feature in a consultation.
Example: Concentrating on the fact that a 58 year old patient with back pain has a manual job (diagnosis = musculoskeletal pain), and putting less weight on his complaint of hesitancy and nocturia (diagnosis = bony pain from metastatic prostate cancer).

Ascertainment bias
Definition: Thinking shaped by prior expectation.
Example: A young patient with an unsteady gait in a city centre late on a Saturday night might be expected to be inebriated, rather than having suffered a stroke.

Availability bias
Definition: Recent experience dominates evidence.
Example: Having recently admitted a patient with multiple sclerosis, this diagnosis comes to mind the next time a patient with sensory symptoms is seen.

Bandwagon effect
Definition: ‘We do it this way here’, whatever anyone else says or whatever the data show.
Example: Continuing to prescribe diclofenac to patients with cardiovascular risk factors, despite its thrombotic risk profile.

Omission bias
Definition: Tendency to inaction: events that occur through natural disease progression are preferred to those resulting from the physician’s action.
Example: Electing not to have a child vaccinated against an infectious disease because of the risk of harm from that vaccine, without considering the harm caused by the illness itself.

Sutton’s slip
Definition: Going for the obvious diagnosis.
Example: Diagnosing musculoskeletal pain in the 28 year old lady with chest pain described earlier.

Gambler’s fallacy
Definition: The tendency to think that a run of diagnoses means the sequence cannot continue, rather than taking each case on its merits.
Example: ‘I’ve seen 3 people with acute coronary syndrome recently; this can’t be a fourth.’

Search satisficing
Definition: Having found one diagnosis, other co-existing conditions are not detected.
Example: Missing the second fracture in a trauma patient in whom a fracture has been identified.

Vertical line failure
Definition: Routine repetitive tasks lead to thinking in silos.
Example: Missing a case of meningitis in the middle of an influenza epidemic.

Blind spot bias
Definition: ‘Other people are susceptible to these biases, but I am not.’



Making decisions better

Given that the purpose of undergraduate and postgraduate programmes for the health care professions is to produce individuals who can make sound clinical decisions, it is surprising that this body of evidence on decision making is not more widely recognised and included more explicitly in the education of health care professionals. As Dan Ariely, one of the leading researchers in this area, puts it: ‘Once we understand when and where we may make erroneous decisions, we can try and be more vigilant, force ourselves to think differently about those decisions, or use technology to overcome our inherent shortcomings’.32

This is not to decry traditional approaches to education. A good knowledge base is essential. Michael LeGault, in his book ‘Think!’ states ‘The technique by which we make good decisions and produce good work is a nuanced and interwoven mental process involving bits of emotion, observation, intuition, and critical reasoning. The emotion and intuition are the easy, “automatic” parts, the observation and critical reasoning skills the more difficult, acquired parts. The essential background to all this is a solid base of knowledge. The broader the base, the more likely one is to have thought through and mastered difficult concepts, models and ways of interpreting the world’.33

However, when one hundred diagnostic errors were examined, Graber found faulty knowledge in 11 cases (such as missing a diagnosis of complete heart block by misreading an ECG) and faulty data gathering in 45 (such as delayed diagnosis of abdominal aortic aneurysm due to incomplete history taking). In contrast, there were 265 instances of faulty synthesis (processing and verification) of information leading to missed diagnoses, including wrongly diagnosing a patient with a history of schizophrenia presenting with abnormal mental status as having a panic disorder, when the underlying problem was CNS metastases.34

It has been proposed that ‘Awareness of this science might accomplish three things. First it might broaden the list of pitfalls that a clinician can anticipate and possibly avoid. Second, it can provide a language and logic for understanding repeated mistakes. Third, it may encourage greater circumspection’.35

There has been some success with interventions that encourage dual processing at the time of problem solving. Encouraging the combined use of System 1 and System 2 processes produced small but consistent benefits in absolute novices learning how to interpret electrocardiograms.36 Similarly, encouraging System 1 processing has been shown to produce small gains with simple problems, and encouraging System 2 processing leads to improvements with difficult problems.37 In addition, a number of ‘debiasing’ strategies to improve decision making have been published.38,39,40

Pending further research, it has been proposed that a programme to improve decision making might include two elements. The first is the development of strategies to increase awareness of cognitive biases. These could include reflection on real and hypothetical cases using heightened metacognition, simulation training, and training on the laws of probability, distinguishing correlation from causation and basic Bayesian probability theory (see the sketch below). The second element is a group of interventions designed to minimise cognitive or affective error. These could include using algorithms, guidelines and information technology at the point of care, improving time management, and improving accountability, feedback and support.30
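
As an illustration of the kind of basic Bayesian probability theory such training might cover, the short Python sketch below computes the post-test probability of disease after a positive test result. The sensitivity, specificity and pre-test probability figures are invented for illustration, not drawn from any study.

    # Basic Bayesian updating of a diagnostic probability
    # (illustrative figures only; not drawn from any study).

    def post_test_probability(pre_test, sensitivity, specificity):
        # Probability of disease given a positive test result (Bayes' theorem).
        true_positives = pre_test * sensitivity
        false_positives = (1 - pre_test) * (1 - specificity)
        return true_positives / (true_positives + false_positives)

    # Even a 'good' test (90% sensitive, 90% specific) applied to a rare
    # condition (pre-test probability 1%) yields mostly false positives.
    p = post_test_probability(pre_test=0.01, sensitivity=0.90, specificity=0.90)
    print(f"Post-test probability after a positive result: {p:.1%}")  # about 8.3%

The counter-intuitive result (a positive result from an apparently good test still leaves the diagnosis unlikely when the condition is rare) is exactly the kind of base-rate effect that such training aims to make explicit.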


Conclusion

These are exciting times in the field of ‘getting research into practice’. The EBM movement has now developed a large repository of high quality evidence synthesised into the form of guidelines, and in many healthcare systems this is becoming linked into systems to both support and incentivise evidence-informed decision making. The emerging, novel and supportive approaches described in this bulletin ought to, through wider educational initiatives, create a more explicit link between systems approaches and the human dimensions of decision making.

Further research is required to determine the optimal approach to teaching current and future decision makers how human beings make decisions. On the basis of existing data, learning about common sources of error, and gaining an explicit appreciation of the different approaches to decision making, holds much promise.

Until such research has been performed, clinical decision makers should familiarise themselves with the different processes involved in decision making, and the biases that can affect their decisions. The conscious and appropriate application of these processes, and checks for possible bias, should increase the proportion of decisions that are better.



References

  1. Sackett DL, et al. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71-2
  2. Straus SE, Jones G. What has evidence based medicine done for us? BMJ 2004;329:987-8
  3. Meats E, et al. Evidence-based medicine teaching in UK medical schools. Medical Teacher 2009;31:332-7
  4. Maskrey N, Greenhalgh T. Getting a better grip on research: the fate of those who ignore history. InnovAiT 2009;2:619–25
  5. NHS Atlas of Variation 2010. www.rightcare.nhs.uk/atlas
  6. Type 2 diabetes: newer agents. Short clinical guideline 87. National Institute for Health and Clinical Excellence 2009. www.nice.org.uk/cg87
  7. National Heart Failure Audit 2010. www.ic.nhs.uk/services/national-clinical-audit-support-programme-ncasp/audit-reports/heart-disease
  8. Greenhalgh T, et al. How to spread good ideas. A systematic review of the literature on diffusion, dissemination and sustainability of innovations in health service delivery and organisation. Report for the National Co-ordinating Centre for NHS Service Delivery and Organisation 2004. http://www.sdo.nihr.ac.uk/files/project/38-final-report.pdf
  9. Sheldon TA, et al. What's the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients' notes, and interviews. BMJ 2004;329:999
  10. NHS Information Centre. Use of NICE appraised medicines in the NHS in England - Experimental Statistics 2009. www.ic.nhs.uk/statistics-and-data-collections/primary-care/prescriptions/use-of-nice-appraised-medicines-in-the-nhs-in-england--2009-experimental-statistics
  11. National Patient Safety Agency. www.nrls.npsa.nhs.uk/resources
  12. Neuroeconomics: Decision making and the brain. Glimcher PW et al, editors. Academic Press, London 2009
  13. Dornan T, et al. An in depth investigation into causes of prescribing errors by foundation trainees in relation to their medical education. EQUIP study. http://www.gmc-uk.org/FINAL_Report_prevalence_and_causes_of_prescribing_errors.pdf_28935150.pdf
  14. Baker E, et al. Development of a core drug list towards improving prescribing education and reducing errors in the UK. Br J Clin Pharmacol 2011;71:190-8
  15. Dawes M, et al. Sicily statement on evidence-based practice. BMC Medical Education 2005;5:1
  16. Gambrill E. Critical thinking in clinical practice: improving the quality of judgments and decisions. 2nd edition. John Wiley & Sons, New Jersey. 2005
  17. Simon HA. Rational decision-making in business organizations. Nobel Memorial Lecture. 8th December 1978. http://nobelprize.org/nobel_prizes/economics/laureates/1978/simon-lecture.pdf 
  18. Cox K. Stories as case knowledge: case knowledge as stories. Medical Education 2001;35:862-6
  19. Allen D, Harkins KJ. Too much guidance? Lancet 2005;365:1768
  20. Kahneman D. Maps of bounded rationality: a perspective on intuitive judgment and choice. Nobel Prize Lecture. 8th December 2002. http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf
  21. Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009; 84:1022-28
  22. Croskerry P. Context is everything or how could I have been that stupid? Healthc Q. 2009;12:e171-6
  23. Dwivedi G, et al. A 28 year old postpartum woman with right sided chest discomfort: case presentation. BMJ 2006;332:406, 471, 643
  24. Panayiotopoulos CP. Visual phenomena and headache in occipital epilepsy: a review, a systematic study and differentiation from migraine. Epileptic Disord. 1999;4:205-16
  25. Gordon Training International. www.gordontraining.com
  26. Woods NN, Brooks LR, Norman GR. The value of basic science in clinical diagnosis: creating coherence among signs and symptoms. Med Educ 2005;39:107-12
  27. Gabbay J, le May A. Evidence based guidelines or collectively constructed "mindlines?" Ethnographic study of knowledge management in primary care. BMJ 2004;329:1013
  28. Ely JW, et al. Analysis of questions asked by family doctors regarding patient care. BMJ 1999;319:358-61
  29. Covell D, Uman GC, Manning PR. Information needs in office practice: are they being met? Ann Intern Med 1985;103:596-9
  30. Croskerry P. Diagnostic failure: a cognitive and affective approach. In: Henriksen K, et al, editors. Advances in Patient Safety: From Research to Implementation (Volume 2). Rockville (MD): Agency for Healthcare Research and Quality (US). 2005
  31. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med 2002;9:1184-204
  32. Ariely D. Predictably irrational: the hidden forces that shape our decisions. Harper Collins. New York. 2008
  33. LeGault M. Think!: Why crucial decisions can’t be made in the blink of an eye. Threshold Editions. New York. 2006
  34. Graber ML. Diagnostic error in internal medicine. Arch Intern Med 2005;165:1493-9
  35. Redelmeier DA, et al. Problems for clinical judgement: introducing cognitive psychology as one more basic science. CMAJ 2001;164:358-60
  36. Eva KW, et al. Teaching from the clinical reasoning literature: combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ 2007;41:1152-8
  37. Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ 2008;42:468-75
  38. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003;78:775-80
  39. Redelmeier DA. Improving patient care. The cognitive psychology of missed diagnoses. Ann Intern Med 2005;142:115-20
  40. Mamede S, Schmidt HG, Rikers R. Diagnostic errors and reflective practice in medicine. J Eval Clin Pract 2007;13:138-45



The National Prescribing Centre (NPC) is responsible for helping the NHS to optimise its use of medicines. NPC is part of the National Institute for Health and Clinical Excellence (NICE), an independent organisation providing national guidance on promoting good health and preventing and treating ill health.
© National Institute for Health and Clinical Excellence, 2011. All rights reserved. This material may be freely reproduced for educational and not-for-profit purposes. No reproduction by or for commercial organisations, or for commercial purposes, is allowed without the express written permission of NICE.
Email: copyright@npc.nhs.uk

National Prescribing Centre, Ground Floor, Building 2000, Vortex Court, Enterprise Way,
Wavertree Technology Park, Liverpool, L13 1FB Tel: 0151 353 7700 Fax: 0151 220 4334