Kaveh Shojania
Professor
Temerty Faculty of Medicine, Department of Medicine
ORCID identifier: 0000-0002-9942-0130
- Professor, Temerty Faculty of Medicine, Department of Medicine
- 416-480-5405 (Work)
- 416-480-6100 ext. 89608 (Work)
- Sunnybrook Health Sciences Centre, Medicine, 2075 Bayview Ave, Rm H468, Toronto, ON, M4N 3M5, Canada
BIO
Dr. Shojania is Vice Chair (Quality & Innovation) in the Department of Medicine at the University of Toronto, where he also sees patients as a general internist at Sunnybrook Health Sciences Centre.
Dr. Shojania’s research focuses on identifying and further developing effective strategies for achieving improved quality of care. He has more than 160 publications indexed in Medline, including papers in leading journals such as the New England Journal of Medicine, the Lancet, and the Journal of the American Medical Association. Google Scholar lists over 23,000 citations to his work, for an h-index of 70. Dr. Shojania held a Canada Research Chair in Patient Safety and Quality Improvement from 2004 to 2013, and he has twice delivered invited presentations on patient safety and healthcare quality to the US Institute of Medicine (now the National Academy of Medicine).
After medical school at the University of Manitoba and internship at the University of British Columbia, Dr. Shojania completed his residency in Internal Medicine at Harvard’s Brigham and Women’s Hospital. He then undertook the first fellowship in Hospital Medicine in the US, at the University of California San Francisco with Dr Robert Wachter, who coined the term ‘hospitalist’ and helped establish what has since grown into the second-largest subspecialty of internal medicine in the US (after Cardiology). The work he conducted with Dr Wachter, including an influential report synthesizing the evidence for 80 specific patient safety interventions, a series of case-based articles, two federally funded websites introducing clinicians to patient safety concepts, and a book on medical error written for a general audience, resulted in their sharing one of the John M. Eisenberg Patient Safety Awards (2004) from the National Quality Forum and The Joint Commission for Innovation in Patient Safety at a National Level.
From 2009 to 2019, Dr. Shojania was the inaugural Director of the University of Toronto Centre for Quality Improvement and Patient Safety (CQuIPS, https://cquips.ca). He grew the Centre from a team of just 4 people to 28 staff and core members, who published over 500 peer-reviewed papers and obtained approximately $50M in contracts and grants. The Centre also developed widely successful, award-winning education programs, which have produced over 1000 graduates.
In 2011, Dr Shojania became Editor-in-Chief of BMJ Quality & Safety, and later co-Editor-in-Chief with Prof Mary Dixon-Woods of Cambridge University. Under their stewardship (until 2020), the journal’s impact factor rose from under 2 to over 7, and its rank rose from below 30th to consistently placing among the top 3 of the 90+ journals covering not just health care quality and safety but also all of health services research, clinical informatics, and health policy, among other topics.
In 2012, Dr Shojania developed a new academic career track for members of the Department of Medicine at the University of Toronto called Clinicians in Quality & Innovation (CQI). As described in a JAMA commentary written by Dr Shojania, the CQI position aimed to support and acknowledge faculty whose scholarly work primarily relates to assessing and improving healthcare quality, developing innovative models of care, or pursuing other forms of innovation outside traditional ‘discovery research’. The number of Department members holding the CQI job description has grown from just one faculty member in 2013 to 78 full-time CQIs in 2022, plus an additional 11 part-time faculty. Over this time, other major departments began appointing some of their faculty members as CQIs, including Anesthesia and Pain Medicine, Family and Community Medicine, Laboratory Medicine and Pathology, Medical Imaging, and Psychiatry.
Dr Shojania became Vice-Chair (Quality & Innovation) in the Department of Medicine in 2015. In this role, he oversees the mentorship and career development of the 80+ faculty now engaged in quality improvement and other forms of innovation related to healthcare delivery. The CQI faculty have had a 100% success rate in passing their ‘three year review’ and in going forward for promotion to Associate or Full Professor. Partly in recognition of the success of this new faculty track and his mentorship of these and other faculty, Dr Shojania received the Department’s prestigious Robert Hyland Award for Excellence in Mentorship in 2018.
In the last several years, Dr. Shojania has been exploring opportunities to galvanize more concrete efforts in healthcare to address the impacts of the climate crisis and social determinants of health.
NOTABLE PUBLICATIONS
1. Kwan JL, Lo L, Ferguson J, Goldberg H, Diaz-Martinez JP, Tomlinson G, Grimshaw JM, Shojania KG. Computerised clinical decision support systems and absolute improvements in care: meta-analysis of controlled clinical trials. BMJ. 2020;370:m3216. Published 2020 Sep 17. doi:10.1136/bmj.m3216.
Massive investments by health care organizations in sophisticated clinical information systems reflect the expectation that electronic health records (EHRs) and the clinical decision support systems (CDSSs) they contain will improve health care quality. These decision support systems range from pop-up warnings about serious patient allergies and reminders about overlooked elements of preventive care to more complex guidance for drug dosing in acutely ill patients. Multiple systematic reviews in high-impact journals over more than 20 years have fostered widespread optimism about the value of such systems. Yet these reviews did not report the actual improvements in care achieved, focusing instead on identifying features associated with ‘positive results.’ Our meta-analysis, by contrast, evaluated the concrete improvements produced by CDSSs, examining the increases in the proportion of patients receiving recommended care.
Across over 120 controlled clinical trials reporting data from over 1 million patients and 10,000 clinicians, we showed that CDSSs increased the average proportion of patients receiving desired care by only 5.8% (95% confidence interval 4.0% to 7.6%). To appreciate how unlikely such increases are to confer clinically worthwhile effects, consider that a median of 40% of patients in the control groups of the included trials received the care recommended by the decision support system. Thus, in the typical intervention group, only about 45% would receive the recommended process of care, meaning that more than half of patients would still miss out on recommended care (or continue to receive non-recommended care, such as inappropriate medications and diagnostic tests). The 30 trials reporting clinical endpoints underscore the doubtful clinical significance of these effects: the proportion of patients achieving guideline-based targets (e.g., in blood pressure or lipid control) increased by a median of just 0.3% (interquartile range −0.7% to 1.9%).
A minority of trials reported larger and potentially clinically worthwhile effects. For instance, 25% of trials reported increases in the percentage of patients receiving recommended care ranging from 10% to 62%. And we did identify some candidate predictors of these more worthwhile effects (e.g., decision support systems in pediatrics achieved significantly larger improvements, as did interventions delivered in settings with low baseline adherence to recommended care). But even after taking these characteristics into account, the basis for the substantial heterogeneity (the non-random variation in the improvements achieved across the trials) remained largely unexplained. Thus, after 25+ years of research, including over 100 controlled clinical trials, decision support systems typically produce increases in recommended care of doubtful clinical importance. A minority of interventions have delivered more clinically worthwhile effects, but the circumstances under which such improvements occur remain undefined. Future research must identify new ways of designing clinical decision support systems that reliably confer larger improvements in care while avoiding the alert fatigue that contributes to the widely acknowledged frustrations with electronic health records.
2. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA. 2003;289(21):2849-2856.
A classic study showed that the rate at which autopsy identified major, clinically missed diagnoses had not changed in 30 years [Goldman L et al. The value of the autopsy in three medical eras. N Engl J Med. 1983]. Despite many other studies showing similar results, namely that autopsies detect important missed diagnoses which may have contributed to death, the frequency of non-forensic autopsies has declined steadily for decades for myriad reasons (such as lack of reimbursement and the dropping of regulatory requirements to conduct autopsies on some minimum proportion of deaths). Many clinicians and researchers have interpreted the results of these autopsy studies as reflecting selection bias: that clinicians request autopsies only in cases in which they worry they might have missed a diagnosis. To address this concern, I designed what amounted to a meta-regression of this extensive literature, incorporating not just the rates of autopsy-detected misdiagnoses but also the rates at which autopsies were performed. If clinicians selected autopsies mostly when they suspected serious errors, then studies with high autopsy rates should report much lower error rates. We showed that this occurred only to a modest extent. Our regression model incorporated data from 50 studies with clear methods for adjudicating what counted as an error and how it was defined. While major autopsy-detected errors decreased over time, diagnostic error rates remained surprisingly high. For a typical US hospital in the year 2000, one would expect autopsies to reveal major diagnostic errors (wrong cause of death or principal underlying diagnosis) in at least 8% of cases, but possibly as many as 23%, with this range reflecting the impact of autopsy rates varying from a high of 100% to a low of 5%. Similarly, diagnostic errors that likely contributed to death would be expected in at least 4% of all cases, but possibly as many as 8%.
I was later invited to write a commentary on this topic for the New England Journal of Medicine [Shojania KG, Burton EC. The vanishing non-forensic autopsy. N Engl J Med. 2008;358(9):873.]
3. Shojania KG, Duncan BW, McDonald KM, Wachter RM. Making health care safer: a critical analysis of patient safety practices. Evid Rep Technol Assess (Summ). 2001;(43):1-668. https://www.ncbi.nlm.nih.gov/books/NBK26966/
After the Institute of Medicine (IOM) report “To Err Is Human” came out in 1999, the US Agency for Healthcare Research and Quality (AHRQ) awarded a contract to the UCSF-Stanford Evidence-based Practice Center to identify proven practices for improving patient safety and to rank order them in terms of their priority for implementation. To achieve these goals, we developed a framework for evaluating each intervention in terms of the frequency and severity of the target safety problem, the effectiveness of the intervention, and implementation challenges.
Because we were given only 6 months, we engaged a team of 40 researchers from 10 academic medical centers across the US to conduct the systematic reviews for each of over 80 specific interventions aimed at improving patient safety. Over 140,000 copies of the complete report were obtained from AHRQ within just the first year (and many more individual chapter reviews were downloaded), and Google Scholar lists approximately 1200 citations to this work. Highlights of the report appeared in the Journal of the American Medical Association as part of a commentary [Shojania KG, Duncan BW, McDonald KM, Wachter RM. Safe but sound: patient safety meets evidence-based medicine. JAMA. 2002;288(4):508-13.] The US National Quality Forum used the report as the main source of its 30 Safe Practices for Better Healthcare.
This report initially drew criticism from some prominent leaders in the field who had expected it to recommend that healthcare emulate high-risk industries such as aviation and nuclear power, and focus on implementing technology solutions such as electronic medical records [Leape LL, Berwick DM, Bates DW. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA. 2002]. We had acknowledged the promise of such approaches but pointed out that the evidence at the time more strongly supported preventing known, concrete problems, such as hospital-acquired infections and venous thromboembolism. The strategies we recommended ended up becoming a hugely important approach to improving patient safety in hospital settings (e.g., efforts to implement consistent VTE prophylaxis and the central line bundle for preventing catheter-associated bloodstream infections).
I participated in an update of this massive evidence report in 2012, funded by a $1M contract to 4 major academic medical centers in the US [Making health care safer II: an updated critical analysis of the evidence for patient safety practices. Evid Rep Technol Assess. 2013. PMID: 24423049]. The main overview papers and specific reviews from the report were included in a supplemental issue of Annals of Internal Medicine.
When the National Patient Safety Foundation convened an expert panel, including members of the original Institute of Medicine (IOM) panel that wrote To Err Is Human, to develop recommendations for advancing the field of patient safety, I co-chaired the expert panel and the writing of the resulting report with Dr. Donald Berwick, founder of the Institute for Healthcare Improvement (http://www.npsf.org/?freefromharm#form). A summary of this work appeared in JAMA [Gandhi TK, Berwick DM, Shojania KG. Patient safety at the crossroads. JAMA. 2016;315:1829-30].
4. Shojania KG, Ranji S, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA. 2006;296(4):427-440.
This systematic review used a novel categorization of quality improvement interventions, analyzing the impact on glycemic control of 11 distinct categories of interventions for ambulatory patients with diabetes across 66 controlled studies. Using a sophisticated meta-regression technique, we showed that the single most effective type of quality improvement intervention (in terms of impact on glycemic control) was case management in which nurses or pharmacists played an active role in coordinating patients’ care and could make medication changes without having to wait for approval from physicians. This paper has been cited over 800 times, and the taxonomy we developed for characterizing the different improvement strategies has since been applied by other groups. Further work in this area led to a much-updated analysis (48 cluster randomized controlled trials involving 84,865 patients and 94 patient RCTs involving 38,664 patients) published in the Lancet, on which I was senior author [Tricco AC, Ivers NM, Grimshaw JM, Moher D, Turner L, Galipeau J, Halperin I, Vachon B, Ramsay T, Manns B, Tonelli M, Shojania K. Effectiveness of quality improvement strategies on the management of diabetes: a systematic review and meta-analysis. Lancet. 2012;379(9833):2252-61.] A newer update is soon to be published in the Cochrane Library.
5. Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med. 2007;147(4):224-233.
This article presents work funded by the US Agency for Healthcare Research and Quality (AHRQ) to assess the scope of the problem of outdated systematic reviews of the literature. AHRQ funds numerous systematic reviews (e.g., to support guideline development) and was therefore interested in the problem of maintaining the currency of these reviews (e.g., how soon it might need to update past reviews). My own involvement in this work grew out of my interest in updating the meta-analysis of diabetes quality improvement strategies (the publication listed above), as well as various systematic reviews of over 75 specific patient safety interventions. We used a variety of efficient search techniques to update each of 100 quantitative meta-analyses indexed in a prominent secondary evidence source for clinicians, and then determined how many of them met a priori criteria for major changes in evidence (e.g., a 50% change in effect size after updating the original meta-analysis with newer studies, or a single publication in one of 6 high-impact general medical journals with a qualitatively different conclusion, essentially saying the opposite of the meta-analysis). We found that, while the median ‘survival time’ was 5.5 years, 23% of reviews were out of date within 2 years, 15% within 1 year, and 7% already met criteria for updating at the time of publication.
As an offshoot of some of my work on evidence synthesis in quality improvement and patient safety, I developed and evaluated an efficient search strategy for identifying systematic reviews in general (reported in Shojania KG, Bero LA. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Effective Clinical Practice. 2000), which was adapted by the US National Library of Medicine for its built-in search filter for retrieving systematic reviews.
Dr. Shojania’s research focuses on identifying and further developing effective strategies for achieving improved quality of care. He has more than 160 publications indexed in Medline, including papers in leading journals such as the New England Journal of Medicine, the Lancet, and the Journal of the American Medical Association. Google Scholar lists over 23,000 citations to his work for an h-index of 70. Dr. Shojania held a Canada Research Chair in Patient Safety and Quality Improvement from 2004-2013, and he has twice delivered invited presentations on patient safety and healthcare quality to the US Institute of Medicine (now the National Academy of Medicine).
After medical school at the University of Manitoba and internship at the University of British Columbia, Dr. Shojania completed his residency in Internal Medicine at Harvard’s Brigham and Women’s Hospital . Dr. Shojania then undertook the first fellowship in Hospital Medicine in the US—at the University of California San Francisco with Dr Robert Wachter, who coined the term ‘hospitalist’ and helped establish what has since grown into second largest subspeciality of internal medicine in the US (after Cardiology). The work he conducted with Dr Wachter, including an influential report synthesizing the evidence for 80 specific patient safety interventions, a series of case-based articles and two federally funded websites introducing clinicians to patient safety concepts, and a book on medical error written for a general audience on, resulted in their sharing one of the John M. Eisenberg Patient Safety Awards (2004) from the National Quality Forum and The Joint Commission for Innovation in Patient Safety at a National Level.
From 2009-2019, Dr. Shojania was the inaugural Director of the University of Toronto Centre for Quality Improvement and Patient Safety (CQuIPS https://cquips.ca ). He grew the Centre from a team of just 4 people to 28 staff and core members, who published over 500 peer review papers and obtained approximately $50M in contracts and grants. The Centre also developed widely successful, award winning education programs which have produced over 1000 graduates.
In 2011, Dr Shojania became Editor-in Chief of BMJ Quality & Safety Quality–later co-Editor-in-Chief with Prof Mary Dixon-Woods of Cambridge University. Under their stewardship (till 2020), the journal’s impact factor rose from under 2 to over 7, rising to a rank below 30th to consistently placing among the top 3 spots among the 90+ journals covering not just health care quality and safety, but also all of health services research, clinical informatics, and health policy, among other topics.
In 2012, Dr Shojania developed a new academic career track for members of the Department of Medicine at the University of Toronto called Clinicians in Quality & Innovation. As described in a JAMA commentary written by Dr Shojania, the CQI position aimed to support and acknowledge faculty whose scholarly work primarily relates to assessing and improving healthcare quality, developing innovative models of care, or other forms of innovation outside of traditional ‘discovery research’. The number of Department members in the CQI job description has grown from just one faculty member in 2013 to 78 full-time CQIs in 2022, plus an additional 11 part-time faculty. Over this time, other major departments began appointing some of their faculty members as CQIs, including Anesthesia and Pain Medicine, Family and Community Medicine, Laboratory Medicine and Pathology, Medical Imaging, and Psychiatry.
Dr Shojania became Vice-Chair (Quality & Innovation) in the Department of Medicine in 2015. In this role, he oversees the mentorship and career development of the 80+ faculty now engaged in quality improvement and others forms of innovation related to healthcare delivery. The CQI faculty have had a 100% success rate in passing their ‘three year review’ and in going forward for promotion to Associate or Full Professor. Partly in recognition for the success of this new faculty track and his mentorship of these and other faculty, Dr Shojania received the Department’s prestigious Robert Hyland Robert Hyland Award for Excellence in Mentorship in 2018.
In the last several years, Dr. Shojania has been exploring opportunities to galvanize more concrete efforts in healthcare to address the impacts of the climate crisis and social determinants of health.
NOTABLE PUBLICATIONS
1. Kwan JL, Lo L, Ferguson J, Goldberg H, Diaz-Martinez JP, Tomlinson G, Grimshaw JM, Shojania KG. Computerised clinical decision support systems and absolute improvements in care: meta-analysis of controlled clinical trials. BMJ. 2020;370:m3216. Published 2020 Sep 17. doi:10.1136/bmj.m3216.
Massive investments by health care organizations implementing sophisticated clinical information systems reflect the expectation that electronic health records (EHRs) and the clinical decision support systems they contain will improve health care quality. These decision support systems include pop-up warnings about serious patient allergies and reminders about overlooked elements of preventive care to more complex guidance for drug dosing in acutely ill patients. Multiple systematic reviews in high impact journals over more than 20 years have fostered widespread optimism over the value of such decision support systems. Yet, these reviews did not report the actual improvements in care achieved, focusing on identifying features associated with ‘positive results.‘ Our meta-analysis, by contrast, evaluated the concrete improvements produced by CDSS, examining the increases in the proportion of patients receiving recommended care.
Across over 120 controlled clinical trials reporting data from over 1 million patients and 10 000 clinicians, we showed that CDSSs increased the average proportion of patients receiving desired care by only 5.8% (95% confidence interval 4.0% to 7.6%). To appreciate how unlikely such increases are to confer clinically worthwhile effects, a median of 40% of patients in the control groups in the included trials received care recommended by the decision support system. Thus, in the typical intervention group, only about 45% would receive the recommended process of care, meaning that over 50% of patients would still miss receiving recommended care (or would continue to receive non-recommended care such as inappropriate medications and diagnostic tests). The results from 30 trials reporting clinical endpoints underscore the doubtful clinical significance of impacts on care from decision support systems. Across 30 trials reporting clinical endpoints, the proportion of patients achieving guideline based targets (e.g., in blood pressure or lipid control) increased by a median of just 0.3% (interquartile range −0.7% to 1.9%).
A minority of trials reported larger and potentially clinically worthwhile effects. For instance, 25% of trials reported increases in the percentages of patients receiving recommended care ranging from 10% to 62%. And, we did identify some candidate predictors of these more worthwhile effects (e.g., decision support systems in pediatrics achieved significantly larger improvements as did interventions delivered in settings with low baseline adherence to recommended care). But, even after taking these characteristics into account, the basis for the substantial heterogeneity, the non-random variation in the improvements achieved across the trials, remained largely unexplained. Thus, after 25+ years of research, including over 100 controlled clinical trials, decision support systems typically produce increases in recommend care of doubtful clinical importance. A minority of interventions have delivered more clinically worthwhile effects, the circumstances under which such improvements occur remain undefined. Future research must identify new ways of designing clinical decision support systems that reliably confer larger improvements in care while avoiding the problem of alert fatigue contributing to the widely acknowledged frustrations with electronic health records (EHRs).
2. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2004 289 (21), 2849-2856.
A classic study showed that the rate at which autopsy identified major, clinically missed diagnoses had not changed in 30 years [Goldman L et al. The value of the autopsy in three medical eras. N Engl J Med. 1983]. Despite many other studies showing similar results—that autopsies detect important missed diagnoses which may have contributed to death, the frequency of non-forensic autopsies has shown a steady decline for decades for myriad reasons (such as lack of reimbursement and dropping regulatory requirements to conduct autopsies on some minimum proportion of deaths). Many clinicians and researchers have generally interpreted the results of these autopsy studies as reflecting selection bias—that clinicians only request autopsies in cases in which they worry they might have missed a diagnosis. To address this concern, I designed what amounted to a meta-regression of this extensive literature including not just the rates of autopsy-detected misdiagnoses but also the rate at which autopsies were performed. If clinicians selected autopsies mostly when they suspected serious errors, then studies with high autopsy rates should report much lower error rates. We showed that this occurred only to a modest extent. Our regression model incorporated data from 50 studies with clear methods for adjudicating what counted as an error and how it was defined. While major autopsy-detected errors decreased over time, diagnostic error rates remained surprisingly high. For a typical U.S. hospital in the year 2000, one would expect autopsies to reveal major diagnostic errors—wrong cause of death or principal underlying diagnosis—in at least 8% of cases, but possibly as many as 23%, with this range reflecting the impact of varying autopsy rates from a high of 100% to a low of 5%. Similarly, diagnostic errors that likely contributed to death would be expected in at least 4% of all cases, but possibly as many as 8% of cases. 
I was later invited to write a commentary on this topic for the New England Journal of Medicine [Shojania KG, Burton EC. The vanishing non-forensic autopsy N Engl J Med 2008 358 (9), 873.]
3. Shojania KG, Duncan BW, McDonald KM, Wachter RM. Making health care safer: a critical analysis of patient safety practices. Evid Rep Technol Assess (Summ) 2001 43 (1), 668. https://www.ncbi.nlm.nih.gov/books/NBK26966/
After the Institute of Medicine (IOM) Report “To Err is Human” came out in 1999, the US Agency for Healthcare Research and Quality (AHRQ) awarded a contract to the UCSF-Stanford Evidence-based Practice Center to identify proven practices for improving patient safety and rank order them in terms of their priority for implementation. To achieve these goals, we developed a framework for evaluating each intervention in terms of the frequency and severity of the target safety problem, effectiveness of the intervention, and implementation challenges.
Because we were given only 6 months, we engaged a team of 40 researchers from 10 academic medical centers across the US to conduct the systematic reviews for each of over 80 specific interventions aimed at improving patient safety. Over 140,000 copies of the complete report were obtained from AHRQ within just the first year (and many more individual chapter reviews were downloaded), and Google Scholar lists approximately 1200 citations to this work. Highlights of the report appeared in the Journal of the American Medical Association as part of a commentary [Shojania KG, Duncan BW, McDonald KM, Wachter RM. Safe but sound: patient safety meets evidence-based medicine. JAMA. 2002;288(4):508-13,] The US National Quality Forum used the report as the main source of its 30 Safe Practices for Better Healthcare.
This report was initially received with some criticism from some prominent leaders in the field who had expected the report to recommend that healthcare emulate high risk industries such as aviation and nuclear power, and focus on implementing technology solutions such as electronic medical records [Leape LL, Berwick DM, Bates DW. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA. 2002]. We had acknowledged the promise of such approaches, but pointed out that the current evidence more strongly supported preventing known concrete problems, such as hospital-acquired infections, venous thromboembolism, and so on. The strategies we recommended ended becoming a hugely important approach to improving patient safety in hospital settings (e.g., efforts to implement consistent VTE prophylaxis, the central line bundle for preventing catheter associated blood stream infections, and so on).
I participated in an update of this massive evidence report in 2012, funded by a $1M contract to 4 major academic medical centers in the US. [Making health care safer II: an updated critical analysis of the evidence for patient safety practices. Evid Rep Technol Assess. 2013 PMID: 24423049]. The main overview papers and specific reviews from the report were included in a supplemental issue for Annals of Internal Medicine.
When the National Patient Safety Foundation convened an expert panel, including members of the original Institute of Medicine (IOM) panel that wrote To Err is Human, to develop recommendations for advancing the field of patient safety, I co-chaired the expert panel and writing of the resulting report with Dr. Donald Berwick, founder of the Institute for Healthcare Improvement ( http://www.npsf.org/?freefromharm#form ) A summary of this work appeared in JAMA [Gandhi TK, Berwick DM, Shojania KG. Patient Safety at the Crossroads. JAMA 2016; 315:1829-30]
4. Shojania KG, Ranji S, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA. 2006 Jul 26;296(4):14.
This systematic review used a novel categorization to analyze the impact of 11 distinct categories of quality improvement interventions on glycemic control for ambulatory patients with diabetes across 66 controlled studies. Using a sophisticated meta-regression technique, we were able to show that the single most effective type of quality improvement intervention was case management in which nurses or pharmacists played an active role in coordinating patients’ care and could make medication changes without having to wait for approval from physicians. This paper has been cited over 800 times, and the taxonomy we developed for characterizing the different improvement strategies has since been applied by other groups. Further work in this area led to a much updated analysis—48 cluster randomized controlled trials (involving 84,865 patients) and 94 patient RCTs (involving 38,664 patients)—published in the Lancet and on which I was senior author [Tricco AC, Ivers NM, Grimshaw JM, Moher D, Turner L, Galipeau J, Halperin I, Vachon B, Ramsay T, Manns B, Tonelli M, Shojania K. Effectiveness of quality improvement strategies on the management of diabetes: a systematic review and meta-analysis. Lancet 2012 Jun 16;379(9833):2252-61.] A newer update is soon to be published in the Cochrane Library.
5. Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D. How quickly do systematic reviews become out of date: a survival analysis. Ann Intern Med. 2007 Aug 21;14(7):10.
This article presents work funded by the US Agency for Healthcare Research and Quality (AHRQ) to assess the scope of the problem of outdated systematic reviews of the literature. The AHRQ funds numerous systematic reviews (e.g., to support guideline development) and was therefore interested in maintaining the currency of these reviews (e.g., how soon it might need to update past reviews). My own involvement in this work grew out of my interest in updating the meta-analysis of diabetes quality improvement strategies (the publication listed above), as well as various systematic reviews of over 75 specific patient safety interventions. We used a variety of efficient search techniques to update each of 100 quantitative meta-analyses indexed in a prominent secondary evidence source for clinicians and then determined how many of them met a priori criteria for major changes in evidence (e.g., a 50% change in effect size after updating the original meta-analysis with newer studies, or a single publication in one of 6 high-impact general medical journals with a qualitatively different conclusion, essentially saying the opposite of the meta-analysis). We found that, while the average ‘survival time’ was 5.5 years, 23% of reviews were out of date within 2 years, 15% within 1 year, and 7% already met criteria for updating at the time of publication.
As an offshoot of some of my work on evidence synthesis in quality improvement and patient safety, I developed and evaluated an efficient search strategy for identifying systematic reviews in general (reported in Shojania KG, Bero LA. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Effective Clinical Practice 2000), which was adapted by the US National Library of Medicine for its built-in search filter for retrieving systematic reviews.
POSTGRADUATE TRAINING
- Internal Medicine Residency, Brigham and Women's Hospital, Boston, United States, 1 Jul 1995 - 30 Jun 1998 (Residency)
- Hospital Medicine Fellow, University of California San Francisco Medical Center, San Francisco, United States, 1 Jul 1998 - 30 Jun 2000 (Other)
- Internship/PGY1 in Internal Medicine, University of British Columbia, Vancouver, Canada, 1 Jul 1994 - 30 Jun 1995 (Residency)
AFFILIATED INSTITUTIONS
- Sunnybrook Health Sciences Centre