Clinical Practice Guidelines Development: Training and Resources

IDSA's clinical practice guidelines aim to improve the quality and appropriateness of care, improve the cost-effectiveness of interventions, serve as educational tools, and identify pertinent research directions. Recommendations are intended to help inform clinical decision making based on a systematic review of the evidence, along with an assessment of the desirable and undesirable aspects of alternative courses of action.

This website provides many tools related to the process of clinical practice guidelines development, including the IDSA Handbook for Clinical Practice Guidelines Development, the Minimum Requirements for Guideline Development, mandatory and additional training modules for prospective panelists, and other tools and resources for guideline development and systematic review.

Handbook for Guidelines Development

The IDSA Handbook for Clinical Practice Guidelines Development was developed by the IDSA department of Clinical Affairs and Practice Guidelines and endorsed by the IDSA Standards and Practice Guidelines Committee (SPGC). It was created to assist IDSA-sponsored guideline panelists in developing trustworthy, best-evidence clinical practice guidelines. The primary objective of clinical practice guidelines is to improve the quality of care provided to patients. For this reason, IDSA believes that guidelines should be held to the highest standards of quality, despite the obvious challenges of applying a uniform methodology to guidelines that address diverse populations, diseases, and interventions.

The handbook provides a systematic and practical framework for guideline development by standardizing the methodological process and improving guidelines' rigor, transparency, robustness, and consistency; it includes detailed information on each step from inception through completion and dissemination, along with expectations and timelines. All current and prospective panelists should familiarize themselves thoroughly with the handbook and understand that IDSA considers the application of its content mandatory at all levels of the guideline development process; the handbook thereby serves as a tacit agreement between the IDSA SPGC and panelists. The handbook will be updated as needed at the discretion of the IDSA SPGC and the department of Clinical Affairs and Practice Guidelines, and updates will be reflected in real time on this site.

Resource: IDSA Handbook for Clinical Practice Guidelines Development

Minimum Requirements for Guideline Development

To ensure the production of trustworthy, evidence-based guidelines, IDSA requires that guideline developers adhere to:

    1. The IDSA Handbook for Clinical Practice Guidelines Development (Updated January 2021)
    2. The Institute of Medicine's (IOM) practice standards
    3. The GRADE approach 

To facilitate the guideline development process, IDSA requires that prospective guideline panelists complete the mandatory training prior to serving on a panel. To permit the production of timely guidelines, IDSA also requires that the panel respect the proposed timeline for each step of the development process.

Mandatory Training for Guideline Panelists

The following training modules (presented by McMaster University) are resources that a prospective guideline panelist MUST complete before serving on a panel. A record will be maintained by the SPGC to monitor completion and determine who is eligible to serve on a panel.

  1. Overview of GRADE in Guideline Development
    This training module provides the viewer with an overview of the guideline development process, from question formulation through recommendation development. The module also introduces the more recent GRADE addition, the Evidence-to-Decision framework (EtD), which carries the evidence through to the recommendation phase.
  2. Formulating/Developing Clinical Questions (PICO) and Choosing Patient-Important Outcomes - COMING SOON
    This training module provides the viewer with background on how to develop PICO (Patient, Intervention, Comparison, Outcome) questions and how to choose patient-important outcomes. The module highlights that a well-formulated PICO question contains the components needed for the final recommendation (see the illustrative example after this list).
  3. Choosing a Comparison and Outcomes for the Summary of Findings (SoF) Table
    This module orients viewers to the selection of comparisons and patient-important outcomes as well as how they are situated within the GRADE Evidence Tables.
  4. Summarizing the Evidence Using the GRADE Evidence Profile
    This training module provides the viewer with an introduction to GRADE's evidence tables. It discusses an extensive version (the Evidence Profile, which lists the GRADE domain judgments used to arrive at the quality of evidence ratings) and a shortened version (the Summary of Findings table, which is similar but omits the quality of evidence judgments). The module explains the variables displayed in the tables so that they yield the best summary of the evidence and of the judgments that inform the recommendation(s).
  5. The GRADE Approach and Summary of Findings Tables
    The module provides a brief review of the overall GRADE approach within guideline development and seeks to introduce the viewer to GRADE's Summary of Findings (SoF) Table.
  6. Making Recommendations Using the Evidence-to-Decision Framework
    This module introduces the viewer to the final step in guideline development, in which the evidence is carried to the decision/recommendation phase. The viewer is introduced to the Evidence-to-Decision framework (EtD), a structured, step-by-step approach to formulating the final recommendation that incorporates the quality of evidence and the strength of the recommendation. The EtD also captures the direction of each recommendation (e.g., "for" or "against" the intervention under consideration).
  7. Strong and Weak/Conditional Recommendations in GRADE
    This module takes the viewer through the steps of making strong versus weak/conditional recommendations, and of distinguishing between the two types, using the GRADE approach.
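
As an illustration of how a PICO question maps onto a recommendation, consider the following hypothetical example (it is not drawn from any IDSA guideline): "In adults hospitalized with community-acquired pneumonia (Patient), does a 5-day course of antibiotic therapy (Intervention), compared with a 7- to 10-day course (Comparison), affect clinical cure, mortality, and adverse events (Outcomes)?" Each element of the question reappears in the eventual recommendation, which is framed for or against the intervention in that population and is justified by the evidence on those outcomes.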

Additional Training for Panelists

The following training modules are not mandatory for participation in a guideline panel, but will provide a deeper understanding of the GRADE approach.

Forming Questions: Background vs. Foreground PICO Questions
This resource provides additional background on the distinction between "background" and "foreground" questions.

GRADE Quality of Evidence Domain: Assessing the Risk of Bias/Limitations (1/5)
This presentation outlines the "risk of bias" domain, the first of the five GRADE domains used in making panel judgments about the quality of the evidence for each patient-important outcome. The "risk of bias" domain assesses the potential study limitations of the body of evidence for each outcome.

GRADE Quality of Evidence Domain: Assessing Inconsistency/Heterogeneity (2/5)
This presentation focuses on the "inconsistency" domain, the second of the five GRADE domains. This domain considers heterogeneity, or differences among the studies' estimates of effect.

GRADE Quality of Evidence Domain: Assessing Indirectness (3/5)
This presentation focuses on the "indirectness" domain: how closely the available evidence reflects the population, intervention, comparison, and outcomes specified in the PICO question that a recommendation addresses.

GRADE Quality of Evidence Domain: Assessing Imprecision (4/5)
This module focuses on the 95% confidence interval of the pooled estimate and the extent to which the interval is narrow or broad: do the extremes of the confidence interval span both benefit and harm (i.e., cross the line of no effect)? The module also orients the viewer to how a small sample size and a small number of events contribute to imprecision.
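
As a brief illustration with hypothetical numbers (not drawn from any IDSA guideline): suppose a pooled analysis finds 24/150 events with an intervention versus 30/150 with the comparator. The risk ratio is RR = (24/150)/(30/150) = 0.80, the standard error of ln(RR) is sqrt(1/24 - 1/150 + 1/30 - 1/150) ≈ 0.25, and the 95% confidence interval is exp(ln 0.80 ± 1.96 × 0.25), or roughly 0.49 to 1.30. Because the interval spans everything from about a 51% relative reduction to a 30% relative increase, it crosses the line of no effect (RR = 1.0), and a panel would typically rate down the quality of evidence for imprecision.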

GRADE Quality of Evidence Domain: Assessing Publication Bias (5/5)
This module considers how comprehensive the literature search was and whether the published literature is skewed toward positive results because negative findings went unpublished.

Other Considerations: Upgrading the Quality of Evidence in Observational Study Designs
GRADE guidance indicates that while observational study designs begin with a quality of evidence rating of "low," the quality of evidence can be upgraded in certain circumstances, such as when there is a large magnitude of effect, a dose-response relationship, or a situation in which all plausible residual confounding would reduce the demonstrated effect (or would suggest an effect when none was observed).

Guideline Development Tools

The following resources represent the various background and critical appraisal tools used for guidelines development.

Institute of Medicine (IOM): Standards for Developing Trustworthy Clinical Practice Guidelines
When treating patients, doctors and other healthcare providers are often faced with difficult decisions and considerable uncertainty. They rely on the scientific literature (in addition to their knowledge, experience, and patient preferences) to inform their decisions. Clinical practice guidelines are statements that include recommendations intended to optimize patient care; they are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.

Because of the large number of clinical practice guidelines available, practitioners and other guideline users find it challenging to determine which guidelines are of high quality. If guideline users had a mechanism to immediately identify high-quality, trustworthy clinical practice guidelines, their health-related decision making would improve, potentially improving both healthcare quality and health outcomes.

The U.S. Congress, through the Medicare Improvements for Patients and Providers Act of 2008, asked the IOM to undertake a study on the best methods used in developing clinical practice guidelines. The IOM developed eight standards for developing rigorous, trustworthy clinical practice guidelines.

GRADE Working Group: The Grading of Recommendations Assessment, Development, and Evaluation
The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group began in 2000 as an informal collaboration of people with an interest in addressing the shortcomings of grading systems in healthcare. The working group has developed a common, sensible, and transparent approach to grading the quality (or certainty) of evidence and the strength of recommendations. Many international organizations have provided input into the development of the GRADE approach, which is now considered the standard in guideline development. IDSA began transitioning to the GRADE framework for new guidelines and guideline updates initiated after October 2008.

AGREE II Instrument: The Appraisal of Guidelines for Research and Evaluation
The Appraisal of Guidelines for Research and Evaluation (AGREE) Instrument was developed to address the issue of variability in the quality of practice guidelines. The AGREE instrument, used internationally, assesses the methodological rigor and transparency with which a guideline is developed. The original AGREE instrument, released in 2003, has been refined to improve its usability and methodological properties, namely its validity and reliability. These efforts resulted in the AGREE II tool, which also includes a new User's Manual. The AGREE II tool comprises 23 items organized into six quality domains (scope and purpose, stakeholder involvement, rigor of development, clarity of presentation, applicability, and editorial independence). Each of the 23 items targets a different aspect of practice guideline quality.

Critical Appraisal Tools (presented by University of Oxford CEBM)
This resource provides an overview of critical appraisal from the University of Oxford's Centre for Evidence-Based Medicine (CEBM).

Systematic Review Tools

The following resources represent the various systematic review tools used to search for reviews and meta-analyses and to appraise them.

Cochrane Database of Systematic Reviews
The Cochrane Database of Systematic Reviews (CDSR) is the leading resource for systematic reviews in healthcare. The CDSR includes Cochrane Reviews (the systematic reviews) and protocols for Cochrane Reviews, as well as editorials and occasional supplements. The CDSR is updated regularly: Cochrane Reviews are published "when ready" and form monthly issues; see the publication schedule. To explore Cochrane Reviews, you can use the advanced search or browse by topic or by Cochrane Review Group (CRG).

AMSTAR: Assessing the Methodological Quality of a Systematic Review
AMSTAR stands for A MeaSurement Tool to Assess systematic Reviews. Systematic reviews have proliferated as one of the key tools of evidence-based healthcare, and this proliferation has presented both opportunities and risks. The opportunities include creating an environment in which researchers can base decisions on accurate, succinct, credible, comprehensive, and comprehensible summaries of the best available evidence on a topic, thereby minimizing error and bias. The risks include variation in the quality of reviews and in their empirical validation. Decision makers have therefore sought ways to identify, among the vast number of systematic reviews available to them, those that are pertinent, well founded, and of the highest quality.

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-analyses
PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions.

MOOSE: Meta-analysis of Observational Studies in Epidemiology
The proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion. Use of the checklist should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers.

Cochrane Risk of Bias Tools
Flaws in the design, conduct, analysis, and reporting of randomized trials can cause the effect of an intervention to be underestimated or overestimated. The Cochrane Collaboration's tool for assessing risk of bias aims to make that assessment clearer and more accurate. For each item in the tool, the assessment of risk of bias has two parts. The first part, the support for judgment, provides a succinct, free-text description or summary of the relevant trial characteristic on which the risk of bias judgment is based, and it aims to ensure transparency in how judgments are reached.

The second part involves assigning a judgment of high, low, or unclear risk of material bias for each item. Material bias is defined as bias of sufficient magnitude to have a notable effect on the results or conclusions of the trial, recognizing the subjectivity of any such judgment. Detailed criteria for making judgments about the risk of bias for each item in the tool are available in the Cochrane Handbook.

The Newcastle-Ottawa Scale (NOS)
Nonrandomized studies, including case-control and cohort studies, can be challenging to implement and conduct, and assessing their quality is essential for properly interpreting their results. The Newcastle-Ottawa Scale (NOS) is the product of an ongoing collaboration between the Universities of Newcastle, Australia, and Ottawa, Canada. It was developed to assess the quality of nonrandomized studies, with its design, content, and ease of use directed toward incorporating quality assessments into the interpretation of meta-analytic results.

EBM Organizations/Resources

The following resources represent organizations, academia, and entities engaged in Evidence-Based Medicine (EBM), healthcare research, and knowledge sharing/translation:

IDSA Standards and Practice Guidelines Committee

Charge

  • To establish IDSA as the scientific voice on issues and standards related to infectious diseases (reports to IDSA Board of Directors)
  • To monitor, develop, and make recommendations to establish scientific policy on issues related to infectious diseases
  • To make recommendations and oversee the development of practice guidelines or any other standard setting documents, including those done in collaboration with other groups

Members

  • Ravi Jhaveri, MD, FIDSA, FPIDS, FAAP - Chair
  • Nabin Shrestha, MD, MPH, FIDSA, FSHEA – Vice Chair
  • Barbara Alexander, MD, FIDSA - IDSA Board of Directors Liaison
  • Melissa Badowski, PharmD, MPH
  • Lara Danziger-Isakov, MD, MPH
  • Clare N. Gentry, MD, MS
  • Yasir A. Hamad, MD (MBBS)
  • Adam L. Hersh, MD, PhD – PIDS Representative
  • Susan Hoover, MD, PhD, FIDSA
  • Waleed Javaid, MD, FIDSA, FSHEA
  • Aley George Kalapila, MD, PhD – HIVMA Representative
  • Todd C. Lee, MD, MPH, FIDSA
  • Stephen Liang, MD, MPHS, FIDSA – SHEA Representative
  • Daniel Minter, MD - IDSA Fellows Sub-Committee Representative
  • Jennifer Saullo, MD, PharmD
  • Samir S. Shah, MD, MSCE, FAAP – AAP Representative
  • Tanvi Sharma, MD, MPH
  • Jeffrey Tessier, MD, FIDSA
  • Francesca Torriani, MD, FIDSA
  • Sharon B. Weissman, MD
  • Jonathan Heald, MA - Staff Liaison
  • Dana S. Wollins, DrPH, MGC - Staff
  • Genet Demisashi - Staff
