Levels of Evidence
Levels of evidence (sometimes called the hierarchy of evidence) are assigned to studies based on the methodological quality of their design, validity, and applicability to patient care. Together, these determine the "grade (or strength) of recommendation".
Systematic reviews or meta-analyses of randomized controlled trials (RCTs) and evidence-based practice guidelines are considered the strongest level of evidence on which to base practice decisions (Melnyk, 2004). The weakest level of evidence is the opinion of authorities and/or reports of expert committees.
Types of Resources
When searching for evidence-based information, select the highest level of evidence possible: systematic reviews or meta-analyses. Systematic reviews, meta-analyses, and critically appraised topics/articles have all gone through an evaluation process: they have been "filtered".
Information that has not been critically appraised is considered "unfiltered".
As you move up the pyramid, however, fewer studies are available; the highest levels of evidence may not exist for your clinical question. If that is the case, move down the pyramid until you find relevant resources.
Criteria
When appraising research, keep the following three criteria in mind:
Quality
Trials that are randomized and double-blind, to avoid selection and observer bias, and where we know what happened to most of the subjects in the trial.
Validity
Trials that mimic clinical practice, or could be used in clinical practice, and with outcomes that make sense. For instance, in chronic disorders we want long-term, not short-term, trials. We are [also] ... interested in outcomes that are large, useful, and statistically very significant (p < 0.01, i.e., less than a 1 in 100 probability that a result this extreme would arise by chance alone).
Size
Trials (or collections of trials) that have large numbers of patients, to avoid being wrong because of the random play of chance. For instance, to be sure that a number needed to treat (NNT) of 2.5 is really between 2 and 3, we need results from about 500 patients. If that NNT is above 5, we need data from thousands of patients.
These are the criteria on which we should judge evidence; to count as strong evidence, a study has to satisfy all three.
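To see why size matters, here is a minimal sketch (in Python, using hypothetical event rates that are not taken from this guide) of how an NNT and its approximate 95% confidence interval can be derived from the absolute risk reduction. With roughly 500 patients, an NNT of 2.5 stays roughly between 2 and 3; with only 100 patients the interval is much wider.

```python
import math

def nnt_ci(cer, eer, n_control, n_treated, z=1.96):
    """Approximate 95% CI for the number needed to treat (NNT), obtained by
    inverting the CI for the absolute risk reduction (ARR).
    cer/eer are the control and experimental event rates; n_* are group sizes."""
    arr = cer - eer
    se = math.sqrt(cer * (1 - cer) / n_control + eer * (1 - eer) / n_treated)
    lo, hi = arr - z * se, arr + z * se  # CI for ARR
    # A larger ARR means a smaller NNT, so the bounds swap when inverted
    return 1 / arr, 1 / hi, 1 / lo

# Hypothetical event rates chosen so that NNT = 1 / (0.5 - 0.1) = 2.5
for n in (50, 250):  # 100 vs. 500 patients in total
    nnt, low, high = nnt_ci(0.5, 0.1, n, n)
    print(f"{2 * n} patients: NNT = {nnt:.1f}, 95% CI {low:.1f} to {high:.1f}")
```

Under these assumed rates, the 100-patient trial gives a confidence interval of roughly 1.8 to 4.2, while the 500-patient trial narrows it to roughly 2.1 to 3.1, which illustrates the point made in the Size criterion above.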
Level I
- Experimental study, randomized controlled trial (RCT)
- Systematic review of RCTs, with or without meta-analysis
Level II
- Quasi-experimental study
- Systematic review of a combination of RCTs and quasi-experimental studies, or quasi-experimental studies only, with or without meta-analysis
Level III
- Non-experimental study
- Systematic review of a combination of RCTs, quasi-experimental, and non-experimental studies, or non-experimental studies only, with or without meta-analysis
- Qualitative study or systematic review, with or without meta-analysis
Level IV
Opinion of respected authorities and/or nationally recognized expert committees/consensus panels based on scientific evidence.
Includes:
- Clinical practice guidelines
- Consensus panels
Level V
Based on experiential and non-research evidence.
Includes:
- Literature reviews
- Quality improvement, program or financial evaluation
- Case reports
- Opinion of nationally recognized expert(s) based on experiential evidence
From Johns Hopkins nursing evidence-based practice: Models and guidelines:
Dearholt, S., Dang, D., & Sigma Theta Tau International. (2018). Johns Hopkins nursing evidence-based practice: Models and guidelines.