Systematic Review & Meta-analysis
- Meta-analysis is a statistical analysis of a collection of studies
- Meta-analysis methods focus on contrasting and comparing results from different studies in anticipation of identifying consistent patterns and sources of disagreements among these results
- Primary objective
- Synthetic goal (estimation of summary effect) vs
- Analytic goal (estimation of differences)
- Systematic Review
- the application of scientific strategies that limit bias to the systematic assembly, critical appraisal and synthesis of all relevant studies on a specific topic
- Meta-Analysis
- a systematic review that employs statistical methods to combine and summarize the results of several studies
- Clearly formulated question
- Comprehensive data search
- Unbiased selection and extraction process
- Critical appraisal of data
- Synthesis of data
- Perform sensitivity and subgroup analyses if appropriate and possible
- Prepare a structured report
- What is the study objective?
- to validate results in a large population
- to guide new studies
- Pose the question in both biologic and health care terms, specifying operational definitions for:
- intervention
- outcomes (both beneficial and harmful)
- Study design
- Interventions
- Need a well formulated and co-ordinated effort
- Seek guidance from a librarian
- Specify language constraints
- Requirements for comprehensiveness of the search depend on the field and question to be addressed
- Possible sources include
- computerized bibliographic database
- review articles
- conference proceedings
- dissertations
- granting agencies
- trial registries
- journal handsearching
- usually begin with searches of bibliographic sources (citation indexes, abstract databases)
- publications retrieved and references therein searched for more references
- as a step to elimination of publication bias need information from unpublished research
- databases of unpublished reports
- clinical research registries
- clinical trial registries
- unpublished theses
- conference indexes
- 2 independent reviewers select studies
- Selection of studies addressing the question posed based on a priori specification of the population, intervention, outcomes and study design
- Level of agreement assessed with the kappa statistic
- Differences resolved by consensus
- Specify reasons for rejecting studies
- 2 independent reviewers extract data using predetermined forms
- Patient characteristics
- Study design and methods
- Study results
- Methodologic quality
- Be explicit, unbiased and reproducible
- Include all relevant measures of benefit and harm of the intervention
- Contact investigators of the studies for clarification in published methods etc.
- Extract individual patient data when published data do not answer questions about intention to treat analyses, time-to-event analyses, subgroups, dose-response relationships
- Well formulated question
- Size of study
- Characteristics of study patients
- Details of specific interventions used
- Details of outcomes assessed
- threshold for inclusion
- possible explanations for heterogeneity
- Base quality assessments on extent to which bias is minimized
- Make quality assessment scoring systems transparent and parsimonious
- Evaluate reproducibility of quality assessment
- Report quality scoring system used
- P1 = event rate in the experimental group
- P2 = event rate in the control group
- RD (risk difference) = P2 - P1
- RR (relative risk) = P1 / P2
- RRR (relative risk reduction) = (P2 - P1) / P2
- OR (odds ratio) = [P1 / (1 - P1)] / [P2 / (1 - P2)]
- NNT (number needed to treat) = 1 / (P2 - P1)
- Experimental event rate P1 = 0.3
- Control event rate P2 = 0.4
- RD = 0.4 - 0.3 = 0.1
- RR = 0.3 / 0.4 = 0.75
- RRR = (0.4 - 0.3) / 0.4 = 0.25
- OR = (0.3/0.7) / (0.4/0.6) = 0.64
- NNT = 1 / (0.4 - 0.3) = 10
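The worked example above can be checked with a few lines of Python (not part of the original deck; the event rates 0.3 and 0.4 come from the slide):

```python
# Reproduces the slide's worked example; P1 and P2 are the experimental
# and control event rates (0.3 and 0.4).
p1 = 0.3   # experimental event rate
p2 = 0.4   # control event rate

rd = p2 - p1                                   # risk difference: 0.1
rr = p1 / p2                                   # relative risk: 0.75
rrr = (p2 - p1) / p2                           # relative risk reduction: 0.25
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2)) # odds ratio: ~0.64
nnt = 1 / (p2 - p1)                            # number needed to treat: 10

print(rd, rr, rrr, round(odds_ratio, 2), nnt)
```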
- Mean Difference
- When studies have comparable outcome measures (i.e. same scale, probably same length of follow-up)
- A meta-analysis using MDs is known as a weighted mean difference (WMD)
- Standardized Mean Difference
- When studies use different outcome measurements which address the same clinical outcome (e.g. different scales)
- Converts each scale to a common scale: the number of standard deviations
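As an illustration only, here is a minimal Python sketch of a standardized mean difference for a single study, using Cohen's d (Hedges' g adds a small-sample correction); all group summaries are made-up values:

```python
# Standardized mean difference (Cohen's d) for one hypothetical study.
import math

n1, mean1, sd1 = 30, 52.0, 10.0   # treatment group (illustrative values)
n2, mean2, sd2 = 32, 47.0, 12.0   # control group (illustrative values)

# pooled standard deviation across the two groups
sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

md = mean1 - mean2                 # mean difference (same-scale outcomes)
smd = md / sd_pooled               # standardized mean difference (scale-free)

print(f"MD = {md:.2f}, SMD = {smd:.2f}")
```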
- True inter-study variation may exist (fixed/random-effects model)
- Sampling error may vary among studies (sample size)
- Characteristics may differ among studies (population, intervention)
- Parameter of interest: θ (quantifies the average treatment effect)
- Number of independent studies: k
- Summary statistic Yi from each study (i = 1, 2, ..., k)
- Large sample sizes → asymptotically normal distributions
- Fixed-effects view: the outcome Yi from study i is a sample from a distribution with mean θ
- (i.e. a common mean across studies)
- The Yi are independently distributed as N(θ, σi²) (i = 1, 2, ..., k), where Var(Yi) = σi² and E(Yi) = θ
- Random-effects view: the outcome Yi from study i is a sample from a distribution with mean θi (i.e. study-specific means)
- Each θi is a realization from a distribution of effects with mean θ
- The θi are independently distributed as N(θ, τ²) (i = 1, 2, ..., k), where
- Var(θi) = τ² is the inter-study variation
- θ is the average treatment effect
- After averaging over the study-specific effects, the marginal distribution of Yi is N(θ, σi² + τ²)
- Although θ is the parameter of interest, τ² must be considered and estimated
- The distribution of θi conditional on the observed data, θ and τ² is N(Fi θ + (1 - Fi) Yi, (1 - Fi) σi²)
- where Fi = σi² / (σi² + τ²) is the shrinkage factor for the ith study
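A minimal numerical sketch of the shrinkage idea, assuming illustrative values for Yi, its within-study variance, the between-study variance and the average effect (none of these numbers come from the slides):

```python
# Shrinkage of one study's effect toward the average effect in the
# random-effects model; all inputs are made-up illustrative values.
y_i = 0.50       # observed effect in study i
sigma2_i = 0.04  # within-study variance of study i
tau2 = 0.02      # between-study (inter-study) variance
theta = 0.30     # average treatment effect across studies

F_i = sigma2_i / (sigma2_i + tau2)            # shrinkage factor for study i
theta_i_hat = F_i * theta + (1 - F_i) * y_i   # study effect shrunk toward theta
var_i = (1 - F_i) * sigma2_i                  # conditional variance

print(f"F_i = {F_i:.2f}, shrunken effect = {theta_i_hat:.3f}, variance = {var_i:.4f}")
```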
- Studies are stratified and then combined to account for differences in sample size and study characteristics
- A weighted average of estimates from each study is calculated
- The question of whether a common or a study-specific parameter is to be estimated remains. Procedure:
- perform test of homogeneity
- if no significant difference use fixed-effects model
- otherwise identify study characteristics that stratifies studies into subsets with homogeneous effects or use random effects model
- Require from each study
- effect estimate and
- standard error of effect estimate
- Combine these using a weighted average
- pooled estimate = sum of (estimate × weight) / sum of weights
- where weight = 1 / variance of the estimate (see the sketch below)
- Assumes a common underlying effect behind every trial
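A minimal Python sketch of this inverse-variance (fixed-effect) pooling, using made-up effect estimates and standard errors:

```python
# Fixed-effect (inverse-variance) pooled estimate; data are illustrative only.
import numpy as np

estimates = np.array([0.30, 0.10, 0.45, 0.25])  # per-study effect estimates
se = np.array([0.20, 0.15, 0.30, 0.10])         # per-study standard errors

weights = 1 / se**2                              # weight = 1 / variance
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```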
- For each trial
- estimate (square)
- 95% confidence interval (CI) (line)
- size of the square indicates the weight allocated to that study
- Solid vertical line = line of no effect
- if the CI crosses this line then the effect is not significant (p > 0.05)
- Horizontal axis
- arithmetic: RD, MD, SMD
- logarithmic: OR, RR
- Diamond represents the combined estimate and its 95% CI
- Dashed line plotted vertically through combined estimate
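A rough matplotlib sketch of such a forest plot, using invented study effects on a log odds ratio scale (squares scaled by weight, a diamond for the pooled estimate):

```python
# Minimal forest-plot sketch; study names, effects and standard errors
# are made-up illustrative values on the log odds ratio scale.
import numpy as np
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D"]
log_or = np.array([-0.40, -0.10, -0.55, -0.25])    # effect estimates
se = np.array([0.20, 0.15, 0.30, 0.10])            # standard errors

w = 1 / se**2                                       # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)             # fixed-effect pooled estimate
pooled_se = np.sqrt(1 / np.sum(w))

fig, ax = plt.subplots()
y = np.arange(len(studies))[::-1]
# one square per study, scaled by weight, with a 95% CI line
ax.errorbar(log_or, y, xerr=1.96 * se, fmt="none", ecolor="black")
ax.scatter(log_or, y, s=200 * w / w.max(), marker="s", color="black")
# diamond for the combined estimate and its 95% CI
ax.scatter([pooled], [-1], marker="D", s=120, color="black")
ax.hlines(-1, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se, color="black")
ax.axvline(0, color="black")                        # line of no effect (log scale)
ax.axvline(pooled, linestyle="--", color="grey")    # dashed line through pooled estimate
ax.set_yticks(list(y) + [-1])
ax.set_yticklabels(studies + ["Combined"])
ax.set_xlabel("log odds ratio")
plt.show()
```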
- Define meaning of heterogeneity for each review
- Define a priori the important degree of heterogeneity (in large data sets trivial heterogeneity may be statistically significant)
- If heterogeneity exists examine potential sources (differences in study quality, participants, intervention specifics or outcome measurement/definition)
- If heterogeneity exists across studies, consider using random effects model
- If heterogeneity can be explained using a priori hypotheses, consider presenting results by these subgroups
- If heterogeneity cannot be explained, proceed with caution with further statistical aggregation and subgroup analysis
- Common sense
- Are the patients, interventions and outcomes in each of the included studies sufficiently similar?
- Exploratory analysis of study-specific estimates
- Statistical tests
- Subgroup analyses
- subsets of trials
- subsets of patients
- SUBGROUPS SHOULD BE PRE-SPECIFIED TO AVOID BIAS
- Meta-regression
- relate size of effect to characteristics of the trials
- Assume true effect estimates really vary across studies
- Two sources of variation
- within studies (between patients)
- between studies (heterogeneity)
- What the software does
- Revise the weights to take into account both components of variation: weight = 1 / (within-study variance + heterogeneity)
- When heterogeneity exists we get
- a pooled estimate that may (but need not) differ, with a different interpretation
- a wider confidence interval
- a larger p-value
- Include all relevant and clinically useful measures of treatment effect
- Perform a narrative, qualitative summary when data are too sparse, of too low quality or too heterogeneous to proceed with a meta-analysis
- Specify if fixed or random effects model is used
- Describe proportion of patients used in final analysis
- Use confidence intervals
- Include a power analysis
- Consider cumulative meta-analysis (by order of publication date, baseline risk, study quality) to assess the contribution of successive studies
- Pre-specify hypothesis-testing subgroup analyses and keep few in number
- Label all a posteriori subgroup analyses
- When subgroup differences are detected, interpret in light of whether they are
- established a priori
- few in number
- supported by plausible causal mechanisms
- important (qualitative vs quantitative)
- consistent across studies
- statistically significant (adjusted for multiple testing)
- Test robustness of results relative to key features of the studies and key assumptions and decisions
- Include tests of bias due to the retrospective nature of systematic reviews (e.g. with/without studies of lower methodologic quality)
- Consider fragility of results by determining effect of small shifts in number of events between groups
- Consider cumulative meta-analysis to explore the relationship between effect size and study quality, control event rates and other relevant features
- Test a reasonable range of values for missing data from studies with uncertain results
- Scatterplot of effect estimates against sample size
- Used to detect publication bias
- If no bias, expect symmetric, inverted funnel
- If bias, expect asymmetric or skewed shape
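A minimal matplotlib sketch of a funnel plot; here standard error is used as the measure of study size (sample size or precision are common alternatives), and the data are simulated without publication bias:

```python
# Funnel-plot sketch; effects and standard errors are simulated purely
# for illustration (no publication bias built in).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.2
se = rng.uniform(0.05, 0.4, size=40)            # smaller SE ~ larger study
effects = rng.normal(true_effect, se)           # observed study effects

plt.scatter(effects, se)
plt.axvline(true_effect, linestyle="--")        # expected centre of the funnel
plt.gca().invert_yaxis()                        # large studies (small SE) at the top
plt.xlabel("effect estimate")
plt.ylabel("standard error")
plt.title("Symmetric inverted funnel suggests little publication bias")
plt.show()
```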
- Include a structured abstract
- Include a table of the key elements of each study
- Include summary data from which the measures are computed
- Employ informative graphic displays representing confidence intervals, group event rates, sample sizes etc.
- Interpret results in context of current health care
- State methodologic limitations of studies and review
- Consider size of effect in studies and review, their consistency and presence of dose-response relationship
- Consider interpreting results in context of temporal cumulative meta-analysis
- Interpret results in light of other available evidence
- Make recommendations clear and practical
- Propose future research agenda (clinical and methodological requirements)
- (1) Conceptually, think of a generic effect size statistic T
- (2) a corresponding effect size parameter θ
- (3) an associated standard error SE(T), the square root of its variance
- (4) for some effect sizes, a suitable transformation may be needed to make inference based on normal distribution theory
- (A) Fixed-Effects Model (FEM)
- Assume a common effect size
- Obtain average effect size as a weighted mean (unbiased)
- Optimal weight is reciprocal of variance (inverse variance weighted method)
- Variances inversely proportional to within-study sample sizes
- what is the effect of larger studies in calculating weights?
- may also weight by a quality index, q, scaled from 0 to 1
- Average effect size has a conditional variance (a function of the conditional variances of each effect size and the quality index)
- e.g. V = 1 / total weight
- Multiply the resulting standard error by the appropriate critical value (e.g. 1.96, 2.58, 1.645)
- Construct confidence interval and/or test statistic
- Test the homogeneity assumption using a weighted effect size sums of squares of deviations, Q
- If Q exceeds the critical value of chi-square at k - 1 d.f. (k = number of studies), then the observed between-study variance is significantly greater than what would be expected under the null hypothesis
- When within-study sample sizes are very large, Q may be rejected even when individual effect size estimates do not differ much
- One can take different courses of action when Q is rejected (see next page)
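A minimal Python sketch of the Q test of homogeneity described above, with illustrative effect sizes and variances (SciPy is assumed for the chi-square tail probability):

```python
# Cochran's Q test of homogeneity; the effect estimates and variances
# below are illustrative values, not real data.
import numpy as np
from scipy import stats

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study effect sizes T_i
v = np.array([0.04, 0.02, 0.09, 0.03, 0.05])   # within-study variances

w = 1 / v                                       # inverse-variance weights
t_bar = np.sum(w * y) / np.sum(w)               # fixed-effect weighted mean
Q = np.sum(w * (y - t_bar) ** 2)                # weighted sum of squared deviations
df = len(y) - 1                                 # k - 1 degrees of freedom
p_value = stats.chi2.sf(Q, df)

print(f"Q = {Q:.2f}, df = {df}, p = {p_value:.3f}")
```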
- Methodologic choices in dealing with heterogeneous data
- (B) Random-Effects Model (REM)
- Total variability of an observed study effect size reflects within and between variance (extra variance component)
- If between-studies variance is zero, equations of REM reduce to those of FEM
- Presence of a variance component which is significantly different from zero may be indicative of REM
- Once the significance of the variance component is established (e.g. by the Q test for homogeneity of effect size),
- its magnitude should be estimated
- variance components can be estimated in many ways!
- the most commonly used method is the so-called DerSimonian-Laird method, which is based on a method-of-moments approach
- Compute random effects weighted mean as an estimate of the average of the random effects in the population
- construct confidence interval and conduct hypothesis tests as before (new variance and thus new weights!!!)
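A minimal Python sketch of the DerSimonian-Laird (method-of-moments) estimate of the between-study variance and the resulting random-effects pooled estimate, again with illustrative data:

```python
# DerSimonian-Laird random-effects meta-analysis; data are illustrative only.
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study effect sizes
v = np.array([0.04, 0.02, 0.09, 0.03, 0.05])   # within-study variances

w = 1 / v
t_bar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - t_bar) ** 2)
k = len(y)
# DerSimonian-Laird (method-of-moments) estimate of the between-study variance
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

w_star = 1 / (v + tau2)                         # revised random-effects weights
mu_hat = np.sum(w_star * y) / np.sum(w_star)    # random-effects pooled estimate
se_mu = np.sqrt(1 / np.sum(w_star))
ci = (mu_hat - 1.96 * se_mu, mu_hat + 1.96 * se_mu)

print(f"tau^2 = {tau2:.3f}, pooled = {mu_hat:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```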
- A measure of association more popular in cross-sectional observational studies than in RCTs is Pearson's correlation coefficient, r = Σ(Xi - X̄)(Yi - Ȳ) / √[Σ(Xi - X̄)² Σ(Yi - Ȳ)²]
- X and Y must be continuous (e.g. blood pressure and weight)
- r lies between -1 and 1
- not available in RevMan / MetaView at this time
- Following the generic framework discussed earlier
- the effect size statistic is r
- the corresponding effect size parameter is the underlying population correlation coefficient, ρ
- in this case, a suitable transformation is needed to achieve approximate normality of effect size
- inference is conducted on the scale of the transformed variable and final results are back-transformed to the original scale
- Assuming X and Y have a bivariate normal distribution, the Fisher's Z-transformed variable Z = ½ ln[(1 + r) / (1 - r)]
- has, for large samples, an approximate normal distribution with mean ζ = ½ ln[(1 + ρ) / (1 - ρ)]
- and a variance of 1 / (n - 3)
- Hence, the weighting factor associated with Z is W = 1 / Var = n - 3
- the meta-analysis is carried out on the Z-transformed measures and the final results are transformed back to the correlation scale using r = (e^(2Z) - 1) / (e^(2Z) + 1)
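A minimal Python sketch of combining correlations via Fisher's Z, with invented correlations and sample sizes:

```python
# Pooling correlation coefficients via Fisher's Z transformation;
# the correlations and sample sizes are illustrative only.
import numpy as np

r = np.array([0.45, 0.30, 0.55, 0.25])          # study correlations
n = np.array([40, 60, 35, 80])                  # study sample sizes

z = 0.5 * np.log((1 + r) / (1 - r))             # Fisher's Z transform (arctanh)
w = n - 3                                        # weight = 1 / variance = n - 3
z_bar = np.sum(w * z) / np.sum(w)               # weighted mean of Z
se = np.sqrt(1 / np.sum(w))                      # standard error of the combined mean

z_stat = z_bar / se                              # test of H0: rho = 0
ci_z = (z_bar - 1.96 * se, z_bar + 1.96 * se)    # 95% CI on the Z scale
ci_r = tuple(np.tanh(ci_z))                      # back-transform to the r scale
r_bar = np.tanh(z_bar)                           # combined correlation

print(f"combined r = {r_bar:.3f}, z = {z_stat:.2f}, "
      f"95% CI = ({ci_r[0]:.3f}, {ci_r[1]:.3f})")
```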
- Source: Fleiss JL. Statistical Methods in Medical Research 1993; 2: 121-145.
- Correlation coefficients reported by 7 independent studies in education are included in the meta-analysis
- Comparison: association between a characteristic of the teacher and the mean measure of his or her students' achievement
- No evidence for heterogeneous association across studies
- Fixed effect analysis may be undertaken
- Would a random effect analysis as shown earlier produce a different numerical value for the combined correlation coefficient?
- How would the weights be modified to carry out a REM?
- the weighted mean of Z is Z̄ = Σ(ni - 3)Zi / Σ(ni - 3)
- the approximate standard error of the combined mean is SE(Z̄) = 1 / √Σ(ni - 3)
- the test of significance is carried out using z = Z̄ / SE(Z̄)
- this value exceeds the critical value 1.96 (corresponding to the 5% level of significance), so we conclude that the average value of Z (hence the average correlation) is statistically significant
- the 95% confidence interval for ζ is Z̄ ± 1.96 SE(Z̄)
- Transforming back to the original scale yields a 95% CI for the parameter of interest, ρ,
- again confirming a significant association
- Does the review set out to answer a precise question about patient care?
- Should be different from an uncritical encyclopedic presentation
- Have studies been sought thoroughly?
- Medline and other relevant bibliographic database
- Cochrane controlled clinical trials register
- Foreign language literature
- "Grey literature" (unpublished or un-indexed reports theses, conference proceedings, internal reports, non-indexed journals, pharmaceutical industry files)
- Reference chaining from any articles found
- Personal approaches to experts in the field to find unpublished reports
- Hand searches of the relevant specialized journals.
- Have inclusion and exclusion criteria for studies been stated explicitly, taking account of the patients in the studies, the interventions used, the outcomes recorded and the methodology?
- Have the authors considered the homogeneity of the studies, i.e. the idea that the studies are sufficiently similar in their design, interventions and subjects to merit combination?
- this is done either by eyeballing graphs like the forest plot or by applications of chi-square tests (Q test)
- The various studies may have used patients of different ages or social classes, but if the treatment effects are consistent across the studies, then generalisation to other groups or populations is more justified.
- Be wary of sub-group analyses where the authors attempt to draw new conclusions by comparing the outcomes for patients in one study with the patients in another study
- Be wary of "data-dredging" exercises, testing multiple hypotheses against the data, especially if the hypotheses were constructed after the study had begun data collection.
- One may also want to ask
- Were all clinically important outcomes considered?
- Are the benefits worth the harms and costs?