Meta-Analysis

**Durlak, Joseph**
*__Understanding Meta-Analysis__*

**Individual studies become "subjects."** **Quantifies data from each study in two ways:**

1) Descriptive features of each study are coded using categorical or continuous coding schemes
2) Outcomes of each study are transformed into a common metric called the Effect Size (ES)

-Meta-analysis contains independent and dependent variables. The dependent variable is the Effect Size (ES) drawn from each study. There are MANY potential independent variables: each feature or characteristic of the reviewed studies (e.g., characteristics of the subjects, interventions, outcome measures, etc.)
-Tests possible relationships between the independent and dependent (ES) variables by assessing which of the independent variables account for significant variation in the dependent (ES) variable

**NOTE:** Each study feature may make a difference, or none may make a difference.

Explanatory meta-analyses use standardized group mean differences as the index of effect.
-Assesses the impact of some type of treatment, program, or intervention

__Steps:__

**Step 1) Formulating the specific research question**

-Critical conceptual evaluation of prior research
-Publication bias: the tendency for authors not to submit studies that fail to achieve statistically significant results
-Methodological quality: criteria to include/exclude studies
NOTE: Those who do not exclude studies on the basis of methodological criteria instead attempt to assess how methodological features relate to ES

**Step 2) Literature search**:

-Computer searches: unreliable when used alone
-Hand-search the journals most likely to publish in the area
-Search the references of retrieved studies

**Step 3) Study coding**:

-To translate study features into useable quantitative data
-Coding procedures are summarized in the report
-Report on the reliability of coding procedures to assure the reader that coding was systematic. Reliability should exceed 80%.
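The 80% reliability criterion above is often checked as simple percent agreement between two independent coders. A minimal Python sketch; the coder data and category labels here are invented for illustration:

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of studies on which two independent coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical example: two coders classifying the intervention type of 10 studies.
coder_a = ["CBT", "CBT", "skills", "CBT",   "skills", "other", "CBT", "skills", "CBT", "other"]
coder_b = ["CBT", "CBT", "skills", "other", "skills", "other", "CBT", "skills", "CBT", "other"]

# Coders agree on 9 of 10 studies -> 90%, above the 80% criterion.
print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")
```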

**Step 4) The index of ES**

-ES is the standardized difference between group means (often called d or g)
-Calculating ES: subtract the mean of the control group at post-treatment from the mean of the treatment group at post-treatment, then divide by the POOLED standard deviation of the two groups
-A positive score reflects that the treated groups outperformed the control groups
-Effects can also be calculated using pre- and post-scores of a single group in studies without a control condition, but such effects are often higher than those obtained when control groups are used
-Conceptual meaning of effects: ES transforms data into a common metric based on standard deviation units. Therefore, in experimental vs. control group studies, an ES of 1.0 reflects that the experimental group changed one standard deviation more than the controls. This is a relative magnitude of effect.
-ESs from all reviewed studies can be averaged to determine an overall mean effect
-Range of effects: by convention, .20 = small; .50 = moderate; and .80 or larger = high magnitude
-Variability in effects: although the average ES is important, explanatory meta-analysis attempts to explain variability in effects across studies
-Hope to explain why studies may differ in effects: is the difference systematic?
-The SD is reported along with its corresponding mean and gives an indication of the variability in obtained ESs
-Statistical significance of a mean effect: some conduct t-tests to determine whether a mean effect obtained from a group of studies differs significantly from zero. This is a t-test with N - 1 degrees of freedom for a difference between a sample mean (obtained in the meta-analysis) and the population mean effect (assumed to be zero).
-It is more useful to calculate confidence intervals around a mean ES obtained from a group of studies. These intervals portray the range of effects that might exist in the true population given the presence of error and variation in the calculation of sample effects.
NOTE: A mean ES is interpreted as significantly different from zero if its confidence interval does not include zero.
-Variability in study features: studies differ in methodological and procedural features as well as sample characteristics
NOTE: Effects of any magnitude can have practical significance in relation to outcomes
-Although an ES can be calculated for each dependent measure, they usually have to be combined or averaged on some basis, because studies vary considerably in the number of dependent measures they contain. Options:

1) Enter the ES for each dependent measure, in each study, into the analysis. Because of the weighting this gives to studies with more measures, entering every possible ES is RARELY done.
2) Use each study as the unit of analysis: average across all effect sizes WITHIN each study. Potential drawback: different types of dependent measures may yield effects of different magnitudes, and averaging across all the measures would obscure these differences.
3) Calculate an effect for each distinct construct represented in the studies and keep these effects separate in the analysis, i.e., use categories (not every study has an effect for every category).
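The Step 4 calculations can be sketched directly: the pooled-SD effect size, and a confidence interval around a mean ES from several studies. This is a minimal illustration with invented numbers; note that real meta-analyses usually weight each ES (e.g., by inverse variance) rather than averaging them equally, as done here:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: (treatment - control) / pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def mean_es_with_ci(effects, z=1.96):
    """Unweighted mean ES and a 95% CI via the normal approximation.
    (A simplification: weighted methods are standard in practice.)"""
    k = len(effects)
    mean = sum(effects) / k
    var = sum((e - mean) ** 2 for e in effects) / (k - 1)  # sample variance of the ESs
    se = math.sqrt(var / k)                                # standard error of the mean ES
    return mean, (mean - z * se, mean + z * se)

# Invented example: treatment M=105, SD=10, n=30; control M=100, SD=10, n=30.
d = cohens_d(105, 10, 30, 100, 10, 30)
print(round(d, 2))  # 0.5 -> "moderate" by the conventional benchmarks

# Five invented ESs from reviewed studies.
mean, (lo, hi) = mean_es_with_ci([0.2, 0.5, 0.8, 0.4, 0.6])
# If the interval excludes zero, the mean ES is significantly different from zero.
print(round(mean, 2), round(lo, 2), round(hi, 2))
```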

**Step 5) Statistical Analysis and ES Distribution**

-Multiple regression: study characteristics are entered as independent variables to predict ES (which serves as the dependent variable). Regression is helpful when more than one significant predictor of effect size is identified and the meta-analyst wishes to evaluate their relative importance (predictive power). Often used when study characteristics are continuous in nature.
NOTE: In many treatment-effectiveness meta-analyses the prime variables of interest are categorical (e.g., types of treatments or types of problems). Under these circumstances, analyze group mean differences.
-Analysis of group mean differences: divide the total sample of studies into two or more subgroups that differ on certain variables believed to be important. If significant differences in mean ESs are obtained, one might attribute the difference to the variable that was used to divide the studies (e.g., grade level).
-How do you know the manner in which you grouped the studies is appropriate? Use the **Q-statistic**.
-Q and model testing: Q is the homogeneity test and assesses whether the effects produced by a group of studies vary because of sampling error alone or represent systematic differences among the studies in addition to sampling error. If the effects produced by a group of studies are found to be homogeneous, the studies are considered to come from the same population, and analysis of group mean effects is warranted.
-Useful for model testing. It is not unusual to fail to obtain homogeneity in some or all study groupings.
-Artifact: any type of methodological, statistical, or measurement error or bias present in the studies. Example: the use of a study whose outcome measure has low reliability.
NOTE: Methodological variables may have a stronger effect on outcome than other study features such as the type of treatment.
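The Q-statistic above is Cochran's homogeneity test: each ES is weighted by the inverse of its sampling variance, and Q sums the weighted squared deviations from the weighted mean ES. A minimal sketch, with the effects and variances invented for illustration:

```python
def q_statistic(effects, variances):
    """Cochran's Q: weighted sum of squared deviations of each ES
    from the inverse-variance-weighted mean ES."""
    weights = [1.0 / v for v in variances]
    w_mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - w_mean) ** 2 for w, e in zip(weights, effects))

# Hypothetical ESs and sampling variances from four studies.
effects = [0.30, 0.45, 0.50, 0.40]
variances = [0.02, 0.03, 0.025, 0.02]

q = q_statistic(effects, variances)
# Q is compared against a chi-square distribution with k - 1 = 3 df.
# Here Q is about 0.98, well below the .05 critical value of roughly 7.81,
# so these effects look homogeneous (consistent with sampling error alone).
print(round(q, 2))
```

A significant Q would instead signal systematic differences among the studies, prompting a search for moderators (study features) that explain the variation.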

**Step 6) Offering conclusions and interpretations**

-Literature
-Limitations
-Offer recommendations to improve future research