Reporting ANOVA results

When reporting ANOVA, it’s vital to provide a clear and comprehensive summary of the findings in the text and, where appropriate, in a table.

ANOVA Tables

An ANOVA table is a common fixture in the scientific literature; many journals expect one, and your thesis committee likely will as well. The ANOVA table typically includes several crucial components:

  1. Source: What is being tested (e.g., Between groups, Within groups, Total, or specific factors in factorial ANOVA).

  2. df (Degrees of Freedom): Indicating the degrees of freedom for the source.

  3. SS (Sum of Squares): The variability attributable to each source; the Total row gives the total variability in the data.

  4. MS (Mean Square): Obtained by dividing the sum of squares by the corresponding degrees of freedom (SS/df).

  5. F: The F-statistic.

  6. Sig. (Significance): The p-value associated with the F-statistic.
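
For a one-way ANOVA with \(k\) groups and \(N\) total observations, these quantities are related as follows:

\[ df_B = k - 1, \quad df_W = N - k, \quad MS_B = \frac{SS_B}{df_B}, \quad MS_W = \frac{SS_W}{df_W}, \quad F = \frac{MS_B}{MS_W} \]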

Example ANOVA Table:

\[ \begin{array}{|l|c|c|c|c|c|} \hline \text{Source} & \text{df} & \text{SS} & \text{MS} & F & \text{Sig.} \\ \hline \text{Between Groups} & df_B & SS_B & MS_B & F & p \\ \text{Within Groups} & df_W & SS_W & MS_W & - & - \\ \hline \text{Total} & df_T & SS_T & - & - & - \\ \hline \end{array} \]

Where:

- \(df_B, df_W, df_T\): Between-groups, within-groups, and total degrees of freedom, respectively.

- \(SS_B, SS_W, SS_T\): Between-groups, within-groups, and total sum of squares, respectively.

- \(MS_B, MS_W\): Between-groups and within-groups mean square, respectively.

- \(F\) and \(p\): The F-statistic and p-value for the between-groups effect.
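
In practice, statistical software produces this table for you. Below is a minimal sketch in Python using statsmodels, assuming a hypothetical long-format data set with a grouping column and a response column (the column names and values are illustrative only, not taken from the text):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per observation
df = pd.DataFrame({
    "treatment": ["low"] * 5 + ["medium"] * 5 + ["high"] * 5,
    "lesions":   [3, 4, 2, 3, 4,  4, 5, 4, 3, 5,  6, 5, 7, 6, 5],
})

# Fit a one-way ANOVA model and print the ANOVA table
# (columns: df, sum_sq, mean_sq, F, PR(>F))
model = smf.ols("lesions ~ C(treatment)", data=df).fit()
print(anova_lm(model, typ=1))
```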

In-Text Reporting

When reporting ANOVA results in-text, it’s crucial to provide enough statistical detail to allow the reader to understand the findings and potentially replicate the analysis. Common practice typically involves including the following statistical metrics:

  1. F-value: The calculated F-statistic, the ratio of the variance explained by the factor (between-groups mean square) to the residual variance (within-groups mean square).

  2. Degrees of Freedom: Usually reported as two values: the degrees of freedom for the factor (between groups) and for the residuals (within groups).

  3. P-value: The significance value associated with the F-statistic.

Some variations might also include:

  • Effect Size: Such as \(\eta^2\) (eta squared), which measures the proportion of total variance attributed to the factor.

  • Mean Square (MS): Especially if it provides context to the F-value. This might be particularly relevant for more complex ANOVA models.

General ANOVA: \[ F(df_\text{between},\, df_\text{within}) = \text{F-value},\ p = \text{p-value} \]

If you include the effect size: \[ F(df_\text{between},\, df_\text{within}) = \text{F-value},\ p = \text{p-value},\ \eta^2 = \text{effect size} \]
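
As a rough sketch of where these numbers come from, a one-way ANOVA can be run with scipy and the pieces of the in-text format assembled by hand; the group arrays below are hypothetical, and \(\eta^2\) is recovered from the F-statistic and degrees of freedom:

```python
import numpy as np
from scipy import stats

# Hypothetical groups (one array of observations per treatment level)
low    = np.array([3, 4, 2, 3, 4])
medium = np.array([4, 5, 4, 3, 5])
high   = np.array([6, 5, 7, 6, 5])

f_value, p_value = stats.f_oneway(low, medium, high)

# Degrees of freedom for k groups and N total observations
k = 3
n_total = low.size + medium.size + high.size
df_between = k - 1
df_within = n_total - k

# Eta squared: SS_B / SS_T = F * df_B / (F * df_B + df_W) for one-way ANOVA
eta_sq = (f_value * df_between) / (f_value * df_between + df_within)

print(f"F({df_between}, {df_within}) = {f_value:.2f}, "
      f"p = {p_value:.3f}, eta^2 = {eta_sq:.2f}")
```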

Example:

Assuming you performed an ANOVA on coral lesions across treatments with varying phosphorus and nitrogen levels and obtained an F-value of 12.34, a p-value of 0.001, 2 between-group degrees of freedom, and 57 within-group degrees of freedom, you might report:

“ANOVA revealed a statistically significant difference in coral lesions across the different treatment levels (F(2, 57) = 12.34, p = 0.001).”

Or, with effect size:

“ANOVA revealed a statistically significant difference in coral lesions across the different treatment levels (F(2, 57) = 12.34, p = 0.001, \(\eta^2\) = 0.30).”
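
As a consistency check, the reported effect size follows from the F-statistic and degrees of freedom, since for a one-way ANOVA

\[ \eta^2 = \frac{SS_B}{SS_B + SS_W} = \frac{F \cdot df_B}{F \cdot df_B + df_W} = \frac{12.34 \times 2}{12.34 \times 2 + 57} \approx 0.30. \]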

Post-hoc (pairwise) tests

Pairwise tests, often used post-hoc after finding a significant main effect in ANOVA, compare each pair of groups to identify which groups differ significantly from each other. Here are some general guidelines for reporting pairwise comparisons in-text:

  1. Specify the Test Used:

    • Clearly mention the specific pairwise comparison test used (e.g., Tukey’s HSD, Bonferroni, LSD, etc.).
  2. Report Specific Comparisons:

    • Explicitly mention which group pairs were compared and provide the relevant test statistics.
  3. Report Significance Levels:

    • Clearly indicate the p-value or significance level for each pair being compared.
    • Mention whether the p-values were adjusted for multiple comparisons and by which method.
  4. Use Appropriate Notation:

    • Use appropriate notation for statistical values (e.g., “p < .05”).
  5. Provide Descriptive Statistics:

    • Optionally, provide means and standard deviations or standard errors for the groups being compared to give context to the differences.

Example Format:

“[Test name] post-hoc tests were conducted to compare the means between each group. There was a statistically significant difference between [Group A] and [Group B] (p = [p-value]), with [Group A] ([Mean A] ± [SD A]) having [higher/lower] [dependent variable] than [Group B] ([Mean B] ± [SD B]). However, the difference between [Group C] and [Group D] was not statistically significant (p = [p-value]).”

Example:

“Tukey’s HSD post-hoc tests were performed to elucidate the pairs of nutrient levels that differed significantly. A statistically significant difference was identified between the low (M = 3.21, SD = 1.12) and high (M = 5.76, SD = 1.34) nutrient conditions (p = .003). Conversely, the medium (M = 4.23, SD = 1.09) and high nutrient conditions did not significantly differ (p = .079).”
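
A minimal sketch of generating such comparisons in Python with statsmodels’ pairwise_tukeyhsd, assuming hypothetical lesion values and nutrient-level labels (the data are illustrative, not the values reported above):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical response values and their nutrient-level labels
lesions  = np.array([3, 4, 2, 3, 4,  4, 5, 4, 3, 5,  6, 5, 7, 6, 5])
nutrient = np.array(["low"] * 5 + ["medium"] * 5 + ["high"] * 5)

# Tukey's HSD: all pairwise comparisons with the family-wise error rate held at alpha
tukey = pairwise_tukeyhsd(endog=lesions, groups=nutrient, alpha=0.05)
print(tukey.summary())  # mean differences, adjusted p-values, reject flags
```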

Additional Notes:

  • Follow the Rules: Ensure that your reporting adheres to the conventions of your field and any publication or course guidelines.
  • Be Consistent: Report descriptive and inferential statistics in the same way across all pairs.
  • Be Transparent: Always be explicit about the methods and adjustments used in the analysis.
  • Ensure Clarity: Make sure your reporting is clear, precise, and interpretable by your intended audience.
  • Discuss Practically: Also discuss the practical or scientific implications of any significant pairwise differences identified.
  • Show It: Complement statistical findings with visualizations, such as bar charts or box plots, to aid interpretation.