# Presentation of mixed model results

This is an issue I was hoping to get some feedback on, as I suspect this is the crowd of behavioural ecologists most likely to have come across similar issues. I attempted to post the following message to "Ecolog-L", the ESA's listserv. Unfortunately it never got posted for some reason (despite the list hosting lots of political discussions, perhaps this was deemed off-topic?):

I was hoping to generate some discussion regarding the reporting of statistical results from generalized linear mixed models, and from mixed models more generally — approaches that are still non-standard but of increasing use in ecology and evolutionary biology.

This topic is proposed for entirely selfish reasons. I have conducted a series of analyses using generalized linear mixed models implemented in R with glmer/lmer (lme4 package). I have two categorical fixed factors, one with two levels and the other with four, as well as a continuous fixed predictor and a random factor (subject ID; I'm using a mixed-model approach to deal with repeated measures). The response is a proportion, so I'm modeling the error with a binomial distribution. With this approach, biological effects are evaluated based on z-values and their associated p-values (an information-theoretic, AIC-based approach is not appropriate for my questions).

This is a non-standard statistical approach, and while I am comfortable interpreting the results, reporting them is a different story. A particular sticking point is the inability to assess what would normally be considered "main effects": there is no ANOVA table to be generated, so the four-level fixed factor is assessed by comparing three of its levels to the fourth (using p-values based on z-values). Douglas Bates explains part of the problem with determining degrees of freedom, F-statistics, and p-values for mixed models here: http://finzi.psych.upenn.edu/R/Rhelp02/archive/76742.html

The result is that there is no main-effect test for the four-level factor (this is obviously irrelevant for the two-level factor). Bolker et al. (2009, TREE) further suggest that evaluating fixed factors using drop-in-deviance tests is not necessarily a good option.
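For concreteness, here is a minimal sketch of the kind of drop-in-deviance (likelihood-ratio) test being debated, on simulated data resembling the design described above (a four-level factor plus a continuous predictor, binomial response). To keep it self-contained it uses a plain base-R GLM rather than glmer; with glmer fits the analogous comparison is `anova(full, reduced)`, which reports a chi-square on the degrees of freedom dropped. All variable names and effect sizes here are made up for illustration.

```r
set.seed(42)

# Simulated design: a 4-level fixed factor, a continuous predictor,
# and a proportion response out of 10 trials per observation
n      <- 200
group  <- factor(sample(c("A", "B", "C", "D"), n, replace = TRUE))
covar  <- rnorm(n)
trials <- rep(10, n)

# Build in a genuine effect of 'group' on the logit scale
eta  <- 0.5 * covar + c(A = 0, B = 0.8, C = -0.5, D = 0.3)[as.character(group)]
succ <- rbinom(n, size = trials, prob = plogis(eta))

# Fit the model with and without the 4-level factor
full    <- glm(cbind(succ, trials - succ) ~ group + covar, family = binomial)
reduced <- glm(cbind(succ, trials - succ) ~ covar,         family = binomial)

# The whole factor is tested at once: the drop in deviance is compared
# to a chi-square with 3 df (4 levels - 1)
lrt <- anova(reduced, full, test = "Chisq")
print(lrt)
p_value <- lrt[2, "Pr(>Chi)"]
```

This gives a single test (and a single reportable chi-square, df, and p) for the factor as a whole, rather than level-by-level z-tests against a reference level — which is exactly why the approach is attractive for reporting, Bolker et al.'s caveats notwithstanding.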

I am concerned that reviewers will have difficulty with this portion of the manuscript, as I'll be reporting statistics without degrees of freedom or the usual statistical vernacular you are expected to provide.

Based on my skimming, neither Crawley's "big" R book nor either of Zuur's mixed-models-for-ecology books discusses the practical issue of reporting these results (in fact, both often deal only with two-level categorical factors, which seems like a bit of an easy out on their part!).

Are there any recommendations for reporting results about levels of a factor beyond detailing the analyses in the methods?

SAS and other statistical packages will give you a typical ANOVA-type table from this sort of analysis (from what I've heard, anyway), but I'm still at a bit of a loss as to the best way to present these results. Following some of what Crawley and Zuur describe, I also ran some drop-in-deviance tests followed by collapsing the data, but those results are harder to present clearly (they lead to the same inferences, though).

Does anyone have any recommendations about this sort of issue?

Ned Dochtermann

Hi there,

I am *far* from being an expert on this subject, but even though Bolker et al. do suggest that dropping terms and evaluating the change in deviance is not a good option, I don't think there is a consensus. Douglas Bates, author of the lme4 package, for example, argues in favor of it in this message:

http://markmail.org/message/bjoogfrm3uktmoqv

Apparently there are issues with how SAS calculates degrees of freedom for fixed-effects F-tests, which is why Prof. Bates hasn't implemented them yet (as you mention above). It's complicated for us biologists, who need to apply techniques that aren't yet resolved at a theoretical level... I've stumbled across this problem several times too. But "for now" I'd just settle for the chi-square approximation...

HTH,

Rafael

Thank you very much for the reply (I'd essentially given up on this topic getting any responses before you replied, so I appreciate it despite the delay). What you describe is pretty much the route I'm taking for those analyses. We'll see how reviewers respond to it.

Thanks,

Ned

Best of luck with that! On an additional note, I've recently learned that permutation methods may be a good way to test the fixed effects of such models. On the website below (which relates to the Bolker et al. TREE paper), the "worked examples" link has some guides on how to approach the issue, but I haven't tried it myself. Maybe someone with better insight can provide some guidelines?

webpage: http://glmm.wikidot.com/
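To make the permutation idea concrete, here is a minimal base-R sketch of a permutation test for a fixed effect: refit the model under shuffled data to build a null distribution for the drop in deviance. This is a simplified fixed-effects-only illustration on invented data; in a real repeated-measures design the permutation scheme would need to respect the grouping structure (e.g. permuting within subjects), as the worked examples on the site above discuss.

```r
set.seed(1)

# Invented data: binary response with a genuine effect of predictor x
n <- 100
x <- rnorm(n)
y <- rbinom(n, size = 1, prob = plogis(0.9 * x))

# Drop in deviance attributable to the predictor, for any candidate xx
dev_drop <- function(xx) {
  full <- glm(y ~ xx, family = binomial)
  null <- glm(y ~ 1,  family = binomial)
  null$deviance - full$deviance
}

# Observed statistic, then a null distribution from shuffled x
obs  <- dev_drop(x)
perm <- replicate(999, dev_drop(sample(x)))

# Empirical p-value (with the +1 correction so it is never exactly zero)
p_perm <- (sum(perm >= obs) + 1) / (999 + 1)
```

The appeal for the reporting problem discussed above is that the permutation p-value needs no distributional assumptions or degrees-of-freedom bookkeeping: you report the observed statistic, the number of permutations, and the empirical p.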

Thanks a lot for that! I've used permutation tests for a lot of things but not within a mixed model context, I'll have to check that out.

As is so often the case, my results are qualitatively the same whether done "correctly" or using more conventional approaches (a basic repeated-measures design). The lack of clear reporting guidelines is definitely an impediment to adopting the "correct" methods. Of course, in my case the whole thing would be clearer if I were to use model comparison; alas, that isn't appropriate for my study design.

Thanks,

Ned

I doubt anyone still visits this site, but I found this to be useful:


http://glmm.wikidot.com/random-effects-testing
