lme() and lmer() giving conflicting results

I have been working with some data that has some issues with repeated measurements, and in doing so I noticed very different behavior between lme() and lmer() on my test data. I want to know why.



The fake data set I created has height and weight measurements for 10 subjects, taken twice each. I set up the data so that between subjects there would be a positive relationship between height and weight, but a negative relationship between the repeated measures within each individual.



set.seed(21)
Height=1:10; Height=Height+runif(10,min=0,max=3) #First height measurement
Weight=1:10; Weight=Weight+runif(10,min=0,max=3) #First weight measurement

Height2=Height+runif(10,min=0,max=1) #second height measurement
Weight2=Weight-runif(10,min=0,max=1) #second weight measurement

Height=c(Height,Height2) #combine height and weight measurements
Weight=c(Weight,Weight2)

DF=data.frame(Height,Weight) #generate data frame
DF$ID=as.factor(rep(1:10,2)) #add subject ID
DF$Number=as.factor(c(rep(1,10),rep(2,10))) #differentiate between first and second measurement


Here is a plot of the data, with lines connecting the two measurements from each individual.
[plot of the data: the two measurements for each subject are joined by a line]
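
(Editorial note, not part of the original post: a minimal base-R sketch that reproduces this kind of plot from the DF built above.)

plot(DF$Weight, DF$Height, pch = 19, xlab = "Weight", ylab = "Height")
# join the first and second measurement of each subject (rows 1-10 pair with rows 11-20)
with(DF, segments(Weight[Number == 1], Height[Number == 1],
                  Weight[Number == 2], Height[Number == 2]))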



So I ran two models, one with lme() from the nlme package and one with lmer() from lme4. In both cases I regressed Height on Weight with a random intercept for ID to account for the repeated measurements on each individual.



library(nlme)
Mlme=lme(Height~Weight,random=~1|ID,data=DF)
library(lme4)
Mlmer=lmer(Height~Weight+(1|ID),data=DF)


These two models often (though not always, depending on the seed) generated completely different results. I have seen them produce slightly different variance estimates, compute different degrees of freedom, and so on, but here the coefficients are in opposite directions.



coef(Mlme)
#    (Intercept)    Weight
# 1   1.57102183 0.7477639
# 2  -0.08765784 0.7477639
# 3   3.33128509 0.7477639
# 4   1.09639883 0.7477639
# 5   4.08969282 0.7477639
# 6   4.48649982 0.7477639
# 7   1.37824171 0.7477639
# 8   2.54690995 0.7477639
# 9   4.43051687 0.7477639
# 10  4.04812243 0.7477639

coef(Mlmer)
#    (Intercept)    Weight
# 1     4.689264 -0.516824
# 2     5.427231 -0.516824
# 3     6.943274 -0.516824
# 4     7.832617 -0.516824
# 5    10.656164 -0.516824
# 6    12.256954 -0.516824
# 7    11.963619 -0.516824
# 8    13.304242 -0.516824
# 9    17.637284 -0.516824
# 10   18.883624 -0.516824
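
(Editorial aside, not part of the original question: both models are fitted by REML to the same fixed-effect design, so their restricted log-likelihoods should be directly comparable, which gives a quick check of which optimizer found the better optimum.)

logLik(Mlme)   # restricted log-likelihood of the nlme fit
logLik(Mlmer)  # restricted log-likelihood of the lme4 fit; higher is better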


To illustrate visually, here is the model fitted with lme():



[plot: the data with the lme() fitted line overlaid]



And here is the model fitted with lmer():



[plot: the data with the lmer() fitted line overlaid]



Why are these models diverging so much?

  • What a cool example. It's also a useful example of a case where fitting fixed versus random effects of individual gives completely different coefficient estimates for the weight term.
    – Jacob Socolar
    2 hours ago
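
(Editorial sketch, not part of the original thread: to see the contrast described in the comment above, compare the Weight slope from a model with a fixed effect for each individual against the slope from the random-intercept fit. The model name Mfix is introduced here purely for illustration.)

Mfix <- lm(Height ~ Weight + ID, data = DF)  # fixed effect of individual
coef(Mfix)["Weight"]    # within-subject slope (negative, given how the data were built)
fixef(Mlmer)["Weight"]  # slope from the random-intercept fit above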
















r mixed-model lme4-nlme

edited 4 hours ago by Ben Bolker
asked 7 hours ago by Cody K












1 Answer

tl;dr if you change the optimizer to "nloptwrap" I think it will avoid these issues (probably).



Congratulations, you've found one of the simplest examples of multiple optima in a statistical estimation problem! The parameter that lme4 uses internally (thus convenient for illustration) is the scaled standard deviation of the random effects, i.e. the among-group std dev divided by the residual std dev.



Extract these values for the original lme and lmer fits:



(sd1 <- sqrt(getVarCov(Mlme)[[1]])/sigma(Mlme))
## 2.332469
(sd2 <- getME(Mlmer,"theta")) ## 14.48926


Refit with another optimizer (this will probably be the default in the next release of lme4):



Mlmer2 <- update(Mlmer,
                 control=lmerControl(optimizer="nloptwrap"))
sd3 <- getME(Mlmer2,"theta")  ## 2.33247


Matches lme ... let's see what's going on. The deviance function (-2*log likelihood), or in this case the analogous REML-criterion function, for LMMs with a single random effect takes only one argument, because the fixed-effect parameters are profiled out; they can be computed automatically for a given value of the RE standard deviation.



ff <- as.function(Mlmer)
tvec <- seq(0,20,length=101)
Lvec <- sapply(tvec,ff)
png("CV38425.png")
par(bty="l",las=1)
plot(tvec,Lvec,type="l",
     ylab="REML criterion",
     xlab="scaled random effects standard deviation")
abline(v=1,lty=2)
points(sd1,ff(sd1),pch=16,col=1)
points(sd2,ff(sd2),pch=16,col=2)
points(sd3,ff(sd3),pch=1,col=4)
dev.off()


[plot: REML criterion as a function of the scaled random-effects standard deviation, with the lme, default lmer, and lmer+nloptwrap solutions marked]



I continued to obsess over this and ran the fits for random seeds from 1 to 1000, fitting lme, lmer, and lmer+nloptwrap for each case. Here are the counts out of 1000 where the method in a given row gets an answer that is at least 0.001 deviance units worse than the method in a given column ...



          lme.dev lmer.dev lmer2.dev
lme.dev         0       64        61
lmer.dev      369        0       326
lmer2.dev      43        3         0


In other words, (1) there is no method that always works best; (2) lmer with the default optimizer is worst (fails about 1/3 of the time); (3) lmer with "nloptwrap" is best (worse than lme 4% of the time, rarely worse than lmer).
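
(Editorial sketch, not part of the original answer: the seed comparison described above could look roughly like this. make_data() is an assumed helper that rebuilds DF exactly as in the question for a given seed, and error handling for occasional convergence failures is omitted.)

one_seed <- function(s) {
  DF <- make_data(s)
  m1 <- lme(Height ~ Weight, random = ~1 | ID, data = DF)
  m2 <- lmer(Height ~ Weight + (1 | ID), data = DF)
  m3 <- update(m2, control = lmerControl(optimizer = "nloptwrap"))
  ff <- as.function(m2)                        # profiled REML criterion, as above
  th1 <- sqrt(getVarCov(m1)[[1]]) / sigma(m1)  # lme fit on lme4's theta scale
  c(lme = ff(th1), lmer = ff(getME(m2, "theta")), lmer2 = ff(getME(m3, "theta")))
}
res <- t(sapply(1:1000, one_seed))
# count how often the row method is at least 0.001 units worse than the column method
worse <- outer(1:3, 1:3,
               Vectorize(function(i, j) sum(res[, i] > res[, j] + 0.001)))
dimnames(worse) <- list(colnames(res), colnames(res))
worse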



To be a little bit reassuring, I think this situation is likely to be worst for small, misspecified cases (as here, where the residual error is uniform rather than Normal). It would be interesting to explore this more systematically though ...






answered 5 hours ago by Ben Bolker