Comments on "Structural equation modeling in OpenMx: RAM path method" (Twin methods in OpenMx, by deevybee)

cheshmakh (2013-06-03):
I found this to be a really thorough and intuitive explanation of an OpenMx example. Thank you for posting this.

deevybee (2012-09-04):
Sorry for the delay in replying. As I mentioned at the start of this blog, I'm not an expert in either structural equation modeling or OpenMx, and the problem you describe foxes me too.

I did what I usually do and Googled "'standard errors' NaN OpenMx", which ultimately led me to some discussion of the NaN standard-error problem on the OpenMx site. It wasn't terribly illuminating, but the non-invertible Hessian was mentioned here:
http://openmx.psyc.virginia.edu/thread/1494
and at the bottom of that page there was this suggestion:
"If it is caused by a negative diagonal element in the inverse Hessian, then that might mean that your model hasn't properly converged. As this model runs very quickly, try a few different sets of starting values."

I fiddled around with the script and found I could get real values for the standard errors (though large ones) by altering the start values of the last four free paths to .25 rather than 1. This isn't all that satisfactory, I'm afraid, but it does suggest that the problem lies in the convergence of the model.
So I don't *think* it's a problem of an underdefined model, but you could ask at the OpenMx forum, where you may get a better reply.

Mike (2012-08-28):
So this is caused by the inverse of the Hessian matrix (which is the covariance matrix of the parameter estimates) having negative values on the diagonal.

Is this generally due to something being underdefined in the design? Is it a problem that the strength of the correlation between W and S can be modified either by changing a or by changing b? In some ways the model isn't defined well enough?

In the tutorial you show that there are 10 observed values (in the covariance matrix) and 8 unknown parameters, suggesting 2 degrees of freedom. I'm a bit worried that the 0s in the covariance matrix don't help much, if that makes sense.

Thinking about this a bit more: given that this model can be split into two smaller models, surely these should be possible to estimate too?
But when I count up the degrees of freedom for just the V, W, S sub-graph...

(The model looks like: V-(a)->W, V-(b)->S, V-(1)->V, W-(e)->W, S-(f)->S.)

This gives us 4 unknowns (a, b, e, f), but only three values in the covariance matrix (a^2+e, b^2+f and ab)...

...doesn't this mean the model is "secretly" underdefined?

Just trying to figure this all out. Hope my question makes sense!

-- Mike (lionfishy at gmail dot com)

[A comment posted on 2012-08-28 was removed by the author.]

Mike, University of Edinburgh (2012-08-27):
Thanks for this really essential, excellent tutorial. I'm working through it now, and hope to use it with fMRI time-course data.

Anyway, I've got a bit stuck on getting the PathCov_2factor script working. I've generated the data with the following script (note that mvrnorm needs the MASS package loaded):

library(MASS)  # for mvrnorm
# set.seed(0)
sigma <- diag(c(1, 1, 1, 1))
sigma[1, 2] <- 0.6
sigma[2, 1] <- 0.6
sigma[3, 4] <- 0.5
sigma[4, 3] <- 0.5
sigma
mydata <- mvrnorm(80, c(0, 0, 0, 0), sigma)
write.table(mydata, '~/sem_openmx/my4var')

I initially wrote out your script by hand, but have now copied and pasted the whole script into R. Either way, I find I have problems with the estimated standard errors, which all appear as NaN. With much larger sample sizes (e.g.
8000), some of these become numbers, but they are large in comparison to the estimates.

Other parameters (such as the -2LL) all seem reasonable:

free parameters:
   name matrix row col  Estimate Std.Error lbound ubound
1     a      A   W   V 0.7692258       NaN
2     b      A   S   V 0.8187957       NaN
3     c      A   B   N 0.5022223       NaN
4     d      A   P   N 0.5136759       NaN
5    e1      S   W   W 0.2756410       NaN
6    e2      S   S   S 0.3729581       NaN
7    e3      S   B   B 0.3859098       NaN
8    e4      S   P   P 0.6766048       NaN

observed statistics: 10
estimated parameters: 8
degrees of freedom: 2
-2 log likelihood: 212.9178
saturated -2 log likelihood: 209.5215
number of observations: 80
chi-square: 3.396338
p: 0.1830183

Any idea what might be the problem?

Thanks again for the blog, it's really helped :)

Mike [University of Edinburgh]
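Mike's counting argument for the V, W, S sub-graph (four unknowns a, b, e, f but only three distinct implied moments: a^2+e, b^2+f and ab) can be checked numerically. The sketch below is an illustration in Python/NumPy rather than the thread's R, with made-up parameter values and Var(V) fixed to 1; it is not OpenMx's own fit machinery. It exhibits two distinct parameter sets with identical implied covariance matrices, and then shows the consequence deevybee's reply points at: the Hessian of the -2 log-likelihood has a (near-)zero eigenvalue along the flat direction, so inverting it to obtain standard errors misbehaves.

```python
import numpy as np

def implied_cov(theta):
    """Implied covariance of (W, S) for the sub-graph
    V-(a)->W, V-(b)->S with Var(V) = 1 and residual variances e, f."""
    a, b, e, f = theta
    return np.array([[a * a + e, a * b],
                     [a * b,     b * b + f]])

# Two distinct parameter sets that imply the same covariance matrix:
t1 = np.array([0.8, 0.6, 0.36, 0.64])
t2 = np.array([0.6, 0.8, 0.64, 0.36])
print(np.allclose(implied_cov(t1), implied_cov(t2)))  # True

# Gaussian -2 log-likelihood (up to a constant), with the sample
# covariance S chosen to equal the implied covariance at t1, so t1
# is a minimizer of the fit function.
S = implied_cov(t1)
n = 80

def m2ll(theta):
    sigma = implied_cov(theta)
    _sign, logdet = np.linalg.slogdet(sigma)
    return n * (logdet + np.trace(np.linalg.solve(sigma, S)))

def num_hessian(fun, x, h=1e-4):
    """Central finite-difference Hessian of fun at x."""
    k = len(x)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (fun(xpp) - fun(xpm) - fun(xmp) + fun(xmm)) / (4 * h * h)
    return H

H = num_hessian(m2ll, t1)
eigs = np.linalg.eigvalsh(H)
# The smallest eigenvalue is (numerically) zero relative to the largest:
# the likelihood is flat along the curve of equivalent parameter sets.
print(eigs.min(), eigs.max())
```

Because -2LL depends on (a, b, e, f) only through the three implied moments, a whole curve of parameter values fits equally well, the Hessian is singular at the optimum, and its "inverse" yields meaningless (NaN or huge) standard errors: exactly the symptom in the output above.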