Sunday 28 August 2011

At last! Starting OpenMx

OpenMx is an R library, i.e. a set of functions that is downloaded and stored as part of your R setup. It is installed from the OpenMx website. At the time of writing, the latest version is automatically installed into R if you paste the following line into the R command line and press return.
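At the time of writing, the installer line given on the OpenMx website was along these lines (the exact URL is an assumption here and may have changed, so copy the current line from the website itself):

     source('http://openmx.psyc.virginia.edu/getOpenMx.R')   # downloads and installs OpenMx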
If you don't have a default CRAN repository, you will be prompted to pick one. Just pick a repository near you and press the OK button. If you do have a default CRAN repository, you won't see that panel.
A few lines of R output will scroll by and your OpenMx library will be installed.
To check all is well, type:
     require(OpenMx)
You should see the message: Loading required package: OpenMx
N.B. There have been some teething problems with the latest version of OpenMx; if the 'require' command gives an error message, you may need to reinstall R and try again. If all else fails, the very helpful people at the OpenMx forum will be able to assist if you describe your problem and send details of the console error message.
OpenMx model specification in R 
Before considering a script for optimisation, a brief introduction to the R syntax for model specification in OpenMx is needed. To access the OpenMx commands, type:
     require(OpenMx)
Now type:
     mxMatrix(type="Full",nrow=3,ncol=3, values=c(3,2,1,0,2,4,3,1,2),name="A")
The Console window automatically displays the matrix, A, that you have created. Unlike matrices created with the R matrix command, this matrix has properties as well as numerical values. These additional properties are listed on the Console, preceded by the symbol @. For this matrix they are boringly blank, but in later applications they will be explained further, and you will see how they can be assigned values. 'Full' is the default matrix type, in which values are specified for all rows and columns. We will encounter other matrix types later on.
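For comparison, the base R equivalent carries only the numerical values (R fills matrices column by column by default):

     matrix(c(3,2,1,0,2,4,3,1,2), nrow=3, ncol=3)

Printing this shows a plain 3 x 3 matrix, with none of the extra properties that an mxMatrix has.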

Models in OpenMx
In OpenMx, you first specify a model, and then run it. The model specifies a set of matrices that will act as input to the model, and computational operations that are to be performed, as well as other constraints.  Here is a very simple model that specifies two matrices and then adds them together. This is a pretty pointless thing to do in OpenMx, as it is much simpler to do it with regular R commands. However, it is useful for introducing you to the format of OpenMx models and specifications.

#                                                    Add_matrices
#   DVM Bishop, 7th March 2010
#  Toy program to illustrate features of OpenMx
# first of all specify the model; it does not run at this stage
myeasymodel=mxModel("add up",
 mxMatrix(type="Full",nrow=3,ncol=3, values=c(3,2,1,0,2,4,3,1,2),name="A"),
 mxMatrix(type="Full",nrow=3,ncol=3, values=c(0,0,1,1,2,0,3,0,0),name="B"),
 mxAlgebra(A+B, name="mysum")
# The name 'add up' is used in output from the model
)  # Final close bracket denotes end of model specification

myrun=mxRun(myeasymodel)  # runs the model; results of the model are saved to 'myrun'
print(myrun@output$algebras)   # shows matrix mysum
Note the comma after each step of the model specification except the final one: the comma indicates that more model commands are to come. Once the model specification is complete, a final close bracket is used.

The final print statement indicates that just part of the model output is shown, the numerical result of the computation, which is referred to as algebras. If you type myrun on the Console, you will see the full model output. Note that once again there are lots of lines beginning with the @ symbol, most of which are NULL. These are all properties of the model output. The complex structure of models and their outputs can be confusing, but will become clearer as we work through more examples.

This example is just to give you a feel for the format of an mxModel; in practice, you would not use OpenMx for simple matrix manipulation: rather, it is used for optimization. The next script shows OpenMx in this more usual role. It performs the same task as the Likelihood_3var script in the previous section: estimating the correlations between three variables from the attitude dataset. But instead of laboriously computing the log likelihood for each possible value to find a minimum, it uses the powerful optimisation methods of OpenMx to find a solution. But first, a brief digression on packages.

Finding and installing packages in R
In general, we will be working in future with models of covariances rather than correlations, but so we can compare outputs with the Likelihood_3var script, we will display results as correlations. To simplify our task, we will load another package that has a command for converting a covariance matrix into a correlation matrix.
I found a relevant package simply by Googling "convert covariance to correlation in R". This led me to the bayesm package. Like OpenMx, this does not come as a default package in R, so you have to install it. But this is made very easy. At the Console, type:
     install.packages()
You may be asked for your CRAN mirror again, but by now you should not be fazed by that.
You then see a long list of packages that can be installed. Scroll down to bayesm and select it, and it will be added to your R libraries. Check all is OK with:
     require(bayesm)

Right! Now we're ready to run the script, Find3corr_OpenMx
#  Find3corr_OpenMx
#  by DVM Bishop, 7th March 2010
#  based on script on p 13 of OpenMx Users Guide
require(OpenMx) # loads OpenMx package
require(bayesm) # You'll need to have installed this package, see text
data(attitude)  # read in one of R's prepackaged datasets
mydata=attitude[,1:3]  #select the first 3 columns (ratings, complaints, privileges)
# set up a model which specifies the observed data, and
# matrices for computed expected values, and optimization method

mytriCovModel <- mxModel("triCov",
mxMatrix( type="Full", nrow=1, ncol=3, free=TRUE, values=60, name="expMean" ),
# 'values' gives starting values for estimates: select a number close to true means
# N.B. if a single starting value is given, it will be used for all variables. 
# If you want to give different starting values for different variables,
# then values can be specified as a vector, e.g. c(60,55,50)

mxMatrix( type="Lower", nrow=3, ncol=3, free=TRUE, values=100, name="Chol" ),
# see text for explanation of 'Chol'

mxAlgebra( expression=Chol %*% t(Chol), name="expCov" ),
# Chol is multiplied by its transpose to give a square matrix: see text
mxData( observed=mydata, type="raw" ), 
# 'type' can be raw or cov (if covariance matrix as input)
# if raw data input, then covariance matrix is computed by OpenMx
mxFIMLObjective( covariance="expCov", means="expMean", dimnames=colnames(mydata) )
# This command specifies we will use the FIML method to find best fit between observed
#  and expected covariances and means for the variables in mydata
# dimnames is required to allow mapping of observed data to expected covariance matrix
) # end of model specification;
# the opening bracket after mxModel is matched by this close bracket

triCovFit <- mxRun(mytriCovModel)       # perform optimisation with model as set up above
my_emean=triCovFit[['expMean']]@values    # see text for explanation of '@values'
my_ecov=triCovFit[['expCov']]@result      # expected covariances computed by mxAlgebra
print("observed means")
print(colMeans(mydata))
print("expected means under optimal fit")
print(my_emean)
print("observed covariances")
print(cov(mydata))
print("expected covariances under optimal fit")
print(my_ecov)
LL <- mxEval(objective,triCovFit)
print("-log likelihood")
print(LL)
print("observed correlations")
print(cor(mydata))
print("expected correlations under optimal fit")
ecor=matrix(nmat(my_ecov),ncol=3) #nmat function from bayesm
print(ecor)

This script computes the likelihood (based on the similarity between observed and expected covariances) starting with the expected covariances set to the start values provided. These are then tweaked to find values that give a higher likelihood (i.e. a lower value of -2LL). In this way the process converges on the best fit between expected and observed values. Different methods can be used to compute the fit between observed and expected values; in this script the likelihood is computed by full information maximum likelihood (FIML), which works from the raw data rather than from a summary covariance matrix.

The key point to understand about OpenMx is that it always involves setting up a model.
This can have any name you like, except that it must not contain any of  the characters "+-!~?:*/^%<>=&|$"; here we have called it 'mytriCovModel'. This name is assigned to the model in the same way as you might assign a name to a matrix. The key difference is that the model contains a set of parameters. If you are familiar with other programming languages, it may help to think of a model as analogous to a structure.
There is great flexibility about what a model can contain; indeed, a model can be built out of other (sub)models. Another way to think of models is as nested Russian dolls: each model can contain other models. The Russian doll analogy is not entirely apt, as it implies each model can contain only one other model, whereas in fact numerous submodels are possible, as shown in the figure below, where a white outer box denotes the main model, which contains two pale grey submodels, each of which contains three dark grey submodels. The contents of the boxes need not be full models – they can also be components of models: matrices, algebra statements, data specifications and optimization statements.
Figure 3: Models (dark squares) within models (grey squares) within a model (white square)
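To see how this nesting looks in practice, here is a minimal sketch (the model names and matrices here are invented for illustration): submodels are simply passed as arguments to the enclosing mxModel.

     require(OpenMx)
     sub1 = mxModel("sub1", mxMatrix(type="Full", nrow=2, ncol=2, values=1, name="m1"))
     sub2 = mxModel("sub2", mxMatrix(type="Full", nrow=2, ncol=2, values=2, name="m2"))
     mainmodel = mxModel("main", sub1, sub2)   # a main model containing two submodels
     mainmodel@submodels                       # lists the two submodels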
The mxRun function will run a model through the optimizer. The return value of this function is a model of the same structure, but with all the free parameters in the cells of its matrices set to their final estimated values.

The key information in the model is:
  • A matrix of observed data; this is specified using the command mxData
  • Matrices which specify how the expected covariances are to be derived; these are defined with the command mxMatrix. The optimization procedure will start by estimating –LL for the start values provided by the user – these are given as 'values'. The other point to note about an mxMatrix is that we can distinguish elements that we want the program to estimate (where free=TRUE) from those we want to keep fixed at the start value. We will encounter cases where free=FALSE later on. For the present, note that we want to estimate the whole covariance matrix, and so free=TRUE for all elements of the expected covariance matrix.
  • Algebraic expressions which specify operations to be carried out on matrices, defined with mxAlgebra, and explained for this script more fully below
  • An expression specifying the optimization method to be used, here defined as mxFIMLObjective; the details of this are not visible to the user, but it specifies a procedure for comparing likelihoods for obtained and expected values of the covariances in order to minimize the –LL function. All OpenMx functions which are termed 'Objective' are functions that are used to achieve optimization.

Cholesky decomposition 
The mxAlgebra section of this script illustrates one important method that is commonly used in optimization routines: the Cholesky decomposition. As we saw in the previous script, if the elements of a square covariance matrix are estimated freely, there is a danger that the resulting matrix will not be suitable for the matrix operations used in estimating likelihood. In formal terms, it is not 'positive definite' (meaning that one or more of its eigenvalues is zero or negative). This can be avoided if, instead of estimating the expected covariance matrix directly, we estimate a lower triangular matrix which, when multiplied by its transpose, yields the matrix of expected covariances. This is known as a Cholesky decomposition. You may encounter the Cholesky decomposition in structural equation models, where it can be used to perform a process analogous to stepwise multiple regression, but its more general use in modelling is simply as a computational fix to constrain the kind of matrix that can be estimated.
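You can check the logic of this at the console with base R's chol() function. Note that chol() returns an upper triangular factor, so we transpose it to get the lower triangular matrix discussed here (the numbers are just an illustrative positive definite covariance matrix):

     mycov = matrix(c(4,2,1,2,3,1,1,1,2), nrow=3)   # a symmetric, positive definite matrix
     myL = t(chol(mycov))        # lower triangular Cholesky factor
     myL %*% t(myL)              # multiplying by its transpose recovers mycov
     eigen(mycov)$values         # all eigenvalues positive: positive definite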

Note: in these scripts so far, you have encountered statements such as free=TRUE.
This can be abbreviated to free=T (or free=F, for false). We avoid doing that because it can confuse beginners, but you will encounter scripts with this abbreviation. You should avoid using T or F as variable names. One reason why the scripts here tend to use variable names starting with 'my' is to help distinguish user-defined variables from R functions and operators.

A second point to note is that the second mxMatrix command now specifies that the type is "Lower". You can probably work out what this means, but you may be confused by the fact that 'values' specifies a single starting value for the matrix. If this, or any other feature of a script, leaves you uncertain, you can always experiment to clarify what is happening. In this case, you can just type at the console:
   mxMatrix( type="Lower", nrow=3, ncol=3, free=TRUE, values=100, name="Chol" )
and you will see full information about the matrix.
Compare this with the command:
   mxMatrix( type="Lower", nrow=3, ncol=3, free=TRUE, values=c(100,110,90,80,70,60), name="Chol1" )

You can refer to different parts of a model by using the name of a part of the model, and the @ operator to denote the part you want. As an illustration, let us make a toy model that just contains two matrices, one Full and one Lower, with the command:
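The command was along these lines (the matrix sizes and start values here are illustrative assumptions; the name mymat2 for the second matrix matches the text that follows):

     mytoy=mxModel("toy",
      mxMatrix(type="Full", nrow=3, ncol=3, values=1, name="mymat1"),
      mxMatrix(type="Lower", nrow=3, ncol=3, values=2, name="mymat2")
     )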
If you now type:
     mytoy
you can see the properties of the model, which has a number of blank fields, except for '@matrices'.
To inspect the matrices, you type:
     mytoy@matrices
You now see the detailed information for both matrices. Using the Russian dolls analogy, 'mytoy' is the outermost doll, and mytoy@matrices is the next doll inside.
Suppose you want to inspect just mymat2; then you can either type:
     mytoy@matrices[[2]]
to specify that you want to see the 2nd matrix, or you can reference it by name, enclosed in double square brackets:
     mytoy@matrices[['mymat2']]
You can also specify that you want to see only part of the information for mymat2, by using the @ operator, e.g. to see just the start values:
     mytoy@matrices[['mymat2']]@values
