This LaTeX document is available as postscript or as Adobe PDF.
Assumptions of the Model
A simple animal model will be used in this lesson. The equation of the model is
\[ y_i = \mu + a_i + e_i, \]
where $y_i$ is the phenotypic record of animal $i$, $\mu$ is the overall mean, $a_i$ is the additive genetic (breeding) value of animal $i$, and $e_i$ is a random residual.
Genotypic Values of Animals
One way to understand a model is to generate some fictitious data that correspond to the model exactly. Below are pedigrees of 16 animals. The first four were randomly chosen from a large population and subsequently mated at random.
The Relationship Matrix
The matrix of additive genetic relationships among the sixteen animals, $\mathbf{A}$ ($16 \times 16$), is given below:
One method to generate additive genetic values of the 16 animals would be to partition $\mathbf{A}$ into the product of a lower triangular matrix times its transpose, $\mathbf{A} = \mathbf{L}\mathbf{L}'$, obtain a vector of 16 pseudo-random normal deviates, and pre-multiply this vector by the lower triangular matrix.
The Cholesky decomposition is the procedure to compute the lower triangular matrix $\mathbf{L}$.
In SAS IML, write HALF($\mathbf{A}$). The theory is that if the vector of pseudo-random normal deviates, say $\mathbf{v}$, has mean $\mathbf{0}$ and variance-covariance matrix $\mathbf{I}$, then $\mathbf{a} = \mathbf{L}\mathbf{v}\sigma_a$ has variance-covariance matrix $\mathbf{L}\mathbf{L}'\sigma^2_a = \mathbf{A}\sigma^2_a$.
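A minimal sketch of this method with NumPy; the 16-animal $\mathbf{A}$ of the notes is not reproduced here, so a small hypothetical relationship matrix is used instead.

```python
import numpy as np

# Hypothetical 4-animal relationship matrix (animals 1 and 2 unrelated,
# 3 their progeny, 4 a progeny of 3 and an unrelated mate); any valid A
# is symmetric positive definite.
A = np.array([[1.00, 0.00, 0.50, 0.25],
              [0.00, 1.00, 0.50, 0.25],
              [0.50, 0.50, 1.00, 0.50],
              [0.25, 0.25, 0.50, 1.00]])

sigma_a = 6.0                     # additive genetic SD, sqrt(36)
L = np.linalg.cholesky(A)         # lower triangular, A = L L'

rng = np.random.default_rng(20240101)
v = rng.standard_normal(4)        # pseudo-random normal deviates, Var(v) = I
a = L @ v * sigma_a               # Var(a) = L L' sigma_a^2 = A sigma_a^2
```

Pre-multiplying the deviates by $\mathbf{L}$ is what imposes the covariance structure $\mathbf{A}\sigma^2_a$ on the simulated breeding values.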
By Following Pedigrees
A second and slightly more efficient method of generating true breeding values is to follow the pedigree file chronologically. Sort the pedigrees so that the oldest animals appear before their progeny (as already given in the above table). Note that there are base population animals with unknown parents and animals with known parents.
Base population animals, by definition, are unrelated to each other and are non-inbred. Let $\mathbf{a}_b$ represent the vector of additive genetic values of base population animals, in this case animals 1 to 4.
The other animals, 5 to 16, with known parents, are represented by $\mathbf{a}_o$. The additive genetic value of any animal in this vector can be written as
\[ a_i = \tfrac{1}{2}(a_s + a_d) + m_i, \]
where $a_s$ and $a_d$ are the additive genetic values of the sire and dam, respectively, and $m_i$ is the Mendelian sampling effect.
Assume heritability is 0.36, and that the variance of phenotypic records is 100. Then $\sigma^2_a = 0.36 \times 100 = 36$ and $\sigma^2_e = 100 - 36 = 64$.
A base population animal's additive genetic value is created by obtaining a pseudo-random normal deviate, RND, and multiplying it by $\sigma_a = 6$. For animals 1 to 4,
\[ a_i = \mathrm{RND}_i \times 6. \]
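A minimal sketch of this pedigree-following method; the pedigree below is a small hypothetical stand-in (the actual 16-animal pedigree of the notes is not reproduced), and parental inbreeding is ignored in the Mendelian sampling variance.

```python
import numpy as np

# (animal, sire, dam); 0 = unknown parent (base population animal).
# Sorted so parents always precede their progeny.
pedigree = [(1, 0, 0), (2, 0, 0), (3, 0, 0), (4, 0, 0),
            (5, 1, 2), (6, 3, 4), (7, 1, 4), (8, 5, 6)]

sigma_a = 6.0                       # sqrt(36); h2 = 0.36, phenotypic var = 100
rng = np.random.default_rng(7)
tbv = {}                            # true breeding values

for animal, sire, dam in pedigree:
    if sire == 0 and dam == 0:
        # base animal: a_i = RND * sigma_a
        tbv[animal] = rng.standard_normal() * sigma_a
    else:
        # a_i = (a_s + a_d)/2 + m_i, with Var(m_i) = 0.5 * sigma_a^2
        # (ignoring parental inbreeding for this sketch)
        m_i = rng.standard_normal() * np.sqrt(0.5) * sigma_a
        tbv[animal] = 0.5 * (tbv[sire] + tbv[dam]) + m_i
```

Because parents precede progeny in the sorted pedigree, each animal's parental breeding values already exist when its own value is generated.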
Residual and Phenotypic Values
To create phenotypic records, the equation of the model must be specified. Assume that each residual is generated as a pseudo-random normal deviate multiplied by $\sigma_e = 8$. The variance of this sample of $e_i$ values was 109.83, which is greater than the population value of 64.
However, such differences can be expected in small samples. In fact, the variance of the estimate of the sample variance is $2\sigma^4_e/(n-1)$, which for $\sigma^2_e = 64$ and $n = 16$ gives $2(64)^2/15 = 546.13$, or a standard deviation of about 23.4.
Assume a single record per animal; then the selection index for animal $i$ is
\[ \hat{a}_i = h^2\,(y_i - \bar{y}) = 0.36\,(y_i - \bar{y}). \]
Note that base animals were not evaluated by this index because they did not have records. Also, relationships among animals were ignored. Animal 7, for example, had a record and 3 progeny, but the progeny information was not included in the evaluation. The MSE, mean squared error, is the average squared difference between the index and TBV. The MSE criterion is often used to compare different types of estimators. Another criterion is the variance of the estimated residuals (last column). The method giving the highest correlation between estimated and true values, or the smallest MSE, or the lowest variance of estimated residuals should be the preferred method.
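The index and the comparison criteria described above can be computed as follows; the records here are simulated stand-ins, not the data of the notes.

```python
import numpy as np

h2 = 0.36
rng = np.random.default_rng(11)

# Simulated true breeding values and single records per animal
tbv = rng.standard_normal(12) * 6.0               # sigma_a = 6
y = 50.0 + tbv + rng.standard_normal(12) * 8.0    # sigma_e = 8

# Selection index for a single record: a_hat_i = h^2 * (y_i - ybar)
a_hat = h2 * (y - y.mean())

# Criteria used in the text to compare evaluation methods
corr = np.corrcoef(a_hat, tbv)[0, 1]      # correlation with TBV
mse = np.mean((a_hat - tbv) ** 2)         # mean squared error
e_hat = y - y.mean() - a_hat              # estimated residuals
var_e_hat = e_hat.var(ddof=1)             # variance of estimated residuals
```

A higher correlation, a smaller MSE, or a smaller variance of estimated residuals would each favour one method over another, though, as the text notes, these criteria need not agree.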
The selection index method, although a simple first attempt, comes close to the true breeding values. The following methods are attempts to improve upon the selection index.

Generalized Least Squares
Let a general model be written as
\[ \mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{e}, \]
with GLS estimator $\hat{\mathbf{b}} = (\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-}\mathbf{X}'\mathbf{V}^{-1}\mathbf{y}$, where $\mathbf{V} = \mathrm{Var}(\mathbf{y})$. For the animal model, the animal effects are treated as fixed, so that $\mathbf{X} = [\,\mathbf{1} \;\; \mathbf{I}\,]$ and $\mathbf{b}' = (\,\mu \;\; \mathbf{a}'\,)$.
Thus, GLS in this situation gives estimated breeding values that are the observations deviated from the overall mean, $\hat{a}_i = y_i - \hat{\mu}$. The correlation of $\hat{\mathbf{a}}$ with TBV is 0.6575, and the rankings of animals by $\hat{a}_i$ are identical to the results from the selection index. Thus, in terms of genetic change, exactly the same animals would be selected for breeding. Note that the MSE for the GLS estimator is much larger than for the selection index estimator. Thus, MSE is not a perfect criterion for distinguishing between two procedures.
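A quick numerical check of these statements, using illustrative records rather than the notes' data: with animals fitted as fixed effects, the estimates are deviations from the mean and the estimated residuals vanish.

```python
import numpy as np

y = np.array([55.0, 48.0, 62.0, 50.0])   # illustrative single records
mu_hat = y.mean()
a_hat = y - mu_hat                        # GLS "breeding values"
e_hat = y - mu_hat - a_hat                # identically zero: the error has
                                          # been absorbed into a_hat
```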
The estimated residuals in this case are all equal to zero, and therefore, the variance of the estimated residuals is also zero. However, this does not imply that GLS is the best method, because the error is actually becoming part of the estimated breeding value.

Regressed Least Squares
This method consists of 'shrinking' the GLS estimator, $\hat{\mathbf{a}} = \mathbf{y} - \hat{\mu}\mathbf{1}$, towards its expected value. If animals were unrelated, then the shrunken estimator would be
\[ \hat{a}_i = h^2\,(y_i - \hat{\mu}), \]
which is identical to the selection index estimator. However, because animals are indeed related, the procedure is formally written as
\[ \hat{\mathbf{a}} = \mathbf{A}\sigma^2_a\,(\mathbf{A}\sigma^2_a + \mathbf{I}\sigma^2_e)^{-1}(\mathbf{y} - \hat{\mu}\mathbf{1}). \]
For the animal model example, the results were as follows.
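A sketch of the shrinkage computation, assuming the regressed estimator has the form $\mathbf{A}\sigma^2_a(\mathbf{A}\sigma^2_a + \mathbf{I}\sigma^2_e)^{-1}(\mathbf{y} - \hat{\mu}\mathbf{1})$; the relationship matrix and records below are illustrative, not the notes' data.

```python
import numpy as np

# Illustrative 4-animal relationship matrix and single records
A = np.array([[1.00, 0.00, 0.50, 0.25],
              [0.00, 1.00, 0.50, 0.25],
              [0.50, 0.50, 1.00, 0.50],
              [0.25, 0.25, 0.50, 1.00]])
va, ve = 36.0, 64.0                       # sigma_a^2, sigma_e^2
y = np.array([55.0, 48.0, 62.0, 50.0])

dev = y - y.mean()                        # GLS estimator: deviations from mean
V = A * va + np.eye(4) * ve               # Var(y)
a_rls = (A * va) @ np.linalg.solve(V, dev)  # shrunken (regressed) estimator
```

With $\mathbf{A} = \mathbf{I}$ this reduces to $36/100 = 0.36$ times the deviations, i.e. the selection index $h^2(y_i - \bar{y})$.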
The RLS estimator gave a higher correlation than the selection index, but the MSE was also greater. The variance of estimated residuals was smaller than for the selection index.

Best Linear Unbiased Prediction
Prediction refers to the estimation of the realized value of a random
variable (from data) that has been sampled from a population with a
known variance-covariance structure.
The general mixed model is written as
\[ \mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{u} + \mathbf{e}. \]
The elements of $\mathbf{b}$ are considered to be fixed effects, while the elements of $\mathbf{u}$ are random factors from populations of random effects with known variance-covariance structures. Both $\mathbf{b}$ and $\mathbf{u}$ may be partitioned into one or more factors depending on the situation.
The expectations of the random variables are
\[ E(\mathbf{u}) = \mathbf{0}, \qquad E(\mathbf{e}) = \mathbf{0}, \qquad E(\mathbf{y}) = \mathbf{X}\mathbf{b}, \]
with $\mathrm{Var}(\mathbf{u}) = \mathbf{G}$, $\mathrm{Var}(\mathbf{e}) = \mathbf{R}$, and $\mathrm{Var}(\mathbf{y}) = \mathbf{Z}\mathbf{G}\mathbf{Z}' + \mathbf{R}$.
The prediction problem involves both $\mathbf{b}$ and $\mathbf{u}$. A few definitions are needed.
The derivation of BLUP begins by equating the expectations of the predictor, say $\mathbf{L}'\mathbf{y}$, and the predictand, $\mathbf{K}'\mathbf{b} + \mathbf{M}'\mathbf{u}$, to determine what needs to be true in order for unbiasedness to hold. That is,
\[ E(\mathbf{L}'\mathbf{y}) = \mathbf{L}'\mathbf{X}\mathbf{b} \quad \text{and} \quad E(\mathbf{K}'\mathbf{b} + \mathbf{M}'\mathbf{u}) = \mathbf{K}'\mathbf{b}, \]
so that unbiasedness requires $\mathbf{L}'\mathbf{X} = \mathbf{K}'$ for all possible $\mathbf{b}$.
Minimization of the diagonals of the variance-covariance matrix of prediction errors, $\mathrm{Var}(\mathbf{L}'\mathbf{y} - \mathbf{K}'\mathbf{b} - \mathbf{M}'\mathbf{u})$, subject to this restriction, is achieved by differentiating with respect to the unknowns and equating the partial derivatives to null matrices.
Derivation of MME
Take the first and second partial derivatives of $F$, the variance-covariance matrix of prediction errors plus the LaGrange multiplier to force unbiasedness, equate them to null matrices, and write the resulting equations in matrix notation as
\[ \left( \begin{array}{cc} \mathbf{X}'\mathbf{R}^{-1}\mathbf{X} & \mathbf{X}'\mathbf{R}^{-1}\mathbf{Z} \\ \mathbf{Z}'\mathbf{R}^{-1}\mathbf{X} & \mathbf{Z}'\mathbf{R}^{-1}\mathbf{Z} + \mathbf{G}^{-1} \end{array} \right) \left( \begin{array}{c} \hat{\mathbf{b}} \\ \hat{\mathbf{u}} \end{array} \right) = \left( \begin{array}{c} \mathbf{X}'\mathbf{R}^{-1}\mathbf{y} \\ \mathbf{Z}'\mathbf{R}^{-1}\mathbf{y} \end{array} \right). \]
These final equations are known as Henderson's Mixed Model Equations or HMME. Notice that these equations are of order equal to the number of elements in $\mathbf{b}$ and $\mathbf{u}$, which is usually less than the number of elements in $\mathbf{y}$, and therefore, they are more practical to obtain than the original BLUP formulation. Also, these equations require the inverse of $\mathbf{R}$ rather than $\mathbf{V}$, both of which are of the same order, but $\mathbf{R}$ is usually diagonal or has a simpler structure than $\mathbf{V}$. Also, the inverse of $\mathbf{G}$ is needed, which is of order equal to the number of elements in $\mathbf{u}$. The ability to compute the inverse of $\mathbf{G}$ depends on the model and the definition of $\mathbf{u}$.
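The practical appeal of HMME can be sketched by forming and solving the equations directly; the helper below and the tiny data set are illustrative.

```python
import numpy as np

def solve_hmme(X, Z, y, Ginv, Rinv):
    """Solve Henderson's Mixed Model Equations for y = Xb + Zu + e."""
    # Coefficient matrix:
    # [ X'R^-1 X   X'R^-1 Z        ]
    # [ Z'R^-1 X   Z'R^-1 Z + G^-1 ]
    C = np.block([[X.T @ Rinv @ X, X.T @ Rinv @ Z],
                  [Z.T @ Rinv @ X, Z.T @ Rinv @ Z + Ginv]])
    rhs = np.concatenate([X.T @ Rinv @ y, Z.T @ Rinv @ y])
    sol = np.linalg.solve(C, rhs)
    nb = X.shape[1]
    return sol[:nb], sol[nb:]          # b_hat (BLUE), u_hat (BLUP)

# Tiny check: 3 records, one overall mean, u ~ (0, 2I), e ~ (0, I)
y = np.array([10.0, 12.0, 11.0])
X = np.ones((3, 1))
Z = np.eye(3)
b_hat, u_hat = solve_hmme(X, Z, y, np.eye(3) / 2.0, np.eye(3))
```

For this check the HMME solution agrees with the direct formulas: $\hat{b}$ is the mean of the records and $\hat{\mathbf{u}} = \mathbf{G}\mathbf{Z}'\mathbf{V}^{-1}(\mathbf{y} - \mathbf{X}\hat{\mathbf{b}})$.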
The variances of the predictors and prediction errors can be expressed in terms of a generalized inverse of the coefficient matrix of HMME. Recall that a generalized inverse of the coefficient matrix can be partitioned as
\[ \mathbf{C} = \left( \begin{array}{cc} \mathbf{C}_{xx} & \mathbf{C}_{xz} \\ \mathbf{C}_{zx} & \mathbf{C}_{zz} \end{array} \right). \]
Without loss of generality, assume that the coefficient matrix of HMME is full rank (to simplify the presentation of results); then
\[ \mathrm{Var}(\hat{\mathbf{b}}) = \mathbf{C}_{xx}, \qquad \mathrm{Var}(\hat{\mathbf{u}} - \mathbf{u}) = \mathbf{C}_{zz}, \qquad \mathrm{Var}(\hat{\mathbf{u}}) = \mathbf{G} - \mathbf{C}_{zz}. \]
As the number of observations in the analysis increases, two things can be noted from these results: the prediction error variances decrease, and $\mathrm{Var}(\hat{\mathbf{u}})$ approaches $\mathrm{Var}(\mathbf{u}) = \mathbf{G}$.
Application to Example Data
For the simple animal model, HMME can be formed by noting that $\mathbf{X} = \mathbf{1}$, $\mathbf{Z} = \mathbf{I}$, $\mathbf{G} = \mathbf{A}\sigma^2_a$, and $\mathbf{R} = \mathbf{I}\sigma^2_e$, so that after multiplying through by $\sigma^2_e$, $\mathbf{G}^{-1}$ enters the equations as $\mathbf{A}^{-1}k$ with $k = \sigma^2_e/\sigma^2_a = 64/36$.
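For the single-record animal model, the equations specialize as sketched below; a hypothetical 4-animal $\mathbf{A}$ and illustrative records are used, since the notes' 16-animal data are not reproduced here.

```python
import numpy as np

A = np.array([[1.00, 0.00, 0.50, 0.25],
              [0.00, 1.00, 0.50, 0.25],
              [0.50, 0.50, 1.00, 0.50],
              [0.25, 0.25, 0.50, 1.00]])
y = np.array([55.0, 48.0, 62.0, 50.0])   # one record per animal
n = len(y)
k = 64.0 / 36.0                          # sigma_e^2 / sigma_a^2 for h2 = 0.36

# X = 1 (overall mean), Z = I; multiplying through by sigma_e^2
# leaves G^-1 as A^-1 * k in the coefficient matrix
X = np.ones((n, 1))
Ainv = np.linalg.inv(A)
C = np.block([[X.T @ X, X.T],
              [X,       np.eye(n) + Ainv * k]])
rhs = np.concatenate([X.T @ y, y])
sol = np.linalg.solve(C, rhs)
mu_hat, a_hat = sol[0], sol[1:]          # BLUE of mu, BLUP of breeding values
```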
Thus, BLUP analysis of the animal model gave a higher correlation with TBV than the simple selection index or the GLS estimator, and the result was very similar to the regressed least squares estimator except that the MSE was slightly higher for BLUP, but the variance of the estimated residuals was slightly smaller for BLUP. The BLUP analysis provided estimators for the base generation animals, and utilized the relationships among all animals in the data.
Partitioning BLUP Solutions
Partitioning solutions from HMME became useful in explaining results from BLUP to dairy breeders in terms that they could understand. Partitioning also helps the researcher to understand what is contributing to the solutions.
From HMME, animal solutions have contributions from three basic sources of information, i.e., their own records, their parent average breeding value, and the average of their progeny EBV deviated from one half the mates' EBV. Take, for example, the equation for animal 7 in the example. The equation can be rearranged as
\[ \hat{a}_7 = w_1\,(\text{own record deviation}) + w_2\,(\text{PA}) + w_3\,(\text{progeny average}), \]
where, for an animal with $n$ records and $p$ progeny, and $k = \sigma^2_e/\sigma^2_a$,
\[ w_1 = \frac{n}{n + 2k + pk/2}, \qquad w_2 = \frac{2k}{n + 2k + pk/2}, \qquad w_3 = \frac{pk/2}{n + 2k + pk/2}. \]
From this small example, the weight on the parent average, PA, $w_2$, is greater than the weights on either the animal's own record or on the average of 3 progeny, even at a heritability of 0.36. Dairy producers believe that $w_2$ should be smaller than either $w_1$ or $w_3$, even though this could lead to a lower correlation between TBV and EBV. Below is a table of the weights for animal 7 if the heritability were different values.
As the heritability increases, the weight on an animal's own records increases, the weight on the parent average decreases, and the weight on the average of three progeny decreases. However, in all cases the weight on the parent average is still greater than the other two weights. In general, at least 4 progeny are needed to have the same weight on the parent average as on the progeny average. Thus, the parent average is equivalent to the average of four progeny, in terms of importance. At the same time, an animal needs records to have more importance than the parent average. At $h^2 = 0.5$, this means that $n$ must be greater than 2, but at $h^2 = 0.1$, $n$ must be greater than 18.
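These break-even points follow directly from the weights; a small sketch below, where the weight formulas are inferred from the numbers quoted in the text and use $k = \sigma^2_e/\sigma^2_a = (1-h^2)/h^2$.

```python
def partition_weights(n, p, h2):
    """Weights on own records (w1), parent average (w2), and
    progeny average (w3) for an animal with n records and p progeny."""
    k = (1.0 - h2) / h2                  # variance ratio sigma_e^2 / sigma_a^2
    denom = n + 2.0 * k + p * k / 2.0
    return n / denom, 2.0 * k / denom, (p * k / 2.0) / denom

# Animal 7: one record, three progeny, h2 = 0.36
w1, w2, w3 = partition_weights(1, 3, 0.36)   # w2 (PA) is the largest weight
```

The break-even points check out: $2k = pk/2$ gives $p = 4$ progeny to match the PA, and at $h^2 = 0.5$ ($k = 1$, $2k = 2$) an animal needs $n > 2$ records to outweigh the PA.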
If animal 7 had 100 progeny, $n = 1$, and $h^2 = 0.36$, then $w_1 = 0.01070$, $w_2 = 0.03805$, and $w_3 = 0.95125$, so that the progeny average becomes the most important piece of information in the long run. This is because the progeny average represents what an animal transmits to its offspring, while an animal's own record can be affected by many factors, and the parent average loses relevance when an animal has many progeny.

Robertson and Dempfle
This methodology was first given by Alan Robertson (1955) and was
later revived by Leo Dempfle (1989) in a symposium in honour of
The method is Bayesian in philosophy and consists of combining prior information about the fixed and random effects with information coming from the data.
Application to Example Data
For the animal model example, the RMME would be
More realistically, the expected values of the base population animals can be assumed to be zero, i.e., $E(\mathbf{a}_b) = \mathbf{0}$, but due to selection or non-random mating, the expected value of offspring may not be a null vector. Assume that $E(\mathbf{a}_o) = -2\,\mathbf{1}$, which says that the offspring are 2 units below the average of the base population animals. Also, assume that nothing is known about $\mu$, so that $\mathbf{S}^{-1} = \mathbf{0}$. The results from RMME under these assumptions and from HMME are given in the following table.
The correlation for RMME is the same as for HMME because the solutions for RMME are exactly 2 units lower than for HMME. However, notice that the MSE is smaller for RMME, which says that the solutions for offspring are closer to their true values.
The outcome from the use of RMME is dependent upon the Prior Info that goes into RMME. With good Prior Info the results are better than if the priors are poor. If the Prior Info is in error, then RMME could be worse than HMME or other methods.
RMME have not been used very often in animal breeding research, at least in terms of using prior information. With selection and non-random mating becoming more important in animal breeding, use of RMME should increase in the future because it provides a means to account for these biases.
Larry Schaeffer