
Random Regression Models
L. R. Schaeffer, March 1999
Updated March 2000

All biological creatures grow and perform over their lifetimes. Traits that are measured at various times during that life are known as longitudinal data. Examples are body weights, body lengths, milk production, feed intake, fat deposition, and egg production. On a biological basis, different genes could turn on or off as an animal ages, causing changes in physiology and performance. Also, an animal's age can be recorded in years, months, weeks, days, hours, minutes, or seconds, so that, in effect, there is a continuum of points in time at which an animal could be observed for a trait. Such traits have also been called infinite-dimensional traits.

Take body weight as an example, as given in the table below on gilts.

        Days on Test
Animal  10   20   30   40   50   $\cdots$  100
1       42   53   60   72   83             140
2       30   50   58   68   76             122
3       38   44   51   60   70             106
SD      1.6  3.7  3.9  5.0  5.3            13.9

The differences among the three animals increase with days on test as the gilts become heavier. As the mean weight increases, so also the standard deviation of weights increases. The weights over time could be modeled as a mean plus covariates of days on test and days on test squared. Depending on the species and trait, perhaps a cubic or spline function would fit the data better. The point is that the means can be fit by a linear model with a certain number of parameters.

Covariance Functions are a way of modeling the variances and covariances of the weights over time. This model could be different from the model for the actual weights. A random regression model essentially combines both functions into one model. Covariance functions can be written as random regressions where the covariates are standardized expressions of time that go from -1 to +1.

Multiple Trait Approach

The data presented in the previous table have typically been analyzed such that the weights at each day on test are different traits. If t is the day on test, i.e. 10, 20, 30, 40, 50, or 100, then a model for any one of them could be

\begin{displaymath}{\bf y}_{t} = {\bf Xb}_{t} + {\bf a}_{t} + {\bf e}_{t}, \end{displaymath}

which is just a simple, single record, animal model. Analyses are usually done so that the genetic and residual variances and covariances are estimated among the six weights. Suppose that an estimate of the genetic variances and covariances was

\begin{displaymath}{\bf G} = \left( \begin{array}{rrrrrr}
2.5 & 4.9 & 4.6 & 4.6...
...\\
1.9 & 6.7 & 9.5 & 10.9 & 15.3 & 150.0 \end{array} \right). \end{displaymath}

Suppose the covariance between weights on day 20 and day 70 was needed. The above matrix does not offer any value, but one might be able to get an estimate by extrapolation. A solution offered by Kirkpatrick et al. (1991) is to use covariance functions. A covariance function (CF) is a way of modelling variances and covariances of a longitudinal trait. The elements of ${\bf G}$ are written as a function of time,

\begin{eqnarray*}Cov(y_{\ell},y_{m}) & = & f(t_{\ell},t_{m}) \\
& = & \sum^{k-1}_{i=0} \sum^{k-1}_{j=0} \phi_{i}(q_{\ell})
\phi_{j}(q_{m}) h_{ij} \\
& = & \sum^{k-1}_{i=0} \sum^{k-1}_{j=0} q^{i}_{\ell} q^{j}_{m} \tau_{ij}
\end{eqnarray*}


where k is the dimension of ${\bf G}$, and $q_{\ell}$ and $q_{m}$ are standardized times lying between -1 and +1. Let $t_{min}=10$ and $t_{max}=100$ for this example; then for each $t_{\ell}$ calculate $q_{\ell}$ as

\begin{displaymath}q_{\ell} = -1 + 2 \left( \frac{t_{\ell}-t_{min}}{t_{max}-t_{min}}
\right), \end{displaymath}

which gives

$t_{i}$  10  20     30     40     50     100
$q_{i}$  -1  -.778  -.556  -.333  -.111  +1
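The standardization above is easy to sketch in code; the helper name below is mine, not from the notes.

```python
# Map a time t in [t_min, t_max] onto [-1, +1],
# as in q = -1 + 2 (t - t_min) / (t_max - t_min).
def standardize(t, t_min=10.0, t_max=100.0):
    return -1.0 + 2.0 * (t - t_min) / (t_max - t_min)

# Reproduce the table of q values for the six days on test.
days = [10, 20, 30, 40, 50, 100]
q = [round(standardize(t), 3) for t in days]
print(q)  # [-1.0, -0.778, -0.556, -0.333, -0.111, 1.0]
```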

The quantity $h_{ij}$ is an element of a matrix ${\bf H}$, and $\tau_{ij}$ is an element of another matrix, ${\bf T}$. The function $\phi_{i}(q_{\ell})$ is a normalized Legendre polynomial of the standardized time. To calculate the Legendre polynomials, define

\begin{eqnarray*}P_{0}(x) & = & 1, \ \ \mbox{and} \\
P_{1}(x) & = & x,
\end{eqnarray*}


then, in general,

\begin{displaymath}P_{n+1}(x) = \frac{1}{n+1} \left( (2n+1)x P_{n}(x) - nP_{n-1}(x)
\right). \end{displaymath}

These quantities are "normalized" to give

\begin{displaymath}\phi_{n}(x) = \left( \frac{2n+1}{2} \right)^{.5} P_{n}(x). \end{displaymath}

This gives the following series,

\begin{eqnarray*}\phi_{0}(x) & = & \left( \frac{1}{2} \right)^{.5} P_{0}(x) \ = \ .7071, \\
\phi_{1}(x) & = & \left( \frac{3}{2} \right)^{.5} P_{1}(x) \ = \ 1.2247 \ x, \\
\phi_{2}(x) & = & \left( \frac{5}{2} \right)^{.5}
( \frac{3}{2}x^{2} - \frac{1}{2} ) \\
& = & -.7906 + 2.3717 x^{2},
\end{eqnarray*}


and so on. The first six can be put into a matrix, $\Lambda$, as

\begin{displaymath}\Lambda' = \left( \begin{array}{rrrrrr}
.7071 & 0 & 0 & 0 & 0 & 0 \\
0 & 1.2247 & 0 & 0 & 0 & 0 \\
-.7906 & 0 & 2.3717 & 0 & 0 & 0 \\
0 & -2.8062 & 0 & 4.6771 & 0 & 0 \\
.7955 & 0 & -7.9550 & 0 & 9.2808 & 0 \\
0 & 4.3973 & 0 & -20.5206 & 0 & 18.4685
\end{array} \right). \end{displaymath}
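The recurrence and normalization can be checked with a short sketch (the function name is mine); it rebuilds the rows of $\Lambda'$, stored lowest power first:

```python
def legendre_coeffs(k):
    """Coefficient rows of the first k normalized Legendre polynomials."""
    P = [[1.0], [0.0, 1.0]]                    # P_0 = 1, P_1 = x
    for n in range(1, k - 1):
        # (2n+1) x P_n(x): shift the coefficients up one power
        a = [0.0] + [(2 * n + 1) * c for c in P[n]]
        # n P_{n-1}(x), padded to the same length
        b = [n * c for c in P[n - 1]] + [0.0] * (len(a) - len(P[n - 1]))
        P.append([(x - y) / (n + 1) for x, y in zip(a, b)])
    rows = []
    for n, p in enumerate(P[:k]):
        s = ((2 * n + 1) / 2.0) ** 0.5         # normalization constant
        rows.append([s * c for c in p] + [0.0] * (k - len(p)))
    return rows

rows = legendre_coeffs(6)
print([round(c, 4) for c in rows[5]])
# [0.0, 4.3973, 0.0, -20.5206, 0.0, 18.4685] -- the last row of Lambda'
```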

Now define another matrix, ${\bf M}$, containing the powers (0 through 5) of the six standardized time values. That is,

\begin{displaymath}{\bf M} = \left( \begin{array}{rrrrrr}
1 & -1 & 1 & -1 & 1 & -1 \\
1 & -.778 & .605 & -.471 & .366 & -.285 \\
1 & -.556 & .309 & -.171 & .095 & -.053 \\
1 & -.333 & .111 & -.037 & .012 & -.004 \\
1 & -.111 & .012 & -.001 & .000 & -.000 \\
1 & 1 & 1 & 1 & 1 & 1 \end{array} \right). \end{displaymath}

This gives

\begin{eqnarray*}\Phi & = & {\bf M}\Lambda, \\
& = & \left( \begin{array}{rrrrrr}
.7071 & -1.2247 & 1.5811 & -1.8708 & 2.1213 & -2.3452 \\
.7071 & -.9525 & .6441 & -.0180 & -.6205 & .9783 \\
.7071 & -.6804 & -.0586 & .7570 & -.7757 & .0983 \\
.7071 & -.4082 & -.5271 & .7622 & .0262 & -.7817 \\
.7071 & -.1361 & -.7613 & .3054 & .6987 & -.4608 \\
.7071 & 1.2247 & 1.5811 & 1.8708 & 2.1213 & 2.3452
\end{array} \right),
\end{eqnarray*}


which can be used to specify the elements of ${\bf G}$ as

\begin{eqnarray*}{\bf G} & = & \Phi {\bf H} \Phi' \\
& = & {\bf M} (\Lambda {\bf H} \Lambda') {\bf M}' \\
& = & {\bf M} {\bf T} {\bf M}'.
\end{eqnarray*}


Note that $\Phi$, ${\bf M}$, and $\Lambda$ are matrices defined by the Legendre polynomial functions and by the standardized time values and do not depend on the data or values in the matrix ${\bf G}$. Therefore it is possible to estimate either ${\bf H}$ or ${\bf T}$,

\begin{eqnarray*}{\bf H} & = & \Phi^{-1} {\bf G} \Phi^{-T}, \\
& = & \left( \b...
...-289.00 & -63.13 & 120.35 & 137.85 & 54.03
\end{array} \right),
\end{eqnarray*}


and

\begin{eqnarray*}{\bf T} & = & {\bf M}^{-1} {\bf G} {\bf M}^{-T} \\
& = & \lef...
...23018.11 & -10081.33 & 23628.37 & 18429.39
\end{array} \right).
\end{eqnarray*}


By using ${\bf M}$ and ${\bf T}$ there is no need for the use of Legendre polynomials, but rather just functions of the standardized time values. Why Legendre polynomials? Because they are defined over the range of -1 to +1, and they are orthogonal. However, there are other kinds of orthogonal polynomials defined over the same range. The Legendre polynomials are probably the easiest to calculate.

Either ${\bf H}$ or ${\bf T}$ can be used to calculate the covariance between any two days on test between 10 and 100 days. Suppose we want the covariance between days 20 and 70. Using ${\bf T}$ we need a row of ${\bf M}$ for day 70, and the one for day 20. These are

\begin{eqnarray*}{\bf m}'_{20} & = & \left( \begin{array}{rrrrrr}
1 & -.778 & .605 & -.471 & .366 & -.285 \end{array} \right), \\
{\bf m}'_{70} & = & \left( \begin{array}{rrrrrr}
1 & .333 & .111 & .037 & .012 & .004 \end{array} \right).
\end{eqnarray*}


Then the variances and covariance are

\begin{eqnarray*}{\bf m}'_{20}{\bf T}{\bf m}_{20} & = & 13.5 \\
{\bf m}'_{70}{\bf T}{\bf m}_{70} & = & 2097.0045 \\
{\bf m}'_{20}{\bf T}{\bf m}_{70} & = & -18.5571.
\end{eqnarray*}


Note that the variance for day 70 seems artificially high. This is likely because there were no estimates of variances and covariances close to day 70. Also, it is unlikely that the covariance between weight traits would be negative. This situation shows the need to have time periods evenly represented in ${\bf G}$ throughout the range that may be important.

Using ${\bf H}$ we need rows of $\Phi$, which can be obtained by

\begin{eqnarray*}\phi'_{20} & = & {\bf m}'_{20} \Lambda \\
& = & \left( \begin{array}{rrrrrr}
.7071 & -.9525 & .6441 & -.0180 & -.6205 & .9783 \end{array} \right), \\
\phi'_{70} & = & \left( \begin{array}{rrrrrr}
.7071 & .4082 & -.5271 & -.7622 & .0262 & .7817
\end{array} \right),
\end{eqnarray*}


then the variances and covariances are

\begin{eqnarray*}\phi'_{20} {\bf H} \phi_{20} & = & 13.5 \\
\phi'_{70}{\bf H} \phi_{70} & = & 2097.0045 \\
\phi'_{20}{\bf H} \phi_{70} & = & -18.5571.
\end{eqnarray*}
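In code, this lookup is a single quadratic form (a sketch; the order-3 ${\bf T}$ below is hypothetical, with a placeholder middle diagonal entry, since the full matrices above are not reproduced):

```python
import numpy as np

def m_vec(t, order, t_min=10.0, t_max=100.0):
    """Row of M: powers 0..order-1 of the standardized time."""
    q = -1.0 + 2.0 * (t - t_min) / (t_max - t_min)
    return np.array([q ** i for i in range(order)])

# Hypothetical T for illustration only; the 40.0 entry is a placeholder.
T = np.array([[25.6145, 11.3363, -8.5474],
              [11.3363, 40.0,    24.5066],
              [-8.5474, 24.5066, 33.0640]])

cov_20_70 = m_vec(20, 3) @ T @ m_vec(70, 3)  # covariance, days 20 and 70
var_70 = m_vec(70, 3) @ T @ m_vec(70, 3)     # variance at day 70
```

Because ${\bf T}$ is symmetric, the same value is returned whichever day is taken first.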


Reduced Orders of Fit

Although the order of ${\bf G}$ in the previous example was six and polynomials of standardized ages to the fifth power were used to derive covariance functions, it could be that only squared or cubed ages are needed to adequately describe the elements of ${\bf G}$. Thus, we want to perform a reduced order of fit. That is, in

\begin{displaymath}{\bf G} = {\bf M}{\bf T}{\bf M}' \end{displaymath}

we seek an ${\bf M}$ that is rectangular and a ${\bf T}$ of smaller order. If ${\bf M}$ has k rows and $m<k$ columns, then

\begin{eqnarray*}{\bf M}'{\bf G}{\bf M} & = & {\bf M}'{\bf MTM}'{\bf M} \\
& = & ({\bf M}'{\bf M}){\bf T}({\bf M}'{\bf M}),
\end{eqnarray*}


and pre- and post-multiply by the inverse of $({\bf M}'{\bf M})$ to determine ${\bf T}$,

\begin{displaymath}{\bf T} = ({\bf M}'{\bf M})^{-1}{\bf M}'{\bf GM}
({\bf M}'{\bf M})^{-1}. \end{displaymath}

To illustrate, let m=3, then

\begin{displaymath}{\bf M} = \left( \begin{array}{rrr}
1 & -1 & 1 \\ 1 & -.778 & .605 \\ 1 & -.556 & .309 \\
1 & -.333 & .111 \\ 1 & -.111 & .012 \\ 1 & 1 & 1
\end{array} \right), \end{displaymath}

and

\begin{eqnarray*}{\bf M}'{\bf M} & = & \left( \begin{array}{rrr}
6.0000 & -1.7778 & 3.0370 \\
-1.7778 & 3.0370 & -.6804 \\
3.0370 & -.6804 & 2.4737 \end{array} \right), \\
({\bf M}'{\bf M})^{-1} & = & \left( \begin{array}{rrr}
.5103 & .1688 & -.5801 \\
.1688 & .4067 & -.0953 \\
-.5801 & -.0953 & 1.0902 \end{array} \right).
\end{eqnarray*}


Also,

\begin{displaymath}{\bf M}'{\bf GM} = \left( \begin{array}{rrr}
520.4 & 47.4666...
....2749 \\
286.1482 & 102.2749 & 197.1132 \end{array} \right). \end{displaymath}

The matrix ${\bf T}$ is then

\begin{displaymath}\left( \begin{array}{rrr}
25.6145 & 11.3363 & -8.5474 \\ 11....
... 24.5066 \\
-8.5474 & 24.5066 & 33.0640 \end{array} \right). \end{displaymath}
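As a sketch, the reduced-order computation can be written directly from the formula (the ${\bf G}$ below is synthetic, since the full matrix from the notes is not reproduced here):

```python
import numpy as np

def reduced_T(G, q, m):
    """T = (M'M)^{-1} M' G M (M'M)^{-1} for an order-m fit."""
    M = np.vander(q, m, increasing=True)   # columns 1, q, ..., q^{m-1}
    MtM_inv = np.linalg.inv(M.T @ M)
    return MtM_inv @ M.T @ G @ M @ MtM_inv

q = np.array([-1.0, -0.778, -0.556, -0.333, -0.111, 1.0])
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
G = A @ A.T                                 # synthetic covariance matrix
T3 = reduced_T(G, q, 3)                     # reduced order of fit, m = 3
T6 = reduced_T(G, q, 6)                     # full order of fit, m = k = 6
```

With m = 6 the fit is full rank and ${\bf MTM}'$ reproduces ${\bf G}$ exactly; with m = 3 only six independent values remain.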

What order of reduced fit is sufficient to explain the variances and covariances in ${\bf G}$? Kirkpatrick et al. (1990) suggest looking at the eigenvalues of the matrix ${\bf H}$ from a full rank fit. These are given below. The sum of all the eigenvalues was 3,865.6681, and the percentage of that total is also shown. Also included are the eigenvalues of ${\bf T}$ and their percentages.

        ${\bf H}$                    ${\bf T}$
Eigenvalue   Percentage    Eigenvalue   Percentage
3826.5161    .9899         87538.3830   .9859
  22.6809    .0059          1147.1364   .0129
  11.2688    .0029            56.6293   .0006
   3.8368    .0010            26.9076   .0003
   1.3291    .0003            18.1450   .0002
    .0364    .0000              .1774   .0000

Both matrices indicate that the majority of change in elements in ${\bf G}$ is explained by a constant, and perhaps a little by a linear increment. Both suggest that a quadratic function of the polynomials is probably sufficient. Is there a way to statistically test the reduced orders of fit to determine which is sufficient? A goodness of fit statistic is $\hat{\bf e}'\hat{\bf e}$ where

\begin{displaymath}\hat{\bf e} = {\bf g} - \hat{\bf g} \end{displaymath}

and ${\bf g}$ is a vector of the half-stored elements of the matrix ${\bf G}$, i.e.,

\begin{displaymath}{\bf g}' = \left( \begin{array}{ccccccc}
g_{11} & g_{12} & \cdots & g_{16} & g_{22} & \cdots & g_{66}
\end{array} \right). \end{displaymath}

A half-stored matrix of order k has k(k+1)/2 elements. For k=6 there are 21 values. Likewise, $\hat{\bf g}$ is a vector of half-stored elements of the matrix ${\bf MTM}'$. Although this matrix also has 21 values, because ${\bf M}$ has only m<k columns, the number of independent values is m(m+1)/2. For m=3 this number is 6.

The test statistic, $\hat{\bf e}'\hat{\bf e}$, has a Chi-square distribution with k(k+1)/2 - m(m+1)/2 degrees of freedom. In the example with m=3,

\begin{displaymath}{\bf MTM}' = \left( \begin{array}{rrrrrr}
5.1097 & 5.2460 & 5...
...5 & 4.0591 & 10.1655 & 21.1137 & 148.4817
\end{array} \right), \end{displaymath}

and the residuals (differences from the original ${\bf G}$) are

\begin{displaymath}{\bf G}-{\bf MTM}' = \left( \begin{array}{rrrrrr}
-2.6097 & -...
....9055 & 5.4409 & .7345 & -5.8137 & 1.5183
\end{array} \right), \end{displaymath}

so that the goodness of fit statistic is

\begin{displaymath}\hat{\bf e}'\hat{\bf e} = 163.6351, \end{displaymath}

with 21-6=15 degrees of freedom.

Is a fit of order 3 poorer than a fit of order 5? An F-statistic is possible by taking the difference in the goodness of fit statistics, divided by an estimate of the residual variance. The residual variance is estimated from a fit of order k-1 or in this case of order 5. The goodness of fit statistic for order 5 was 7.8490 with 21-15=6 degrees of freedom. Hence the residual variance is

\begin{displaymath}\sigma^{2} = 7.8490 / 6 \ = \ 1.3082. \end{displaymath}

The F-statistic to test if a fit of order 3 is different from a fit of order 5 is

\begin{eqnarray*}F & = & \frac{( \hat{\bf e}'\hat{\bf e}_{m=3} -
\hat{\bf e}'\hat{\bf e}_{m=5})/(15 - 6)}{\hat{\sigma}^{2}} \\
& = & \frac{(163.6351 - 7.8490)/ 9}{1.3082} \\
& = & 17.3096 / 1.3082 \ = \ 13.2319,
\end{eqnarray*}


with (9,6) degrees of freedom. The table F-value at the (P=.05) level is 4.10. Thus, the difference is significant, and a fit of order 5 is better than a fit of order 3.
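This arithmetic can be packaged in a small helper (the function name is mine); the inputs are the residual sums of squares and degrees of freedom quoted above.

```python
# F-statistic comparing a reduced order of fit against a fuller one.
def order_fit_F(sse_reduced, df_reduced, sse_full, df_full):
    sigma2 = sse_full / df_full                       # residual variance
    return ((sse_reduced - sse_full) / (df_reduced - df_full)) / sigma2

F = order_fit_F(163.6351, 15, 7.8490, 6)
print(round(F, 2))  # 13.23, with (9, 6) degrees of freedom
```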

CF to RRM

How are covariance functions and a random regression model related to each other? If we know the lower and upper age range for a trait, $t_{min}$ and $t_{max}$, then the genetic variance for any particular age, i, within the age range is given by the covariance function, ${\bf m}'_{i}{\bf T}{\bf m}_{i}$. This implies that there is a different true genetic value of an animal for every age within the age range. For the ith age of animal j, the true genetic value is

\begin{eqnarray*}a_{ji} & = & {\bf m}'_{i} {\bf r}_{j} \\
& = & \left( \begin{array}{ccccc}
1 & q_{i} & q_{i}^{2} & q_{i}^{3} & q_{i}^{4} \end{array} \right)
\left( \begin{array}{c} r_{j1} \\ r_{j2} \\ r_{j3} \\ r_{j4} \\
r_{j5} \end{array} \right),
\end{eqnarray*}


where ${\bf r}_{j}$ is a vector of regression coefficients for the jth animal, and ${\bf m}_{i}$ is a vector of polynomials of standardized age i. The variance of ${\bf r}_{j}$ is assumed to be

\begin{eqnarray*}Var({\bf r}_{j}) & = & {\bf T},
\end{eqnarray*}


then

\begin{eqnarray*}Var(a_{ji}) & = & Var({\bf m}'_{i}{\bf r}_{j}) \\
& = & {\bf m}'_{i}{\bf T}{\bf m}_{i}.
\end{eqnarray*}


Now notice that ${\bf r}_{j}$ is the same for animal j regardless of the age at which the observation is taken. Using this definition, we can write the model equation for an observation taken at age i on animal j as

\begin{eqnarray*}y_{ji} & = & \mu_{i} + a_{ji} + e_{ji} \\
& = & \mu_{i} + {\bf m}'_{i}{\bf r}_{j} + e_{ji}.
\end{eqnarray*}


This is the basic starting point for a random regression model. The variance-covariance matrix of true genetic values follows the correct variances and covariances through the assumed time range, provided that the order of fit for the covariance function is sufficient to define this covariance structure adequately.

Notice that the residual effects may also have a different variance for each age. It is logical to assume that $e_{ji}$ can be split into parts, i.e.,

\begin{displaymath}e_{ji} = p_{ji} + te, \end{displaymath}

where $p_{ji}$ also follows a covariance function (different from that of $a_{ji}$) assumed to relate environmental effects between observations on the same animal, and where te is a temporary environmental effect which is independent of animal, age, and other te effects within and between animals. The $p_{ji}$ factor is the same as a permanent environmental effect (repeated records on the same animal) for animal j, but it varies with the age i of the animal.

An overall model for several observations per animal at different ages is then

\begin{displaymath}{\bf y} = {\bf Xb} + {\bf Mr} + {\bf Mp} + {\bf e}, \end{displaymath}

where
${\bf b}$ must contain a different mean for each discrete age within the pre-defined age range,
${\bf r}$ are regression coefficients (random) for each animal representing the additive genetic contribution to the trait,
${\bf p}$ are regression coefficients (random) for each animal representing the permanent environmental effects,
${\bf e}$ is a vector of temporary environmental effects peculiar to each observation independent of age and animal,
${\bf X}$ is the design matrix relating observations to the respective age means,
${\bf M}$ is a matrix of standardized age covariates associated with each record and linked to the appropriate animals, and
${\bf y}$ is a vector of longitudinal data on animals.
$Var({\bf r})={\bf T}\otimes {\bf A}$ and
$Var({\bf p})={\bf S}\otimes {\bf I}$ where the dimensions of ${\bf T}$ and ${\bf S}$ are equal to the order of fit of the covariance functions. Usually the order of fit is the same for both effects, but theoretically, the genetic and permanent environmental effects might require different orders of fit.
$Var({\bf e})={\bf I}\sigma^{2}_{i}$ which allows for the possibility that the temporary residual variance could change throughout the age range of the data. This would also pick up any 'leftover' effects from the random regressions if the orders of fit were not sufficient to define the correct covariance structures over the age range.

Depending on the trait, there could be other fixed and random factors in the model. For example, contemporary group effects would be defined as those animals that were housed in the same location and measured on the same day by the same person. There could also be other effects like diet effects that could influence a trait on a given day which might differ from animal to animal and from one measurement to the next for the same animal.

The fixed age means in the model do not assume any particular shape of the trait as it changes with age. However, the age range may be large and thus, there would be a large number of discrete age group effects in the model, and possibly there may not be enough data to estimate the means accurately. Therefore, ages may need to be grouped to give fewer age groups overall. Alternatively, one might assume that a mathematical function of age fits the age means, and therefore only a few parameters need to be estimated. If possible, it is likely better to not assume any mathematical function and to have as many age groups as practically possible.

Simulation of Data

Below are the data structure and pedigrees of four dairy cows. Given is the age at which they were observed for stature during four visits to this one herd.

                   Age at Classification
Cow  Sire  Dam  Visit 1  Visit 2  Visit 3  Visit 4
1    7     5    22       34       47
2    7     6    30       42       55       66
3    8     5    28       40
4    8     1             20       33       44

The model equation will be

\begin{eqnarray*}y_{jik} & = & CG_{j} + b_{0} + b_{1}(A) + b_{2}(A)^{2} \\
& & + (a_{i0} + a_{i1}(A) + a_{i2}(A)^{2}) \\
& & + (p_{i0} + p_{i1}(A) + p_{i2}(A)^{2}) + e_{jik}
\end{eqnarray*}


where
$CG_{j}$ is a random contemporary group effect which is assumed to follow a normal distribution with mean 0 and variance $\sigma^{2}_{c} = 4.$
$b_{0}$, $b_{1}$, and $b_{2}$ are fixed regression coefficients on (A) = age and age squared, which describe the general relationship between age and stature. The assumed true values for these parameters will be $b_{0}=.5$, $b_{1}=1.2$, and $b_{2}=-.01$; in this example, the ages are not standardized between -1 and +1.
$a_{i0}$, $a_{i1}$, and $a_{i2}$ are random regression coefficients for animal i additive genetic effects, assumed to follow a multivariate normal distribution with null mean vector and variance-covariance matrix, ${\bf T}$, equal to

\begin{displaymath}\left( \begin{array}{rrr}
94.00 & -3.34 & .03098 \\ -3.34 & ...
... -.00144 \\
.03098 & -.00144 & .0000140 \end{array} \right). \end{displaymath}

$p_{i0}$, $p_{i1}$, and $p_{i2}$ are random regression coefficients for animal i permanent environmental effects, assumed to follow a multivariate normal distribution with null mean vector and variance-covariance matrix, ${\bf S}$, equal to

\begin{displaymath}\left( \begin{array}{rrr}
31.6981 & -1.1263 & .010447 \\ -1....
...559 \\
.010447 & -.00048559 & .00000472 \end{array} \right), \end{displaymath}

and $e_{jik}$ is a temporary residual error term assumed to follow a normal distribution with mean 0 and variance $\sigma^{2}_{e} = 9.$

The simulation process proceeds as follows.

Step 1.
Generate four contemporary group (visit) effects using a random normal deviate generator and $\sigma_{c}=2$,

\begin{eqnarray*}CG_{1} & = & RND*2 = 2.5233, \\
CG_{2} & = & 2.1462, \\
CG_{3} & = & -2.0483, \\
CG_{4} & = & -2.4207.
\end{eqnarray*}


Step 2.
Generate the fixed effects, ${\bf Xb}$, where

\begin{displaymath}{\bf Xb} = \left( \begin{array}{rrr}
1 & 22 & 484 \\ 1 & 30 & 900 \\ 1 & 28 & 784 \\ 1 & 34 & 1156 \\
1 & 42 & 1764 \\ 1 & 40 & 1600 \\ 1 & 20 & 400 \\ 1 & 47 & 2209 \\
1 & 55 & 3025 \\ 1 & 33 & 1089 \\ 1 & 66 & 4356 \\ 1 & 44 & 1936
\end{array} \right)
\left( \begin{array}{r} .5 \\ 1.2 \\ -.01 \end{array} \right)
= \left( \begin{array}{r}
22.06 \\ 27.50 \\ 26.26 \\ 29.74 \\ 33.26 \\ 32.50 \\ 20.50 \\
34.81 \\ 36.25 \\ 29.21 \\ 36.14 \\ 33.94 \end{array} \right). \end{displaymath}

Step 3.
Compute the Cholesky decomposition of ${\bf T}$ as

\begin{displaymath}{\bf L}_{T} = \left( \begin{array}{rrr}
9.6953597 & 0 & 0 \\...
...2 & 0 \\
.0031953 & -.001917 & .0003408 \end{array} \right). \end{displaymath}

For each animal, three values will be generated which define the random regression coefficients for the genetic effects. Animals 5, 6, 7, and 8 are assumed to be base population animals, unrelated to each other. Generate 3 random normal deviates into a vector, ${\bf v}$, for each animal and premultiply by ${\bf L}_{T}$. The values obtained were

Animal a0 a1 a2
5 -6.08 .2225 -.002131
6 4.54 -.0822 .000439
7 19.35 -.5466 .005114
8 -4.99 .2624 -.002695

For animal 1 who has parents 7 and 5, first average the random regression coefficients of the parents,

\begin{displaymath}.5[ \left( \begin{array}{rrr} 19.35 & -.5466 & .005114 \end{array} \right)
+ \left( \begin{array}{rrr} -6.08 & .2225 & -.002131 \end{array} \right) ] = \end{displaymath}


\begin{displaymath}\left( \begin{array}{rrr} 6.635 & -.16205 & .0014915 \end{array} \right). \end{displaymath}

Add to this $(.5)^{.5}{\bf L}_{T}{\bf v}$, where ${\bf v}$ is another vector of new random normal deviates. Animals 2, 3, and 4 are done in a similar manner (we do not need to worry about inbreeding here). The other true genetic values are

Animal a0 a1 a2
1 -5.15 .3249 -.003547
2 13.47 -.1286 .000704
3 -6.25 .2839 -.002687
4 -6.84 .2415 -.002291
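Step 3 can be sketched as follows. The middle diagonal element of ${\bf T}$ below is a placeholder (the full matrix is not reproduced in these notes), and no attempt is made to match the random deviates used above.

```python
import numpy as np

rng = np.random.default_rng(1)

T = np.array([[94.0,     -3.34,     0.03098],
              [-3.34,     0.18,    -0.00144],   # 0.18 is a placeholder value
              [0.03098,  -0.00144,  0.0000140]])
L_T = np.linalg.cholesky(T)                      # lower-triangular factor

def base_animal():
    """Genetic regression coefficients for an unrelated base animal."""
    return L_T @ rng.standard_normal(3)

def progeny(sire, dam):
    """Parent average plus Mendelian sampling (inbreeding ignored)."""
    return 0.5 * (sire + dam) + (0.5 ** 0.5) * (L_T @ rng.standard_normal(3))

a5, a7 = base_animal(), base_animal()
a1 = progeny(a7, a5)          # animal 1 has parents 7 and 5
```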

Step 4.
Compute the Cholesky decomposition of ${\bf S}$ as

\begin{displaymath}{\bf L}_{S} = \left( \begin{array}{rrr}
5.6301066 & 0 & 0 \\ ...
...28 & 0 \\
.0018556 & -.001113 & .0001946 \end{array} \right). \end{displaymath}

For animals with observations, i.e. animals 1, 2, 3, and 4, generate random regressions for permanent environmental effects by generating a 3 by 1 vector of random normal deviates for each animal and premultiplying by ${\bf L}_{S}$. The results are

Animal p0 p1 p2
1 3.67 -.1480 .001328
2 7.59 -.2352 .002204
3 -2.97 -.0034 .000386
4 3.26 -.0935 .000878

Step 5.
Generate individual observations. Just follow the model equation,

\begin{displaymath}{\bf y} = {\bf Xb} + {\bf W}{\bf c} + {\bf Ma} + {\bf Mp} + {\bf e}, \end{displaymath}

where ${\bf Xb}$ has already been created, ${\bf c}$, ${\bf a}$, and ${\bf p}$ have been created, and

\begin{displaymath}{\bf W} = \left( \begin{array}{rrrr}
1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right). \end{displaymath}

The matrix ${\bf M}$ is of order 12 by 12: twelve records by four animals with three regression coefficients each.

\begin{displaymath}{\bf M} = \left( \begin{array}{rrrrrrrrrrrr}
1 & 22 & 484 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 30 & 900 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 28 & 784 & 0 & 0 & 0 \\
1 & 34 & 1156 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 42 & 1764 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 40 & 1600 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 20 & 400 \\
1 & 47 & 2209 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 55 & 3025 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 33 & 1089 \\
0 & 0 & 0 & 1 & 66 & 4356 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 44 & 1936
\end{array} \right). \end{displaymath}

Next create 12 random temporary residual effects by generating 12 random normal deviates and multiplying each by $\sigma_{e}=3$. The results, rounded to the nearest whole number, are given in the following table.

${\bf y}$ ${\bf Xb}$ ${\bf Wc}$ ${\bf Ma}$ ${\bf Mp}$ ${\bf e}$
24 22.06 2.5233 .2811 1.0567 -2.2206
44 27.50 2.5233 10.2456 2.5176 1.1294
24 26.26 2.5233 -.4074 -2.7626 -1.3047
36 29.74 2.1462 1.7963 .1732 1.7420
47 33.26 2.1462 9.3107 1.5995 1.0237
42 32.50 2.1462 .8068 -2.4884 8.6908
20 20.50 2.1462 -2.9264 1.7412 -1.1633
39 34.81 -2.0483 2.2850 -.3524 4.2048
41 36.25 -2.0483 8.5266 1.3211 -3.0580
34 29.21 -2.0483 -1.3654 1.1306 6.8494
44 36.14 -2.4207 8.0490 1.6674 .9080
28 33.94 -2.4207 -.6494 .8458 -3.3007
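Step 5 amounts to evaluating the model equation record by record. The sketch below reuses the true b and $\sigma_{e}$ from the notes but stands in placeholder coefficients for c, a, and p rather than the simulated values above.

```python
import numpy as np

# (visit, animal, age) for each record, in the order used in the notes
records = [(1, 1, 22), (1, 2, 30), (1, 3, 28),
           (2, 1, 34), (2, 2, 42), (2, 3, 40), (2, 4, 20),
           (3, 1, 47), (3, 2, 55), (3, 4, 33),
           (4, 2, 66), (4, 4, 44)]

b = np.array([0.5, 1.2, -0.01])               # true fixed regressions
rng = np.random.default_rng(2)
c = rng.normal(0.0, 2.0, 4)                   # contemporary group effects
a = rng.normal(0.0, 1.0, (5, 3))              # placeholder genetic coefficients
p = rng.normal(0.0, 1.0, (5, 3))              # placeholder PE coefficients

y = []
for visit, animal, age in records:
    x = np.array([1.0, age, age * age])
    e = rng.normal(0.0, 3.0)                  # temporary residual, sigma_e = 3
    y.append(x @ b + c[visit - 1] + x @ a[animal] + x @ p[animal] + e)
y = np.array(y)
```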

MME

The mixed model equations that need to be constructed to provide estimated breeding values are as follows:

\begin{displaymath}\left( \begin{array}{lllll}
{\bf X}'{\bf X} & {\bf X}'{\bf W}...
... M}'{\bf y} \\ {\bf0} \\
{\bf M}'{\bf y} \end{array} \right). \end{displaymath}

The entire MME cannot be presented, but parts of the MME are given below.


\begin{displaymath}{\bf W}'{\bf W} = \left( \begin{array}{rrrr}
3 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 \\
0 & 0 & 3 & 0 \\
0 & 0 & 0 & 2 \end{array} \right), \end{displaymath}


\begin{displaymath}{\bf W}'{\bf X} = \left( \begin{array}{rrr}
3 & 80 & 2168 \\
4 & 136 & 4920 \\
3 & 135 & 6323 \\
2 & 110 & 6292 \end{array} \right), \end{displaymath}


\begin{displaymath}{\bf W}'{\bf M} = \left( \begin{array}{rrrrrrrrrrrr}
1 & 22 & 484 & 1 & 30 & 900 & 1 & 28 & 784 & 0 & 0 & 0 \\
1 & 34 & 1156 & 1 & 42 & 1764 & 1 & 40 & 1600 & 1 & 20 & 400 \\
1 & 47 & 2209 & 1 & 55 & 3025 & 0 & 0 & 0 & 1 & 33 & 1089 \\
0 & 0 & 0 & 1 & 66 & 4356 & 0 & 0 & 0 & 1 & 44 & 1936
\end{array} \right), \end{displaymath}


\begin{displaymath}{\bf X}'{\bf X} = \left( \begin{array}{rrr}
12 & 461 & 19,703 \\
461 & 19,703 & 923,807 \\
19,703 & 923,807 & 46,766,003 \end{array} \right), \end{displaymath}
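Several of these blocks can be cross-checked directly from the data structure (a sketch; the record ordering is taken from the simulation table above):

```python
import numpy as np

# (visit, animal, age) for the 12 records
records = [(1, 1, 22), (1, 2, 30), (1, 3, 28),
           (2, 1, 34), (2, 2, 42), (2, 3, 40), (2, 4, 20),
           (3, 1, 47), (3, 2, 55), (3, 4, 33),
           (4, 2, 66), (4, 4, 44)]

n = len(records)
X = np.array([[1.0, age, age * age] for _, _, age in records])
W = np.zeros((n, 4))
M = np.zeros((n, 12))
for row, (visit, animal, age) in enumerate(records):
    W[row, visit - 1] = 1.0                          # incidence of visit
    M[row, 3 * (animal - 1):3 * animal] = [1.0, age, age * age]

WtW = W.T @ W     # diagonal holds the records per visit: 3, 4, 3, 2
XtX = X.T @ X     # first row: 12, 461, 19703
MtM = M.T @ M     # block diagonal, one 3x3 block per animal
```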

${\bf X}'{\bf M}$ and ${\bf M}'{\bf M}$ are composed of the following four blocks of order 3, for the four animals with records:


\begin{eqnarray*}\mbox{Animal 1} & & \left( \begin{array}{rrr}
3 & 103 & 3849 \\ 103 & 3849 & 153775 \\
3849 & 153775 & 6450273 \end{array} \right), \\
\mbox{Animal 2} & & \left( \begin{array}{rrr}
4 & 193 & 10045 \\ 193 & 10045 & 554959 \\
10045 & 554959 & 32047057 \end{array} \right), \\
\mbox{Animal 3} & & \left( \begin{array}{rrr}
2 & 68 & 2384 \\ 68 & 2384 & 85952 \\
2384 & 85952 & 3174656 \end{array} \right), \\
\mbox{Animal 4} & & \left( \begin{array}{rrr}
3 & 97 & 3425 \\ 97 & 3425 & 129121 \\
3425 & 129121 & 5094017 \end{array} \right).
\end{eqnarray*}


The right hand sides of the MME are

\begin{displaymath}{\bf X}'{\bf y} = \left( \begin{array}{r}
423 \\ 17,144 \\ 762,830 \end{array} \right), \end{displaymath}


\begin{displaymath}{\bf W}'{\bf y} = \left( \begin{array}{r}
92 \\ 145 \\ 114 \\ 72 \end{array} \right), \end{displaymath}

and

\begin{displaymath}{\bf M}'{\bf y} = \left( \begin{array}{r}
99 \\ 3,585 \\ 139,383 \\ 176 \\ 8,453 \\ 438,197 \\
66 \\ 2,352 \\ 86,016 \\ 82 \\ 2,754 \\ 99,234
\end{array} \right). \end{displaymath}

The solutions to MME are

\begin{displaymath}\hat{\bf b}' = \left( \begin{array}{rrr}
.5131 & 1.7794 & -.01298 \end{array} \right), \end{displaymath}


\begin{displaymath}\hat{\bf c}' = \left( \begin{array}{r}
14.5400 \\ 13.7436 \\ 6.3133 \\ -1.0837
\end{array} \right). \end{displaymath}

Let the solutions for the animal additive genetic random regression coefficients be presented in tabular form as follows.

Animal a0 a1 a2
1 -1.720812 .039069 -.000334
2 9.908001 -.286581 .002556
3 -8.723928 .245155 -.002166
4 -2.972874 .108809 -.001022
5 -5.215457 .139228 -.001218
6 5.243107 -.150764 .001344
7 4.086682 -.120872 .001080
8 -4.114334 .132408 -.001206

Similarly, the solutions for the animal permanent environmental random regression coefficients can be given in tabular form.

Animal p0 p1 p2
1 -.761270 .012384 -.000093
2 3.536084 -.101685 .000906
3 -2.737505 .073743 -.000643
4 -.037309 .015559 -.000170

From the table of additive genetic solutions, deciding how to rank the animals becomes a problem. If animals are ranked on the basis of $a_{0}$, then animal 2 would be the highest (if that was desirable). If ranked on the basis of $a_{1}$, then animal 3 would be the highest, and if ranked on the basis of $a_{2}$, then animal 2 would be the highest. To properly rank the animals we need to calculate an EBV for stature at different ages, and then combine these with appropriate economic weights. Suppose we calculate EBVs for 24, 36, and 48 mo of age, and suppose the economic weights were 2, 1, and .5, respectively, for the three EBVs, so that a Total Economic Value can be calculated as

\begin{displaymath}{\rm TEV} = 2*{\rm EBV(24)}+1*{\rm EBV(36)}+.5*{\rm EBV(48)}. \end{displaymath}

The results are shown in the following table.

Animal EBV(24) EBV(36) EBV(48) TEV
1 -.98 -.75 -.61 -3.00
2 4.50 2.90 2.04 12.93
3 -4.09 -2.71 -1.95 -11.85
4 -.95 -.38 -.10 -2.33
5 -2.58 -1.78 -1.34 -7.60
6 2.40 1.56 1.10 6.91
7 1.81 1.13 .77 5.14
8 -1.63 -.91 -.54 -4.44

The animal with the highest TEV was animal 2. It is interesting to compare animals 1 and 8. Animal 8 was lower than animal 1 for EBV(24) and EBV(36), but not for EBV(48), so that it is evident that growth patterns in these two animals are genetically different. This is a strength of a random regression model analysis, in that different patterns of growth at different ages can be spotted readily. Rankings of animals change with age. Thus, it is possible to change the pattern of growth to one that is desirable.
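The EBV and TEV table can be reproduced directly from the additive genetic solutions above (helper names are mine):

```python
# EBV(A) = a0 + a1*A + a2*A^2, evaluated at 24, 36, and 48 mo of age,
# then weighted 2, 1, and .5 to give the Total Economic Value.
solutions = {
    1: (-1.720812,  0.039069, -0.000334),
    2: ( 9.908001, -0.286581,  0.002556),
    3: (-8.723928,  0.245155, -0.002166),
    4: (-2.972874,  0.108809, -0.001022),
    5: (-5.215457,  0.139228, -0.001218),
    6: ( 5.243107, -0.150764,  0.001344),
    7: ( 4.086682, -0.120872,  0.001080),
    8: (-4.114334,  0.132408, -0.001206),
}

def ebv(coef, age):
    a0, a1, a2 = coef
    return a0 + a1 * age + a2 * age * age

def tev(coef):
    return 2 * ebv(coef, 24) + 1 * ebv(coef, 36) + 0.5 * ebv(coef, 48)

for animal, coef in solutions.items():
    print(animal, round(ebv(coef, 24), 2), round(tev(coef), 2))
```

Animal 2 comes out on top with a TEV of about 12.93, matching the table; small differences in the last digit can arise from rounding the solutions.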

REML

Estimation of variances and covariances by EM REML is a little more complicated than usual in this model. The residual variance is still estimated as

\begin{displaymath}\hat{\sigma}^{2}_{e} = ({\bf y}'{\bf y} - \hat{\bf b}'{\bf X}'{\bf y}
- \hat{\bf c}'{\bf W}'{\bf y} - \hat{\bf a}'{\bf M}'{\bf y}
- \hat{\bf p}'{\bf M}'{\bf y})/(N-r({\bf X})), \end{displaymath}

where

\begin{eqnarray*}{\bf y}'{\bf y} & = & 15,835, \\
\hat{\beta}'{\bf W}'{\bf y} &...
...bf X}) & = & 12 - 6 = 6, \\
\hat{\sigma}^{2}_{e} & = & 22.4378.
\end{eqnarray*}


Let the additive genetic random regression coefficients in a previous table be called the matrix ${\bf U}$ of order 8 by 3, and let the table of permanent environmental random regression coefficients be called ${\bf P}$ of order 4 by 3. The inverse of the MME can be represented as

\begin{displaymath}{\bf C} = \left( \begin{array}{llll}
{\bf C}_{xx} & {\bf C}_{...
...\bf C}_{pw} & {\bf C}_{pa} & {\bf C}_{pp}
\end{array} \right). \end{displaymath}

The matrix ${\bf C}_{aa}$ is of order 24 with 3 rows and columns per animal. For EM REML we need the inverse elements of the MME, ${\bf C}_{aa}$ for the additive genetic effects, and ${\bf C}_{pp}$ for the permanent environmental effects. To see this better, partition ${\bf C}_{aa}$ into the elements corresponding to each animal as

\begin{displaymath}\left( \begin{array}{rrrrrrrr}
{\bf C}_{1,1} & {\bf C}_{1,2} ...
...C}_{8,6} & {\bf C}_{8,7} & {\bf C}_{8,8}
\end{array} \right), \end{displaymath}

where each submatrix is of order 3 by 3. Now we need to treat each submatrix of ${\bf C}_{aa}$ as a single entity (like a scalar) and calculate the trace of ${\bf A}^{-1}{\bf C}_{aa}$, which in this case would result in a matrix of order 3 by 3. That is,

\begin{eqnarray*}tr({\bf A}^{-1}{\bf C}_{aa}) & = &
\left( \begin{array}{rrr}
...
....001228 \\
.0265454 & -.001228 & .0000119 \end{array} \right).
\end{eqnarray*}


Also needed is

\begin{displaymath}{\bf U}'{\bf A}^{-1}{\bf U} = \left( \begin{array}{rrr}
178.9...
...01304 \\
.0452538 & -.001304 & .0000116 \end{array} \right). \end{displaymath}

Then an estimate of ${\bf T}$ is

\begin{eqnarray*}\hat{\bf T} & = & ({\bf U}'{\bf A}^{-1}{\bf U}+
tr({\bf A}^{-1...
...-.003607 \\
.0801092 & -.003607 & .000035 \end{array} \right).
\end{eqnarray*}


For the permanent environmental variance-covariance matrix, ${\bf C}_{pp}$ is of order 12 with 3 rows and columns for each of the four animals with records, then

\begin{eqnarray*}\hat{\bf S} & = & ({\bf P}'{\bf P} +
tr({\bf C}_{pp})\hat{\si...
....000207 \\
.0044689 & -.000207 & 2.0145E-6 \end{array} \right),
\end{eqnarray*}


and therefore,

\begin{displaymath}\hat{\bf S} = \left( \begin{array}{rrr}
79.546965 & -2.837002...
...001198 \\
.0263294 & -.001198 & .0000116 \end{array} \right). \end{displaymath}

As usual, the calculations are iterative and must be repeated until convergence is reached. All results are about twice as large as the original values, possibly due to the particular sample that was generated.
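One EM update for ${\bf T}$ can be sketched as follows (a sketch under the formula above, with the trace taken block-wise over 3 by 3 animal submatrices; ${\bf U}$, ${\bf A}^{-1}$, ${\bf C}_{aa}$, and $\hat{\sigma}^{2}_{e}$ are assumed inputs):

```python
import numpy as np

def em_update_T(U, A_inv, C_aa, sigma2_e):
    n, k = U.shape                      # number of animals, order of fit
    tr = np.zeros((k, k))
    for i in range(n):
        for j in range(n):
            # block-wise trace: sum over a^{ij} times the (j,i) block of C_aa
            tr += A_inv[i, j] * C_aa[j * k:(j + 1) * k, i * k:(i + 1) * k]
    return (U.T @ A_inv @ U + tr * sigma2_e) / n

# Usage sketch with identity relationship and inverse-coefficient matrices.
U = np.arange(24.0).reshape(8, 3)
T_new = em_update_T(U, np.eye(8), np.eye(24), 2.0)
```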

Bayesian Approach - Gibbs Sampling

The conditional distributions from the joint posterior distribution for the solutions to the MME are all normal distributions, and each is sampled separately. Nothing is different from the previous models in this respect. The remaining conditional distributions are as follows:

$\sigma^{2}_{e}$ has a scaled inverted Chi-square distribution with hyperparameters $v_{e}$ and $S^{2}_{e}$, sampled as

\begin{displaymath}\sigma^{2}_{e} = ({\bf e}'{\bf e} + v_{e}S^{2}_{e})/{\rm CHI}
(\tilde{v}_{e}), \end{displaymath}

${\bf T}$ for the animal additive genetic values has an inverted Wishart distribution, and
${\bf S}$ for the animal permanent environmental effects also has an inverted Wishart distribution.
Continue sampling in this manner.
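The residual variance draw can be sketched as follows (the total degrees of freedom $\tilde{v}_{e} = v_{e} + N$ used here are an assumption, as are the hyperparameter values):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_sigma2_e(e, v_e, S2_e):
    # scaled inverted Chi-square draw; assumed degrees of freedom v_e + N
    df = v_e + e.size
    return (e @ e + v_e * S2_e) / rng.chisquare(df)

e = rng.normal(0.0, 3.0, 12)   # current residual vector
draw = sample_sigma2_e(e, v_e=4.0, S2_e=9.0)
```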

