The Cholesky decomposition of a Hermitian positive-definite matrix $\mathbf{A}$ is
$$\mathbf{A} = LL^*,$$
with $L$ a lower triangular matrix and $L^*$ its conjugate transpose ($L^* = L^T$ for a real symmetric matrix).
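A quick check in R (the matrix below is an arbitrary example; note that R's chol() returns the upper triangular factor $U = L^T$):

```r
A <- matrix(c(4, 2, 2, 3), 2, 2)  # small symmetric positive-definite example
U <- chol(A)                      # upper triangular factor, A = t(U) %*% U
L <- t(U)                         # lower triangular Cholesky factor
all.equal(A, L %*% t(L))          # TRUE
```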
Solve $A x = b$ for $x$ knowing that $A = LL^T$, with $L$ a lower triangular matrix.
We write $\alpha = L^T x$, so that $Ax = LL^T x = L\alpha = b$: first solve $L\alpha = b$ for $\alpha$ by forward substitution, then solve $L^T x = \alpha$ for $x$ by back substitution.
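A minimal R sketch of this two-step triangular solve ($A$ and $b$ are arbitrary example values):

```r
A <- matrix(c(4, 2, 2, 3), 2, 2)  # symmetric positive-definite example
b <- c(1, 2)
L <- t(chol(A))                   # lower Cholesky factor, A = L %*% t(L)
alpha <- forwardsolve(L, b)       # forward substitution: L %*% alpha = b
x <- backsolve(t(L), alpha)       # back substitution: t(L) %*% x = alpha
all.equal(x, solve(A, b))         # TRUE: matches a direct solve
```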
Compute $H^T A^{-1} H$ knowing that $A = LL^T$, with $L$ a lower triangular matrix.
Then $H^T A^{-1} H = H^T (LL^T)^{-1} H = (L^{-1}H)^T(L^{-1}H) = v^T v$, with $v = L^{-1}H$ obtained by forward substitution.
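A minimal R sketch of this identity ($A$ and $H$ are arbitrary example values):

```r
A <- matrix(c(4, 2, 2, 3), 2, 2)        # symmetric positive-definite example
H <- matrix(c(1, 0, 1, 1, 0, 1), 2, 3)  # example basis-function matrix
L <- t(chol(A))                         # lower Cholesky factor
v <- forwardsolve(L, H)                 # v = L^{-1} H (column by column)
all.equal(crossprod(v), t(H) %*% solve(A) %*% H)  # TRUE: t(v) %*% v = H^T A^{-1} H
```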
Compute $K_\star^T K^{-1} y$ knowing that $K$ is positive-definite and $K = LL^T$, with $L$ a lower triangular matrix.
Then $K_\star^T K^{-1} y = (L^{-1}K_\star)^T (L^{-1}y) = b^T a$, with $b = L^{-1}K_\star$ and $a = L^{-1}y$.
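A minimal R sketch ($K$, $K_\star$, and $y$ are arbitrary example values):

```r
K <- matrix(c(4, 2, 2, 3), 2, 2)             # positive-definite example covariance
Kstar <- matrix(c(1, 0.5, 0.2, 0.1), 2, 2)   # example cross-covariance
y <- c(1, -1)
L <- t(chol(K))                              # K = L %*% t(L)
b <- forwardsolve(L, Kstar)                  # b = L^{-1} Kstar
a <- forwardsolve(L, y)                      # a = L^{-1} y
all.equal(drop(t(b) %*% a), drop(t(Kstar) %*% solve(K, y)))  # TRUE
```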
See the book by Rasmussen and Williams (2006), chap. 2, p. 19.
Predictive mean: $\bar{f}_\star = K_\star^T (K + \sigma^2 I)^{-1}y$
Predictive variance: $Var(f_\star) = K_{\star\star} - K_\star^T (K + \sigma^2 I)^{-1}K_\star$
Algorithm of Rasmussen and Williams (2006):
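A minimal R sketch of that algorithm (Algorithm 2.1, p. 19); the function name gp_predict and the variable names are mine, not GauProMod's:

```r
gp_predict <- function(K, Kstar, Kstarstar, y, sigma2) {
  n <- nrow(K)
  U <- chol(K + sigma2 * diag(n))          # upper factor, K + sigma2*I = t(U) %*% U
  alpha <- backsolve(U, backsolve(U, y, transpose = TRUE))  # (K + sigma2*I)^{-1} y
  fbar <- drop(t(Kstar) %*% alpha)         # predictive mean
  v <- backsolve(U, Kstar, transpose = TRUE)                # v = L^{-1} Kstar
  V <- Kstarstar - crossprod(v)            # predictive covariance
  logml <- -0.5 * sum(y * alpha) - sum(log(diag(U))) - 0.5 * n * log(2 * pi)  # log marginal likelihood
  list(mean = fbar, cov = V, logml = logml)
}
```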
Algorithm from GauProMod, file GPpred.cpp (note that here $K$ stands for $(K + \sigma^2 I)$):
// Cholesky factor: K = L * L^T
L = K.llt().matrixL();
// bt = b^T = (L^{-1} Kstar)^T
bt = (L.triangularView<Lower>().solve(Kstar)).adjoint();
// a = L^{-1} y
a = L.triangularView<Lower>().solve(y);
// predictive mean: M = b^T a = Kstar^T K^{-1} y
M = bt * a;
// btb = b^T b = Kstar^T K^{-1} Kstar (kk = number of prediction points)
btb = MatrixXd(kk,kk).setZero().selfadjointView<Lower>().rankUpdate(bt);
// predictive covariance: C = Kstarstar - Kstar^T K^{-1} Kstar
C = Kstarstar - btb;
See the book by Rasmussen and Williams (2006), sect. 2.7, pp. 27-29.
Algorithm from GauProMod, file GPpredmean.cpp:
See my notes…
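Until those notes are copied here, the vague-prior predictive equations of that section, written from memory (double-check against the book; $K_y = K + \sigma^2 I$, $H$ collects the basis functions at the training inputs, $H_\star$ at the test inputs):
$$\bar{g}_\star = \bar{f}_\star + R^T \bar{\beta}, \qquad \mathrm{cov}(g_\star) = \mathrm{cov}(f_\star) + R^T (H K_y^{-1} H^T)^{-1} R,$$
with $\bar{\beta} = (H K_y^{-1} H^T)^{-1} H K_y^{-1} y$ and $R = H_\star - H K_y^{-1} K_\star$; the term $H K_y^{-1} H^T$ has the same form as the $H^T A^{-1} H$ computation above (up to the orientation convention for $H$).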
Knowing that:
the Cholesky decomposition of a positive definite matrix $\mathbf{A}$ is $\mathbf{A} = LL^T$,
the determinant of a positive definite matrix $\mathbf{A}$ is $\det(\mathbf{A}) = \det(L)\det(L^T) = \det(L)^2$,
the log rule is $\log(ab) = \log(a) + \log(b)$,
and the determinant of a lower triangular matrix is the product of its diagonal elements, $\det(L) = \prod_i L_{ii}$,
the log determinant of a positive definite matrix is:
$$\log\det(\mathbf{A}) = 2\sum_i \log(L_{ii})$$
Thus to calculate the log determinant of a symmetric positive definite matrix in R:
L <- chol(A)                    # note: chol() returns the upper triangular factor U = t(L); its diagonal equals that of L
logdetA <- 2*sum(log(diag(L)))  # log determinant of A
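A quick check against R's determinant() (the example matrix is arbitrary):

```r
A <- matrix(c(4, 2, 2, 3), 2, 2)   # positive-definite example
all.equal(2 * sum(log(diag(chol(A)))),
          as.numeric(determinant(A, logarithm = TRUE)$modulus))  # TRUE
```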
See the proof here