API Reference
Types
laGP.GP — Type
GP{T<:Real, K}

Gaussian Process model backed by AbstractGPs.jl with an isotropic squared-exponential kernel.
This type uses AbstractGPs for the posterior computation while preserving laGP-specific quantities needed for the concentrated likelihood formula.
Fields
- X::Matrix{T}: n x m design matrix (n observations, m dimensions)
- Z::Vector{T}: n response values
- kernel::K: kernel from KernelFunctions.jl
- chol::Cholesky{T}: Cholesky factorization of K + g*I
- KiZ::Vector{T}: K \ Z (precomputed for prediction)
- d::T: lengthscale parameter (laGP parameterization)
- g::T: nugget parameter
- phi::T: Z' * Ki * Z (used for variance scaling)
- ldetK::T: log determinant of K (used for likelihood)
Notes
The AbstractGPs posterior can be reconstructed from (X, Z, kernel, g) when needed. We cache the Cholesky and derived quantities for efficient repeated computations.
laGP.GPsep — Type
GPsep{T<:Real, K}

Separable Gaussian Process model backed by AbstractGPs.jl with an anisotropic (ARD) kernel.
Uses a vector of lengthscales (one per input dimension) to capture varying input sensitivities.
Fields
- X::Matrix{T}: n x m design matrix (n observations, m dimensions)
- Z::Vector{T}: n response values
- kernel::K: ARD kernel from KernelFunctions.jl
- chol::Cholesky{T}: Cholesky factorization of K + g*I
- KiZ::Vector{T}: K \ Z (precomputed for prediction)
- d::Vector{T}: lengthscale parameters (m elements, one per dimension)
- g::T: nugget parameter
- phi::T: Z' * Ki * Z (used for variance scaling)
- ldetK::T: log determinant of K (used for likelihood)
laGP.GPPrediction — Type
GPPrediction{T<:Real}

Result of GP prediction.
Fields
- mean::Vector{T}: predicted mean values
- s2::Vector{T}: predicted variances (the diagonal; the full covariance is returned via GPPredictionFull when lite=false)
- df::Int: degrees of freedom (n observations)
laGP.GPPredictionFull — Type
GPPredictionFull{T<:Real}

Result of GP prediction with full covariance matrix.
Fields
- mean::Vector{T}: predicted mean values
- Sigma::Matrix{T}: full posterior covariance matrix (ntest x ntest)
- df::Int: degrees of freedom (n observations)
Core GP Functions (Isotropic)
laGP.new_gp — Function
new_gp(X, Z, d, g)

Create a new Gaussian Process model using the AbstractGPs.jl backend.
Arguments
- X::Matrix: n x m design matrix (n observations, m dimensions)
- Z::Vector: n response values
- d::Real: lengthscale parameter (laGP parameterization)
- g::Real: nugget parameter
Returns
GP: Gaussian Process model backed by AbstractGPs
laGP.pred_gp — Function
pred_gp(gp, XX; lite=true)

Make predictions at test locations XX using an AbstractGPs-backed GP.
Arguments
- gp::GP: Gaussian Process model
- XX::Matrix: test locations (n_test x m)
- lite::Bool: if true, return only diagonal variances
Returns
GPPrediction: prediction results with mean, s2, and df
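A minimal usage sketch (the random design, responses, and hyperparameter values below are illustrative, not package defaults):

```julia
using laGP

X = rand(20, 2)                         # 20 observations in 2 dimensions
Z = sin.(2pi .* X[:, 1]) .+ 0.1 .* randn(20)

gp = new_gp(X, Z, 0.5, 1e-6)            # lengthscale d = 0.5, nugget g = 1e-6

XX = rand(5, 2)                         # 5 test locations
p = pred_gp(gp, XX)                     # lite=true: diagonal variances only
p.mean, p.s2, p.df
```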
laGP.llik_gp — Function
llik_gp(gp)

Compute the log-likelihood of the GP.
Uses the concentrated likelihood formula from R laGP: llik = -0.5 * (n * log(0.5 * phi) + ldetK)
Arguments
- gp::GP: Gaussian Process model
Returns
Real: log-likelihood value
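Since phi and ldetK are cached fields of the GP struct, the concentrated formula above can be checked directly; a sketch (data and hyperparameters are illustrative):

```julia
using laGP

X = rand(25, 2)
Z = sin.(2pi .* X[:, 1])
gp = new_gp(X, Z, 0.5, 1e-6)

# Reproduce the concentrated log-likelihood from the cached fields
n = length(gp.Z)
llik_gp(gp) ≈ -0.5 * (n * log(0.5 * gp.phi) + gp.ldetK)   # expected to hold
```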
laGP.dllik_gp — Function
dllik_gp(gp; dg=true, dd=true)

Compute the gradient of the log-likelihood w.r.t. d (lengthscale) and g (nugget).
Arguments
- gp::GP: Gaussian Process model
- dg::Bool: compute gradient w.r.t. nugget g (default: true)
- dd::Bool: compute gradient w.r.t. lengthscale d (default: true)
Returns
NamedTuple: (dllg=..., dlld=...) gradients
laGP.d2llik_gp — Function
d2llik_gp(gp; d2g=true, d2d=true)

Compute second derivatives of the log-likelihood w.r.t. d (lengthscale) and g (nugget).
Used by Newton's method for 1D parameter optimization.
Arguments
- gp::GP: Gaussian Process model
- d2g::Bool: compute second derivative w.r.t. nugget g (default: true)
- d2d::Bool: compute second derivative w.r.t. lengthscale d (default: true)
Returns
NamedTuple: (d2llg=..., d2lld=...) second derivatives
laGP.update_gp! — Function
update_gp!(gp; d=nothing, g=nothing)

Update GP hyperparameters and recompute internal quantities.
Arguments
- gp::GP: Gaussian Process model
- d::Real: new lengthscale (optional)
- g::Real: new nugget (optional)
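Together, dllik_gp, d2llik_gp, and update_gp! are the ingredients of the Newton steps used by the MLE routines below. A hand-rolled sketch of one Newton step on the nugget (assuming the returned NamedTuples carry dllg/d2llg when the corresponding flags are true; the clamp value mirrors the documented tmin default of mle_gp! and is illustrative):

```julia
using laGP

X = rand(30, 2)
Z = sin.(2pi .* X[:, 1]) .+ 0.05 .* randn(30)
gp = new_gp(X, Z, 0.5, 0.01)

g1 = dllik_gp(gp; dd=false).dllg        # d llik / d g
g2 = d2llik_gp(gp; d2d=false).d2llg     # d² llik / d g²
g_new = gp.g - g1 / g2                  # one Newton step on the nugget

update_gp!(gp; g=max(g_new, sqrt(eps(Float64))))   # clamp, then refresh caches
llik_gp(gp)
```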
laGP.extend_gp! — Function
extend_gp!(gp, x_new, z_new)

Extend a GP with a new observation using an O(n²) incremental Cholesky update.
This is much faster than rebuilding the GP from scratch when sequentially adding points, as it avoids the O(n³) full Cholesky factorization.
Mathematical Background
Given an existing Cholesky factor L with K = LLᵀ, adding a new point yields

K_new = [K   k]
        [kᵀ  κ]

and the updated Cholesky factor is

L_new = [L   0]
        [lᵀ  λ]

where:
- l = L⁻¹ k (forward solve, O(n²))
- λ = sqrt(κ + g - lᵀl)
Arguments
- gp::GP: Gaussian Process model to extend
- x_new::AbstractVector: new input point (length m)
- z_new::Real: new output value
Returns
gp: The modified GP (for convenience, same object as input)
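A sketch of sequential extension, checking the O(n²) update against a full rebuild (data and hyperparameters are illustrative):

```julia
using laGP

X = rand(30, 2)
Z = sin.(2pi .* X[:, 1])
gp = new_gp(X, Z, 0.5, 1e-6)

x_new = rand(2)
z_new = sin(2pi * x_new[1])
extend_gp!(gp, x_new, z_new)            # O(n²) incremental Cholesky append

# Should agree with a GP rebuilt from scratch on the extended data
gp_ref = new_gp([X; x_new'], [Z; z_new], 0.5, 1e-6)
llik_gp(gp) ≈ llik_gp(gp_ref)
```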
Core GP Functions (Separable)
laGP.new_gp_sep — Function
new_gp_sep(X, Z, d, g)

Create a new separable Gaussian Process model using the AbstractGPs.jl backend.
Arguments
- X::Matrix: n x m design matrix (n observations, m dimensions)
- Z::Vector: n response values
- d::Vector: lengthscale parameters (m elements, one per dimension)
- g::Real: nugget parameter
Returns
GPsep: Separable Gaussian Process model backed by AbstractGPs
laGP.pred_gp_sep — Function
pred_gp_sep(gp, XX; lite=true)

Make predictions at test locations XX using an AbstractGPs-backed separable GP.
Arguments
- gp::GPsep: Separable Gaussian Process model
- XX::Matrix: test locations (n_test x m)
- lite::Bool: if true, return only diagonal variances
Returns
GPPrediction: prediction results with mean, s2, and df
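A sketch of the separable workflow, mirroring the isotropic example (data and values illustrative):

```julia
using laGP

X = rand(40, 3)
Z = X[:, 1] .+ 2 .* X[:, 2].^2          # different sensitivity per dimension
gps = new_gp_sep(X, Z, [0.5, 0.5, 0.5], 1e-6)  # one lengthscale per dimension

XX = rand(10, 3)
p = pred_gp_sep(gps, XX; lite=true)
p.mean, p.s2
```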
laGP.llik_gp_sep — Function
llik_gp_sep(gp)

Compute the log-likelihood of the GPsep.
Arguments
- gp::GPsep: Separable Gaussian Process model
Returns
Real: log-likelihood value
laGP.dllik_gp_sep — Function
dllik_gp_sep(gp; dg=true, dd=true)

Compute the gradient of the log-likelihood w.r.t. d (lengthscales) and g (nugget).
Arguments
- gp::GPsep: Separable Gaussian Process model
- dg::Bool: compute gradient w.r.t. nugget g (default: true)
- dd::Bool: compute gradient w.r.t. lengthscales d (default: true)
Returns
NamedTuple: (dllg=..., dlld=...) gradients
laGP.d2llik_gp_sep_nug — Function
d2llik_gp_sep_nug(gp)

Compute the second derivative of the log-likelihood w.r.t. the nugget g for a separable GP.
Used by Newton's method for 1D nugget optimization.
Arguments
- gp::GPsep: Separable Gaussian Process model
Returns
Real: second derivative d²llik/dg²
laGP.update_gp_sep! — Function
update_gp_sep!(gp; d=nothing, g=nothing)

Update GPsep hyperparameters and recompute internal quantities.
Arguments
- gp::GPsep: Separable Gaussian Process model
- d::Vector{Real}: new lengthscales (optional)
- g::Real: new nugget (optional)
laGP.extend_gp_sep! — Function
extend_gp_sep!(gp, x_new, z_new)

Extend a separable GP with a new observation using an O(n²) incremental Cholesky update.
This is much faster than rebuilding the GP from scratch when sequentially adding points, as it avoids the O(n³) full Cholesky factorization.
Mathematical Background
Given an existing Cholesky factor L with K = LLᵀ, adding a new point yields

K_new = [K   k]
        [kᵀ  κ]

and the updated Cholesky factor is

L_new = [L   0]
        [lᵀ  λ]

where:
- l = L⁻¹ k (forward solve, O(n²))
- λ = sqrt(κ + g - lᵀl)
Arguments
- gp::GPsep: Separable Gaussian Process model to extend
- x_new::AbstractVector: new input point (length m)
- z_new::Real: new output value
Returns
gp: The modified GP (for convenience, same object as input)
MLE Functions (Isotropic)
laGP.mle_gp! — Function
mle_gp!(gp, param; tmax, tmin=sqrt(eps(T)))

Optimize a single GP hyperparameter via maximum likelihood.
Arguments
- gp::GP: Gaussian Process model (modified in-place)
- param::Symbol: parameter to optimize (:d or :g)
- tmax::Real: maximum value for parameter (required)
- tmin::Real: minimum value for parameter (default: sqrt(eps(T)), matching R's behavior)
Returns
NamedTuple: (d=..., g=..., its=..., msg=...) optimization result
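A sketch of single-parameter optimization (the bound tmax=10.0 is illustrative; in practice darg/garg below provide data-driven ranges):

```julia
using laGP

X = rand(30, 2)
Z = sin.(2pi .* X[:, 1]) .+ 0.05 .* randn(30)
gp = new_gp(X, Z, 0.5, 0.01)

# Optimize the lengthscale alone, holding the nugget fixed
res = mle_gp!(gp, :d; tmax=10.0)
res.d, res.its, res.msg
```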
laGP.jmle_gp! — Function
jmle_gp!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Joint MLE optimization of d and g for a GP.
Arguments
- gp::GP: Gaussian Process model (modified in-place)
- drange::Tuple: (min, max) range for d
- grange::Tuple: (min, max) range for g
- maxit::Int: maximum iterations
- verb::Int: verbosity level
- dab::Tuple: (shape, scale) for d prior
- gab::Tuple: (shape, scale) for g prior
Returns
NamedTuple: (d=..., g=..., tot_its=..., msg=...)
laGP.amle_gp! — Function
amle_gp!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Alternating MLE optimization for an isotropic GP (R-style jmleGP).
Alternates between Newton optimization for d and g until convergence. This matches R's laGP algorithm where both d and g use Newton's method.
Arguments
- gp::GP: Gaussian Process model (modified in-place)
- drange::Tuple: (min, max) range for d
- grange::Tuple: (min, max) range for g
- maxit::Int: maximum outer iterations (default: 100)
- verb::Int: verbosity level
- dab::Tuple: (shape, scale) for d prior; if scale=nothing, computed from range
- gab::Tuple: (shape, scale) for g prior; if scale=nothing, computed from range
Returns
NamedTuple: (d=..., g=..., dits=..., gits=..., tot_its=..., msg=...)
laGP.darg — Function
darg(X; d=nothing, ab=(3/2, nothing))

Compute default arguments for the lengthscale parameter.
Based on pairwise distances in the design matrix X. If d is provided, it is used as the returned starting value.
Arguments
- X::Matrix: design matrix
- d::Union{Nothing,Real}: user-specified d (optional)
- ab::Tuple: (shape, scale) for Inverse-Gamma prior; if scale=nothing, computed from range
Returns
NamedTuple: (start=..., min=..., max=..., mle=..., ab=...)
laGP.garg — Function
garg(Z; g=nothing, ab=(3/2, nothing))

Compute default arguments for the nugget parameter.
Based on squared residuals from the mean. If g is provided, it is used as the returned starting value.
Arguments
- Z::Vector: response values
- g::Union{Nothing,Real}: user-specified g (optional)
- ab::Tuple: (shape, scale) for Inverse-Gamma prior; if scale=nothing, computed from range
Returns
NamedTuple: (start=..., min=..., max=..., mle=..., ab=...)
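A sketch of the typical workflow: derive starting values, ranges, and priors from the data with darg/garg, then optimize jointly (data are illustrative):

```julia
using laGP

X = rand(50, 2)
Z = sin.(2pi .* X[:, 1]) .+ 0.05 .* randn(50)

da = darg(X)        # data-driven start/min/max/prior for d
ga = garg(Z)        # data-driven start/min/max/prior for g

gp = new_gp(X, Z, da.start, ga.start)
res = jmle_gp!(gp; drange=(da.min, da.max), grange=(ga.min, ga.max),
               dab=da.ab, gab=ga.ab)
res.d, res.g, res.msg
```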
MLE Functions (Separable)
laGP.mle_gp_sep! — Function
mle_gp_sep!(gp, param, dim; tmax, tmin=sqrt(eps(T)), maxit=100, verb=0, dab=(3/2, nothing))

Optimize separable GP hyperparameters via maximum likelihood.
- If param == :d and dim is provided, optimizes a single lengthscale (1D grid + Brent).
- If param == :d and dim is nothing, optimizes all lengthscales jointly (L-BFGS-B).
- If param == :g, optimizes the nugget (1D grid + Brent).
Arguments
- gp::GPsep: Separable Gaussian Process model (modified in-place)
- param::Symbol: parameter to optimize (:d or :g)
- dim::Union{Int,Nothing}: dimension index for :d (ignored for :g); if nothing, optimizes all lengthscales
- tmax: maximum value(s) for parameter(s); scalar, or a vector for :d
- tmin: minimum value(s) for parameter(s); scalar, or a vector for :d
- maxit::Int: maximum iterations for joint L-BFGS-B (when dim is nothing)
- verb::Int: verbosity for joint L-BFGS-B
- dab: prior tuple for d (pass nothing to disable the prior)
Returns
NamedTuple: (d=..., g=..., its=..., msg=...) optimization result
laGP.jmle_gp_sep! — Function
jmle_gp_sep!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Joint MLE optimization of lengthscales and nugget for a GPsep.
Arguments
- gp::GPsep: Separable Gaussian Process model (modified in-place)
- drange::Union{Tuple,Vector}: range for d parameters
- grange::Tuple: (min, max) range for g
- maxit::Int: maximum iterations
- verb::Int: verbosity level
- dab::Tuple: (shape, scale) for d prior
- gab::Tuple: (shape, scale) for g prior
Returns
NamedTuple: (d=..., g=..., tot_its=..., msg=...)
laGP.amle_gp_sep! — Function
amle_gp_sep!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Alternating MLE optimization for a separable GP (R-style jmleGPsep).
Alternates between L-BFGS optimization for all d dimensions and Newton for g. This matches R's laGP algorithm.
Arguments
- gp::GPsep: Separable Gaussian Process model (modified in-place)
- drange::Union{Tuple,Vector}: range for d parameters
- grange::Tuple: (min, max) range for g
- maxit::Int: maximum outer iterations (default: 100)
- verb::Int: verbosity level
- dab::Tuple: (shape, scale) for d prior; if scale=nothing, computed from range
- gab::Tuple: (shape, scale) for g prior; if scale=nothing, computed from range
Returns
NamedTuple: (d=..., g=..., dits=..., gits=..., tot_its=..., conv=..., msg=...)
laGP.darg_sep — Function
darg_sep(X; d=nothing, ab=(3/2, nothing))

Compute default arguments for lengthscale parameters (separable version).
Mirrors laGP's darg behavior: uses pairwise squared distances to set start/min/max, then applies those same ranges to each dimension unless d is user-specified.
Arguments
- X::Matrix: design matrix
- d::Union{Nothing,Real,Vector}: user-specified d start(s) (optional)
- ab::Tuple: (shape, scale) for Inverse-Gamma prior; if scale=nothing, computed from range
Returns
NamedTuple: (ranges=..., ab=...) where ranges is Vector of per-dimension NamedTuples
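The separable analogue of the workflow above. Since darg_sep applies the same bounds to every dimension, a single (min, max) tuple is passed for drange; this sketch assumes each entry of ranges carries the same start/min/max fields as darg's return value (data illustrative):

```julia
using laGP

X = rand(60, 3)
Z = X[:, 1] .+ sin.(2pi .* X[:, 2]) .+ 0.05 .* randn(60)

da = darg_sep(X)    # per-dimension ranges (same bounds for each) plus prior
ga = garg(Z)

d0 = [r.start for r in da.ranges]
gps = new_gp_sep(X, Z, d0, ga.start)

res = jmle_gp_sep!(gps; drange=(da.ranges[1].min, da.ranges[1].max),
                   grange=(ga.min, ga.max), dab=da.ab, gab=ga.ab)
res.d, res.g
```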
Acquisition Functions
laGP.alc_gp — Function
alc_gp(gp, Xcand, Xref)

Compute Active Learning Cohn (ALC) acquisition values.
ALC measures expected variance reduction at reference points Xref if we were to add each candidate point from Xcand to the design.
alc_gp(gp::GPsep, Xcand, Xref)

Compute Active Learning Cohn (ALC) acquisition values for a separable GP.
laGP.mspe_gp — Function
mspe_gp(gp, Xcand, Xref)

Compute Mean Squared Prediction Error (MSPE) acquisition values.
MSPE is related to ALC and includes the current prediction variance.
Arguments
- gp::GP: Gaussian Process model
- Xcand::Matrix: candidate points (n_cand x m)
- Xref::Matrix: reference points (n_ref x m)
Returns
Vector: MSPE values for each candidate point
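A sketch of scoring candidates; by the usual laGP conventions one would add the candidate that maximizes ALC (largest expected variance reduction) or minimizes MSPE (data and values illustrative):

```julia
using laGP

X = rand(30, 2)
Z = sin.(2pi .* X[:, 1])
gp = new_gp(X, Z, 0.5, 1e-6)

Xcand = rand(100, 2)    # candidate design points
Xref  = rand(10, 2)     # locations where variance reduction matters

alc  = alc_gp(gp, Xcand, Xref)
mspe = mspe_gp(gp, Xcand, Xref)

best_alc  = argmax(alc)     # largest expected variance reduction
best_mspe = argmin(mspe)    # smallest expected squared prediction error
```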
Local GP Functions (Isotropic)
laGP.lagp — Function
lagp(Xref, start, endpt, X, Z; d, g, method=:alc, close=1000, verb=0)

Local Approximate GP prediction at a single reference point.
Builds a local GP by starting with nearest neighbors and sequentially adding points that maximize the chosen acquisition function.
Arguments
- Xref::Vector: single reference point (length m)
- start::Int: initial number of nearest neighbors
- endpt::Int: final local design size
- X::Matrix: full training design (n x m)
- Z::Vector: full training responses
- d::Real: lengthscale parameter
- g::Real: nugget parameter
- method::Symbol: acquisition method (:alc, :mspe, or :nn)
- close::Int: size of closest candidate pool (default 1000, matching laGP)
- verb::Int: verbosity level
Returns
NamedTuple: (mean=..., var=..., df=..., indices=...)
laGP.agp — Function
agp(X, Z, XX; start=6, endpt=50, close=1000, d, g, method=:alc, verb=0, parallel=true)

Approximate GP predictions at multiple reference points.
Calls lagp for each row of XX, optionally in parallel using threads.
Arguments
- X::Matrix: training design (n x m)
- Z::Vector: training responses
- XX::Matrix: test/reference points (n_test x m)
- start::Int: initial number of nearest neighbors
- endpt::Int: final local design size
- close::Int: size of closest candidate pool (default 1000, matching laGP)
- d::Union{Real,NamedTuple}: lengthscale parameter or (start, mle, min, max)
- g::Union{Real,NamedTuple}: nugget parameter or (start, mle, min, max)
- method::Symbol: acquisition method (:alc, :mspe, or :nn)
- verb::Int: verbosity level
- parallel::Bool: use multi-threading
Returns
NamedTuple: (mean=..., var=..., df=..., mle=...)
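A sketch of local prediction, single-point and batched (training data and hyperparameters illustrative):

```julia
using laGP

X = rand(2000, 2)
Z = sin.(2pi .* X[:, 1]) .* X[:, 2]

# Single reference point: start from 6 neighbors, grow to 50 via ALC
out1 = lagp([0.5, 0.5], 6, 50, X, Z; d=0.5, g=1e-4, method=:alc)
out1.mean, out1.var, out1.indices

# Many reference points, threaded by default
XX = rand(20, 2)
out = agp(X, Z, XX; start=6, endpt=50, d=0.5, g=1e-4, method=:alc)
out.mean, out.var
```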
Local GP Functions (Separable)
laGP.lagp_sep — Function
lagp_sep(Xref, start, endpt, X, Z; d, g, method=:alc, close=1000, verb=0)

Local Approximate GP prediction at a single reference point using a separable GP.
Builds a local GP with per-dimension lengthscales by starting with nearest neighbors and sequentially adding points that maximize the chosen acquisition function.
Arguments
- Xref::Vector: single reference point (length m)
- start::Int: initial number of nearest neighbors
- endpt::Int: final local design size
- X::Matrix: full training design (n x m)
- Z::Vector: full training responses
- d::Union{Real,Vector{<:Real}}: per-dimension lengthscale parameters (a scalar is replicated)
- g::Real: nugget parameter
- method::Symbol: acquisition method (:alc or :nn); :mspe is not supported for separable GPs
- close::Int: size of closest candidate pool (default 1000)
- verb::Int: verbosity level
Returns
NamedTuple: (mean=..., var=..., df=..., indices=...)
laGP.agp_sep — Function
agp_sep(X, Z, XX; start=6, endpt=50, close=1000, d, g, method=:alc, verb=0, parallel=true)

Approximate GP predictions at multiple reference points using a separable GP.
Calls lagp_sep for each row of XX, optionally in parallel using threads.
Arguments
- X::Matrix: training design (n x m)
- Z::Vector: training responses
- XX::Matrix: test/reference points (n_test x m)
- start::Int: initial number of nearest neighbors
- endpt::Int: final local design size
- close::Int: size of closest candidate pool (default 1000)
- d::Union{Real,Vector{<:Real},NamedTuple}: lengthscale parameters or (start, mle, min, max)
- g::Union{Real,NamedTuple}: nugget parameter or (start, mle, min, max)
- method::Symbol: acquisition method (:alc or :nn)
- verb::Int: verbosity level
- parallel::Bool: use multi-threading
Returns
NamedTuple: (mean=..., var=..., df=..., mle=...) where mle.d is a Matrix (n_test x m) when MLE enabled
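The separable counterpart of the agp sketch above; note that :mspe is not supported here, so :alc or :nn must be used (data illustrative):

```julia
using laGP

X = rand(2000, 3)
Z = X[:, 1] .+ sin.(2pi .* X[:, 2])
XX = rand(10, 3)

# Per-dimension lengthscales; method must be :alc or :nn for separable GPs
out = agp_sep(X, Z, XX; start=6, endpt=50, d=[0.5, 0.5, 0.5], g=1e-4,
              method=:alc)
out.mean, out.var
```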
Index
- laGP.GP
- laGP.GPPrediction
- laGP.GPPredictionFull
- laGP.GPsep
- laGP.agp
- laGP.agp_sep
- laGP.alc_gp
- laGP.amle_gp!
- laGP.amle_gp_sep!
- laGP.d2llik_gp
- laGP.d2llik_gp_sep_nug
- laGP.darg
- laGP.darg_sep
- laGP.dllik_gp
- laGP.dllik_gp_sep
- laGP.extend_gp!
- laGP.extend_gp_sep!
- laGP.garg
- laGP.jmle_gp!
- laGP.jmle_gp_sep!
- laGP.lagp
- laGP.lagp_sep
- laGP.llik_gp
- laGP.llik_gp_sep
- laGP.mle_gp!
- laGP.mle_gp_sep!
- laGP.mspe_gp
- laGP.new_gp
- laGP.new_gp_sep
- laGP.pred_gp
- laGP.pred_gp_sep
- laGP.update_gp!
- laGP.update_gp_sep!