API Reference

Types

laGP.GP (Type)
GP{T<:Real, K}

Gaussian Process model backed by AbstractGPs.jl with isotropic squared-exponential kernel.

This type uses AbstractGPs for the posterior computation while preserving laGP-specific quantities needed for the concentrated likelihood formula.

Fields

  • X::Matrix{T}: n x m design matrix (n observations, m dimensions)
  • Z::Vector{T}: n response values
  • kernel::K: Kernel from KernelFunctions.jl
  • chol::Cholesky{T}: Cholesky factorization of K + g*I
  • KiZ::Vector{T}: K \ Z (precomputed for prediction)
  • d::T: lengthscale parameter (laGP parameterization)
  • g::T: nugget parameter
  • phi::T: Z' * Ki * Z (used for variance scaling)
  • ldetK::T: log determinant of K (used for likelihood)

Notes

The AbstractGPs posterior can be reconstructed from (X, Z, kernel, g) when needed. We cache the Cholesky and derived quantities for efficient repeated computations.

laGP.GPsep (Type)
GPsep{T<:Real, K}

Separable Gaussian Process model backed by AbstractGPs.jl with anisotropic kernel.

Uses a vector of lengthscales (one per input dimension) to capture varying input sensitivities.

Fields

  • X::Matrix{T}: n x m design matrix (n observations, m dimensions)
  • Z::Vector{T}: n response values
  • kernel::K: ARD kernel from KernelFunctions.jl
  • chol::Cholesky{T}: Cholesky factorization of K + g*I
  • KiZ::Vector{T}: K \ Z (precomputed for prediction)
  • d::Vector{T}: lengthscale parameters (m elements, one per dimension)
  • g::T: nugget parameter
  • phi::T: Z' * Ki * Z (used for variance scaling)
  • ldetK::T: log determinant of K (used for likelihood)

laGP.GPPrediction (Type)
GPPrediction{T<:Real}

Result of GP prediction.

Fields

  • mean::Vector{T}: predicted mean values
  • s2::Vector{T}: predicted pointwise variances (the lite=true case; the full covariance is returned via GPPredictionFull)
  • df::Int: degrees of freedom (n observations)

laGP.GPPredictionFull (Type)
GPPredictionFull{T<:Real}

Result of GP prediction with full covariance matrix.

Fields

  • mean::Vector{T}: predicted mean values
  • Sigma::Matrix{T}: full posterior covariance matrix (ntest x ntest)
  • df::Int: degrees of freedom (n observations)

Core GP Functions (Isotropic)

laGP.new_gp (Function)
new_gp(X, Z, d, g)

Create a new Gaussian Process model using AbstractGPs.jl backend.

Arguments

  • X::Matrix: n x m design matrix (n observations, m dimensions)
  • Z::Vector: n response values
  • d::Real: lengthscale parameter (laGP parameterization)
  • g::Real: nugget parameter

Returns

  • GP: Gaussian Process model backed by AbstractGPs
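
A minimal construction sketch, assuming the package is loaded as laGP and exports new_gp with the signature above; the toy data is purely illustrative:

using laGP

X = rand(20, 2)                          # 20 x 2 design matrix
Z = sin.(2pi .* X[:, 1]) .+ X[:, 2].^2   # 20 response values
gp = new_gp(X, Z, 0.5, 1e-6)             # lengthscale d = 0.5, nugget g = 1e-6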

laGP.pred_gp (Function)
pred_gp(gp, XX; lite=true)

Make predictions at test locations XX using AbstractGPs-backed GP.

Arguments

  • gp::GP: Gaussian Process model
  • XX::Matrix: test locations (n_test x m)
  • lite::Bool: if true, return only diagonal variances

Returns

  • GPPrediction: prediction results with mean, s2, and df
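
Continuing the sketch from new_gp above, a prediction call; the field names follow the GPPrediction type documented earlier:

XX = rand(100, 2)              # 100 x 2 test locations
p = pred_gp(gp, XX)            # lite=true by default: diagonal variances only
p.mean, p.s2, p.df             # predictive means, variances, degrees of freedom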

laGP.llik_gp (Function)
llik_gp(gp)

Compute the log-likelihood of the GP.

Uses the concentrated likelihood formula from R laGP: llik = -0.5 * (n * log(0.5 * phi) + ldetK)

Arguments

  • gp::GP: Gaussian Process model

Returns

  • Real: log-likelihood value

laGP.dllik_gp (Function)
dllik_gp(gp; dg=true, dd=true)

Compute gradient of log-likelihood w.r.t. d (lengthscale) and g (nugget).

Arguments

  • gp::GP: Gaussian Process model
  • dg::Bool: compute gradient w.r.t. nugget g (default: true)
  • dd::Bool: compute gradient w.r.t. lengthscale d (default: true)

Returns

  • NamedTuple: (dllg=..., dlld=...) gradients

laGP.d2llik_gp (Function)
d2llik_gp(gp; d2g=true, d2d=true)

Compute second derivatives of log-likelihood w.r.t. d (lengthscale) and g (nugget).

Used by Newton's method for 1D parameter optimization.

Arguments

  • gp::GP: Gaussian Process model
  • d2g::Bool: compute second derivative w.r.t. nugget g (default: true)
  • d2d::Bool: compute second derivative w.r.t. lengthscale d (default: true)

Returns

  • NamedTuple: (d2llg=..., d2lld=...) second derivatives

laGP.update_gp! (Function)
update_gp!(gp; d=nothing, g=nothing)

Update GP hyperparameters and recompute internal quantities.

Arguments

  • gp::GP: Gaussian Process model
  • d::Real: new lengthscale (optional)
  • g::Real: new nugget (optional)

laGP.extend_gp! (Function)
extend_gp!(gp, x_new, z_new)

Extend a GP with a new observation using O(n²) incremental Cholesky update.

This is much faster than rebuilding the GP from scratch when sequentially adding points, as it avoids the O(n³) full Cholesky factorization.

Mathematical Background

Given existing Cholesky L where K = LLᵀ, when adding a new point:

K_new = [K    k  ]
        [kᵀ   κ  ]

The updated Cholesky is:

L_new = [L    0]
        [lᵀ   λ]

Where:

  • l = L⁻¹ k (forward solve, O(n²))
  • λ = sqrt(κ + g - lᵀl)

Arguments

  • gp::GP: Gaussian Process model to extend
  • x_new::AbstractVector: new input point (length m)
  • z_new::Real: new output value

Returns

  • gp: The modified GP (for convenience, same object as input)
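
The step above can be written directly against LinearAlgebra's Cholesky type. A self-contained sketch (extend_cholesky is a hypothetical helper, not part of the package API; k and kappa would come from the kernel):

using LinearAlgebra

# Grow a factorization of K + g*I by one row/column. k is the covariance
# vector between the new point and the old design; kappa is the new point's
# self-covariance (nugget excluded).
function extend_cholesky(chol::Cholesky, k::AbstractVector, kappa::Real, g::Real)
    L = chol.L
    l = L \ k                              # forward solve, O(n^2)
    lambda = sqrt(kappa + g - dot(l, l))   # new diagonal entry
    n = size(L, 1)
    Lnew = zeros(eltype(L), n + 1, n + 1)
    Lnew[1:n, 1:n] .= L
    Lnew[n + 1, 1:n] .= l
    Lnew[n + 1, n + 1] = lambda
    return Cholesky(LowerTriangular(Lnew))
end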

Core GP Functions (Separable)

laGP.new_gp_sep (Function)
new_gp_sep(X, Z, d, g)

Create a new separable Gaussian Process model using AbstractGPs.jl backend.

Arguments

  • X::Matrix: n x m design matrix (n observations, m dimensions)
  • Z::Vector: n response values
  • d::Vector: lengthscale parameters (m elements, one per dimension)
  • g::Real: nugget parameter

Returns

  • GPsep: Separable Gaussian Process model backed by AbstractGPs
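
A minimal sketch with illustrative data; the per-dimension lengthscales let dimension 1 vary faster than dimension 2:

X = rand(30, 2)
Z = sin.(4 .* X[:, 1]) .+ 0.1 .* X[:, 2]     # dimension 1 dominates
gpsep = new_gp_sep(X, Z, [0.2, 2.0], 1e-6)   # one d per column, shared nugget g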

laGP.pred_gp_sep (Function)
pred_gp_sep(gp, XX; lite=true)

Make predictions at test locations XX using AbstractGPs-backed separable GP.

Arguments

  • gp::GPsep: Separable Gaussian Process model
  • XX::Matrix: test locations (n_test x m)
  • lite::Bool: if true, return only diagonal variances

Returns

  • GPPrediction: prediction results with mean, s2, and df

laGP.llik_gp_sep (Function)
llik_gp_sep(gp)

Compute the log-likelihood of the GPsep.

Arguments

  • gp::GPsep: Separable Gaussian Process model

Returns

  • Real: log-likelihood value

laGP.dllik_gp_sep (Function)
dllik_gp_sep(gp; dg=true, dd=true)

Compute gradient of log-likelihood w.r.t. d (lengthscales) and g (nugget).

Arguments

  • gp::GPsep: Separable Gaussian Process model
  • dg::Bool: compute gradient w.r.t. nugget g (default: true)
  • dd::Bool: compute gradient w.r.t. lengthscales d (default: true)

Returns

  • NamedTuple: (dllg=..., dlld=...) gradients

laGP.d2llik_gp_sep_nug (Function)
d2llik_gp_sep_nug(gp)

Compute second derivative of log-likelihood w.r.t. nugget g for separable GP.

Used by Newton's method for 1D nugget optimization.

Arguments

  • gp::GPsep: Separable Gaussian Process model

Returns

  • Real: second derivative d²llik/dg²

laGP.update_gp_sep! (Function)
update_gp_sep!(gp; d=nothing, g=nothing)

Update GPsep hyperparameters and recompute internal quantities.

Arguments

  • gp::GPsep: Separable Gaussian Process model
  • d::Vector{Real}: new lengthscales (optional)
  • g::Real: new nugget (optional)

laGP.extend_gp_sep! (Function)
extend_gp_sep!(gp, x_new, z_new)

Extend a separable GP with a new observation using O(n²) incremental Cholesky update.

This is much faster than rebuilding the GP from scratch when sequentially adding points, as it avoids the O(n³) full Cholesky factorization.

Mathematical Background

Given existing Cholesky L where K = LLᵀ, when adding a new point:

K_new = [K    k  ]
        [kᵀ   κ  ]

The updated Cholesky is:

L_new = [L    0]
        [lᵀ   λ]

Where:

  • l = L⁻¹ k (forward solve, O(n²))
  • λ = sqrt(κ + g - lᵀl)

Arguments

  • gp::GPsep: Separable Gaussian Process model to extend
  • x_new::AbstractVector: new input point (length m)
  • z_new::Real: new output value

Returns

  • gp: The modified GP (for convenience, same object as input)

MLE Functions (Isotropic)

laGP.mle_gp! (Function)
mle_gp!(gp, param; tmax, tmin=sqrt(eps(T)))

Optimize a single GP hyperparameter via maximum likelihood.

Arguments

  • gp::GP: Gaussian Process model (modified in-place)
  • param::Symbol: parameter to optimize (:d or :g)
  • tmax::Real: maximum value for parameter (required)
  • tmin::Real: minimum value for parameter (default: sqrt(eps(T)), matching R's behavior)

Returns

  • NamedTuple: (d=..., g=..., its=..., msg=...) optimization result
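
A sketch of one-parameter optimization on a gp built as in the new_gp example; the bounds are illustrative:

res_d = mle_gp!(gp, :d; tmax=5.0)   # optimize the lengthscale
res_g = mle_gp!(gp, :g; tmax=1.0)   # then the nugget
res_d.d, res_g.g                    # optimized values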

laGP.jmle_gp! (Function)
jmle_gp!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Joint MLE optimization of d and g for GP.

Arguments

  • gp::GP: Gaussian Process model (modified in-place)
  • drange::Tuple: (min, max) range for d
  • grange::Tuple: (min, max) range for g
  • maxit::Int: maximum iterations
  • verb::Int: verbosity level
  • dab::Tuple: (shape, scale) for d prior
  • gab::Tuple: (shape, scale) for g prior

Returns

  • NamedTuple: (d=..., g=..., tot_its=..., msg=...)

laGP.amle_gp! (Function)
amle_gp!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Alternating MLE optimization for isotropic GP (R-style jmleGP).

Alternates between Newton optimization for d and g until convergence. This matches R's laGP algorithm where both d and g use Newton's method.

Arguments

  • gp::GP: Gaussian Process model (modified in-place)
  • drange::Tuple: (min, max) range for d
  • grange::Tuple: (min, max) range for g
  • maxit::Int: maximum outer iterations (default: 100)
  • verb::Int: verbosity level
  • dab::Tuple: (shape, scale) for d prior; if scale=nothing, computed from range
  • gab::Tuple: (shape, scale) for g prior; if scale=nothing, computed from range

Returns

  • NamedTuple: (d=..., g=..., dits=..., gits=..., tot_its=..., msg=...)

laGP.darg (Function)
darg(X; d=nothing, ab=(3/2, nothing))

Compute default arguments for lengthscale parameter.

Based on pairwise distances in the design matrix X. If d is provided, it is used as the returned starting value.

Arguments

  • X::Matrix: design matrix
  • d::Union{Nothing,Real}: user-specified d (optional)
  • ab::Tuple: (shape, scale) for Inverse-Gamma prior; if scale=nothing, computed from range

Returns

  • NamedTuple: (start=..., min=..., max=..., mle=..., ab=...)

laGP.garg (Function)
garg(Z; g=nothing, ab=(3/2, nothing))

Compute default arguments for nugget parameter.

Based on squared residuals from the mean. If g is provided, it is used as the returned starting value.

Arguments

  • Z::Vector: response values
  • g::Union{Nothing,Real}: user-specified g (optional)
  • ab::Tuple: (shape, scale) for Inverse-Gamma prior; if scale=nothing, computed from range

Returns

  • NamedTuple: (start=..., min=..., max=..., mle=..., ab=...)
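
darg and garg are typically combined to seed a joint fit. A sketch reusing X and Z from the earlier examples and assuming the return fields documented above:

da = darg(X)    # lengthscale start/range/prior from pairwise distances
ga = garg(Z)    # nugget start/range/prior from squared residuals
gp = new_gp(X, Z, da.start, ga.start)
fit = jmle_gp!(gp; drange=(da.min, da.max), grange=(ga.min, ga.max),
               dab=da.ab, gab=ga.ab)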

MLE Functions (Separable)

laGP.mle_gp_sep! (Function)
mle_gp_sep!(gp, param, dim; tmax, tmin=sqrt(eps(T)), maxit=100, verb=0, dab=(3/2, nothing))

Optimize separable GP hyperparameters via maximum likelihood.

  • If param == :d and dim is provided, optimizes a single lengthscale (1D grid + Brent).
  • If param == :d and dim is nothing, optimizes all lengthscales jointly (L-BFGS-B).
  • If param == :g, optimizes the nugget (1D grid + Brent).

Arguments

  • gp::GPsep: Separable Gaussian Process model (modified in-place)
  • param::Symbol: parameter to optimize (:d or :g)
  • dim::Union{Int,Nothing}: dimension index for :d (ignored for :g). If nothing, optimizes all d.
  • tmax: maximum value(s) for parameter(s). Scalar or vector for :d.
  • tmin: minimum value(s) for parameter(s). Scalar or vector for :d.
  • maxit::Int: maximum iterations for joint L-BFGS-B (when dim is nothing)
  • verb::Int: verbosity for joint L-BFGS-B
  • dab: prior tuple for d (pass nothing to disable prior)

Returns

  • NamedTuple: (d=..., g=..., its=..., msg=...) optimization result
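
A sketch of the three modes listed above, on a gpsep built as in the new_gp_sep example; the bounds are illustrative:

mle_gp_sep!(gpsep, :d, 1; tmax=5.0)          # Brent on the first lengthscale only
mle_gp_sep!(gpsep, :d, nothing; tmax=5.0)    # joint L-BFGS-B over all lengthscales
mle_gp_sep!(gpsep, :g, nothing; tmax=1.0)    # Brent on the nugget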

laGP.jmle_gp_sep! (Function)
jmle_gp_sep!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Joint MLE optimization of lengthscales and nugget for GPsep.

Arguments

  • gp::GPsep: Separable Gaussian Process model (modified in-place)
  • drange::Union{Tuple,Vector}: range for d parameters
  • grange::Tuple: (min, max) range for g
  • maxit::Int: maximum iterations
  • verb::Int: verbosity level
  • dab::Tuple: (shape, scale) for d prior
  • gab::Tuple: (shape, scale) for g prior

Returns

  • NamedTuple: (d=..., g=..., tot_its=..., msg=...)

laGP.amle_gp_sep! (Function)
amle_gp_sep!(gp; drange, grange, maxit=100, verb=0, dab=(3/2, nothing), gab=(3/2, nothing))

Alternating MLE optimization for separable GP (R-style jmleGPsep).

Alternates between L-BFGS optimization for all d dimensions and Newton for g. This matches R's laGP algorithm.

Arguments

  • gp::GPsep: Separable Gaussian Process model (modified in-place)
  • drange::Union{Tuple,Vector}: range for d parameters
  • grange::Tuple: (min, max) range for g
  • maxit::Int: maximum outer iterations (default: 100)
  • verb::Int: verbosity level
  • dab::Tuple: (shape, scale) for d prior; if scale=nothing, computed from range
  • gab::Tuple: (shape, scale) for g prior; if scale=nothing, computed from range

Returns

  • NamedTuple: (d=..., g=..., dits=..., gits=..., tot_its=..., conv=..., msg=...)

laGP.darg_sep (Function)
darg_sep(X; d=nothing, ab=(3/2, nothing))

Compute default arguments for lengthscale parameters (separable version).

Mirrors laGP's darg behavior: uses pairwise squared distances to set start/min/max, then applies those same ranges to each dimension unless d is user-specified.

Arguments

  • X::Matrix: design matrix
  • d::Union{Nothing,Real,Vector}: user-specified d start(s) (optional)
  • ab::Tuple: (shape, scale) for Inverse-Gamma prior; if scale=nothing, computed from range

Returns

  • NamedTuple: (ranges=..., ab=...) where ranges is Vector of per-dimension NamedTuples

Acquisition Functions

laGP.alc_gp (Function)
alc_gp(gp, Xcand, Xref)

Compute Active Learning Cohn (ALC) acquisition values.

ALC measures expected variance reduction at reference points Xref if we were to add each candidate point from Xcand to the design.

alc_gp(gp::GPsep, Xcand, Xref)

Compute Active Learning Cohn (ALC) acquisition values for separable GP.

laGP.mspe_gp (Function)
mspe_gp(gp, Xcand, Xref)

Compute Mean Squared Prediction Error (MSPE) acquisition values.

MSPE is related to ALC and includes the current prediction variance.

Arguments

  • gp::GP: Gaussian Process model
  • Xcand::Matrix: candidate points (n_cand x m)
  • Xref::Matrix: reference points (n_ref x m)

Returns

  • Vector: MSPE values for each candidate point
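
A candidate-selection sketch using either criterion, reusing the gp from earlier examples; note that ALC is typically maximized (largest expected variance reduction) while MSPE is minimized:

Xcand = rand(200, 2)               # candidate pool
Xref  = rand(10, 2)                # reference locations
alc   = alc_gp(gp, Xcand, Xref)    # one value per candidate
mspe  = mspe_gp(gp, Xcand, Xref)
best  = argmax(alc)                # or argmin(mspe)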

Local GP Functions (Isotropic)

laGP.lagp (Function)
lagp(Xref, start, endpt, X, Z; d, g, method=:alc, close=1000, verb=0)

Local Approximate GP prediction at a single reference point.

Builds a local GP by starting with nearest neighbors and sequentially adding points that maximize the chosen acquisition function.

Arguments

  • Xref::Vector: single reference point (length m)
  • start::Int: initial number of nearest neighbors
  • endpt::Int: final local design size
  • X::Matrix: full training design (n x m)
  • Z::Vector: full training responses
  • d::Real: lengthscale parameter
  • g::Real: nugget parameter
  • method::Symbol: acquisition method (:alc, :mspe, or :nn)
  • close::Int: size of closest candidate pool (default 1000, matching laGP)
  • verb::Int: verbosity level

Returns

  • NamedTuple: (mean=..., var=..., df=..., indices=...)
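
A single-point sketch, reusing the X and Z from the earlier examples:

xref = [0.5, 0.5]                  # one reference point of length m
out = lagp(xref, 6, 50, X, Z; d=0.5, g=1e-6, method=:alc)
out.mean, out.var, out.indices     # prediction plus the chosen local design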

laGP.agp (Function)
agp(X, Z, XX; start=6, endpt=50, close=1000, d, g, method=:alc, verb=0, parallel=true)

Approximate GP predictions at multiple reference points.

Calls lagp for each row of XX, optionally in parallel using threads.

Arguments

  • X::Matrix: training design (n x m)
  • Z::Vector: training responses
  • XX::Matrix: test/reference points (n_test x m)
  • start::Int: initial number of nearest neighbors
  • endpt::Int: final local design size
  • close::Int: size of closest candidate pool (default 1000, matching laGP)
  • d::Union{Real,NamedTuple}: lengthscale parameter or (start, mle, min, max)
  • g::Union{Real,NamedTuple}: nugget parameter or (start, mle, min, max)
  • method::Symbol: acquisition method (:alc, :mspe, or :nn)
  • verb::Int: verbosity level
  • parallel::Bool: use multi-threading

Returns

  • NamedTuple: (mean=..., var=..., df=..., mle=...)
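
A multi-point sketch; passing a NamedTuple for d or g (fields as documented above) turns on local MLE per test point, with illustrative bounds:

XX = rand(500, 2)
out = agp(X, Z, XX; d=0.5, g=1e-6, method=:alc)    # fixed hyperparameters
out2 = agp(X, Z, XX; g=1e-6,
           d=(start=0.5, mle=true, min=1e-3, max=5.0))   # local MLE for d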

Local GP Functions (Separable)

laGP.lagp_sep (Function)
lagp_sep(Xref, start, endpt, X, Z; d, g, method=:alc, close=1000, verb=0)

Local Approximate GP prediction at a single reference point using separable GP.

Builds a local GP with per-dimension lengthscales by starting with nearest neighbors and sequentially adding points that maximize the chosen acquisition function.

Arguments

  • Xref::Vector: single reference point (length m)
  • start::Int: initial number of nearest neighbors
  • endpt::Int: final local design size
  • X::Matrix: full training design (n x m)
  • Z::Vector: full training responses
  • d::Union{Real,Vector{<:Real}}: per-dimension lengthscale parameters (a scalar is replicated across all dimensions)
  • g::Real: nugget parameter
  • method::Symbol: acquisition method (:alc or :nn); :mspe is not supported for separable GPs
  • close::Int: size of closest candidate pool (default 1000)
  • verb::Int: verbosity level

Returns

  • NamedTuple: (mean=..., var=..., df=..., indices=...)

laGP.agp_sep (Function)
agp_sep(X, Z, XX; start=6, endpt=50, close=1000, d, g, method=:alc, verb=0, parallel=true)

Approximate GP predictions at multiple reference points using separable GP.

Calls lagp_sep for each row of XX, optionally in parallel using threads.

Arguments

  • X::Matrix: training design (n x m)
  • Z::Vector: training responses
  • XX::Matrix: test/reference points (n_test x m)
  • start::Int: initial number of nearest neighbors
  • endpt::Int: final local design size
  • close::Int: size of closest candidate pool (default 1000)
  • d::Union{Real,Vector{<:Real},NamedTuple}: lengthscale parameters or (start, mle, min, max)
  • g::Union{Real,NamedTuple}: nugget parameter or (start, mle, min, max)
  • method::Symbol: acquisition method (:alc or :nn)
  • verb::Int: verbosity level
  • parallel::Bool: use multi-threading

Returns

  • NamedTuple: (mean=..., var=..., df=..., mle=...) where mle.d is a Matrix (n_test x m) when MLE enabled
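
A separable sketch mirroring agp, reusing X, Z, and XX from the earlier examples; with local MLE enabled, each row of mle.d holds the lengthscales fit at the corresponding test point:

out = agp_sep(X, Z, XX; d=[0.2, 2.0], g=1e-6, method=:alc)
outm = agp_sep(X, Z, XX; g=1e-6,
               d=(start=0.5, mle=true, min=1e-3, max=5.0))
size(outm.mle.d)                   # (n_test, m) when MLE is enabled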

Index