Random Material Fields in Beams: The Karhunen-Loève Expansion
A beam’s elastic modulus $E$ is never truly constant. Curing gradients, aggregate scatter, and manufacturing variability make it vary from point to point. The result is a random field $E(x)$ where nearby points are correlated.
A deterministic analysis uses one value of $E$ and gets one deflection. But the real stiffness varies in space, so the deflection becomes a random variable. Its distribution depends on where the soft and stiff zones fall.
The Karhunen-Loève (KL) expansion represents this infinite-dimensional field as a series of orthogonal spatial modes, each driven by a single random number. Truncating the series after a few terms gives a finite representation that preserves the spatial correlation, and the result can be plugged directly into a finite element model to propagate material uncertainty into structural response.
Interactive model
The 3D beam is colour-coded by local stiffness (red = soft, blue = stiff). Deflected neutral axis in green.
Step 1: define the covariance kernel
A Gaussian random field $E(x)$ on $[0, L]$ is specified by its mean $\mu_E$ and a covariance kernel $C(x_1,x_2)$. The kernel encodes how stiffness at one point relates to stiffness at another. The exponential kernel is a standard choice:
\[C(x_1,x_2) = \sigma_E^2\,\exp\!\left(-\frac{|x_1-x_2|}{l_c}\right)\]The correlation length $l_c$ controls how smooth the field is. Large $l_c$ produces slowly varying fields. Small $l_c$ produces rapid fluctuations. This kernel is positive definite and physically motivated: correlation decays exponentially with distance, which matches most material properties.
Slide $l_c$ to see the covariance matrix and its midpoint slice.
Small $l_c$: correlation hugs the diagonal, only nearby points are related. Large $l_c$: the whole matrix fills in and the beam acts as one random number.
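As a minimal sketch (using the same parameter values as the Python listing at the end of the article), the kernel can be evaluated on a grid and its basic properties checked:

```python
import numpy as np

# Parameter values taken from the listing at the end of the article
L, n = 5.0, 61        # beam length [m], grid points
sigma, lc = 4.5, 2.0  # std dev [GPa], correlation length [m]

x = np.linspace(0, L, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lc)

# Positive definiteness: every eigenvalue of the covariance matrix is > 0
assert np.linalg.eigvalsh(C).min() > 0

# Correlation drops to 1/e at a separation of one correlation length
i = int(np.argmin(np.abs(x - lc)))   # grid point nearest x = lc
print(C[0, i] / sigma**2)            # ~ exp(-1) ≈ 0.368
```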
Step 2: extract eigenmodes from the kernel
The covariance kernel is a continuous operator. To reduce it to a finite representation, solve the Fredholm eigenvalue problem:
\[\int_0^L C(x, x')\, e_k(x')\, dx' = \lambda_k\, e_k(x)\]This produces eigenvalue-eigenfunction pairs $(\lambda_k, e_k)$ with three key properties:
- Variance partition: $\sum \lambda_k = \sigma_E^2 L$. The eigenvalues split the total field variance across modes.
- Orthonormality: $\int e_j\,e_k\,dx = \delta_{jk}$. The modes are independent spatial patterns.
- Decay: smooth fields (large $l_c$) concentrate most variance in the first few eigenvalues. Rough fields spread it across many modes.
Each $e_k(x)$ is a spatial shape. Mode 1 is the smoothest global pattern. Mode $k$ has approximately $k{-}1$ zero crossings. In any realisation, each mode is weighted by $\sqrt{\lambda_k}\,\xi_k$ where $\xi_k \sim \mathcal{N}(0,1)$.
Faded curves show all 8 modes. Higher modes are oscillatory with tiny eigenvalues. Truncating at $N = 4$ or 5 is usually enough.
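To decide where to truncate, solve the discretised eigenproblem and inspect the cumulative variance fraction. A sketch using trapezoidal quadrature (the same symmetrised Nyström scheme as the listing at the end), with parameter values borrowed from that listing:

```python
import numpy as np

L, n, sigma, lc = 5.0, 61, 4.5, 2.0   # values from the listing below
x = np.linspace(0, L, n)
dx = x[1] - x[0]
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lc)

# Symmetrised Nystrom discretisation: W^(1/2) C W^(1/2) v = lambda v
w = np.full(n, dx); w[[0, -1]] *= 0.5           # trapezoidal weights
B = np.sqrt(w)[:, None] * C * np.sqrt(w)[None, :]
evals = np.sort(np.linalg.eigvalsh(B))[::-1]

# Eigenvalues partition the variance: their sum equals sigma^2 * L
frac = np.cumsum(evals) / evals.sum()
for N in (1, 2, 4, 6, 8):
    print(f"N={N}: {frac[N-1]*100:5.1f}% of variance")
```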
Step 3: assemble the field from modes
Any realisation of the random field is:
\[E(x) \approx \mu_E + \sum_{k=1}^{N} \sqrt{\lambda_k}\; \xi_k\; e_k(x), \qquad \xi_k \sim \mathcal{N}(0,1)\]Draw $N$ standard normal numbers, multiply each by its eigenvalue weight and eigenfunction, and sum. The entire spatially correlated field is generated from just $N$ random draws. Each term adds one layer of spatial structure. Slide to watch it build up:
At $N{=}1$ the field is nearly uniform, just a single smooth perturbation around the mean. Each additional term adds finer spatial detail. The coefficients above are fixed for illustration. The interactive model draws fresh $\xi_k$ each run.
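The truncated expansion can be sanity-checked against its target statistics: the sample mean should approach $\mu_E$, and the pointwise variance should approach $\sum_k \lambda_k\,e_k^2(x)$, which sits slightly below $\sigma_E^2$ because of truncation. A sketch under the same parameter assumptions as the listing at the end (the sample count here is an illustrative choice):

```python
import numpy as np

L, n, mu, sigma, lc, N = 5.0, 61, 30.0, 4.5, 2.0, 6
x = np.linspace(0, L, n)
dx = x[1] - x[0]
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lc)

# Discretised Fredholm problem, as in Step 2
w = np.full(n, dx); w[[0, -1]] *= 0.5
B = np.sqrt(w)[:, None] * C * np.sqrt(w)[None, :]
lam_all, V = np.linalg.eigh(B)
order = np.argsort(lam_all)[::-1][:N]
lam = lam_all[order]
phi = V[:, order] / np.sqrt(w)[:, None]        # eigenfunctions e_k(x_i)

# Draw many realisations at once: E = mu + (xi * sqrt(lam)) @ phi^T
rng = np.random.default_rng(0)
xi = rng.standard_normal((20000, N))
fields = mu + (xi * np.sqrt(lam)) @ phi.T      # shape (20000, n)

var_target = (phi**2 * lam).sum(axis=1)        # truncated variance at each x
print(np.abs(fields.mean(axis=0) - mu).max())  # sampling error, near 0
print(np.abs(fields.var(axis=0) - var_target).max())
```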
Step 4: use the field in a FEM model
With constant $E$, the simply supported beam under uniform load $q$ has a closed-form midspan deflection:
\[\delta_\text{det} = \frac{5qL^4}{384\,E\,I}\]When $E$ varies spatially, that constant is replaced by the KL expansion:
\[\frac{d^2}{dx^2}\!\left[\left(\mu_E + \sum_{k=1}^{N} \sqrt{\lambda_k}\,\xi_k\,e_k(x)\right)I\;\frac{d^2 v}{dx^2}\right] = q, \qquad v(0)=v(L)=0, \quad v''(0)=v''(L)=0\]This is a fourth-order equation with the variable coefficient $E(x)$. There is no closed-form solution, so it is solved with a finite element model.
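Before adding randomness, the deterministic baseline is worth a quick numeric check. With the section and load used in the Python listing at the end (300 × 500 mm concrete, $q = 20$ kN/m, $E = 30$ GPa, $L = 5$ m):

```python
q, L, E = 20e3, 5.0, 30e9     # UDL [N/m], span [m], mean modulus [Pa]
I = 0.3 * 0.5**3 / 12         # rectangular section second moment [m^4]

delta = 5 * q * L**4 / (384 * E * I)       # closed-form midspan deflection
print(f"delta_det = {delta*1e3:.3f} mm")   # -> 1.736 mm
```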
How the KL field enters the FEM. The beam is divided into elements. For each element $e$, the local stiffness is evaluated from the KL expansion at the element nodes:
\[E_e = \tfrac{1}{2}\bigl[E(x_e) + E(x_{e+1})\bigr], \qquad E(x_i) = \mu_E + \sum_{k=1}^{N}\sqrt{\lambda_k}\,\xi_k\,e_k(x_i)\]This $E_e$ replaces the constant $E$ in the element stiffness matrix $\mathbf{k}_e = \frac{E_e\,I}{h^3}\,\mathbf{K}_0$, where $\mathbf{K}_0$ is the standard Hermitian beam element matrix and $h$ is the element length. The global system $\mathbf{K}\,\mathbf{u} = \mathbf{F}$ is assembled and solved as usual.
The key point: the eigenvalues $\lambda_k$, eigenfunctions $e_k(x_i)$, and the mean $\mu_E$ are computed once during preprocessing. Each Monte Carlo sample only requires drawing $N$ random numbers $\xi_k$, evaluating a weighted sum at each node, assembling, and solving. No new eigendecomposition is needed per sample.
The shaded envelope below spans the min-to-max deflection across 200 realisations. The deterministic solution sits inside this band, sometimes conservative, sometimes not.
FEM error vs the analytical solution is below 0.1%. The green band is the uncertainty a designer must account for. Its width feeds directly into reliability assessment and design factor calibration.
Why this matters
Without KL expansion, there are two poor alternatives:
- Single deterministic value. Use $E = \mu_E$ everywhere. Fast, but misses the effect of spatial variability entirely. No distribution of deflections, no reliability information.
- Independent random elements. Assign a random $E$ to each finite element independently. This captures randomness but destroys correlation. Adjacent elements can jump from very stiff to very soft, producing unrealistically rough fields that do not match real material behaviour.
The KL expansion avoids both problems. It generates spatially correlated fields with the correct statistics, controlled by just $N$ independent random variables. The eigenvalue decay determines how many are needed. For smooth fields, 4 to 6 modes typically capture well over 90% of the variance.
The workflow is:
- Preprocess once. Build the covariance matrix from $\mu_E$, $\sigma_E$, $l_c$. Solve the eigenvalue problem to get $\lambda_k$ and $e_k(x_i)$ at all mesh nodes.
- Sample cheaply. For each Monte Carlo trial, draw $N$ standard normal numbers $\xi_k$. Evaluate $E(x_i) = \mu_E + \sum \sqrt{\lambda_k}\,\xi_k\,e_k(x_i)$ at every node.
- Solve the FEM. Assemble element stiffness matrices using the per-element $E_e$ and solve the system.
- Collect statistics. Repeat the sampling and solving steps for many trials. Extract distributions of deflection, stress, or any quantity of interest.
This is efficient because the expensive eigendecomposition happens once. Each sample is just $N$ random draws plus a linear algebra solve.
Python
import numpy as np
# ── Problem parameters (matches interactive model) ──────────
L = 5.0 # beam length [m]
n = 61 # discretisation points
mu = 30.0 # mean E [GPa]
sigma = 4.5 # std dev E [GPa]
lc = 2.0 # correlation length [m]
N_terms = 6 # KL modes to retain
N_samples= 200 # Monte Carlo realisations
x = np.linspace(0, L, n)
dx = x[1] - x[0]
# ── Step 1: covariance matrix ────────────────────────────────
X1, X2 = np.meshgrid(x, x, indexing='ij')
C = sigma**2 * np.exp(-np.abs(X1 - X2) / lc) # [GPa²]
# ── Step 2: Fredholm eigenvalue problem ──────────────────────
# Discretise with trapezoidal quadrature: W^½ C W^½ v = λ v
w = np.ones(n) * dx; w[0] *= 0.5; w[-1] *= 0.5
W_sqrt = np.diag(np.sqrt(w))
W_sqrt_inv = np.diag(1.0 / np.sqrt(w))
B = W_sqrt @ C @ W_sqrt
evals_raw, evecs_raw = np.linalg.eigh(B)
idx = np.argsort(evals_raw)[::-1]
evals = evals_raw[idx]
eigfuncs = W_sqrt_inv @ evecs_raw[:, idx]
# Normalise: ∫ek²(x)dx = 1, enforce positive first lobe
for k in range(n):
nrm = np.sqrt(np.sum(w * eigfuncs[:, k]**2))
eigfuncs[:, k] /= nrm
if eigfuncs[n // 4, k] < 0:
eigfuncs[:, k] *= -1
total_var = np.sum(evals)
print(f"Σλk = {total_var:.2f} σ²L = {sigma**2 * L:.2f}")
# ── Step 3: generate random realisations ─────────────────────
# E(x) = μ + Σ √λk · ξk · ek(x) [GPa]
rng = np.random.default_rng(42)
samples = np.zeros((N_samples, n))
for s in range(N_samples):
xi = rng.standard_normal(N_terms)
E_field = np.full(n, mu)
for k in range(N_terms):
E_field += np.sqrt(evals[k]) * xi[k] * eigfuncs[:, k]
samples[s] = E_field
# ── Step 4: covariance reconstruction check ──────────────────
C_rec = sum(evals[k] * np.outer(eigfuncs[:, k], eigfuncs[:, k])
for k in range(N_terms))
rel_err = np.linalg.norm(C - C_rec, 'fro') / np.linalg.norm(C, 'fro')
print(f"Covariance reconstruction error ({N_terms} modes): {rel_err*100:.2f}%")
# ── Step 5: Euler-Bernoulli FEM (simply supported, UDL) ─────
def solve_beam_fem(Ex_field_Pa, I_sec, q, L_beam):
"""Hermitian beam FEM. Returns nodal deflections v [m]."""
n_ = len(Ex_field_Pa)
nel = n_ - 1
h = L_beam / nel
nDOF = 2 * n_
K = np.zeros((nDOF, nDOF))
F = np.zeros(nDOF)
for e in range(nel):
EI = (Ex_field_Pa[e] + Ex_field_Pa[e+1]) * 0.5 * I_sec
k_ = EI / h**3
dof = [2*e, 2*e+1, 2*e+2, 2*e+3]
ke = k_ * np.array([
[ 12, 6*h, -12, 6*h],
[ 6*h, 4*h**2, -6*h, 2*h**2],
[-12, -6*h, 12, -6*h],
[ 6*h, 2*h**2, -6*h, 4*h**2]])
fe = np.array([q*h/2, q*h**2/12, q*h/2, -q*h**2/12])
for i in range(4):
for j in range(4):
K[dof[i], dof[j]] += ke[i, j]
F[dof[i]] += fe[i]
K[0, 0] += 1e20 # v(0) = 0
K[2*nel, 2*nel] += 1e20 # v(L) = 0
return np.linalg.solve(K, F)[0::2]
# Beam: 300x500 mm concrete, q = 20 kN/m
I_sec = 0.3 * 0.5**3 / 12 # 3.125e-3 m⁴
q_ = 20e3 # 20 kN/m [N/m]
# Analytical deflection (uniform E)
v_det = q_ / (24 * mu*1e9 * I_sec) * x * (L**3 - 2*L*x**2 + x**3)
print(f"δ_det (analytical) = {v_det.max()*1e3:.3f} mm")
# FEM validation
v_fem = solve_beam_fem(np.full(n, mu*1e9), I_sec, q_, L)
print(f"δ_FEM (uniform E) = {v_fem.max()*1e3:.3f} mm")
print(f"FEM error = {abs(v_fem.max()-v_det.max())/v_det.max()*100:.4f}%")
# Monte Carlo with KL samples
dmax_all = np.zeros(N_samples)
for s in range(N_samples):
Ex_Pa = np.maximum(samples[s] * 1e9, 1e6) # GPa -> Pa, clamp positive
v_kl = solve_beam_fem(Ex_Pa, I_sec, q_, L)
dmax_all[s] = v_kl.max() * 1e3 # mm
print(f"δ_max: mean={dmax_all.mean():.3f} mm, std={dmax_all.std():.3f} mm, "
f"CoV={dmax_all.std()/dmax_all.mean()*100:.1f}%")
Basem Rajjoub