Notes for Probability and Stochastic Systems

Combined notes for EECS 501 & MECHENG 549

Probability Basics

Probability Space

  • Notation: $(\Omega,\mathcal{F},\mathbb{P})$
    • $\Omega$: Sample space
    • $\mathcal{F}$: Event space. Require 3 axioms (to be σ-field).
    • $\mathbb{P}$: Probability measure. Require 3 axioms.
  • Product Space: Probability spaces can be combined using Cartesian product.
  • Independence: $\mathbb{P}(A_{k_1}\cap A_{k_2}\cap \cdots\cap A_{k_l})=\prod_{i=1}^l \mathbb{P}(A_{k_i}),\;\forall \{k_i\}_1^l\subset\{1,\ldots,n\}$
  • Conditional probability: $\mathbb{P}\left(A_i \middle| A_j\right)=\mathbb{P}(A_i\cap A_j)/\mathbb{P}(A_j)$
  • Total probability: $\mathbb{P}(B)=\sum_{i=1}^n \mathbb{P}(B\cap A_i)=\sum_{i=1}^n \mathbb{P}\left(B\middle| A_i\right)\mathbb{P}(A_i)$, where $\{A_1,\cdots,A_n\}$ are disjoint and partition of $\Omega$.
  • Bayes’ Rule (see the numerical sketch below): $$\mathbb{P}(A_j|B)=\frac{\mathbb{P}(B|A_j)\mathbb{P}(A_j)}{\sum^n_{i=1} \mathbb{P}(B|A_i)\mathbb{P}(A_i)}$$
    • Prior: $\mathbb{P}(A_j)$; likelihood: $\mathbb{P}(B|A_j)$
    • Posterior: $\mathbb{P}(A_j|B)$
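
A minimal numerical sketch of total probability and Bayes’ rule. The partition, priors, and likelihoods are made-up illustrative numbers:

```python
import numpy as np

# Hypothetical partition {A_1, A_2, A_3} of the sample space, with
# made-up priors and likelihoods, purely to exercise the two formulas.
prior = np.array([0.5, 0.3, 0.2])        # P(A_i), sums to 1
likelihood = np.array([0.9, 0.5, 0.1])   # P(B | A_i)

# Total probability: P(B) = sum_i P(B | A_i) P(A_i)
p_B = np.sum(likelihood * prior)

# Bayes' rule: P(A_j | B) = P(B | A_j) P(A_j) / P(B)
posterior = likelihood * prior / p_B

print(p_B)        # 0.62
print(posterior)  # [0.726  0.242  0.032], sums to 1
```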

Random Variables

Note: Equations are written for the continuous case by default; the discrete case follows by replacing integrals with sums and differentials with differences.

  • Random Variable $\mathcal{X}$ is $\mathcal{X}: (\Omega,\mathcal{F},\mathbb{P})\rightarrow(\Omega_ \mathcal{X},\mathcal{F}_ \mathcal{X},\mathbb{P}_ \mathcal{X})$
  • Continuous & Discrete & Mixed Random Variable:
    • Can be classified according to whether $\Omega_\mathcal{X}$ is continuous
    • Can be classified according to whether a continuous density function $f_\mathcal{X}$ exists
    • $\mathcal{F}_\mathcal{X}$ for continuous $\mathcal{X}$ is a Borel σ-field
    • All three kinds of random variables can be expressed by a CDF, or by an “extended” PDF using the Dirac delta function.
  • Scalar Random Variable: $\mathcal{X}: \Omega\rightarrow\mathbb{F}$
    • Formal definition: $\mathcal{X}: (\Omega,\mathcal{F},\mathbb{P})\rightarrow(\Omega_ \mathcal{X}\subset\mathbb{F},\mathcal{F}_ \mathcal{X}\subset\left\{\omega\middle| \omega\subset\Omega_ \mathcal{X}\right\},\mathbb{P}_ \mathcal{X}:\mathcal{F}_ \mathcal{X}\rightarrow[0,1])$
    • Cumulative Distribution Function (CDF): $F_ \mathcal{X}(x)=\mathbb{P}(\mathcal{X}(\omega)\leqslant x)$
    • Probability Mass Function (PMF): $p_ \mathcal{X}(x)=\mathbb{P}(\mathcal{X}(\omega)=x)$
    • Probability Density Function (PDF): $$f_ \mathcal{X}(x)\,\mathrm{d}x=\mathbb{P}(x< \mathcal{X}(\omega)\leqslant x+ \mathrm{d}x),\; f_ \mathcal{X}(x)=\mathrm{d}F_ \mathcal{X}(x)/ \mathrm{d}x$$
  • Vector Random Variable (Multiple Random Variables): $\mathcal{X}: \Omega\rightarrow\mathbb{F}^n$

    • Formal definition $\mathcal{X}: (\Omega,\mathcal{F},\mathbb{P})\rightarrow(\Omega_ \mathcal{X}\subset\mathbb{F}^n,\mathcal{F}_ \mathcal{X}\subset\left\{\omega\middle| \omega\subset\Omega_ \mathcal{X}\right\},\mathbb{P}_ \mathcal{X}:\mathcal{F}_ \mathcal{X}\rightarrow[0,1])$
    • Cumulative Distribution Function (CDF): $F_ \mathcal{X}(x)=\mathbb{P}(\{\mathcal{X}(\omega)\}_i\leqslant x_i,\; i=1\ldots n)$
    • Probability Mass Function (PMF): $p_ \mathcal{X}(x)=\mathbb{P}(\{\mathcal{X}(\omega)\}_i=x_i,\; i=1\ldots n)$
    • Probability Density Function (PDF): $$f_ \mathcal{X}(x)\prod_{i=1}^n\mathrm{d}x_i=\mathbb{P}(x_i< \{\mathcal{X}(\omega)\}_i\leqslant x_i+ \mathrm{d}x_i,\; i=1\ldots n),\; f_ \mathcal{X}(x)=\frac{\partial^n}{\partial x_1\partial x_2\ldots\partial x_n}F_ \mathcal{X}(x)$$
  • Independence ($\perp \!\!\! \perp$): $\mathbb{P}(\{\mathcal{X}\in A\}\cap \{\mathcal{Y}\in B\})=\mathbb{P}(\mathcal{X}\in A)\,\mathbb{P}(\mathcal{Y}\in B)$

    • Independent CDF: $F_{\mathcal{XY}}(x,y)=F_\mathcal{X}(x)F_\mathcal{Y}(y)$ iff $\mathcal{X}\perp \!\!\! \perp\mathcal{Y}$
    • Independent PMF: $p_{\mathcal{XY}}(x,y)=p_\mathcal{X}(x)p_\mathcal{Y}(y)$ iff $\mathcal{X}\perp \!\!\! \perp\mathcal{Y}$
    • Independent PDF: $f_{\mathcal{XY}}(x,y)=f_\mathcal{X}(x)f_\mathcal{Y}(y)$ iff $\mathcal{X}\perp \!\!\! \perp\mathcal{Y}$
  • Marginalization

    • Marginal distribution function: $F_{1:m}(x_{1:m})\equiv F\left(\begin{bmatrix}x_1&x_2&\ldots&x_m&\infty&\ldots&\infty\end{bmatrix}^\top\right)$
    • Marginal density function: $f_{1:m}(x_{1:m})=\int^\infty_{-\infty}\cdots\int^\infty_{-\infty} f(x) \mathrm{d}x_{m+1}\cdots \mathrm{d}x_n$
  • Conditional (on event $\mathcal{E}$)

    • Conditional probability: $\mathbb{P}(\mathcal{X}\in\mathcal{D}|\mathcal{E})=\mathbb{P}(\{\omega|\mathcal{X}(\omega)\in\mathcal{D}\}\cap \mathcal{E})/\mathbb{P}(\mathcal{E})$ on an event $\mathcal{E}\in\mathcal{F}$ and a set $\mathcal{D}\in\mathcal{F}_\mathcal{X}$.
    • Conditional CDF: $F_ \mathcal{X}(x|\mathcal{E})=\mathbb{P}(\mathcal{X}_i\leqslant x_i|\mathcal{E})$
    • Conditional PMF: $p_ \mathcal{X}(x|\mathcal{E})=\mathbb{P}(\mathcal{X}_i=x_i|\mathcal{E})$
    • Conditional PDF: $f_ \mathcal{X}(x|\mathcal{E})=\frac{\partial^n}{\partial x_1\partial x_2\ldots\partial x_n}F_ \mathcal{X}(x|\mathcal{E})$
  • Conditional (on variable $\mathcal{Y}$)
    • Conditional probability: $\mathbb{P}(\mathcal{X}\in\mathcal{D}_1|\mathcal{Y}\in\mathcal{D}_2)=\mathbb{P}(\{\omega|\mathcal{X}(\omega)\in\mathcal{D}_1,\mathcal{Y}(\omega)\in\mathcal{D}_2\})/\mathbb{P}(\mathcal{Y}(\omega)\in\mathcal{D}_2)$.
    • Conditional PDF (similar for PMF): $$f_{\mathcal{X}|\mathcal{Y}\in\mathcal{D}}(x)=\frac{\int_{\mathcal{D}}f_{\mathcal{XY}}(x,y)\,\mathrm{d}y}{\int_{\mathcal{D}}f_{\mathcal{Y}}(y)\,\mathrm{d}y},\;f_{\mathcal{X}|\mathcal{Y}}(x|y)=f_{\mathcal{X}|\mathcal{Y}=y}(x)=\frac{f_{\mathcal{XY}}(x,y)}{f_{\mathcal{Y}}(y)}$$
      • Using total probability we have $f_ \mathcal{X}(x)=\int f_{\mathcal{X}|\mathcal{Y}}(x|y)f_ \mathcal{Y}(y)\mathrm{d}y$ (see the grid-integration sketch after this list). This can be further combined into Bayes’ rule.
    • Conditional CDF: Can be defined similarly, or defined as $F_{\mathcal{X}|\mathcal{Y}\in\mathcal{D}}(x)\equiv\int_{-\infty}^x f_{\mathcal{X}|\mathcal{Y}\in\mathcal{D}}(x')\mathrm{d}x'$
      • Similarly we have $F_ \mathcal{X}(x)=\int F_{\mathcal{X}|\mathcal{Y}}(x|y)f_ \mathcal{Y}(y)\mathrm{d}y$
    • Substitution law: $\mathbb{P}((\mathcal{X},\mathcal{Y})\in \mathcal{D}|\mathcal{X}=x)=\mathbb{P}((x,\mathcal{Y})\in \mathcal{D}|\mathcal{X}=x)$, which equals $\mathbb{P}((x,\mathcal{Y})\in \mathcal{D})$ when $\mathcal{X}\perp \!\!\! \perp\mathcal{Y}$
      • Common usage: Suppose $\mathcal{Z}=\psi(\mathcal{X},\mathcal{Y})$ with $\mathcal{X}\perp \!\!\! \perp\mathcal{Y}$ and $\mathcal{X}$ discrete, then $p_\mathcal{Z}(z)=\sum_x \mathbb{P}\left(\psi(x,\mathcal{Y})=z\right)p_\mathcal{X}(x)$
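
As a sanity check on the density form of total probability above, here is a small grid-integration sketch, assuming $\mathcal{Y}\sim\mathcal{N}(0,1)$ and $\mathcal{X}|\mathcal{Y}=y\sim\mathcal{N}(y,1)$, so that marginally $\mathcal{X}\sim\mathcal{N}(0,2)$:

```python
import numpy as np
from scipy import stats

# Verify f_X(x) = ∫ f_{X|Y}(x|y) f_Y(y) dy on a grid, assuming
# Y ~ N(0,1) and X|Y=y ~ N(y,1), so that X ~ N(0,2) exactly.
y = np.linspace(-10, 10, 4001)
dy = y[1] - y[0]
f_Y = stats.norm.pdf(y)

for x in (-1.0, 0.0, 2.5):
    f_X = np.sum(stats.norm.pdf(x, loc=y) * f_Y) * dy   # marginalize over y
    print(f_X, stats.norm.pdf(x, scale=np.sqrt(2)))     # the two columns agree
```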

Expectation & Moments

  • Expectation: $\mathbb{E}\psi(\mathcal{X})=\int^\infty_{-\infty}\cdots\int^\infty_{-\infty}\psi(x)f_ \mathcal{X}(x) \mathrm{d}x_1\ldots \mathrm{d}x_n$
    • Linearity of expectation: $\mathbb{E}[A\mathcal{X}+B\mathcal{Y}]=A\mathbb{E}[\mathcal{X}]+B\mathbb{E}[\mathcal{Y}]\;(\forall \mathcal{X},\mathcal{Y},\;\forall A,B\in\mathbb{R}^{m\times n})$
    • Independent expectation: $\mathbb{E}[\prod^n_i\mathcal{X}_i]=\prod^n_i\left(\mathbb{E}\mathcal{X}_i\right)(\forall\;\text{indep.}\;\mathcal{X}_i)$
    • Conditional expectation: $\mathbb{E}[\mathcal{Y}|\mathcal{X}=x]=\int^\infty_{-\infty}yf_{\mathcal{Y}|\mathcal{X}}(y|x)\mathrm{d}y$
      • Total expectation/Smoothing: $\mathbb{E}_ \mathcal{X}[\mathbb{E}_ {\mathcal{Y}|\mathcal{X}}(\mathcal{Y}|\mathcal{X})]=\mathbb{E}\mathcal{Y}$ (see the Monte Carlo sketch after this list)
      • Substitution law: $\mathbb{E}[\psi(\mathcal{X},\mathcal{Y})|\mathcal{X}=x]=\mathbb{E}[\psi(x,\mathcal{Y})|\mathcal{X}=x]$
      • $\mathbb{E}[\psi(\mathcal{X})|\mathcal{X}]=\psi(\mathcal{X})$
      • $\mathbb{E}[\psi(\mathcal{X})\mathcal{Y}|\mathcal{X}]=\psi(\mathcal{X})\mathbb{E}(\mathcal{Y}|\mathcal{X})$
      • Towering: $\mathbb{E}_ {\mathcal{Y}|\mathcal{Z}}[\mathbb{E}_ \mathcal{X}(\mathcal{X}|\mathcal{Y},\mathcal{Z})|\mathcal{Z}]=\mathbb{E}_ {\mathcal{X}|\mathcal{Z}}[\mathcal{X}|\mathcal{Z}]$
  • Moment (p-th order): $\mu_p(\mathcal{X})=\mathbb{E}[\mathcal{X}^p]$
    • Mean: $\mu_ \mathcal{X}=\mathbb{E}[\mathcal{X}]$
  • Central moment (p-th order): $\mathbb{E}[(\mathcal{X}-\mu_ \mathcal{X})^p]$
    • Variance: $\sigma_ \mathcal{X}^2=\mathbb{E}[(\mathcal{X}-\mu_ \mathcal{X})^2]$
    • Skewness: $\gamma=\mathbb{E}[(\mathcal{X}-\mu_ \mathcal{X})^3]/\sigma_\mathcal{X}^3$
  • Correlation: $\text{corr}(\mathcal{X}_i,\mathcal{X}_j)=\mathbb{E}[\mathcal{X}_i\mathcal{X}_j]$
    • Correlation matrix: $C=\mathbb{E}[\mathcal{X}\mathcal{X}^\top]$
    • Correlation coefficient: $\rho(\mathcal{X}_ i,\mathcal{X}_ j)=\frac{\text{cov}(\mathcal{X}_ i,\mathcal{X}_ j)}{\sigma_{\mathcal{X}_ i}\sigma_{\mathcal{X}_ j}}$
  • Covariance: $\text{cov}(\mathcal{X}_i,\mathcal{X}_j)=\mathbb{E}[(\mathcal{X}_i-\mu_i)(\mathcal{X}_j-\mu_j)]$
    • Covariance matrix: $S=\mathbb{E}\left[(\mathcal{X}-\mu_ \mathcal{X})(\mathcal{X}-\mu_ \mathcal{X})^\top\right]$
    • Uncorrelated: $\rho(\mathcal{X}_ i,\mathcal{X}_ j)=0 \Leftrightarrow\text{cov}(\mathcal{X}_i,\mathcal{X}_j)=0\Leftrightarrow\text{corr}(\mathcal{X}_i,\mathcal{X}_j)=\mathbb{E}[\mathcal{X}_i]\mathbb{E}[\mathcal{X}_j]$ (independence implies uncorrelatedness, but not conversely)
    • Cases where uncorrelated implies independence: (1) Jointly Gaussian (2) Bernoulli
  • Centered variable: $\tilde{\mathcal{X}}=\mathcal{X}-\mu_ \mathcal{X}$
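
A Monte Carlo sketch of the smoothing property, under an assumed toy model $\mathcal{X}\sim\mathcal{N}(0,1)$ and $\mathcal{Y}=\mathcal{X}^2+\mathcal{V}$ with independent $\mathcal{V}\sim\mathcal{N}(0,1)$, so that $\mathbb{E}[\mathcal{Y}|\mathcal{X}]=\mathcal{X}^2$ and $\mathbb{E}\mathcal{Y}=1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: X ~ N(0,1), Y = X^2 + V with V ~ N(0,1) independent of X.
# Then E[Y|X] = X^2, and smoothing says E[E[Y|X]] = E[Y] = E[X^2] = 1.
n = 1_000_000
x = rng.standard_normal(n)
y = x**2 + rng.standard_normal(n)

print(np.mean(x**2))   # E_X[ E[Y|X] ] ≈ 1
print(np.mean(y))      # E[Y]          ≈ 1
```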

Transform methods

  • Probability generating function (PGF): similar to the Z-transform $$G_ \mathcal{X}(z)\equiv\mathbb{E}[z^\mathcal{X}]=\sum_{x_i}z^{x_i}p_ \mathcal{X}(x_i)$$
    • For $\Omega_ \mathcal{X}=\mathbb{N}$, we have $$\frac{\mathrm{d}^k}{\mathrm{d}z^k}G_ \mathcal{X}(z)\Biggr|_{z=1}=\mathbb{E}[\mathcal{X}(\mathcal{X}-1)\cdots(\mathcal{X}-(k-1))]$$
  • Moment generating function (MGF): similar to the Laplace transform (a symbolic sketch of the moment property follows this list) $$M_ \mathcal{X}(s)\equiv\mathbb{E}[e^{s\mathcal{X}}]=\int^\infty_{-\infty}e^{sx}f_ \mathcal{X}(x)\mathrm{d}x$$
    • Generally we have $$\frac{\mathrm{d}^k}{\mathrm{d}s^k}M_ \mathcal{X}(s)\Biggr|_{s=0}=\mathbb{E}[\mathcal{X}^k]$$
  • Characteristic function: similar to the Fourier transform $$\phi_ \mathcal{X}(\omega)\equiv\mathbb{E}[e^{j\omega \mathcal{X}}]=\int^\infty_{-\infty}e^{j\omega x}f_ \mathcal{X}(x)\mathrm{d}x$$
    • Generally we have $$\frac{\mathrm{d}^k}{\mathrm{d}\omega^k}\phi_ \mathcal{X}(\omega)\Biggr|_{\omega=0}=j^k\mathbb{E}[\mathcal{X}^k]$$
  • Joint characteristic function: for vector case $\mathcal{X}\in\mathbb{R}^n$, we define vector $u$ and $$\phi_ \mathcal{X}(u)\equiv\mathbb{E}[e^{ju^\top \mathcal{X}}]$$
    • Trivial usage: if $\mathcal{Y}=\mathcal{X}_ 1+\mathcal{X}_ 2$, then $\phi_ \mathcal{Y}(u)=\phi_{\mathcal{X}_ 1}(u)\phi_{\mathcal{X}_ 2}(u)$
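
A symbolic sketch of the MGF moment property, recovering the moments of an exponential variable by differentiating $M_\mathcal{X}(s)=\lambda/(\lambda-s)$ at $s=0$ ($\lambda=2$ is an arbitrary illustrative choice):

```python
import sympy as sp

s = sp.symbols('s')
lam = 2                      # arbitrary rate, chosen for illustration
M = lam / (lam - s)          # MGF of Exp(lambda)

for k in (1, 2, 3):
    # k-th derivative at s = 0 gives E[X^k] = k!/lambda^k
    print(k, sp.diff(M, s, k).subs(s, 0))   # 1/2, 1/2, 3/4
```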

Common distributions

  • Bernoulli: $\Omega_\mathcal{X}=\{0,1\}$ $$p_\mathcal{X}(1)=p,\;p_\mathcal{X}(0)=q=1-p$$
    • $\mu_\mathcal{X}=p,\;\sigma^2_\mathcal{X}=pq,\;G_\mathcal{X}(z)=q+pz$
  • Binomial - $\mathcal{B}(n,p)$: $\Omega_\mathcal{X}=\{0,1,\ldots,n\}$ $$p_\mathcal{X}(k)=\binom{n}{k}p^k(1-p)^{n-k}$$
    • $\mu_\mathcal{X}=np,\;\sigma^2_\mathcal{X}=np(1-p),\;G_\mathcal{X}(z)=(1-p+pz)^n$
    • Generation: $\sum^n_{i=1} \mathcal{X}_i$, where $\mathcal{X}_i\sim\text{Bernoulli}(p)$
  • Multinomial: $\Omega_\mathcal{X}=\{0,1,\ldots,n\}^k$ $$p_\mathcal{X}(x)=\frac{n!}{x_1!\cdots x_k!}p_1^{x_1}\cdots p_k^{x_k},\;\textstyle\sum_i x_i=n$$
  • Geometric: $\Omega_\mathcal{X}=\{1,2,\ldots\}$ $$p_\mathcal{X}(k)=(1-p)^{k-1}p$$
    • $\mu_\mathcal{X}=\frac{1}{p},\;\sigma^2_\mathcal{X}=\frac{1-p}{p^2},\;G_\mathcal{X}(z)=\frac{pz}{1-(1-p)z}$
  • Poisson: $\Omega_\mathcal{X}=\mathbb{N}$ $$p_\mathcal{X}(k)=\frac{\lambda^k}{k!}\exp\{-\lambda\}$$
    • $\mu_\mathcal{X}=\lambda,\;\sigma^2_\mathcal{X}=\lambda,\;G_\mathcal{X}(z)=\exp\{\lambda(z-1)\}$
  • Uniform - $\mathcal{U}(a,b)$: $\Omega_\mathcal{X}=[a,b]$ $$f(x)=
    \begin{cases}
    1/(b-a), & x \in [a,b] \\
    0, & \text{otherwise}
    \end{cases}$$
    • $\mu_\mathcal{X}=\frac{1}{2}(a+b),\;\sigma^2_\mathcal{X}=\frac{1}{12}(b-a)^2,\;M_\mathcal{X}(s)=\frac{e^{sb}-e^{sa}}{s(b-a)}\;(s\neq 0)$
  • Normal - $\mathcal{N}(\mu,\sigma^2)$: $\Omega_\mathcal{X}=\mathbb{R}$
    $$f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}$$
    • $M_\mathcal{X}(s)=\exp\left\{\mu s+\frac{1}{2}\sigma^2s^2\right\}$
  • Joint Normal: $\Omega_\mathcal{X}=\mathbb{R}^n$
    $$f(x)=\frac{1}{\sqrt{(2\pi)^n \det(S)}}\exp\left\{-\frac{1}{2}(x-\mu)^\top S^{-1}(x-\mu)\right\}$$
  • Rayleigh: $\Omega_\mathcal{X}=[0,\infty)$ $$f(x)=\frac{x}{\sigma^2}\exp\left\{-\frac{x^2}{2\sigma^2}\right\}H(x)$$ where $H(x)$ is the Heaviside step function
  • Exponential - $\mathcal{E}(\lambda)$: $\Omega_\mathcal{X}=[0,\infty)$ $$f(x)=\lambda\exp\left\{-\lambda x\right\}H(x)$$
    • $\mu_\mathcal{X}=1/\lambda,\;\sigma^2_\mathcal{X}=1/\lambda^2,\;M_\mathcal{X}(s)=\lambda/(\lambda-s)\;(s<\lambda)$
  • Laplacian: $$f(x)=\frac{1}{2b}\exp\left\{-\frac{|x-\mu|}{b}\right\}$$
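
A quick sampling sanity check of some of the tabulated means and variances (all parameter values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# (samples, exact mean, exact variance) per the formulas above;
# note numpy's exponential takes the scale 1/lambda, not the rate lambda.
cases = {
    "Binomial(10, 0.3)": (rng.binomial(10, 0.3, n), 3.0, 2.1),
    "Poisson(4)":        (rng.poisson(4, n),        4.0, 4.0),
    "Exponential(2)":    (rng.exponential(0.5, n),  0.5, 0.25),
    "Uniform(1, 3)":     (rng.uniform(1, 3, n),     2.0, 1/3),
}
for name, (x, mu, var) in cases.items():
    print(f"{name}: mean {x.mean():.3f} vs {mu}, var {x.var():.3f} vs {var:.3f}")
```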

Famous Inequalities

  • Cauchy-Schwarz Inequality: $\left(\mathbb{E}[\mathcal{X}_i\mathcal{X}_j]\right)^2\leqslant\mathbb{E}[\mathcal{X}_i^2]\,\mathbb{E}[\mathcal{X}_j^2]$
  • Markov Inequality (for $\mathcal{X}\geqslant 0$): $\mathbb{P}(\mathcal{X}\geqslant a)\leqslant \mathbb{E}\mathcal{X}/a,\;a>0$
  • Chebyshev Inequality: $\mathbb{P}(|\mathcal{X}-\mu|\geqslant\delta)\leqslant\sigma^2/\delta^2$ (both tail bounds are checked numerically after this list)
  • Chernoff bound: $\mathbb{P}(\mathcal{X}\geqslant a)\leqslant \underset{s\geqslant 0}{\min}\,e^{-as}M_ \mathcal{X}(s)$
  • Law of Large Numbers: let $\mathcal{X}_i$ be IID samples drawn from $(\mathbb{R}^n,\mathcal{F}^n,\mathbb{P})$, where $\mathbb{P}$ is such that each $\mathcal{X}_k$ has mean $\mu$ and covariance $S$
    • Weak version: if $\mathcal{Y}_ k=\frac{1}{k}\sum^k_{j=1}\mathcal{X}_j$ then $$\underset{k\rightarrow\infty}{\lim} \mathbb{P}\left(\left\lVert \mathcal{Y}_k-\mu\right\rVert>\epsilon\right)=0$$
    • Strong version: if $\mathcal{Y}_ k=\frac{1}{k}\sum^k_{j=1}\mathcal{X}_j$ then, almost surely, $$\underset{k\rightarrow\infty}{\lim} \mathcal{Y}_k=\mu$$
    • Central Limit Theorem: if $\mathcal{Y}_ k=\frac{1}{\sqrt{k}}\sum^k_{j=1}S^{-1/2}\left(\mathcal{X}_ j-\mu\right)$ then, in distribution, $$\underset{k\rightarrow\infty}{\lim} f_{\mathcal{Y}_k}(y_k)=\mathcal{N}(0,I)$$
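
An empirical check of the Markov and Chebyshev bounds, assuming $\mathcal{X}\sim\mathcal{E}(1)$ so that $\mu=\sigma^2=1$:

```python
import numpy as np

rng = np.random.default_rng(2)

# X ~ Exp(1): mu = 1, sigma^2 = 1. Compare the true tail
# probabilities with the Markov and Chebyshev bounds.
x = rng.exponential(1.0, 1_000_000)
a = 3.0

print(np.mean(x >= a), 1.0 / a)                 # e^-3 ≈ 0.050 ≤ 1/3 (Markov)
print(np.mean(np.abs(x - 1) >= a), 1.0 / a**2)  # e^-4 ≈ 0.018 ≤ 1/9 (Chebyshev)
```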

Uncertainty Propagation

Suppose $\mathcal{Y}=\psi(\mathcal{X})$, where for a given $y$ the equation $y=\psi(x)$ has $K$ roots $x_k=\psi^{-1}_k(y)$, $k=1,\ldots,K$.

  • Scalar case: $$f_ \mathcal{Y}(y)=\sum^K_{k=1} f_ \mathcal{X}(\psi^{-1}_ k(y))\left| \frac{\partial\psi}{\partial x}\biggr|_{x=\psi^{-1}_k(y)} \right|^{-1}$$
  • Vector case: $$f_ \mathcal{Y}(y)=\sum^K_{k=1} f_ \mathcal{X}(\psi^{-1}_ k(y))\left|\det(J)\right|^{-1},\text{where Jacobian }J=\frac{\partial\psi}{\partial x}\biggr|_{x=\psi^{-1}_k(y)}$$
  • Trivial Case — Summation: $\mathcal{Y}=\mathcal{X}_ 1+\mathcal{X}_ 2$ with $\mathcal{X}_ 1\perp \!\!\! \perp\mathcal{X}_ 2$, then $f_ \mathcal{Y}(y)=\int^\infty_{-\infty}f_{\mathcal{X}_1}(x_1)f_{\mathcal{X}_2}(y-x_1) \mathrm{d}x_1$
    Another way is the CDF method: $F_\mathcal{Y}(y)=\int_{\psi(x)\leq y}f_\mathcal{X}(x)\mathrm{d}x$ (a change-of-variables sketch follows this list)
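
A sketch of the scalar change-of-variables formula for $\mathcal{Y}=\psi(\mathcal{X})=\mathcal{X}^2$ with $\mathcal{X}\sim\mathcal{N}(0,1)$: the two roots $\pm\sqrt{y}$ each contribute with $|\partial\psi/\partial x|=2\sqrt{y}$, giving the $\chi^2(1)$ density:

```python
import numpy as np
from scipy import stats

# Y = X^2 with X ~ N(0,1): roots psi_k^-1(y) = ±sqrt(y), |dpsi/dx| = 2*sqrt(y),
# so f_Y(y) = 2 * f_X(sqrt(y)) / (2*sqrt(y)), which is the chi^2(1) density.
y = np.linspace(0.1, 5.0, 50)
f_Y = 2 * stats.norm.pdf(np.sqrt(y)) / (2 * np.sqrt(y))

print(np.allclose(f_Y, stats.chi2.pdf(y, df=1)))   # True
```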

Miscellaneous Corollaries

  1. For a random variable that takes nonnegative values, $\mathbb{E}(\mathcal{X})=\int^\infty_0 \mathbb{P}(\mathcal{X}>x)\mathrm{d}x$
  2. If $\mathcal{X}_1,\ldots,\mathcal{X}_n$ are IID continuous random variables, then $\mathbb{P}(\mathcal{X}_1<\mathcal{X}_2<\ldots<\mathcal{X}_n)=1/n!$ (a Monte Carlo check follows this list)
  3. Define $\mathcal{X}_ {(j)}$ to be the j-th smallest among $\mathcal{X}_ 1,…\mathcal{X}_ n$. Suppose $\mathcal{X}_ 1,…\mathcal{X}_ n$ are IID random variables with PDF $f$ and CDF $F$, then $$f_ {\mathcal{X}_ {(j)}}(x)=\frac{n!}{(n-j)!(j-1)!}[F(x)]^{j-1}[1-F(x)]^{n-j}f_ \mathcal{X}(x)$$
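
A Monte Carlo check of corollary 2, with $n=4$ IID uniform variables:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(3)

# P(X_1 < X_2 < ... < X_n) = 1/n! for IID continuous variables (n = 4 here).
n, trials = 4, 1_000_000
x = rng.random((trials, n))
frac = np.mean(np.all(np.diff(x, axis=1) > 0, axis=1))
print(frac, 1 / factorial(n))   # ≈ 0.0417 vs 1/24
```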

Discrete-Time Stochastic System

Stochastic Sequences

  • Definition: Given a set of integers $\mathcal{K}\subseteq\mathbb{Z}$ and $k\in\mathcal{K}$, $\mathcal{X}(k,\omega): (\Omega,\mathcal{F},\mathbb{P})\rightarrow(\mathbb{R}^n,\mathcal{F}_ \mathcal{X},\mathbb{P}_ \mathcal{X})$ is a random/stochastic sequence.
  • Uncertainties: Consider a causal system $F$ relating some scalar inputs $u(k)$ to output $x(k)$
    • Epistemic/Model uncertainty: $\mathcal{X}(k,\omega)=F(k,u(k),u(k-1),\ldots,\omega)$ (the system is stochastic and the input is deterministic).
    • Aleatoric/Input uncertainty: $\mathcal{X}(k,\omega)=F(k,U(k,\omega),U(k-1,\omega),\ldots)$ (the system is deterministic and the input is stochastic).
  • Realization: An outcome $\mathcal{X}(k,\omega)=x(k)$ is called a realization of stochastic sequence $\mathcal{X}$
  • Terminology and convention
    • $\mathcal{X}(k,\omega)$ is often written as $\mathcal{X}(k)$ when there’s no ambiguity.
    • $\mathcal{K}=\mathbb{Z}$ if not specified.
    • Sequence over a set $\mathcal{K}_1\subseteq\mathcal{K}$ are denoted $\mathcal{X}(\mathcal{K}_1)$.
    • $\mathcal{X}$ denotes $\mathcal{X}(\mathcal{K})$ if not specified.
    • Consecutive subsequence: $$\mathcal{X}(k:l)=\{\mathcal{X}(k),\mathcal{X}(k+1),\ldots,\mathcal{X}(l)\},\;x(k:l)=\{x(k),x(k+1),\ldots,x(l)\}$$
    • Abbreviations:
      • SS - stochastic sequence
      • IID - independent, identically distributed

Probabilistic characterization

  • Distribution and density: $$F_ \mathcal{X}\left(k:l;x(k:l)\right)\equiv\mathbb{P}((\mathcal{X}_i(k)\leqslant x_i(k))\cap\cdots\cap(\mathcal{X}_i(l)\leqslant x_i(l)),\;i=1\ldots n)$$ $$f_ \mathcal{X}\left(k:l;x(k:l)\right)\equiv \frac{\partial^{n(l-k+1)}}{\partial x_1(k)\cdots\partial x_n(l)}F_ \mathcal{X}(k:l;x(k:l))$$
    Here $k:l$ denotes a set of consecutive integers; it can also be replaced by an ordinary index set such as $\{k,l\}$, or by a single index $k$.
  • Ensemble Average: $\mathbb{E}[\psi(\mathcal{X}(k))]$, averaging over different realizations at the same time $k$
    • Mean: $\mu_ \mathcal{X}(k)\equiv\mathbb{E}[\mathcal{X}(k)]=\int^\infty_{-\infty}x(k)f_ \mathcal{X}(k;x(k))\mathrm{d}x(k)$
    • Conditional Mean: $\mu_ \mathcal{X}(l|k)\equiv\mathbb{E}[\mathcal{X}(l)|\mathcal{X}(k)=x(k)]$
  • Time Average: $\frac{1}{2N+1}\sum^N_{k=-N}\psi(\mathcal{X}(k))$, averaging over different times $k$ within the same realization
  • Autocorrelation:
    • Scalar case: $r_ \mathcal{X}(k,l)\equiv\mathbb{E}[\mathcal{X}(k)\mathcal{X}(l)]=\int^\infty_{-\infty}\int^\infty_{-\infty}x(k)x(l)f_ \mathcal{X}(k,l;x(k,l))\mathrm{d}x(k)\mathrm{d}x(l)$
    • Vector case: $R_ \mathcal{X}(k,l)\equiv\mathbb{E}[\mathcal{X}(k)\mathcal{X}^\top(l)]$
    • Conditional autocorrelation: $R_ \mathcal{X}(k,l|q)\equiv\mathbb{E}[\mathcal{X}(k)\mathcal{X}^\top(l)|\mathcal{X}(q)=x(q)]$
  • Autocovariance:
    • Scalar case: $\kappa_ \mathcal{X}(k,l)\equiv\mathbb{E}[(\mathcal{X}(k)-\mu_ \mathcal{X}(k))(\mathcal{X}(l)-\mu_ \mathcal{X}(l))]$
    • Vector case: $\mathrm{K}_ \mathcal{X}(k,l)\equiv\mathbb{E}[(\mathcal{X}(k)-\mu_ \mathcal{X}(k))(\mathcal{X}(l)-\mu_ \mathcal{X}(l))^\top]$
    • Conditional autocovariance: $\mathrm{K}_ \mathcal{X}(k,l|q)\equiv\mathbb{E}[(\mathcal{X}(k)-\mu_ \mathcal{X}(k|q))(\mathcal{X}(l)-\mu_ \mathcal{X}(l|q))^\top]$
    • Useful conclusion: $\mathrm{K}_ \mathcal{X}(a,b)=\mathrm{K}_ \mathcal{X}(b,a)^\top$
  • Strong Stationarity (aka strict sense; necessarily identically distributed over time): $$\forall x(k:l)\in\mathbb{R}^{n(l-k+1)},\;\forall s\in\mathbb{Z},\;f_ \mathcal{X}(k:l;x(k:l))=f_ \mathcal{X}(k+s:l+s;x(k:l))$$
  • Weak Stationarity (aka wide sense): $\forall k,l,s$ $$\mu_ \mathcal{X}(k)=\mu_ \mathcal{X}(l)\;\text{and}\;\mathrm{K}_ \mathcal{X}(k,l)=\mathrm{K}_ \mathcal{X}(k+s,l+s)\equiv\bar{\mathrm{K}}_ \mathcal{X}(s)$$ Weak stationarity is a necessary condition for strong stationarity (the two are equivalent for Gaussian sequences).
  • Ergodicity: $\mathcal{X}$ is called ergodic in $\psi$ if
    1. $\mathbb{E}[\psi(\mathcal{X})]$ is stationary
    2. The ensemble average equals the time average, that is $$ \frac{1}{2N+1}\sum^N_{k=-N}\psi(\mathcal{X}(k))\rightarrow\mathbb{E}[\psi(\mathcal{X})]\;\text{as}\;N\rightarrow \infty$$ (see the AR(1) sketch below)
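
A minimal sketch of mean-ergodicity, assuming a stationary AR(1) sequence $\mathcal{X}(k+1)=0.9\,\mathcal{X}(k)+W(k)$ with IID $W(k)\sim\mathcal{N}(0,1)$, comparing the time average of one long realization against the ensemble average over many realizations:

```python
import numpy as np

rng = np.random.default_rng(4)

a = 0.9   # stable, so the sequence has a stationary distribution with mean 0

# Time average along one long realization
x, total, n_steps = 0.0, 0.0, 200_000
for _ in range(n_steps):
    x = a * x + rng.standard_normal()
    total += x
print(total / n_steps)   # time average ≈ 0

# Ensemble average across many realizations at one fixed time
ens = np.zeros(100_000)
for _ in range(200):     # iterate long enough to be (nearly) stationary
    ens = a * ens + rng.standard_normal(ens.size)
print(ens.mean())        # ensemble average ≈ 0
```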

Markov Sequences

  • Markov Sequence: An SS $\mathcal{X}(k)$ is called a (discrete-time) Markov sequence if $$f_ \mathcal{X}\left(k,x(k)\middle|\mathcal{X}(k-1)=x(k-1),\mathcal{X}(k-2)=x(k-2),\ldots\right)=f_ \mathcal{X}(k;x(k)|\mathcal{X}(k-1)=x(k-1))$$
  • Gaussian-Markov Sequence (GMS): $\mathcal{X}(k+1)=g(k,\mathcal{X}(k),W(k))$ where $W(k)$ is IID Gaussian
    • In an LTV system, the form is usually $\mathcal{X}(k+1)=A(k)\mathcal{X}(k)+B(k)W(k)$
    • For linear Markov sequences, the deterministic mean sequence and the centered uncertain sequence completely decouple, so we often assume the SS is centered in this case.
    • We often make some assumption on the initial condition $\mathcal{X}(0)$, such as known, deterministic, or uniformly distributed within a certain domain.
  • Hidden Markov model: Sequence $\mathcal{Y}$ is called a Hidden Markov Model if it’s modeled by a system of the form $$\begin{align}\mathcal{X}(k+1)&=g(k,\mathcal{X}(k),W(k)) \\ \mathcal{Y}(k)&=h(k,\mathcal{X}(k),W(k))\end{align}$$
    We also say that $\mathcal{Y}$ has a (discrete-time) stochastic state space.

Linear Markov Model

Given the model $$\begin{align}\mathcal{X}(k+1)&=A(k)\mathcal{X}(k)+B(k)W(k) \\ \mathcal{Y}(k)&=C(k)\mathcal{X}(k)+D(k)W(k)\end{align}$$
Note that we assume $\mathcal{X}(k)$ and $\mathcal{Y}(k)$ are centered with regard to deterministic inputs.

  • Explicit state transition: By recursive substitution, $$\mathcal{X}(k)=\Psi(k,q)\mathcal{X}(q)+\sum^{k-1}_ {i=q}\Gamma(k,i)W(i)$$ where state transition matrix $\Psi(k,q)=\begin{cases}I, &k=q \\ \prod^{k-1}_ {i=q}A(i),& k>q\end{cases}$ and $\Gamma(k,i)=\Psi(k,i+1)B(i)$.
  • Conditioned Mean sequence: $\mu_ \mathcal{X}(k|q)=\Psi(k,q)\mu_ \mathcal{X}(q)+\sum^{k-1}_{i=q}\Gamma(k,i)\mu_W(i)$
  • Conditioned Autocovariance Matrix: $$\mathrm{K}_ \mathcal{X}(k,l|q)=\Psi(k,q)S_ \mathcal{X}(q)\Psi^\top(l,q)+\sum^{\min\{k,l\}-1}_{i=q}\Gamma(k,i)S_W(i)\Gamma^\top(l,i)$$
    • A special case: $S_ \mathcal{X}(k|q)=\mathrm{K}_ \mathcal{X}(k,k|q)=\sum^{k-1}_{i=q}\Gamma(k,i)S_W(i)\Gamma^\top(k,i)$, $S_ \mathcal{X}(k|k)=0$
    • Useful equation (stationary centered case): $$\mathrm{K}_ \mathcal{X}(k,l)=\begin{cases} S_ \mathcal{X}(k)\cdot(A^\top)^{(l-k)}&,l>k\\ S_ \mathcal{X}(k)&,l=k\\ A^{(k-l)}S_ \mathcal{X}(k)&,l< k\end{cases}$$
  • Conditional Autocorrelation Matrix: $R_ \mathcal{X}(k,l|q)=\mathrm{K}_ \mathcal{X}(k,l|q)+\mu_ \mathcal{X}(k|q)\mu_ \mathcal{X}^\top(l|q)$
  • Recursive Form:
    • Mean sequence: $\mu_ \mathcal{X}(k+1|q)=A(k)\mu_ \mathcal{X}(k|q)+B(k)\mu_W(k)$
    • (Discrete-time algebraic) Lyapunov difference equation (algebraic Stein difference equation): $S_ \mathcal{X}(k+1|q)=A(k)S_ \mathcal{X}(k|q)A^\top(k)+B(k)S_W(k)B^\top(k)$ (simulated in the sketch after this list)
  • Convergence when $k\rightarrow\infty$:
    • Mean convergence: $\mu_ \mathcal{X}(k|q)$ converges only if $\max_i|\lambda_i(A)|<1$ ($\lambda_i$ denotes an eigenvalue)
    • Covariance convergence: $S_ \mathcal{X}(k|q)$ converges only if $\max_i|\lambda_i(A)|<1$
    • (Discrete-time) Lyapunov equation (Stein equation): $\bar{S}_ \mathcal{X}=A\bar{S}_ \mathcal{X}A^\top+B\bar{S}_WB^\top$. A solution exists iff $A$ is asymptotically stable, and it characterizes the stationary sequence.
  • Observation property:
    • Mean: $\mu_ \mathcal{Y}(k|q)=C(k)\mu_ \mathcal{X}(k|q)+D(k)\mu_W(k)$
    • Covariance: $$\mathrm{K}_ \mathcal{Y}(k,l|q)=\begin{cases}C(k)\mathrm{K}_ \mathcal{X}(k,l|q)C^\top(l)+C(k)\Gamma(k,l)S_W(l)D^\top(l)&:k>l \\C(k)S_ \mathcal{X}(k|q)C^\top(k)+D(k)S_W(k)D^\top(k)&:k=l\\ C(k)\mathrm{K}_ \mathcal{X}(k,l|q)C^\top(l)+D(k)S_W(k)\Gamma^\top(l,k)C^\top(l)&:k< l\end{cases}$$
    • Stationary covariance: $$\mathrm{K}_ \mathcal{Y}(s)=\begin{cases}CA^{|s|}\bar{S}_ \mathcal{X}C^\top+CA^{|s|-1}B\bar{S}_ WD^\top&:s\neq 0 \\C\bar{S}_ \mathcal{X}C^\top+D\bar{S}_ WD^\top&:s=0\end{cases}$$
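
A simulation sketch of the Lyapunov difference recursion above, under an assumed LTI example (the matrices below are arbitrary stable choices, not from the course), comparing the recursion with a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(5)

# X(k+1) = A X(k) + B W(k), with W(k) IID zero-mean, covariance S_W.
A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
B = np.eye(2)
S_W = np.diag([0.5, 1.0])

# Lyapunov difference recursion S(k+1) = A S(k) A^T + B S_W B^T,
# starting from a deterministic X(0), so S(0) = 0.
S = np.zeros((2, 2))
for _ in range(50):
    S = A @ S @ A.T + B @ S_W @ B.T

# Monte Carlo over realizations of the same recursion
X = np.zeros((100_000, 2))
for _ in range(50):
    W = rng.multivariate_normal(np.zeros(2), S_W, size=len(X))
    X = X @ A.T + W @ B.T

print(S)
print(np.cov(X.T))   # close to S
```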

Gaussian Stochastic Sequence

  • Jointly Gaussian $\Rightarrow, \nLeftarrow$ Marginally Gaussian
  • (Theorem 6) A linear controllable GMS $\mathcal{X}(k)$ is stationary iff. $A(k)=A, B(k)=B$ (time-invariant) and $A$ is asymptotically stable.
  • (Corollary 1) All LTI stationary GMS are also ergodic in all finite moments
  • Solve $\mathcal{X}(k+1)=A\mathcal{X}(k)+BW(k)$ when $\mathcal{X}$ is stationary ($\max_i|\lambda_i(A)|<1$); steps 1 and 2 are sketched in code after this list
    1. solve $\bar{\mu}_ \mathcal{X}=A\bar{\mu}_ \mathcal{X}+B\bar{\mu}_W$ for $\bar{\mu}$
    2. solve $\bar{S}_ \mathcal{X}=A\bar{S}_ \mathcal{X}A^\top+B\bar{S}_ WB^\top$ for $\bar{S}_ \mathcal{X}$
    3. calculate $\Sigma_ \mathcal{X}(k:l|q)=\begin{bmatrix}\mathrm{K}_ \mathcal{X}(k,k|q)&\cdots&\mathrm{K}_ \mathcal{X}(k,l|q)\\ \vdots&\ddots&\vdots\\ \mathrm{K}_ \mathcal{X}(l,k|q)&\cdots&\mathrm{K}_ \mathcal{X}(l,l|q)\end{bmatrix}$ using $\bar{S}_ \mathcal{X}$
    4. Then $f_ \mathcal{X}(k:l;x(k:l))$ is determined with $\mu_ \mathcal{X}(k:l)=\{\bar{\mu}_ \mathcal{X},\bar{\mu}_ \mathcal{X}\ldots\bar{\mu}_ \mathcal{X}\}$ and $\Sigma_ \mathcal{X}(k:l)$ above.
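
A sketch of steps 1 and 2 using SciPy's discrete Lyapunov solver, reusing the same arbitrary stable matrices as in the earlier simulation sketch:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.1],
              [0.0, 0.5]])           # asymptotically stable
B = np.eye(2)
mu_W = np.array([1.0, -1.0])
S_W = np.diag([0.5, 1.0])

# Step 1: mu = A mu + B mu_W  <=>  (I - A) mu = B mu_W
mu_bar = np.linalg.solve(np.eye(2) - A, B @ mu_W)

# Step 2: S = A S A^T + B S_W B^T (discrete Lyapunov/Stein equation)
S_bar = solve_discrete_lyapunov(A, B @ S_W @ B.T)

# Step 3 uses K(k,l) = S_bar (A^T)^(l-k) for l > k, e.g. the (k, k+2) block:
K_0_2 = S_bar @ np.linalg.matrix_power(A.T, 2)
print(mu_bar, S_bar, K_0_2, sep="\n")
```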

Observation & Filtering

  • (LTI) Luenberger observer: $$\begin{align}\hat{\mathcal{X}}(k+1)&=A\hat{\mathcal{X}}(k)+L(\hat{\mathcal{Y}}(k)-\mathcal{Y}(k))+B\bar{\mu}_W \\ \hat{\mathcal{Y}}(k)&=C\hat{\mathcal{X}}(k)+D\bar{\mu}_W\end{align}$$ where $L$ is the observer gain.
    • Note: We often assume that the process & measurement noise are decoupled and independent
    • Combined form: $\hat{\mathcal{X}}(k+1)=[A+LC]\hat{\mathcal{X}}(k)-L\mathcal{Y}(k)+[B+LD]\bar{\mu}_W$
  • State estimation residual $r(k)=\mathcal{X}(k)-\hat{\mathcal{X}}(k)$
    • Combined form: $r(k+1)=[A+LC]r(k)+[B+LD]\tilde{W}(k)$
    • Stationary covariance can be solved from a Lyapunov equation $$\bar{S}_r=[A+LC]\bar{S}_r[A+LC]^\top+[B+LD]\bar{S}_W[B+LD]^\top$$
  • Objective: minimize $\bar{S}_r=\mathbb{E}(rr^\top)$
    • Solution: $L=-A\bar{S}_rC^\top[C\bar{S}_rC^\top+D\bar{S}_WD^\top]^{-1}$ (Kalman observer gain)
    • Or solve the (discrete-time) algebraic Riccati equation (solved numerically in the sketch after this list) $$\bar{S}_r=A\bar{S}_rA^\top+B\bar{S}_WB^\top-A\bar{S}_rC^\top[C\bar{S}_rC^\top+D\bar{S}_WD^\top]^{-1}C\bar{S}_rA^\top$$
  • Innovation sequence $e(k)=\hat{\mathcal{Y}}(k)-\mathcal{Y}(k)$ when $L$ is optimal
    • We can find that $\mu_e=0$ and $\mathrm{K}_e(k+s,k)=\begin{cases}C\bar{S}_rC^\top+D\bar{S}_WD^\top&:s=0 \\0&:s\neq0\end{cases}$. So the innovation sequence is white (IID in the Gaussian case); this holds only for the Kalman observer.
  • (Output) Probabilistically-equivalent model: $$\begin{align}\mathcal{X}(k+1)&=A\mathcal{X}(k)+Le(k) \\ \mathcal{Y}(k)&=C\mathcal{X}(k)-e(k)\end{align}$$
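
A numerical sketch of the optimal observer, solving the Riccati equation with SciPy (note the transposes: the estimation DARE is the dual of the control DARE). The matrices are arbitrary illustrative choices, constructed so that the process and measurement noises are decoupled ($B\bar{S}_WD^\top=0$), as assumed above:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
# W(k) in R^3 stacks two process-noise channels and one measurement-noise
# channel, so that B S_W D^T = 0 (decoupled noises, as assumed above).
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0, 0.0, 1.0]])
S_W = np.eye(3)

Q = B @ S_W @ B.T   # B S_W B^T
R = D @ S_W @ D.T   # D S_W D^T
S_r = solve_discrete_are(A.T, C.T, Q, R)   # stationary residual covariance

# Kalman observer gain, with the sign convention L(Yhat - Y) used above
L = -A @ S_r @ C.T @ np.linalg.inv(C @ S_r @ C.T + R)

print(S_r)
print(np.abs(np.linalg.eigvals(A + L @ C)))   # all < 1: stable error dynamics
```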

Notes for math typing in Hexo

  1. Escape `\` as `\\`. In particular, escape `{` as `\\{` instead of `\{`, and escape `\\` as `\\\\`.
  2. Be careful with `_`: markdown treats it as an italic marker. Adding a space after `_` is a useful workaround.
  3. Some useful MathJax tricks at StackExchange
  4. Some capital Greek letters have no dedicated command; use the corresponding Latin letter with the `\mathrm` command.