Symmetric Alpha-Stable Processes I

In two weeks I will be attending the STM2016 workshop in Tokyo on spatial-temporal modeling. During the workshop I will be presenting some work with Nourddine Azzaoui and Gareth Peters on the simulation of a general class of non-stationary ${\alpha}$-stable processes. In this post, I want to provide some background to this work.

Processes with Independent Increments

To understand the class of processes we have been studying, it is helpful to begin with a review of stochastic processes with independent increments. Recall that a continuous-parameter stochastic process is a process ${\{x_t,~t \in T\}}$ where ${T}$ is an interval of ${\mathbb{R}}$. A process with independent increments is then a process whose random variables ${\{x_t\}}$ are such that, for any ${t_1 < \cdots < t_n}$, the differences

${x_{t_2} - x_{t_1},\ldots,x_{t_n} - x_{t_{n-1}}}$

are mutually independent; note that the increments telescope, ${x_{t_n} - x_{t_1} = \sum_{i=2}^n (x_{t_i} - x_{t_{i-1}})}$. Moreover, if the distribution of ${x_t - x_s}$ depends only on ${t-s}$, a process with independent increments is said to have strict-sense stationary increments.

To ensure that a given process with independent increments exists, we can apply the Kolmogorov existence theorem. The key thing we need to check is consistency. To see that we run into no problems, suppose that ${s_1 < s_2 < s_3}$ and set ${y_1 = x_{s_2} - x_{s_1}}$, ${y_2 = x_{s_3} - x_{s_2}}$ and ${y_3 = x_{s_3} - x_{s_1}}$. If the process ${\{x_t\}}$ has independent increments, then ${y_1}$ and ${y_2}$ are mutually independent. Moreover, since ${y_3 = y_1 + y_2}$, it follows that ${y_3}$ is the sum of two independent random variables, as required.

Note that it is not necessary to assign a distribution to the ${x_t}$'s; we can instead consider the process ${\{x_t - x_0,~t \in T\}}$, where ${\mathbb{P}(x_0 = 0) = 1}$. A key example of a process with independent increments is Brownian motion.

Example: (Brownian Motion) Suppose that ${x_t - x_s}$ is real and normally distributed, with

${\mathbb{E}[x_t - x_s] = 0}$ and ${\mathbb{E}[(x_t - x_s)^2] = \sigma^2|t - s|}$,

with ${\sigma > 0}$. The interval ${T}$ is set to ${[0,\infty)}$ and ${\mathbb{P}(x_0 = 0) = 1}$.
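As a concrete illustration, a path of Brownian motion on a grid can be simulated by cumulatively summing independent Gaussian increments. A minimal NumPy sketch (the function name and parameters are my own, not from any library):

```python
import numpy as np

def simulate_brownian_motion(n_steps, t_max, sigma=1.0, rng=None):
    """Simulate Brownian motion on [0, t_max] on a uniform grid by
    summing independent Gaussian increments; x_0 = 0 with probability one."""
    rng = np.random.default_rng() if rng is None else rng
    dt = t_max / n_steps
    # Independent increments: x_t - x_s ~ N(0, sigma^2 (t - s)).
    increments = rng.normal(0.0, sigma * np.sqrt(dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(increments)))

path = simulate_brownian_motion(1000, 1.0, rng=np.random.default_rng(0))
```

Each increment is drawn independently, which is exactly the independent-increments property above; the cumulative sum then recovers the path.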

Symmetric ${\alpha}$-Stable Processes with Independent Increments

Consider the process ${X = \{X_t,~t \in \mathbb{R}\}}$, with finite-dimensional distributions corresponding to the distributions of the random vectors ${(X_{t_1},\ldots,X_{t_n})}$, ${t_1,\ldots,t_n \in \mathbb{R}}$, ${n \geq 1}$. ${X}$ is said to be an ${\alpha}$-stable process if for any ${A,B > 0}$ there is a number ${C > 0}$ such that the random vectors ${\mathbf{X} = (X_{t_1},\ldots,X_{t_n})}$ satisfy the stability condition (an equality in distribution)

${A\mathbf{X}^{(1)} + B\mathbf{X}^{(2)} = C\mathbf{X},}$

where ${\mathbf{X}^{(1)},\mathbf{X}^{(2)}}$ are independent copies of ${\mathbf{X}}$. Gaussian processes correspond to the case ${\alpha = 2}$. For further details see Stable Non-Gaussian Random Processes.

In general, symmetric ${\alpha}$-stable random vectors (${1 < \alpha < 2}$) do not have closed-form joint probability density functions. However, they do admit a closed-form characteristic function, which is given by

${\Phi(\theta) = \exp\left(-\int_{\mathbb{S}^{d-1}} \left|\sum_{j=1}^d s_j\theta_j\right|^{\alpha}d\Gamma(\mathbf{s})\right),~\theta \in \mathbb{R}^d.}$

The measure ${\Gamma}$ is finite, symmetric and uniquely determined on the Borel subsets of the unit sphere ${\mathbb{S}^{d-1} \triangleq \{\mathbf{s} \in \mathbb{R}^d|~\|\mathbf{s}\| = 1\}}$.

For ${1 < \alpha < 2}$, symmetric ${\alpha}$-stable processes are qualitatively different from Gaussian processes. In particular, they are not characterized by their mean and covariance functions. In fact, the covariance is infinite, a consequence of the heavy tails of ${\alpha}$-stable random variables.
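Although the density has no closed form, symmetric ${\alpha}$-stable variates can still be sampled exactly via the Chambers-Mallows-Stuck transformation of a uniform and an exponential variate. A minimal sketch of the symmetric (skewness-zero) case (the function name is my own):

```python
import numpy as np

def sample_sas(alpha, size, rng=None):
    """Draw unit-scale symmetric alpha-stable variates via the
    Chambers-Mallows-Stuck method (beta = 0 case), 0 < alpha <= 2."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # angular variable
    w = rng.exponential(1.0, size)                # exponential mixing variable
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

x = sample_sas(1.5, 100_000, rng=np.random.default_rng(1))
```

Note that for ${\alpha = 2}$ the formula collapses to a Gaussian draw, while for ${\alpha < 2}$ the sample second moment diverges as the sample grows, reflecting the infinite covariance mentioned above.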

In the case of symmetric ${\alpha}$-stable processes (${1 < \alpha < 2}$), a natural notion of dependence is the covariation. For ${z \in \mathbb{R}}$ and ${p > 0}$, denote the signed power ${z^{\langle p\rangle} = |z|^p\,\mathrm{sign}(z)}$. For a pair of jointly symmetric ${\alpha}$-stable random variables, denoted by ${X_1,X_2}$, the covariation is defined as

${[X_1,X_2]_{\alpha} \triangleq \int_{\mathbb{S}^1} s_1s_2^{\langle \alpha - 1\rangle}d\Gamma_{(X_1,X_2)}(s_1,s_2).}$
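When the spectral measure ${\Gamma_{(X_1,X_2)}}$ is discrete, the integral reduces to a finite sum over its point masses. A toy sketch of that computation (the names are my own, purely for illustration):

```python
import numpy as np

def signed_power(z, p):
    """Signed power z^<p> = |z|^p * sign(z)."""
    return np.abs(z) ** p * np.sign(z)

def covariation(points, weights, alpha):
    """Covariation [X1, X2]_alpha when the spectral measure Gamma is a
    sum of point masses weights[k] at unit-circle points points[k]."""
    s = np.asarray(points, dtype=float)
    g = np.asarray(weights, dtype=float)
    return float(np.sum(s[:, 0] * signed_power(s[:, 1], alpha - 1) * g))

# A symmetric measure: equal masses at (1/sqrt(2), 1/sqrt(2)) and its antipode.
c = 1 / np.sqrt(2)
val = covariation([(c, c), (-c, -c)], [0.5, 0.5], alpha=1.5)
```

The antipodal pair of point masses keeps ${\Gamma}$ symmetric, as required for a symmetric ${\alpha}$-stable vector.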

A general method to represent stochastic processes is via a stochastic integral, which is defined by the Riemann-Stieltjes integral

${x_t = \int_T f(t,\lambda)d\xi(\lambda),}$

where ${f(t,\lambda)}$ is a step function, ${\xi}$ is an appropriate stochastic process, and ${d\xi}$ is a random measure (i.e., a measure-valued random element). In some cases, the class of functions can be extended using completion arguments. However, for our purposes it is sufficient to stick to step functions.
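For a step function, the Riemann-Stieltjes integral is just a finite sum: the value of ${f}$ on each partition cell weighted by the increment of ${\xi}$ over that cell. A minimal sketch (names my own), illustrated against a Gaussian (${\alpha = 2}$) independent-increment process:

```python
import numpy as np

def stochastic_integral_step(values, knots, xi):
    """Riemann-Stieltjes integral of a step function against a process xi
    observed at `knots`: values[i] is the constant value of f on
    [knots[i], knots[i+1]), and the integral is sum_i f_i * d_xi_i."""
    increments = np.diff(xi)
    return float(np.sum(np.asarray(values) * increments))

# Build a Gaussian independent-increment process on a partition of [0, 1].
rng = np.random.default_rng(2)
knots = np.linspace(0.0, 1.0, 11)
xi = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(np.diff(knots))))))
f_vals = np.ones(10)  # f = 1 on [0, 1)
integral = stochastic_integral_step(f_vals, knots, xi)
# With f = 1, the sum telescopes to xi(1) - xi(0).
```

The same construction applies verbatim when the increments of ${\xi}$ are ${\alpha}$-stable rather than Gaussian.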

Stochastic integral representations play a key role in the theory of symmetric ${\alpha}$-stable processes. In this case, the process ${\xi = (\xi_t,~t\in\mathbb{R})}$ is a symmetric ${\alpha}$-stable process. Sufficient conditions for the stochastic integral representation to hold are that ${1 < \alpha < 2}$ and the following regularity conditions hold:

1. The process ${\xi}$ possesses weak right limits.
2. For any linear combination ${\zeta}$ of elements of ${\xi}$, the map ${t \mapsto [\xi_t,\zeta]_{\alpha}}$ is of bounded variation on ${\mathbb{R}}$.

Aside from being a key notion of dependence, the covariation plays an important role in defining a norm for symmetric ${\alpha}$-stable stochastic integrals. Denote by ${\mathcal{L}}$ the linear space of step functions ${f}$. Then ${\|f\|_{\mathcal{L}}}$, defined by ${\|f\|_{\mathcal{L}}^{\alpha} = \left[\int fd\xi,\int fd\xi\right]_{\alpha}}$, is a norm on ${\mathcal{L}}$.

Symmetric ${\alpha}$-stable processes with independent increments play a particularly important role. Here, the random measure ${d\xi}$ is defined as ${d\xi([s,t)) = \xi_t - \xi_s.}$

Denote by ${\xi_{t+0}}$ the weak right limit of ${\xi}$ at ${t}$. Cambanis and Miller have shown that ${\|f\|_{\mathcal{L}}^{\alpha} = \int |f|^{\alpha}dF}$, where ${F(t)}$ corresponds to the scale parameter of the random variable ${\xi_{t+0}}$. This result provides a basis for extending the space of functions ${f}$ to a wider class of functions (denoted by ${\Lambda_{\alpha}(d\xi)}$ in Cambanis and Miller).
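To make the norm concrete, suppose additionally that the increments are stationary, so that ${F(t) = \sigma^{\alpha}t}$ (an assumption of mine for this illustration; the general case allows any nondecreasing ${F}$). For a step function, ${\int |f|^{\alpha}dF}$ is then a finite sum. A hedged sketch, with hypothetical names:

```python
import numpy as np

def sas_integral_scale(values, knots, alpha, sigma=1.0):
    """Scale parameter of int f dxi for a step function f against an
    SaS process with stationary increments, i.e. F(t) = sigma**alpha * t:
    ||f||^alpha = int |f|^alpha dF, computed cell by cell."""
    dF = sigma ** alpha * np.diff(knots)
    return float(np.sum(np.abs(np.asarray(values)) ** alpha * dF) ** (1 / alpha))

# f = 2 on [0, 0.5) and f = 1 on [0.5, 1): ||f||^alpha = 2^alpha / 2 + 1 / 2.
scale = sas_integral_scale([2.0, 1.0], [0.0, 0.5, 1.0], alpha=1.5)
```

The returned value is the scale parameter of the stable random variable ${\int f\,d\xi}$, which is exactly what a simulation of the integral needs.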

It is also possible to represent the covariation in terms of the function ${F}$. Consider the two random variables ${\eta_1 = \int fd\xi}$ and ${\eta_2 = \int gd\xi}$, where ${f}$ and ${g}$ are step functions. Cambanis and Miller have shown that ${[\eta_1,\eta_2]_{\alpha} = \int fg^{\langle \alpha - 1\rangle}dF.}$ Why is this important? Consider an $n$-dimensional skeleton $\mathbf{X} = (X_1,\ldots,X_n)$ of an $\alpha$-stable process with $1 < \alpha < 2$. The characteristic function of $\mathbf{X}$ can then be written as

$\Phi_{\mathbf{X}}(\theta) = \exp\left(-\left[\sum_{i=1}^n \theta_i X_i, \sum_{i=1}^n \theta_i X_i\right]_{\alpha}\right)$.

I’ll leave it there for now. In our current work, we are considering a class of symmetric ${\alpha}$-stable processes with non-independent increments. We are using a generalization of these results for processes with independent increments to provide an explicit representation of the process, which can be used to develop simulation and estimation techniques.