Gaussian noise channels at medium SNR

Shannon’s noisy channel coding theorem tells us that the capacity of a memoryless channel is the maximum rate at which information can be transmitted reliably over that channel. In general, computing the capacity is a difficult problem. As such, there has been extensive work on asymptotics.

In the case of the additive Gaussian noise channel, the capacity is well known. Even so, it is still interesting to characterize the asymptotic behavior (as the SNR tends to zero or infinity), whether for use in proofs or to provide simple design guidelines for real-world communication systems.

Despite the work on asymptotics, it is more difficult to characterize the behavior at medium SNR without resorting to the exact expression for the capacity.

In this post, I look at the medium-SNR behavior of the Gaussian noise channel. It turns out that an SNR of $0$ dB is particularly special, and it suggests a way of obtaining simple capacity approximations at medium SNR for general classes of additive noise channels.
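To see why neither asymptote is a good guide at medium SNR, here is a minimal numerical sketch (not from the post itself) comparing the standard AWGN capacity formula $C = \frac{1}{2}\log_2(1 + \mathrm{SNR})$ with its usual low- and high-SNR approximations:

```python
import math

def awgn_capacity(snr):
    """Exact AWGN capacity in bits per channel use: C = 0.5 * log2(1 + SNR)."""
    return 0.5 * math.log2(1.0 + snr)

def low_snr_approx(snr):
    """First-order expansion as SNR -> 0: C ~ SNR / (2 ln 2)."""
    return snr / (2.0 * math.log(2.0))

def high_snr_approx(snr):
    """High-SNR asymptote: C ~ 0.5 * log2(SNR)."""
    return 0.5 * math.log2(snr)

# Around 0 dB (SNR = 1), both approximations visibly diverge from the
# exact capacity, which is what makes the medium-SNR regime interesting.
for snr_db in (-20, -10, 0, 10, 20):
    snr = 10.0 ** (snr_db / 10.0)
    print(f"{snr_db:>4} dB  exact={awgn_capacity(snr):.4f}  "
          f"low={low_snr_approx(snr):.4f}  high={high_snr_approx(snr):.4f}")
```

At $0$ dB the exact capacity is exactly $0.5$ bits per channel use, while the low-SNR approximation overshoots and the high-SNR approximation gives zero.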

Analyzing the effect of side information: a perturbative approach

Quite often we need to make decisions. These decisions typically depend on some information we have obtained, either through direct observation or because it has been communicated to us by someone (or something) else. This side information is not usually perfect: our measurements are subject to error, and information communicated to us is subject to imperfections in the communication medium.

How can we understand the effect of imperfect side information on our decisions?

ISIT 2016 Paper

In July, the International Symposium on Information Theory will be held in Barcelona, Spain. One of the papers that will be appearing there is some recent work I have done with Mauro de Freitas, Laurent Clavier, Alban Goupil, Gareth Peters, and Nourddine Azzaoui. We have been considering variations on the additive $\alpha$-stable noise channel $Y = X + N$, where $N$ is an $\alpha$-stable random vector.

These kinds of channels appear in various communication systems, including wireless and, quite recently, molecular systems. As such, it is interesting to try to compute the capacity of these channels. The special case $\alpha = 2$ with a power constraint has been extensively studied (it is the Gaussian case!), but for other values of $\alpha$ the capacity is not well understood under any constraint.

Enter our paper. We considered the case where the noise $N$ is an isotropic $\alpha$-stable random vector. So, the channel is the additive isotropic $\alpha$-stable noise ($AI\alpha SN$) channel. Our results? A quick summary:

1. The optimal input for the $AI\alpha SN$ channel subject to a constraint $\mathbb{E}[|\mathbf{X}|^r] = (\mathbb{E}[|X_1|^r],\mathbb{E}[|X_2|^r])^T \preceq \mathbf{c},~r < \alpha$ exists and is unique.
2. The capacity subject to $\mathbb{E}[|\mathbf{X}|^r] \preceq \mathbf{c},~r < \alpha$, is lower bounded by a function of the form $\frac{1}{\alpha}\log(1 + Kc_{\min}^{\alpha})$. (Here $c_{\min}$ is the minimum element of $\mathbf{c}$, and $K$ is a constant that depends on the noise parameters.)
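To get a feel for the lower bound in result 2, here is a small sketch that evaluates a function of that form. The constant $K$ depends on the noise parameters; the value used below is purely a placeholder, not one derived in the paper:

```python
import math

def capacity_lower_bound(c_min, alpha, K):
    """Evaluate a bound of the form (1/alpha) * log(1 + K * c_min**alpha).

    c_min : minimum element of the moment-constraint vector c
    alpha : tail index of the alpha-stable noise, 0 < alpha <= 2
    K     : noise-dependent constant (placeholder value here)
    """
    return (1.0 / alpha) * math.log(1.0 + K * c_min ** alpha)

# Illustrative only: the bound grows with the constraint level c_min,
# mirroring how log(1 + SNR) grows with power in the Gaussian case.
for c_min in (0.5, 1.0, 2.0, 4.0):
    print(f"c_min={c_min}:  bound={capacity_lower_bound(c_min, 1.5, 1.0):.4f}")
```

Note that at $\alpha = 2$ the expression collapses to the familiar $\frac{1}{2}\log(1 + Kc_{\min}^{2})$, matching the shape of the Gaussian capacity formula.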

We also had a brief look at the extension to parallel channels, but for that you will need to read the paper.

For the official summary…
Title: Achievable rates for additive isotropic alpha-stable noise channels
Abstract: Impulsive noise arises in many communication systems—ranging from wireless to molecular—and is often modeled via the $\alpha$-stable distribution. In this paper, we investigate properties of the capacity of complex isotropic $\alpha$-stable noise channels, which can arise in the context of wireless cellular communications and are not well understood at present. In particular, we derive a tractable lower bound, as well as prove existence and uniqueness of the optimal input distribution. We then apply our lower bound to study the case of parallel $\alpha$-stable noise channels and derive a bound that provides insight into the effect of the tail index $\alpha$ on the achievable rate.