(1) I am now an Associate Editor for IEEE Communications Letters.

(2) A new paper has been accepted for IEEE ICC 2018:

**Malcolm Egan**, Trang C. Mai, Trung Q. Duong and Marco Di Renzo, “Coordination via advection dynamics in nano networks with molecular communication”, *accepted for publication in IEEE International Conference on Communications (ICC), (2018).*

(3) An invited paper will appear in CISS 2018:

**Malcolm Egan** and Samir M. Perlaza, “Capacity approximation of continuous channels by discrete inputs”, *accepted for publication in CISS 2018 (Invited Paper)*.

(4) In December, our paper appeared at GLOBECOM 2017:

**Malcolm Egan**, Laurent Clavier, Mauro de Freitas, Anne Savard and Jean-Marie Gorce, “Wireless communication in dynamic interference”, *Proc. IEEE Global Communications Conference*, 2017. [HAL]

(5) Our new preprint on coexistence in molecular communications is available on HAL:

**Malcolm Egan**, Trang C. Mai, Trung Q. Duong and Marco Di Renzo, “Coexistence in molecular communications,” 2017. [HAL]


Andrea Tassi, Malcolm Egan, Robert J. Piechocki and Andrew Nix, “Modelling and Design of Millimeter-Wave Networks for Highway Vehicular Communication”, *accepted for publication in IEEE Transactions on Vehicular Technology*.

Check it out on arXiv here.

(2) With Laurent Clavier, Mauro de Freitas, Louis Dorville and Jean-Marie Gorce, I have a new paper accepted in IEEE GLOBECOM 2017:

Malcolm Egan, Laurent Clavier, Mauro de Freitas, Louis Dorville, Jean-Marie Gorce and Anne Savard, “Wireless communication in dynamic interference”, *accepted in IEEE GLOBECOM 2017*.

This work applies our previous information theoretic results on additive alpha-stable noise channels to large-scale wireless networks to support the Internet of Things.

We are also visiting IEMN in Lille on the 13th June to present our work and discuss ongoing investigations into the coexistence problem in molecular communications.

**(2)** On the 22nd June I will be presenting in the INRIA POLARIS seminar in Grenoble on the topic of mechanism design for on-demand transport. You can find the details here.

**(3)** From the 26th to the 30th June, I will be attending the IEEE International Symposium on Information Theory. I will present joint work with Samir Perlaza and Slava Kungurtsev on our capacity sensitivity framework. You can find the paper here.

**(4)** I have a new paper with Andrea Tassi on arXiv.

Tassi, A., Egan, M., Piechocki, R. and Nix, A., “Modeling and design of millimeter-wave networks for highway vehicular communication,” *available at arXiv:1706.00298*.


Egan, M., Gorce, J.-M. and Cardoso, L., “Fast initialization of cognitive radio systems”, *accepted in IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2017.*

You can now find it on HAL.

**(2)** I will present in the POLARIS seminar at INRIA Grenoble on the 22nd June. Details are below.

**Title:** Mechanism design in on-demand transport

**Abstract:** Uber is one of several recent companies adopting a business model that lies in stark contrast with the standard approach used by taxi services. Underlying Uber’s business model is a new architecture, based on a market mechanism, which governs how commuters, drivers, and the company interact with each other. In this talk, we develop a new general model for on-demand transport networks with self-interested passengers and drivers. With this model, we introduce market mechanisms to allocate and price journeys, as well as the market formation subproblem. By analysis and simulation, we characterize the performance of the mechanisms and discuss insights using data obtained from a real on-demand transport provider.

**(3)** On Wednesday 24th May, I will be attending the GDR ISIS meeting on “Entropies, divergences et mesures informationnelles classiques et généralisées” in Paris.

**(4)** Mauro de Freitas from Université Lille 1 has been visiting me in Lyon from the 12th to 18th May. He presented some of our joint work on impulsive noise in communications. Here are the details:

**Title:** Wireless Networks in Dynamic Interference

**Abstract:** This work is motivated by the Internet of Things, where devices can transmit for very short periods of time. A consequence is that interference is dynamic; that is, the active transmitter set can change very rapidly. In this case, the Gaussian interference model may not be the most appropriate. In fact, dynamic interference can be better modeled by impulsive interference, particularly alpha-stable noise. In this talk, we characterize the capacity in the presence of alpha-stable noise via upper and lower bounds, and consider the behavior in medium interference regimes. This analysis reveals many similarities with the well understood Gaussian case, such as outage probability characterizations and power control in parallel channels.

(1) A new paper on capacity bounds for the symmetric alpha-stable noise channel is to appear in IEEE Transactions on Information Theory:

Mauro de Freitas, Malcolm Egan, Laurent Clavier, Alban Goupil, Gareth Peters and Nourddine Azzaoui, “Capacity bounds for additive symmetric alpha-stable noise channels,” *to appear in IEEE Transactions on Information Theory*.

(2) A new paper on molecular communication in the presence of anomalous diffusion is to appear in IEEE Communications Letters:

Trang C. Mai, Malcolm Egan, Trung Q. Duong and Marco Di Renzo, “Event detection in molecular communication networks with anomalous diffusion,” *to appear in IEEE Communications Letters*.

(3) A new paper on capacity sensitivity in non-Gaussian noise channels is to appear in IEEE International Symposium on Information Theory:

Malcolm Egan, Samir M. Perlaza and Vyacheslav Kungurtsev, “Capacity Sensitivity in Additive Non-Gaussian Noise Channels,” *to appear in IEEE International Symposium on Information Theory 2017*.

An extended version of this work can be found in the INRIA report here.

**1.** I have finished my contract in the Laboratoire de Mathématiques at Université Blaise Pascal in Clermont-Ferrand and will begin a new position in the Département Télécommunications at INSA Lyon in the middle of November, working with Jean-Marie Gorce and Leonardo Cardoso.

**2.** With Gareth Peters, Ido Nevat, Mahyar Shirvanimoghaddam and Iain B. Collings, I have a new paper accepted:

Malcolm Egan, Gareth W. Peters, Ido Nevat, Mahyar Shirvanimoghaddam and Iain B. Collings, “A ruin theoretic design approach for wireless cellular network sharing with facilities”, *to appear in Transactions on Emerging Telecommunications Technologies.*

This paper concerns network sharing in facilities, which I have been discussing here and here.

**3.** With Andrea Tassi, Robert Piechocki and Andy Nix, I have a new paper accepted in SigTelCom2017:

Andrea Tassi, Malcolm Egan, Robert J. Piechocki and Andrew Nix, “Wireless vehicular networks in emergencies: a single frequency network approach”, in *Proc. SigTelCom2017, to appear*.

In related news, last week I was in Prague and presented some related work with Andrea, Robert and Andrew on mmWave communications for vehicular networks. The talk was targeted at the computer scientists working on multi-agent coordination algorithms for intelligent transport systems in the Artificial Intelligence Center in the Czech Technical University in Prague.

**4.** In December, I will be presenting at the CFE-CMStatistics Conference in Seville, Spain. My talk is entitled *Simulation of a general class of alpha-stable processes*, which is based on work with Nourddine Azzaoui, Gareth Peters and Arnaud Guillin. Here is the abstract:

*The heavy-tail and extremal dependence properties of α-stable processes have led to their extensive use in fields ranging from finance to engineering. In these fields, the stochastic integral representation plays an important role both in characterizing α-stable processes as well as for the purposes of simulation and parameter estimation. In order to use the stochastic integral representation, constraints on the random measure must be imposed. A key constraint is the independently scattered condition, where disjoint increments of the random measure are independent. A key feature of the independently scattered condition is that the covariation is both left and right additive, which allows for simulation and estimation of this class of processes. Recently, a new generalization of the independently scattered condition has been introduced, which also preserves the left and right additivity of the covariation. This new generalization allows the characteristic function of a wide class of α-stable processes to be determined by a bimeasure. We deal with the problem of simulating from the bimeasure characterization of α-stable processes. In particular, we prove conditions under which the bimeasure leads to a positive definite characteristic function for the case of a two-dimensional skeleton. Based on this result, we then propose a method to construct and simulate n-dimensional skeletons, for arbitrary n.*
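Before tackling bimeasure characterizations, it helps to recall how a single symmetric α-stable random variable is simulated. The sketch below uses the classical Chambers–Mallows–Stuck transformation; it is only the standard univariate building block, not the bimeasure-based skeleton construction described in the abstract, and the function name is my own.

```python
import numpy as np

def sas_samples(alpha, n, seed=None):
    """Standard symmetric alpha-stable samples (0 < alpha <= 2)
    via the Chambers-Mallows-Stuck transformation."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(-np.pi / 2, np.pi / 2, n)  # uniform phase
    W = rng.exponential(1.0, n)                # unit exponential
    if alpha == 1.0:
        return np.tan(V)                       # Cauchy special case
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

# Sanity check: alpha = 2 recovers a Gaussian, which has variance 2
# under the standard parameterization (characteristic function exp(-t^2)).
x = sas_samples(2.0, 200_000, seed=0)
print(round(float(np.var(x)), 1))  # 2.0
```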

In a previous post, I discussed some different ways that infrastructure can be owned by different parties, especially in the setting where users are in a facility (e.g., mine, power plant or large residential area). I briefly mentioned the need for new evaluation metrics in order to optimize network sharing agreements and it is this aspect I want to explore further in this post. In particular, I will introduce the notion of the *probability of ruin*.

Suppose that you own a component of a wireless communication network. This might be spectrum, it might be a number of transmitting devices (e.g., small-cells), or you might even be an internet service provider (ISP) with backhaul infrastructure to connect small-cell networks. Irrespective of your particular component, the key question is whether or not providing it is profitable.

The basic idea of profitability is that what you pay is less than what you earn; your income is higher than your expenses. This simple idea pervades economic analysis of many systems. However, we need to account for the fact that the income and expenditures vary over time. If we are interested in a business, how do we decide that the business is profitable after 6 months, or a year, or indefinitely?

In the context of the economic analysis of wireless network sharing arrangements, the common approach is to consider the *expected profit;* that is, the income and expenditures are averaged. For example, see:

- Duan, L., Huang, J. and Shou, B., “Duopoly competition in dynamic spectrum leasing and pricing,” *IEEE Transactions on Mobile Computing*, vol. 11, no. 11, pp. 1706-1719, 2012.
- Yang, Y. and Quek, T., “Optimal subsidies for shared small cell networks–a social network perspective,” *IEEE Journal on Selected Topics in Signal Processing*, vol. 8, no. 4, pp. 690-702, 2014.
- Park, J., Kim, S. and Zander, J., “Asymptotic behavior of ultra-dense cellular networks and its economic impact,” in *Proc. IEEE Global Communications Conference (GLOBECOM)*, 2014.
- Berry, R., Honig, M., Nguyen, T., Subramanian, V., Zhou, H. and Vohra, R., “On the nature of revenue-sharing contracts to incentivize spectrum-sharing,” in *Proc. IEEE INFOCOM*, 2013.
- Cano, L., Capone, A., Carello, G., Cesana, M. and Passacantando, M., “Cooperative infrastructure and spectrum sharing in heterogeneous mobile networks,” *IEEE Journal of Selected Areas in Communications*, vol. 34, no. 10, pp. 2617-2629, 2016.

To make the notion of expected profit more precise and to explore its limitations, it is helpful to introduce the *basic revenue surplus process*:

**Definition (Basic Revenue Surplus Process):** Let $I_t$ be the income stochastic process and $E_t$ be the expenditure process. Then, the revenue surplus process is $R_t = u + I_t - E_t$, where the random variable $R_0 = u$ is the initial capital.

The basic revenue surplus process provides insight into the financial resources of the business at each time $t$, without accounting for interest (i.e., the interest rate is zero). Since assuming an interest rate of zero does not obscure any of the main ideas, in the remainder of this post I will focus on this case.

Assuming taking expectations is meaningful, the expected profit at time $T$ is then $\mathbb{E}[R_T]$. This may seem like a natural thing to do, but it is worthwhile exploring the implications. For example, it means that the main criterion for profitability is that $\mathbb{E}[R_T] \geq \gamma$; i.e., a business is unprofitable if its expected profit drops below a threshold $\gamma$.

But in practice, if a business’s profit drops below zero at *any* point in time before the target time $T$, then the business must borrow money, which may not be desirable. If instead we require that the revenue surplus $R_t \geq 0$ for all $t \leq T$, then it is clear that the expected profit does not do the job.

To properly account for how the revenue surplus varies over time, we need the more refined notion of the *probability of ruin*. This idea has been widely considered in mathematical insurance and risk theory; for example, see

- De Vylder, F. and Goovaerts, M., “Recursive calculation of finite time ruin probabilities,” *Insurance: Mathematics and Economics*, vol. 7, pp. 1-7, 1988.
- Asmussen, S., *Ruin Probabilities*. Singapore: World Scientific Publishing Co., 2000.

The key question in ruin theory is: *what is the probability that the revenue surplus drops below zero at a time before a given time* $T$? To answer this question, we need to define the time of ruin:

**Definition (Time of Ruin):** The time of ruin is defined as $\tau = \inf\{t > 0 : R_t < 0\}$.

The probability of ruin is then just the probability that the time of ruin is before time $T$; that is, $\Pr(\tau \leq T)$. (Readers might notice the similarities between the probability of ruin and the probability of buffer overflow in queuing theory.)
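To make the contrast with expected profit concrete, here is a small Monte Carlo sketch. The Gaussian per-period net revenue and all parameter values are illustrative assumptions of mine, not drawn from any of the papers above:

```python
import numpy as np

def ruin_probability(u, T, net_mean, net_std, n_paths=100_000, seed=0):
    """Monte Carlo estimate of P(R_t < 0 for some t <= T) for a
    discrete-time surplus process R_t = u + cumulative net revenue."""
    rng = np.random.default_rng(seed)
    net = rng.normal(net_mean, net_std, (n_paths, T))  # per-period net revenue
    surplus = u + np.cumsum(net, axis=1)
    return float(np.mean((surplus < 0).any(axis=1)))

# Two businesses with the SAME expected profit (0.5 per period over 52 periods)
# but different volatility in their net revenue:
steady = ruin_probability(u=10, T=52, net_mean=0.5, net_std=2.0)
volatile = ruin_probability(u=10, T=52, net_mean=0.5, net_std=10.0)
print(steady < volatile)  # True: same mean profit, very different ruin risk
```

The point of the example is that the expected profit is identical for both businesses, yet the one with volatile net revenue is ruined far more often, which is exactly what the ruin probability captures and the first moment cannot.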

In a future post, I will explore a concrete calculation of the probability of ruin in the context of wireless network sharing, accounting for non-zero interest rates. For now, I just want to point out some implications of using the probability of ruin rather than the expected profit.

One implication is that if the income and expenditures are linked to usage, then the dynamics of the demand can influence the probability of ruin in ways that do not affect the expected profit. This is due to the fact that the probability of ruin does not just depend on the first moment. As such, the variance (and higher order moments) also play a role, which suggests that if user demand has large fluctuations then the expected profit may not properly capture these effects.

Another implication is that financial parameters such as the interest rate become coupled to the revenue and expenditure processes. If these processes are dependent on the infrastructure design, then there can be unexpected dependence of the ruin probability on the resource allocation in the wireless network.

Recently, in

- Ballesteros, L., et al., “Effect of network performance on smartphone user behavior,” in the *Workshop on Perceptual Quality of Systems (PQS)*, 2016

the authors showed that the network performance (i.e., available resources to serve users) influences the way users behave; e.g., which apps they use.

If the ability to use certain apps leads to a higher revenue, then the probability of ruin will be jointly dependent on financial parameters (such as the interest rate) and also physical layer parameters (such as how time-frequency resources are allocated). To cope with this kind of complicated coupling, it may be necessary to introduce specialized contracts for network sharing. Perhaps there will even be some interest in contract theory; e.g.,

- Tirole, J., *The Theory of Corporate Finance*. Princeton University Press, 2006.

In the case of the additive Gaussian noise channel, the capacity is well known. However, it is still interesting to characterize the behavior of the asymptotes (as the SNR tends to zero or infinity) for use in proofs or to provide simple design guidelines for real-world communication systems.

Despite the work on asymptotes, it is more difficult to characterize the behavior at medium SNR without using the exact expression for the capacity.

In this post, I look at the medium SNR behavior of the Gaussian noise channel. It turns out that an SNR of $0$ decibels is particularly special and suggests a way of obtaining simple capacity approximations at medium SNR for general classes of additive noise channels.

More precisely, I will look at the role of $\mathrm{snr} = 1$ in the additive Gaussian noise channel $Y = \sqrt{\mathrm{snr}}\,X + N$, where $N$ is a standard Gaussian random variable (zero mean, variance of $1$) and $\mathrm{snr}$ is the signal-to-noise ratio (SNR). It is well known that in this case, the capacity (subject to a power constraint) is given by

$C = \frac{1}{2}\log_2(1 + \mathrm{snr})$ bits.

In information and communication theory, it is common to work in decibels (dB) and we can write $\mathrm{snr} = 10^{x/10}$, where $x$ is the SNR in decibels. Here are three situations where having $\mathrm{snr} = 1$, or $x = 0$ dB, is particularly special.

**(1) The intercept of the asymptote of the capacity as $\mathrm{snr} \to \infty$ is at 0 dB.**

As the SNR in dB $x \to \infty$, we can write

$C \approx \frac{1}{2}\log_2\!\left(10^{x/10}\right) = \frac{\log_2(10)}{20}\,x$,

which is the linear asymptote. Observe that setting $C = 0$ means that the intercept is at $x = 0$ dB, as claimed.
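A quick numerical check of this fact, assuming nothing beyond the capacity formula above (the function names are mine):

```python
import numpy as np

def capacity_bits(snr_db):
    """AWGN capacity in bits as a function of SNR in dB."""
    return 0.5 * np.log2(1 + 10 ** (snr_db / 10))

def asymptote_bits(snr_db):
    """High-SNR linear asymptote: drop the 1 inside the logarithm."""
    return 0.5 * np.log2(10.0) * snr_db / 10

print(asymptote_bits(0.0))                         # 0.0: intercept at 0 dB
print(capacity_bits(40.0) - asymptote_bits(40.0))  # tiny gap at high SNR
```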

**(2) The bend point of the capacity occurs at 0 dB.**

It is common to study the capacity as either the SNR tends to infinity or the SNR tends to zero. This is because, in general, it is difficult to obtain a simple characterization of the behavior at medium levels of SNR.

One approach to characterize the behavior of the capacity at medium SNR is to evaluate the bend point.

*Definition:* The bend point, $x_b$, of the capacity is the SNR (in dB) such that the second derivative of the capacity, viewed as a function of the SNR in dB, is maximized.

This might seem an odd definition, but there is a lot of intuition here. In particular, observe that the second derivative provides information about how the rate of change of the capacity is varying. By finding where the second derivative is maximized, we find where the curve is the most *bent*.

A figure helps to illustrate the situation. Observe in Fig. 1 that as $x \to -\infty$, the slope of the capacity curve tends to zero. As the SNR is increased, the slope starts to increase until it reaches the high SNR asymptote, where it is once again constant. As such, the second derivative is zero as both $x \to -\infty$ and $x \to \infty$. In between, the second derivative is positive and the bend point is its maximum.

*Fig. 1: Plot of the Gaussian capacity and asymptotic capacity curves.*

To find the bend point, it is not hard to show that the second derivative of the capacity has a unique maximum. As such, we can compute the third derivative and find where it is zero. After changing our units to nats (i.e., the logarithm is base $e$), we need to solve

$\frac{d^3}{dx^3}\,\frac{1}{2}\ln\!\left(1 + 10^{x/10}\right) = 0,$

which reduces to $1 - 10^{x/10} = 0$. Observe that the solution to this equation is $x = 0$ dB, as claimed.
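The claim can also be checked numerically by locating the maximum of a finite-difference second derivative of the capacity on a dB grid (the grid and spacing are arbitrary choices of mine):

```python
import numpy as np

snr_db = np.linspace(-30, 30, 60001)              # 0.001 dB grid
cap_nats = 0.5 * np.log(1 + 10 ** (snr_db / 10))  # capacity in nats

# second finite difference approximates d^2 C / d(snr_dB)^2
d2 = np.diff(cap_nats, 2)
bend_db = snr_db[1:-1][np.argmax(d2)]
print(round(abs(float(bend_db)), 2))  # 0.0: the bend point sits at 0 dB
```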

**(3) The cross-over point between the capacity curve and the MMSE curve is at 0 dB.**

Suppose we want to estimate the Gaussian input $X$ from the observation $Y = \sqrt{\mathrm{snr}}\,X + N$, where as usual $N$ is Gaussian noise. The error of an estimate $\hat{X}(Y)$ can be measured in the mean-square sense

$\mathbb{E}\big[(X - \hat{X}(Y))^2\big]$.

The choice of $\hat{X}$ to minimize the mean-square error is achieved by the conditional mean

$\hat{X}(Y) = \mathbb{E}[X|Y] = \frac{\sqrt{\mathrm{snr}}}{1 + \mathrm{snr}}\,Y$.

The minimum mean-square error (MMSE) is then given by

$\mathrm{mmse}(\mathrm{snr}) = \mathbb{E}\big[(X - \mathbb{E}[X|Y])^2\big] = \frac{1}{1 + \mathrm{snr}}$.

To find the cross-over point (illustrated in Fig. 2), we then need to solve

$\frac{1}{2}\log_2(1 + \mathrm{snr}) = \frac{1}{1 + \mathrm{snr}}$,

which has the solution $\mathrm{snr} = 1$, equivalent to $0$ dB.

*Fig. 2: Plot of the capacity and MMSE curves.*
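The cross-over is easy to verify directly from the two formulas, in bits and with a unit-variance Gaussian input (function names are mine):

```python
import numpy as np

def capacity_bits(snr):
    return 0.5 * np.log2(1 + snr)

def mmse(snr):
    # MMSE of a unit-variance Gaussian input X observed as Y = sqrt(snr)*X + N
    return 1.0 / (1 + snr)

print(capacity_bits(1.0), mmse(1.0))   # 0.5 0.5: exact equality at 0 dB
print(capacity_bits(0.5) < mmse(0.5))  # True: capacity below MMSE before 0 dB
print(capacity_bits(2.0) > mmse(2.0))  # True: capacity above MMSE after 0 dB
```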

**Conclusions**

We have seen that a signal-to-noise ratio of $\mathrm{snr} = 1$, or $0$ dB, is special in the case of additive Gaussian noise channels. Although each of these facts may appear to be trivial, the fact that all three hold makes the compelling suggestion that there are medium SNR features of the capacity that are connected to the asymptotic behavior of the capacity and also the behavior of the MMSE.

**Further Reading**

For more on the bend point and an application to multiuser MIMO, see

- Egan, M., “Low-high SNR transition in multiuser MIMO,” *Electronics Letters*, vol. 51, no. 3, pp. 296-298, 2015.

For more on connections between the mutual information and MMSE in the Gaussian case, see

- Guo, D., Shamai, S. and Verdú, S., “Mutual information and minimum mean-square error in Gaussian channels,” *IEEE Transactions on Information Theory*, vol. 51, no. 4, pp. 1261-1282, 2005.
- Guo, D., Wu, Y., Shamai, S. and Verdú, S., “Estimation in Gaussian noise: properties of the minimum mean-square error,” *IEEE Transactions on Information Theory*, vol. 57, no. 4, pp. 2371-2385, 2011.

For connections in non-Gaussian noise, see

- Guo, D., Shamai, S. and Verdú, S., “Additive non-Gaussian noise channels: mutual information and conditional mean estimation,” in *Proc. of the IEEE International Symposium on Information Theory*, 2005.

In this post, I want to turn to a different aspect: *the market formation problem*.

In practice, many providers need to allocate and price a large number of passengers and drivers over short periods of time. This can be difficult as many available drivers can potentially service each passenger. The market mechanisms I described in an earlier post have different ways of dealing with this issue:

- In posted price (e.g., Uber) and hybrid PP/A mechanisms, one request is dealt with at a time. This means that a set of drivers must be offered a journey with each passenger. **Problem:** how should this set of drivers be selected?
- In double auction mechanisms, multiple drivers and passengers are allocated simultaneously. **Problem:** how should a provider decide which groups of passengers and drivers can be matched?

In each mechanism, the challenge is to find compatible passengers and drivers. This is important for two reasons:

- not all passengers can be serviced by all drivers; and
- it is undesirable for available drivers to deal with a large number of offers as it can be a distraction from driving safely.

As such, we need to decompose the initial market consisting of all drivers and passengers into submarkets consisting of compatible drivers and passengers. Note that this problem is not usually addressed in the economic or computer science literature on mechanism design, where it is assumed that all agents in the initial market are compatible. This is due to the homogeneous goods assumption. However, in practice ensuring each market is compatible is an important problem.

So, when are drivers and passengers compatible? Or, alternatively, when is a given passenger-driver pair incompatible?

There are two reasons why a given driver and passenger may not be compatible:

- Hard constraints are not satisfied; for example, the driver cannot reach the passenger at a desired pick-up time as the pair are too far apart, or the driver’s vehicle type is not acceptable to the passenger.
- Soft constraints are not satisfied; for example, a passenger is not *likely* to be accepted by a driver because the driver does not typically serve passengers with the requested pick-up or drop-off locations.

In traditional taxi services, hard constraints are usually enforced; however, soft constraints are more difficult to quantify and use in submarket formation. Often providers use expert knowledge to design heuristic rules, but these are typically not rigorously verified.

An alternative approach is *data-driven market formation*. In this approach, historical data available to the provider is used to develop statistical models to quantify the probability that a given passenger-driver pair is compatible. This data can be obtained via transaction records, which is now possible due to the ubiquitous use of sophisticated smartphone apps.

Such a data-driven market formation approach is highly desirable as it means that the market formation rule can exploit a range of features that may not be obvious even to experts. It can lead to an increase in the proportion of passengers that are served while reducing the number of offers to each driver—promoting safe driving.

How are the statistical models for data-driven market formation developed? In a project led by Jan Mrkos and Jan Drchal at the Czech Technical University in Prague, a data-driven market formation algorithm has been proposed to aid providers in finding compatible submarkets.

Here is the basic idea:

- Select a number of features that potentially influence when a driver will respond to a passenger request.
- Using a data set of historical transactions, determine the influence of each feature on the probability that a driver will respond to a request; i.e., construct the statistical model.
- For each passenger-driver pair, compute the probability that the pair is compatible.
- For each request, select the set of drivers that are offered the journey so that the probability at least one driver responds is greater than a threshold (e.g., 90%).
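A minimal sketch of the last step, assuming the per-driver response probabilities from the previous step are already in hand and that driver responses are independent. The function, parameters and driver names are hypothetical illustrations of mine, not the actual algorithm:

```python
def select_drivers(response_probs, threshold=0.9):
    """Greedily offer the journey to the most likely responders until
    P(at least one driver responds) reaches the threshold.
    Assumes independent driver responses."""
    offered, p_none = [], 1.0
    # try the most promising drivers first to keep the offer set small
    for driver, p in sorted(response_probs.items(), key=lambda kv: -kv[1]):
        offered.append(driver)
        p_none *= 1 - p
        if 1 - p_none >= threshold:
            break
    return offered, 1 - p_none

offered, p = select_drivers({"d1": 0.7, "d2": 0.6, "d3": 0.5, "d4": 0.2})
print(offered, round(p, 2))  # ['d1', 'd2', 'd3'] 0.94
```

Offering journeys to as few drivers as possible while keeping the response probability high is exactly the trade-off described above: serving more passengers while limiting distracting offers to drivers.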

This algorithm has been tested on a data set kindly provided by Liftago, which is based in the Czech Republic. A limited data set is also available upon request. Liftago’s mechanism falls in the hybrid PP/A class, which is described in this earlier post.

Here are some of the observations, which were obtained by learning the statistical model and simulation on Liftago’s data set. Further discussion is available in the technical report here.

*Feature ranking:* Let’s start by looking at some different features that might be important in forming submarkets. Clearly, factors like distance will matter, but how much do other factors such as driver histories affect compatibility? The following table lists the complete set of features that were considered.

**Table 1:** *Features derived from the Liftago transactions dataset*

| Feature | Description |
| --- | --- |
| pickup_distance | direct Euclidean distance to pickup (km) |
| ride_distance | direct Euclidean distance from pickup to destination (km) |
| pickup_center | pickup Euclidean distance from the Prague center (km) |
| ride_center | pickup Euclidean distance from the Prague center (km) |
| hour | time of day (h) |
| day | day of the week (0 – 6) |
| mean_accept_rate | driver’s mean accept rate over all transaction records |

Fig. 1 shows the relative impact of each feature on the probability a given driver will respond to a passenger. Observe that the pickup distance and the mean accept rate are the most important features, which means that driver histories play an important role.

**Fig 1:** Ranking of features for submarket formation.

*Performance of the Statistical Model:* The model outperforms Liftago’s initial market formation algorithm in terms of the average ratio of responses per request (**0.867** for the model versus **0.476** for Liftago’s original approach) and the average number of drivers that are offered a journey (fewer than **4** drivers for the model versus more than **7** for Liftago’s original approach).

A key conclusion of this study is that the data-driven market formation approach appears to outperform the heuristic algorithm initially adopted by Liftago, based on results from the available data set. This suggests that adopting a data-driven approach—as opposed to purely expert-based heuristics—is a promising way to find compatible markets in on-demand transport.

*For more details, a technical report is available on arXiv:*

Mrkos, J., Drchal, J., Egan, M. and Jakob, M., “Liftago on-demand transport dataset and market formation algorithm based on machine learning,” *available at https://arxiv.org/pdf/1608.02858*, (2016).

*For other blog posts in this series on market-based approaches to on-demand transport, see:*

Mechanism design for on-demand transport

Market-based on-demand transport

A simulation tool for market-based on-demand transport

*For a collection of research papers related to market-based on-demand transport, see here.*

These days, computational processing resources are cheap so it is possible to do sophisticated processing at higher layers, ranging from the MAC, to the network, and even up to the application layer. For this reason, there has been a huge development of techniques to exploit these computational resources (for example, software defined networking and network coding). Although these techniques do have limitations, they now form a valuable means of improving the quality of communication, without changing much at the PHY layer.

On the other hand, PHY layer advances cost a lot of money. Any change to the hardware used to actually send data over the air requires the upgrade of thousands of transmitting devices (or base stations). As such, to justify the upgrade there needs to be a clear business case that provides operators with incentives to make these changes.

One such PHY layer method which has been readily adopted by operators to improve quality of service has been to bring base stations closer to their intended users. Over the last 5 years, this trend has been ramped up with the development of small-cells: small, cheap and low-powered devices that can serve a small number of users. Although these devices have limited range, there are now a lot of them. This has led to improved quality of service for users in previously difficult to serve locations (e.g., residences and areas with high user density).

In parallel with the development of small-cells has been another new idea: dynamic network sharing. As base stations have traditionally been expensive, the idea is that the costs can be shared via innovative investment structures. That is, multiple operators can share in the costs of base stations and in return have access to a proportion of time-frequency blocks (e.g., the Swedish model). Alternatively, a third party can roll out the base stations and then lease access to one or more operators. These third-party operators are often called tower companies, with thousands in India and the USA alone.

Perhaps the most ambitious network sharing framework has been proposed by Linda Doyle and her collaborators at Trinity College Dublin. In their *networks without borders* proposal, they envisage infrastructure owners (i.e., owners of base stations and fibre backhaul) that can dynamically pool resources to match the instantaneous demand of operators in each region. Of course, such sharing requires the presence of contracts between each infrastructure owner and each operator. How to do this largely remains an open problem.

At the heart of the networks without borders framework and perhaps network sharing more generally is an interesting principle: *only pay for what you use.* Contracts of this kind naturally invoke ideas from market mechanism design, such as auctions and posted price mechanisms. These kinds of contracts are now being applied to spectrum allocation, but have not yet developed into a full dynamic network sharing framework that also includes infrastructure.

A first step to implement the only pay for what you use principle was developed for indoor wireless networks by Jan Markendahl and Amirhossein Ghanbari at KTH. In one of their proposals, third parties not only lease their infrastructure (as for tower companies), but also operate their own networks. They suggested the setup illustrated in Fig. 1, where users are served by the third party, which in turn obtains access to data and spectrum from a traditional operator (data could in principle be also provided by an ISP, but the huge volume would also require a special contract).

**Fig. 1**: Third-party owned and operated wireless networks.

But who is this third party? And how do they ensure that their business is financially sustainable? How do they get the required fibre backhaul, efficient power sources and property in order to roll out their base stations?

To answer these questions, we can ask another: *who already has these resources?* Recently, with Gareth Peters, Ido Nevat, Mahyar Shirvanimoghaddam and Iain Collings, I suggested that one option is facilities; e.g., power plants, mines, universities and large residential blocks. Facilities also tend to have unusual data traffic demands, such as upload rates in stadiums. As such, facilities may not be easy to service by traditional operators and additional financial incentives may be required. So, what if these facilities could operate their own wireless networks, analogously to Markendahl and Ghanbari’s proposal for indoor networks?

These facility-operated networks would be required to purchase base stations. Ideally, they should also be able to build their network incrementally, to minimize initial sunk costs and the associated financial risk. This is where small-cells can play an important role. As small-cells are cheap and low-powered, the initial investment is low, and as the number of users increases, more base stations can be added without serious interference issues (since the network is small and easily characterized, mitigation techniques can be readily applied).

What about the contracts between users, the facility and traditional operators? In our paper, which can be found here, we made a proposal that took into account the very different natures of how users are served (via wireless links) and how the facility obtains access to spectrum and data (via wired links).

A natural question is then: how can we evaluate the network sharing arrangement? This is an important question, and it ties directly to our initial problem of determining how much the PHY layer matters.

In particular, the network sharing arrangement ultimately depends on the financial sustainability of the facility's network. As such, we should consider its profitability. Mathematical finance suggests two metrics: the expected profit and the probability of ruin (the probability that the facility's surplus ever drops below zero). In this post, I explore the benefits and drawbacks of each. For now, however, I want to return to the question of how we can find out how much the PHY layer matters.
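To see why the two metrics can disagree, here is a minimal Monte Carlo sketch of a toy discrete-time surplus process. All the numbers (initial capital, income distribution, shock size) are illustrative assumptions of mine, not figures from our paper: the facility starts with capital `u`, earns a random net cash flow each month, and occasionally suffers a large loss. The expected profit can be comfortably positive even while the probability of ruin is far from zero.

```python
# Toy surplus process for a facility-operated network (illustrative
# parameters only): routine monthly net profit plus occasional large
# losses. We estimate both the expected profit and the probability of
# ruin (surplus dropping below zero at some point) by simulation.
import random

def simulate(u=100.0, months=60, mean_income=5.0, shock_prob=0.1,
             shock_size=40.0, runs=10_000, seed=1):
    random.seed(seed)
    ruined = 0
    total_final = 0.0
    for _ in range(runs):
        surplus = u
        alive = True
        for _ in range(months):
            surplus += random.gauss(mean_income, 2.0)  # routine net profit
            if random.random() < shock_prob:           # rare large loss
                surplus -= shock_size
            if alive and surplus < 0:
                alive = False
                ruined += 1
        total_final += surplus  # paths continue after ruin in this sketch
    ruin_prob = ruined / runs
    expected_profit = total_final / runs - u
    return ruin_prob, expected_profit

ruin_prob, expected_profit = simulate()
print(f"ruin probability ~ {ruin_prob:.3f}")
print(f"expected profit  ~ {expected_profit:.1f}")
```

Note that the expected profit here averages over all paths, including ruined ones; this is exactly why it can look healthy while masking a non-trivial risk of insolvency, which is the drawback the ruin probability is designed to expose.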

A key aspect of the PHY layer is resource allocation; for example, deciding how many time-frequency blocks or how much power to allocate to each user. The reason? Each way of allocating resources has different costs and yields different levels of quality of service for users. As such, the resource allocation ultimately determines the revenue obtained from each user, which directly influences the profitability. The question of how much the PHY layer matters therefore corresponds to how robust the profit is to different resource allocation schemes.
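The link from allocation to revenue can be made concrete with a small hypothetical sketch. The pricing rule (users pay per unit of achieved rate) and all the channel gains below are my own assumptions, not the model from our paper; the point is simply that two allocations of the same power budget can yield different revenue.

```python
# Illustrative power allocation example: the same total power budget,
# split two ways over three users with different channel gains, gives
# different total rates and hence (under per-rate pricing) revenue.
import math

def rate(power, gain, noise=1.0):
    # Shannon rate of a single user (nats per channel use)
    return math.log(1.0 + gain * power / noise)

def revenue(powers, gains, price_per_nat=1.0):
    # Hypothetical pricing rule: users pay in proportion to achieved rate
    return price_per_nat * sum(rate(p, g) for p, g in zip(powers, gains))

gains = [0.2, 1.0, 5.0]   # illustrative channel gains
budget = 9.0              # total transmit power available

equal = [budget / len(gains)] * len(gains)  # equal split
greedy = [0.0, 0.0, budget]                 # all power to the best channel

print("equal split revenue:", revenue(equal, gains))
print("greedy revenue     :", revenue(greedy, gains))
```

Under these numbers the equal split out-earns the greedy scheme, thanks to the concavity of the rate function; with a different pricing rule or cost model, the ranking could flip, which is precisely the robustness question posed above.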

At this stage, there is no straightforward answer, especially when sophisticated resource allocation schemes are used. However, one lesson from finance provides some insight: *if you have enough money in the bank, even large temporary losses can be written off.* In light of this lesson, what we do know is that the financial state of the facility can mean that simple design choices pay off. How much the PHY layer matters then depends on how it affects the financial sustainability of infrastructure owners or operators. And perhaps this is the real question that wireless PHY layer researchers and designers should be asking.

**Further Reading**

- Doyle, L., Kibilda, J., Forde, T. and DaSilva, L., “Spectrum without bounds, networks without borders,” *Proceedings of the IEEE*, vol. 102, no. 3, pp. 351-365, 2014.
- Mölleryd, B. and Markendahl, J., “The role of network sharing in transforming the operator business: impact on profitability and competition,” in *Proc. European Regional Conference of the International Telecommunication Society*, 2013.
- Markendahl, J. and Ghanbari, A., “Shared smallcell networks multi-operator or third party solutions or both?” in *Proc. International Symposium on Modeling & Optimization in Mobile, Ad Hoc & Wireless Networks*, 2013.
- Dewenter, R. and Haucap, J., “Incentives to lease mobile virtual network operators (MVNOs),” in *Proc. 34th Research Conf. Comm. Information Internet Policy*, 2006.
- Duan, L., Huang, J. and Shou, B., “Duopoly competition in dynamic spectrum leasing and pricing,” *IEEE Transactions on Mobile Computing*, vol. 11, no. 11, pp. 1706-1719, 2012.
- Egan, M., Peters, G.W., Nevat, I. and Collings, I.B., “Pass go and collect $200: the profitable union of facilities and small-cells,” in *Proc. IEEE International Conference on Communications (ICC)*, 2015.
- Egan, M., Peters, G.W., Nevat, I., Shirvanimoghaddam, M. and Collings, I.B., “A ruin theoretic design approach for wireless cellular network sharing with facilities,” *to appear in Transactions on Emerging Telecommunications Technologies*.