# Little Allocation for an M/M/c queue

In queuing theory, an M/M/c queue is one where arrivals happen by a Markov (Poisson) process at rate λ, service times are exponentially distributed (also Markovian), there are c servers, and a single unified queue feeds all servers.  This is commonly used to model call centers, but in natural economics we will use it to model products as services.  As you know, I like to use computers as an example product-as-service, so we will use Little Allocation to make decisions on an example computer lab.  (A note on notation: throughout this post, μ denotes the *mean service time* rather than the service rate, so the offered load is λμ.)

First, some initialization:
We’re going to make this very conservative, so let’s say people spend an average of 8 hours a day on the computer.  I probably do, but I’m a terrible example of a normal person; this is just to make the problem harder.
So, μ=8h=28800s
Let’s make our population n=36, just because it doesn’t really matter.
Because I’m not yet at the point of incorporating when people wake up, sleep, and tend to use things, let’s say that people arrive relatively evenly over the 17 hours they are awake.  This gives a mean inter-arrival time of 61,200 s / 36 people = 1700 s.  This is a scale parameter, the inverse of the Poisson rate parameter.
So, λ = 1/1700 s ≈ 0.00059 arrivals/s.
A simple Little’s Law calculation says that we need $L = \lceil \lambda \mu \rceil = \lceil 28800/1700 \rceil = 17$ computers for our example lab, given the above parameters.  However, we also want to establish a certain level of service: the probability that you will have to wait in line to use a computer.  For an M/M/c queue this is given by the Erlang C function, which is this cumbersome thing:
$C(c,\lambda \mu)=\frac{\left ( \frac{(c\rho)^c}{c!} \right ) \left ( \frac{1}{1-\rho} \right ) }{ \sum^{c-1}_{k=0} \frac{(c\rho )^k}{k!}+\left (\frac{(c\rho )^c}{c!}\right ) \left (\frac{1}{1-\rho}\right )}$
Where ρ is the load proportion of each computer, $\rho = \frac{\lambda \mu}{c}$, so that $c\rho = \lambda\mu$ is the total offered load.
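To see how bad service is at the bare Little’s Law minimum, we can evaluate the formula at $c = 17$ (my own back-of-envelope evaluation, using $\lambda\mu \approx 16.94$):

$\rho = \frac{16.94}{17} \approx 0.997, \qquad C(17, 16.94) \approx 0.98$

With the lab nearly saturated, roughly 98% of arrivals would have to wait, which is why the service-level constraint pushes c well above 17.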

In a natural economic sense, this is equivalent to the probability that the demand for use-time is greater than the supply of use-time.

I have written code to perform Little Allocation by probability for an M/M/c queue in Sage, an open-source mathematics system built on Python.  You can view the source here, though the interesting part is simply the following:

```python
def Little_alloc_p(lam, mu, n, p_q):
    L = Little(lam, mu)  # traffic intensity (offered load)
    c = L + 1            # start just above the Little's Law minimum, so rho < 1
    rho = L / c
    p = E_2c(c, rho)     # Erlang C: probability an arrival has to wait
    while p_q < p:
        c = c + 1        # a more advanced method should be used for large values of c
        rho = L / c
        p = E_2c(c, rho)
    return c
```

This simply iterates the Erlang C function with increasing values of c until the waiting probability falls below the threshold set with p_q.
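Outside of Sage, the same loop can be sketched in plain Python.  The names `erlang_c` and `little_alloc_p` are my own stand-ins for the `E_2c` and `Little` helpers, written directly from the formula above:

```python
from math import ceil, factorial

def erlang_c(c, a):
    """Erlang C: probability an arrival waits, for offered load a = lam * mu.

    Only valid for a stable queue, i.e. rho = a / c < 1.
    """
    rho = a / c  # per-server load
    top = (a ** c / factorial(c)) / (1 - rho)
    return top / (sum(a ** k / factorial(k) for k in range(c)) + top)

def little_alloc_p(lam, mu, p_q):
    """Smallest c whose waiting probability is at or below p_q."""
    c = ceil(lam * mu) + 1  # start just above the Little's Law minimum
    while erlang_c(c, lam * mu) > p_q:
        c += 1
    return c

lam, mu = 1 / 1700, 28800  # the lab parameters from above
print(little_alloc_p(lam, mu, 0.05))  # -> 25
print(little_alloc_p(lam, mu, 0.1))   # -> 24
```

As a sanity check, for c = 1 the Erlang C probability reduces to the M/M/1 result: an arrival waits with probability ρ.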

Plugging our example parameters into Sage for several levels of service gives:
$P(s_t < d_t) = 0.05: 25$
$P(s_t < d_t) = 0.1: 24$
$P(s_t < d_t) = 0.5: 22$

Remember, the goal here is to set a minimum level of service.  Under typical private allocation, this would require 36 computers, one for each person.  So even for a population that uses a computer 8 hours a day on average, you only need 25 rather than 36, a 30% reduction in the number of copies.  For the average use-time according to the 2012 BLS survey of leisure time, you would only need 7, an 80% reduction, with a 95% level of service.  And remember, I’m using “level of service” to mean “you don’t have to wait, at all, 95% of the time”.

Private allocation sets a maximum level of service, but in doing so often limits the number of people it can serve through artificial scarcity.  Sure, those 36 people would be able to use a computer without waiting 100% of the time, but if there is a material or monetary constraint, then perhaps two or more people won’t get a computer at all, which caps the level of service at 94.4% (34 of 36 served) while still requiring 30–80% more copies of the product.

Factoring in downtime: Things break, especially when they’re being used by lots of different people, so we may want to produce extras to maintain service even with downtime.  A simple way to do this would be to use the mean time to failure (ttf) and the mean downtime per failure (dt).  Again we have a scale parameter and an average wait time, so for each computer, we can use a formula similar to Little’s Law:
$L_f = \mathrm{ttf}^{-1}\,\mathrm{dt}$
Let’s play with it.  Say we have a ttf of 3 months, which is not very long.  Say it takes a week to repair the computer, which is fairly long.
$3\,\mathrm{mo}\ \mathrm{ttf} = 7{,}776{,}000\,\mathrm{s}$
$7\,\mathrm{d}\ \mathrm{dt} = 604{,}800\,\mathrm{s}$
$L_f = \mathrm{ttf}^{-1}\,\mathrm{dt} = 604{,}800 / 7{,}776{,}000 \approx 0.078$
So, for our fleet of 25 computers, we get
$L_f \times c = 1.95$, i.e. about two additional units.  For our fleet of 7,
$L_f \times c = 0.546$, less than one additional, which seems as if it may not be worth the extra unit.

In light of the second result, we may wish to add a threshold (th) on the expected number of units down, $L_f c$, when generalizing this calculation.
$\tilde{c} = c + (L_f c - th < 0 \;?\; 0 : \lceil L_f c \rceil)$
For those of you who don’t recognize the $a\,?\,b:c$ structure, this is the ternary conditional operator: if the expected number of units down is below the threshold, add nothing; otherwise, round it up and add that many spares.
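As a sketch in Python (the function name and the default threshold th = 1 are my own choices, matching the informal decision made above):

```python
from math import ceil

def padded_fleet(c, l_f, th=1.0):
    """Add spares for downtime: if the expected number of units down
    (l_f * c) reaches the threshold th, round it up and add that many."""
    expected_down = l_f * c
    return c + (0 if expected_down - th < 0 else ceil(expected_down))

print(padded_fleet(25, 0.078))  # -> 27: two spares for the 25-computer lab
print(padded_fleet(7, 0.078))   # -> 7: 0.546 expected down, below threshold
```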

This way, we can have a balance of less waste and higher uptime.