Decision-making in real time

How do we make effective decisions quickly enough to keep up with a large number of simultaneous problems in the same system?  Decisions are not always single actions; they are ongoing movements whose path can be altered.

One way to do this would be to make a best guess immediately, then wait for better information that may reveal the guess as optimal or suboptimal.  As the probability that the decision is in error increases, so does the likelihood that it needs to be changed.

This can be thought of as repeated estimation, polling, and hypothesis testing.
I have previously visited the problem of how many computers to produce for a group of people.  If this is your first time reading, the basic design assumption is that products are used more like services: a set of computers is shared by a number of people in a community.  My hypothetical example led to a derivation of Little’s Law, which relates the average number of items in a system to the rate at which they arrive and the average time each one spends there.  However, when we start up a new community or create a new service, we have no idea what values these parameters will take, so at first we have to guess.
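As a quick sketch of how Little’s Law drives the sizing, here is a minimal calculation in Python (the arrival rate and session length below are made-up illustration numbers, not values from the original example):

    # Little's Law: L = lambda * W
    #   L      - average number of items in the system (computers in use)
    #   lambda - arrival rate (users arriving per hour)
    #   W      - average time each user spends in the system (hours)

    def littles_law(arrival_rate, avg_time_in_system):
        """Average number of users simultaneously in the system."""
        return arrival_rate * avg_time_in_system

    # Hypothetical figures: 10 users arriving per hour with 2-hour sessions
    # imply roughly 20 computers in simultaneous use.
    print(littles_law(arrival_rate=10, avg_time_in_system=2))  # 20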

It’s pretty safe to assume that for the vast majority of people, 8 hours a day will be the highest amount of time spent using any single service.  If we are trying to decide how many computers to build for, say, 100 people, our best first guess can be made using this 8h/d assumption.
There are 24 hours in a day, so we’ll say that each person is likely to use a computer for at most 1/3 of the day.  One-third of 100 is 33.33…, which we will round up to 34.  This is the initial guess for the number of computers to produce.  For simplicity, let’s say it takes an infinitesimal amount of time to build them, and we start with all 34.
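In code, the estimate is just the per-person fraction of the day multiplied by the population, rounded up to whole machines (a minimal sketch; the function name is my own):

    import math

    def computers_needed(population, hours_per_day, day_length=24):
        """Round the fractional requirement up to whole machines."""
        return math.ceil(population * hours_per_day / day_length)

    print(computers_needed(100, 8))  # 34, the initial guess above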

Now, after a short period, we have enough data to be considered a fairly large sample, and it appears that the sample mean use time is actually 5.85h/d.  This is significantly less than our original guess.  Now comes the hypothesis testing:
H_0, the null hypothesis, is our original guess: the true mean is 8h/d.
H_a, the alternative hypothesis, is that the true mean is less than 8h/d, as our observed 5.85h/d suggests.
We should perform a z-test on this, which is the equivalent of asking,
“If the actual mean were 8 hours a day, what would be the probability that we would observe the mean to be 5.85 hours a day?”

I lost some of the details of this calculation in the intoxicated haze the original draft was written in, but the outcome of the z-test was that the probability in this case was 2%: the likelihood of observing a 5.85h/d average when the true average is 8h/d is only 2%.  In other words, if our original guess were correct, a sample mean this low would be very unlikely, so we reject the null hypothesis and conclude that we overestimated the number of computers we needed to produce.
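Since the original details are lost, here is a minimal sketch of such a one-sided z-test; the sample size and standard deviation below are hypothetical values chosen to reproduce a p-value near 2%, not figures from the original draft:

    import math

    def one_sided_z_test(sample_mean, null_mean, sample_std, n):
        """P(observing a sample mean this low | true mean = null_mean)."""
        z = (sample_mean - null_mean) / (sample_std / math.sqrt(n))
        # Standard normal CDF expressed via the error function
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    p = one_sided_z_test(sample_mean=5.85, null_mean=8, sample_std=5.75, n=30)
    print(f"{p:.3f}")  # ~0.020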

Now, let’s revise our earlier calculation: each person now uses a computer for 5.85/24 = 0.24375 of the day; multiplying this by the population of 100 gives 24.375, which we round up to 25.  That is a 26% reduction in the number of computers needed.
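Reusing the sketch function from above, the revised count and the size of the correction fall out directly:

    # Reusing computers_needed from the earlier sketch:
    revised = computers_needed(100, 5.85)   # ceil(24.375) = 25
    initial = computers_needed(100, 8)      # 34
    print(revised, f"{(initial - revised) / initial:.0%}")  # 25 26%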

Now we have an additional problem: What do we do with the nine extra computers we have on hand?  There are three obvious possibilities:

  1. Repurpose: Transport the computers somewhere they are needed.  This is workable provided the following conditions hold true:
    1. The computers are needed elsewhere
    2. The energy required to transport these computers is the lowest among the three options
  2. Recycle: Deconstruct the computers into their constituent materials for use in something else.
    Conditions:
    1. The energy use is the lowest among the three options.
  3. Store and wait for a deficit: Store the computers for a set period, signal a surplus state to other communities, and wait for a deficit signal from another community.
    Conditions:
    1. The variance in demand for computers is high (or, this is early in the production stages and the likelihood of incorrect guesses is high)
    2. The energy use is lowest among the three options.

Assuming this community already has the space for all these computers, (3) will probably be the best option, at least for a short period of time.
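The selection itself is simple once each option’s feasibility and energy cost are known.  Here is a minimal sketch of that rule; the energy figures are invented purely for illustration:

    # Each option maps to (estimated energy cost, whether its conditions hold).
    def choose_disposition(options):
        viable = {name: cost for name, (cost, ok) in options.items() if ok}
        return min(viable, key=viable.get) if viable else None

    surplus_options = {
        "repurpose": (120.0, True),  # transport energy; needed elsewhere
        "recycle":   (300.0, True),  # deconstruction energy
        "store":     (15.0,  True),  # storage and signalling overhead
    }
    print(choose_disposition(surplus_options))  # store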

In the opposite case, say we didn’t produce enough computers.  Our choices here are essentially the converse of the previous three: request more, build more, or wait for a surplus signal from another community.

Notice that among the possible actions, we always want to pick the one with the lowest energy use.  In the near future, i.e. within the next century or so, it is a near-certainty that we will be operating under energy constraints.  Renewable energy sources work, but it is not yet clear that a reliable, high-energy infrastructure like the one we have now will be immediately possible with them.  It would be very wise to operate under the assumption that we will have to constrain our energy use, so it becomes very important to account for the energy use of our decisions.

As it turns out, money is (usually) an indirect account of energy use.  However, the way money is used to make decisions encourages ever-larger energy scales and choices based on personal gain rather than community gain.  Money is traded for the goal of profit, which pushes toward the maximum number of products to fulfill a given demand.  By using energy-minimizing actions to fulfill demand instead, we arrive at a minimal number of products, which means more user demands can be fulfilled with a given material and energy supply.  This requires a restructuring of the way we use things and think about civilization, but that process is already well under way.
