### Lognormality > Stochastic "Inventory Control"

12 July 2006 | categories: inventory control, orms, projects, school, software

-[ background ]-

In the Spring of 2003, I took a course in stochastic processes. The second exam was a take-home and required some coding, which I did in Turbo Pascal, as with my tic-tac-toe program one year later (posting that program reminded me of the code from my stochastic processes class).

-[ assignment and notes ]-

A local store uses an (s, S) inventory policy for a particular product. Every Friday evening after the store closes, the inventory level is checked. If the stock level for the product is greater than s, no action is taken. Otherwise, if the stock on hand is less than or equal to s, an amount S-s is procured over the weekend and is available when the store re-opens on Monday morning.

Let {Xn, n = 1, 2, …} be the stock on hand when the inventory is checked on Friday evening of week n and let {Zn, n = 1, 2, …} be the demand for product during week n.

Then for any ω ∈ Ω,

Xn+1(ω) = Xn(ω) − Zn+1(ω)  if s < Xn(ω) and Zn+1(ω) ≤ Xn(ω)

Xn+1(ω) = S − Zn+1(ω)  if Xn(ω) ≤ s and Zn+1(ω) ≤ S

Xn+1(ω) = 0  otherwise.

Let s = 3, S = 5, X0 = 0 and assume customers arrive at the store according to a Poisson process with rate λ = 4 per week. (Each customer wants one unit of product.)
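As a quick sanity check on the dynamics above, the recurrence can be simulated directly. Here is a minimal Python sketch (the exam code was in Turbo Pascal; this is my own reconstruction, and all function and variable names are mine). `sample_poisson` uses Knuth's multiplication method for drawing Poisson variates.

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw one Poisson(lam) variate via Knuth's multiplication method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_inventory(weeks, s=3, S=5, lam=4.0, seed=42):
    """Return [X0, X1, ..., X_weeks]: Friday-evening stock under the (s, S) policy."""
    rng = random.Random(seed)
    x = 0                                 # X0 = 0, per the problem statement
    path = [x]
    for _ in range(weeks):
        on_hand = S if x <= s else x      # reorder up to S over the weekend if x <= s
        z = sample_poisson(lam, rng)      # demand during week n+1
        x = max(on_hand - z, 0)           # excess demand is lost; stock cannot go negative
        path.append(x)
    return path
```

Since the mean weekly demand (4) is large relative to S = 5, a simulated path spends a lot of time at or near zero stock, which is what makes the equilibrium probabilities interesting.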

Phase I of the exam required proving that {Xn, n = 1, 2, …} is a Markov chain and writing out its transition matrix. Both are fairly intuitive (or you're looking for answers to your own exam, assuming the prof. still uses this one), so I won't include them here.

Phases II and III of the exam required some coding.

Phase II

A) Write a program to generate (i.e., fill in) the one-step transition matrix for a given value of S. Print the one-step matrix for S = 4.
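For illustration, here is how such a generator might look in Python rather than the original Turbo Pascal. It assumes the state space {0, 1, …, S} and Poisson demand as above; the names are my own, not from the exam.

```python
import math

def poisson_pmf(k, lam):
    """P{Z = k} for Z ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def transition_matrix(S, s, lam):
    """One-step matrix P for the (s, S) inventory chain on states 0..S."""
    m = S + 1
    P = [[0.0] * m for _ in range(m)]
    for i in range(m):
        on_hand = S if i <= s else i                 # Monday-morning stock given Friday state i
        for j in range(1, on_hand + 1):
            P[i][j] = poisson_pmf(on_hand - j, lam)  # demand of exactly on_hand - j
        P[i][0] = 1.0 - sum(P[i][1:])                # demand >= on_hand empties the shelf
    return P
```

Each row conditions on the Friday stock i: ordering resets the Monday stock to S when i ≤ s, and landing in state j ≥ 1 next Friday requires demand of exactly on_hand − j, with state 0 absorbing all the remaining tail probability.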

B) Let π[0] = (1, 0, …, 0). Write a program to compute the equilibrium probabilities of the Markov chain using Jacobi iteration. Also, plot πj[n] as a function of n, the number of Jacobi iterations.

Jacobi Iteration.

Jacobi Iteration is a simple computational technique for obtaining the equilibrium probabilities of a Markov chain using a series of matrix multiplies rather than solving a system of linear equations.

Let π[n] be the probability row vector representing the probability of occupying a given state after n transitions. That is, πj[n] = P{in state j after n transitions}.

By conditioning on the state occupied at transition n we have

π[n+1] = π[n]P  (Eqn 1)

where P is the one-step transition matrix for the Markov chain. In the limit as n goes to ∞, π[n] goes to π, the vector of equilibrium probabilities. Depending on the system and the desired accuracy, convergence can be quite rapid.

To compute the equilibrium probabilities, start with π[0], then compute π[1], then π[2], and so on using Equation 1 until π[n+1] is “close” to π[n]. For this problem, stop iterating when

 Max{ |πj[n+1] − πj[n]| : all j in State Space } ≤ 0.0001.

Computational note: Due to round-off error by the computer, it may be necessary to “re-normalize” π[n] every few iterations so that its elements sum to one. To do this, add up the first m-1 elements of the vector π[n] (assuming the chain has m states) then set the mth element equal to one minus the sum. The elements now sum to one.
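Putting the pieces together, a minimal Python version of the iteration, with the periodic re-normalization described above, might look like this. The function name, the default tolerance, and the choice to re-normalize every 10 iterations are mine; the exam did not prescribe them.

```python
def jacobi_equilibrium(P, tol=1e-4, renorm_every=10):
    """Iterate pi[n+1] = pi[n] P from pi[0] = (1, 0, ..., 0) until the largest
    componentwise change is at most tol. Returns (pi, number_of_iterations)."""
    m = len(P)
    pi = [1.0] + [0.0] * (m - 1)
    n = 0
    while True:
        nxt = [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]
        n += 1
        if n % renorm_every == 0:
            nxt[-1] = 1.0 - sum(nxt[:-1])  # re-normalize against round-off drift
        if max(abs(nxt[j] - pi[j]) for j in range(m)) <= tol:
            return nxt, n
        pi = nxt
```

Collecting the intermediate vectors instead of discarding them would give the πj[n]-versus-n data needed for the plot in part B.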

The source code and binaries for the solutions to these Phase II questions are linked below. I’m not including Phase III’s code because, unlike Phase II’s, it has few uses outside the context of this exam. Phase II’s code is worth sharing because somewhere, someone might want a simple program to perform Jacobi iterations or to generate one-step transition matrices. It could easily be extended to other (non-Poisson) arrival distributions and/or to generating the n-step transition matrix.