Filtering for Discrete-Time Markov Processes and Applications to Inventory Control with Incomplete Information

Abstract: In this paper, we provide a general framework for estimating the state of a system whose dynamics are governed by a discrete-time Markov chain. We describe applications to inventory control systems with partial observations. We introduce conditional distributions and unnormalized conditional probabilities to transform the nonlinear state transition equations into linear ones. The resulting linear equations make the associated stochastic optimal control problems considerably easier to study.
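
As a minimal sketch of the linearization idea described above (the notation below is ours, chosen for illustration, and need not match that of the paper): consider a hidden Markov chain with transition kernel $P(x, \mathrm{d}x')$ and observations with conditional density $g(y \mid x)$, and let $\pi_n$ denote the conditional distribution of the state given the observations $y_1, \dots, y_n$. The normalized filter satisfies the nonlinear recursion
\[
\pi_{n+1}(\mathrm{d}x') \;=\; \frac{g(y_{n+1} \mid x') \displaystyle\int P(x, \mathrm{d}x')\, \pi_n(\mathrm{d}x)}{\displaystyle\int\!\!\int g(y_{n+1} \mid z)\, P(x, \mathrm{d}z)\, \pi_n(\mathrm{d}x)},
\]
whereas the corresponding unnormalized measure $\sigma_n$ satisfies the linear recursion
\[
\sigma_{n+1}(\mathrm{d}x') \;=\; g(y_{n+1} \mid x') \int P(x, \mathrm{d}x')\, \sigma_n(\mathrm{d}x), \qquad \sigma_0 = \pi_0,
\]
with $\pi_n$ recovered by normalization, $\pi_n(\mathrm{d}x) = \sigma_n(\mathrm{d}x) / \sigma_n(\mathbf{1})$. The nonlinearity is confined to the final normalization step, which is the sense in which passing to unnormalized conditional probabilities linearizes the state transition equations.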