Dynamic Programming & Stochastic Control
Course Description
The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We start with dynamic models of random phenomena, focusing on the most widely used classes of such models: Markov chains and Markov decision processes. We then consider optimal control of a dynamical system over both a finite and an infinite number of stages; this includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Content
The following topics will tentatively be covered in the course:
Keywords
Markov chains, Markov decision processes, dynamic programming, optimal stopping

Prerequisites
Solid knowledge of undergraduate probability, especially conditional distributions and expectations, and Markov chains. Mathematical maturity and the ability to write down precise and rigorous arguments are also important. A class in analysis will be helpful, although this prerequisite will not be strictly enforced.
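As a small taste of the dynamic programming methods the course covers, here is a minimal value iteration sketch for a toy Markov decision process. The two-state, two-action MDP below (the matrices `P`, rewards `R`, and discount `gamma`) is an illustrative assumption, not course material; value iteration itself is the standard Bellman fixed-point method.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP, purely for illustration.
# P[a][s, s'] = transition probability under action a;
# R[a][s]     = expected one-stage reward for action a in state s.
P = [np.array([[0.9, 0.1],
               [0.2, 0.8]]),
     np.array([[0.5, 0.5],
               [0.6, 0.4]])]
R = [np.array([1.0, 0.0]),
     np.array([0.5, 0.8])]
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator until the value
    function changes by less than tol in the sup norm."""
    V = np.zeros(P[0].shape[0])
    while True:
        # Q[a, s] = R[a][s] + gamma * sum_{s'} P[a][s, s'] * V[s']
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # values and a greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

Because the Bellman operator is a gamma-contraction, the loop converges geometrically from any starting guess; the returned `policy` is greedy with respect to the converged values.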