
Finite and Infinite Horizon Shapley Games with Nonsymmetric Partial Observation

Author(s) Name: Arnab Basu and Lukasz Stettner
Journal Name: SIAM Journal on Control and Optimization (SICON)
Volume: Vol. 53, No. 6, December 2015, pp. 3584-3619
Year of Publication: 2015
Functional Area: Decision Sciences and Information Systems
Abstract:

We consider asymmetric partially observed Shapley-type finite-horizon and infinite-horizon games where the state, a controlled Markov chain $\{X_t\}$, is not observable to one player (minimizer) who observes only a state-dependent signal $\{Y_t\}$. The maximizer observes both. The minimizer is informed of the maximizer's action after (before) choosing his control in the MINMAX (MAXMIN) game. A nontrivial open problem in such situations is how the minimizer can use this knowledge to update his belief about $\{X_t\}$. To address this, the maximizer uses off-line control functions which are known to the minimizer. Using these, novel control-parameterized nonlinear filters are constructed which are proved to characterize the conditional distribution of the full path of $\{X_t\}$. Using these filters, recursive algorithms are developed which show that saddle-points exist in both behavioral and Markov strategies for the finite-horizon case in both games. These algorithms are extended to prove saddle-points in Markov strategies for both games for the infinite-horizon case. A counterexample shows that the finite-horizon MINMAX value may be greater than the MAXMIN value. We show that the asymptotic limits of these values converge to the corresponding MINMAX and MAXMIN saddle-point values in the infinite-horizon setup. Another counterexample shows that the uniform value need not exist.
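
The central device described in the abstract is a control-parameterized nonlinear filter that lets the uninformed minimizer track the hidden chain. As a rough illustration only, the one-step belief update below is a minimal sketch in standard POMDP-style notation that does not appear in the abstract: $P(x' \mid x, a, b)$ is an assumed controlled transition kernel, $q(y \mid x)$ an assumed signal kernel, $\pi_t$ the minimizer's belief over the hidden state, $b_t$ the minimizer's action, and $u_t$ a maximizer control function announced off-line; the paper's actual filters are defined on the full path of $\{X_t\}$ and may differ in detail.

$$\pi_{t+1}(x') \;=\; \frac{q(y_{t+1} \mid x') \sum_{x} \pi_t(x)\, P\bigl(x' \mid x,\, u_t(x),\, b_t\bigr)}{\sum_{x''} q(y_{t+1} \mid x'') \sum_{x} \pi_t(x)\, P\bigl(x'' \mid x,\, u_t(x),\, b_t\bigr)}$$

The point of this sketch is that, because the off-line control function $u_t$ is known to the minimizer, the hidden state can be averaged out even though the maximizer's realized action depends on the unobserved $X_t$; the filters in the paper extend this idea to conditional distributions of the entire state path.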
