Template-Type: ReDIF-Paper 1.0
Author-Name: Philipp Renner
Author-Name-First: Philipp
Author-Name-Last: Renner
Author-Name: Simon Scheidegger
Author-Name-First: Simon
Author-Name-Last: Scheidegger
Title: Machine learning for dynamic incentive problems
Abstract: We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems with hidden states. We first combine set-valued dynamic programming techniques with Bayesian Gaussian mixture models to determine irregularly shaped equilibrium value correspondences. Second, we generate training data from those pre-computed feasible sets to recursively solve the dynamic incentive problem by a massively parallelized Gaussian process machine learning algorithm. This combination enables us to analyze models of a complexity that was previously considered to be intractable. To demonstrate the broad applicability of our framework, we compute solutions for models of repeated agency with history dependence, many types, and varying preferences.
Creation-Date: 2017
File-URL: http://www.lancaster.ac.uk/media/lancaster-university/content-assets/documents/lums/economics/working-papers/LancasterWP2017_027.pdf
File-Format: application/pdf
Number: 203620397
Classification-JEL: C61, C73, D82, D86, E61
Keywords: Dynamic Contracts, Principal-Agent Model, Dynamic Programming, Machine Learning, Gaussian Processes, High-performance Computing
Handle: RePEc:lan:wpaper:203620397