LIM JUN HUP
Deep reinforcement learning (DRL) has recently shown promising results in multi-agent gameplay. However, multi-agent training imposes heavy data requirements on an already sample-inefficient reinforcement learning paradigm. Bayesian methods can quantify uncertainty in machine learning, and in data-sparse settings they are a natural complement to multi-agent reinforcement learning (MARL). The aim of our project is therefore to investigate the extent to which Bayesian MARL can alleviate these difficulties in DRL.