Safe and efficient off-policy reinforcement learning. R Munos, T Stepleton, A Harutyunyan, M Bellemare. Advances in Neural Information Processing Systems 29, 2016. Cited by 729.
Reinforcement learning from demonstration through shaping. T Brys, A Harutyunyan, HB Suay, S Chernova, ME Taylor, A Nowé. Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015. Cited by 290.
Expressing arbitrary reward functions as potential-based advice. A Harutyunyan, S Devlin, P Vrancx, A Nowé. Twenty-Ninth Conference on Artificial Intelligence (AAAI), 2015. Cited by 125.
Policy transfer using reward shaping. T Brys, A Harutyunyan, ME Taylor, A Nowé. Fourteenth International Conference on Autonomous Agents and Multi-Agent …, 2015. Cited by 110.
Q(λ) with off-policy corrections. A Harutyunyan, MG Bellemare, T Stepleton, R Munos. International Conference on Algorithmic Learning Theory, 305-320, 2016. Cited by 100.
On the expressivity of Markov reward. D Abel, W Dabney, A Harutyunyan, MK Ho, M Littman, D Precup, S Singh. Advances in Neural Information Processing Systems 34, 7799-7812, 2021. Cited by 97.
Hindsight credit assignment. A Harutyunyan, W Dabney, T Mesnard, M Gheshlaghi Azar, B Piot, et al. Advances in Neural Information Processing Systems 32, 2019. Cited by 97.
Multi-objectivization of reinforcement learning problems by reward shaping. T Brys, A Harutyunyan, P Vrancx, ME Taylor, D Kudenko, A Nowé. 2014 International Joint Conference on Neural Networks (IJCNN), 2315-2322, 2014. Cited by 97.
Counterfactual credit assignment in model-free reinforcement learning. T Mesnard, T Weber, F Viola, S Thakoor, A Saade, A Harutyunyan, et al. arXiv preprint arXiv:2011.09464, 2020. Cited by 69.
The termination critic. A Harutyunyan, W Dabney, D Borsa, N Heess, R Munos, D Precup. arXiv preprint arXiv:1902.09996, 2019. Cited by 58.
Multi-objectivization and ensembles of shapings in reinforcement learning. T Brys, A Harutyunyan, P Vrancx, A Nowé, ME Taylor. Neurocomputing 263, 48-59, 2017. Cited by 49.
Real-time gait event detection based on kinematic data coupled to a biomechanical model. S Lambrecht, A Harutyunyan, K Tanghe, M Afschrift, J De Schutter, et al. Sensors 17 (4), 671, 2017. Cited by 32.
Learning with options that terminate off-policy. A Harutyunyan, P Vrancx, PL Bacon, D Precup, A Nowé. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. Cited by 27.
Predicting seat-off and detecting start-of-assistance events for assisting sit-to-stand with an exoskeleton. K Tanghe, A Harutyunyan, E Aertbeliën, F De Groote, J De Schutter, et al. IEEE Robotics and Automation Letters 1 (2), 792-799, 2016. Cited by 27.
Reinforcement learning in POMDPs with memoryless options and option-observation initiation sets. D Steckelmacher, D Roijers, A Harutyunyan, P Vrancx, H Plisnier, A Nowé. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. Cited by 25.
Shaping Mario with human advice. A Harutyunyan, T Brys, P Vrancx, A Nowé. Fourteenth International Conference on Autonomous Agents and Multi-Agent …, 2015. Cited by 25.
An analysis of quantile temporal-difference learning. M Rowland, R Munos, MG Azar, Y Tang, G Ostrovski, A Harutyunyan, et al. 2023. Cited by 21.
Planted-model evaluation of algorithms for identifying differences between spreadsheets. A Harutyunyan, G Borradaile, C Chambers, C Scaffidi. 2012 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC …, 2012. Cited by 17.
Off-policy shaping ensembles in reinforcement learning. A Harutyunyan, T Brys, P Vrancx, A Nowé. Frontiers in Artificial Intelligence and Applications 263 (ECAI 2014), 1021 …, 2014. Cited by 14.
Conditional importance sampling for off-policy learning. M Rowland, A Harutyunyan, H Hasselt, D Borsa, T Schaul, R Munos, et al. International Conference on Artificial Intelligence and Statistics, 45-55, 2020. Cited by 12.