Publications

Type of Publication: Journal Article

A User Study on Explainable Online Reinforcement Learning for Adaptive Systems

Author(s):
Metzger, Andreas; Laufer, Jan; Feit, Felix; Pohl, Klaus
Title of Journal:
ACM Transactions on Autonomous and Adaptive Systems (TAAS)
Volume (Publication Date):
19 (2024)
Issue Number:
3
Location(s):
New York, NY, USA
Keywords:
adaptive system, machine learning, reinforcement learning, explainability, interpretability, debugging
Digital Object Identifier (DOI):
10.1145/3666005

Abstract

Online reinforcement learning (RL) is increasingly used to realize adaptive systems in the presence of design-time uncertainty, because online RL can leverage data that only becomes available at run time. With the growing interest in Deep RL, the learned knowledge is no longer represented explicitly but is hidden in the parameterization of the underlying artificial neural network. For a human, it thus becomes practically impossible to understand the decision-making of Deep RL, which makes it difficult for (1) software engineers to perform debugging, (2) system providers to comply with relevant legal frameworks, and (3) system users to build trust. The explainable RL technique XRL-DINE, introduced in earlier work, provides insights into why certain decisions were made at important time steps. Here, we report on an empirical user study of XRL-DINE involving 73 software engineers split into a treatment group and a control group. The treatment group is given access to XRL-DINE, while the control group is not. We analyze (1) the participants’ performance in answering concrete questions related to the decision-making of Deep RL, (2) the participants’ self-assessed confidence in giving the right answers, (3) the perceived usefulness and ease of use of XRL-DINE, and (4) the concrete usage of the XRL-DINE dashboard.