A Serious Game to Promote a Responsible Use of Artificial Intelligence in Business
We developed and implemented a serious moral game and tested it with students in the Management and Information Science program at Leuphana. Early use of the game in teaching shows positive results.
Call to Action
Artificial intelligence (AI) will continue to be used in millions of projects worldwide in all sectors to automate or revolutionize work processes. Millions of people already use relevant applications regularly. However, AI and machine learning also come with many ethical risks and problems, which professionals often fail to recognize. These challenges must be addressed in higher education, as today’s students of information science and business administration will be tomorrow’s decision-makers.
Traditional ethics education has two major shortcomings. First, it usually focuses on developing ethical reasoning, judgment, and problem-solving. Moral reasoning is undoubtedly an important competence, an ability that helps people sort out dilemmas. However, lapses in this ability account for only 10 percent of failures in moral behavior. We also need to train people to (1) care enough to look closely (due diligence, moral commitment), (2) recognize ethical problems intuitively (moral sensitivity), and (3) speak up and address problems proactively (moral resoluteness).
Second, ethics education is most frequently based on lectures and reading (theoretical approach) or the discussion of cases (deliberative approach). These approaches are significantly less effective than experiential learning (engagement approach), which asks learners to imagine how they would actually deal with challenging situations and gives them feedback on their imagined behaviors. A recent meta-analysis found that the engagement approach led to better learning outcomes across the board, including knowledge acquisition, moral reasoning, attitudes, ethical sensitivity, and moral behavior.
Innovation Description
To provide effective ethics training on the responsible use of AI at Leuphana, we developed a serious moral game. CO-BOLD is designed to address the main reasons that people often fail to detect and address ethical problems in the use of AI. Players take on the role of a quality assurance manager at a large tech company. Their mission is to assess the quality of an innovative AI assistant for investment consultants (“CO-BOLD”) before it is delivered to a large bank.
To win the game, players need to adopt a questioning mind. They learn to recognize eight ethical risks and problems that are common in AI applications. Moreover, they learn to think about the quality of AI applications in terms of eight quality dimensions and to develop professional skepticism in evaluating them. Finally, the game challenges learners to speak up and address ethical problems with AI in the face of business pressures.
For the accompanying research, we developed an ethical sensitivity test for risks associated with artificial intelligence. Before training, a group of 28 management master's students scored an average of only 2.8 points (out of 14) on this test. In other words, of the seven ethical issues that could be recognized, most respondents noticed fewer than two and were oblivious to the rest (including, for instance, the potential for severe harm). This lack of ethical sensitivity is common (we have also found it in students of information science) and is a serious problem.
Innovation Impact
So far, we have tested CO-BOLD in six courses with a total of more than 250 business administration and information science students at Leuphana. In general, students indicated that they enjoyed the experience of playing CO-BOLD for roughly 2.5 hours, and in the initial phases their feedback helped improve the game significantly. Students perceive the game as mentally challenging but report that it conveys complex concepts in an accessible manner. The game is combined with exercises and lectures as part of a short course on AI Ethics, and students have also expressed approval of the rest of the course. Anonymous feedback from management students indicates that the game is relatively realistic, rewarding, immersive, and educative.
In terms of learning, the short course also appears to be effective. As part of their recent exams, 95 of the previously mentioned management students retook our sensitivity test on the ethical risks of AI. On average, they identified four of the seven ethical problems, so their total scores more than doubled. We assume that the exams provided additional motivation to study the relevant problems in detail. Further experiments will show how much people learn from playing the game alone or as part of more extensive courses. For now, we are highly satisfied that the course seems to stimulate the kind of learning and thinking we were hoping for.
Reference Link
- CO-BOLD: A Serious Game to Promote a Responsible Use of Artificial Intelligence in Business, Leuphana University Lüneburg