Training Leaders in Responsible AI

Tuesday, August 10, 2021
By Genevieve Smith and Olaf Groth
Business schools must teach the moral and economic imperatives for mitigating bias within artificial intelligence.

With artificial intelligence projected to add 15.7 trillion USD to the global economy by 2030, business school graduates undoubtedly will be working at and leading organizations that interact with AI in some way. Much of today’s AI relies on machine learning, in which algorithms learn from massive amounts of data to find patterns and make predictions. Machine learning already powers applications ranging from spam filters to facial recognition software to the programming that runs robots and self-driving cars.

As the global influence of AI expands, business leaders will be instrumental in prioritizing and successfully implementing strategies to advance its use responsibly. This means they must understand the moral and economic imperatives for mitigating bias within AI.

Unfortunately, most business executives aren’t prepared to address these imperatives, and few of them learn about the issue of bias in their business school programs. Most core classes that delve into data, data analysis, and artificial intelligence fail to explore critical topics of responsibility, cultural appropriateness, and ethics. The courses don’t examine how datasets can be biased by who collects the data and how it is collected, how machine learning systems can produce biased outcomes for certain groups, or the other ethical issues these systems raise.

Business schools need to step up. They need to chart the path toward responsible AI and instill in their students the competencies that customers and stakeholders expect business leaders to employ for more equitable, fair, and human-centric AI solutions.

Some universities already are updating their undergraduate and graduate data science curricula to address critical questions about bias. For instance, the University of California, Berkeley, is expanding its responsible data science and computer science curriculum, and its Haas School of Business offers a course called Ethics & AI. At Hult International Business School—which has locations in Boston, San Francisco, London, Dubai, and Shanghai—a course called Big Think AI: Future of Humanity also deals with issues of responsible AI. But even these courses are not core requirements.

Six Steps

So how can business schools better prepare students to respond to the risks and opportunities related to inclusive AI? Here, we outline six ways.

Detail how datasets can harbor bias, and outline strategies to mitigate it. In core courses on data and data analytics, instructors should explain how human decisions shape the collection and labeling of data, as well as the development of algorithms. When assigning class projects, instructors should provide students with questions and prompts related to good practices in mitigating bias. They also should share existing tools and resources developed by academia and industry.
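To make the point concrete, an instructor could walk through a short audit script before students start their projects. The sketch below is purely illustrative: the hiring dataset, the gender and hired columns, and the numbers are hypothetical stand-ins, not drawn from any real product or from the resources named above.

```python
# A minimal dataset "bias audit" sketch (hypothetical data and columns).
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1],
})

# 1. Representation: is each group adequately present in the data?
print(df["gender"].value_counts(normalize=True))

# 2. Label balance: do the positive labels skew toward one group?
print(df.groupby("gender")["hired"].mean())
```

Even a two-check audit like this surfaces the questions the lecture should raise: who is underrepresented in the data, and whose past decisions the labels encode.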



Offer a course on the ethics of AI in management. This course could explore a range of ethical issues related to machine learning. Classes could cover:

  • The business and societal risks posed by biased and opaque AI systems, as well as the economic and societal opportunities for advancing inclusive and human-centric AI.
  • Opportunities to utilize AI to tackle pressing global problems such as climate change, pandemics, crime, and corruption.
  • Potential unintended consequences and other downstream impacts of AI systems—called second- and third-order effects—that could lead to increased social fragility. During these discussions, students could employ adversarial thinking to imagine how AI systems could be exploited or used for malicious purposes.
  • Governance strategies that address ethical issues of AI, including bias and data privacy.
  • Strategies for balancing speed-to-market with ethics, inclusivity, and fairness.
  • Good practices in industry, such as Sama’s ethical data labeling approach, which provides quality jobs for marginalized African communities. Other examples include Doteveryone’s Consequence Scanning guide, Google PAIR’s Data Cards, the Data Nutrition Project’s Nutrition Labels for Datasets, and Salesforce’s Embedding Ethics by Design.

Draw from social science research and coursework. Instructors should make it clear that, in the context of machine learning, fairness and ethics have multiple and sometimes conflicting definitions, especially across geographic borders. Many companies have signed on to responsible AI principles that include fairness. But defining fairness is not a task for data scientists and engineers alone, and it cannot be solved by purely technical means. Rather, defining and operationalizing notions of fairness is intricately linked to management priorities and perspectives and must draw on social science research and understanding.
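A worked toy example can make these conflicts tangible for students. The sketch below uses invented numbers to contrast two common formalizations of fairness: demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates among qualified candidates). A single model can satisfy one while violating the other.

```python
# Toy illustration (invented numbers): two fairness definitions disagree.
# Group A: 100 applicants, 50 qualified; the model selects 40, all qualified.
# Group B: 100 applicants, 20 qualified; the model selects 40, including
#          all 20 qualified plus 20 unqualified applicants.

selection_rate_a = 40 / 100   # 0.40
selection_rate_b = 40 / 100   # 0.40 -> demographic parity holds

tpr_a = 40 / 50               # 0.80
tpr_b = 20 / 20               # 1.00 -> equal opportunity is violated

print(f"Selection rates:     A={selection_rate_a:.2f}  B={selection_rate_b:.2f}")
print(f"True-positive rates: A={tpr_a:.2f}  B={tpr_b:.2f}")
```

Which of the two gaps matters more is exactly the kind of judgment that falls to management, not to the model.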

Train leaders to operationalize AI and data governance. As Harvard University’s Berkman Klein Center for Internet & Society has demonstrated, there is no shortage of high-level responsible and ethical AI principles. But principles do not build a business. Leaders need actionable operating frameworks that integrate commercial and moral imperatives, which would apply to everyone from C-level executives to the most junior product developers and coders. Business schools need to include these frameworks in core business curricula. Think tanks and convening forums, such as the World Economic Forum’s Centre for the Fourth Industrial Revolution and Cambrian Futures, already are developing such approaches for leaders.

Engage more diverse faculty and students. Who researches and manages AI systems matters, because the perspectives and knowledge of those individuals are integrated into the development and use of AI systems. Who teaches AI matters, too, as the instructors’ perspectives and knowledge inform course content and shape students’ experiences. Unfortunately, there is an immense lack of diversity among AI researchers and faculty. As AI systems handle more of our individual and business decisions, it’s imperative that business schools hire and develop diverse global leaders.



Support multinational collaboration. AI systems often are developed in leading economies such as the U.S. and China, yet they frequently fail to incorporate local needs, priorities, and perspectives when deployed in global contexts. Business schools can help by convening global conferences on topics of responsible AI and inviting academics, students, and practitioners to attend. It’s critical that schools build opportunities for students to learn and collaborate across boundaries as a way to mitigate cultural bias and advance more universal and responsible AI.

One example is an AI course developed by Olaf Groth, a co-author of this article, originally taught at Hult and later adapted for Haas. Groth now partners with the telecom company Orange and the German development agency GIZ to teach a version of it across Africa. In the course, participants, speakers, and instructors discuss how to avoid and mitigate cultural biases in datasets or algorithm designs that are rooted in non-African value systems. These biases are particularly sensitive in areas such as education, reproductive health, and the surveillance of criminal activity. Empowering African developers, or teams of African and non-African developers immersed in local value systems, is fundamental to achieving local appropriateness.

Resources and Exercises

To integrate AI issues into their courses, business schools can look to resources created by other institutions. As an example, the Massachusetts Institute of Technology in Cambridge has developed the Moral Machine, an online platform that invites users to provide a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

Several resources also have been developed by Berkeley Haas. For instance, the Center for Equity, Gender & Leadership (EGAL) has developed a playbook for business leaders called “Mitigating Bias in Artificial Intelligence.” This playbook examines the technical and nontechnical roots of bias, including how management decisions and priorities impact bias within AI products. It also explores the impacts of biased AI, looks at the challenges of tackling bias, and outlines seven strategic “plays” across teams, AI models, and corporate governance and leadership.

The same EGAL team developed a case study exercise that can be used in data science courses. In the case, students are presented with a scenario in which a healthcare company develops a machine learning system that predicts which hospitalized patients are at high risk of developing COVID-19 complications. The system informs how life-saving resources such as ventilators should be allocated. The case asks students to consider how the tool might become biased against marginalized communities and to identify technical and management-related strategies for tackling that bias.
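The case materials themselves belong to EGAL, but one mechanism an exercise like this can explore is sketchable in a few lines. The hypothetical example below shows how training a risk score on past healthcare costs, a common but imperfect proxy for medical need, can understate risk for patients with historically lower access to care; every name and number is invented for illustration.

```python
# Hypothetical illustration of proxy-label bias in a triage model.
patients = [
    # (group, true_medical_need, past_healthcare_cost_in_usd)
    ("A", 0.9, 9000),  # high need, good access to care -> high recorded cost
    ("B", 0.9, 3000),  # equally high need, poor access -> low recorded cost
]

COST_THRESHOLD = 5000  # the model flags "high risk" based on the cost proxy

for group, need, cost in patients:
    flagged = cost > COST_THRESHOLD
    print(f"Group {group}: true need {need}, flagged high-risk: {flagged}")

# Equal need, unequal flags: the proxy, not the patients, drives the gap.
```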

Educators also might find inspiration in these two syllabi:

  • Ethics & Artificial Intelligence: This course from Berkeley Haas and Hult delves into the diversity and power structures of the global AI innovation landscape. It also looks at the ways AI is deployed in fragile societal systems that suffer from weak governance, misaligned incentives, and other social or economic ills. Students in the course create conceptual solutions with concrete responsible, human-centric use case designs that respond to societal needs.
  • Human Contexts & Ethics of Data: This UC Berkeley Data Science course teaches students how to recognize, analyze, and shape the human contexts and ethics of data. In the capstone assignment, students are asked to describe and provide solutions to an ethical or social problem related to data science.

Benefits and Challenges

AI will only become more important to our personal lives and more ubiquitous in business contexts. Business educators must help students understand not only the potential benefits of AI, but also the serious challenges it presents.

At Berkeley Haas, we plan to continue leaning into the Defining Leadership Principles that drive our school’s culture. These four principles encourage students to question the status quo, adopt confidence without attitude, consider themselves students always, and look beyond themselves—four actions that will be necessary if they are to work toward mitigating bias in AI. If business educators fail to teach students to question what they see in AI now, we will forfeit the right to question their decisions later.

Smith is the lead author of the Berkeley Haas Center for Equity, Gender, and Leadership playbook “Mitigating Bias in Artificial Intelligence.” Groth is the co-author of The AI Generation: Shaping Our Global Future With Thinking Machines and the forthcoming The Great Remobilization: Designing a Smarter World.

Authors
Genevieve Smith
Associate Director, Center for Equity, Gender, and Leadership, Berkeley Haas, University of California, Berkeley
Olaf Groth
Professor, Hult International Business School