Research Roundup: October 2023
How Do Crowds Weed Out Bad Ideas?
Groups of people evaluate early-stage ideas differently than they do later-stage ideas, according to a new paper published in Research Policy. Its co-authors believe that studying how open innovation works at all stages will provide insights to help innovation managers save time and resources by eliminating bad ideas earlier in the process.
The six researchers exploring this premise are Linus Dahlander of ESMT Berlin; Michela Beretta and Lars Frederiksen, both of Aarhus University in Denmark; Arne Thomas of the University of Amsterdam; and Shahab Kazemi and Morten H.J. Fenger, who formerly were researchers at Aarhus University.
The team analyzed data from the interactive LEGO IDEAS platform, where individuals can submit proposals for new LEGO sets to other platform users. These proposals are crowdsourced, going through rounds of selection until a winning idea emerges. The researchers found that in early rounds, users evaluated each proposal primarily based on the status of the idea creator and the quality of the creator’s presentation; in later rounds, they looked instead at the idea’s popularity and growth trajectory.
The team also explored whether machine learning (ML) models could identify the winners in a crowd of ideas, and it discovered early signals indicating which ideas were most likely to face crowd rejection. The researchers conclude that “ML algorithms can predict which ideas are weeded out at the early stages of crowd selection” but that “picking winners at later stages is less predictable.”
In other words, “organizations could use algorithms to pre-filter early ideas,” saving resources early in the idea generation process. This would allow them to reserve human attention for later on, when the ideas that remain are of higher quality.
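The study itself does not publish a model, but the pre-filtering idea is straightforward to prototype. Below is a minimal, hypothetical sketch using scikit-learn: it trains a classifier on synthetic early-stage signals (the feature names and data are illustrative assumptions, not the authors’ variables) and routes only promising ideas to human reviewers.

```python
# Hypothetical sketch of algorithmic pre-filtering of crowdsourced ideas.
# Feature names and data are illustrative assumptions, not the study's variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Early-stage signals: creator's platform status and presentation quality (assumed scales).
X = np.column_stack([
    rng.integers(0, 5, n),    # creator_status: e.g., number of prior ideas posted (0-4)
    rng.uniform(0, 1, n),     # presentation_quality: e.g., a 0-1 score for images/description
])
# Label: 1 if the idea survived the first selection round (synthetic for illustration).
y = (0.6 * X[:, 0] / 4 + 0.4 * X[:, 1] + rng.normal(0, 0.2, n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Pre-filter: route only ideas with a reasonable survival probability to human reviewers.
survival_prob = model.predict_proba(X_test)[:, 1]
for_review = X_test[survival_prob >= 0.3]
print(f"{len(for_review)} of {len(X_test)} ideas pass the automated pre-filter")
```

In practice, the 0.3 threshold would be tuned to balance reviewer workload against the risk of discarding good ideas too early.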
“By understanding these critical characteristics of successful ideas, managers can sieve out underwhelming concepts right at the inception,” says Dahlander. “The long-term implications? More informed, strategic decisions in innovation management.”
Thinking of Banning ChatGPT? Not So Fast
When ChatGPT brought artificial intelligence (AI) into the mainstream last year, the knee-jerk reaction of many people, especially those in education, journalism, and similar fields, was to ban its use. But such bans might produce unintended negative effects.
Four researchers from Olin Business School at Washington University in St. Louis and the National University of Singapore (NUS) have written a working paper that explores what happened earlier this year when the Italian Data Protection Authority (GPDP) instituted a nationwide ban of ChatGPT. The authority made this move after determining that the service violated data protection laws. The ban was lifted a month later, after OpenAI, ChatGPT’s creator, had “addressed or clarified” the GPDP’s concerns.
Jeremy Bertomeu, associate professor of accounting at Olin, wrote the working paper with accounting professors Yupeng Lin and Yibin Liu and doctoral student Zhenghui Ni, all of NUS. The researchers saw the GPDP’s action as an opportunity to study what happens when AI’s use is outlawed. They found that businesses—especially those that provide professional, scientific, and technological services—experienced an average negative return of 8.7 percent compared with similar businesses in other European countries. The ban affected new and small businesses the most.
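The working paper’s full estimation strategy is not described here, but a headline number like this implies a before-and-after comparison of Italian firms against similar firms in countries without a ban. As a rough illustration only, with invented figures, the core difference-in-differences arithmetic looks like this:

```python
# Minimal difference-in-differences sketch of the ban comparison.
# All numbers are invented; the paper's actual estimation is more involved.
import pandas as pd

returns = pd.DataFrame({
    "country":    ["IT", "IT", "FR", "FR"],
    "period":     ["pre", "post", "pre", "post"],
    "avg_return": [0.012, -0.075, 0.011, 0.010],  # hypothetical average returns
})

pivot = returns.pivot(index="country", columns="period", values="avg_return")
did = (pivot.loc["IT", "post"] - pivot.loc["IT", "pre"]) \
    - (pivot.loc["FR", "post"] - pivot.loc["FR", "pre"])
print(f"Difference-in-differences estimate: {did:.3f}")  # about -0.086 here
```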
“Not since the introduction of the internet has a technology so quickly transformed how businesses operate and compete. What’s remarkable is that [AI is] easy and cheap to adopt, helping small, underdog businesses compete with much larger firms without making heavy investments in infrastructure or human capital,” says Bertomeu in an article on Olin’s website.
The research team also found that the ban affected Italian investors, who could not use ChatGPT to gather information about the financial market. This “information asymmetry” caused an increase in bid-ask spreads, defined as the amount by which the seller’s ask price exceeds the buyer’s bid price. As a result, shares of Italian firms became less liquid.
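As a concrete illustration of the measure (with invented quotes, not data from the study), the arithmetic is simple:

```python
# Bid-ask spread arithmetic with invented quotes (not data from the study).
bid, ask = 9.95, 10.05  # best buyer bid and best seller ask, in euros

spread = ask - bid                   # absolute spread
midpoint = (ask + bid) / 2
relative_spread = spread / midpoint  # a common liquidity measure

print(f"spread = {spread:.2f}, relative spread = {relative_spread:.2%}")
# Wider spreads mean traders pay more to transact, i.e., lower liquidity.
```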
With concerns about security threats and data privacy top of mind for many government officials, other countries might be considering restricting AI’s use. But what happened in Italy should make policymakers worldwide think carefully before regulating the technology, the paper’s authors argue.
“Regulatory policy should always be based on careful cost-benefit analyses and public input,” says Bertomeu. “Our data demonstrate what can go wrong when regulators skip these fundamental steps.”
More Business Schools Are Integrating AI—Carefully
Generative AI is being taught at three out of four business schools that responded to a survey conducted in August by the Graduate Business Curriculum Roundtable (GBC Roundtable). The survey included responses from 72 faculty and professionals at 68 business schools, most located in the U.S.
According to the GBC Roundtable’s report of its survey findings, business schools are integrating AI into their curricula, but only on a limited basis. Just 15 percent of respondents noted that generative AI is significantly or fully taught in their schools’ programs, and only 19 percent of the schools represented offer dedicated courses in the topic. These courses most commonly cover introductions to AI, the ethics and legal implications of AI, and industry innovation.
Among those responding:
- 30 percent report that either their business schools or their universities have policies regarding the use of generative AI.
- 20 percent report having formal groups working on crafting AI policy.
- 33 percent have discussions underway.
- 28 percent say that generative AI is significantly or fully integrated into faculty research.
- 50 percent say business schools should be using generative AI to personalize student advising, tutoring, and career coaching.
Perhaps not surprisingly, the survey shows that schools with AI policies are more likely to have integrated generative AI into their curricula than schools that do not yet have such policies in place.
Early adopters are exploring the applications of AI in a variety of ways, such as using AI chatbots to answer common student questions, offering advising and career planning help, and supporting student self-assessment. Some professors are using generative AI to clean up course instructions, generate ideas for discussion board assignments, and assist analytics students in writing code. A handful of professors are experimenting with using generative AI to batch grade certain assignments.
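None of these implementations is described in the survey report, but an experiment such as batch grading can be a short script around any chat-completion API. Here is a hypothetical sketch using OpenAI’s Python SDK; the model name, rubric, and prompts are assumptions, and the script requires an OPENAI_API_KEY environment variable.

```python
# Hypothetical batch-grading sketch; the rubric, model choice, and prompts are assumptions.
# Requires the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

RUBRIC = ("Grade this submission 0-10 for clarity, evidence, and analysis. "
          "Reply as 'score: N' followed by one sentence of feedback.")

def grade(submission: str) -> str:
    """Send one submission to the model and return its graded response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever your school licenses
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": submission},
        ],
    )
    return response.choices[0].message.content

submissions = ["Essay text one...", "Essay text two..."]
for i, text in enumerate(submissions, 1):
    print(f"Submission {i}: {grade(text)}")
```

Any experiment along these lines would still need human spot-checking, given the academic integrity and bias concerns the schools themselves raise.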
Most schools acknowledge that the technology presents challenges relating to academic integrity, misinformation, and bias. That said, they still are prioritizing generative AI in their teaching and research.
“With the growth of generative AI, business schools are presented with additional unique challenges and opportunities,” says Jeff Bieganek, executive director of the GBC Roundtable. “It is exciting to see the thoughtful and impressive actions and programs that business schools and their leaders are developing and implementing to integrate generative AI into their programs, curriculum, and operations.”
Research News
■ An open-source tool for cybersecurity. Supported by a 600,000 USD grant from the National Science Foundation, researchers at the University of Arizona (UA) in Tucson and Indiana University (IU) in Bloomington are developing an open-source software tool to give security analysts more capacity to protect their organizations from cyberthreats. Researchers are working on a Vulnerability Management System driven by artificial intelligence-enabled analytics, which analysts can use to search for information, share research findings, and collaborate to manage and protect susceptible data at their organizations.
The project’s principal investigator is Hsinchun Chen, the UA Regents’ Professor of MIS and director of the university’s Artificial Intelligence Laboratory. Chen is working with co-principal investigator Sagar Samtani, assistant professor of operations and decision technologies at the IU Kelley School of Business. Additional computing support will come from OmniSOC, the shared security operations center based at IU Bloomington, and Jetstream2, a cloud computing resource led by IU’s Pervasive Technology Institute.
■ An emphasis on qualitative research methods. In September, the School of Business at Aalto University in Espoo, Finland, opened the Center for Qualitative Management Studies. The center, known as Qual+, will support researchers as they develop new methodologies, engage in multidisciplinary dialogue, and explore future trends in qualitative methods. Qual+ will organize seminars, workshops, and other events focused on issues related to qualitative scholarship.
The center’s focus on qualitative research is important because during a time of disruption and rapid change, “research aiming to predict the future and identify recurring patterns is not useful,” said Rebecca Piekkari, center director, at its inaugural event. “Quantitative methods alone are not enough if we want to address causal complexity, future uncertainty, and systemic and processual changes in order to produce radically new solutions.”
■ An effort to tackle challenges in healthcare. The new Center for Health and Business at Bentley University in Waltham, Massachusetts, will bring together 80 faculty and staff members across disciplines to address challenges in the healthcare industry. The goal, according to the school’s statement, is to “create a more financially sustainable, accessible, and equitable health system that balances cost and quality.”
The center will focus on three areas: educating learners, producing actionable research, and building partnerships with companies in the sector. Among the center’s early initiatives will be Business of Health Innovation, a weeklong program scheduled for summer 2024 that will introduce the business of healthcare to rising high-school juniors and seniors.
Danielle Hartigan, associate professor of health studies, will serve as the center’s executive director. Hartigan is one of three faculty members awarded a multiyear grant from the U.S. National Institutes of Health to study how institutional trust affects public health outcomes. She and her colleagues will involve students as research assistants for the project. Center researchers also will explore how artificial intelligence and virtual reality can improve healthcare delivery and outcomes.
■ Support for critical environmental issues. The Andrew Sabin Family Foundation has donated 5 million USD to Wake Forest University in Winston-Salem, North Carolina, which the school will use to turn its Center for the Environment, Energy and Sustainability into the Andrew Sabin Family Center for Environment and Sustainability. The Sabin Center will support interdisciplinary scholarship and initiatives that focus on issues such as climate change, biodiversity loss, environmental contamination, drought and water scarcity, resource depletion, and deforestation. The center will involve faculty and students from across the university, including the schools of business, medicine, law, and divinity.
■ Exploration of generative AI’s benefits. A new report discusses how AI and generative AI platforms could benefit society in South Africa. Jointly released by Wits Business School (WBS) in Johannesburg, the Boston Consulting Group, and Microsoft South Africa, the report presents use cases, describes possible contributions, and explores the potential impact of AI in the healthcare, education, financial inclusion, and agriculture sectors.

In healthcare, for example, AI platforms could complete work such as transcribing and summarizing consultation notes, updating patient files, and disseminating alerts and advice in different languages, freeing up time for doctors and nurses to focus on more pressing tasks. In education, AI-powered tutors could address teacher shortages and make educational assistance and resources more accessible to learners at all levels.
“We need to understand that AI is a powerful tool, a technology that complements and enhances human capabilities. If we do not do this, we risk instilling fear about its potential to replace humans,” says Maurice Radebe, head and director of WBS. “Therefore, we need to take a collaborative approach to AI, where its best use cases emerge when combined with the unique strengths of human intelligence.”
Send press releases, links to studies, PDFs, or other relevant information regarding new and forthcoming research, grants, initiatives, and projects underway to AACSB Insights at [email protected].