Overlooked Risks of Using AI in Business

In this era of generative AI, the concerns of organizations and individuals often center on keeping up with and adopting new AI technologies. It is irrefutable that AI is powerful and useful in numerous business activities. In many cases, AI can partially automate work and improve efficiency drastically. It also enables new approaches to customer service and other human interactions. However, there is also a risk of using AI in the wrong ways, which could be worse than not using AI at all. No, this does not refer to the dangers of deepfakes or data privacy issues. Beyond those commonly mentioned troubles, multiple academic papers point to other potential perils of using AI for workplace tasks.

There are at least three risks associated with using AI tools such as ChatGPT for business problem-solving tasks.

First, AI tools tend to generate ideas of low diversity, which could narrow people’s thinking. In a field experiment (not yet peer-reviewed) conducted within Boston Consulting Group (BCG), an elite consulting firm, the ideas generated by participants with access to GPT-4 were 41% less diverse than those generated by participants without AI assistance (Candelon et al., 2023; Dell’Acqua et al., 2023). Specifically, the task was to brainstorm ideas and develop business cases (among other things) for new footwear products that satisfy an unmet need. Idea diversity was evaluated through computational methods (using TF-IDF and cosine similarity). Notably, participants who edited the AI’s ideas did not significantly increase idea diversity, which suggests that they tended to conform to the AI’s output. If AI leads people to think in narrower ways, it may well inhibit, rather than promote, innovation. AI users therefore need to keep this risk in mind and explore ways to infuse thought diversity into an AI-supported thinking process.
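
For readers unfamiliar with these metrics, here is a minimal sketch (an illustration, not the study’s actual code) of how such a diversity score could be computed, assuming Python with scikit-learn: each idea becomes a TF-IDF vector, and diversity is the average pairwise dissimilarity (one minus cosine similarity) across all pairs of ideas.

```python
# Minimal sketch of an idea-diversity score using TF-IDF and cosine
# similarity. Illustrative only; it does not reproduce the BCG study's
# actual analysis pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def idea_diversity(ideas: list[str]) -> float:
    """Average pairwise dissimilarity (1 - cosine similarity) of TF-IDF vectors.

    Higher scores mean the set of ideas is more diverse.
    """
    vectors = TfidfVectorizer().fit_transform(ideas)  # one row per idea
    sims = cosine_similarity(vectors)                 # n x n similarity matrix
    n = len(ideas)
    # Average over distinct pairs, excluding the diagonal of self-similarities.
    mean_sim = (sims.sum() - n) / (n * (n - 1))
    return 1.0 - mean_sim


# Hypothetical footwear ideas, echoing the study's brainstorming task.
ideas = [
    "Self-lacing sneakers with pressure sensors for runners",
    "Modular shoes with swappable soles for different terrains",
    "Sneakers with auto-tightening laces and embedded gait sensors",
]
print(f"Diversity score: {idea_diversity(ideas):.3f}")
```

Under a metric like this, a lower average score for a group’s pooled ideas indicates that its members converged on similar suggestions, which is roughly the pattern the study reports for AI-assisted participants.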

Second, AI, especially LLM-based tools like ChatGPT, can appear highly trustworthy even in activities at which it performs poorly. The BCG study mentioned above included a second task: improving a fictitious company based on interview notes with (fictitious) executives and historical business performance data. Unlike the brainstorming task, this one had a correct answer. The authors suggest that LLMs are not particularly good at using such nuanced qualitative and quantitative data to solve complex business problems. Having access to GPT-4 actually reduced participants’ performance on this problem. Surprisingly, the worst performers were those who had received brief training on prompting and GPT-4’s limitations. Perhaps these participants were more confident in their ability to use AI, yet still failed to assess the AI’s output critically. In another study (not yet peer-reviewed), recruiters given a good but imperfect AI tool for evaluating job applicants actually performed worse than those given a poor tool (Dell’Acqua, 2022). Those with the good tool exerted less effort, spent less time on the task, and were more likely to pick the candidate the AI recommended. In contrast, those with the poor tool (with lower evaluation accuracy) relied more on their own judgment and ended up making better selections. These studies suggest a strong tendency for people to overestimate and overtrust AI tools when those tools appear capable.

The least obvious, and thus perhaps most alarming, risk is that employees may lose motivation and skills. Many people believe AI is beneficial because it can automate boring tasks and let us focus on high-level, interesting activities. But that is not exactly what AI does. For many professionals, coming up with new ideas or writing compellingly is the most engaging part of their work, and AI, for all its flaws, is useful and indeed used in precisely those areas. The disruption of this kind of work by AI can be demotivating. Beyond the motivation issue, getting less on-the-job practice and learning may erode skills in important intellectual work over time (Beane, 2019, 2022). Just think about what GPS has done to our navigation skills: AI could plausibly do the same to our intellectual capabilities. In fact, about 70% of the interviewees in the BCG study believed that frequent use of AI for idea generation might harm their creative abilities over time.

Some people may argue that AI technologies are improving so quickly that these problems will disappear with better, more accurate models or applications. However, within the large-language-model paradigm, building AI that recognizes its own limitations appears challenging, so it likely remains our responsibility to discern when to trust AI and when not to. Moreover, human nature evolves far more slowly than technology: it is hard to expect people to stay motivated by a task once AI automates it, even if the work is intellectual.

Given these critical risks, companies should think hard not only about learning and adopting the hottest AI tools, but also about how to use them in ways that actually promote innovation, minimize overtrust and inaccuracies, and sustain employee motivation and skills. This is a difficult undertaking, but the first step is to make people in your organization aware of the risks instead of embracing AI tools blindly. A second suggestion is to let employees work on what they genuinely care about. A boring job gives people every incentive to delegate as much as possible to AI. That might not sound like a problem in itself, but the consequence may be a further reduction in effort, motivation, professional growth, and job satisfaction, potentially creating a vicious cycle. By contrast, those who are intrinsically interested in their work are more likely to make good use of the time AI frees up, and to find better uses for AI in the first place. Many additional measures, some technical and some organizational, will be needed to address all of these concerns. The bottom line: now is the time to start mitigating the risks.

References

Beane, M. (2019). Learning to work with intelligent machines. Harvard Business Review, 97(5), 140-148.

Beane, M. (2022). Today’s robotic surgery turns surgical trainees into spectators: Medical training in the robotics age leaves tomorrow’s surgeons short on skills. IEEE Spectrum, 59(8), 32-37.

Candelon, F., Krayer, L., Rajendran, S., & Martinez, D. Z. (2023, September 21). How people can create—and destroy—value with generative AI. Boston Consulting Group. https://www.bcg.com/publications/2023/how-people-create-and-destroy-value-with-gen-ai

Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., … & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, (24-013).

Dell’Acqua, F. (2022). Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters. Working paper.
