Meet your new banker: an algorithm. Your company’s talent acquisition, your corporate accountant, your investment broker, your credit assessor… all on the list of jobs now accepting applications from non-organic AI employees. Since the debut of generative (makes new stuff) and agentic (completes assigned tasks) AI specialists, the corporate world has been chasing the future that AI promised: the most efficient and objective decision making possible. Can a cold and uncaring large language model really be the solution?
New empirical research suggests it might be a little trickier than that. AI is a broad term for technology that uses algorithms to automate tasks previously done by humans, and decision making in the C-suite is one of those tasks. For decades, corporate work like scouting new talent, analyzing investment opportunities, and setting department budgets was the responsibility of (biased) human employees. And to be clear: some level of bias in decision making is necessary, even helpful. Cognitive bias, a reliance on established causality and past trends, is awfully helpful when deciding whether or not to hire someone with a history of terminated employment. Emotional bias, on the other hand, can be a fickle and destructive impulse (a CEO prefers a laid-back atmosphere and so passes over a perfect Type-A candidate for the team).
The argument in favor of AI assistance in corporate finance and management roles was that its empirical, quantitative reasoning would deliver results that reflect less discriminatory thinking. But as new analysis from Leavy (2020) shows, AI seems to have inherited a lot of our human preferences. The problem, as Siraj Kariyilaparambu Kunjumuhammed of the Modern College of Business and Science in Oman summarizes it (Kunjumuhammed, 2024), has to do with history, or, to put a fine point on it, historical data. When AI is prompted to make a decision, it invariably draws on whatever historical data it has about similar cases and past decisions. What does that mean? Well, if a healthcare company uses an AI analysis to pick locations for a new clinic, and the data shows that high-income neighborhoods deliver a more consistent return on investment for providers, the AI may favor a fiscally promising (if medically over-served) neighborhood over a more at-risk one. And it can get more sinister than that: Obermeyer et al. (2019) found that a widely used medical risk-prediction algorithm systematically underestimated the health needs of Black patients because it relied on past healthcare spending, rather than actual illness, as its measure of need.
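To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The neighborhoods, income brackets, and ROI figures are hypothetical and come from none of the cited studies; the point is only that a system scored purely on historical returns will keep ranking the already well-served option first.

```python
# Illustrative sketch only: hypothetical records, not real data.
# "Trains" on historical ROI and then scores candidate clinic sites
# by the average ROI seen for neighborhoods in the same income bracket.
from collections import defaultdict

# Historical records: (income_bracket, observed_roi)
history = [
    ("high", 0.14), ("high", 0.12), ("high", 0.15),
    ("low", 0.06), ("low", 0.05), ("low", 0.07),
]

# "Training": average ROI per income bracket.
by_bracket = defaultdict(list)
for bracket, roi in history:
    by_bracket[bracket].append(roi)
avg_roi = {b: sum(v) / len(v) for b, v in by_bracket.items()}

# "Inference": rank candidate sites purely by predicted ROI.
candidates = [("Well-served suburb", "high"), ("Underserved district", "low")]
for name, bracket in sorted(candidates, key=lambda c: avg_roi[c[1]], reverse=True):
    print(f"{name:22s} predicted ROI = {avg_roi[bracket]:.2f}")
# The underserved district never comes out on top, because medical need
# was never a feature in the historical data the model learned from.
```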
The machines can only be expected to follow the patterns of decision making reflected in the data they are given. That means they are bound to reproduce some of their human developers’ discriminatory practices (Aquino, 2023). But that doesn’t mean the technology is wholly without merit: the International Monetary Fund and its team of international finance regulators believe that AI will be critical to the future of fraud detection and protection against money laundering. As regulators, they believe that AI requires oversight and supervision, and that if prompted to correct for its algorithmic predispositions, it can develop new, healthier patterns. Pre-processing the data to correct for known disparities before it is fed into the model is one solution. Another involves hiring review teams with varied backgrounds, identities, and domain-specific knowledge to audit the prompting process (Srinivasan & Chander, 2021). And almost all AI users are exploring ways to get more rigorously annotated reports from the AI on how it reaches a conclusion.
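As a rough sketch of what that pre-processing idea can look like in practice, one common approach is to reweight the training rows so that an under-represented group carries as much total weight as an over-represented one before the model is ever fitted. The group labels and counts below are hypothetical, and real fairness toolkits offer more principled versions of this step.

```python
# Illustrative pre-processing sketch: rebalance group weights before training.
from collections import Counter

# Hypothetical training rows: (group_label, outcome)
rows = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1),
]

counts = Counter(group for group, _ in rows)
n_rows, n_groups = len(rows), len(counts)

# Weight each row so every group's total weight ends up equal.
weights = [n_rows / (n_groups * counts[group]) for group, _ in rows]

for (group, outcome), w in zip(rows, weights):
    print(f"{group}: outcome={outcome}, weight={w:.2f}")
# group_a rows get weight 0.75 and group_b rows get 1.50, so each group
# contributes a total weight of 3.0; these weights would then be passed
# to the model's fitting step as sample weights.
```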
One way or another, it’s highly likely that all of us will be on the receiving end of a corporate AI decision pretty soon. And when that time comes, we will hope we can trust in the fairness of that mystifying and futuristic process. But it could take a while. Don’t be surprised if your algorithmic banker comes with a human babysitter.
References
Aquino, Y. S. J. (2023). Making decisions: Bias in artificial intelligence and data-driven diagnostic tools. Australian Journal of General Practice, 52(7), 439–442. https://doi.org/10.31128/AJGP-12-22-6630
Kunjumuhammed, S. K. (2024). Adoption of artificial intelligence in corporate finance: Addressing bias and ethical considerations. In Advances in Finance, Accounting, and Economics Book Series (pp. 1–16).
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., & Staab, S. (2020). Bias in data-driven artificial intelligence systems: An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356. https://doi.org/10.1002/widm.1356
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Srinivasan, R., & Chander, A. (2021). Biases in AI systems: A survey for practitioners. ACM Queue, 19(2), 45–64. https://doi.org/10.1145/3466132.3466134