Optimizing the Management of Artificial Decision Making: A Case for Unpacking the Black Box and Improving Explainability of Algorithms
Mr. Dawei David Wang
Ph.D. Candidate in Management and Organizations
Kellogg School of Management
Northwestern University
From smartphones to household appliances to self-driving cars, artificial intelligence (AI) increasingly impacts our lives in ways once thought impossible. More and more enterprises are engaging in digital transformation, incorporating algorithm-based decision-making systems into their daily operations. However, one major concern is that most AI-based processes remain "black boxes", sometimes leading to unexpectedly biased or even harmful decisions. With the growing demand for Environmental, Social, and Governance (ESG) accountability, investors and governments are placing greater emphasis not only on "what your company does" but on "how your company does it". It is therefore highly likely that a lack of explainability will pose a challenge for companies embracing AI-driven digital transformation in the near future. In this talk, I present a case in which an algorithm was initially not fully explained. I unpacked the "black box" by conducting "experiments" on the algorithm and making deliberate modifications to the data. My explanations of how the algorithm works led to completely different theoretical, practical, and policy implications. In closing, I connect this case to future research on the growing trend toward digital transformation and the mounting environmental pressure for fair, interpretable, and transparent algorithms.