Proactive Policies for AI
• Mitigating inherent risk: There are many examples of AI gone wrong: AI facial recognition systems have misidentified people accused of crimes, AI credit scoring has demonstrated gender bias, AI-driven housing and benefits applications have amplified discriminatory language, and many AI development projects have failed outright, such as IBM’s Watson for Oncology, which burned through around $62 million before being abandoned. Policymakers need to develop comprehensive policies that address AI technology risks at multiple levels, including ethics, performance, and equity.
Cite as: Samuel, J. (2021), “A Call for Proactive Policies for Informatics and Artificial Intelligence Technologies”, Scholars Strategy Network. URL: https://scholars.org/contribution/call-proactive-policies-informatics-and