EthicalOS Check-List 080618
Most tech is designed with the best intentions. But once a product is
released and reaches scale, all bets are off. The Risk Mitigation Manual
presents eight risk zones where we believe hard-to-anticipate and
unwelcome consequences are most likely to emerge.
HOW IT WORKS:
Choose a technology, product or feature you’re working on. Read through the checklist and
identify the questions and risk zones most relevant to you and the technology you’ve chosen.
Use the “Now what?” action items to start investigating and mitigating these risks.
© 2018 Institute for the Future and Omidyar Network. (CC BY-NC-SA 4.0). SR-2005 | www.iftf.org
Risk Zone 3: Economic & Asset Inequalities
❑ Who will have access to this technology and who won’t? Will people or communities who don’t have
access to this technology suffer a setback compared to those who do? What does that setback look like?
What new differences will there be between the “haves” and “have-nots” of this technology?
❑ What asset does your technology create, collect, or disseminate? (For example: health data, gigs, a virtual
currency, deep AI.) Who has access to this asset? Who has the ability to monetize it? Is the asset
(or profits from it) fairly shared or distributed with other parties who help create or collect it?
❑ Are you using machine learning and robots to create wealth, rather than human labor? If you are
reducing human employment, how might that impact overall economic well-being and social stability?
Are there other ways your company or product can contribute to our collective economic security,
if not through employment of people?
Risk Zone 7: Implicit Trust & User Understanding
❑ Does your technology do anything your users don’t know about, or would probably be surprised to
find out about? If so, why are you not sharing this information explicitly—and what kind of backlash
might you face if users found out?
❑ If users object to the idea of their actions being monetized, or their data being sold to specific types of
groups or organizations, but still want to use the platform, what options do they have?
Is it possible to create alternative models that build trust and allow users to opt in or opt out
of different aspects of your business model moving forward?
❑ Are all users treated equally? If not—and your algorithms and predictive technologies prioritize certain
information or set prices or access differently for different users—how would you handle consumer
demands or government regulations that require all users to be treated equally, or at least
transparently unequally?