Lecture 3.2
Ethics and Security
The Edge AI and Robotics Teaching Kit is licensed by NVIDIA and UMBC under the
Creative Commons Attribution-NonCommercial 4.0 International License.
Topics
Manipulation of Behavior
Transparency of AI
Learning Objectives
Discuss common ways that online data can be used to manipulate behavior
Recommended Reading
Will AI take over the world?
Future and Ethics of AI
Retrieved from
https://www.nist.gov/system/files/documents/2020/08/17/NIST%20Explainable%20AI%20Draft%20NISTIR8312%20%281%29.pdf
Privacy and Surveillance
Privacy and Surveillance
Federal agencies are bound by strict privacy laws that protect the data they collect.
Private industry treats that data as a commodity. Internet websites and vendors are required by the FTC to provide a privacy statement (see Privacy policy - Wikipedia).
You can research anyone’s address, phone number, and sometimes their network.
Phone apps can be used to track location. When is this good and when is this bad?
How to protect privacy
Ensure applications comply with NIST Cybersecurity, Data, and Privacy Framework Guidelines:
https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.01162020.pdf
• https://github.com/NVIDIA-AI-IOT/redaction_with_deepstream
• https://docs.microsoft.com/en-us/azure/media-services/previous/media-services-redactor-walkthrough
• https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/notebooks/deepstream_test_1.ipynb
OpenCV Redacting an Image (CPU)
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Haar cascade files for face and eye detection
fpath = './cascades/haarcascade_frontalface_alt.xml'
epath = './cascades/haarcascade_eye.xml'
face_detect = cv2.CascadeClassifier(fpath)
eye_detect = cv2.CascadeClassifier(epath)

# A function for plotting the images
def plotImages(img):
    plt.imshow(img, cmap="gray")
    plt.axis('off')
    plt.style.use('seaborn')
    plt.show()

image = cv2.imread('lena_color.tif')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and blur each detected region
faces = face_detect.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = image[y:y+h, x:x+w]
    roi = cv2.GaussianBlur(roi_color, (23, 23), 30)
    # Impose this blurred region on the original image to get the final image
    image[y:y+roi.shape[0], x:x+roi.shape[1]] = roi

# Converting BGR image into a RGB image
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Display the output
plotImages(image)
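The eye cascade (eye_detect) is loaded above but not used in the face-blurring loop. As a minimal sketch, assuming the same image, gray, faces, and eye_detect variables from the example above, the eye regions alone could be blurred instead of the whole face:

# Sketch: blur only the eye regions inside each detected face
# (assumes image, gray, faces, and eye_detect from the example above)
for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w]
    eyes = eye_detect.detectMultiScale(roi_gray, 1.3, 5)
    for (ex, ey, ew, eh) in eyes:
        eye_region = image[y+ey:y+ey+eh, x+ex:x+ex+ew]
        image[y+ey:y+ey+eh, x+ex:x+ex+ew] = cv2.GaussianBlur(eye_region, (23, 23), 30)
plotImages(image)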
DeepStream Lab - GPU
Video analytics for image analysis and redaction
NVIDIA Online Lab: Getting Started with DeepStream for Video Analytics on Jetson Nano
https://courses.nvidia.com/courses/course-v1:DLI+C-IV-02+V1/info
This lab will walk you through installing and configuring DeepStream on your Jetson Nano.
You will need to use either your mini-USB connection or directly attach your Nano to a monitor, keyboard, and mouse to view the videos.
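For orientation before the lab, the sketch below shows the general shape of a DeepStream pipeline launched from Python through GStreamer. It is illustrative only: the stream file (sample_720p.h264) and inference config (pgie_config.txt) are placeholder names, and the exact elements available depend on the DeepStream version the lab installs.

# Minimal sketch of a DeepStream video-analytics pipeline (not the lab's code).
# Assumes DeepStream and the GStreamer Python bindings are installed on the Jetson;
# the file names below are placeholders.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=pgie_config.txt ! nvvideoconvert ! nvdsosd ! "
    "nvegltransform ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)
# Block until the stream ends or an error occurs, then clean up
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)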
Manipulation of Behavior
Manipulation of Behavior
Sites aim at ‘attention extraction’: getting you to click as much as possible so they can harvest and sell your data.
With the accumulation of data from repeated interactions with systems, users can be targeted and nudged to perform actions.
Advertisers and online sellers can exploit users’ biases, addictions, and other behavioral issues to maximize profits, much as casinos do.
Human Futures (Data Markets about you)
Weight loss
Safety
Attractiveness
Relationship Advice
Animal rescue
Who Wins?
Transparency in AI
Transparency in AI
AI should be traceable and explainable
AI systems for automated decision support and “predictive analytics” raise “significant concerns
about lack of due process, accountability, community engagement, and auditing” (Whittaker et al.
2018: 18ff).*
A person affected by an automated decision may not understand how that decision was made; the system is “opaque” to that person. Even the experts who developed the machine learning algorithms behind the system may not know how a particular pattern was identified. Bias in decision systems and data sets is exacerbated by this opacity. To remove bias, the analyses of opacity and bias must go hand in hand, and the political response must tackle both issues together.*
*Adapted from Ethics of Artificial Intelligence and Robotics (Stanford Encyclopedia of Philosophy)
Explainable AI
Explanation Accuracy – the explanation correctly reflects the system’s process for generating its output
Knowledge Limits – the system only operates under the conditions it was designed for; otherwise its output is not deemed reliable (see the sketch below)
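As an illustration of the Knowledge Limits principle (not from the lecture), here is a minimal sketch of a classifier wrapper that abstains when its confidence falls below a threshold, i.e., when it is likely operating outside the conditions it was designed for. The function name and threshold are hypothetical.

# Minimal sketch: abstain instead of returning an unreliable prediction.
# 'probs' is assumed to be the softmax output of some trained model (hypothetical).
import numpy as np

def predict_with_knowledge_limits(probs, threshold=0.9):
    if np.max(probs) < threshold:
        return None          # outside reliable operating conditions; decline to answer
    return int(np.argmax(probs))

# A confident prediction returns a class index; a low-confidence one returns None
print(predict_with_knowledge_limits(np.array([0.05, 0.93, 0.02])))  # 1
print(predict_with_knowledge_limits(np.array([0.40, 0.35, 0.25])))  # None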
Human Robot Interaction
Human Robot Interaction
Psychologists?
Nurses
Household help
Parents
Human Robot Interaction
Robotic limbs
Ethics of Artificial Intelligence and Robotics (Stanford Encyclopedia of Philosophy) makes the case that longer lifespans will increase the need for healthcare workers, but not necessarily the availability of healthcare workers. Could robots shore up healthcare workers, or enable fewer workers to provide the same care?
Automation and Employment
Automation and Employment
What currently seems to happen in the labor market as a result of AI and robotics automation is “job
polarization” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical
jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the
mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced
because they are relatively predictable, and most likely to be automated (Baldwin 2019).
Autonomous AI
There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976;
Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a
trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side
track, but on that track there is one person, who will be killed if the train takes that side track.
This Robot would let 5 People die | AI on Moral Questions | Sophia answers the Trolley Problem - YouTube
Autonomous Accuracy
Automated checkouts – the benefit of cost savings from staff productivity might outweigh the occasional loss in revenue due to inaccurate checkouts, as in the hypothetical comparison below.
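A tiny sketch of that cost–benefit arithmetic, using purely hypothetical numbers for illustration (none of these figures come from the lecture):

# Hypothetical, illustrative numbers only: compare staffing savings with
# revenue lost to inaccurate automated checkouts.
hours_saved_per_day = 24        # cashier hours replaced (hypothetical)
hourly_wage = 15.0              # USD (hypothetical)
daily_transactions = 2000       # (hypothetical)
error_rate = 0.005              # fraction of transactions mis-scanned (hypothetical)
avg_loss_per_error = 8.0        # USD lost per inaccurate checkout (hypothetical)

daily_savings = hours_saved_per_day * hourly_wage
daily_loss = daily_transactions * error_rate * avg_loss_per_error
print(f"savings/day: ${daily_savings:.2f}, loss/day: ${daily_loss:.2f}, "
      f"net: ${daily_savings - daily_loss:.2f}")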
AI Policy
Policies
A lot of effort is going into creating AI policies, and they may conflict with other policies.
EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells
this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency,
fairness, well-being, and accountability (AI HLEG 2019 [OIR]).
Other Resources
Edge AI and Robotics Teaching Kit
Thank You