IN5480 - Individual Assignment 3
snorreod
The history of AI
AI came about during the Second World War, when the Allies started building machines with the objective of breaking the Germans' encrypted communications. According to Grudin (2009), the term Artificial Intelligence was first used in 1956 by the American mathematician and logician John McCarthy.
My definition of Robot:
“A machine that can perform at least one task. A robot often has a human-like element, in appearance or in how it accomplishes a task, that makes it different from what humans would normally just call a machine.”
I think this definition highlights that there is a sort of mental difference between what we would just call a machine and what we would call a robot, and that the difference often lies in the robot completing its task in a more advanced, human-like manner.
Universal Design
Definition of Universal design by universaldesign.ie:
“Universal Design is the design and composition of an environment so that it can be
accessed, understood and used to the greatest extent possible by all people regardless of
their age, size, ability or disability” (Universal Design 2020)
I think this is a good definition based on what I have learned about universal design, with its focus on designing things that can be used by everyone, regardless of their circumstances.
I decided to compare Microsoft's AI guidelines with Nielsen and Molich's 10 User Interface Design Guidelines.
The HCI guidelines and the AI guidelines they match:
- Visibility of system status
- “Make clear why the system did what it did” matches this in that it focuses on
showing where the system is and how it got there.
- Match between system and the real world
- Matches “Match relevant social norms” in that the system should be expected to communicate with the user in a way that fits the given context.
- User control and freedom
- Matches “Support efficient correction” in that it should be easy to fix things when the system or the user has made an error.
- Consistency and standards
- This matches “Update and adapt cautiously”. Both the HCI guideline and the AI guideline warn us against making changes that leave the user struggling to get back into the system because it has changed beyond what they could reasonably expect.
- Error prevention
- This matches “Show contextually relevant information” in that we should limit the chances for the user to make a mistake.
- Recognition rather than recall
- Partly matches “Show contextually relevant information” in that we need to display task-related information in a way that is easy to recognize, so the user does not have to recall where it is.
- Flexibility and efficiency of use
- Connected to “Learn from user behavior” in that the system should be able to learn from the user's behavior so that it can become more efficient at the tasks it knows the user frequently does (see the sketch after this list).
- Aesthetic and minimalist design
- Matches “Show contextually relevant information”: the main focus should be to present users with the information they need for their current task, not unnecessary information that might distract them from finding what they need.
- Help users recognize, diagnose and recover from errors
- Is connected to “Make clear why the system did what it did” in that it should
be clear to the user why the system behaved the way it did.
- Help and documentation
- This can be connected to the initial guidelines, “Make clear what the system can do” and “Make clear how well the system can do what it can do”, in that the program should readily tell the user what it can do and how well it can do it.
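As a toy illustration of the “Learn from user behavior” point above, here is a minimal sketch of a system that ranks its suggestions by how often the user actually performs each task. The class and the task names are hypothetical, not taken from either set of guidelines:

```python
# Minimal sketch of "Learn from user behavior": count which tasks the user
# performs and surface the most frequent ones first, so frequent tasks
# become more efficient over time. All names here are hypothetical.
from collections import Counter

class TaskSuggester:
    def __init__(self):
        self.usage = Counter()

    def record(self, task):
        # Called every time the user performs a task.
        self.usage[task] += 1

    def suggest(self, n=3):
        # Offer the user's n most frequently performed tasks as shortcuts.
        return [task for task, _ in self.usage.most_common(n)]

suggester = TaskSuggester()
for task in ["export pdf", "export pdf", "spell check", "export pdf", "print"]:
    suggester.record(task)
print(suggester.suggest(2))  # ['export pdf', 'spell check']
```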
References:
User Interface Design Guidelines: 10 Rules of Thumb (guidelines by Jakob Nielsen and Rolf Molich): https://www.interaction-design.org/literature/article/user-interface-design-guidelines-10-rules-of-thumb
Microsoft AI: https://www.microsoft.com/en-us/ai?activetab=pivot1%3aprimaryr6
Grudin, J. (2009). AI and HCI: Two Fields Divided by a Common Focus. AI Magazine, 30(4).
Appendix 1:
Based on the feedback I received on the last individual assignment, I have tried to give this assignment a more logical and consistent structure, hopefully making it a bit easier to read.
Individual Assignment 2
1 AI-infused systems
behavior. Based upon what it has learned, it should also try to improve, meaning making fewer errors as time goes by and as it gets the feedback and information it needs to properly fulfill its tasks.
But errors are in fact themselves a characteristic of the system, as it is nearly impossible to make a system without any sort of error. These errors can manifest in the system in many different ways, either technical, like the system crashing, or social, like the system not functioning properly for other kinds of people than the ones who built it.
Another characteristic, seen from the user's perspective, is that AI-infused systems often function like a black box, meaning that you do not know what the system actually does with your input or why it delivers the output it does.
2.1 Main take-aways from Amershi et al. (2019) and Kocielnik et al.
(2019)
The main take-away from Amershi et al. (2019) concerning interaction design is that, with the rapidly increasing number of solutions implementing different types of AI, the interaction design community needs to keep up in order to shape these solutions in a way that fits their users. Amershi et al.'s contribution to this is the 18 guidelines they have developed to clarify some of the problems they see and to make the solutions more accountable to users.
For the second design guideline I have chosen G8: “Support efficient dismissal”. The system incorporates this by letting the user tell it to ignore false positives that its autocorrect function has picked up, which can happen to, for example, some names, probably because of social biases. Another instance where it incorporates this guideline is how easily you can correct it when it autocorrects something it shouldn't have: by pressing either return or the backspace key, you can dismiss the change.
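A minimal sketch of how such a dismissal mechanism might work is shown below. The correction table and the single-keypress undo are hypothetical illustrations of the guideline, not the actual autocorrect implementation:

```python
# Sketch of G8 "Support efficient dismissal" for autocorrect: one keypress
# undoes a correction, and the dismissed word is never "fixed" again.
# The correction table below is a made-up example of a biased false positive.

class Autocorrect:
    def __init__(self, corrections):
        self.corrections = corrections  # word -> suggested replacement
        self.ignored = set()            # words the user told us to leave alone
        self.last_change = None         # (original, replacement) of latest fix

    def correct(self, word):
        # Replace the word unless the user has previously dismissed the fix.
        if word in self.corrections and word not in self.ignored:
            self.last_change = (word, self.corrections[word])
            return self.corrections[word]
        return word

    def dismiss_last(self):
        # E.g. bound to the backspace key: restore the original word and
        # remember the dismissal so the same false positive never recurs.
        if self.last_change is None:
            return None
        original, _ = self.last_change
        self.ignored.add(original)
        self.last_change = None
        return original

ac = Autocorrect({"Øystein": "oyster"})  # a name wrongly "corrected"
print(ac.correct("Øystein"))   # oyster   (false positive)
print(ac.dismiss_last())       # Øystein  (restored with one action)
print(ac.correct("Øystein"))   # Øystein  (no longer autocorrected)
```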
Therefore it is important that the chatbot tries to adhere to some of the AI design guidelines given in Amershi et al. The most important is probably “Make clear what the system can do”, so that the user doesn't have to waste a lot of time on a solution that will never help them. If the chatbot is set up more like a filter before the user reaches an actual human being, it should also “Support efficient dismissal”, so that the user can easily dismiss the chatbot and go right to talking to a human. The chatbot should also, as much as possible, follow “Remember recent interactions”, so that the user does not need to repeat already stated information that the chatbot should be aware of, as this can cause a lot of frustration for the user.
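To make these three guidelines concrete, here is a minimal sketch of a customer-support chatbot that states its capabilities up front (“Make clear what the system can do”), hands off to a human on a single command (“Support efficient dismissal”), and keeps already-stated information (“Remember recent interactions”). All intents, commands, and replies are hypothetical:

```python
# Hypothetical customer-support bot illustrating three Amershi et al. (2019)
# guidelines; the intents and replies are invented for this sketch.

class SupportBot:
    def __init__(self):
        self.memory = {}  # facts the user has already given us

    def greet(self):
        # "Make clear what the system can do", stated before the dialog starts.
        return ("Hi! I can track orders and handle returns. "
                "Type 'human' at any time to talk to a person.")

    def handle(self, message):
        text = message.strip().lower()
        if text == "human":
            # "Support efficient dismissal": one word skips the bot entirely.
            return "Connecting you to a human agent..."
        if text.startswith("order "):
            # "Remember recent interactions": store the order number so the
            # user never has to repeat it later in the conversation.
            self.memory["order"] = text.split()[1]
            return f"Got it, order {self.memory['order']}. What about it?"
        if "status" in text and "order" in self.memory:
            return f"Order {self.memory['order']} is on its way."
        return "Sorry, I didn't catch that. Try 'order <number>' or 'human'."

bot = SupportBot()
print(bot.greet())
print(bot.handle("order 1234"))
print(bot.handle("What is the status?"))
```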
Another challenge when it comes to chatbots is shaping the language of the chatbot so that it is understandable and actually helps the user with the task they are trying to accomplish (given that it is a typical customer-support chatbot). One way of ensuring this could be to base it on successful interactions that actual human customer-support agents have had with people, but even that might just be propagating something that could have been explained in much simpler and more understandable terms.
References:
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., ... & Teevan, J. (2019). Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (paper no. 3). ACM. (https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf)
Kocielnik, R., Amershi, S., & Bennett, P. N. (2019). Will You Accept an Imperfect AI?: Exploring Designs for Adjusting End-user Expectations of AI Systems. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (paper no. 411). ACM. (https://www.microsoft.com/en-us/research/uploads/prod/2019/01/chi19_kocielnik_et_al.pdf)
Liao, Q. V., Gruen, D., & Miller, S. (2020, April). Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (paper no. 463). ACM. (https://dl.acm.org/doi/abs/10.1145/3313831.3376590)
Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020, April). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (paper no. 164). ACM. (https://dl.acm.org/doi/abs/10.1145/3313831.3376301)
Individual Assignment 3:
Industrial robot and Nano robot
The industrial robot is described in Phillips et al. (2016) as a tool that multiplies the physical capabilities of a human. An example of this sort of robot would be the typical robot arm you often see in factories, doing repetitive tasks faster and more reliably than humans, since it does not have to concern itself with stamina the way a human would. This robot would probably be placed somewhere in the middle of the right-hand side of the two-dimensional framework shown in Shneiderman (2020), because these robots perform their tasks separately from humans at different stages of the product's assembly line, meaning that the humans working around them probably don't have much control over them, other than being able to stop them in case of errors or emergencies.
The nano robot described in Phillips et al. (2016) is presented as a tool that multiplies cognitive capabilities. The robot in the article, the Black Hornet, does this by letting its operator program waypoints for it to follow, allowing the operator to watch the video feed transmitted from the robot instead of having to, for example, peek around a corner. If I had to place it in the two-dimensional framework shown in Shneiderman (2020), this sort of robot would probably end up somewhere in the upper-right quadrant. I base this on how it functions: humans have full control over the path of the robot, but much of the flying and sensor reading is automated, meaning that operators no longer need to physically peek around the corner to get general situational awareness.
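As a rough way of making the two placements comparable, here is a toy encoding of where I would put each robot along the framework's two axes. The 0-1 coordinates are my own reading, not values from Shneiderman (2020):

```python
# Toy encoding of my placements in Shneiderman's (2020) two-dimensional
# framework; the coordinates are my own rough estimates, not from the paper.

placements = {
    #                          (human control, computer automation), both 0-1
    "industrial robot arm":    (0.3, 0.8),  # works apart from humans; stop button only
    "Black Hornet nano robot": (0.8, 0.7),  # operator sets waypoints; flight automated
}

for robot, (control, automation) in placements.items():
    print(f"{robot}: human control={control}, automation={automation}")
```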
Industrial robot
Increasing the automation of the industrial robot would probably mean making it take on more of the tasks involved in producing whatever product it is making. It would also probably mean making it able, to a larger degree, to learn from what it is doing and optimize itself without needing human intervention. This would probably lead to faster and cheaper production, but it would probably also lead to it replacing even more human tasks at its factory. One effect of this could be that workers get a more fulfilling workday, as repetitive and boring tasks are done by the robots, but it could also lead to unemployment, as the workers' talents are no longer needed to produce the product. There can also be negative aspects to the AI itself deciding how to improve, as there could be biases in how it measures improved performance, like neglecting a negative effect of a new procedure, for example that the industrial robot wears out at a much faster pace than before.
Decreasing its autonomy would lead to it needing more human input to produce the product, resulting in slower and more expensive production, but this would probably be seen as a good thing by highly specialized workers at the factory, as their services would be needed more.
Nano robot
Increasing its level of autonomy even further would probably come at the expense of human control, as the robot would then decide the route and the places it monitors. This could be a positive thing, since the robot could perhaps sense things the operator couldn't, but it could also make things worse, as the user might either not be confident enough to actually use the system, or be too confident and trust it blindly. Both scenarios could lead to dangerous situations given the environment this robot is meant for, where the operator's trust in the system can have life-and-death consequences.
Decreasing the automation of the nano robot would probably make it harder to operate, as quality-of-life features like its ability to hover on its own would probably disappear. This would reduce its usefulness in the field, as somebody would need to dedicate themselves completely to controlling the robot during an operation, instead of the robot being able to do certain tasks by itself.
For the industrial robot I would hope that the current level of explainability is sufficient. By sufficient I mean that the person charged with monitoring the robot's performance is able to understand how it performs its task: they should be able to look over how it is programmed, seeing things like how it maneuvers and how much force it applies, so that they can verify it is set up correctly, and so that they can see it is not performing its tasks based on biases that could negatively impact both the safety of the manufacturing process and the quality of the finished product.
Given that this is military-grade equipment, where the user's trust in the equipment may be key to saving lives, I would think the system vendor has put a big focus on achieving an acceptable level of explainability, so that the military can audit the system and properly train operators in how to use it.
References:
Hagras, H. (2018). Toward Human-Understandable, Explainable AI. Computer, 51(9), 28-36. https://ieeexplore.ieee.org/document/8481251
Phillips, E., Schaefer, K., Billings, D., Jentsch, F., & Hancock, P. (2016). Human-Animal Teams as an Analog for Future Human-Robot Teams: Influencing Design and Fostering Trust. Journal of Human-Robot Interaction, 5(1), 100. doi:10.5898/JHRI.5.1.Phillips