AI and Machine Learning
– OPPORTUNITIES, CHALLENGES AND A PLAN FOR NORWAY
ISBN 978-82-8400-000-8 (digital edition)
Artificial intelligence has made a powerful leap forward in recent years. Machines can now learn to interpret text, speech and images. This means that advanced tasks that to date have been reserved for human workers can now be done more quickly and at a lower price by machines. This brings major opportunities for the creation of value and better welfare services, but the technology can also have an effect on the rights of citizens, and it may result in greater inequality.
This report from the Norwegian Board of Technology describes how machines
learn, what their areas of application are and what challenges are inherent to
this technology. The report argues that Norway needs a strategy for artificial
intelligence, and advances 14 proposals which address, among other things,
what areas of expertise we need, how personal data should be used and what
development we want for society.
• Erik Fosse, surgeon and director of the intervention centre at Oslo Univer-
sity Hospital
• Siri Hatlen, former director of Oslo University Hospital and head of the
Norwegian Board of Technology
• Steinar Madsen, medical director at the Norwegian Medicines Agency
• Hans Olav Melberg, health economist and associate professor at University
of Oslo
• Damoun Nassehi, general practitioner and member of the Norwegian
Board of Technology
• Michael Riegler, senior researcher at the Simula Metropolitan Center for
Digital Engineering and researcher at the University of Oslo
Tore Tennøe
Director, Norwegian Board of Technology
CONTENTS
SUMMARY
UNSUPERVISED
Semi-supervised learning
Transfer learning
Generative adversarial networks
Classification
Cluster analyses
Anomaly detection
Predictive analyses
REFERENCES
SUMMARY
Artificial intelligence (AI) has made a powerful leap forward in recent years.
Most of us use it on a daily basis when we conduct web searches, navigate
through traffic, translate texts, use speech commands on our smartphones, or
filter out unwanted email.
Computers can now learn correlations, rules and strategies from experiences in real-world data, without anyone telling them what these correlations are. They can continuously adapt to the data, and the more data they have access to, the more accurate they become (adaptivity). This means that computers can perform tasks on their own (autonomy). Complex tasks and decision-making can thus be taken over by machines, with faster execution times and lower costs.
Areas of application
Machine learning is used to make predictions. Put simply, predictions are a
matter of filling in missing information. Predictions take the information avail-
able, i.e. data, and use it to generate information that is not available. This may
be information about the past, present or future, such as, for example, detecting
whether a credit card transaction was fraudulent, determining whether a mole
is malignant or predicting what the weather will be like tomorrow.
There are multiple prediction techniques. The most common ones are:
1. Classification is used to determine what category a new observation belongs to, for example identifying what an image shows.
2. Clustering is used to explore new datasets without advance knowledge of the correlations. It finds new structures and patterns in (unlabelled) data and divides them into groups or clusters based on similar properties. This technique can be used to group film consumers who are similar to one another so that they can receive targeted film recommendations.
3. Anomaly detection techniques discover events that are not consistent with an expected pattern in a dataset. The anomaly may be an attempt at bank fraud, a data breach, an unfavourable disease development, or disturbances in the ecosystem.
Speech and audio recognition technologies translate speech to text and vice versa. This is now in daily use on smartphones in the form of virtual assistants, and can make it easier to manage data systems and simplify routine tasks.
Image analysis and video analysis recognise objects in images and video. These
technologies have made significant leaps forward over the last few years, and
can now automate very advanced and resource-intensive tasks such as driving
and imaging diagnostics.
Can we trust machines?
Artificial intelligence is already affecting many choices that are made by individuals and organisations. That makes it all the more important for us to be able to trust and understand the recommendations that algorithms give us. There are, meanwhile, several challenges inherent in the way machines learn:
• The black box problem: Unsupervised learning means that machines can identify new patterns and correlations in datasets, but they cannot necessarily explain the causal relations. The algorithms may be fairly opaque and difficult to understand; this is referred to as the black box problem. Lack of explanation makes it difficult both to appeal a decision and to accept responsibility for the decisions.
Artificial intelligence brings major opportunities for the creation of value and
better welfare services, but it can also have an effect on civil security and the
rights of citizens, and it may result in greater inequality.
This suggests that Norway should have its own AI strategy. A national strategy should address the competence challenge, the need for data and responsible development. The Norwegian Board of Technology has the following concrete suggestions for such a strategy:
2. Establish a key institution: Norway's research assets are too few and too scattered. In order to strengthen research efforts and become attractive in terms of recruitment and international cooperation, it may be a good idea for Norwegian authorities to establish a key institution for research in artificial intelligence and machine learning. To ensure adequate breadth and depth of research, the institution may encompass multiple environments in a virtual organisation.
3. Define ambitious and concrete goals for Norway: Norway does not have the right conditions to make sweeping investments in artificial intelligence, but the country can play a leading role when it comes to connecting domain knowledge with general knowledge on AI. Norway should target investments in areas where we have a combination of good training data and significant social need, such as healthcare, public services, sustainable energy and clean oceans.
4. Master's degrees reinforced with AI: Machine learning will become an important element in many industries and professions, such as manufacturing, oil and energy, media and entertainment, agriculture and aquaculture, medicine, education and public services. All professions and educational programmes should include an introduction to artificial intelligence and machine learning. A dedicated master's degree programme in artificial intelligence should also be established.
6. Open public data: Open public data can contribute to innovation and new services in many sectors. The public sector in Norway should have ambitions to publish more public data, and to do so in an open format that is easy to navigate and reuse in machine learning.
7. Data sharing that serves the community: If data from Norwegian hospitals, schools and smart cities is shared with third parties, the community should receive added value in the form of improved public services, new business development, jobs or tax revenue. It is therefore necessary for government authorities to establish legal frameworks that make it possible to exchange data securely and that ensure that the distribution of rights, values and responsibilities is fair and balanced.
8. Give citizens real control over their own data: If public data about us is
used to drive research and innovation, this will require that citizens get to
have a say in the matter. Government agencies must therefore establish a
clear digital social contract that provides citizens with a real possibility to
control and shape their digital profiles and to determine whether and how
their personal data should be shared.
extreme pressure on established values such as autonomy, democracy, justice, equality, solidarity and responsibility.
11. Requirement for open algorithms in the public sector: When machines take over tasks that were previously carried out by humans, it is especially important to show that the algorithms do not make biased recommendations. Algorithms used by the public sector should, as a general rule, be open to public access and audit so that other societal actors can verify that they are being used correctly and with ethical responsibility.
12. Auditing algorithms: Machine learning algorithms that for critical reasons cannot be open to the public should nonetheless be subject to evaluation before they can be put into broad use in society. One possibility is to require auditing or certification from an independent third party, who can evaluate whether the decisions behind the algorithm are fair, accurate, explainable and verifiable.
13. Ethics by design: Undesirable events such as biased or unfair decisions can
lead to a breakdown in trust that would be difficult and costly to correct
afterwards. The concept of Privacy by design should be expanded so that
the algorithm's propensity to result in discrimination or manipulation is
assessed from as early as the design stage.
A SPRING THAW FOR
ARTIFICIAL
INTELLIGENCE
The cover of Nature on 2 February 2017 featured algorithms that can learn to classify moles just as well as doctors can. The headline was the result of a collaboration between doctors from Stanford University and researchers in artificial intelligence (AI) such as Sebastian Thrun – the man behind the Google car.2 The group had trained a neural network on clinical images of moles before it was tested against 21 certified dermatologists on 2,000 images. In almost every test, the algorithm proved to be more sensitive and accurate than the specialists. It captured more actual cases of melanoma while producing fewer false positives at the same time.
This is a major advance in itself. In Norway, melanoma is the second most common type of cancer in the age group of 25-49 years, and more than 2,000 patients are diagnosed annually.3 But the authors behind the Nature article had even more to offer. The same type of machine learning can also be adapted and used in other medical specialisations, such as ENT, optometry, radiology and pathology. The method is not only fast; it also makes it possible to use mobiles and tablets for diagnosis.
This is only one of many examples of machine learning innovations over the last few years. IBM's Watson won Jeopardy, Apple lets us talk with Siri on our smartphones, Google's driverless car has driven millions of kilometres, and Facebook recognises faces just as well as people do. As a result, many are now saying that we are in a spring thaw for artificial intelligence after an AI winter in which development produced little by way of practical results.
3 The Norwegian Cancer Society 2018 and Norwegian Institute of Public Health 2018.
4 See also Tørresen 2014.
In 1959, Arthur Samuel defined machine learning as the "field of study that gives computers the ability to learn without being explicitly programmed."5 In the early 2000s, the field transitioned from being rule-driven to being driven by statistics and data, and machine learning became the predominant approach.
Computers can now learn correlations, rules and strategies from experiences in real-world data, without anyone telling them what these correlations are. They can continuously adapt to the data, and the more data they have access to, the more accurate they become (adaptivity). This means that computers can perform tasks on their own (autonomy). Complex tasks and decision-making can thus be assumed by machines, with faster execution times and lower costs.6
The goal of artificial intelligence is for machines to also be able to learn intuition and knowledge that is difficult to express in rules; something which the neural network approach has shown to be possible. Neural networks are inspired by the structure and function of biological neural networks in the brain. Neural networks can also learn things that were not previously known, or that are not possible for humans to learn.7
Figure 1: Relationship between artificial intelligence, machine learning and neural networks. 8
5 Al-Darwish 2018.
6 Adaptivity and autonomy are characteristic properties emphasised in the Finnish online course on basic artificial intelligence; see also: https://course.elementsofai.com/1/1.
7 Ahlqvist et al. 2018.
8 Inspired by Wahed 2018.
This report focuses specifically on the use of neural networks, which is the approach currently driving advances in artificial intelligence.
BETTER ALGORITHMS
Machine learning has seen rapid development over the past few years, driven by three significant changes that have taken place in parallel: (1) the development of better algorithms, especially in neural networks, (2) access to large amounts of data and (3) easy and reasonably inexpensive access to continuously increasing levels of computing power.
The models for learning in neural networks consist of several layers of so-called neurons. The neurons in one layer learn by using input values from previous layers and sending new learning on to the next layer, all the way until the final layer, which produces the final output value. This might mean, for example, determining the category of an image ("Yes, this is an image of a malignant melanoma").
Is it a dog?
Let's assume that we have trained a neural network to recognise dogs in images, as
illustrated in Figure 2. Important properties of a dog are that it has fur, two ears, two eyes,
and a snout. We wish to classify a new image. The first layer of input values in the neural
network will consist of a number of nodes equal to the number of pixels in the image. The
second layer consists of neurons that take in the pixels and look for different forms such as
lines, circles and edges. The third layer consists of neurons that evaluate what the lines,
circles and edges represent. Pixels that a neuron evaluates as two circles will be sent to two
other neurons that evaluate whether these are a pair of eyes or a pair of ears, respectively.
Properties that are heavily weighted are shown in green in the figure. The final layer of output
values will provide an evaluation of whether the image is of a dog or not.
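To make the layer-by-layer flow concrete, here is a minimal sketch in Python of a forward pass through such a network. It is not taken from the report; the 784-pixel input and the layer sizes are illustrative assumptions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes: a 28x28-pixel image in, one "dog or not" value out.
sizes = [784, 128, 32, 1]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def predict(pixels):
    """Forward pass: each layer feeds its output to the next layer."""
    a = pixels
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)  # hidden layers pick up lines, circles, edges, parts
    return sigmoid(a @ weights[-1] + biases[-1])  # final layer: P(image is a dog)

image = rng.random(784)        # stand-in for a real image
print(predict(image)[0])       # around 0.5 before any training (weights random)
```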
The more layers a neural network consists of, the more complicated the structures it can analyse. Neural networks with many layers between the input and output values are called deep learning networks. They have the ability to learn complex correlations and then generalise to recognise relations they have not seen before. The strength of deep learning networks is that they can learn what is important in order to understand an image, for example, without needing this to be explicitly explained. This makes deep learning a powerful tool in machine learning. The drawback is that this technology often demands extensive data and computing power, and the models can be complicated and difficult to explain in terms that people can readily understand.
Progress in neural networks and deep learning has made it possible over the last few years to train increasingly accurate machine learning algorithms, and they are widely used in image, video, text and sound recognition. The algorithms can now recognise objects in images better than humans,10 and it has been demonstrated that it can be more accurate to talk to machines than to type information in by hand.11 Google's machine learning-based translation system became 60% better through use of neural networks.12 Neural networks can also be used to make forecasts, such as predicting extreme weather.13
10 Karpathy 2014.
11 Ruan et al. 2017.
12 Turner 2016.
13 Lui et al. 2016.
LARGE VOLUMES OF DATA
Machine learning, and neural networks in particular, learn by being fed large
volumes of training data from the real world. Digital content has been produced
smoothly and steadily over the last few decades, but the rate has really boomed
in recent years. Every day we produce 2.5 trillion bytes of data, and 90 per cent
of the digital information in the world today has been produced in the last two
years.14 These enormous volumes of data are helping to make machine learning
markedly more accurate.
Signals from sensors on smart phones and industrial equipment, digital images
and videos, a continuous stream of updates in social media, and the dawning
internet of things (IoT) will produce far more digital raw material to work with
over the coming years.
14 IBM 2018.
Figure 3: Neural networks scale well with increasing access to data compared to traditional
analysis techniques.15
MORE COMPUTING POWER
Three parallel developments have made computing power cheap and plentiful:
• Moore's law. The capacity of the central processing unit (CPU) has on average doubled every 24 months over the past 50 years.
• New, powerful computer chips. Graphics processing units (GPUs) and processors specially designed for neural networks can be several times faster than general-purpose CPUs.16
• Cloud computing. Powerful machine learning infrastructures optimised to manage neural networks are offered as cloud services. These can be used, purchased or leased as needed without having to make costly investments of one's own.
As a result, it has been possible to start experimenting with, developing, and applying machine learning easily and quickly at a reasonable cost. However, for these advances to continue, there is a need for new and less data-intensive algorithms. The exponential development in computing power is starting to run up against physical limits, and there is stiff competition for processing capacity because of the expansion of Bitcoin.17
17 Tassev 2018.
HOW MACHINES LEARN
GUIDED BY DATA
The most successful type of machine learning in recent years has been supervised learning, which learns from experience in datasets with real-world examples.
Each example has properties, also called input values. The input values may be pixel values of an image, soundwaves in an audio stream or other values such as living space, lot size, or number of bedrooms in a home. The dataset is also labelled with an output value, such as the animal an image represents, the words in the audio stream or the sales price of the home.
Supervised learning can also be used to predict a future event. In one envisioned example, a seller wants to know which users will end up cancelling a subscription, so that he can launch a targeted campaign to retain them before they cancel. However, the seller does not know how to identify the users who are about to cancel. Assume that the company has records for 10,000 customers, half of which have cancelled and half of which are still customers. A supervised learning algorithm can train a predictive model that learns the properties of those who have cancelled and those who have remained loyal customers. Once the model is trained, it can predict which of the current customers are most likely to cancel, so that the customer relations manager can prioritise initiatives focused on them.
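As a minimal illustration of this churn example (not from the report; the features and records below are made up), one could train a simple supervised classifier in Python with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up records: [support tickets per year, months as customer]
X = np.array([[2, 24], [30, 3], [25, 2], [4, 36], [28, 1], [3, 30]])
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = cancelled, 0 = still a customer

model = LogisticRegression().fit(X, y)  # learn the properties of each group

# Score current customers so a retention campaign can target the riskiest.
current = np.array([[27, 2], [5, 40]])
print(model.predict_proba(current)[:, 1])  # estimated cancellation risk
```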
The goal is a model that makes correct predictions for each new observation, including observations that it has never seen before.
One common way of assessing the quality of a predictive model is to set aside a test set from the training data. The input values in the test set are fed into the model, and quality is assessed on the basis of how well the answers the model produces correspond with the correct answers. Common measures of the quality of a predictive model include sensitivity and specificity:19
• Sensitivity is the proportion of those who actually have an illness that are correctly captured as being sick (true positives). High sensitivity means that few sick individuals go undetected (few false negatives).
• Specificity is the proportion of those who are actually healthy that are correctly captured as healthy (true negatives). High specificity means that few healthy individuals are incorrectly classified as being sick (few false positives).
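These two measures are straightforward to compute from a test set. A minimal sketch in Python (the labels are invented for illustration):

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute both measures from true and predicted labels (1 = sick)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 0, 1]   # biopsy-verified labels (invented)
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]   # the model's predictions (invented)
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.75)
```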
In the example with images of moles, the neural network was trained with a labelled dataset of 129,450 clinical images that included 2,032 different diseases. The algorithm's performance was compared with that of 21 certified dermatologists on two types of diagnoses, the most common being actinic keratosis and the most dangerous being malignant melanoma. For each test, the dermatologists and the algorithm were presented with 135 and 130 images, respectively, that they had not seen before, and where the true condition had been verified through biopsy (meaning that they were correctly labelled). The predictive model achieved both better sensitivity and specificity than the majority of dermatologists.
19 Other measures of quality include accuracy, error rate, F1-score, MCC (Matthews Correlation Coefficient), positive predictive value and negative predictive value.
Algorithms can be overfitted or underfitted to the data they are trained on, which can cause a machine learning algorithm to perform poorly.
• An algorithm that is overfitted is too finely tuned to the training data, and will not manage to make accurate predictions when faced with new observations that it has not seen before. It quite simply learns too many details from the training set. In the home example, this can mean that one or more of the properties (living area, lot size and location) are not so important when it comes to predicting the price, or that the algorithm has not had enough training data to learn from.
• An algorithm that is underfitted or biased is not adjusted to the training data well enough, and will not be able to make accurate predictions when faced with new observations either. In the home example this can mean that the properties (living space, lot size and location) are not sufficient to predict a home's selling price in general and that more properties are needed.
In other words, it is not only the volume of data that determines how accurate a model is. The properties in the dataset one uses can often be critical factors. Developers of learning algorithms therefore often experiment with many properties before they arrive at a final model.
A dynamic model can continuously improve itself with new input values and can be used while the surroundings are in constant flux. One example is monitoring to identify attempts at data intrusion. Continuous learning can make the model more accurate, but the drawback is that changes take immediate effect and the developers consequently have less control.
Adverse development of virtual assistants
Microsoft's virtual assistant Tay continuously learned from conversations it had with internet users. However, users systematically fed it misleading input, and it developed into a Nazi sex robot. Microsoft scrapped Tay 24 hours after launch.20
UNSUPERVISED
People learn exceptionally well without supervision, and acquire most of their knowledge about the world through pattern detection and association. Unsupervised learning is a similar approach, where machines learn through recognising patterns and sorting data without advance knowledge of the categories. Unsupervised learning can identify patterns that humans cannot even detect, and has potentially greater accuracy and scalability than supervised learning.21
20 Wakefield 2016.
21 Fagella 2016.
22 Hof 2018.
Part of a stop sign's white border might, for example, be hidden by a tree branch. The system can nevertheless make the necessary adjustments to be able to classify the stop sign correctly. The company believes that unsupervised learning will make it possible for tomorrow's autonomous vehicles to better adjust to new situations on the road.23
A research team at Mount Sinai Hospital in New York has used unsupervised deep learning networks to extract properties from the patient records of 700,000 individuals in a system called DeepPatient. These properties were then used as input values for other machine learning algorithms to perform classification. Without expert instructions and based only on data, DeepPatient's learning algorithms detected new patterns and created a model that can place patients into the group that is best suited to them.24
By comparing similar patients, the hope is that it will be possible to predict the
future disease profile of the patient or give her the treatment that has proven
most beneficial for this type of patient. To evaluate the model, researchers used
76,000 test patients covering 78 diseases. Researchers believe that their
method can predict disease better than the most common statistical methods.
The method appears to be especially good at predicting diabetes, schizophrenia
and different types of cancer.25
Unsupervised learning is still an immature field. For the time being, most systems require some training or feedback from humans. If and when we learn to build robust unsupervised systems that learn without human involvement, this might open up many possibilities. They would be able to look at complex problems in new ways to help us detect hidden patterns in how diseases spread, how the price of securities develops in a market or how customers' purchasing behaviour changes, for example.27
23 Hall-Geisler 2017. Note that the company in question does not use neural networks, but a simple cluster analysis.
24 Miotto et al. 2016.
25 Miotto et al. 2016.
26 Zames 2016.
REINFORCEMENT LEARNING
In reinforcement learning, the machine learns through trial and error, and is
rewarded or punished depending on whether the behaviour brings it closer to
or farther from a goal.
AlphaGo made some unusual moves that originally were thought to be wrong,
but which have actually given human Go players new insight into the game. For
example, Lee Sedol has won all his games since he played against AlphaGo and
has said that AlphaGo taught him to play the game in a more creative way.29
The key to the breakthroughs in reinforcement learning has been the use of deep neural networks.30 Thanks to deep learning, we have an effective way to recognise patterns in data, such as positions on the Go board. Every time the program makes a mistake or does something right, it calculates a value that is saved in large tables that are updated as the program learns. For large and complicated tasks, this requires massive computing resources.
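The value-table idea can be sketched as tabular Q-learning. This is a minimal, generic sketch in Python, not AlphaGo's actual algorithm (which combines such value estimates with deep networks and tree search); the states, actions and reward scheme are left abstract.

```python
import random
from collections import defaultdict

q = defaultdict(float)  # the "large table": value of each (state, action) pair
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose(state, actions):
    """Trial and error: mostly exploit known values, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state, actions):
    """Reward pulls the stored value up; punishment pulls it down."""
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
```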
Later on, a further development of the algorithm, Alpha Zero, learned to play
chess on its own, with only the rules of chess as input values. After four hours
of training, it beat the world's highest-ranking chess program, Stockfish.33
HYBRID MODELS
SEMI-SUPERVISED LEARNING
In a hypothetical example, we have a few images of cats and dogs that are labelled, and a lot of images of cats and dogs that are not labelled. Through an unsupervised learning process we can group the images into clusters. The cat and dog images will presumably end up in two different groups. Since we also know what cats and dogs look like based on the labelled dataset, the system can label the group that most resembles dogs as dogs and then do the equivalent for cats.
Active learning
Active learning is a type of semi-supervised learning where the model itself selects what
unlabelled data will be most informative, and then asks a human to label (categorise) it.
This technique can achieve better performance than one would achieve by running supervised learning only on labelled data or by running unsupervised learning only on unlabelled data.
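A minimal sketch of the cats-and-dogs recipe above, assuming scikit-learn and synthetic feature vectors in place of real images: cluster all examples, then let the few labelled ones name each cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0, 1, (50, 8)),   # unlabelled "cats"
                      rng.normal(4, 1, (50, 8))])  # unlabelled "dogs"
labels = {0: "cat", 1: "cat", 50: "dog", 51: "dog"}  # the few labelled images

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# A majority vote of the labelled examples inside each cluster names it.
for c in (0, 1):
    votes = [name for i, name in labels.items() if clusters[i] == c]
    print(f"cluster {c} -> {max(set(votes), key=votes.count)}")
```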
TRANSFER LEARNING
Techniques for transfer learning mean that we no longer need to reinvent the
wheel for every problem we wish to solve, but instead build on existing
knowledge. These techniques make it possible to transfer knowledge from do-
mains where we have a lot of labelled data to new and similar areas where the
data foundation is more sparse, costly, or dangerous to obtain. Knowledge from
training to recognise images of cars can, for example, be used to train systems
to recognise lorries.
Transfer learning can lend itself to the medical field because there is often a lack of training data, something that a transfer model can compensate for. In the example with melanoma, the model was trained by using techniques for transfer learning on an existing neural network called ImageNet.35 It was trained on a volume of images of various general object categories, and can recognise where there are objects in the images, the shape of the objects, etc. This network can therefore be used to identify where in the image there might be moles and the shape of these potential moles. The specific mole algorithm can build further on this know-how and concentrate on learning to recognise whether there actually are moles and on being able to differentiate between malignant and benign moles.
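In code, this recipe looks roughly like the following Keras sketch. It is illustrative, not the study's actual setup: MobileNetV2 pretrained on ImageNet stands in for whichever pretrained network is used, and mole_images/mole_labels are hypothetical placeholders for the small domain dataset.

```python
import tensorflow as tf

# Reuse a network pretrained on general images (ImageNet).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the general "what objects look like" knowledge

# Train only a small new head for the specific task: malignant vs benign.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(mole_images, mole_labels)  # hypothetical small domain dataset
```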
Spam filtering
Many people do not label a sufficient number of messages as spam for individual email filters
to be adequately effective. At the same time, a general email filter that is the same for
everyone will not be accurate enough. A hybrid general/customised filter solution that makes
use of transfer learning can be effective if it can learn from all the users who consistently
label spam and at the same time learn from each individual who only labels a few emails as
spam.36
Simulation is a transfer learning technique that generates data in a simpler and less risky manner. For example, it is necessary to have data from collisions and accidents to train self-driving cars, and it is necessary to have data from people who fall down in order to train systems that automatically detect people falling. These types of data are difficult to obtain, making it necessary to run simulations as part of system training.
When the dataset is sparse but there is access to a learning model that has been
trained on a similar domain, transfer learning techniques have proven to deliver
better performance.39
35 See Image-net.org.
36 Multi-task learning 2018.
37 Etherington 2017.
38 Mannes 2016.
39 Gupta 2017.
GENERATIVE ADVERSARIAL NETWORKS
Generative adversarial networks (GANs) are a good method of learning from unlabelled data, which can be the key to making computers more intelligent in the years ahead.40
The GAN method is composed of two networks that compete with one another. The first network, the G-network, learns to generate synthetic data that is as similar as possible to actual images. The other network, the D-network, learns to detect which images are real and which ones are fake. To illustrate, we can look at the G- and D-networks as criminals trying to counterfeit money and the police seeking to detect the counterfeiters, respectively. The criminals need to learn to counterfeit money so that the police cannot detect them, whilst the police need to learn to recognise counterfeit money. Competition forces both parties to continuously improve.
Let's suppose that we need more images of cats to make a learning algorithm better at recognising cats. The G-network starts with an image that consists of completely random pixels. The D-network receives the image and evaluates whether it is a realistic image of a cat or not. In the next round, the G-network produces a new image by making a slight adjustment to the previous version based on hints it gets from the D-network. And so on. The feedback from the D-network makes the G-network better at generating realistic images of cats. Through this competition, the networks can both produce very realistic synthetic data and become better at detecting authentic data.
40 Knight 2017a.
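The counterfeiter-versus-police loop can be sketched in a few lines of PyTorch. This is a generic illustration, not the networks from any of the cited studies; the image size, network shapes and random "real" images are stand-ins.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784)  # stand-in for a batch of real cat images

for step in range(100):
    # The police (D) learn to tell real images from counterfeits.
    fake = G(torch.randn(32, 16)).detach()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The counterfeiters (G) learn to fool the police.
    fake = G(torch.randn(32, 16))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```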
Detecting intestinal disorders with high sensitivity and specificity
Angiodysplasia41 is a malformation of blood vessels in the walls of the intestinal tract, and it is one of the most common causes of bleeding in the intestine. Diagnosis is made by interpreting images of the intestine. One method is to have the patient swallow a camera pill that takes up to 60,000 images of the intestine. One study has shown that doctors examining such images detect only 69 per cent of cases (sensitivity). Researchers at the Simula Research Centre and the University of Oslo have developed GANs that can detect angiodysplasia in such images of the intestine with accuracy and specificity approaching 100 per cent and sensitivity of 98 per cent. This is far superior to other machine learning approaches.42
The GAN method can also learn what characterises the music of Beethoven and create new pieces of music that sound as if they were composed by Beethoven, or in the same way learn to create paintings resembling the work of Munch. GANs can also be more practically useful by filling in missing data in an incomplete image, automatically generating scenes in a video game, making images sharper, or generating simulated data to train self-driving cars.43
41 Angiodysplasia, 2009.
42 Pogorelov et al. 2018.
43 Goodfellow et al. 2014 and Goodfellow 2017.
APPLICATIONS –
FROM HEALTHCARE TO
CARS
Machine learning is used to make predictions. Put simply, predictions are a matter of filling in missing information: they take the information available, i.e. data, and use it to generate information that is not available.44 This may be predictions about the past, present or future, such as, for example, detecting whether a credit card transaction was fraudulent, determining whether a mole is malignant or predicting what the weather will be like tomorrow with a certain degree of accuracy.
There are multiple prediction techniques, the most common of which are de-
scribed below.
CLASSIFICATION
Classification is the most widely used machine learning technique and is used
to determine what category a new observation belongs to. This might involve,
for example, identifying what is in a picture. Techniques for supervised learn-
ing lend themselves well to classification, and neural networks have proven to
be highly effective.
Differentiation is usually made between sorting into two classes (binary) and multiple classes (multiclass). A spam filter is an example of binary classification that predicts whether an email is spam or not spam. Diagnostic tools that predict the most probable diagnosis or diagnoses on the basis of a new patient's symptoms build on multiclass classification.
CLUSTER ANALYSES
Cluster analyses find structures and patterns in unlabelled data and divide observations into groups or clusters based on similar properties. This technique can be used to group film consumers who are similar to one another so that they can receive targeted film recommendations. Or the technique can be used to identify patients with similar symptoms and how treatments have worked in different groups so that new patients can receive a more targeted treatment.
Cluster analyses can also be used to generate labelled datasets (which are often
lacking) by identifying clusters and then having someone label the clusters.
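A minimal sketch of the film-consumer case, assuming scikit-learn's k-means implementation and made-up ratings (rows are viewers, columns are genres):

```python
import numpy as np
from sklearn.cluster import KMeans

ratings = np.array([
    [5, 1, 0],   # loves action, dislikes romance
    [4, 0, 1],
    [1, 5, 4],   # loves romance and drama
    [0, 4, 5],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ratings)
print(model.labels_)  # e.g. [0 0 1 1]: two viewer groups to target separately
```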
ANOMALY DETECTION
Anomaly detection techniques discover events that are not consistent with an expected pattern in a dataset. Such anomalies may be attempts at bank fraud, data breaches or disturbances in the ecosystem. In a medical context, such techniques can be used to track the development of a patient's health condition and discover any potentially dangerous or undesirable development.
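As a minimal sketch of the bank fraud case (the transaction amounts are invented), one unsupervised option is scikit-learn's isolation forest:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

amounts = np.array([[12.0], [9.5], [11.2], [10.8], [950.0], [10.1]])
detector = IsolationForest(contamination=0.2, random_state=0).fit(amounts)
print(detector.predict(amounts))  # -1 flags outliers; 950.0 stands out
```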
PREDICTIVE ANALYSES
45 Reilly 2017.
46 Lui et al. 2016 and Jones 2017.
SPEECH AND SOUND RECOGNITION
Speech and sound recognition technologies are used to translate speech into
text and vice versa. Speech technology is now in daily use on mobile phones in the form of virtual assistants47 such as Siri on the Apple iPhone, Google Assistant on Android phones and Amazon's assistant Alexa.
Deep neural networks are well suited for learning how to recognise certain
words in an audio stream and this is the most important reason that speech
recognition has seen major improvements over a short time. The error rate fell
from 8.5% to 4.9% from summer of 2016 to 2017.48
Speech-to-text technologies can make dialogue with data systems more natural
so that they are easier to use. The speech interface can help older and function-
ally impaired people who have difficulty using a keyboard or touch screen so
that they can still use digital services and welfare technology. Doctors can man-
age data systems in the operating theatre by speech, keeping their hands free.
47 Virtual assistant (also known as a chatbot): an application that provides guidance and answers
questions through the use of natural language, either in writing or verbally.
48 Brynjolfsson and McAfee 2017a.
49 Carey 2016.
DETECTING RISK SIGNALS IN HEALTH EXAMS
Many healthcare measurements take the form of digital audio signals, such as those from digital ECG systems or stethoscopes. Sound recognition technologies can interpret such audio streams and discover abnormal signals in them.
Kardia is a small device that measures a two-lead ECG through the fingertips.
Users can measure ECG regularly and record the measurements automatically
in a mobile application. The app uses machine learning to build a cardiac profile
for each patient from the measurements. If a later measurement does not fit the
profile, Kardia detects this and alerts the patient and/or health personnel. 50
CliniCloud is a digital stethoscope used to measure heart sounds. Users can rec-
ord their heart sounds on their own and receive help from a computer or doctor
interpreting it. The company behind the device plans to use machine learning
to interpret the heart sound recordings, and they hope to be able to interpret
them as well as doctors can. The goal is to be able to discover abnormalities in
the audio stream even if doctors have not detected them.51
TEXT RECOGNITION
TRANSLATING LANGUAGE
Google and Facebook have transitioned to using deep learning techniques for
translation. When Google's method was published in 2016, they reported that
50 AliveCOR 2017.
51 Conversation with CEO Andrew Lin of CliniCloud, 24 October 2017, and Niesche 2015.
52 Ferrucci 2018. In January 2018, Alibaba's program for artificial intelligence was the first to beat humans in a Stanford University reading and comprehension test. The program from Alibaba scored 82.44 per cent compared to 82.304 per cent scored by humans. Fenner 2018.
it reduced errors by 60 per cent.53 Google now translates almost every language
to and from English in this way. Facebook uses this for more than 4.5 billion
translations per day.54
Text analysis can help make customer service representatives more effective or
automate parts of question-and-answer services, as they have done for the Nor-
wegian Tax Administration and Udacity (see fact box).
53 Castelvecchi 2016.
54 Ong 2017.
Automated assistance for tax questions
The Norwegian Tax Administration is in the process of developing a virtual assistant that responds to tax questions from the public. They started by looking into whether it would be useful and profitable to have a tool to help customer representatives respond more quickly and accurately to inquiries. However, they discovered that they were already able to respond quickly to uncomplicated and frequently asked questions, while the uncommon and more complex inquiries were also difficult for machines to understand and answer. They concluded that it would be better to create a virtual assistant that could help the public directly with the simplest questions. They use machine learning, i.e. text and speech processing, on a training set of various types of questions and intentions to understand what the user is asking for. The answers follow fixed rules, so machine learning is not needed to generate them.55
DIGITAL TRIAGE
In health services, text analysis can help streamline triage at locations such as Accident and Emergency or the GP's office by providing faster and more targeted responses to those calling in. The triage service can gradually be automated, which is being tested in the United Kingdom.58
55 Presentation of the VAKI project, Norwegian Tax Administration, 8 December 2017 and SkLNytt
2017.
56 Ng 2017.
57 Brynjolfsson and McAfee 2017b.
58 Murgia 2017.
Text analysis can also extract meaning and key information from patient journals and thereby lighten the workload of medical personnel.
Prior to every operation, the hospital in Agder uses a system that warns of any allergies, and it does this in less time than an ordinary physician spends going through the patient's papers.59 This can save time, which is especially important in situations where time is critical.
IMAGE AND VIDEO ANALYSIS
Image analysis and video analysis recognise objects in images and video. Facebook and other social media now recognise faces in images and ask users whether they want to help label their friends with names. In the latest version of Apple's iPhone, facial recognition is used to unlock the device and as identification for services on the device.
These technologies have seen major advances in recent years. The image recognition error rate has fallen from more than 30% in 2010 to around 2.25% in 2017 for the best systems. In comparison, the human error rate is about 5%.60 Video recognition systems, which are used in self-driving cars and elsewhere, previously made errors as often as every 30 frames of video, while the best systems currently make errors less than once per 30 million video frames.61
The advances in the field mean that automation of both trivial and more ad-
vanced and resource-intensive tasks can be worth the investment.
OBJECT RECOGNITION
These techniques can detect and recognise objects as faces and texts in images.
This can be used, for example, to quickly identify individuals in digital photo
albums or automatically create captions.
59 Christiansen 2017.
60 Echersley and Nasser 2018.
61 Brynjolfsson and McAfee 2017a.
By combining text translation techniques with techniques to recognise letters
in images, it is possible, for example, to translate advertising images from one
language to another.62
DIAGNOSTICS
AUTONOMOUS VEHICLES
Analyses of video together with data from sensors in the car and in the environ-
ment allow vehicles to manoeuvre in traffic. These technologies are now so good
that they can perform the analyses in close to real time, allowing the cars to
manoeuvre and react immediately if something unexpected happens.64
Image and video analyses can also improve or generate images and video. Images can be made sharper and scenes in video games can be generated automatically. Black and white films can also be automatically colourised.65
RECOMMENDER SYSTEMS
Recommender systems are used to conduct risk assessments and provide per-
sonalised services.
62 Brownlee 2016.
63 Haugnes 2017.
64 Hawkins 2018.
65 Brownlee 2016.
The "Netflix Prize", which was held from 2006 to 2009, was an important im-
petus for the development of new and better algorithms for recommender sys-
tems. The company published a dataset of more than 100 million film rankings
and offered a prize of $1,000,000 to whoever could make more accurate rec-
ommendations than the company’s own system. The winner in 2007 was 8 per
cent more accurate and used a collection of 107 different algorithmic ap-
proaches.66
Content-based filtering uses a series of features of an item to recommend other items with
similar properties.
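A minimal sketch of content-based filtering as defined in the box above, with made-up genre features: recommend the item whose feature vector is most similar to what the user liked.

```python
import numpy as np

# Feature columns: action, romance, sci-fi (invented for illustration).
items = {"Film A": [1.0, 0.0, 1.0], "Film B": [0.0, 1.0, 0.1],
         "Film C": [0.9, 0.1, 0.8]}
liked = np.array([1.0, 0.0, 1.0])  # the user liked an action/sci-fi film

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = {name: cosine(liked, f) for name, f in items.items()}
print(max(scores, key=scores.get))  # "Film A": the closest match
```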
PERSONALISED OFFERS
Most online services selling goods and services now use recommender systems
in some form or another to recommend products that you are likely to like and
thereby also purchase. For example, one often gets recommendations that say
"people like you also bought this".
By analysing what customers have done and then calculating the probability of what they will do next, online shops can tailor offers to each individual visitor.67
66 Bell 2018.
67 Mystore 2017.
Machine learning has, for example, been applied to accident data from European, national, and local roads in Norway. The algorithms were able to determine properties of the road and its surroundings that increase the risk of accidents; these findings can then be used to prevent accidents.68
PREDICTING DISEASE
The sooner an illness is detected, the greater the probability of recovery will be.
In Horsens, Denmark, initial studies show that algorithms can predict with 90
per cent probability who will be admitted with, for example, a blood clot over
the course of the next 100 days. 69
CUSTOMISED EDUCATION
Adaptive learning systems can help teachers adapt education to each individual
student's level of skill and maturity. One research project had the aim of keeping
all students in the flow zone, where the balance between what is too difficult
and what is too easy is adjusted to the level of the individual. The project dis-
covered that the dropout rate could be reduced by more than half and entire
classes could boost their performance by nearly one entire grade on average.70
PERSONALISED TREATMENT
The fitness app UA Record uses data on diet and physical and psychological
behaviour that they compile with results from people with similar health and
fitness profiles in order to develop personal training programs.72
LIST OF AREAS OF APPLICATION
AREA OF APPLICATION | EXAMPLE | AI ELEMENT
GENERAL TOOLS
Speech recognition | Apple's Siri, Google Assistant, Amazon Alexa | Classification; speech and sound recognition
Translation | Google Translate, Facebook Translator | Classification; text analysis
Automated customer service | Kongsberg municipality, tax inquiries, bank inquiries | Classification; text recognition
Recruiting | Better balance between women and men, LinkedIn | Classification
COMMERCE AND BANKING
Automated loan application | Automated loan application | Recommender systems
Personalised online shopping | Amazon | Recommender systems

AREA OF APPLICATION | EXAMPLES | AI ELEMENT
FIRST LINE HELP | Symptom checker, automated triage | Classification; text analysis
IMAGE INTERPRETATION | Melanoma, lung cancer nodules, breast cancer, eye diseases, pneumonia, prostate, colon and lung cancer, heart disease | Classification; cluster analyses; image analysis
IDENTIFYING ACUTE EVENTS | Acute kidney injury, hospital infections |
ASSISTED DIAGNOSTICS | Diabetic retinopathy, rare cancers, abnormal heart sound | Text analysis; anomaly detection
BETTER TARGETED TREATMENT | Predicting disease progression of lung cancer; optimal combination of medicines; predicting disease development of tumours; identifying new subgroups of diabetes; optimal medication for lung cancer | Text and image analysis; recommender systems; image analysis; anomaly detection; reinforcement learning
SELF-TREATMENT OF CHRONIC DISEASES | Diabetes, asthma, cognitive behavioural therapy, mental health | Recommender systems; classification
COMPLIANCE WITH MEDICATION | Motivate and alert, asthma medication | Predictive analyses
DISCOVER ADVERSE HEALTH DEVELOPMENTS | Falls, self-measurements, abnormal heart sound, heart failure | Image analysis; sound recognition; predictive analyses
OPTIMISING RESOURCE USE | Course of care, standardised treatment package for prostate cancer | Unsupervised and semi-supervised learning
RISK OF DISEASE | Predict disease progression, risk of blood clots | Predictive analyses
CAN WE TRUST
MACHINES?
The possibility of creating machines that think and take decisions raises many ethical questions. In the long term, it may be necessary to consider whether machines can be given a moral status, and machines may even come to achieve superintelligence by improving themselves in a positive feedback loop; a so-called "intelligence explosion".73 If this were to happen, it could pose an existential threat to humans. But such perspectives assume algorithms and physical preconditions that do not exist today.
Here we shall address important challenges that are already inherent in today's technology, namely domain-specific machine learning based on neural networks. The various ways in which machines learn present several challenges:
• Supervised learning uses historical data that may reflect biases in society and lead to discriminatory decisions.
• Unsupervised learning identifies new patterns and correlations, but may
provide little transparency, be difficult to understand and explain, and
make responsibility unclear.
BIASED ALGORITHMS
American citizen Kevin Johnson had good financial standing and a high credit
score, but was suddenly informed that his credit limit was reduced by nearly 65
per cent. The reason was not a default or late payment on Johnson's part, but
rather the fact that his shopping pattern resembled the pattern of customers
who have difficulty paying.74
Supervised learning means that machines can classify or predict outcomes fairly accurately, but the predictions can only be as reliable and neutral as the data they are based on. As long as there are inequalities in society, such as exclusion and other traces of discrimination, these will also be reflected in the data. The algorithms can therefore contribute to discriminatory decisions.
Systems that evaluate job applications and select the best candidates are one example of this. When the algorithms are trained on data from previous hires, they may be influenced by biased choices and practices from interview sessions. They can unintentionally continue to learn from prejudices such as racial, gender or ethnic bias. Such profiling based on biased data can contribute to self-fulfilling prophecies and the stigmatisation of groups even if this was not intended on the part of the developer.
This is not an unsolvable problem. With increased awareness on the part of developers, algorithms can be programmed to counteract bias or meet a non-discrimination quota. The Norwegian ICT company Evry, for example, has achieved its goal of having more female employees after the company started using an AI system as part of the recruitment process. The female proportion of the nearly 600 employees in 2017 was 33 per cent, and 40 per cent among new graduates, compared to only 20 per cent a few years earlier. The company believes this is due to the fact that the system bases selection on more objective criteria.75
In 2013, Eric Loomis was sentenced to six years in prison for trying to escape
from police in a car that had previously been used in a shooting in Wisconsin.
The judge based the harsh sentence not only on Loomis' criminal record, but
also on the COMPAS algorithm, which calculated Loomis' risk of repeat offend-
ing to be high.76
Loomis appealed the sentence to the Supreme Court, arguing that the judge
used an algorithm that he could neither examine nor challenge. The factors that
go into the assessments and how much weight they are given are considered a
business secret according to the company behind COMPAS. Loomis lost the ap-
peal. The judges believed that he would have received the same sentence re-
gardless based on the usual factors, such as the crime and his history of offence.
The court did, however, indicate that they thought it was problematic to use a
secret algorithm to send someone to prison.77
This type of problem will become a major concern in the future. Machine learning algorithms provide advice and increasingly take decisions in areas of major significance to people's quality of life and development, such as loan and job applications, medical diagnoses and law enforcement. So it will become a problem if the responsible parties are no longer able or willing to explain how and why a decision was taken. The algorithms become "black boxes" that conceal the assessments, uncertainties and choices on which their decisions are based.
The latter type has become particularly relevant because of development in ma-
chine learning. Traditional, rule-based machine learning methods were devel-
oped by people and so they are thereby also easier for people to interpret. This
is different from deep neural networks, which can have hundreds of millions of
connections that each make a small contribution to the final decision.
Unsupervised learning means that machines can identify new patterns and cor-
relations in the data, but they cannot necessarily explain their causal relation.
The algorithms can be somewhat opaque and difficult to understand, and the
lack of explanation makes it difficult to both appeal a decision and assume re-
sponsibility for the decisions.
• The University of Oslo and the Simula Centre are developing a tool to help doctors report and explain how algorithms that analyse video of the intestine reached their recommendations. The system selects images that have been important in the decision and provides graphics showing what in the image has been a determining factor for the recommendation.79
78 Knight 2017b.
79 GitHub 2018.
• XAI (Explainable AI) is a research programme under the Defense Advanced Research Projects Agency (DARPA) in the United States. The agency is seeking assistance with automated alerts, such as when planes or satellites discover something suspicious, together with an explanation of why something is flagged by an algorithm. This way, the operators can ignore false alarms.80
• LIME (Local Interpretable Model-Agnostic Explanations) shows which elements in a model have been relevant for a prediction, such as which symptoms mattered most for a model predicting whether a person has influenza.81
• A research team at the University of California, Berkeley, has developed a program which is trained to discover various bird species in photographs, and which provides an explanation for its recommendations. The system is assisted by another neural network that has been trained to connect properties in an image with sentences describing what people see in the image. The answer from the algorithm might read: "This is a western grebe because the bird has a long, white neck, a yellow, pointed beak and red eyes."82
RIGHT TO AN EXPLANATION
In work with the new European General Data Protection Regulation (GDPR),
the right to an explanation of decisions based on algorithms has therefore be-
come an important topic. Explanations behind a decision may be of two types:
• How the system works, i.e. the logic, meaning, expected outcomes and the
general functionality of the system. Information on the logic may indicate
whether decision trees are used or how various types of information are
weighted and connected. For example, high speed can automatically lead
to higher insurance premiums.
80 Gunning 2018.
81 Ribeiro, Singh and Guestrin 2016.
82 The Economist 2018a.
• Explanation of a specific decision, i.e. the rationale, justifications and individual circumstances leading to the decision. An explanation of an individual's increased insurance premiums may be, for example, that they have driven 10 km/h (six mph) over the speed limit on average.
Before automated processing begins, the individual must have enough information to be able to give consent or make objections. At that point, only information about system functionality is available. Once a decision has been made, and one wishes to appeal the decision, for example, information on the specific decision is also available.
In the recitals of the General Data Protection Regulation (Recital 71) it states
that guarantees for persons subject to automated decisions shall include "...spe-
cific information to the data subject and the right to obtain human intervention,
to express their point of view, to obtain an explanation of the decision reached
after such assessment and to challenge the decision."84
The European General Data Protection Regulation (GDPR) provides data subjects with the right not to be subject to a decision that is based exclusively on automated processing when this decision has a significant impact on the individual in question.85 However, while the right to an explanation of automated decisions is addressed in the recitals, it is not mentioned in the regulation itself. It is thereby not legally binding.86
WHO CAN BE HELD ACCOUNTABLE?
ETHICAL ALGORITHMS
Techniques for reinforcement learning mean that machines can develop optimal strategies for reaching their objectives within the rules that humans define for them. AlphaGo Zero has shown that algorithms can achieve better strategies than the best GO players. The machines will, however, overlook or ignore considerations that are not explicitly expressed in the rules. This can make it difficult to anticipate all important considerations when the rules are written, especially for complex systems.
One example relates to military use. The World Economic Forum has raised concerns regarding the use of algorithms such as AlphaGo in warfare. The algorithm seeks to maximise the likelihood of winning rather than optimising the margin of victory. If this game logic is used in autonomous weapons, it could result in violations of the principle of proportionality, because the algorithm would not see any difference between killing one or one thousand enemies. This can lead to more offensive warfare.88
This presents a particular challenge when it comes to ethical choices. One dilemma can be illustrated with self-driving cars. They can potentially be very good at avoiding accidents. But what if the machine's optimal choice for the car and its passengers at the same time increases the probability that someone else will be injured? Which way should the car swerve if there is a large animal in the road that could cause major damage to the car and injury to the passengers, but there is a small child on the sidewalk?
Any ethical priorities must be expressed in the rules if the system is to take them into account (see the sketch below). It can be unpleasant and difficult to express ethical choices in the clear and unambiguous rules that a machine understands.
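The point can be demonstrated with a toy example. Below is a minimal sketch of value iteration on an invented 2x5 gridworld: unless trampling a "flower bed" carries an explicit penalty in the reward function, the optimal policy simply walks straight through it. The grid, rewards and penalty value are all illustrative assumptions.

```python
import numpy as np

# The agent starts at (0, 0), gets +10 for reaching the goal at (0, 4)
# and pays -1 per step. Cell (0, 2) is a "flower bed" we care about
# ethically. The agent optimises exactly the rewards it is given, so
# the flowers only matter if trampling them carries an explicit penalty.

ROWS, COLS, GOAL, FLOWERS = 2, 5, (0, 4), (0, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
GAMMA = 0.95

def step(state, action):
    """Deterministic move, clamped to the grid."""
    r = min(max(state[0] + action[0], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    return (r, c)

def reward(next_state, flower_penalty):
    rew = -1.0                      # cost per step
    if next_state == GOAL:
        rew += 10.0                 # reaching the goal
    if next_state == FLOWERS:
        rew -= flower_penalty       # only counts if we wrote it down
    return rew

def greedy_path(flower_penalty):
    V = np.zeros((ROWS, COLS))
    for _ in range(200):            # value iteration to convergence
        for r in range(ROWS):
            for c in range(COLS):
                if (r, c) == GOAL:
                    continue
                V[r, c] = max(
                    reward(step((r, c), a), flower_penalty)
                    + GAMMA * V[step((r, c), a)]
                    for a in ACTIONS)
    # Roll out the greedy policy from the start state.
    path, state = [(0, 0)], (0, 0)
    while state != GOAL and len(path) < 20:
        state = step(state, max(
            ACTIONS, key=lambda a: reward(step(state, a), flower_penalty)
            + GAMMA * V[step(state, a)]))
        path.append(state)
    return path

print(greedy_path(flower_penalty=0))  # straight through the flower bed
print(greedy_path(flower_penalty=5))  # detours around it via row 1
```

Nothing in the algorithm changes between the two runs; only the rules humans wrote down do.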
MALICIOUS USE
Technology can be used in many ways and for different purposes. Echo sound-
ing, for example, was originally developed to detect and neutralise submarines,
but later became a key tool in fisheries. Conversely, research on viruses can also
be used to develop dangerous weapons for bioterror or war. This is often referred to as the dual use of technology.
Dual use challenges are also a central concern with respect to artificial intelli-
gence. Autonomous drones delivering consumer goods can, for example, also
deliver explosives. Some general traits of machine learning make these chal-
lenges pressing:
• More effective and scalable. AI systems can perform tasks more effectively
while they can be quickly scaled at a reasonable cost at the same time. This
can make it harder to defend against attacks such as phishing.90
• Better. AI systems can perform tasks far better than humans. They can
classify medical images better than experts and they are more skilled than
the top-ranked players in chess and GO.
90 Phishing is a technique where attackers trick a victim into doing something (such as providing confidential information or transferring money) by sending the person an email while pretending to be an organisation or person that the individual trusts. Until now, phishing attacks have largely been based on identically worded emails sent out to large volumes of recipients. AI can make phishing much more targeted and effective by first screening each victim through their activity on social media and then personally tailoring the content of the emails and the alleged identity of the sender. It thereby becomes much more likely that the victim will be deceived. In the future such attacks may become both more effective and more frequent on a wider scale.
91 Brundage et al. 2018.
• Digital security: for example, training machines to make cyber attacks
more targeted.
Optical illusion
Machine learning models can be tricked into making mistakes with a type of optical illusion. By adding noise to images that is not visible to the naked eye, a model can be tricked into making classification errors. An image that clearly appears to be a panda, for example, could be classified as a gibbon after invisible noise is added to the image.92 Such attacks may have dramatic consequences. Algorithms can be tricked into interpreting images of a speed limit sign showing 50 mph as 150 mph. If one were to print out a false image and glue it to an ordinary traffic sign, autonomous vehicles could drive much faster than the actual speed limit, with all the consequences that this brings. (This can be prevented by designing the system to handle abnormal sensor values, such as by cross-checking them against a digital map.) Researchers have also managed to trick face recognition algorithms into classifying a person wearing specially designed glasses as another person.93 People could thereby change identity by wearing such glasses and, for example, get through passport checks unauthorised.
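The invisible noise described in the box is typically computed from the model's own gradients. Below is a minimal sketch of the classic fast gradient sign method (FGSM), assuming PyTorch; the toy model and epsilon value are illustrative only.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.007):
    """Nudge each pixel by at most epsilon in the direction that
    increases the model's loss -- invisible to the eye, but often
    enough to flip the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # stay in valid pixel range

# Demo with an untrained toy classifier; any differentiable image
# model (e.g. a torchvision ResNet) could be substituted.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)           # a "clean" image
y = torch.tensor([3])                  # its correct class
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())  # perturbation bounded by epsilon
```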
There can be tension between concerns over transparency and security. Transparency with respect to algorithms will be important for reducing the risk of vulnerability and abuse, but it can at the same time expose the algorithms to malicious use.
92 OpenAI.com 2017.
93 Margolin 2016.
94 Brundage et al. 2018.
14 PROPOSALS FOR
NORWAY
MAJOR OPPORTUNITIES
In this report we have given examples of how machine learning is already af-
fecting many different sectors and having an impact on areas such as:
• Finance: loan applications, interpreting contracts
• Energy: optimisation of data centres and energy system, prevention
of errors in the petroleum sector
• Public services: individually tailored services, automated case han-
dling, recruitment, translation
95 Purdy and Daugherty 2016. The analysis covers 12 countries, including Sweden, Finland and the United Kingdom, and covers the period from 2016 to 2035.
96 O’Neil 2016.
A third possible consequence is disruption of the labour market, with many work tasks taken over by intelligent machines within a relatively short period. A study by SSB (Statistics Norway) suggests that one out of three Norwegian jobs is at high risk of being replaced by machines over the course of the next two decades.97 Estimates from the OECD suggest that fewer jobs will disappear completely, but that, for about one third of employees, large parts of their jobs may be taken over by computers.98
Artificial intelligence is a technology that has made a powerful leap forward over
the past few years. New types of machine learning benefit from ever-increasing
computing power and the massive amounts of data produced in society. A recent study shows that 85 per cent of the US population already uses services based on artificial intelligence, such as navigation, streaming services or transport.99 This figure is presumably even higher for Norway.
learning, with investments such as Google Brain (its deep learning research team), Google Cloud and DeepMind.101 Amazon has invested 306 million dollars in new AI positions, making it the leading company in terms of recent investments in AI.102
Many EU countries have strategic work of their own under way. The first reports with proposals to their governments have been published in both Sweden
and Finland.106 In March, French president Emmanuel Macron presented his
plan for artificial intelligence, with an investment in research of 1.5 billion eu-
ros.107 In Great Britain, the government and a number of private enterprises
have entered into an AI Sector Deal for the development of artificial intelli-
gence.108
NTNU and several leading companies in Norway have recently joined to estab-
lish the Norwegian Open AI Lab, and the Research Council of Norway lists ar-
tificial intelligence as one of several priority areas in the IKTPLUSS program.111
Meanwhile, the government's Long-term Plan for Research and Higher Edu-
cation 2015-2024 mentions neither artificial intelligence nor machine learning.
Norway does not have a national strategy for artificial intelligence, nor is one in the works.
106 Vinnova 2018 and Finland's Ministry of Economic Affairs and Employment 2017.
107 Reuters 2018.
108 Department for Business, Energy and Industrial Strategy and Department for Digital, Culture, Media and Sport (UK) 2018.
109 Scimago 2018 shows that China (102,000) and the USA (84,000) are in the lead, with two to three times as many publications on artificial intelligence as the runner-up, Japan (34,000), over the past 20 years. Norway is in 41st place with 1,700 publications. Kaggle (2016) shows where the 100 top-ranked developers on Kaggle are from.
110 Stirling et al. 2018.
111 Telenor 2018 and https://www.forskningsradet.no/no/Utlysning/IKTPLUSS/1254002623262/.
In the following, we will present concrete input for a Norwegian strategy for artificial intelligence. The most important elements are the right expertise to develop, evaluate and implement machine learning; access to data that balances personal data protection against the ability to drive innovation; and measures and principles for development that is both responsible and desirable.
Norway is currently trailing in this area. The Research Council of Norway has
established, for example, that it is particularly challenging to meet demands for
know-how in artificial intelligence and machine learning in Norway.116
The government's Long-term Plan for Research and Higher Education 2015–
2024 shall be revised over the course of 2018. Space should be created here for
dedicated investment in artificial intelligence and machine learning.
Norway's research resources are too few and too scattered. In order to
strengthen research efforts and become attractive in terms of recruitment and
international cooperation, it may be a good idea for Norwegian authorities to
establish a key institution for research in artificial intelligence and machine
learning.
In Ontario, Canada, provincial and federal authorities have established the Vector Institute together with the University of Toronto and private companies, among others, as one of three national hubs for the development of artificial intelligence.
The investment is also an initiative to prevent the province from losing exper-
tise in artificial intelligence and machine learning to the major US-based com-
panies.118
Norway does not have the resources to invest as broadly as China or France, but
it can be a world leader in terms of connecting domain knowledge with general
knowledge on artificial intelligence.
118 The investment stems from the Canadian AI strategy; see also University of Toronto 2017 and CIFAR 2017.
face today. At the same time, they shall include realistic research and innovation activities, with requirements for measurable and time-bound results.119
One of British Prime Minister Theresa May's missions has the aim of using data, artificial intelligence and innovation to transform the prevention, early diagnosis and management of diseases such as cancer, diabetes, heart disease and dementia before 2030. One of the ambitions is that within 15 years it should be possible to diagnose cancer of the lungs, colon, prostate and ovaries at a much earlier stage in 50,000 patients per year, thereby enabling 22,000 more citizens of the UK every year to survive for at least five years.120
Artificial intelligence can help solve many important social challenges. It would
make good sense for Norway to formulate objectives within the following areas
where we have a combination of good training data and significant social needs:
• Health: Norway has a relatively unified health services network with good health data and digitally active users. Demand for health services is expected to grow in step with the ageing of the population.
• Public services: Norway has world-class public data as a result of its well-
organised welfare state and its digitally active citizens. Forecasts state that
public expenditures will increase more quickly than public revenues start-
ing in 2030.121 It will therefore become necessary to reconfigure how the
public sector delivers its services.
• Sustainable energy: There are already large volumes of sensor data from oil and gas installations. Equinor has recently established two digitised operating centres in Bergen, and anticipates that investments of between one and two billion kroner will yield increased value creation of around 15–20 billion kroner.122 Agder Energi uses machine learning to optimise hydropower production.123 The global climate and environmental challenges demand a realignment and a transition to products and services with significantly less negative impact on the climate and environment than we see today. Society must undergo a green shift.
• Clean oceans: Norwegian institutions and companies have extensive data
from satellites, buoys and drones that can provide important knowledge.
Norway has legal usage rights over vast areas of ocean and presides over
enormous resources both at sea and in the offshore industry. At the same
time, the ocean is under significant pressure as a result of pollution, warming and acidification, among other things. This is where Norway can take on a special responsibility.
All professions and courses of study, both at the university and college levels,
should therefore provide an introduction to artificial intelligence and machine
learning as a supplementary offer for students and researchers. One example
where this is happening is the Faculty of Medicine at the University of Bergen,
which is beginning a new course for medical students in the spring of 2019. The
course will enable them to understand and evaluate how machine learning and
data analysis can be used in predictive and personalised medicine.124
124 http://www.uib.no/emne/ELMED219.
Another example is the School of Management at the Norwegian University of Life Sciences, which
offers a course in machine learning for the optimisation of business processes.
https://www.nmbu.no/emne/INN355.
125 Example areas may be energy and environment, aquaculture and agriculture, law enforcement and
alongside a job, and they can also be integrated into many fields, such as health,
law, and logistics.126
Artificial intelligence will affect our lives and the choices we make, both privately and professionally. It is important that as many people as possible understand the key implications of artificial intelligence, so they can think critically about the topic, help shape its use in the workplace, and participate in and drive the debate.
Secondly, there is a major need for training in the workplace. The OECD estimates that around one third of Norwegian jobs will have radically altered content in the future as a result of automation and artificial intelligence. Around 850,000 Norwegians will therefore need comprehensive skills development
initiatives, something that the current structure for further education is not equipped for.129
126 A similar structure has now been proposed in Finland. Finland's Ministry of Economic Affairs and Employment 2017, page 52.
127 Department for Business, Energy and Industrial Strategy and Department for Digital, Culture, Media and Sport (UK) 2018.
128 The free online course Elements of AI requires no prior knowledge of mathematics or programming. Completion of the course results in academic credits for Finnish residents and a certificate for everyone who completes it. The ambition is to have one per cent of the Finnish population complete the course in the first year. https://course.elementsofai.com/.
This calls for Norway to reformulate today's system for further education and lifelong learning and adapt it to the individual by offering new incentives. Singapore offers SkillsFuture for Digital Workplace, which provides training in digital skills adapted to various age groups.130 All citizens over the age of 25 receive SkillsFuture Credit, 500 Singapore dollars (around 300 euros) to spend on courses every year.131
Open public data can contribute to innovation and new services in many sec-
tors.
As a general rule, public institutions should share data. Norway is in tenth place
out of 114 countries on a scale showing the degree to which public agencies pub-
lish and use open data.132 The public sector in Norway should nonetheless have
ambitions to publish more public data, and to ensure that it is in an open format
that is easy to navigate and reuse in machine learning.133
In the UK's Sector Deal, the authorities have committed to publishing more open data, even though the country is already ranked number one on the same scale.134
If data from Norwegian hospitals, schools and smart cities are to be shared with
third parties, the community should receive added value in the form of im-
proved public services, new business development, jobs or tax revenue.
How data creates value and for whom is not always foreseeable with machine
learning. The machines can learn on their own and arrive at new correlations
that were not previously known, and learning can be transferred from one sys-
tem to another. This makes it complicated to regulate the responsibility and
rights of parties. For example, how much should the public pay for services
trained on their own data?
Citizens are stakeholders here, since public data is about them and comes from them. Additional requirements for responsible use and transparency will therefore be needed if trust in data sharing is to be maintained.
This is an area where Norwegian authorities can take inspiration from Great
Britain, which is establishing so-called Data Trusts.135 This is a far-reaching le-
gal framework for the sharing of data between public organisations and private
companies that will develop artificial intelligence. The framework includes
concrete tools, agreement templates and mechanisms for the distribution of the value created.136
133 For example, registers such as the Norwegian Patient Registry and Prescription Registry should share aggregated data and synthetic data files (not personally identifiable) that everyone can use.
134 Department for Business, Energy and Industrial Strategy and Department for Digital, Culture, Media and Sport (UK) 2018 and Hall and Pesenti 2017. See also Thornhill 2017 and Artificial Lawyer 2017.
Know-how must also be developed on how data can be shared, connected and handled in a secure manner that still allows broad access. This applies, for example, to the creation of synthetic data and to encryption.138
If public data about us is shared to drive research and innovation, this should
require that citizens have real control over how their own data is shared, and it
must be guaranteed that this is done securely.
In Norway, citizens currently have limited control over data on themselves. Cit-
izens' data is in various private and public "silos" with different policies for col-
lection, sharing and use. This also makes it difficult to understand, evaluate and
manage the risk associated with data collection and use.
136 Finance Norway has collaborated on digitisation with the Brønnøysund Register Centre, the Norwegian Tax Administration, the Norwegian Labour and Welfare Administration and the Police Authority in the DSOP (Digital Interaction Public-Private) collaboration. The purpose is to share information easily, effectively and digitally so as to achieve the greatest possible productivity gains for society, within secure frameworks that safeguard the individual's privacy. The framework is open to everyone, and elements of it can be evaluated for further development and adapted to the sharing of data for the development of AI systems. See https://www.bits.no/project/dsop/ and Holte 2018.
137 In the UK, the establishment of an operational organisation for Data Trusts has been recommended. A new Centre for Data Ethics and Innovation will be established. Hall and Pesenti 2017.
138 Germany has been working to establish know-how on data sharing and what the public can do:
and defining their digital profile and determining if and how their own data
shall be shared will be an important aspect of such a social contract.
Government agencies must therefore arrange for citizens to have access to ap-
propriate tools so that they can truly and effectively control information on
themselves.140 In the same way that they manage their personal finances in
online banking, they must be provided with a digital interface that presents a
simple and understandable overview of how personal data is managed and used
by the public sector. The citizen must also be given the possibility to actively
grant or revoke permissions for various usage purposes.
In its Global Risk Report 2017, the World Economic Forum calls artificial intel-
ligence one of the most rapidly developing technologies with the greatest utility
value, but also with the greatest potential for harm.141
Learning machines can contribute to better welfare services, fast and accurate
diagnoses and better sustainability. At the same time, artificial intelligence may
mean fewer jobs, more surveillance, greater inequality and autonomous weap-
ons.
140 The French and Finnish AI strategies also make the same suggestion, referring to examples such
as Personaldata.ai, Personal Information Management Systems (PIMS) and MyData.org. See also
Villani 2018, page 31, and Finland's Ministry of Economic Affairs and Employment 2017, pages 44
and 45.
141 World Economic Forum 2017.
142 Mazzucato 2018 and May 2018.
9. ETHICAL GUIDELINES
The government has declared that it will develop guidelines and ethical princi-
ples for the use of artificial intelligence.143 This is a good idea. The possibility of
creating machines that learn, interpret and take decisions raises many ethical
questions.
In the long term, it may be necessary to consider whether machines can be given a moral status, and it is conceivable that machines could come to achieve superintelligence and become an existential threat to humans.144 But such perspectives assume algorithms and physical preconditions that do not exist today.
This report has illustrated some of the ethical dilemmas that already exist and
that will become increasingly amplified. Traditional European values such as dignity, autonomy, freedom, solidarity, equality, democracy and trust are being challenged in step with digitisation in general and the development of artificial intelligence in particular.145 The government should begin to develop
ethical guidelines and practices in areas where the technology is already exert-
ing tremendous pressure on established values:
• Democracy. The potential for political manipulation through the use of ar-
tificial intelligence is manifest. The British consulting firm Cambridge Analytica used Facebook as a data provider to create psychological profiles of several million private individuals, and as a platform to offer political influence to customers. The manipulation of media with personalised fake news
also undermines democratic values. The Chinese AI strategy is unambigu-
ous in its objectives to use machine learning to achieve social control.
analysis and influence. This becomes amplified since network effects give
large commercial actors near monopolies within their areas.
• Equality. Machines learn from data collected about society. They can reflect historically biased conditions and thereby entrench historical biases and lead to discriminatory decisions. Personalisation is in principle also discriminatory.
• Solidarity. Welfare systems such as health services and social security, and
various forms of insurance are based on mutual sharing of risk. Increasing
personalisation and hyperindividualisation through risk scoring and pre-
diction for every individual citizen can undermine this.
• Responsibility. The fact that machines gain more autonomy with artificial intelligence can obscure the underlying principle that people must always be responsible for decisions that affect other people. The algorithms may be opaque and difficult to understand, which makes it difficult both to place responsibility for decisions and to appeal them. It may also be impossible to know whether you are in contact with a machine or a human. The growth of intelligent weapon systems with a high potential for autonomy pushes the question of responsibility to the extreme.
The European General Data Protection Regulation (GDPR) provides data subjects with the right not to be subject to a decision based solely on automated processing when this decision has a significant impact on the individual in question.146 However, while the right to an explanation of automated decisions is addressed in the recitals, it is not mentioned in the regulation itself. It is thereby not legally binding.147
Norwegian authorities should therefore adopt and specify such a right to expla-
nation.148
This right should include two types of explanation: how the system works (purpose, logic and consequences), as well as an explanation of the individual circumstances that led to a decision.149 What constitutes an adequate explanation in different contexts should also be clarified.
The scope for explaining an algorithm may be limited because the algorithm is complicated and difficult to explain in comprehensible everyday terms. It may be particularly challenging to explain how specific data have been weighted in algorithms based on neural networks.
We may potentially have to consider whether the public sector should abstain
from making automated decisions unless it is possible to provide adequate ex-
planation. The French strategy considers it unthinkable to accept decisions that
cannot be explained in areas of critical importance to a person's life, such as access to credit, work, housing, the legal system and medical services.150
When machines take over tasks that were previously carried out by humans, it
is especially important to show that the algorithms do not make biased recom-
mendations. Algorithms may in the worst case amplify social differences, lead
to unintentional discrimination and conceal normative choices.
As a general rule, Norwegian authorities should therefore require that all algo-
rithms used by the public sector be open to audit, so that other actors in society
can verify that they are being used correctly and ethically. This will also be important for trust in the public sector.151
149 For example, the purpose might be to produce a credit score, the logic might be properties in the data and categories in a decision tree, and the consequences might be that the credit score is used by the lender to perform a credit assessment that may affect the interest rate. The individual circumstances may be the actual credit score, which actual data or properties were used and how these were weighted in the decision tree or model. See also Wachter, Mittelstadt and Floridi 2017.
150 Villani 2018, pages 115–116.
Requirements for open algorithms may not necessarily apply in all contexts.
Business interests, personal data protection, or national security may be com-
promised if some types of algorithms can be copied and distributed freely. Of particular concern is the fact that open algorithms may also potentially strengthen actors with malicious intent.
Algorithms for machine learning that for critical reasons cannot be open to the
public should nonetheless be subject to evaluation before they can be put into
broad use in society. One possibility is to require that closed AI algorithms be
thoroughly tested and reviewed or certified by an independent third party be-
fore they can be used in society.152 Such assessments should include whether
the decisions of the algorithm are
• fair,
• correct,
• explainable,
• verifiable, and
• appealable, with a clear means of challenging undesirable outcomes.
It is not always necessary, useful or possible to examine source code. One alternative is to test the algorithm. To evaluate whether a recruitment algorithm discriminates, for example, it can be tested with a large number of CVs of men and women with equal qualifications. It may therefore be appropriate to require a programming interface153 so that the algorithm can be tested on a large number of fictive users (see the sketch below).154
151 The French President has said that France will increase the pressure on private actors for them to
make their algorithms open to audit as well. Thompson 2018.
152 Great Britain is in the process of establishing the Centre for Data Ethics & Innovation, which among
other things will evaluate various tools to identify and manage biased algorithms and make
recommendations for tools that the private and public sector should use. House of Commons 2018.
153 Application Programming Interface, often shortened to API
154 See also Villani 2018, page 117.
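A paired test of this kind could look something like the following minimal sketch; the score_cv endpoint, CV fields and dummy scorer are all hypothetical stand-ins for a real vendor API.

```python
import random

def score_cv(cv: dict) -> float:
    """Hypothetical stand-in for the vendor's scoring API; in a real
    audit this would be a request to the exposed programming interface."""
    random.seed(hash(frozenset(cv.items())))  # deterministic dummy score
    return random.random()

def paired_gender_test(n_pairs: int = 1000) -> float:
    """Submit CV pairs identical except for gender and compare scores."""
    gap = 0.0
    for i in range(n_pairs):
        base = {"education": "MSc",                 # invented fields
                "experience_years": i % 15,
                "skills": f"python;statistics;domain_{i}"}
        gap += (score_cv({**base, "gender": "male"})
                - score_cv({**base, "gender": "female"}))
    return gap / n_pairs  # should stay near 0 if gender does not sway scores

print(f"mean score gap (male - female): {paired_gender_test():+.4f}")
```

A persistent, non-zero gap in such a test would be evidence that the model treats otherwise identical candidates differently by gender, without anyone ever reading its source code.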
There are already established mechanisms, such as auditing, that can be expanded to include algorithms.155
Algorithms can be checked by opening them up to audit or review, but the best approach for developers and users alike is to build ethical considerations in from the start. Undesirable events such as biased or unfair decisions can lead to a breakdown in trust that is difficult and costly to repair afterwards.
Such thinking has already been established with respect to personal data protection.156 Privacy by design means that the principles of personal privacy protection, rights and requirements are incorporated throughout the entire development cycle, from design and coding to testing and operation.157
155 There are already companies specialising in the auditing of algorithms, such as O’Neil Risk
Consulting: http://www.oneilrisk.com/.
156 Norwegian Data Protection Authority 2018b.
157 Norwegian Data Protection Authority 2018c.
158 Villani 2018, page 121.
14. NATIONAL DIALOGUE ON AI
The Chinese strategy concludes with a point about "guiding opinion" towards the acceptance of artificial intelligence.159 When rapid technological development affects people's lives and values in this way, working for passive acceptance is not enough. The Norwegian authorities should actively take initiatives to involve lay people and civil society in the discussion on artificial intelligence, and they should be receptive to their perspectives on what developments people would hope to see. This can build on principles of responsible research and innovation (RRI).
REFERENCES
Agrawal, Ajay; Gans, Joshua & Goldfarb, Avi (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.
Antoni, Manfred & Schnell, Rainer (2017). The Past, Present and Fu-
ture of the German Record Linkage Center (GRLC), Journal of Eco-
nomics and Statistics.
Artificial Lawyer (2017, October 16). UK Gov-Backed Report Calls for AI Data Trusts; Praises Legal Sector.
Retrieved from: https://www.artificiallawyer.com/2017/10/16/uk-gov-
backed-report-calls-for-ai-data-trusts-praises-legal-sector/.
Bell, Robert M.; Koren, Yehuda & Volinsky, Chris (2007). The BellKor solution to the Netflix Prize.
Retrieved from: https://www.netflixprize.com/assets/Progress-
Prize2007_KorBell.pdf.
Brundage, Miles; Avin, Shahar; Clark, Jack; Toner, Helen & Eckersley, Peter (2018, February). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
Retrieved from: https://www.eff.org/files/2018/02/20/malicious_ai_re-
port_final.pdf.
Brynjolfsson, Erik & McAfee, Andrew (2017b, July 18). What’s Driving
the Machine Learning Explosion?, Harvard Business Review.
Retrieved from: https://hbr.org/2017/07/whats-driving-the-machine-learn-
ing-explosion.
CIFAR (2017, August 20). Pan-Canadian Artificial Intelligence Strat-
egy. CIFAR.
Retrieved from: https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-
strategy.
Norwegian Data Protection Authority (Datatilsynet) (2018c). Programvareutvikling med innebygd personvern [Software development with built-in data protection].
Retrieved from: https://www.datatilsynet.no/regelverk-og-
verktoy/veiledere/programvareutvikling-med-innebygd-person-
vern/?id=7729.
Dockrill, Peter (2017, December 8). Google’s AI has mastered all the
chess knowledge in history – in just 4 hours. In World Economic Fo-
rum.
Retrieved from: https://www.weforum.org/agenda/2017/12/google-s-ai-has-
mastered-all-the-chess-knowledge-in-history.
Eckersley, Peter & Nasser, Yomna (2018, March 7). Measuring the progress of AI research, in Electronic Frontier Foundation.
Retrieved from: https://www.eff.org/ai/metrics#Vision.
Esteva, Andre; Kuprel, Brett; Novoa, Roberto A.; Ko, Justin; Swetter, Susan M.; Blau, Helen M. & Thrun, Sebastian (2017). Dermatologist-level classification of skin cancer with deep neural networks, in Nature (542), pp. 115–118.
Retrieved from: https://www.nature.com/nature/journal/v542/n7639/in-
dex.html.
Etherington, Darrell (2017, February 8). Udacity open sources its self-
driving car simulator for anyone to use, in TechCrunch.
Retrieved from: https://techcrunch.com/2017/02/08/udacity-open-sources-
its-self-driving-car-simulator-for-anyone-to-use/.
Federal Ministry of Transport and Digital Infrastructure (Germany) (2017, August 28). Ethics commission's complete report on automated and connected driving.
Retrieved from: https://www.bmvi.de/SharedDocs/EN/publications/report-
ethics-commission.pdf.
China's State Council (2017, July 20). A Next Generation Artificial In-
telligence Development Plan. Based on an English translation by New
America.
Retrieved from: http://www.gov.cn/zhengce/content/2017-07/20/con-
tent_5211996.htm & https://na-production.s3.amazonaws.com/docu-
ments/translation-fulltext-8.1.17.pdf.
Gallup (2018, March 6). Most Americans Already Using Artificial In-
telligence Products, in Gallup.
Retrieved from: https://news.gallup.com/poll/228497/americans-al-
ready-using-artificial-intelligence-products.aspx.
Garber, Megan (2016, June 30). When Algorithms Take the Stand, in
The Atlantic.
Retrieved from: https://www.theatlantic.com/technology/ar-
chive/2016/06/when-algorithms-take-the-stand/489566/.
Geng, Daniel & Shih, Shannon (2017, February 4). Machine Learning
Crash Course: Part 3, Machine Learning at Berkeley.
Retrieved from: https://ml.berkeley.edu/blog/2017/02/04/tutorial-3/.
Gupta, Dishashree (2017, June 1). Transfer learning & the art of using
pre-trained models in deep learning, from Analytics Vidhya.
Retrieved from: https://www.analyticsvidhya.com/blog/2017/06/transfer-
learning-the-art-of-fine-tuning-a-pre-trained-model/.
Hassabis, Demis & Silver, David (2017, October 12). AlphaGo Zero: Learning from Scratch, from DeepMind.
Retrieved from: https://deepmind.com/blog/alphago-zero-learning-scratch/.
House of Commons (UK) (2017, September 13). Robotics and Artificial Intelligence: Fifth Report of Session 2016–17. House of Commons Science and Technology Committee.
Retrieved from: https://publications.parlia-
ment.uk/pa/cm201617/cmselect/cmsctech/896/896.pdf.
IBM (2018, March 7). 10 Marketing Trends for 2017 and Ideas for Exceeding Customer Expectations, in IBM.com.
Retrieved from: https://public.dhe.ibm.com/com-
mon/ssi/ecm/wr/en/wrl12345usen/watson-customer-engagement-watson-
marketing-wr-other-papers-and-reports-wrl12345usen-20170719.pdf.
Knight, Will (2017, January 4). 5 big predictions for artificial intelli-
gence in 2017, in MIT Technology Review.
Retrieved from: https://www.technologyreview.com/s/603216/5-big-predic-
tions-for-artificial-intelligence-in-2017/.
Knight, Will (2017, April 11). The dark secret at the heart of AI, in MIT
Technology Review.
Retrieved from: https://www.technologyreview.com/s/604087/the-dark-se-
cret-at-the-heart-of-ai/.
Lardinois, Frederic (2018, August 17). Google gives its AI the reins
over its data center cooling systems, in TechCrunch.
Retrieved from: https://techcrunch.com/2018/08/17/google-gives-its-ai-the-
reins-over-its-data-center-cooling-systems/?guccounter=1.
Liu, Yunjie; Racah, Evan; Prabhat; Correa, Joaquin; Khosrowshahi, Amir; Lavers, David; Kunkel, Kenneth; Wehner, Michael & Collins, William (2016, May 4). Application of deep convolutional neural networks for detecting extreme weather in climate datasets, in Arxiv.org.
Retrieved from: https://arxiv.org/pdf/1605.01156.pdf.
Mannes, John (2016, December 5). OpenAI’s Universe is the fun par-
ent every artificial intelligence deserves, from TechCrunch.com.
Retrieved from: https://techcrunch.com/2016/12/05/openais-universe-is-
the-fun-parent-every-artificial-intelligence-deserves/.
May, Theresa (2018, May 21). PM speech on science and modern In-
dustrial Strategy.
Retrieved from: https://www.gov.uk/government/speeches/pm-speech-on-
science-and-modern-industrial-strategy-21-may-2018.
Miotto, Riccardo; Li, Li; Kidd, Brian A. & Dudley, Joel T. (2016). Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records, in Nature Scientific Reports.
Retrieved from: https://www.nature.com/articles/srep26094.
Moe, Sigrid & Breivik, Steinar Rostad (2018, March 5). Snart kan kunstig intelligens styre vannkraftverkene [Soon, artificial intelligence may run the hydropower plants], in E24.
Retrieved from: https://e24.no/energi/vannkraft/snart-kan-kunstig-intelli-
gens-styre-vannkraftverkene/23933863.
Ng, Andrew (2015, November 24). What Data Scientists Should Know About Deep Learning. Presentation at the Extract Data Conference.
Retrieved from: https://www.slideshare.net/ExtractConf.
Ng, Andrew (2017, July 25). Deep learning’s next frontier, in Harvard
Business Review.
Retrieved from: https://hbr.org/2017/07/deep-learnings-next-frontier.
O’Neil, Cathy (2018, July 3). Audit the algorithms that are ruling our
lives. Governments should follow France and move towards algorith-
mic accountability, Financial Times.
Retrieved from: https://www.ft.com/content/879d96d6-93db-11e8-95f8-
8640db9060a7.
Ong, Thuy (2017, August 4). Facebook’s translations are now powered
completely by AI, in The Verge.
Retrieved from: https://www.theverge.com/2017/8/4/16093872/facebook-ai-
translations-artificial-intelligence.
Pajarinen, M., Rouvinen, P. & Ekeland, A (2014). Computerization
and the Future of Jobs in Norway. ETLA and SSB, 2014.
Retrieved from: http://nettsteder.regjer-
ingen.no/fremtidensskole/files/2014/05/Computerization-and-the-Future-
of-Jobs-in-Norway.pdf.
Paysa.com (2017, November 29). New Paysa Study Reveals U.S. Com-
panies Across All Industries Investing $1.35 Billion Dollars in AI Tal-
ent.
Retrieved from: https://www.paysa.com/press-releases/2017-11-29/11/new-
paysa-study-reveals-us.
Ribeiro, Marco Tulio; Singh, Sameer & Guestrin, Carlos (2016, August
12). Introduction to local interpretable model-agnostic explanations
(LIME), from O’Reilly.com.
Retrieved from: https://www.oreilly.com/learning/introduction-to-local-in-
terpretable-model-agnostic-explanations-lime.
Ruan, Sherry; Wobbrock, Jacob O.; Liou, Kenny; Ng, Andrew & Landay, James (2017). Comparing Speech and Keyboard Text Entry for Short Messages in Two Languages on Touchscreen Phones. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(4).
Retrieved from: https://doi.org/10.1145/3161187.
Snow, Jackie (2018, March 7). Most Americans are already using AI, in MIT Technology Review.
Retrieved from: https://www.technologyreview.com/the-down-
load/610438/most-americans-are-already-using-ai/.
The Economist (2018b, February 15). Humans may not always grasp why AIs act. Don't panic.
Retrieved from: https://www.economist.com/leaders/2018/02/15/humans-may-not-always-grasp-why-ais-act-dont-panic.
Thornhill, John (2017, October 30). Would you donate your data for
the collective good?, in Financial Times.
Retrieved from: https://www.ft.com/content/00390a76-bd4a-11e7-9836-
b25f8adaa111.
Turner, Karen (2016, October 3). Google Translate is getting really, re-
ally accurate, in The Washington Post.
Retrieved from: https://www.washingtonpost.com/news/innova-
tions/wp/2016/10/03/google-translate-is-getting-really-really-accurate/.
Vinnova (2018). Artificiell intelligens i svenskt näringsliv och samhälle. Analys av utveckling och potential [Artificial intelligence in Swedish industry and society: analysis of development and potential] (18).
Retrieved from:
https://www.vinnova.se/contentassets/55b18cf1169a4a4f8340a5960b32fa82
/vr_18_08.pdf.