The Rise of CreAItives: Using AI to enable and speed up the creative process
Andrew Pearson
Managing Director, Intelligencia, China
Andrew Pearson is the President of Intelligencia Limited, a leading implementer of analytics, artificial intelligence (AI), business intelligence, customer experience, data integration, digital marketing and social media solutions for clients worldwide, including some of the biggest casinos and sportsbooks in the world. He has written several books on analytics and AI, including The AI Casino and The AI Marketer, along with other titles in the AI series, and has contributed articles to several prestigious publications. He is also a sought-after speaker who lectures all over the world on topics such as AI, machine learning (ML), casino marketing, analytics and social media. In summer 2023, his first novel, The Dead Chip Syndicate, will be traditionally published by Brother Mockingbird Press. It is the first in his Exotics Trilogy, set in Asia and focused on the world of tech and gambling.
Abstract The ancient Greeks invented the muse, a goddess who would enter a human's life and spark long-desired creativity; the Greek gods did not want to give humans credit for coming up with it themselves. Humans, however, may have invented something as good as a muse: artificial intelligence (AI). Today, AI, machine learning (ML), deep learning (DL) and natural language processing (NLP) could be taking up that 'muse role', as well as becoming the workhorse for much of a visual artist's grunt work, and possibly even for their creative work. In this paper, we look at how modern creative AI technologies can be viewed through the lens of five groups. We also explore how recent advances in creative AI let users create images, compose music and generate animation and even video in ways never before possible, and then wrap up with final takeaways on the future of artistic creativity in the era of AI.
© Henry Stewart Publications 2633-562X (2023) Vol. 2, 2 1–14 Journal of AI, Robotics & Workplace Automation 1
early ones to adopt technology, often using it in ways that disrupt society. AI, machine learning (ML) and deep learning (DL) will be no exception.14 On the one hand, many artists already use 'ML to enrich the way they work with their preferred media. On the other, some artists use ML, and in particular, AI, to shed light onto certain facets of these same tools which can be invisible or taken for granted by public opinion, media, or institutions', claim Caramiaux and Donnarumma.15

When asked which type of assistant creatives would prefer, the clear favourite was an assistant that reduced drudgery and the repetitive tasks of the work, with 89 per cent of respondents expressing interest in such a system (see Figure 1).16 Pfeiffer Consulting reports:17

'For creative professionals working frequently with stock images, or other materials such as fonts found on the web, AI and ML-based help with these tasks would be very welcome: 81% of respondents indicated strong interest for such an assistant.'

Figure 1: Which AI- or ML-based assistant system would creatives be interested in?18

Teaching assistants that could explore new features in the software also received
strong approval ratings;19 however, assistants providing creative variations based on a project at hand only received 51 per cent approval. Many interviewees expressed interest in trying such an assistant, but they did not expect to rely on it, since they preferred to stay in control of their creative process.20 This sentiment is certainly a justified one.

One creative assistant suggestion that drew the most interesting comments was a system that would help anticipate different types of audience reactions to a given project, based on ML.21 'While some respondents (almost 18%) had spontaneous strong negative reactions to this type of assistant, over 59% stated strong or extremely strong approval', according to Pfeiffer Consulting.22 The type of information the respondents wanted to receive using such an assistant was particularly interesting, with several designers mentioning they would like to test the uniqueness of a design.23 Some video and motion graphics creatives were also interested in testing when a viewer's level of attention dropped while watching a clip.24 User experience (UX) designers also welcomed testing the logic and intuitiveness of a user interface design.25

TRANSCENDENT BEAUTY AS A SERVICE
In 'The Art in the Artificial: AI and the creative industries',26 Davies et al. see AI within the creative industries breaking down into three categories: AI involving media/domains relevant to creative industries; AI research directly applied to a creative industry-related activity; creative/generative content, such as generative adversarial networks (GANs), which imitate artists and artistic styles. The latter, like DALL-E and MidJourney, has already produced some impressive and compelling work.
The first category relates to images, audio and text:

'The raw material of the creative industries: language, imagery and sound can all be digitized and converted into data. As machine learning algorithms are often applied to these, AI research of this nature could have future implications for the creative industries, although it may not necessarily be directly analysing creative content.'27

The second category uses AI to animate a character in a computer game or, like in Adobe's Character Animator software, generates a visual effect in an image, a video or even on a livestream. The third category, creating generative content as well as the ability to mimic an artist's style, has grown rapidly in the past few years.

Style transfer is a process allowing a style from one medium to be transferred to another.28 For example, on AI text-to-image tools like DALL-E and MidJourney, users add prompts like 'abstract expressionist' or 'in the vein of Warhol' to apply a famous or even not-so-famous artist's style to their work. As noted by Davies et al:

'An example of an artistic application is Gene Kogan's Cubist Mirror where a viewer standing in front of a screen finds their recorded image alternatively transformed into the styles of different artists such as Klimt, Munch and Hokusai.'29

Caramiaux et al.30 discovered AI in the current media and creative industries could be found in three areas: creation, production and consumption. Although technology lets creatives realise the potential of their vision, it also defines the boundaries of engagement by providing an ever-increasing number of distribution channels and platforms a creative professional must accommodate, says Pfeiffer Consulting.31 This means a creative must focus 'less on the details of the actual outcome of a creative project, and much more on the multi-layered engagement processes necessary to reach and touch' an audience, argues Pfeiffer Consulting.32 They contend:

'Understanding the client's needs and dreams, and putting themselves in the shoes of the client's target audience is essential for creative professionals. This is what shapes their creative vision, and helps find inspiration. But the creative project is also shaped by technology: Technology defines what it is possible to create — and also what it is possible to deliver to the audience.'33

Today's creative professionals must balance their creative vision within the given technological framework they are working with, while juggling the increasingly complex demands of multi-format, multi-device content delivery, which can both enable and constrain an artist.34

IMAGES AND AI
In early 2021, OpenAI unveiled its picture-making neural network, DALL-E, with a string of surreal and cartoonish images produced on demand that would probably have made the service's namesake artist, Salvador Dali, immensely proud. As Will Douglas Heaven explains in his article, 'This horse-riding astronaut is a milestone on AI's long road towards understanding':35 'DALL-E's avocado armchairs had the essential features of both avocados and chairs; its dog-walking daikons in tutus wore the tutus around their waists and held the dogs' leashes in their hands.'36 An AI system getting these and other important details right was truly cutting-edge.

To explore how AI can have an impact on the creation of images, the authors of this paper tested out MidJourney, a text-to-image service calling itself 'An
independent research lab. Exploring new mediums of thought. Expanding the imaginative powers of the human species.' The self-assessment is lofty, as many corporate slogans often are, but MidJourney's catchphrase contains quite a bit of truth. Experimenting with this service in the persona of a novelist, ie someone who paints with words rather than with images, was extraordinarily liberating. In my experiments, verbal prompts created palettes of moody, smog-filled and noir-inspired streets of 1950s Singapore (see Figure 2) or mythical heroes in epic compositions (see Figure 3) in less than a minute.

Text-to-image systems work by using ML algorithms instead of human intelligence; they try different combinations of words until they come up with something that looks like an image or sounds like human speech. They also need lots and lots of examples (known as training data) so they can learn how language works in general before being asked questions about specific texts. The more training data to pull from, the better these systems become at recognising patterns and making sense of them.

Ilya Sutskever, cofounder and chief scientist of OpenAI, describes the DALL-E 2 system as 'Transcendent beauty as a service'. OpenAI believes DALL-E 2 will help inspire artisans of all kinds, including painters, graphic designers, architects, book illustrators, costume designers and storyboard artists, as well as many amateur artists looking to improve their craft. These text-to-image solutions can simplify an artist's or designer's life, if only in the art discovery, sampling and collage phases of the creative process.

A simple prompt of '/imagine' triggers MidJourney, telling it to pull images from the web and then, according to any provided text instruction, build accordingly. The system starts by providing the user with four images, any of which can be used as a jumping-off point for more images. An image the user likes can be created in high resolution.
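This generate, select, refine loop can be modelled without any image model at all. In the schematic sketch below, the `render` stub simply hashes the prompt to stand in for a text-to-image model; the function names are illustrative, not actual MidJourney commands:

```python
import hashlib

def render(prompt: str, seed: int) -> str:
    """Stand-in for a text-to-image model: returns a fake image id."""
    return hashlib.sha256(f"{prompt}|{seed}".encode()).hexdigest()[:12]

def initial_grid(prompt: str) -> list[str]:
    """MidJourney-style first response: four candidate images."""
    return [render(prompt, seed) for seed in range(4)]

def variation(prompt: str, image_id: str) -> list[str]:
    """Use a chosen candidate as the jumping-off point for four more."""
    return [render(prompt + "|" + image_id, seed) for seed in range(4)]

def upscale(image_id: str) -> tuple[str, str]:
    """Final high-resolution pass on the image the user likes."""
    return (image_id, "high-res")

prompt = "noir Singapore alley in 1950, smog, amber lights"
grid = initial_grid(prompt)          # four candidates
refined = variation(prompt, grid[2]) # riff on the third one
final = upscale(refined[0])          # keep the favourite
```

The point of the sketch is the shape of the interaction: the user is a curator choosing among candidates, not a painter issuing exact instructions.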
Figure 2: MidJourney text-to-image landscape prompted with: noir Singapore alley in 1950, smog, dark, street lights, rule of thirds, cityscape, matte, VHS aesthetic, smoke coming from ground, amber lights, highly detailed, extra details, clean, trending on ArtStation (ArtStation.com is an online platform for creative collaboration and creative showcases)
Figure 3: MidJourney text-to-image portrait prompted with: Epic composition, a boy fights atop a mountain,
highly detailed, torchlight, night, fog, rain, lit windows, night, shape language, bounce light, cool colours,
reflections, rim light, complementary colours, volumetric light, shadow shapes, rule of thirds, concept art,
atmospheric perspective, cinematic, octane-render, photo-real, CryEngine, V-ray, 8k, micro details, in the style
of Peter Mohrbacher, Edmund Dulac, Greg Rutkowski, muted color scheme::0, by simon bisley::0, by greg
rutkowski::0, by lee bermejo::0, by alphonse mucha::0, by enki bilal::0, by frank frazetta::0, by alex ross::0, by
frank miller::0, by todd mcfarlane::0, by dave mckean::0, by ayami kojima::0, by shinkiro::10000, by masamune
shirow::0, by hirohiko araki::0, by takehiko inohue::0, by luis royo::0
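The `::` suffixes in the prompt above are relative weights: `::0` effectively mutes a style, while `::10000` makes it dominate. A simplified interpretation of that grammar, which only approximates how MidJourney actually parses prompts, might look like:

```python
def parse_weighted_prompt(prompt: str) -> dict[str, float]:
    """Split a prompt on commas into parts and their relative '::' weights."""
    weights = {}
    for part in prompt.split(","):
        part = part.strip()
        if not part:
            continue
        if "::" in part:
            text, _, raw = part.rpartition("::")
            weight = float(raw) if raw else 1.0  # bare 'text::' defaults to 1
        else:
            text, weight = part, 1.0             # unweighted parts default to 1
        weights[text.strip()] = weight
    return weights

def normalise(weights: dict[str, float]) -> dict[str, float]:
    """Scale positive weights so they sum to 1; muted parts stay at 0."""
    total = sum(w for w in weights.values() if w > 0)
    return {k: (w / total if w > 0 else 0.0) for k, w in weights.items()}

parts = parse_weighted_prompt("by shinkiro::10000, by masamune shirow::0, muted colours")
```

Under this reading, the long Figure 3 prompt is really one active style (`by shinkiro::10000`) plus a list of explicitly silenced alternatives.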
collateral with only a text and/or an image prompt, all in an average time of less than 60 seconds.

By applying techniques such as style transfer and other ML-trained methods to generate visuals that would have been very time-consuming to produce even for a skilled professional, AI can lower the barrier to entry to an extent that may lead to the devaluation of the skills and expertise of creatives. These technologies will likely improve significantly in the future and could have a distinct impact on the way creatives express and manifest their talents.37 The user can freely create anything their imagination desires, in any artistic style they fancy, from the standard styles of Art Nouveau, Cubism, Expressionism and Modernism to Post-Impressionism. The palette can go from Afrofuturism to Ziggurat, with anything and everything in between. With MidJourney and DALL-E, the world of art and the style of any known artist, even highly obscure ones, are available at the click of a button.

Because MidJourney works on a Discord server, everyone's work is accessible to all who are logged onto that server, so collaboration can be a part of the process. Anyone can whip off a variation of a piece they see in the render queue. Watching the generated art is like seeing the history of art playing out on an hourly basis. Using MidJourney reminds one that art is, above all else, a collaborative process, owned by anyone and everyone — at least the generated art of the digital kind.

All MidJourney renderings start with the prompt '/imagine'. As a request is typed in, the system's promise that 'There are endless possibilities …' pops up, and that line certainly rings true. One's imagination is the only limiter, although the system can go off on confusing and nonsensical tangents every now and then. If it is available as an image on the Internet, or you can describe it well enough in a text prompt while adding enough artistic qualifiers, weights and styles, something that at least vaguely resembles what was in the mind of the user should pop up. If not, well, there is always a tweak of the prompt and then another 60-second wait for several newly rendered images that might just capture what you were looking for.

One does have to trust in the system, however. 'The creative process is a process of surrender, not control', Bruce Lee once said. That is a motto to keep in mind while working with MidJourney, which effortlessly seduces you down a rabbit hole of creativity that is highly addictive. It does not decipher dreams, but it sure lets your imagination run wild. Some control must be relinquished, but, as OpenAI's Sutskever asserts, every now and then MidJourney generates something that just makes you gasp.

In his article, 'AI-generated imagery is the new clip art as Microsoft adds DALL-E to its Office suite',38 James Vincent explains that 'Microsoft is adding AI-generated art to its suite of Office software with a new app named Microsoft Designer'. The app works like other AI text-to-image tools, with user prompts generating a variety of designs with minimal effort.39 'Designer can be used to create everything from greeting cards and social media posts to illustrations for PowerPoint presentations and logos for businesses', says Vincent.40 Designer will have a free standalone app as well as a more feature-rich version available to paying Microsoft 365 subscribers.41

Depending on how the style transfer feature is implemented, 'it could have wide-ranging ripple effects in the creative community', warns Pfeiffer Consulting.42 They further contend:

'Making style transfer available as a push-button option could have a considerable impact on how creatives format multiple elements requiring a consistent style, thus speeding up their workflow — yet in the process running the risk of reducing the kind of creative accident that is often seen as a positive force in the creative process.'43
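Much push-button style transfer descends from the Gatys et al. approach: match the Gram matrix (the channel-to-channel correlation statistics) of a reference image's feature activations while preserving the content image. A library-free sketch of that style loss on toy 'feature maps' (real systems compute these from deep network layers, not raw lists):

```python
def gram_matrix(features: list[list[float]]) -> list[list[float]]:
    """Channel-by-channel correlations of a set of flattened feature maps."""
    n = len(features[0])
    return [[sum(fi * fj for fi, fj in zip(a, b)) / n for b in features]
            for a in features]

def style_loss(feats_a: list[list[float]], feats_b: list[list[float]]) -> float:
    """Mean squared difference between the two Gram matrices."""
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    c = len(ga)
    return sum((ga[i][j] - gb[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)

# Two 3-channel "images": the second reverses one channel spatially.
content = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0], [2.0, 2.0, 2.0]]
restyled = [[3.0, 2.0, 1.0], [0.0, 1.0, 0.0], [2.0, 2.0, 2.0]]
```

Reversing a channel spatially leaves its Gram statistics unchanged, so the loss between these two is zero: the statistic captures texture and palette rather than layout, which is exactly why it can impose a consistent style across differently arranged elements.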
The feature could also facilitate plagiarism, a concern repeatedly brought up in the research interviews;44 however, a style transfer feature would allow for improvised creative experimentation unavailable without the help of AI.45 One designer put it rather succinctly: 'Creativity does not necessarily mean using a tool for its intended purpose.'46 'Subverting tools and letting creativity run rampant in ways unintended by the developers is likely to lead to results we would have a hard time imagining now', argues Pfeiffer Consulting.47

MUSIC AND AI
Near the end of the 2010s, one of the authors of this paper walked through a Manila (Philippines) casino and listened to a casino executive explain how the songs we were listening to were part of a soundtrack customised for the patrons on the floor. Using data collected from their patron loyalty cards, the casino figured out the average age of the people filling their floor, then adjusted the soundtrack accordingly, playing hits from whatever generation was most representative on the floor. It was a subtle but clever way to use patron data to make the customers more amenable to gambling their money away.

Today, applications of AI in music are much more sophisticated than building soundtracks based on age averaging. ML algorithms can analyse data to find musical patterns, then make suggestions for newly composed melodies that might inspire an artist.48 In his article 'A beginner's guide to AI: Neural networks',49 Tristan Greene explains the process of making music with AI:

'Let's imagine an AI that generates original musical compositions based on human input. If you play a note the AI tries to "hallucinate" what the next note "should" be. If you play another note, the AI can further anticipate what the song should sound like. Each piece of context provides information for the next step, and a RNN continuously updates itself based on its continuing input – hence the recurrent part of the name.'

Examples of AI music software systems include Flow Machines by Sony, Google AI's NSynth and Jukebox by OpenAI, which currently includes over 7,000 songs that mimic artists such as Frank Sinatra, 2Pac, Katy Perry and Bruno Mars. When listening to the songs, one gets the musical intention. Other AI applications in music include assisting sound design and searching through large databases to find the most appropriate musical match for a particular application.50

VIDEO AND AI
Anantrasirichai and Bull see AI enhancing and helping video postproduction with colourisation, super-resolution, deblurring, denoising, dehazing, turbulence removal and inpainting, as well as with visual effects. For Matt Cimaglia, an award-winning creative director and entrepreneur, it does not stop there. In his article 'The Future of Video Advertising is Artificial Intelligence',51 Cimaglia claims, 'We are witnessing a moment in video marketing history where human editors are becoming obsolete'. Cimaglia envisions an advertising landscape completely different from our own. In this world, advertisers do not shoot a single video, they shoot multiple iterations of it. In one, the actor changes shirts. In another, the actor is an actress of Asian descent. In another, she is African American. After finishing the shoot, the agency passes the footage to an algorithm, not a human editor.52

Instead of taking hours or even days to cut a new ad, the AI algorithm can compile hundreds of videos in a few minutes, each tailored to a specific viewer based on highly detailed user data.53 Cimaglia contends:

'As the video analytics flows in, the algorithm can edit the video in real-time,
too—instead of waiting a week to analyze and act on viewer behavior, the algorithm can perform instantaneous A/B tests, optimizing the company's investment in a day.'54

Human editors are going extinct, says Cimaglia.55 This is personalisation marketing on a scale never seen before, only available because of AI. Content is tailored to the subjective individual, not the general, barely understood mass. Video marketing will be surgically striking highly relevant offers to a market of one, not firing a shotgun blast of promotions to the uninterested many.

Savvy advertising agencies need to embrace AI today. The same logic backing programmatic banners and search advertising supports ML and chatbots; computers can just do some things faster, cheaper and more accurately than their human counterparts.56 'In this future of data-driven dynamic content, viewers' information is siphoned to AI that determines aspects of the video based on their data', explains Cimaglia.57

The options for customisation do not just stay with user data. If it is raining where the viewer is, it could be raining in the video, which is easily done by the agency plugging in a geolocating weather script.58 Cimaglia is correct when he contends, 'this customization model of video production is more effective than the current model of creating a single video for the masses'.59 Although he argues there is always a place for the multimillion-dollar, 30-second Super Bowl mini-film, marketers need to get more sophisticated when it comes to marketing to the individual.

It is not just in the editing room where AI will make a big difference. AI systems are now turning single images into talking head videos. As Loz Blain explains in his article, 'Samsung AI brings the Mona Lisa (or any other picture) to life',60 Samsung's AI Center in Moscow uses adversarial learning to take a single image of a person and turn it into a talking head. The system took several images of a person, ran them through an off-the-shelf 'face landmark tracker' to decipher where the eyes, eyebrows, nose, lips and jawline were, then did the same for another 'driving' source video, this time tracking the motion frame by frame.61 Blain explains:

'These networks start out really bad at their jobs, but as they perform their jobs millions of times, they begin to improve, and the competition between the two networks is what drives both to continue getting better. The Discriminator network isn't looking for the same things a human fake-spotter might be looking for, but it doesn't matter – whatever it's looking for, it keeps getting better at discriminating, so the Generator network has to keep getting better to keep fooling it.'62

Meta (née Facebook) is also getting into the text-to-video game. On 29th September, 2022, Meta announced Make-A-Video,63 'a new AI system that lets people turn text prompts into brief, high-quality video clips'. Facebook has been one of the leaders in AI these past few years, and Make-A-Video builds upon its past work in generative technology research, which it believes has the potential to open new opportunities for creators and artists alike.64 'With just a few words or lines of text, Make-A-Video can bring imagination to life and create one-of-a-kind videos full of vivid colors, characters, and landscapes', says Meta.65 The system generates videos from images or takes existing videos and creates similar ones.66 Meta is open-sourcing its generative AI research to build a supportive community, which it hopes will provide useful feedback to improve the product. This is a common procedure with AI tools.

ANIMATION AND AI
AI has been widely used in the gaming industry. Immersive VR and mixed reality
(MR) experiences require good quality, high-resolution, animated worlds, which pose new problems for data compression and visual quality assessment.67 AI technologies have been used to make this immersive content more exciting and realistic. AI helps robustly track and localise users and the objects they see and interact with inside a virtual environment.68 For example, Meta's Oculus Insight uses AI to generate real-time maps and position tracking.69 'Combining audio and visual sensors can further improve navigation of egocentric observations in complex 3D environments', say Anantrasirichai and Bull.70

Recently, AI has been used to make the animation process faster, simpler and more realistic than in the past.71 ML is particularly good at creating models of motion from captured real motion sequences.72 Google Research has created software for pose animation based on PoseNet and FaceMesh that turns a human pose into a cartoon animation in real time.73 'Get into character', asserts the Adobe tagline for its Character Animator software, a solution that allows a user to create a 3D character animation that replicates their moves.74 The software, which synchronises 'live-performance animation with automatic lip sync and face and body tracking', has been embraced by Hollywood studios as well as many livestreaming content creators.75 Facebook's Reality Labs has also 'employed ML-AI techniques to animate realistic digital humans, called Codec Avatars, in real time using GAN-based style transfer technology'.76

Beyond animation, gaming 'could be considered as an "all in-one" AI platform, since it combines rendering, prediction and learning', say Anantrasirichai and Bull.77 It has supported design, decision making and interactivity,78 helped with interactive narrative builds,79 assisted in generating procedural content80 and deep reinforcement learning has been employed for in-gaming personalisation.81 As Anantrasirichai and Bull explain:

'AI Dungeon is a web-based game that is capable of generating a storyline in real-time, interacting with player input. The underlying algorithm requires more than 10,000 label contributions for training to ensure that the model produces smooth interaction with the players.'82

Modern games often incorporate 3D visualisation, AR and VR in an attempt to make play more realistic and immersive.83 Some games even generate synthetic 3D gaming environments with the help of deep neural networks trained on real video cityscapes.84 DL technologies have also been used in VR/AR game design85 and emotion detection has even been added to improve a game's immersive experience.86 'Recently AI gaming methods have been extended into the area of virtual production, where the tools are scaled to produce dynamic virtual environments for filmmaking', contend Anantrasirichai and Bull.87

When it comes to rendering objects and scenes, AI has also proven instrumental for gaming, helping with the synthesis of 3D views from motion capture, ray-tracing lighting, character and scene shading,88 dynamic texture synthesis89 and multiple depth sensors.90

FINAL TAKEAWAYS ON CREATIVITY AND AI
'The threat some see in the impact of AI and machine learning on human creativity is not new. The history of computer-based visual creation is shaped by the democratization of tools, making the production of sophisticated designs ever easier and more accessible', says Pfeiffer Consulting.91

Starting with the simple page layout and vector design tools of the 1980s and 1990s, the evolution has been extremely liberating, but it has left many creative professions in its wake.92 Currently, there is a widespread fear that AI will have a levelling effect on creative output, leading to a homogenised, machine-driven
REFERENCES
16. Pfeiffer Consulting, ref. 8 above.
17. Ibid.
18. Ibid.
19. Ibid.
20. Ibid.
21. Ibid.
22. Ibid.
23. Ibid.
24. Ibid.
25. Ibid.
26. Davies, J., Klinger, J., Mateos-Garcia, J. and Stathoulopoulos, K. (June 2020), 'The Art in the Artificial: AI and the creative industries', Nesta, available at https://cdn2.assets-servd.host/creative-pec/production/assets/publications/PEC-and-Nesta-report-The-art-in-the-artificial.pdf (accessed 14th October, 2022).
27. Ibid.
28. Ibid.
29. Ibid.
30. Caramiaux, B., Lotte, F., Geurts, J., Amato, G., Behrmann, M., Falchi, F., Bimbot, F., Garcia, A., Gibert, J., Gravier, G., Holken, H., Lefebvre, S., Liutkus, A., Perkis, A., Redondo, R., Turrin, E., Vieville, T. and Vincent, E. (2019), 'AI in the media and creative industries', available at https://arxiv.org/pdf/1905.04175.pdf (accessed 14th October, 2022).
31. Pfeiffer Consulting, ref. 8 above.
32. Ibid.
33. Ibid.
34. Ibid.
35. Heaven, W. D. (April 2022), 'This horse-riding astronaut is a milestone on AI's long road towards understanding', MIT Technology Review, available at https://www.technologyreview.com/2022/04/06/1049061/dalle-openai-gpt3-ai-agi-multimodal-image-generation/ (accessed 15th October, 2022).
36. Ibid.
37. Pfeiffer Consulting, ref. 8 above.
38. Vincent, J. (October 2022), 'AI-generated imagery is the new clip art as Microsoft adds DALL-E to its Office suite', The Verge, available at https://www.theverge.com/2022/10/12/23400270/ai-generated-art-dall-e-microsoft-designer-app-office-365-suite (accessed 16th October, 2022).
39. Ibid.
40. Ibid.
41. Ibid.
42. Pfeiffer Consulting, ref. 8 above.
43. Ibid.
44. Ibid.
45. Ibid.
46. Ibid.
47. Ibid.
48. Anantrasirichai and Bull, ref. 1 above.
49. Greene, T. (July 2018), 'A beginner's guide to AI: Neural networks', TNW, available at https://thenextweb.com/news/a-beginners-guide-to-ai-neural-networks (accessed 5th October, 2022).
50. Anantrasirichai and Bull, ref. 1 above.
51. Cimaglia, M. (December 2018), 'The Future of Video Advertising is Artificial Intelligence', Entrepreneur, available at https://www.entrepreneur.com/article/323756 (accessed 15th October, 2022).
52. Ibid.
53. Ibid.
54. Ibid.
55. Ibid.
56. Ibid.
57. Ibid.
58. Ibid.
59. Ibid.
60. Blain, L. (May 2019), 'Samsung AI brings the Mona Lisa (or any other picture) to life', New Atlas, available at https://newatlas.com/samsumg-ai-mona-lisa-photo-talking/59828/ (accessed 16th October, 2022).
61. Ibid.
62. Ibid.
63. Facebook (September 2022), 'Introducing Make-A-Video: An AI system that generates videos from text', available at https://ai.facebook.com/blog/generative-ai-text-to-video/ (accessed 17th October, 2022).
64. Ibid.
65. Ibid.
66. Ibid.
67. Anantrasirichai and Bull, ref. 1 above.
68. Ibid.
69. Ibid.
70. Ibid.
71. Ibid.
72. Ibid.
73. Ibid.
74. Adobe, 'Animate in real time. Really', available at https://www.adobe.com/products/character-animator.html (accessed 17th October, 2022).
75. Anantrasirichai and Bull, ref. 1 above.
76. Ibid.
77. Ibid.
78. Justesen, N., Bontrager, P., Togelius, J. and Risi, S. (2020), 'Deep learning for video game playing', IEEE Trans Games, Vol. 12, No. 1, pp. 1–20.
79. Riedl, M. and Bulitko, V. (2012), 'Interactive narrative: A novel application of artificial intelligence for computer games', 16th AAAI Conference on Artificial Intelligence.
80. Héctor, R. (2014), 'MADE – massive artificial drama engine for non-player characters', FOSDEM VZW, TIB AV-Portal, available at https://av.tib.eu/media/32569 (accessed 16th October, 2022).
81. Wang, P., Rowe, J., Min, W., Mott, B. and Lester, J. (2017), 'Interactive narrative personalization with deep reinforcement learning', International Joint Conference on Artificial Intelligence, available at https://www.ijcai.org/proceedings/2017/0538.pdf (accessed 16th October, 2022).
82. Anantrasirichai and Bull, ref. 1 above.