
In the Age of AI: How AI and Emerging Technologies Are Disrupting Industries, Lives, and the Future of Work
Ebook · 233 pages · 4 hours


Rating: 3.5 out of 5 stars

  • Artificial Intelligence

  • Job Displacement

  • Technology

  • Technological Advancements

  • Electric Vehicles

  • AI Takeover

  • Technological Singularity

  • Future Society

  • Technology Vs. Tradition

  • Education Reform

  • Disruptive Innovation

  • Rise of the Machines

  • Secret Identity

  • Chosen One

  • Found Family

  • Future of Work

  • Autonomous Vehicles

  • Industries

  • AI in Healthcare

  • Blockchain

About this ebook

"Artificial Intelligence, deep learning, machine learning - whatever you're doing if you don't understand it - learn it. Because otherwise you're going to be a dinosaur within three years." - Mark Cuban


A.I. has changed the way we do business and make money. Now coming for our

Language: English
Publisher: New Degree Press
Release date: Aug 23, 2021
ISBN: 9781637305263

Related to In the Age of AI

Related ebooks

Artificial Intelligence (AI) & Semantics For You

Reviews for In the Age of AI

Rating: 3.67 out of 5 stars

6 ratings2 reviews

What our readers think

Readers find this title lacking in accuracy and credibility.


  • Rating: 5 out of 5 stars
    5/5

    Dec 26, 2024

    awesome book, very insightful. The book is contrarian and interesting.
  • Rating: 1 out of 5 stars
    1/5

    Jul 17, 2024

    Chapter 1. "One day, George Washington's personal servant William Lee [...] went to get a patent from Queen Elizabeth I."

    Nope. If this book is that wrong about basic facts, it has nothing to say worth our time about science/tech.

Book preview

In the Age of AI - Sam Mielke

Part 1

AI Overview

Chapter 1

AI and the History of Jobs

It’s 6:30 a.m., and your AI-powered Eight Sleep smart bed, sleep app, alarm clock, and smart watch have decided that now is the perfect time for you to get out of bed. You grudgingly get up from a long night of rest to grab your daily coffee, which is set to brew as soon as you rise. You sit down at the kitchen table with a tablet that shows you personalized daily news and social media feeds. Then your AI personal assistant reminds you of your schedule for the day. All this happens while you eat a well-balanced meal that fits your health and nutritional needs, based on data from your swallowable digestive sensor, which continuously analyzes the health of your gut bacteria. After you finish eating, it is time to head off to work. You stand up, hug the kids good-bye, and head out. Today, you have an important surgery to perform in downtown Chicago. Your AI assistant, which has your schedule down, has an autonomous Waymo car ready for you. Your self-driving car takes off. All of this happens in the span of twenty minutes.

At the hospital where you perform soft tissue surgery twice a week, you wheel in a patient. The second the patient gets into the surgery room and sees three surgeons, they say, "Sorry, but I want the surgical robot." Before the appointment, you gave the patient the option to choose between a human-operated surgery and a robot-dominant surgery. More often than not, patients choose the robot because they know that the STAR (Soft Tissue Autonomous Robot) can outperform human surgeons at stitching up soft tissue. The robot does the surgery five to ten times faster than a human, with greater efficiency and precision. This ability is key, since over 30 percent of soft tissue surgeries end in complications, with a portion of those ending in death (Brown, 2021).

At the hospital, you work directly alongside the surgical robot, which allows you to monitor the health and status of the patient while the robot performs the surgery. Using robots results in optimal efficiency and increased safety for the patient. It also allows you to focus on other tasks that the robot can’t execute.

On the ride home from work, you decide to pick up dinner and groceries. Using Flytrex’s drone delivery technology, you have your favorite fast-food meal delivered to a safe zone and then carried the last leg by an Uber courier, cutting the average delivery time from twenty-one minutes to seven. Rather than having your groceries delivered, you prefer to head into the store. You walk into the Amazon Go grocery store, which uses technology like computer vision, deep learning, and sensor fusion to detect when items are taken off or put back on the shelves and keeps a running tab in your mobile app. The store is fully automated, so there’s no need for a cashier or self-checkout, saving you time. Later, you arrive at your parents’ home to celebrate your dad’s seventy-fourth birthday. It is a special one, as you almost lost your dad to a degenerative disease, but through personalized medicine and the help of AI, doctors were able to predict, detect, and treat the disease before it had a chance to worsen and spread through his body.

While this hypothetical day may seem like something out of a sci-fi movie, the technologies mentioned are not far from reality. In fact, almost all of them already exist; they are simply in testing or small-scale adoption. Some view this reality as harmful or unnecessary, fearing the technology may be misused. Others believe these innovations will solve many of the problems society struggles with today by making people’s lives easier and freeing them up for more meaningful work.

What Is Artificial Intelligence?

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence (Beal, 2021). AI has many subsets, most notably, machine learning, which is an application of AI that allows machines to learn and become smarter from experience without having to be manually programmed. This ability is what powers Amazon’s product recommendations, Netflix’s movie suggestions, and Facebook’s automatic photo tagging.
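As a rough sketch of what "learning from experience" means, here is a toy nearest-neighbor classifier in Python: its answers improve as more labeled examples accumulate, with no rules being reprogrammed. The feature names and data are invented for illustration; the real recommendation systems at Amazon and Netflix are far more sophisticated.

```python
# Toy illustration of machine learning: a 1-nearest-neighbor classifier
# "learns" by accumulating labeled examples, not by being reprogrammed.
# The data below is invented for illustration.

def nearest_neighbor(examples, point):
    """Predict a label for `point` from the closest labeled example."""
    best_label, best_dist = None, float("inf")
    for features, label in examples:
        dist = sum((a - b) ** 2 for a, b in zip(features, point))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# "Experience": (hours of dramas watched, hours of action watched) -> taste
experience = [((9.0, 1.0), "drama"), ((1.0, 8.0), "action")]
print(nearest_neighbor(experience, (2.0, 7.0)))  # -> action

# More experience sharpens predictions; the code itself never changes.
experience.append(((6.0, 5.0), "drama"))
print(nearest_neighbor(experience, (5.5, 5.0)))  # -> drama
```

The point of the sketch is the last two lines: behavior shifts as data is added, which is the essence of learning without manual programming.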

AI has two main forms. The first, artificial narrow intelligence (ANI), also known as weak AI, is the most common form and refers to a computer’s ability to perform a single task extremely well, such as recognizing images, playing chess, or acting as a virtual assistant like Siri. On the flip side, artificial general intelligence (AGI), also called general AI, strong AI, or, at its extreme, superintelligence, refers to AI equal to or greater than human intelligence. The name derives from the idea that such AI can be applied to any problem, unlike weak AI. Strong AI is the stuff of the Terminator. This type of AI does not currently exist, but experts such as Ray Kurzweil and Elon Musk believe we could see it come to fruition within ten to twenty-five years (Goertzel, 2015).

In the early 1900s, pop culture introduced the world to the idea of artificially intelligent robots, starting with Maria from the movie Metropolis, followed by the heartless Tin Man from The Wizard of Oz. In 1950, Alan Turing, the British mathematician after whom the Turing Test is named, dedicated his career to researching the mathematical possibility of artificial intelligence. Turing argued that humans use acquirable information and logic to solve problems and make decisions, so he pondered whether machines could do this as well. This idea was the foundation of his famous paper, "Computing Machinery and Intelligence," where he explored how to construct intelligent machines and how to measure their intelligence (the Turing Test). In the years that followed, inspired by Turing’s ideas, many mathematicians, scientists, and philosophers began working with the concept of AI and its possibilities.

Innovation and adoption of AI were slowed considerably because computing power first needed to catch up. Computers in Turing’s time could be told what to do but could not store what they did. In addition, like most technological innovations at the start, computing was immensely pricey: in 1950, using a computer cost about $2.4 million per year, so access was limited to educational institutions and a handful of giant tech companies (Garner, 2021). From 1957 to 1974, AI development accelerated rapidly as computers stored more and more information and became faster, cheaper, and more accessible to the common person.

Fast forward a couple of decades: computing power had grown, and computers were faster and more affordable, which led to breakthroughs in many of artificial intelligence’s accomplishments. In 1997, IBM’s Deep Blue, a chess-playing computer program, defeated world chess champion Garry Kasparov (History.com Editors, 2021). The match marked the first time a computer had beaten a reigning world chess champion, a huge leap forward for artificial intelligence decision-making. That same year, speech recognition software began operating on Windows.

Fast forward to today, and computing power and storage, which had been holding us back, are no longer roadblocks. Moore’s Law is the observation that the number of transistors on a chip, and with it computing capability, doubles roughly every two years. Decades of that compounding have finally delivered the capability that allows AI to flourish. We are now at a point where AI is booming; every day, new technologies are being developed, and within a couple of decades, these intelligent machines could quite possibly surpass human intelligence. But before then, jobs will be disrupted, and life in the age of AI a few years from now will be far different from what it is today.
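A back-of-the-envelope sketch shows how powerful that compounding is. The two-year doubling period and the roughly 2,300 transistors of the 1971 Intel 4004 are commonly cited figures, used here purely for illustration:

```python
# Back-of-the-envelope Moore's Law: transistor counts doubling roughly
# every two years. The starting figure (~2,300 transistors on the 1971
# Intel 4004) is a commonly cited value, used here for illustration.

def transistors(start_count, start_year, year, doubling_period=2):
    """Extrapolate a transistor count assuming steady doubling."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

print(round(transistors(2300, 1971, 1991)))  # 20 years -> ~2.4 million
print(round(transistors(2300, 1971, 2021)))  # 50 years -> tens of billions
```

Twenty-five doublings turn a few thousand transistors into tens of billions, which is why capabilities that stalled in Turing’s era became feasible within a couple of generations.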

The History of Job Displacement

The idea of technology taking our jobs stirs a sense of mass hysteria. With every new technological advancement over the past five hundred years, someone has faced the prospect of their livelihood changing irreversibly.

In the sixteenth century, essentially all jobs were manual labor, but then, in 1589, the English clergyman William Lee had an idea to mechanize the production of stockings by creating the first knitting machine, the stocking frame. The machine precisely imitated knitters’ hand movements but was around twelve times quicker, and it could easily produce massive quantities of stockings. So, Lee went to get a patent from Queen Elizabeth I, but she rejected it because she was concerned about the welfare of the stocking knitters who would be out of a job (History of Information, 2021).

In the nineteenth century, we saw the textile riots. The Industrial Revolution was gaining momentum, and people were transitioning from rural communities to more compact cities. They typically held jobs in factories where machines were ramping up the speed of output for common items that had been handcrafted by artisan workers. Farmworkers, too, were up against mechanization. Population growth was booming, and more food was needed, so manufacturers adopted machines to do almost everything, including sowing seeds and harvesting crops. Like most tech advances that displace labor, this provoked a negative reaction from the working class. The Luddites were a movement that fought back against the growing use of automation. They rioted at textile factories, destroyed the machines, and even burned down the homes of the business owners (Victorian Era, 2021).

More recently, toward the end of the twentieth century, the car manufacturing boom prominently featured the use of robots in vehicle manufacturing. The robots were mainly used for repetitive, low-skill tasks that sped up output and subsequently lowered costs. This advancement also included assembly-line tasks like spray-painting, part assembly, and welding, which saw a transition from human jobs to robot jobs (with human supervision). Today, you can’t look around a manufacturing floor without seeing some sort of robot.

Throughout every period of technological advancement and change, the total number of jobs has always increased. We have seen technology like computers, steam power, and electricity progress every year for 250 years, and the United States has always maintained an unemployment rate between 5 and 10 percent, with some rare spikes and drops (Lebergott, 1957). So, based on history, some economists believe the upcoming fourth Industrial Revolution will create more jobs than it displaces.

However, the impending AI revolution is something completely different, unlike anything we have ever seen. We are heading for an era in which computer scientists will program us out of work, leading to net job declines unless governments step in to regulate technological advancements (which they shouldn’t do). That said, we will certainly create new jobs that haven’t even been invented yet, but it’s going to be incredibly difficult to retrain an entire working class. So, while I don’t go as far as Elon Musk in saying that AI will make jobs "kind of pointless," we will see massive displacement like never before (Gilchrist, 2020).

Chapter 2

AI Today

AI is probably the most important thing humanity has ever worked on. More profound than electricity or fire.

—Sundar Pichai, CEO of Alphabet and Google (Petroff, 2018)

Several different technologies and industries are integrated with AI, and in almost all cases, getting AI to make on-the-fly decisions and act like a human is one of the biggest hurdles in AI development. In the early 1990s, Microsoft co-founder Bill Gates made a bold prediction that computers would one day see, hear, communicate, and understand human beings, and he was right (Huddelston, 2020).

Seeing

Just as humans have sight, so do computers. Computer vision, programming a computer to process and identify visuals, has evolved tremendously over the years. In 1959, the first digital image scanner was invented, transforming images into grids of numbers (Demush, 2019). Fast forward to 2009, when Google started testing cars on roads using computer vision, with AI recognizing various traffic signs at 99.46 percent precision, better than humans (Islam and Raj, 2019). A couple of years later, in 2011, the US government used facial recognition to confirm the identity of Osama bin Laden after he was killed by SEAL Team Six (FaceFirst, 2021). Today, AI systems also excel at surveillance and image recognition.

Using AI surveillance in the right way can be a positive technological asset for society, keeping us safe. Companies like Athena Security are using AI-driven surveillance technology to detect handguns and assault rifles instantaneously, protecting employees and students from active shooters (Athena Security, 2021). Although most schools and offices have surveillance operators, research has found that CCTV operators struggle to recognize objects in a video feed after twenty to forty minutes of active monitoring; operators miss up to 45 percent of screen activity after twelve minutes of continuous video monitoring, rising to a 95 percent miss rate after twenty-two minutes (Dadashi, 2008). Athena Security’s AI can detect about nine hundred different types of firearms, recognizing threats with extreme accuracy in under three seconds. If the AI recognizes a threat, it can call 911 in real time and send along the video as well. This is a case where AI makes a positive difference in the world.

AI cameras monitoring citizens on the streets, albeit controversial, can be a huge asset in high-crime areas. Current AI technologies can pick you out of a large crowd, transcribe what you’re saying from lip movement, and, by analyzing micro-expressions and other biomarkers, gauge how you’re feeling. Using large data sets, AI can improve administrative operations, law enforcement, and national security. For example, China runs the Sharp Eyes program, in which law enforcement aggregates social media activity, video images, online purchases, travel records, and personal identity data in a police cloud.

This technology and cloud of data give authorities tools to keep track of criminals, potential lawbreakers, and average citizens. With tens of millions of video cameras active, China has enormous power to monitor its citizens in the name of safety and is, without a doubt, the leader in using AI facial recognition for surveillance. I’m not advocating AI monitoring of people on the streets, only pointing out that we have the technology to make society safer. These tools are and will continue to be used in military operations as well.

Hearing

Artificial intelligence is in your home, and it can listen! Whether it is Amazon’s Echo, Google Home, or Apple’s HomePod, these virtual assistant technologies have given us an always on-demand tool, answering any question we have based on simple commands. The future of
