5 Big Problems With OpenAI's ChatGPT
The need to increase our daily productivity has resulted in the creation and evolution of
artificial intelligence. And like any major technology, there are misconceptions about it
that need to be set right.
Regardless of how you feel about AI, it's here to stay, and we'll likely rely on it more and
more as time goes on. Let's take a look at some myths you should stop believing.
Artificial intelligence (AI) is simply an imitation of human intelligence in machines. One of its goals is to give computers stronger cognitive reasoning, with more accuracy and less bias.
The products of artificial intelligence are evident in smartphones, computers, and IoT devices. One promising scenario is its use in industry and astronomy. For example, it played a significant role in Google and NASA's 2017 discovery of the planet Kepler-90i, as announced on NASA's website. That debut role for AI in astronomy brought its potential in space research further into the limelight.
However, misconceptions about AI have raised concerns over time, even as futurists and technologists insist that its influence is limitless. Let's take a critical look at some of these myths and separate the truths from the lies.
According to Fox Business, despite the more than 200,000 robots working in Amazon's retail warehouses as of 2019, the company's human workforce still grew by 23 percent between 2019 and 2020. Amazon keeps hiring, so what's the significance of the bots?
A glaring answer to that question is that companies around the world are constantly
searching for ways to reduce the workload of their employees. It's not to replace them
entirely, as is often wrongly believed.
There has been more collaboration between people and machines in industry than we realize, and the mail and goods delivery business is no exception. Consider DHL's delivery system, where drones lift the heavy packages so that delivery employees can work hands-free. Banks also use AI to serve you better.
Starship has also reported on its on-campus food delivery system, which pairs its bots with delivery employees. That way, employees control the bots instead of delivering the items themselves. This reduces stress significantly, increases productivity, and boosts efficiency.
In a practical sense, it's more of an alliance between humans and artificial intelligence. As such, it's somewhat faulty to accept this myth completely. Artificial intelligence won't take your job; it will only reshape the way you work and what you do at work.
The idea of creating machines that are more intelligent than humans is controversial. The likes of Stephen Hawking and Nick Bilton believe that AI could slip out of human control in the future. This has raised fears of an impending robot apocalypse, as depicted in many sci-fi movies.
As a counter-argument, Elon Musk (the CEO of Tesla) makes a more measured case for regulations, checks, and balances. He has compared AI to a demon we might lose control of if we do something foolish. It all depends on how far we go in building something we can't control.
Although efficiency and accuracy are strong points for AI, it still cannot attain a person's level of intuition and emotion. So any supposed machine takeover, which isn't a logical prospect, would be shaped by what we make of these systems.
A more solid argument is that artificial intelligence could fail us. Indeed, there has been evidence of artificial intelligence failing in medicine. One example is IBM Watson's flawed cancer treatment recommendations, as reported by Becker's Health IT.
Another example is a report by The New York Times detailing the death of a pedestrian hit by a self-driving Uber vehicle. So the real problem is one of AI oversight and human influence, rather than any potential for machines to eliminate humans and take over the world at will, when they don't even have a will.
It's clear that artificial intelligence now influences decision-making processes in business
intelligence, astronomy, medicine, and pharmacy. But the fact remains that no matter
how well you train a machine, it can't think for itself.
This is a limitation that will take ages for AI to overcome, and it may never happen. As such, most processes that use artificial intelligence will always depend on people for the final verdict.
The origins of artificial intelligence and machine learning go as far back as the 1950s. The term "machine learning" was coined by IBM's Arthur Samuel in 1959, after he developed a checkers-playing program that improved by learning from positions it had already seen.
However, the need to develop machines with artificial brains came into the limelight in the late 1940s. And because a broader term was needed for everything a machine does, including learning, artificial intelligence became a discipline in its own right in 1956.
As a result, using both terms synonymously isn't quite correct. Machine learning is a
process whereby a machine learns by experience, based on the information it has seen
before. For more information, we've looked at some examples of machine learning
algorithms.
Such information comes in the form of data, whether or not it comes with defined features. Artificial intelligence, on the other hand, is the broader field: it covers machine learning and every other process that brought about the products we know today.
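To make the distinction concrete, here is a minimal sketch, in Python, of the "learning from experience" idea behind machine learning. The scenario and numbers are invented for illustration: the program is never told the rule linking practice hours to test scores; it estimates one from examples it has already seen and then applies it to a new input.

```python
# A toy illustration of machine learning: estimate a rule from past examples
# (here, a straight line fitted by least squares) and reuse it on new input.
# The scenario and numbers are made up purely for demonstration.

def fit_line(xs, ys):
    """Return the slope and intercept that best fit the observed (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Experience": hours of practice and test scores the program has already seen.
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 61, 67, 71]

slope, intercept = fit_line(hours, scores)

# Apply the learned rule to an input the program has never seen before.
print(f"Predicted score after 6 hours: {slope * 6 + intercept:.1f}")
```

Everything that surrounds a system like this, from how the data is gathered to how the prediction is acted on, falls under the broader umbrella of artificial intelligence.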
Because of how the concept is portrayed in our imagination, it's normal for robots to come to mind whenever artificial intelligence is mentioned. However, artificial intelligence applies to all spheres of technology, and if robots were its only products, they would appear everywhere.
Beyond robotics, artificial intelligence offers more complex creations. The facial and fingerprint recognition systems in smartphones, smart decision-making home gadgets, smart healthcare equipment, and business intelligence, among others, are all artificial intelligence.
Robotics is only one field that may depend on AI. In some cases, we can take robotics on its own to mean machines that automatically carry out specific physical and complex tasks. That's why the terms "robotics" and "artificial intelligence" are used together in some instances.
Some controversies about AI portray it as a threat rather than a solution. Hopefully, after
reading up on these myths, you're in a better position to understand the truth behind AI.
Don't forget that what you believe is also a factor in how you choose to use or think about
AI.
ChatGPT is a powerful new AI chatbot that is quick to impress, yet plenty of people have pointed out that it has some serious pitfalls. Ask it anything you like, and you will receive an answer that sounds like it was written by a human; the model learned its knowledge and writing skills from vast amounts of information across the internet.
Just like the internet, however, truth and facts are not always a given and ChatGPT is
guilty of getting it wrong. With ChatGPT set to change our future, here are some of the
biggest concerns.
What Is ChatGPT?
ChatGPT is a large language model that was designed to imitate human conversation. It can remember things you have said to it in the past and is capable of correcting itself when wrong.
It writes in a human-like way and has a wealth of knowledge because it was trained on all
sorts of text from the internet, such as Wikipedia, blog posts, books, and academic
articles.
It's easy to learn how to use ChatGPT, but what is more challenging is finding out what its
biggest problems are. Here are some that are worth knowing about.
The first big problem is that ChatGPT isn't always right. OpenAI knows about this limitation, writing that: "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers." This "hallucination" of fact and fiction, as some scientists call it, is especially dangerous when it comes to something like medical advice.
Unlike other AI assistants, such as Siri or Alexa, ChatGPT doesn't use the internet to locate answers. Instead, it constructs a sentence word by word, selecting the most likely "token" to come next based on its training.
In other words, ChatGPT arrives at an answer by making a series of guesses, which is part
of the reason it can argue wrong answers as if they were completely true.
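As a rough, hypothetical sketch of that guessing process, the Python snippet below picks each next word by sampling from a hand-written probability table. It is only an illustration of the idea, not OpenAI's implementation: a real model like ChatGPT scores tens of thousands of possible tokens with a neural network rather than looking them up, and the words and probabilities here are invented.

```python
import random

# Toy next-token table: given the last word, the probability of each word
# that might come next. These words and numbers are invented; a real
# language model computes such scores with a neural network over a huge
# vocabulary of tokens.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "moon": {"rose": 1.0},
    "sat": {"quietly.": 1.0},
    "slept": {"soundly.": 1.0},
    "barked": {"loudly.": 1.0},
    "rose": {"slowly.": 1.0},
}

def generate(prompt, max_tokens=5):
    """Extend the prompt one token at a time by sampling from the table."""
    words = prompt.split()
    for _ in range(max_tokens):
        options = NEXT_TOKEN_PROBS.get(words[-1])
        if not options:
            break  # no known continuation for this word, so stop
        tokens, weights = zip(*options.items())
        words.append(random.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly."
```

Because each step only looks at what has been generated so far and picks a likely continuation, the output can read fluently and confidently even when it heads in the wrong direction.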
While it's great at explaining complex concepts, making it a powerful tool for learning, it's
important not to believe everything it says. ChatGPT isn't always correct—at least, not yet.
In fact, users have shown that ChatGPT can produce some terrible answers, including some that discriminate against women. But that's just the tip of the iceberg; it can produce answers that are extremely harmful to a range of minority groups.
The blame doesn't simply lie in the data either. OpenAI researchers and developers
choose the data that is used to train ChatGPT. To help address what OpenAI calls "biased
behavior", it is asking users to give feedback on bad outputs.
Given the potential to cause harm to people, you can argue that ChatGPT shouldn't have been released to the public before these problems were studied and resolved.
A similar AI chatbot called Sparrow, developed by DeepMind (a subsidiary of Google's parent company, Alphabet), was unveiled in September 2022. However, it was deliberately kept behind closed doors because of similar concerns that it could cause harm.
Perhaps Meta should have heeded the warning too. When it released Galactica, an AI language model trained on academic papers, it rapidly pulled the model after many people criticized it for outputting wrong and biased results.
Teachers have experimented with feeding English assignments to ChatGPT and have
received answers that are better than what many of their students could do. From writing
cover letters to describing major themes in a famous work of literature, ChatGPT can do it
without hesitating.
That raises the question: if ChatGPT can write for us, will students need to learn to write in the future? It might seem like an existential question, but when students start using ChatGPT to help write their essays, schools will have to think of an answer fast. The rapid deployment of AI in recent years is set to shock many industries, and education is just one of them.
There are other concerns too. Fake social media accounts pose a huge problem on the internet, and with the introduction of AI chatbots, internet scams could become easier to carry out. The spread of fake information is another worry, especially when ChatGPT makes even wrong answers sound convincingly right.
The speed at which ChatGPT can produce answers, which aren't always correct, has already caused problems for Stack Overflow, a website where users can post questions and get answers.
Soon after ChatGPT's release, answers generated by it were banned from the site because so many of them were wrong. Without enough human volunteers to sort through the backlog, it would be impossible to maintain a high quality of answers, damaging the website.
OpenAI chooses what data is used to train ChatGPT and how it deals with the negative
consequences. Whether we agree with the methods or not, it will continue developing this
technology according to its own goals.
While OpenAI considers safety a high priority, there is a lot we don't know about how the models are created. Whether you think the code should be made open source or agree that OpenAI is right to keep parts of it secret, there isn't much we can do about it.
At the end of the day, all we can do is trust that OpenAI will research, develop, and use
ChatGPT responsibly. Alternatively, we can advocate for more people to have a say in
which direction AI should head, sharing the power of AI with the people who will use it.
If you're interested in what else OpenAI has developed, check out our articles on how to
use Dall-E 2 and how to use GPT-3.
OpenAI admits that ChatGPT can produce harmful and biased answers, not to mention its
ability to mix fact with fiction. With such a new technology, it's difficult to predict what
other problems will arise. So until then, enjoy exploring ChatGPT and be careful not to
believe everything it says.
By Garling Wu
Published Dec 22, 2022
OpenAI's new chatbot has garnered attention for its impressive answers, but how much of
it is believable? Let's explore the dark side of ChatGPT.
When faced with the rapid growth of AI technology in today's labor market, employers
probably think of automated processes that make work easier, faster, and more efficient.
On the other hand, employees probably fear losing their jobs and being replaced by a
machine.
While artificial intelligence is designed to replace manual labor with a more effective and
quicker way of doing work, it cannot override the need for human input in the workspace.
In this article, you will see why humans are still immensely valuable in the workplace and
cannot be fully replaced by artificial intelligence.
Emotional intelligence is one distinguishing factor that makes humans forever relevant in
the workplace. The importance of emotional intelligence in the workspace cannot be
overemphasized, especially when dealing with clients.
As social animals, humans have a basic, undeniable need for emotional connection with one another. We build that connection through the hormones and emotions shared between the people involved. AI does not possess this, as it is made of software and chips, not biological cells.
Good business owners and company executives understand the importance of appealing to the emotions of staff and clients. A machine can't achieve that level of human connection, whereas, as a human, there are ways to increase your emotional intelligence.
AI can only work from the data it is given, so it struggles whenever a situation falls outside its programming. Such situations are common in the tech and manufacturing industries, and AI builders constantly try to find temporary workarounds. The idea that AI tools will adapt to any situation is one of several common myths around artificial intelligence.
Therefore, if you fear that AI may infiltrate all industries and eliminate the demand for
your professional skills, you can rest assured that won't happen. Human reasoning and
the human brain's power to analyze, create, improvise, maneuver, and gather information
cannot easily be replicated by AI.
AI also lacks the human ability to brainstorm creative concepts and new ways of working because, as already established, it can only work with the data it receives. Hence, it cannot think up new ways, styles, or patterns of doing work, and it remains restricted to the given templates.
Employers and employees know how important creativity is in the workspace. Creativity offers the pleasant sensation of something new and different, instead of the boring, repetitive actions AI is designed to perform. Creativity is the bedrock of innovation.
Related to creative thinking is the ability to think outside the box. Machines are designed
to "think within the box." That means AI tools can only function within the dictates of
their given data.
On the other hand, humans can think outside the box, sourcing information from various
means and generating solutions to complex problems with little or no available data.
Since AI cannot think outside the box and generate creative ideas for innovation, it cannot take over from humans in the workspace.
Soft skills are a must-have for every worker in the workspace. They include teamwork,
attention to detail, critical and creative thinking, effective communication, and
interpersonal skills, to mention but a few. These soft skills are in demand in every
industry, and you must develop them to succeed professionally.
Humans are taught and required to possess these skills, and developing them is valuable for everyone, regardless of position. Company executives need them to thrive, as does a team of field workers in any industry. Hence, these soft skills give you the upper hand in the workspace over AI.
However, soft skills are alien to machines with artificial intelligence. AI cannot develop
these soft skills critical to workplace development and growth. Developing these skills
requires a higher level of reasoning and emotional intelligence.
There would be no artificial intelligence without human intelligence. The word "artificial" itself signals that humans designed it. Humans write the lines of code with which AI is developed, humans input the data that AI machines operate on, and humans are the ones who use these machines.
Artificial intelligence applications are indeed gaining ground in the workplace, and they will replace many of the jobs people perform today. However, the jobs they take are limited to repetitive tasks requiring less intense reasoning. Additionally, evolving workplace demands will create new roles for humans as the world moves towards a more integrated tech landscape.
A report by the World Economic Forum projects that while machines with AI will replace about 85 million jobs by 2025, about 97 million new jobs will be created over the same period thanks to AI. So the big question is: how can humans work with AI instead of being replaced by it? That should be our focus.
In the present age, it would be difficult, if not impossible, to live without AI; and without humans, there would be no artificial intelligence. Forward-thinking organizations are already developing ways to combine human capabilities and AI to attain higher levels of productivity and innovation.
So the next time you hear that artificial intelligence threatens to eliminate humans from the workforce, refer to this article and rest assured that humans will always have the upper hand over AI.
Artificial intelligence is disrupting and revolutionizing almost every industry. With
advancing technology, it has the potential to improve so many aspects of life drastically.
And, with swathes of experts warning of the potential danger of AI, we should probably
pay attention. On the other hand, many claim that these are alarmist views and that
there’s no immediate danger from AI.
So, are concerns about artificial intelligence alarmist or not? This article will cover the five
main risks of artificial intelligence, explaining the currently available technology in these
areas.
AI is growing more sophisticated by the day, and its risks range from the mild (for example, job disruption) to the catastrophic and existential. The level of risk posed by AI is so heavily debated because there's a general lack of understanding (and consensus) regarding AI technology.
These risks are amplified by the sophistication of AI software. The classic hypothetical argument is the facetious "paper clip maximizer." In this thought experiment, a superintelligent AI has been programmed to maximize the number of paper clips in the world. If it's sufficiently intelligent, it could destroy the entire world in pursuit of this goal.
But, we don’t need to consider superintelligent AI to see that there are dangers already
associated with our use of AI. So, what are some of the immediate risks we face from AI?
1. Job Automation
Automation is a danger of AI that is already affecting society.
The issue is that for many tasks, AI systems outperform humans. They are cheaper, more
efficient, and more accurate than humans. For example, AI is already better at recognizing
art forgery than human experts, and it’s now becoming more accurate at diagnosing
tumors from radiography imagery.
A further problem is that many of the workers displaced by automation are ineligible for the newly created jobs in the AI sector because they lack the required credentials or expertise.
As AI systems continue to improve, they will become far more adept at tasks than
humans. This could be in pattern recognition, providing insights, or making accurate
predictions. The resulting job disruption could result in increased social inequality and
even an economic disaster.
2. Security and Privacy
The hope is that as AI-driven security threats rise, AI-driven prevention measures will rise to meet them. Unless we can develop measures to protect ourselves against AI-powered threats, we run the risk of a never-ending race against bad actors.
This also raises the question of how we make AI systems themselves secure. If we use AI algorithms to defend against various security concerns, we need to ensure the AI itself is secure against bad actors.
When it comes to privacy, large companies and governments are already being called out
for the erosion of our privacy. With so much personal data available online now (2.5
million terabytes of data is created daily), AI algorithms are already able to easily create
user profiles that enable extremely accurate targeting of advertisements.
The risk is that this technology could be extended to authoritarian regimes, or simply to individuals or groups with malicious intent.
3. AI Malware
AI can also be used to create malware that adapts its behavior to evade detection over time.
Newer smart technology (like self-driving cars) has been assessed as a high-risk target for
this kind of attack, with the potential for bad actors to cause car crashes or gridlocks. As
we become more and more reliant on internet-connected smart technology, more and
more of our daily lives will be impacted by the risk of disruption.
Again, the only real solution to this danger is that anti-malware AI outperforms malicious
AI to protect individuals and businesses.
4. Autonomous Weapons
Autonomous weapons—weapons controlled by AI systems rather than human input—
already exist and have done for quite some time. Hundreds of tech experts have urged the
UN to develop a way to protect humanity from the risks involved in autonomous weapons.
What happens when we start allowing AI algorithms to make life and death decisions
without any human input?
It’s also possible to customize consumer technology (like drones) to fly autonomously and
perform various tasks. This kind of capability in the wrong hands could impact an
individual's security on a day-to-day basis.
5. Deepfakes
Facial reconstruction software (more commonly known as deepfake tech) is becoming more and more indistinguishable from reality.
The danger of deepfakes is already affecting celebrities and world leaders, and it's only a matter of time before it trickles down to ordinary people. For instance, scammers are already blackmailing people with deepfake videos created from something as simple and accessible as a Facebook profile picture.
And that’s not the only risk. AI can recreate and edit photos, compose text, clone voices,
and automatically produce highly targeted advertising. We have already seen how some of
these dangers impact society.
The first step in mitigating the risks of artificial intelligence will be to decide where we want AI to be used and where it should be discouraged. From there, increasing research into AI systems, and debate about their uses, will help prevent them from being misused.