GenAI
How Gen AI could trigger the next CrowdStrike catastrophe; Left unguarded, artificial intelligence can spread misinformation and enable attackers to commit
new crimes. Cybersecurity needs an upgrade.
MarketWatch,
Sarah Hammer,
18 August 2024 07:01,
1037 words,
English,
MRKWC,
Copyright 2024 MarketWatch, Inc. All Rights Reserved.
A faulty software update from cybersecurity company CrowdStrike CRWD caused a global meltdown in technology systems last month.
Financial institutions experienced significant disruption, with banks, brokerage firms and trading infrastructure suffering interruptions to online functions, operations and
access to important data.
Around the world, industries and governments were negatively impacted. In a public statement, CrowdStrike CEO George Kurtz announced that a logic error in a configuration update rolled out by the company had crashed Microsoft MSFT Windows devices running its software, causing the "Blue Screen of Death."
While the CrowdStrike catastrophe was not a cyberattack, the calamity reminds us that technology is a double-edged sword. A clear example of this is generative AI.
Generative AI holds the potential to significantly enhance cyber threat detection, containment, eradication and recovery by advancing automation of those processes. It
can also develop more sophisticated anti-fraud tools to detect anomalies in data and reduce false positives in anti-money laundering controls.
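To make that concrete, here is a minimal sketch of the kind of anomaly detection such anti-fraud tools build on, written in Python with scikit-learn's IsolationForest; the transaction features and figures are hypothetical, chosen purely for illustration:

```python
# A minimal sketch of AI-assisted transaction monitoring using
# scikit-learn's IsolationForest. Feature choices and amounts are
# hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated history: [amount_usd, hour_of_day] for normal transactions.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),  # typical amounts
    rng.integers(8, 20, size=1000),                 # business hours
])

# A few suspicious transactions: very large amounts at odd hours.
suspicious = np.array([[25_000.0, 3], [18_000.0, 2], [40_000.0, 4]])

# Fit on normal activity; contamination is the expected outlier rate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies, 1 for inliers.
for txn in suspicious:
    label = model.predict(txn.reshape(1, -1))[0]
    print(f"amount=${txn[0]:,.0f} hour={int(txn[1])} -> "
          f"{'FLAG for review' if label == -1 else 'ok'}")
```

A production system would use far richer features and route flagged transactions to human reviewers, but the principle is the same: learn what normal activity looks like, then surface the outliers.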
On the other hand, generative AI can enable attackers to commit new, more refined and increasingly diabolical crimes. McAfee reported that this began immediately after the CrowdStrike outage, as criminals seized the opportunity to release sophisticated phishing, malware and other fraudulent schemes.
Attackers also are using generative AI to develop more devious weapons. The technology can be leveraged to conduct social engineering (manipulating and deceiving
users to gain control over computer systems), as well as to build human-impersonation tools. For example, in February a finance employee was tricked into paying $25.6 million to fraudsters who used deepfake video to impersonate the company's CFO. Deepfakes have also been used to fool facial-recognition programs, impersonate celebrities and, in this year's Indian election, sway voters.
Generative AI can also be used to reverse engineer (disassemble and analyze) software systems to understand their functionality, design and implementation, giving malicious actors an extraordinary ability to identify new and more threatening vulnerabilities in IT systems.
Another challenge posed by generative AI is its inherent use of enormous datasets. The data used to train and run AI could be inaccurate or faulty, leading a model to generate false or misleading information and present it as fact. More nefarious still, a perpetrator could "poison" an AI model during the training phase by introducing corrupt data.
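To illustrate how little it takes, here is a minimal sketch, in Python with scikit-learn, of a label-flipping poisoning attack on a toy classifier; the dataset and model are synthetic assumptions, not a real-world attack:

```python
# A minimal sketch of training-data "poisoning": flip a fraction of
# labels in a toy classifier's training set and measure the accuracy
# drop. Purely illustrative; the data and model are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips 20% of the training labels before training.
rng = np.random.default_rng(seed=1)
poisoned_labels = y_train.copy()
idx = rng.choice(len(poisoned_labels), size=len(poisoned_labels) // 5,
                 replace=False)
poisoned_labels[idx] = 1 - poisoned_labels[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```

Even this crude attack measurably degrades the model, which is why provenance and integrity checks on training data matter.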
Finally, financial firms often rely on third-party AI and data providers, which have their own cybersecurity vulnerabilities. Trading and analytics software are oft-cited
examples of this.
Numerous government agencies, including the U.S. Treasury and the U.S. Department of Homeland Security, have jurisdiction over these issues and have issued recommendations. Among them are many common responses: increased collaboration with the government, identification of best practices and information-sharing among financial-sector participants. While these recommendations are all worthy of attention, their implementation is easier said than done, given the secrecy surrounding how large financial firms manage and respond to cyber incidents.
Also: Apple's OpenAI partnership threatens your online privacy and data security. Here's what we can do.
Time to act
More action is necessary. First and foremost, the U.S. should pour enormous resources into advancing its own technology, with a strong emphasis on AI, to enhance
national security and combat criminals.
Second, the agencies mentioned above should clarify and coordinate existing regulations relevant to the protection and enforcement of cybersecurity. They have issued
rules covering privacy, incident reporting, strategy, risk management, access controls, encryption standards and management of third-party vendors. These government
agencies must work to maximize the protective value of these existing rules and requirements.
Addressing these evolving threats also requires companies to make data governance an integral part of their DNA, encompassing crucial aspects such as data security, architecture and integrity. Effective data security involves managing access rights, safeguarding sensitive information and restricting data usage to authorized personnel with legitimate needs. Data architecture focuses on optimizing data and system structures for accessibility and utility, while data integrity ensures accuracy and consistency throughout the data lifecycle.
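As a concrete illustration of the data-security principle above, here is a minimal Python sketch of least-privilege access checks; the roles, permissions and record fields are hypothetical:

```python
# A minimal sketch of least-privilege access control for sensitive
# data. Role names, permissions and record fields are hypothetical.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "analyst":    {"read_aggregates"},
    "compliance": {"read_aggregates", "read_pii"},
    "admin":      {"read_aggregates", "read_pii", "write"},
}

@dataclass
class User:
    name: str
    role: str

def authorize(user: User, permission: str) -> bool:
    """Return True only if the user's role grants the permission."""
    return permission in ROLE_PERMISSIONS.get(user.role, set())

def read_customer_record(user: User, record: dict) -> dict:
    # Redact personally identifiable fields unless explicitly permitted.
    if authorize(user, "read_pii"):
        return record
    return {k: v for k, v in record.items() if k not in {"ssn", "dob"}}

record = {"id": 17, "balance": 1200.50,
          "ssn": "***-**-1234", "dob": "1990-01-01"}
print(read_customer_record(User("ana", "analyst"), record))     # redacted
print(read_customer_record(User("cal", "compliance"), record))  # full record
```

The point is structural: access to sensitive fields is denied by default and granted only by explicit role, so a compromised low-privilege account exposes less.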
By meticulously implementing these governance principles, organizations can enhance operational efficiency and strengthen their defenses against cyberthreats and
manipulations.
Read: Manipulated video shared by Elon Musk mimics Kamala Harris's voice, raising concerns about AI in politics
AI innovators themselves must also prioritize cybersecurity. For example, Microsoft recently unveiled the "Recall" feature as part of its Copilot+ PCs. After warnings from privacy and security experts, Microsoft committed to three major updates to Recall, addressing the experts' concerns.
Finally, and importantly, cybersecurity protection must include education. Ultimately, humans are responsible for protecting our systems from attacks. Investors and advisers must become literate in cybersecurity and prevention techniques, and their education should be ongoing to keep pace with technological developments.
Advisers should also learn the vulnerabilities of their systems and vendors' systems, and how these can be protected from attack. Investors should study their AI- and
technology-related investments to identify whether they have a clear cyber-risk management strategy, strong data governance and a protective mindset when innovating
and updating technology.
At the corporate level, companies need hundreds of thousands more cybersecurity experts to secure their systems. To meet the demand for human expertise, cybersecurity education should be provided at four-year colleges, community colleges, vocational schools, and even K-12. Educating at the K-12 level is essential, given the extent of potential harm at every level. Students can become versed in new technologies, learn not to trust everything they see on social media, and apply critical thinking instead.
There is no easy solution for cybersecurity in today's rapidly evolving AI-based landscape. Organizations must adopt a comprehensive, flexible approach combining
technological solutions, robust policies, education and unyielding vigilance to protect our digital assets effectively and maintain resilience against ever-strengthening
cyber threats.
Sarah Hammer is executive director at the Wharton School and adjunct professor at the University of Pennsylvania Carey Law School.
More: The CrowdStrike chaos will happen again. Here's why.
Also read: After CrowdStrike's 43% stock decline in a month, this analyst now says to buy