AI and cybersecurity: A new era
Morgan Stanley Wealth Management | 07/05/23
Summary: In the evolving landscape of artificial intelligence (AI), both cybersecurity teams and hackers are using AI to their advantage.
If you recently used your car’s GPS system, relied on auto-correct when writing an email, or conducted an online search, chances are you’ve experienced artificial intelligence (AI).
So, let’s discuss the basics of AI, how cybersecurity teams and hackers are using AI, and how you can help keep yourself safe.
What is AI?
AI is a broad term that refers to the science of simulating human intelligence in machines with the goal of enabling them to think like us and mimic our actions. This would allow AI machines to perform tasks that previously only human beings could handle. In some instances, generative AI, in which computer algorithms use existing content to produce new content, yields output that appears as if it were created by humans.
Many AI machines also attempt to determine the best way to achieve an outcome or solve a problem. They typically do this by analyzing enormous amounts of training data and then finding patterns in the data to replicate in their own decision-making.
While AI may seem futuristic, the concept behind it is believed to have begun in 1950, when British mathematician and logician Alan Turing speculated about “thinking machines” that could reason similarly to humans.1 The term “artificial intelligence” was born a few years later.2
How AI benefits cybersecurity
AI is reshaping nearly every industry, and cybersecurity is no exception. A recent research report estimated the global market for AI-based cybersecurity products was about $15 billion in 2021 and will surge to roughly $135 billion by 2030.3
Cybersecurity organizations increasingly rely on AI in conjunction with more traditional tools such as antivirus protection, data-loss prevention, fraud detection, identity and access management, intrusion detection, risk management, and other core security areas. AI is uniquely suited to tasks such as:
- Detecting actual attacks more accurately than humans, and prioritizing responses based on their real-world risks;
- Identifying and flagging the type of suspicious emails and messages often employed in phishing campaigns;
- Simulating social engineering attacks, which help security teams spot potential vulnerabilities before cybercriminals exploit them; and
- Analyzing huge amounts of incident-related data rapidly, so that security teams can swiftly contain the threat.
Additionally, AI has the potential to be a game-changing tool in penetration testing—intentionally probing the defenses of software and networks to identify weaknesses. By developing AI tools to target their own technology, organizations will be better able to identify their weaknesses and develop a significant edge in preventing future attacks. Stopping breaches before they occur would help protect the data of individuals and companies as well as lower IT costs for businesses.
How hackers abuse AI
Unfortunately, cybercriminals are relentless and resourceful. Here are four ways they’re using AI for their own benefit:
- Social engineering schemes:
These schemes rely on psychological manipulation to trick individuals into revealing sensitive information or making other security mistakes. They include a broad range of fraudulent activity categories, including phishing, vishing, and business email compromise scams.
AI allows cybercriminals to create more personalized, sophisticated, and effective messaging to fool unsuspecting victims. This means cybercriminals can generate a greater volume of attacks in less time—and experience a higher success rate.
- Password hacking:
Cybercriminals exploit AI to improve the algorithms they use for deciphering passwords. The enhanced algorithms provide quicker and more accurate password guessing, which allows hackers to become more efficient and profitable.
- Deepfakes:
This type of deception leverages AI’s ability to easily manipulate visual or audio content and make it seem legitimate. This includes using phony audio and video to impersonate another individual. The doctored content can then be broadly distributed online in seconds—including on influential social media platforms—to create stress, fear, or confusion among those who consume it.
Cybercriminals can use deepfakes in conjunction with social engineering, extortion, and other types of schemes.
- Data poisoning:
Hackers “poison,” or alter, the training data used by an AI algorithm to influence the decisions it ultimately makes. In short, the algorithm is fed deceptive information, and bad input leads to bad output.
Data poisoning can be difficult and time-consuming to detect. So, by the time it’s discovered, the damage could be severe.
Staying secure in a changing AI environment
As AI evolves, concerns about data privacy and risk management for both individuals and businesses continue to grow. Regulators are considering ways to develop AI and maximize its benefits while reducing the likelihood of negative impacts on society. However, there currently isn’t any comprehensive federal AI legislation in the United States.
So, what does this mean for you? Fortunately, the answer is surprisingly simple.
- You should review your current cybersecurity protection and make sure it follows best practices in critical areas such as passwords, data privacy, and social engineering.
- Regularly visit our Security Center for updates and tips.
- Understand how your assets are protected.
By staying secure, you make it easier for all of us to enjoy the conveniences and other enhancements in our daily lives made possible by AI.
The source of this article, “AI and cybersecurity: A new era,” was originally published on May 17, 2023.
How can E*TRADE help?
Consider investing in companies that help safeguard data and computer systems.
Find investing opportunities in this growing field of technology.
Investing and trading account
Buy and sell stocks, ETFs, mutual funds, options, bonds, and more.