Threat actors have a new tool in their belts: ChatGPT.
ChatGPT is not foolproof, but experts warn its ability to customize and scale cyberattacks could threaten enterprises. Meanwhile, the hype around the tool is obscuring those security concerns.
More than half of IT professionals predict that ChatGPT will be used in a successful cyberattack within the year, according to a BlackBerry survey of 1,500 IT decision makers across North America, the U.K. and Australia.
More than 7 in 10 IT professionals see ChatGPT as a potential cybersecurity threat and say they are concerned about it, the survey found.
As IT departments begin to explore potential use cases, teams must also consider how generative AI might be used to target their companies.
“As with any new or emerging technology or application, there are pros and cons,” Steve Grobman, SVP and CTO at McAfee, said in an email. “ChatGPT will be leveraged by both good and bad actors, and the cybersecurity community must remain vigilant in the ways these can be exploited.”
Anyone with access to a web browser can use ChatGPT, narrowing knowledge gaps and lowering the barrier to entry, Grobman said.
With ChatGPT’s help, threat actors can generate well-written messages in bulk that are targeted to individual victims. This typically improves the odds of a successful attack, according to Grobman.
Following the rollout of ChatGPT, people have struggled to distinguish between AI-generated and human-written text. More than two-thirds of adults could not tell the difference between a love letter written by an AI tool and one written by a human, according to McAfee data published this month.
OpenAI released a text classifier to help organizations identify AI-generated text, though the tool is unreliable. Plagiarism detection company Turnitin and several other vendors have started testing detection capabilities for AI-assisted writing as well.
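OpenAI has not published how its classifier works, but one common signal such detectors rely on is perplexity: how predictable a passage looks to a language model, with AI-generated text tending to score as more predictable than human prose. The sketch below illustrates that idea using the open-source GPT-2 model via Hugging Face's transformers library; the threshold, the sample sentence and the notion that perplexity alone is enough are illustrative assumptions, not OpenAI's method.

```python
# Illustrative sketch only: OpenAI has not disclosed its classifier's internals.
# One widely used signal is perplexity under a language model; AI-generated
# text tends to score lower (more predictable) than human writing.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text`; lower suggests more machine-like prose."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input tokens as labels makes the model report its own
        # average negative log-likelihood over the passage.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Hypothetical cutoff; real classifiers combine many signals and are still
# unreliable, which is why detection results should be treated with caution.
if perplexity("Your heart is a lighthouse and I am the sea.") < 40.0:
    print("Possibly AI-generated")
```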
Phishing and prevention
For businesses, the main goal in countering phishing attacks is to stop employees from clicking malicious links or handing over company information.
More than half of IT decision makers believe ChatGPT’s ability to help hackers craft more believable phishing emails is a top global concern, according to the BlackBerry survey.
Educating employees on how threat actors use ChatGPT and similar tools can increase caution and empower them to avoid these types of attacks, according to Paul Trulove, CEO at security software company SecureAuth.
In addition to education, businesses can adopt a zero-trust model, which requires every user to verify their identity and grants access only to limited resources, so even successful intruders are contained, Trulove said in an email.
While this strategy may be met with internal resistance, managing access through zero-trust makes it harder for attackers to compromise enterprise systems, Trulove said.
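As a rough illustration of the pattern Trulove describes, the sketch below shows a verify-then-least-privilege check: identity is confirmed on every request, and even a verified session reaches only the resources its role explicitly allows. All role, resource and helper names here are hypothetical.

```python
# Minimal sketch of a zero-trust access check: verify identity on every
# request, then apply a default-deny, least-privilege policy. Names are
# hypothetical and stand in for real identity and policy systems.
from dataclasses import dataclass

# Least-privilege policy: each role maps to an explicit allowlist of resources.
POLICY = {
    "finance-analyst": {"ledger-read"},
    "it-admin": {"ledger-read", "server-config"},
}

@dataclass
class Request:
    user: str
    role: str
    token_valid: bool  # stand-in for MFA / token verification
    resource: str

def authorize(req: Request) -> bool:
    # Step 1: never trust the network; re-verify identity on every request.
    if not req.token_valid:
        return False
    # Step 2: grant only explicitly allowed resources (default deny).
    return req.resource in POLICY.get(req.role, set())

# A phished credential without a valid token gets nowhere, and even a valid
# session is confined to its role's allowlist, limiting the damage.
print(authorize(Request("alice", "finance-analyst", True, "ledger-read")))      # True
print(authorize(Request("mallory", "finance-analyst", True, "server-config")))  # False
```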
Lowering the barrier to entry and shadow IT
Phishing attacks aren’t the only potential nightmare scenario. Cybercriminals could use AI-powered tools to offer malware code-as-a-service, according to Chad Skipper, global security technologist at VMware.
“The nature of technologies like ChatGPT allows threat actors to gain access and move through an organization’s network quicker and more aggressively than ever before,” Skipper said in an email.
Nearly half of IT decision makers believe ChatGPT’s ability to help less experienced hackers improve their technical knowledge is a top global concern, according to the BlackBerry survey.
It used to take cybercriminals hours to gain access to a network; now code can be generated within seconds, Skipper said.
Threats can also come from within an organization in the form of shadow IT.
Businesses should remind employees about intellectual property protection, according to Caroline Wong, chief strategy officer leading the security, community and people teams at pentesting company Cobalt.
“OpenAI does warn users not to share confidential information, so it’s important to reiterate that anything they use it for should be considered public,” Wong said in an email.
More than two-thirds of workers said they were using AI tools without first informing their bosses, according to a Fishbowl survey of almost 11,800 users.
“To mitigate risk and tackle the shadow IT issues, organizations must turn on the lights and gain visibility into every process and packet, limit the blast radius with segmentation, inspect in-band traffic for advanced threats and anomalies, integrate endpoint and network detection and response and conduct continuous threat hunting,” Skipper said.
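As one concrete illustration of the segmentation piece of that advice, the sketch below models a default-deny map of which network segments may communicate. Real deployments enforce this in firewalls or microsegmentation platforms; the segment names and rules here are hypothetical.

```python
# Illustrative sketch of "limit the blast radius with segmentation":
# a default-deny matrix of permitted east-west flows between segments.
# Segment names are hypothetical examples.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),  # web servers may call the app tier
    ("app-tier", "db-tier"),   # app servers may reach the database
}

def flow_permitted(src_segment: str, dst_segment: str) -> bool:
    # Default deny: any flow not explicitly allowed is blocked, so a
    # compromised host cannot move laterally to arbitrary segments.
    return (src_segment, dst_segment) in ALLOWED_FLOWS

# A workstation compromised via an AI-crafted phishing email still cannot
# reach the database tier directly.
print(flow_permitted("workstations", "db-tier"))  # False
print(flow_permitted("web-tier", "app-tier"))     # True
```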