
How ChatGPT Is Disrupting Security Norms, with Tech Evangelist Jack Roehrig

Written by Elias Terman | 3/10/23 1:00 PM

ChatGPT is garnering a lot of attention lately. There’s been a surge of blogs and articles touting its positive capabilities: copywriting, code writing, virtual assistants, customer support bots, contract writing, and more. The list is broad in scope. But there’s also been a lot of fear, uncertainty, and doubt (FUD) around ChatGPT and AI in general. Cybersecurity experts in particular are concerned about the potential misuse of the technology.

 

Curious about the buzz, I sat down with longtime cybersecurity and privacy practitioner and evangelist Jack Roehrig to hear his thoughts on how ChatGPT is changing security now and how it will change it in the future.

 

Jack, what are your thoughts on the FUD making the rounds about ChatGPT?

 

Is ChatGPT the new singularity? Nah. Is it going to be disruptive? Most definitely. It has some restrictions to prevent misuse, and they’re effective. And despite OpenAI’s name, it’s far from open. So we’re not likely to see hackers running their own self-hosted version of ChatGPT to bypass safety and misuse restrictions. That said, you can bet that threat actors are as eager to put ChatGPT to use as legitimate users are.

 

You’ve been obsessed with information security and privacy for most of your life. You’ve worked with and for a lot of technology companies. You’ve been on the other side of the business as a CSO/CISO. What are some of your thoughts about the impacts ChatGPT is likely to have on cybersecurity?

 

One of the first things that comes to mind is how it can be used in social engineering attacks. Phishing is still one of the most common ways that attackers establish a foothold in an organization’s network. Phishing messages written by a tool like ChatGPT will likely have a better chance of evading anti-phishing security and will more likely be read (and believed) by recipients. Imagine trying to write phishing emails in 35 languages, while trying to sound conversationally casual. That’s cake for ChatGPT. It’s a powerful tool for generating context-aware content that’s informal and conversational (and it’s good at formal too).

 

As for spearphishing, attackers will be able to weave target-specific information and content into their messages more effectively, drawing on the context the underlying GPT model already has from its training data. So, if an attacker wants to reach a target company’s CFO with a phishing message, they can use the chatbot to generate content that is highly specific to that person, perhaps referencing the CFO’s projects or activities.

 

ChatGPT passes the Turing test. It exhibits intelligent behavior equivalent to—or indistinguishable from—that of a human. Attackers who are not native English-language speakers will be able to use it to make their illicit pitches that much more believable. No more atrocious grammar and spelling mistakes from supposed foreign princes. 

 

One of the ways that anti-phishing tools screen for attacks is by looking at the content of a message. If the tool sees that numerous recipients are getting the exact same message, it’ll be tagged as suspicious and go through further evaluation. But ChatGPT can automate message generation such that each target recipient gets a (somewhat) unique lure message, thus evading the anti-phishing mass message test.

 

Another type of social engineering is a fake website made to look legitimate but containing links to malicious sites or triggers for drive-by malware downloads. ChatGPT is more than capable of writing entire websites that can be used for malicious purposes.

 

It looks like companies will need to up their game on user security awareness training and screening for phishing campaigns hitting their email. In what other ways do you foresee ChatGPT being used maliciously?

 

We definitely have to be concerned about AI-generated malware. Intelligent chatbots lower the barrier to entry for creating malicious software. I’ve heard people saying that the next big coding language is English. Even someone with very low technical skills can instruct ChatGPT to write code for things like ransomware, viruses, spyware, keyloggers, and other programs intended to do harm.

 

Forums are already popping up on the dark web for threat actors to exchange information on how to perfect their AI-generated code. They’ve even shared example code that ChatGPT generated for the purpose of encrypting/decrypting files, which of course can be used in ransomware attacks.

 

AI-generated code can be indistinguishable from human-written code, and it’s faster and easier to produce. That could help threat actors change their TTPs [tactics, techniques, and procedures] faster than they can be detected and stopped. For example, ChatGPT can write code for polymorphic malware, which mutates and changes its signatures frequently. This is a real concern for security researchers and the anti-malware industry. Detection capabilities might also need to use AI to keep pace with modern malware and fight fire with fire.

 

How do you see security vendors incorporating ChatGPT or similar technologies into their products or processes?

 

There are a lot of potentially innovative uses for AI tech like ChatGPT in security platforms. Take incident response, where security teams are buried in alerts. A chatbot could serve as an incident commander. Or, more safely, a co-commander—one that directs some of the simpler processes of incident response in a human voice. Keeping cool during an incident is critical, so the conversational nature of the ChatGPT interface brings a lot of value to IR management.

 

In a similar vein, it could make a great user interface for security programs. One example: analysts ask questions in plain language to diagnose and remediate vulnerabilities. This could help alleviate the shortage of highly skilled technical people by enabling junior-level staff to interact with the chatbot to get answers. I can’t wait to see what incident response startups innovate using ChatGPT.
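To illustrate what that kind of natural-language interface might look like, here’s a minimal sketch of a security Q&A helper built on the OpenAI chat API. The model name, system prompt, and the ask_security_assistant() helper are illustrative assumptions, not any vendor’s actual integration.

```python
# Minimal sketch of a plain-language security Q&A helper.
# Assumes the openai Python package (pre-1.0 ChatCompletion interface)
# and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SYSTEM_PROMPT = (
    "You are a security assistant. Answer questions about vulnerability "
    "remediation in plain language, and say explicitly when you are unsure."
)

def ask_security_assistant(question: str, history: list | None = None) -> str:
    """Send one analyst question (plus optional chat history) and return the reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": question})

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # assumed model; any chat-capable model would do
        messages=messages,
        temperature=0.2,         # keep guidance conservative rather than creative
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_security_assistant(
        "We found an outdated OpenSSL build on an internet-facing host. "
        "What should we check first?"
    ))
```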

 

On the other hand, one question is whether the technology is reliable enough yet to use for something as important as vulnerability remediation. What if it walks you through a software update that doesn’t actually fix the problem? You’ve wasted time and you’re still vulnerable, all while thinking you’ve already fixed the problem.

 

So how do we know if its answers can be trusted?

 

It depends on the data it was trained on. As they say, garbage in, garbage out. And right now it’s really hard to know exactly what ChatGPT was trained on and whether the facts it’s been fed are accurate. It’s been widely discussed that, as of now, the accuracy of ChatGPT’s answers is wildly inconsistent.

 

And you might not realize it, because the chatbot delivers its content with confidence. It’s not always clear where it’s pulling information from, although the new Bing version attempts to cite sources. These limitations carry high stakes when you’re relying on it for detailed instructions on how to respond to a security event. Can those steps be trusted?

 

Although it’s early in the game, a few companies have already announced they’re jumping on the ChatGPT bandwagon. Can you cite a few examples?

 

Microsoft dropped over $10 billion into OpenAI. I don’t recall ever hearing about a private investment from a single company being that massive. Microsoft might have the coin, but that’s still commitment.

 

I recently read a step-by-step guide to integrating ChatGPT with Microsoft Sentinel using the OpenAI API. Sentinel is a broad platform that provides both SIEM and SOAR capabilities for threat detection and response. Adding ChatGPT to the mix could accelerate investigations, automate responses, and streamline the incident management process.
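As a rough illustration of what the OpenAI side of such an integration might look like, here’s a hedged sketch that takes an incident payload, assumed to be handed off by a Sentinel automation rule or playbook, and asks the model for a triage summary. The field names are hypothetical, and this is not the guide’s actual implementation.

```python
# Illustrative sketch only: a Sentinel automation rule or playbook is assumed
# to hand us an incident payload; the field names below are hypothetical, and
# only the OpenAI enrichment step is shown.
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_incident(incident: dict) -> str:
    """Ask the model for a plain-language triage summary of one incident."""
    prompt = (
        "Summarize this security incident for an on-call analyst and suggest "
        "reasonable first triage steps:\n\n" + json.dumps(incident, indent=2)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    return response["choices"][0]["message"]["content"]

# Hypothetical incident shape, for demonstration only.
incident = {
    "title": "Multiple failed sign-ins followed by a success",
    "severity": "Medium",
    "entities": ["user: jdoe", "ip: 203.0.113.45"],
    "alert_count": 7,
}
print(summarize_incident(incident))
```

In a real playbook, the returned summary would presumably be written back to the incident as a comment rather than printed.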

 

Island, the security-focused enterprise browser company, announced it is integrating ChatGPT technology into its browser to create an “assistant” that provides contextual awareness of what someone is working on and prompts users based on their behavior and subject matter. So, for instance, the AI-guided assistant can learn product documentation and help people understand every aspect of the product or service they’re working on.

 

Microsoft, too, announced it’ll be embedding the technology in its Edge browser, making it much more intuitive and helpful to users. And, of course, it launched a new version of its Bing search engine that incorporates technology originating from OpenAI.

 

There’s so much potential here to bring contextual AI to security products and services we all use.

 

Meanwhile, what other thoughts and concerns do you have about ChatGPT and similar chatbots?

 

What makes these kinds of tools so useful is their natural language interface. Unlike query languages such as SQL, the user doesn’t need any special knowledge or skills to ask ChatGPT a question or give it a task to perform. Anyone can use it with virtually no training. And the more someone uses it, the more they learn how to tune their input to get tailored output.

 

People who have been around the IT industry for a while might remember Ask Jeeves. The butler-themed search engine touted that it let users get answers to questions in everyday natural language, in addition to traditional keyword searching. This tool was ahead of its time conceptually, but lost out to Google’s more powerful search engine. The irony is that many people now say that Google is at risk of losing business to a more user-friendly chatbot like ChatGPT.

 

Any parting thoughts on how ChatGPT is changing security?

 

It’ll become much more useful when its training data is brought up to date and trusted, vetted organizations are allowed to provide their own content as input. And I presume that, in time, companies will be able to adapt a chatbot by training it on their own information, such as documentation, product or service manuals, standard operating procedures, the computing environment, and so on. Then the question becomes: who owns or has access to an organization’s proprietary information? Is it walled off from the rest of the world, and if so, how secure is that wall? Can information be removed if necessary?

 

ChatGPT still has a ways to go before I’d call it fully baked. OpenAI co-founder Sam Altman has said that it’s “cool” but “a horrible product.” Another co-founder, Greg Brockman, said that releasing the tool last November was basically a last resort after they had issues with internal beta testers. They opened ChatGPT up to the world and had millions of beta testers overnight, many of whom are easily finding glitches that show it’s really not ready for prime time yet.

 

While it makes sense for companies to investigate how they can use ChatGPT to support their business operations, they must understand the product can’t be trusted just yet—especially for critical functions such as medical care that require a high level of accuracy.

 

The world has tremendous interest in ChatGPT, even though it’s still in its infancy. Its potential is enormous, but there are a lot of questions and concerns that need to be worked out, too.