
ChatGPT doesn’t keep its mouth shut


Last Updated September 2, 2023


This blog is part of the August Thought Leader newsletter.

By Lee Pender

ChatGPT is pretty cool, right? Just type in a query and the artificial intelligence (AI) bot returns an answer in crisp, clean English—complete with near-perfect grammar and punctuation.

There’s only one problem: that’s not quite true. Like any new technology, ChatGPT has its problems. Probably the biggest one right now is that it can present a security risk on many levels.

Take the now-infamous case of engineers at Samsung who used ChatGPT to check some of their code and apparently released proprietary information onto the internet, leading the company to ban employees from using the AI chatbot.

The Samsung incident has sent companies and governments scrambling to figure out how to best use, control and regulate AI. Their freakout is understandable, given that the risk could extend well beyond security concerns if it’s not addressed.

Will ChatGPT end humanity? Maybe…

Not long ago, a number of AI scientists signed a statement suggesting that technologies like ChatGPT pose a risk of human extinction. Extinction! That’s even worse than a cyberattack (although not by much). Perhaps the most unnerving tidbit about the statement was that Sam Altman, CEO of ChatGPT maker OpenAI, signed it. So did Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence.

When was the last time a company released a product and immediately followed its introduction with a warning that it could wipe out all of humanity? Even New Coke didn’t prompt that kind of reaction.

To be fair, though, not everyone agrees that ChatGPT will kill us all.

But for now, ChatGPT just talks too much

I honestly don’t mean to scare you with any of this (although as a security writer, I suppose that’s part of my job). So, in less apocalyptic but more pressing terms, leaders at accounting firms do need to be careful about how they let employees use ChatGPT.

Here at Rootworks, we’ve found the chatbot to be super-useful. We’ve had it write a letter of introduction to a prospective client, an explanation of what escheatment is and an email asking a client to sign a 1099 form.

Each query produced an excellent result in a matter of seconds, easily customizable for your firm. (What ChatGPT can’t do is write a column like this one. Just trust us. It can’t, OK?) If you’re doing generic, broad writing (again, not a witty and charming security column), ChatGPT can provide a great starting point.

In fact, it can probably get you 95% of the way toward a complete document ready to send. Just be careful not to make the same mistake the Samsung engineers made. Don’t feed any proprietary information into ChatGPT, and don’t enter any of your clients’ data into it. Better yet, don’t mention client names in ChatGPT at all.

The bottom line is that entering information into a ChatGPT query is basically the same as putting it on the open internet. It’s not unlike posting on social media or a message board. When you type something into a ChatGPT query, you’re making it public—probably forever. Like Run-DMC’s proverbial homeboy, ChatGPT never shuts up.

The bad guys are figuring out how to use ChatGPT, too

ChatGPT has other issues that need to be kept in mind. For one, it doesn’t verify anything, which means it can present as fact results pulled from information that’s completely false, pure propaganda or simply made up. A lawyer made that unfortunate discovery when he used ChatGPT to cite cases to support his client’s lawsuit—only to find that the cases documented in the AI chatbot never existed. ChatGPT just invented them. Why? You’d have to ask the robot that question.

So, go ahead and use ChatGPT to write an email to a client (remember, without entering the client’s name!). But if the chatbot gives you any sort of specific information—financial data, quotes from books or famous people, legal precedents—verify it before you send the document to another real person. If something looks questionable, question it and verify it.

Spreading misinformation is a problem, but a much bigger problem with ChatGPT is that cyberattackers are figuring out how to use it to their advantage. You know how you can often recognize a phishing email because it’s full of poor English? Not anymore! Phishing authors can now send messages with excellent grammar, punctuation and phrasing, making their emails harder than ever to recognize.

Particularly clever cyberattackers have discovered how to get ChatGPT to write malicious code they can use to break into systems and steal data for ransomware attacks. This new capability is bad news for firms that are trying to handle security with an in-house server.

Your firm needs to be prepared for AI security risks

There is good news, though. If you’re running your applications and storing your data in the cloud, you have the luxury of being able to rely on a team of security experts to virtually eliminate damage from data loss. You’re also keeping your clients’ critical information in the most secure data centers available. (If not…you’re on your own.)

Along those same lines, with phishing becoming more sophisticated, training employees to recognize security threats is essential. The best way to avoid a ransomware attack is never to click into one. Successful attacks almost always begin with human error, usually just a seemingly innocent click on a link. Minimizing the chance for human error minimizes your risk of falling victim to an attack.

Of course, if AI really does wipe out humanity, none of this will matter (kidding!). But for now, it’s best to treat ChatGPT as a useful and potentially even fun tool that just so happens to carry a lot of caveats.

As is the case with so many things in life, safety should be your number one priority when using ChatGPT or any form of AI chatbot.
