Asheville NC IT Support Company | Blue Ridge Technology, Inc.

ChatGPT and Bing and Bard and…Phishing? Dark Sides of Generative AI

AI is making phishing scams more dangerous
Have you played around with the latest generation of AI chatbots recently? They’ve been making waves. It started with OpenAI’s ChatGPT, which can answer complex queries and do a surprisingly good job, at least most of the time. Seriously: you can ask it to write a review of your favorite smartphone in the style of a Shakespearean sonnet (or a Shakespearean sonnet in the style of a Wikipedia entry), and it delivers something awfully close to what you asked for!

You can also ask it to write an article like this one. And while this writer certainly hopes the machines aren’t coming for his job quite yet, the results are honestly a little amazing. To be fair, it’s not pro-quality copy; for now it’s more like B-plus-level high-school writing. And there are still some real issues with the tech: it can’t reliably tell what’s true, and it has no qualms about making things up.

Still, this technology (called generative AI) has people, and big businesses, excited. Microsoft has started integrating a version of ChatGPT into its Bing search engine, and Google is doing the same with its internally developed Bard. But like any new advance in tech, generative AI has some dark sides, and your business needs to be ready. Here are a few situations where generative AI could create negative outcomes (or make existing ones worse).

1. Makes Phishing Emails More Convincing

We’ve written about phishing emails numerous times on this blog, and we usually tell you to watch out for various red flags. One of those is wording that just seems…off: typos, odd grammar, or phrasing that doesn’t sound like anything Apple or your bank would actually say. The scammers behind these emails are just as excited as anyone about tools like ChatGPT, because these tools can produce content that’s typo-free. They can even generate content designed to sound like a specific kind of document, such as a press release or a business email.

2. Creates Misinformation and Makes Up Sources

We’re not trying to be uncharitable here, but it’s true: these generative AI tools have been known to state falsehoods, create misinformation, and even invent fake academic citations (such as referencing a nonexistent article in an academic journal). No one is trying to be deceitful here; rather, it’s a function of how generative AI works. In essence, it writes content by guessing the most likely next word (or part of a word) based on what it has already generated. So if you ask it to create something that would usually have sources or citations, it will produce something that looks the part, but the actual facts and citations may or may not be right.

What does this mean for your business? First, if you create content, be very careful how you use generative AI output, and thoroughly fact-check anything you publish. Second, as these tools become more commonplace (and even get built into popular search engines), you’ll need to be more skeptical of what you read online. (We know, that’s not exactly the direction we want the internet to go, either!)
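For the curious, the “guess the most likely next word” idea can be sketched in miniature. This is a toy bigram model over a made-up corpus, a vast simplification of the huge neural networks behind ChatGPT and Bard, but it shows why the output looks fluent without being tied to facts: the program only knows which words tend to follow which.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for training data (purely illustrative).
corpus = (
    "the bank will never ask for your password . "
    "the bank will send a statement . "
    "the bank will never call you ."
).split()

# Count which word follows which word in the corpus.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=6):
    """Repeatedly append the single most likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # no known follower; stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Starting from “the”, the sketch produces a perfectly plausible sentence purely from word-follows-word statistics, with no notion of whether the statement is true. Real models work with far more context than one previous word, but the principle is the same.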

3. Could Harm Your SEO

If your business does any kind of search engine optimization (SEO) or content marketing, be aware that, for the time being, the major search engines frown on generative AI content on your website. We don’t know how well they can detect it, but if they determine you’re publishing this kind of content without modification, it could hurt your site’s rankings. (And this writer breathes a sigh of relief!)

Of these risks, the most crucial one to watch out for is the new and improved phishing scams. Careful grammar checking might not be enough to keep you safe anymore. Now more than ever, you need business-grade security tools, and we can help you choose and set them up. Reach out to our team to learn more or to get started.