AI-powered scams and what you can do about them | TechCrunch

AI is there to help you, whether you’re writing an email, creating some concept art, or tricking someone in distress into believing you’re their friend or relative. AI is very versatile! But since some people don’t want to be scammed, let’s talk about what to look out for.

The last few years have seen a huge increase not only in the quality of media, from text to audio to images and video, but also in how cheaply and easily media can be created. The same tool that helps a concept artist create some imaginary monster or spaceship, or helps a non-native speaker improve their business English can also be used maliciously.

Don’t expect the Terminator to come knocking on your door and selling you a Ponzi scheme – these are the same old scams we’ve been facing for years, but with a generative AI twist that makes them easier, cheaper, or more convincing.

This is by no means a complete list, just some of the most obvious tricks that AI can supercharge. We’ll be sure to add new tricks as they appear in the wild, or any additional steps you can take to protect yourself.

Cloning the voices of family and friends

Synthetic voices have been around for decades, but advances in the last year or two mean a new voice can now be cloned from just a few seconds of audio. This means that anyone whose voice has ever been broadcast publicly – for example, in a news report, YouTube video or on social media – can have their voice cloned.

Scammers can and have used this technique to create believable fake versions of our loved ones or friends. The clone can be made to say anything, of course, but for a scam it will most likely be a voice clip asking for help.

For example, parents might get a voicemail from an unknown number that sounds like their son, saying that his luggage was stolen while he was traveling, that a stranger let him borrow a phone, and could mom or dad send some money to this address, Venmo account, business, etc.? Variants are easy to imagine: car trouble (“They won’t release my car until someone pays them”), a medical problem (“This treatment isn’t covered by insurance”), and so on.

This type of scam has already been done using President Biden’s voice! The people behind it were caught, but future scammers will be more careful.

How can you fight against voice cloning?

First of all, don’t try to spot a fake voice. They’re getting better every day, and there are many ways to hide any quality issues. Even experts get fooled!

Anything coming from an unknown number, email address, or account should automatically be treated as suspicious. If someone says they’re your friend or loved one, go ahead and contact the person as you normally would. They’ll probably tell you they’re fine and that it’s (as you’ve guessed) a scam.

Scammers often won’t follow up if they’re ignored – whereas a family member probably will. It’s okay to leave a suspicious message unanswered while you consider what to do.

Personal phishing and spam via email and messaging

We all get spam sometimes, but text-generating AI is making it possible to send mass emails customized to each individual. With regular data breaches, a lot of your personal data is out there.

It’s one thing when you get emails like “Click here to view your invoice!” with obviously scary attachments that took minimal effort to make. But with a little context, these messages suddenly become quite convincing, using recent locations, purchases, and habits to appear to come from a real person or describe a real problem. Armed with a few personal facts, a language model can create a customized version of these emails for thousands of recipients in a matter of seconds.

So what was previously “Dear customer, please see your invoice attached” becomes something like “Hi Doris! I’m from the Etsy promotions team. The item you were looking at recently is now 50% off! And if you use this link to claim the discount, shipping is free to your address in Bellingham.” A simple example, but still. With a real name, shopping habits (easy to find out), general location (same) and so on, suddenly the message is a lot less obviously a scam.

In the end, it’s still just spam. But this kind of customized spam was previously done by low-paid people at content farms overseas. Now it can be done at scale by LLMs with better prose skills than many professional writers!

How can you fight email spam?

As with traditional spam, vigilance is your best weapon. But don’t expect to be able to distinguish generated text from text written by humans. Very few people can, and no AI model can reliably do it either (despite the claims of some companies and services).

No matter how much the text improves, this type of scam still has the basic challenge of getting you to open suspicious attachments or links. As always, don’t click or open anything unless you’re 100% sure of the sender’s authenticity and identity. If you’re even slightly unsure – and that instinct is worth cultivating – don’t click, and if you have a knowledgeable person you can forward it to for a second pair of eyes, do so.
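One concrete habit that helps: the display text of a link can say anything, so check where it actually points before clicking. As a minimal illustration (the scam domain below is made up for the example), the hostname can be pulled out of a URL like this:

```python
from urllib.parse import urlparse

def real_host(url: str) -> str:
    """Return the hostname a link actually points to, or "" if none."""
    return urlparse(url).hostname or ""

# A link whose text says "Etsy" can point anywhere. Only the URL's
# hostname matters - and note the real domain here is not etsy.com.
print(real_host("https://etsy.promo-claims.example.com/discount"))
# -> etsy.promo-claims.example.com
```

Most email clients show the true destination when you hover over a link; the point is to read that hostname right to left, since anyone can put a trusted brand name in a subdomain.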

‘Fake you’ identity and verification fraud

Due to the number of data breaches in the last few years (thanks, Equifax!), it’s safe to say that almost all of us have a fair amount of personal data on the dark web. If you follow good online security practices, a lot of the danger is mitigated because you’ve changed your passwords, enabled multi-factor authentication and taken other similar measures. But generative AI could introduce a new and serious threat in this area.

With so much data available about a person online, and for many people even having a clip or two of their voice available, it is very easy to create an AI personality that sounds like the target individual and has access to most of the facts used to verify the identity.

Think about it this way. What do you do if you’re having trouble logging in, can’t configure your authentication app properly, or you lose your phone? Maybe call customer service – and they’ll “verify” your identity using some trivial fact like your birth date, phone number or social security number. Even more advanced methods like “taking a selfie” are becoming easier to game.

The customer service agent – for all we know, also an AI! – may very well oblige this fake you and grant it all the privileges you would have if you had actually called. What a scammer can do from that position varies widely, but none of it is good!

Like others on this list, the danger is not how real this fake you will be, but that it’s easy for scammers to carry out this kind of attack widely and repeatedly. Not long ago, this kind of impersonation attack was expensive and time-consuming, and as a result was limited to high-value targets like wealthy people and CEOs. Nowadays you can create a workflow that creates thousands of impersonation agents with minimal oversight, and these agents can autonomously phone the customer service numbers on all of a person’s known accounts – or even create new ones! Only a handful need to be successful to justify the cost of the attack.

How can you fight against identity fraud?

Just as before AI came along to boost scammers’ efforts, basic “cybersecurity 101” practices are your best protection. Your data is already out there; you can’t put the toothpaste back in the tube. But you can make sure your accounts are adequately protected against the most obvious attacks.

Multi-factor authentication is easily the most important single step anyone can take here. Any kind of serious account activity goes straight to your phone, and suspicious login or password change attempts will show up in email. Don’t ignore these warnings or mark them as spam, even (especially!) if you’re getting a lot of them.

AI-generated deepfakes and blackmail

Perhaps the scariest form of nascent AI scam is the potential for blackmail using deepfake images of you or someone you love. You can thank the fast-moving world of open image models for this futuristic and frightening possibility! People interested in certain corners of cutting-edge image generation have created workflows that not only render nude bodies but attach them to any face they can get a picture of. I don’t need to go into detail about how this is already being used.

But one unintended consequence is the expansion of the crime commonly called “revenge porn,” more accurately described as the non-consensual distribution of intimate images (though like “deepfakes,” the original term may be difficult to displace). When someone’s private images are released through hacking or a vengeful ex, they can be used as blackmail by a third party who threatens to publish them widely unless a sum of money is paid.

AI changes this scam in that no real intimate image needs to exist in the first place! Any person’s face can be added to an AI-generated body, and while the results aren’t always convincing, they are probably enough to fool you or others if the image is pixelated, low-resolution or otherwise partially obscured. And that’s all it takes to scare someone into paying to keep a secret – although, like most blackmail scams, the first payment is unlikely to be the last.

How can you fight against AI-generated deepfakes?

Unfortunately, the world we’re heading towards is one where fake nude photos of almost anyone will be available on demand. It’s scary and weird and disgusting, but sadly, the technology is already out there.

Nobody is happy about this state of affairs except the bad guys. But there are a few things working in favor of us potential victims. It may be cold comfort, but these pictures are not actually of you, and it doesn’t take actual nude photos to prove that. These image models may produce realistic bodies in some ways, but like other generative AI, they only know what they’ve been trained on. So the fake images will lack any distinguishing marks, for example, and are likely to be obviously wrong in other ways.

And although this danger will probably never be entirely eliminated, support for victims continues to grow: they can legally compel image hosts to take photos down, or get scammers banned from the sites where they post. As the problem grows, so do the legal and private means to fight it.

TechCrunch is not a lawyer! But if you are a victim of this, tell the police. This is harassment, not just a scam, and while you can’t expect the police to do intense sleuthing on the internet to track someone down, sometimes such cases are resolved, or the scammers are scared off by requests sent to your ISP or forum host.
