Communications of the ACM - Artificial Intelligence | September 5
AI-Driven Cybercrime: A New Chapter in Deepfakes and Social Engineering

Artificial intelligence is transforming the cybercrime landscape in unprecedented ways, making phishing and fraud faster, cheaper, and more convincing. This article examines how AI greatly increases the stealth and effectiveness of phishing through hyper-personalized lures, deepfake audio and video, intelligent chatbots, and real-time language switching. It draws on the cases of Hong Kong-based Arup and a German company defrauded with the help of AI, analyzing how attackers used the technology to imitate executives' voices and likenesses, bypass employees' vigilance, and inflict enormous financial losses. In the face of these mounting challenges, the article stresses the importance of AI-assisted detection, a "zero trust" policy, behavioral biometrics, and stronger user awareness training, and points out that secondary verification through an independent, trusted channel is the key to preventing such attacks.

🤖 **AI-driven hyper-personalized attacks:** AI can deeply mine public information such as social media profiles, company press releases, and code commit histories to craft emails or messages tailored to the target that appear to come from someone they know, referencing recent activities, company news, or even inside jokes, greatly lowering the odds of the scam being spotted.

🎭 **Highly realistic deepfake audio and video:** Voices that once required large amounts of recordings to clone can now be mimicked from a mere 30-second clip. Combined with AI-generated video, attackers can stage highly realistic fake video meetings, convincing enough to trick victims into massive transfers, as in the roughly $25 million fraud against Hong Kong-based Arup.

💬 **Persistent deception by AI chatbots:** AI chatbots respond instantly around the clock, adapt to the user's tone, and can hold what looks like a normal customer-service conversation while quietly extracting personal information, all without betraying their fraudulent intent, a far cry from the old scam bots that quickly gave themselves away.

⚡ **Exponential gains in attack efficiency and scale:** AI lets work that once took days of preparation be finished in minutes, and it can generate thousands of customized phishing campaigns aimed at different victims simultaneously, vastly expanding the efficiency and scale of cybercrime.

🔒 **"Zero trust" and multi-channel verification as the key line of defense:** In the AI era, the traditional "trust but verify" model no longer works; the shift must be to "verify first." Implement a "zero trust" policy that double-checks every device, user, and request, and require that large transfers or unusual requests be confirmed a second time through an independent, trusted channel (such as a known phone number or face-to-face communication). This is the most effective safeguard against such attacks.

In early 2024, a Hong Kong-based clerk at multinational consulting engineering firm Arup was duped into transferring about $25 million to scammers who used AI to impersonate the company's CFO and other senior executives in a live video meeting. The fraud only became apparent when the employee checked in with headquarters in London.

This is social engineering 2.0.

In the old days, phishing emails were usually riddled with typos and bad grammar, or at least carried subtle tells like misspelled domain names. With a little cybersecurity education, you could spot them a mile away. Now, AI studies your team's LinkedIn profiles, writes flawless messages, and builds a ruse so tailored it feels like it's from your best client. Attackers might even throw in deepfakes or voice clones to be even more convincing.

AI is the ultimate force multiplier for cybercriminals, because it makes scams faster, cheaper, and more convincing at scale. It can automate tasks that criminals used to do manually, or would never have invested the time to do at all. It can personalize attacks using scraped data and mimic voices, faces, and writing styles with staggering accuracy.

In this post, let’s take a look at how AI is rewriting the rules of phishing and cybercrime.

How AI Is Supercharging Phishing Attacks

The next wave of phishing and AI attacks is already here, driven by technologies such as agentic AI research tools, deepfakes, and machine learning that are making threats more convincing than ever.

Here’s how today’s cybercrooks are using AI to create large-scale phishing campaigns that are almost impossible to spot:

1. Hyper-Personalized Lures

Forget the generic “Dear Customer” email. AI can scan your social media, past press releases, and even your GitHub commits, and use that data to craft a message that feels hand-written for you. It can reference your recent vacation, your company’s latest product launch, or even an inside joke from your social media feed.

2. Deepfake Audio and Video

Voice cloning used to work well only when fed hours of recording for training. Now, a 30-second clip is enough to mimic your boss’s tone and accent. Combine that with AI-generated video, and you get full-blown fake Zoom calls convincing enough to move millions.

3. AI-Driven Chatbots That Never Slip

Old scam chats used to break character quickly; AI chatbots don’t. They respond instantly, adapt to your tone, and can hold a believable “customer service” conversation while quietly harvesting your personal info.

4. Real-Time Language Switching

AI translation is now good enough to scam you in flawless Spanish in the morning, Hindi by lunch, and Japanese in the evening. No awkward phrasing, no giveaway grammar mistakes.

5. Attacks at Machine Speed

What once took days of research and prep can now happen in minutes. AI can spin up thousands of unique phishing campaigns at the same time, each one tailored to its victim.

Case Studies: Recent AI-Powered Scams

It used to be just a sci-fi nightmare scenario, but today, AI phishing is real, and it’s costing companies millions.

We’ve already touched upon this one in the intro, but the Hong Kong phishing scam that targeted an employee at Arup deserves a deeper dive. The employee was tricked by deepfake versions of her CFO and colleagues into transferring HK$200 million across 15 transactions. The case has been widely reported and confirmed by the Hong Kong police.

Every face and voice was AI-generated. The employee thought she was following her CFO’s orders on a video call. The money was gone before anyone realized.

Why did the scam work so well? For starters, it wasn’t a simple shady email. It was a full-on video call with recognizable faces and voices. All fake, but super realistic. The person on the screen looked and sounded exactly like the CFO.

And when someone who looks like your boss tells you to do something, most people just do it.

They played the classic pressure game, too, talking about how urgent the transactions were. That shuts down the little voice in your head saying, “Hmm, should I double-check this?”

But here’s the thing: There were red flags. The employee just needed to know what to look for. For one, why were there so many big payments going out to previously unknown accounts, especially ones based overseas? And why did such a major transfer come out of nowhere, with no heads-up, no prior discussion?

Even the video might’ve had tiny glitches. Maybe a weird blink, or slightly off lighting. Deepfakes aren’t perfect (yet). And if the company had a rule that big payments need multiple approvals, well, that rule got skipped. That’s a big fat clue something’s wrong.

Similarly, back in 2019, criminals used AI-based voice cloning to impersonate the CEO of a German company. An executive at a subsidiary, a U.K.-based energy firm, thought he recognized the CEO's tone and accent and transferred about $243,000 to a fake supplier before realizing it was a scam.

The fake CEO asked for a quick payment to a supplier. The cloned voice perfectly mimicked the CEO’s accent, tone, and cadence, making it sound authentic over the phone. It also matched a real deal the company was working on, so it didn’t feel strange at the time.

Furthermore, the caller sounded urgent, saying it had to be done right away to close the deal. So the executive did what was asked of him.

But again, there were warning signs hiding in plain sight. The payment was going to a new account, one the company hadn’t used before. That alone should’ve raised a flag. And if the exec had just pinged the real CEO through a known channel like email or internal chat, the whole thing might’ve unraveled.

Also, that mix of pressure and secrecy is a classic social engineering tell. It’s meant to convince the victim to override logic and act quickly, circumventing protocols. And while the voice clone was highly convincing, these tools still mess up now and then. Weird pauses, flat emotion, audio-video sync issues, lack of movement, and tiny slip-ups are clues if you’re paying attention.

It’s evident that AI is helping to create new attack surfaces. AI scams succeed because they blend real context with hyper-realistic impersonation. The more familiar they feel, the harder they are to doubt, which is exactly why double-checking through a trusted, separate channel is non-negotiable.

Detection and Defense: What Works in the AI Era

To defend against these dark arts, you need to fight fire with fire.

To begin with, you can’t rely on human eyes alone. Modern security tools use machine learning to spot anomalies in tone, sender patterns, or user behavior. They can catch subtle signs, like if a coworker emails you at 3:00 a.m. or if the tone feels off. Things humans might miss, machines can flag.
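To make that concrete, here is a minimal sketch of metadata-based anomaly detection using scikit-learn's off-the-shelf IsolationForest. The feature set and baseline data are invented for illustration; a real email-security product would train on far richer signals (headers, writing style, sending infrastructure).

```python
# Minimal sketch: flag anomalous email metadata with an Isolation Forest.
# The features and baseline values below are hypothetical, for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour sent (0-23), recipient count, links in body, attachments]
baseline = np.array([
    [9, 1, 0, 0], [10, 3, 1, 1], [14, 2, 0, 0], [16, 1, 2, 0],
    [11, 1, 1, 0], [15, 4, 0, 1], [13, 2, 1, 0], [10, 1, 0, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# A message sent at 3:00 a.m. with six links looks nothing like the baseline.
suspect = np.array([[3, 1, 6, 0]])
if model.predict(suspect)[0] == -1:
    print("Anomalous message metadata: route for manual review")
```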

Next, the old "trust but verify" approach is dead. Verification comes first, every time. The new "zero trust" policy means every device, user, and request gets double-checked, whether it comes from inside or outside your network.
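As a rough sketch of what "verify first" can look like in code, here is a toy zero-trust gate. The Request fields, thresholds, and policy are all hypothetical; real deployments lean on identity providers and device-posture services rather than a single function.

```python
# Toy zero-trust gate: every request is re-verified against identity,
# device posture, and request sensitivity. All fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # fresh MFA within this session
    device_compliant: bool     # managed, patched device
    amount_usd: float          # value of the requested transfer
    channel_verified: bool     # confirmed via an independent channel

def allow(req: Request, high_value_threshold: float = 10_000.0) -> bool:
    # Identity and device are checked on every request, inside or outside
    # the network perimeter; nothing is trusted by default.
    if not (req.user_authenticated and req.device_compliant):
        return False
    # High-value actions additionally require out-of-band confirmation.
    if req.amount_usd >= high_value_threshold and not req.channel_verified:
        return False
    return True

# A $25M transfer with no out-of-band confirmation is refused.
print(allow(Request(True, True, 25_000_000.0, False)))  # False
```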

There’s also tech that tracks your typing rhythms, how you move your mouse, and even how you talk. So if the “CEO” types like a different person, the system can tell. That’s behavioral biometrics for you.
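Here is a deliberately simplified sketch of the idea behind keystroke dynamics: compare a session's typing rhythm against a stored baseline and flag large deviations. The timing values and z-score threshold are made up for illustration; commercial behavioral biometrics model many more signals (key-hold times, mouse curves, pressure).

```python
# Simplified keystroke-dynamics check: compare inter-key timing against a
# stored baseline. Sample data and the z-score limit are hypothetical.
import statistics

def typing_profile(intervals_ms: list[float]) -> tuple[float, float]:
    return statistics.mean(intervals_ms), statistics.stdev(intervals_ms)

# Baseline built from the genuine user's past sessions (illustrative values).
baseline_mean, baseline_std = typing_profile(
    [110, 95, 120, 105, 130, 98, 115, 102, 125, 108]
)

def looks_like_same_typist(session: list[float], z_limit: float = 3.0) -> bool:
    session_mean, _ = typing_profile(session)
    z = abs(session_mean - baseline_mean) / baseline_std
    return z <= z_limit

# An impostor typing much faster than the baseline trips the check.
print(looks_like_same_typist([60, 55, 58, 62, 57, 59, 61, 56]))  # False
```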

But tech alone isn’t enough. If social engineering is now 2.0, user awareness should also be 2.0. Teams need exposure to AI-generated phishing simulations so they learn to spot scams that look perfect. Drills should cover video calls, chat platforms, and phone scams—not just email.

Also, use out-of-band verification. Big request? Large sums of money? New account details? Always confirm through a separate, trusted channel like a known phone number or in-person meeting. This one extra step stops most scams dead.
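A minimal sketch of that rule as code might look like the following. The directory entry and threshold are hypothetical; the one thing that matters is that the callback number comes from a trusted internal record, never from the request itself.

```python
# Sketch of out-of-band confirmation for high-risk payment requests.
# The directory and threshold are placeholders, for illustration only.
KNOWN_CONTACTS = {"cfo": "+44 20 7946 0000"}  # hypothetical directory entry

def confirm_out_of_band(requester: str, amount: float, new_account: bool) -> bool:
    high_risk = amount >= 10_000 or new_account
    if not high_risk:
        return True
    number = KNOWN_CONTACTS.get(requester)
    if number is None:
        return False  # no trusted channel on record: refuse
    # Placeholder step: a human calls the known number (or meets in person)
    # and confirms the request before anything moves.
    print(f"Call {number} from the internal directory to confirm "
          f"${amount:,.0f} before releasing funds.")
    return False  # hold until a human records explicit confirmation

confirm_out_of_band("cfo", 25_000_000, new_account=True)
```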

Finally, if someone smells something phishy, they should know exactly what to do. Prepare an incident playbook with clear steps that empower your team to lock it down fast and limit the damage.
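Even a playbook as simple as the following sketch beats improvising under pressure. The steps and wording here are illustrative; tailor them to your own org chart, bank, and regulators.

```python
# Illustrative skeleton of an incident playbook for suspected AI phishing.
# Steps and owners are hypothetical; adapt them to your organization.
PLAYBOOK = [
    ("Report",   "Employee forwards the suspect message or call details to security"),
    ("Contain",  "Freeze the affected account and any pending transfers"),
    ("Verify",   "Confirm the request with the impersonated person via a known channel"),
    ("Escalate", "Notify finance, legal, and, if funds moved, the bank and police"),
    ("Review",   "Capture indicators (domains, numbers, audio) and update training"),
]

for step, action in PLAYBOOK:
    print(f"{step}: {action}")
```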

Stronger Weapons

AI has changed phishing forever. Attacks are now faster, more convincing, and nearly impossible to detect with the old “look for typos” playbook.

Deepfakes, voice clones, and AI-driven chatbots have expanded the scammer’s toolkit, making even seasoned employees vulnerable. Defending against these threats means matching AI’s speed and sophistication with tools like anomaly detection, zero trust verification, and realistic phishing simulations that go beyond email.

Above all, verification is your strongest weapon. Confirming big or unusual requests through a trusted, separate channel can stop most scams in their tracks. Audit your phishing defenses now, before AI tests them for you.

Gaurav Belani is a Senior SEO and Content Marketing Analyst at Growfusely, where he specializes in crafting data-driven content strategies for technology-focused brands.
