AI is the New Major Accomplice for Cyber Crimes

February 16, 2024

In just a couple of years, AI seems to have worked its way into a head-spinning number of aspects of our lives. And few groups seem to be benefiting from and innovating with AI more than foreign hackers and cybercriminals.

On Feb. 14, Microsoft released a report detailing, among other things, how state-backed hackers from Russia, China, Iran, and North Korea have been using tools from Microsoft-backed OpenAI to hone their skills and trick their targets.

As Reuters first reported, one way these groups have been leveraging OpenAI is by using its large language models (LLMs), which draw on gigantic amounts of text, to generate more human-sounding responses.

“Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified – that we track and know are threat actors of various kinds – we don’t want them to have access to this technology,” Microsoft Vice President for Customer Security Tom Burt told Reuters in advance of the report’s release.

Reuters breaks down how hackers tied to each of these nations have been using OpenAI:

  • Hackers allegedly working on behalf of Russia have used the models to research satellite and radar technologies that may pertain to military operations in Ukraine.
  • North Korean hackers were observed using the models to create content for potential spear-phishing campaigns.
  • Iranian hackers were reportedly leaning on the models to write more effective emails, potentially to lure recipients into a booby-trapped website.
  • Chinese state-backed hackers were also experimenting with large language models, for instance asking questions about rival intelligence agencies, cybersecurity issues, and people of note.

These revelations by Microsoft are but one drop in a growing storm of evidence that AI is being used to craft more effective cyberattacks.

Just this week, OpenAI announced a new AI tool, Sora, that allows users to create stunning videos from text prompts. While Sora hasn't yet been released to the general public, it's almost mind-bending to imagine how bad actors will put it to use.

As noted in a recent webinar hosted by Dror Liwer, co-founder and chief marketing officer of Coro, AI is being used to crack passwords more quickly and to help crooks falsify communications through email, social media, and other social engineering attacks.

And the threat extends far beyond written communications. Just recently, a finance employee was deceived by a deepfaked, multi-person video conference into transferring millions of dollars to criminals.

And, as cybersecurity expert Joseph Steinberg noted in our recent webinar, these types of AI tricks are not just impacting businesses.

“The reality is that criminals can now impersonate people so well their voices, their way of speaking,” Steinberg said on the webinar. “You take TikTok videos that a kid has made and feed it into an AI, and it can speak like that person. And so you get calls to parents where it’s a child pretending to be in trouble. And it’s coming from a criminal. And that’s happening. Now that’s already happening, and, as you said, it’s only getting worse.”

The good news is that AI is also helping to detect and protect against these threats, and it will continue to do so. To hear more cybersecurity predictions for AI in the year ahead, watch our webinar below or on-demand here.
