As businesses continue to discover the benefits of artificial intelligence (AI) assisted computing tools, we’re seeing rapid interest in and adoption of the technology – especially within the enterprise. Until recently, most conversations revolved around ChatGPT, but another AI-powered large language model tool – DeepSeek – is now generating significant intrigue and discussion. While these tools have well-known benefits, they raise just as many concerns from a cybersecurity lens – including the growing threats they present to businesses and individuals alike around the globe.

Concerns about DeepSeek and other AI tools primarily center on data privacy, the security of wide-scale implementations, and the lack of the guardrails that US models have implemented to prevent certain abuses. Beyond the widely sensationalized scare tactics of labeling DeepSeek a ‘Trojan Horse’, there is real data pointing to a massive global threat: according to a new Google Threat Intelligence Group (GTIG) report, 57 distinct threat actors with ties to China, Iran, North Korea and Russia have been observed using Google-powered AI technology to further enable malicious cyber operations.

We’re still in the early days of this technology and there’s certainly a lot to learn. However, we’re now very cognizant that the rapid evolution of AI – in both sophistication and cost reduction – will change what we thought was possible, and what we hoped was impossible. DeepSeek is also available as an open source alternative that can be run locally, which mitigates some of the aforementioned data privacy concerns.

Balancing the benefits of AI tools with increasing cyber risks

Individuals have found that the accessibility of AI tools certainly has benefits, such as cost savings and workplace productivity through rapid content creation, pattern detection and summarization. However, the primary cybersecurity concern is that these tools lower the bar for adversaries to convincingly and successfully execute cyberattacks like phishing and ransomware. The challenge with AI-based attacks is that they push forgeries closer to the edge of what our normal human senses can detect. Convincing text, video and audio play on our most intuitively trusted senses of sight and hearing, making fakes harder than ever to spot.

In today’s business world of geographically distributed workforces, AI tools give attackers increasingly credible methods for executing successful social engineering attacks. Controls that were once difficult to circumvent, such as voice verification for identity on a password reset, will become obsolete. While AI-based tools by themselves may not (yet) produce convincing spear phishing outputs, they can be used to fix the baseline quality issues that plague most phishing campaigns, such as poor grammar and obviously inaccurate information.

As AI tools evolve to assist with legitimate code development, attackers can just as quickly use the same technology to write malware in support of their phishing campaigns. AI tools not only accelerate phishing attacks aimed at stealing login credentials and sensitive information; they now also help attackers take the next step: establishing persistence on a victim’s computer and within their environment.

Once attackers gain access to an email client, they can use AI to write rules that automatically send and forward messages on topics they care about (money, business deals, etc.) to accounts they control. This allows them to selectively respond to emails with the goal of redirecting funds to an account they own – without the victim ever knowing.
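From the defender’s side, abuse of this kind often surfaces as mailbox rules that forward mail to external addresses. The sketch below is purely illustrative, assuming rules have already been exported into simple Python dictionaries; the field names, domain list and addresses are hypothetical, not any real mail API.

```python
# Hypothetical sketch: flag mailbox rules that forward mail outside the
# organization, a common indicator of business email compromise.
# Field names ("name", "forward_to") and domains are illustrative assumptions.

TRUSTED_DOMAINS = {"example.com"}  # the organization's own domains (assumption)

def flag_suspicious_rules(rules):
    """Return rules that forward messages to addresses outside trusted domains."""
    suspicious = []
    for rule in rules:
        for target in rule.get("forward_to", []):
            # Take the domain part of the forwarding address.
            domain = target.rsplit("@", 1)[-1].lower()
            if domain not in TRUSTED_DOMAINS:
                suspicious.append(rule)
                break
    return suspicious

rules = [
    {"name": "Invoices", "forward_to": ["ap@example.com"]},
    {"name": "Wire transfers", "forward_to": ["attacker@freemail.example.net"]},
]
print([r["name"] for r in flag_suspicious_rules(rules)])  # ['Wire transfers']
```

In practice the rule inventory would come from a mail platform’s admin tooling; the point of the sketch is only that external forwarding targets are the signal worth auditing.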

Tackling an AI future with phishing-resistant MFA 

As adoption of these tools continues to grow, it will be important to focus on key ways to mitigate the risks associated with them. This underlines the importance of modern phishing-resistant multi-factor authentication (MFA) and identity-based security methods. As the efficacy of identity measures that companies have trusted for decades, such as voice and video verification, erodes, strongly bound electronic identity becomes even more important. Hardware-backed credentials purpose-built around cryptographic principles, such as hardware security keys like the YubiKey, excel in these scenarios.

YubiKeys support FIDO2 as one of their core protocols. FIDO2 credentials are bound to a specific user and website origin, which prevents attackers from preying on the human inability to consistently spot small differences – such as a 0 (zero) versus an O (capital o) – in a nefarious website URL. With security keys, credentials are stored securely in hardware, which prevents them from being transferred to another system without the user’s knowledge or by accident. FIDO2 authenticators also greatly reduce the efficacy of social engineering through phishing, since users cannot be tricked into handing a one-time password to an attacker, and they eliminate the risk of SMS authentication codes being stolen through a SIM swapping attack.
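The origin binding above can be sketched in a few lines. In WebAuthn (the web-facing half of FIDO2), the authenticator signs over a SHA-256 hash of the relying party ID, so a credential registered for one domain can never satisfy a lookalike domain. The domain names below are placeholders chosen for illustration.

```python
import hashlib

def rp_id_hash(rp_id: str) -> bytes:
    """Hash a relying party ID the way WebAuthn authenticator data does
    (SHA-256 of the domain string)."""
    return hashlib.sha256(rp_id.encode("utf-8")).digest()

# Hash stored implicitly at credential creation on the legitimate site.
registered = rp_id_hash("example.com")

# Lookalike phishing domain: digit 1 in place of the letter l.
# A human may miss the difference; the hash comparison cannot.
phishing = rp_id_hash("examp1e.com")

print(registered == phishing)  # False – the assertion is rejected
```

Because the check is cryptographic rather than visual, the user never has to notice the swapped character; the authentication simply fails on the wrong site.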

For more insights on the impact AI is having on businesses and individuals, including the latest cybersecurity patterns and behaviors globally, check out our Global State of Authentication survey here.

Interested in learning more about what YubiKeys can do for your business? Contact our team today.

Disclaimer: This article is sourced from the official Yubico website. As official partners of Yubico, we have obtained permission to use Yubico’s articles and resources for further updates regarding Yubico’s products.