By Alex Woodie

Companies had a mixed relationship with cybersecurity before generative AI landed on the scene in 2022. Now that companies are quickly adopting GenAI across their organizations, they’re finding themselves playing a game of security catch-up. That will make 2025 an eventful year, security experts predict.

Identity resolution is tough enough under the best of circumstances. Add AI-generated fake identities to the mix, and the results are potentially disastrous, says Darren Shou, chief strategy officer of the RSA Conference (RSAC).

“AI-generated identities will overrun the digital landscape, spurring a crisis in digital trust,” Shou says. “Generative AI will drive a staggering increase in fake digital identities by easily creating convincing profiles that contain fabricated personal details that bypass KYC [know your customer] and biometric checks. These fake personas will infiltrate more enterprises, enabling sophisticated fraud and reputation attacks–something we’ve already seen with fake North Korean IT workers, who’ve stolen hundreds of millions from companies around the globe. Enterprises will need to adopt cryptographic digital IDs to counteract this deluge of deception, marking a shift in how we verify identities online.”

Many things are up in the air when it comes to GenAI. Organizations may get a positive return on investment (ROI), or they may not. One thing, however, is non-negotiable: compliance and security are mandatory, according to Carmelo McCutcheon, public sector CTO at VAST Data Federal.

(shuttersv/Shutterstock)

“With the rise of global regulations like the EU AI Act, businesses will face immense pressure to ensure their AI systems are transparent, accountable, and aligned with stringent privacy standards,” McCutcheon says. “As data becomes an even more valuable asset, protecting it from potential threats will be a top priority. Organizations will need to implement stronger security measures that safeguard data both at rest and in transit, while also meeting regulatory requirements. The balance between compliance and security will be crucial for organizations to maintain trust and protect valuable assets.”

We know the bad guys are using AI to create fake identities and generate malware on an industrial scale. But the good news is the good guys can also use AI to bolster security, such as through AI-driven threat detection, says Carl Gersh, SVP of global marketing at IGEL.

“The AI-in-cybersecurity market is projected to grow from approximately $24 billion in 2023 to around $134 billion by 2030, reflecting the increasing reliance on AI for threat detection and response,” Gersh says. “This growth underscores the critical role of AI in modern cybersecurity strategies. AI and machine learning are no longer optional in endpoint security. In 2025, AI-powered solutions will become a cornerstone of threat detection, identifying anomalies and preventing breaches faster than ever.”
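As a rough illustration of the kind of anomaly detection Gersh describes (a minimal sketch, not any vendor's actual implementation), an endpoint agent can baseline normal activity and flag statistical outliers. The data and threshold here are assumptions for demonstration:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Flag indices whose event count sits more than `threshold`
    standard deviations above the historical mean."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly steady baseline, nothing to flag
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Mostly steady hourly login counts on an endpoint, with one burst.
hourly_logins = [12, 14, 11, 13, 12, 15, 13, 12, 95, 14]
print(flag_anomalies(hourly_logins))  # → [8], the spike stands out
```

Production systems layer far richer signals (process lineage, network flows, learned models) on top of this basic idea, but the core pattern of baseline-plus-deviation is the same.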

Many IT employees are still not back in the office, but that’s not slowing the data center construction boom. Data and servers still have to live somewhere in the real world, and that makes the junction of physical security and cybersecurity a big challenge, says Greg Parker, global vice president of security, fire, and life cycle management at Johnson Controls.

(Gorodenkoff/Shutterstock)

“As cyber and physical security increasingly intersect, zero-trust architectures will be essential to safeguard access and mitigate vulnerabilities,” Parker says. “Organizations must ensure all users, devices and systems are verified continuously with robust access controls to prevent unauthorized intrusions into physical security systems. I anticipate zero-trust becoming the industry standard, especially for facilities leveraging IoT and cloud-based solutions, where the stakes for security and operational continuity are higher than ever.”

Cybersecurity has always been a cat and mouse game. With AI in the mix, the game reaches new levels, but there will be big differences in the skill with which cybercriminals and security professionals wield AI, predicts Tim Wade, deputy CTO at Vectra AI.

“In 2025, attackers will continue to leverage AI to streamline attacks, lowering their own operational costs and increasing their net efficacy,” Wade says. “In most cases, this will increase attacker sophistication.”

“We’ll start to see a clear distinction emerge between groups that masterfully apply AI and those adopting more simplistically,” Wade says. “The attackers who skillfully leverage AI will be able to cover more ground more quickly, better tailor their attacks, predict defensive measures, and exploit weaknesses in ways that are highly adaptive and precise. Defensive AI will play a critical role in combating these attacks but will require intentionality in how, where, and when it is operationalized to be truly effective. The teams that excel will be those that understand how to apply AI beyond surface-level automation, integrating it into the full range of people, process, and technology.”

GenAI’s failure to live up to the hype in business settings has led to a case of the blahs. In 2025, the general GenAI disillusionment will extend to GenAI in cybersecurity, predicts Mark Wojtasiak, vice president of research and strategy at Vectra AI.

“In the coming year, we’ll see the initial excitement that surrounded AI’s potential in cybersecurity start to give way due to a growing sense of disillusionment among security leaders,” Wojtasiak says. “Vendors will no longer be able to rely on generic promises of ‘AI-driven security’ to make sales. Instead, they will need to demonstrate tangible outcomes, such as reduced time to detect threats, improved signal accuracy, or measurable reductions around time spent chasing alerts and managing tools.”

(Mongta Studio/Shutterstock)

We have had so many major data breaches that we’ve become numb to them. In 2025, we’ll be shocked back to our senses as the result of the first data breach of an AI model, predicts Druva CTO Stephen Manley.

“Pundits have frequently warned about the data risks in AI models. If the training data is compromised, entire systems can be exploited,” Manley says. “While it is difficult to attack the large language models (LLMs) used in tools like ChatGPT, the rise of lower-cost, more targeted small language models (SLMs) makes them a target. The impact of a corrupt SLM in 2025 will be massive because consumers won’t make a distinction between LLMs and SLMs. The breach will spur the development of new regulations and guard rails to protect customers.”

We’re in the midst of a political realignment, as the elections of Donald Trump in the US and right-wing politicians in Europe demonstrate. In 2025, the entire cyber threat view will be up for realignment, predicts Steve Stone, the SVP of threat intelligence and managed hunting at SentinelLabs.

“The last few years demonstrated relatively universal alignment from the cybersecurity private sector community. The war in Ukraine and Russia’s significant focus on cyberwarfare (particularly data destruction tools) allowed for a fairly permissive political environment across the industry, with several major vendors openly listing their support for a specific group and position. The recent Israel conflict returned most cybersecurity vendors to a more neutral position,” Stone writes. “This shift will likely accelerate and expand due to elections in the US and related western countries where claims of ‘weaponized’ cyber intelligence communities are already made, combined with multiple high-level tech companies’ top executives becoming major partisan players.”

Cybercriminals who use phishing techniques will see their approach enjoy a (criminal) resurgence, thanks to GenAI’s capability to deliver excellent deepfakes at an affordable price, predicts David Richardson, vice president of endpoint at Lookout.

“In 2025, I expect to see hackers’ mobile phishing toolkits expand with the addition of deepfake technology,” Richardson says. “I can easily see a future, especially for CEOs with a celebrity level status, where hackers create a deepfake video or vocal distortion that sounds exactly like the top leader at an organization to further pursue attacks on corporate infrastructure, either for monetary gain or to share information with foreign adversaries.”

(BritCats Studio/Shutterstock)

Cybersecurity professionals have a lot on their plates. In 2025, the more industrious cybercriminals will concentrate their efforts where they can do the most damage: SecOps’ soft underbelly, predicts Leonid Belkind, the co-founder and CTO of Torq.

“With SecOps focused on front-line defense measures, attackers will focus on stack elements and settings that are typically under-protected and less tightly managed,” Belkind says. “SaaS misconfigurations, access control anomalies, and third-party integrations and gateways are prime examples. With SecOps’ staff overwhelmed and burning out, advanced security automation such as hyperautomation can use Gen AI to manage and parse these systems and auto-remediate or escalate threats before they have a chance to take root.”

Yes, advances in GenAI will give the bad guys better tools. But GenAI will also help security pros manage their huge workloads by taking over tedious tasks, says Jimmy Mesta, CTO and founder of RAD Security.

“Security teams are overwhelmed by the growing volume and complexity of vulnerabilities, leading to errors and burnout,” Mesta says. “AI-driven tools are set to change this, automating tasks like triage, validation, and patching. By analyzing vast datasets, these tools will predict which vulnerabilities are most likely to be exploited, allowing teams to focus on critical threats. By 2025, up to 60% of these tasks will be automated, significantly improving accuracy and response times. AI-driven tools will also proactively discover vulnerabilities, closing gaps before attackers can exploit them.”

America’s adversaries have signaled their intent to target the country’s water infrastructure, but that won’t stop the US government and US water sector from continuing a murder-suicide pact through lapses in cybersecurity, predicts Grant Geyer, the chief strategy officer at Claroty.

“Despite the clear understanding that U.S. adversaries are targeting the water sector to project power and create gaps in confidence in the U.S. Government’s ability to safeguard the public, the water sector and government will continue the current path of inaction,” Geyer says. “While the water sector asks Congress for a NERC-like regulatory regime, efforts by the EPA to enforce cybersecurity standards in a questionable manner are sparking intense backlash. Meanwhile, the threat landscape is growing more dangerous, with cyberattacks from Russia, China, and Iran exposing critical vulnerabilities in our water systems.”

(Gorodenkoff/Shutterstock)

At the end of the day, AI models are collections of data. In 2025, more companies will realize that to secure AI, they must secure their data, says Balaji Ganesan, the CEO and co-founder of Privacera.

“In a rapidly evolving digital world, our greatest defense is precision and deep awareness of where data resides and how it moves,” Ganesan said. “The exponential pace of AI adoption has amplified opportunities and threats, demanding organizations go beyond conventional data protection strategies. Data security isn’t just compliance—it is an ongoing process that builds trust and safeguards innovation.”

Cybercriminals are very creative when it comes to cooking up new fraud schemes. In 2025, those schemes will get turbocharged thanks to GenAI, says Mark Bowling, Chief Information Security and Risk officer at ExtraHop.

“With generative AI easily accessible to hackers, we’re going to see more impersonation tactics posing a huge threat to our society,” Bowling says. “Hackers are quickly becoming more proficient in identifying vulnerable attack surfaces, and the human element is one of the biggest. For example, we can expect more GenAI-generated impersonations of police officers or high-ranking C-suite executives from Fortune 500 companies in efforts to gain access to login credentials, PII and more. As we enter 2025, there will be a bigger emphasis on identity protection measures as we learn to contend with impersonation issues. This means having stronger authentication methods like MFA and IAM tools that check for abnormalities in where and when credentials are being used and what they are trying to access.”
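The kind of IAM abnormality check Bowling mentions can be sketched as a simple rules pass over login events. This is a hypothetical illustration only; the user profile, fields, and rules are assumptions, and real IAM tools weigh many more signals (device fingerprint, IP reputation, travel velocity):

```python
from datetime import datetime

# Hypothetical per-user baseline: usual countries and working hours.
BASELINE = {
    "alice": {"countries": {"US"}, "hours": range(8, 19)},
}

def score_login(user, country, when):
    """Return a list of anomaly reasons for a single login event."""
    profile = BASELINE.get(user)
    if profile is None:
        return ["unknown user"]
    reasons = []
    if country not in profile["countries"]:
        reasons.append(f"unusual country: {country}")
    if when.hour not in profile["hours"]:
        reasons.append(f"unusual hour: {when.hour}:00")
    return reasons

print(score_login("alice", "US", datetime(2025, 3, 3, 10)))  # → []
print(score_login("alice", "RO", datetime(2025, 3, 3, 3)))   # two flags
```

Events that accumulate reasons would then trigger step-up authentication (MFA) rather than an outright block, keeping friction low for normal logins.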

Cybercriminals have figured out that the combination of graph databases with retrieval augmented generation (RAG) techniques, or GraphRAG, makes their nefarious jobs easier. In 2025, the good guys will strike back with their own graph capabilities, predicts Jans Aasman, CEO of Franz.

“Cyberattackers increasingly use graph-based approaches to map out and execute their attacks. In 2025, we will see cybersecurity defenders adopt similar strategies for effective threat detection and response,” Aasman says. “Defenders will use AI graph insights to map out not only their network’s architecture but also the intricate relationships and patterns that indicate potential vulnerabilities. By adopting graph-based defense systems, security teams will be able to visualize and track how cyber threats spread across a network, identify hidden connections between compromised assets, and rapidly detect anomalies in user or system behavior.”
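The graph-based defense Aasman describes rests on a simple primitive: model assets as nodes and trust relationships as edges, then walk the graph to see how far a compromise could spread. A minimal sketch, with an entirely hypothetical asset graph (real systems would build this from network and identity telemetry):

```python
from collections import deque

# Hypothetical asset graph: edges are trust/communication paths
# (shared credentials, network reachability, service accounts).
GRAPH = {
    "laptop-7": ["file-server", "jump-host"],
    "jump-host": ["db-primary"],
    "file-server": ["backup-node"],
    "db-primary": [],
    "backup-node": [],
}

def blast_radius(graph, compromised):
    """Breadth-first walk from a compromised asset, returning every
    asset an attacker could reach by following trust relationships."""
    seen, queue = {compromised}, deque([compromised])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen - {compromised}

print(sorted(blast_radius(GRAPH, "laptop-7")))
# → ['backup-node', 'db-primary', 'file-server', 'jump-host']
```

The same traversal, run in reverse, answers the defender's other key question: which upstream assets could have been the entry point for a known-compromised node.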

Related Items:

The Top 2025 GenAI Predictions, Part 2

2025 Big Data Management Predictions

2025 Data Analytics Predictions


The post 2025 Cybersecurity Predictions: AI in the Spotlight appeared first on BigDATAwire.
