
Artificial intelligence and cybersecurity: Balancing risks and rewards

Published December 9, 2025 in Artificial Intelligence • 11 min read
AI is reshaping cybersecurity, arming both hackers and defenders. Learn how to stay ahead in the fast-evolving AI cybersecurity arms race.
When Anthropic recently released its latest threat intelligence report, it revealed an alarming evolution in AI-powered attacks. Anthropic's security team intercepted a lone hacker who had transformed artificial intelligence into a one-person ransomware enterprise. The attack demonstrated how cybercriminals could leverage AI to automate complex operations that previously required entire criminal organizations. The hacker used AI coding agents to systematically identify vulnerable websites and web services, then deployed machine learning models to write malicious code exploiting these vulnerabilities.
After stealing data, the attacker employed large language models to analyze and prioritize the stolen information based on sensitivity and extortion value, before sending automated ransom demands to targeted companies. The attacker successfully executed 17 ransomware incidents, demanding ransoms between $75,000 and $500,000.
What would traditionally require an entire criminal organization had been condensed into a single operator leveraging AI's capabilities. "This was one person, doing what would normally take a whole group of operators in a ransomware gang to do," said Öykü Işık, Professor of Digital Strategy and Cybersecurity at IMD. "This is a very recent and very real example of how things are evolving, and companies need to be prepared."

Işık's warning is borne out by industry research. IBM's latest Cost of a Data Breach Report 2025: The AI Oversight Gap revealed alarming weaknesses in AI security governance across organizations worldwide. While only 13% of companies reported breaches involving AI models or applications, a staggering 97% of those organizations lacked proper AI access controls. An additional 8% of companies admitted they did not know whether they had been compromised through AI-related attacks, suggesting the true scope remained hidden.
The research exposed shadow AI as a significant vulnerability, with one in five organizations experiencing breaches due to unauthorized AI tools used by employees. These shadow AI incidents cost an average of $670,000 more than breaches at firms with controlled AI environments. Meanwhile, 63% of breached organizations either lacked AI governance policies entirely or were still developing them, with only 34% of those with policies conducting regular audits for unsanctioned AI tools.
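Auditing for unsanctioned AI tools does not have to be elaborate to be useful. The sketch below is a minimal, illustrative example, assuming Python, plain-text web-proxy logs with one requested URL per line, and a deliberately short, non-exhaustive list of public AI API domains; real shadow AI discovery would also draw on DNS telemetry or a cloud access security broker.

import re
from collections import Counter

# Illustrative (not exhaustive) list of public generative-AI API domains
AI_API_DOMAINS = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
)

def count_ai_requests(log_lines):
    """Count proxy-log lines that reference a known AI API domain."""
    hits = Counter()
    for line in log_lines:
        for domain in AI_API_DOMAINS:
            if re.search(re.escape(domain), line):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "10.0.0.12 GET https://api.openai.com/v1/chat/completions",
        "10.0.0.14 GET https://intranet.example.com/hr/payroll",
    ]
    for domain, count in count_ai_requests(sample).items():
        print(f"{domain}: {count} request(s)")

Even a rough count like this gives a governance team a starting list of teams to talk to before mandating a sanctioned alternative.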
IBM's research also found cybercriminals had rapidly weaponized AI capabilities, with 16% of data breaches involving attackers using AI tools, primarily for AI-generated phishing campaigns (37% of cases) and deepfake impersonation attacks (35%). The most common entry point for AI-related breaches was compromised applications, APIs, and plug-ins within AI supply chains, with 60% of those incidents leading to data compromise and 31% causing operational disruption.
These statistics underscore a critical reality: as AI democratizes both attack and defense capabilities, business leaders face an unprecedented challenge in balancing innovation with security imperatives.

The artificial intelligence revolution has created a parallel transformation in both cybersecurity threats and defenses, fundamentally altering how organizations approach digital risk management. Yenni Tim, Associate Professor in the School of Information Systems and Technology Management at UNSW Business School, identified this duality as central to understanding AI's cybersecurity implications.
"There are two dimensions to consider: cybersecurity of AI and AI for cybersecurity," Tim explained. "AI's black-box nature makes securing its implementation and use more complex, while at the same time, AI provides defenders with powerful tools like advanced pattern recognition for more accurate threat detection. But those same capabilities lower the barrier for attackers, who can exploit AI to scale and automate malicious activities."
The democratization of AI capabilities has indeed lowered entry barriers for cybercriminals. Işık observed how the skills traditionally required for hacking have diminished: "We do see, unfortunately, that the cybercrime market is very lucrative. Recently, through the use of AI, the entry barrier to the cybercrime market is getting lower and lower," she said.
The underground economy has rapidly adapted to these opportunities. Dark web marketplaces offer specialized large language models designed specifically for criminal purposes, with subscription services providing hacking capabilities for as little as $90 per month, according to Işık. "These criminals move very fast, and they are very agile. They're not bound by rules or governance mechanisms that organizations need to comply with."
IBM's research confirmed this trend, revealing that 16% of data breaches involved attackers using AI, with AI-generated phishing attacks accounting for 37% of these incidents and deepfake impersonation attacks representing 35%. The speed and sophistication of AI-enabled attacks have outpaced many organizations' defensive capabilities: research published in Harvard Business Review found that the entire phishing process can be automated using LLMs, cutting the cost of phishing attacks by more than 95% while achieving equal or greater success rates.
However, the defensive applications of AI offer substantial benefits for organizations willing to invest appropriately. "AI is also a great friend for cybersecurity, but unfortunately, that side is developing slower than the attack side," Işık acknowledged. Organizations that implemented AI extensively throughout their security operations demonstrated measurably superior outcomes, reducing breach costs by $1.9m on average and shortening breach lifecycles by 80 days compared to organizations with limited AI security integration.
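Much of that defensive value comes from pattern recognition over security telemetry. The sketch below is a minimal, illustrative example of the approach, assuming Python with scikit-learn and entirely synthetic login-event features; it is a sketch of the technique, not a production detection rule, and the features and thresholds are placeholders.

import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic login events: [hour_of_day, failed_attempts, megabytes_downloaded]
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(13, 3, 500),   # mostly daytime logins
    rng.poisson(0.2, 500),    # occasional failed attempts
    rng.normal(50, 15, 500),  # modest data transfers
])
suspicious = np.array([[3, 9, 900], [2, 12, 1200]])  # night-time, many failures, bulk export
events = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector, then flag outliers for analyst review
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)  # -1 marks an anomaly
print(f"{int((flags == -1).sum())} of {len(events)} events flagged for review")

The point is not the specific model but the workflow: machine-scored anomalies narrow the haystack so that human analysts spend their time on the handful of events most likely to matter.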
Tim emphasized that this technological arms race necessitated a fundamental shift in organizational thinking. "This is why the conversation needs to move from security alone to digital resilience," she said. "Resilience provides a capacity lens to understand the extent to which a business can defend, respond, and recover from disruptions, including cyberattacks."

The concept of digital resilience represents a paradigm shift from reactive security measures towards proactive organizational capacity building. Tim's research highlights this evolution as essential for addressing AI-powered threats that traditional cybersecurity approaches struggle to counter effectively.
"Resilience is often misunderstood as a technical issue: having the most advanced systems. In reality, it is a socio-technical capacity. Resilience emerges when assets and human abilities are mobilized together through activities that enable the organization to continue functioning, adapt to disruption, and advance over time," she explained.
This framework comprises three interconnected layers that organizations need to develop systematically. The foundational layer addresses assets and abilities that can be drawn upon during crises. The operational layer focuses on activities that mobilize and coordinate these resources effectively. The strategic layer encompasses the goals of continuity, adaptation, and advancement that guide resilience efforts.
"For AI-powered threats, this means leaders cannot stop at acquiring tools," Tim explained. "They must also invest in building the abilities of their people to use AI effectively, securely, and responsibly. Only then can assets and abilities reinforce one another to support different objectives to collectively maintain resilience."
Işık approached resilience through the lens of proactive threat anticipation. "I talk about organizations 'thinking like a thief' to help protect themselves from a cybersecurity perspective. What do I mean? Since the advent of the web, organizations have managed, to a certain extent, to protect themselves by taking a very reactive stance on this issue. So, thinking like a thief is more about pushing them to be proactive."
This mindset requires organizations to systematically evaluate their vulnerabilities from an attacker's perspective. "If I were a black-hat hacker, for example, how would I breach my systems? That kind of thinking is a great way to start thinking proactively on this topic," Işık explained.
The human element is critical in building organizational resilience. Despite technological advances, Işık said attackers continue to target human vulnerabilities as their primary strategy. She observed that most malicious LLM use cases target humans rather than technical vulnerabilities. "The human element remains the most targeted one in cybersecurity," she said. "So, the better prepared we are from a behavior perspective, the better prepared organizations will be."
The benefits of this approach are outlined in IBM's AI cybersecurity research. Organizations that used AI extensively throughout their security operations saved an average of $1.9m in breach costs and reduced breach lifecycles by 80 days. These defensive gains contributed to the first global decline in average breach costs in five years, dropping 9% to $4.44m, though recovery remained challenging, with 76% of organizations taking more than 100 days to fully recover from incidents.
The traditional approach of isolating cybersecurity responsibilities within IT departments has become inadequate for AI-enabled environments, where technology decisions occur across multiple organizational functions. Işık identified a fundamental challenge: shifting organizational risk perception from a technical to a business responsibility. "It comes down to recognizing cyber risk as a business risk," she said. "That's really the starting point for genuine cross-functional ownership."
Işık cited examples of high-profile failures that demonstrate the systemic nature of cybersecurity risks. In Sweden, 200 municipalities were locked down because of a cyberattack. "Apparently, these 200 municipalities all used the same cloud HR software provider, so this was a supply chain attack," explained Işık, who noted that such incidents highlight how traditional risk assessment approaches fail to account for interconnected digital dependencies.
In response, she said effective cross-functional ownership requires embedding cybersecurity considerations within strategic planning and performance management processes. "Why don't we make cyber resilience part of our organizations' strategic planning cycles? And why don't we help executives take responsibility by including this in their performance reviews?" Işık asked.
Another important step is to distribute accountability across business functions, based on decision-making authority. "Business executives need to see how their decisions change digital risks in the organization," said Işık. "If we can hold them accountable for that, then that is a good starting point to distribute that risk across the organization and not just leave that responsibility to the Chief Information Security Officer."
Tim's research lab, UNSW PRAxIS, has a portfolio of ongoing projects on the responsible use of AI in businesses. Emerging findings from these projects show that this siloing of cybersecurity within IT is a common and critical vulnerability that organizations need to address systematically. "This siloing is common: cybersecurity is often seen as an IT problem. But in AI-enabled environments, that view is no longer adequate," Tim explained.
The distributed nature of AI adoption amplifies this challenge. Unlike previous technologies that remained within controlled IT environments, AI tools have proliferated across business functions, enabling individual employees to make technology decisions with security implications. "AI amplifies this need because it is a general-purpose technology," said Tim. "Individuals across functions now have greater influence over how technologies are configured and used, which means ownership must be distributed."
She agreed with Işık's perspective that traditional technological safeguards, while necessary, are insufficient without corresponding human capability development. "Technological guardrails remain essential, but they must be paired with knowledge building that cultivates stewardship abilities across the workforce," she said. "When employees understand their role in shaping secure and responsible use, resilience becomes embedded across the organization rather than isolated in IT."
Beyond AI-enabled cyberattacks, Işık pointed to the rise of quantum computing as a source of medium- and long-term strategic risk. While current quantum capabilities remain limited to specific problem domains, broader accessibility could fundamentally alter the cryptographic assumptions underlying digital security.
"The moment this capability becomes widely accessible (over the cloud, for example), this gives rise to a new range of threats," said Işık. "You can even go to Amazon today, which has an S3 (simple storage service) cloud computing environment that you can block time on. So, we are slowly getting there."
Threat actors have already begun preparing for quantum decryption capabilities through "harvest now, decrypt later" strategies, collecting encrypted data for future exploitation. "We know that they have already been doing this," said Işık. "They are sitting on encrypted data that they will be able to decrypt with quantum computing capability, because the RSA encryption that we heavily depend on is breakable with quantum computers."
Organizational preparation for post-quantum cryptography remains inadequate, despite available solutions. While quantum-safe encryption algorithms exist and some institutions are actively vetting them, Işık noted that organizations need to invest time and resources in the process and develop a roadmap for migrating from RSA to quantum-safe encryption systems.
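A concrete first step in such a roadmap is a cryptographic inventory: knowing which systems still rely on quantum-vulnerable keys. The sketch below is a minimal, illustrative example assuming Python and the widely used cryptography package; the host list is hypothetical, and a real inventory would also cover internal services, code-signing keys, and data at rest.

import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def certificate_key_type(host, port=443):
    """Fetch a server's TLS certificate and report its public-key algorithm."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size} (quantum-vulnerable: plan migration)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"EC-{key.curve.name} (also quantum-vulnerable)"
    return type(key).__name__  # anything else, e.g. a hybrid or post-quantum key

if __name__ == "__main__":
    for host in ["www.example.com"]:  # hypothetical inventory list
        print(host, "->", certificate_key_type(host))

An inventory like this turns "switch to quantum-safe encryption" from an abstract ambition into a prioritized list of systems, owners, and deadlines.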
Executive awareness of quantum risks is particularly limited, Işık added. "It might be only one in 50 organizations that say they are on top of this if you were to question them about quantum-safe transition planning," she said.
Executive leaders face the complex challenge of leveraging AI's transformative potential while maintaining appropriate security postures that protect organizational assets and stakeholder interests. Tim's research suggests that successful leaders approach AI integration as both opportunity assessment and organizational stress testing.
"Leaders should treat AI integration as both an opportunity and a stress test of organizational resilience. The question is not simply how much you can scale or automate, but whether AI is being integrated in ways that strengthen the organization's capacity rather than strain it," Tim explained.
This perspective requires leaders to evaluate AI initiatives holistically rather than focusing solely on efficiency metrics. That means, she said, considering broader implications, such as how AI fits with existing processes, how it shapes employees' work satisfaction and capabilities, and whether it enhances rather than erodes organizational coherence.
"Most importantly, leaders need to view AI as part of a living system that evolves," said Tim. "Short-term efficiency gains can easily create long-term fragility, which is why employees must be continuously supported to develop the stewardship capabilities needed to adapt these systems."
This article was first published by UNSW Business School in Sydney, Australia, and is republished with its permission.
Source: Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards, World Economic Forum
