AI Hype Isn’t New — But the Stakes Are
What decades of bad predictions teach us about AI, disruption, and the future of cybersecurity
There is a certain confidence in today’s AI discourse that should make every seasoned operator pause.
Executives, analysts, and technologists are making sweeping claims about artificial intelligence, its inevitability, its dominance, and its ability to fundamentally replace large portions of the workforce. The tone feels definitive, almost settled. But if you take a step back and examine history, this level of certainty is not new. In fact, it is one of the most consistent signals that the market is misunderstanding the moment.
The reality is that we have seen this pattern before. The internet in the late 1990s, industrial automation in manufacturing, and the rise of mobile computing all triggered similar waves of optimism and fear. In each case, experts made bold predictions. In each case, many of those predictions were wrong not because the technology failed, but because the framing was flawed.
Understanding those patterns is critical for today’s cybersecurity leaders. AI is not just another technology wave; it is a force multiplier in an already asymmetric threat landscape.
The Incumbency Trap: Why Experts Misjudge Disruption
One of the most consistent failure modes in technology forecasting is what can be described as the incumbency trap. Time and again, leaders of dominant organizations have evaluated emerging technologies through the lens of their existing business models.
This pattern is well documented. Early computing pioneers underestimated the demand for personal computers. Established software executives dismissed the importance of mobile devices. Media and entertainment giants failed to recognize the implications of streaming platforms. In each case, the underlying mistake was the same: the technology was assessed based on whether it replaced the current product, rather than whether it replaced the underlying need.
This is not simply a strategic oversight; it is a failure in risk assessment. When organizations define risk within the boundaries of their current operating model, they become blind to disruption that does not resemble them. By the time the threat becomes visible, it is often too late to respond effectively.
For cybersecurity leaders, this lesson is particularly relevant. Many organizations are currently evaluating AI as a tool to incrementally improve existing workflows—automating alerts, accelerating triage, or enhancing reporting. While these gains are valuable, they are not transformative. The real risk lies in failing to recognize how AI may fundamentally alter the nature of threats, attack surfaces, and defensive strategies.
Second-Order Effects: The Blind Spot That Matters Most
Another consistent pattern in failed predictions is the inability to anticipate second-order effects. Experts are often capable of understanding what a technology does at a surface level, but they struggle to foresee how it will reshape systems, behaviors, and economies once it reaches scale.
The early internet is a prime example. Many analysts correctly identified it as a powerful communication tool. What they failed to anticipate was its evolution into a coordination infrastructure, one that enabled global e-commerce, cloud computing, remote work, digital identity ecosystems, and entirely new economic models. These outcomes were not immediately visible because they depended on widespread adoption and behavioral change.
Second-order effects are inherently difficult to predict because they emerge from the interaction of multiple variables: technology, user behavior, market forces, and regulatory environments. However, they are often where the most significant impact occurs.
In cybersecurity, this blind spot is particularly dangerous. AI is not merely automating tasks; it is enabling new forms of coordination for both defenders and adversaries. Attackers can now scale operations, personalize phishing campaigns, and automate vulnerability discovery in ways that were previously resource-intensive. At the same time, defenders are leveraging AI to enhance detection, response, and threat intelligence.
The net effect is not equilibrium; it is acceleration. The pace of both attack and defense is increasing, compressing the time available to detect and respond to incidents.
Behavior Over Capability: The Only Reliable Predictor
A third pattern emerges when examining accurate versus inaccurate predictions: the most reliable forecasts are grounded in behavior, not capability.
Many incorrect predictions focus on the limitations of a technology at a given point in time. They assume that current constraints, whether technical, economic, or usability-related, will persist. In contrast, accurate predictions tend to focus on how people will behave once those constraints are reduced or eliminated.
The adoption of smartphones, online banking, and streaming services all followed this pattern. The technology itself evolved, but the real driver of change was user behavior. As friction decreased, adoption increased, and new norms emerged.
For cybersecurity, this distinction is critical. The risks associated with AI are not solely tied to what the technology can do today, but to how it will be used by employees, attackers, and organizations. Employees may inadvertently expose sensitive data through AI tools. Attackers may leverage AI to craft more convincing social engineering campaigns. Organizations may integrate AI into critical workflows without fully understanding the associated risks.
In each case, the underlying issue is behavioral, not technical.
AI and the Cybersecurity Job Market: Evolution, Not Elimination
One of the most prominent narratives surrounding AI is its impact on the workforce, particularly within cybersecurity. Claims that AI will replace security professionals are widespread, but they do not align with historical precedent or current trends.
AI is not eliminating jobs; it is reshaping them.
Routine, repetitive tasks such as log analysis, alert triage, and basic threat enrichment are increasingly being automated. This shift reduces the demand for purely process-driven roles, particularly at the entry level. However, it simultaneously increases the value of higher-order skills, including threat hunting, incident response, and strategic risk management.
This dynamic mirrors the impact of automation in manufacturing. While certain roles were displaced, new roles emerged that required more advanced skills and offered greater value. The workforce did not disappear; it evolved.
In cybersecurity, the same pattern is unfolding. Organizations are not reducing their need for security professionals; they are redefining what those professionals must be capable of doing. The gap between average and highly skilled practitioners is widening, and the expectations for performance are increasing.
The Strategic Implication: Moving Beyond Incremental Thinking
The most significant risk facing organizations today is not the adoption of AI itself, but the way in which it is being evaluated.
Many executives are approaching AI as a means of incremental improvement, seeking efficiency gains within existing processes. While this approach can yield short-term benefits, it fails to capture the broader implications of the technology.
The more important question is not how AI can make current workflows more efficient, but what entirely new capabilities it enables. This shift in perspective is essential for identifying both opportunities and risks.
For cybersecurity leaders, this means moving beyond tool-centric thinking and focusing on outcomes. It requires a reassessment of identity management, as AI-driven systems increasingly rely on machine identities and automated interactions. It demands a deeper understanding of how attackers may leverage AI to bypass traditional controls. And it necessitates a more dynamic approach to risk management, one that accounts for rapid changes in both technology and behavior.
James Azar’s CISO Take
History provides a clear lesson: technology is consistently overestimated in the short term and underestimated in the long term. The internet did not eliminate industries overnight, but it fundamentally reshaped them over time. Automation did not create a jobless future, but it transformed the nature of work.
AI is following the same trajectory, but at an accelerated pace. The window between early adoption and widespread impact is narrowing, and organizations that fail to adapt will find themselves at a disadvantage.
The critical differentiator will not be access to AI tools, but the ability to understand and operationalize them effectively. This includes recognizing the importance of behavior, anticipating second-order effects, and aligning security strategies with evolving business models.
In cybersecurity, where the consequences of failure are immediate and tangible, this understanding is not optional; it is essential.
Conclusion
AI is not an unprecedented phenomenon. It is the latest iteration of a familiar cycle of innovation, hype, and eventual transformation. What makes it different is not the pattern itself, but the speed at which it is unfolding and the scale of its impact.
For cybersecurity leaders, the challenge is not to predict the future with certainty, but to recognize the signals that history provides. By understanding the common pitfalls in technology forecasting and focusing on the factors that truly drive change, organizations can position themselves to navigate this transition effectively.
The question is not whether AI will transform cybersecurity; it already is. The question is whether organizations will adapt quickly enough to keep pace.