Preparing for 2026: The AI bubble
CRN Asia reached out to several vendors in the region to get their views on the AI bubble and tech hurdles for 2026.
Throughout 2025, some partners and customers have voiced concerns about an AI bubble. While most tech vendors feel the AI bubble is not really an issue to worry about, organizations still need to ensure they are well prepared to deal with the hurdles of AI.
If 2025 was the year that saw the most AI deployment in organizations, 2026 will be about how they manage those AI use cases and develop new ones from them. It’s no longer just about getting ROI from AI investments, but also about ensuring the infrastructure is prepared for the increased use of AI while meeting regulatory requirements.
CRN Asia reached out to several vendors in the region for their views on the hurdles organizations may continue to face in their AI journey, on whether the AI bubble is a genuine concern, and on how to deal with it.
Mark Micallef, Managing Director, Southeast Asia, Google Cloud
We wouldn’t say hurdle, but an ongoing challenge will be operational readiness: ensuring organizations have the data quality, architecture, and governance needed to scale AI responsibly. Many enterprises still face data fragmentation or a shortage of AI-skilled talent, slowing their ability to extract real value.
- Responsible Deployment and Trust: As AI systems take on more autonomous action, whether in clinical settings, financial workflows, government services, or cybersecurity, trust and verification become critical. Industries such as healthcare emphasize the need for verifiable, explainable outputs, rigorous governance, and clinician- or domain-expert-led oversight models. Trust is becoming a differentiator, not an afterthought.
- Regulatory Maturity: Regulatory frameworks worldwide continue to lag behind the pace of innovation. Governments are exploring how AI might eventually help streamline regulation itself, but broad adoption will require careful implementation and cross-sector collaboration.
- The AI Bubble: 2026 will mark the maturation of enterprise AI, with organizations shifting from hype-driven experimentation to measurable value creation. Industries have already begun reporting tangible ROI, from revenue growth in retail and media & entertainment (M&E), to reduced administrative burden in healthcare, to substantial gains in security efficiency.
The trajectory points toward acceleration. As enterprises adopt AI-native operating models and as agentic systems become embedded in mission-critical workflows, AI will be seen less as a novelty and more as foundational infrastructure, much like cloud adoption a decade ago.
Mike Capone, CEO, Qlik
People keep asking whether AI is a bubble. I do not think that is the right question. What I see globally is an investment cycle with pockets of speculation on top, not a pure rerun of 1999.
The facts matter here. The companies driving most of the spend today actually have earnings, cash flow, and customers asking for capacity. Central banks and market analysts point out that valuations are high, but they are still anchored in real profits and very real capital expenditure on data centers, networks and power. That looks more like an industrial build-out than a meme stock frenzy.
At the same time, whenever you combine high valuations, concentration in a small group of names, and some circular deals between chipmakers, cloud providers and model companies, you should expect corrections. The IMF and several central banks are right to warn that parts of this market can reprice sharply if earnings disappoint. That is healthy. Some projects will not clear the bar and that is exactly how cycles sort themselves out.
From my vantage point, the real dividing line is not ‘AI or no AI.’ It is discipline. If a board treats AI as a fashion spend, with no clear view of which work units change and how they will be measured, they are volunteering to be part of the froth. If they treat AI like any other serious capex decision, tied to unit economics, error rates and time to resolution in specific workflows, they are much more likely to create durable value and to survive any shake-out.
So, I would put it this way: we are not living in an AI fairy tale, and we are not living in a doom story either. We are in a sorting phase. The technology is real, the productivity potential is significant, and there will be winners and losers. Whether you end up on the right side of that is less about the hype cycle and more about the discipline of how you invest.
Andrew Amos, Vice President of APAC, Diligent
From what we are seeing on the ground, AI adoption is proving to be a catalyst in overcoming one of the most entrenched hurdles to technology adoption: the inertia of the status quo. For many organisations, that has meant continuing to rely on spreadsheets and siloed processes, not out of strategy but habit.
What’s changing in 2026 is mindset. Organisations increasingly understand that digital governance, supported by automation and AI, is becoming essential.
Of course, like any major shift, there will be a learning curve to AI, including a period of adjustment, trial, and course correction. The difference between those who succeed and those who stall will come down to training, a strong governance framework, and disciplined iteration. Organisations that approach AI adoption with structure – phased rollouts, real-world use cases, and measurable outcomes – will ultimately avoid the “AI bubble”.
Sumir Bhatia, President, Asia Pacific, Infrastructure Solutions Group, Lenovo
AI’s rapid growth brings a very real challenge: Power. Many CIOs in the region view energy availability and efficiency as strategic constraints on their AI ambitions. Sustainability is now a prerequisite for continued innovation, not just corporate responsibility. At the same time, every AI conversation today eventually comes back to trust. Boards and regulators want assurance that AI systems are fair, secure, and accountable.
As with any major innovation, there will be intense competition in certain areas such as large language models, but overall we do not see a bubble – the long‑term AI trend remains intact.
At Lenovo, we believe that hybrid AI will become essential in 2026 as workloads increasingly need to balance public cloud, edge, and on-prem environments within the same workflow. Case in point: from Bengaluru to Seoul, we’re seeing customers converge on a similar architectural answer – hybrid AI.
This is driven by data sovereignty, latency needs, cost predictability, and, increasingly, sustainability. Training might happen in a core data center, while real‑time inference runs at the edge – bringing AI to the data where it's generated. We believe that this distributed approach ensures flexibility across diverse APAC markets where regulatory requirements vary drastically.
Beni Sia, General Manager & Senior Vice President, Asia Pacific & Japan, Veeam
2026 is the year organizations start to realize the real benefits of AI. AI is set to fundamentally change how we work, interact, and define roles. Those who leverage AI to enhance productivity and efficiency will be the ones who stay ahead.
In my view, the biggest hurdle will be managing the data that powers AI, and that’s exactly what Veeam aims to address in 2026.
Other significant challenges in the region include increasing complexity. Organizations are navigating multi-cloud, multi-data center, and multi-SaaS environments, all of which can create gaps in data resilience. Additionally, they face rapidly evolving and fragmented regulations across APJ, as well as clear skill gaps in both AI and cyber/digital resilience.
As a result, C-suite leaders are looking for a single platform that can unify backup, security, privacy, and compliance – reducing complexity and improving trust, while also controlling costs. The winners in 2026 won’t be those with the most tools, but those who can simplify their environments, gain end-to-end visibility, and operationalize resilience across people, processes, and technology.
Chris Kelly, President, Delinea
The biggest hurdle in 2026 will be trust at scale. AI is making cyberattacks easier to build, harder to spot, and far more convincing – and identity sits right in the middle of that.
We expect to see more AI-orchestrated credential attacks, including “vibe hacking,” where attackers use generative AI to run multi-step campaigns that look legitimate and exploit access paths quickly. As identity attacks become more scalable, especially with deepfakes and shadow AI adding to the attack surface, trust and continuous identity discovery and control will matter more than ever in 2026.
Finally, AI momentum will only pick up in 2026. AI tools will be more deeply embedded, powering agents, copilots, and machine-to-machine workflows. That’s where risk rises if identity and authorization controls don’t keep pace, and why organizations must secure AI access with least privilege, strong verification, and continuous monitoring – so innovation scales safely, not blindly.
Lawrence Yeo, ASEAN Solutions Director, Hitachi Vantara
The biggest hurdle in 2026 will be the hard physical and operational limits that define how fast AI can realistically scale. Across ASEAN, power availability, land constraints, data-management maturity and regulatory alignment will influence every major AI-infrastructure decision. Organisations will increasingly need to focus on the fundamentals: whether their data is governed, reliable and available in the right context; whether their infrastructure can support the throughput needed for training and the latency required for inference; and whether their operating models are ready for continuous resilience.
On the question of whether the AI bubble will burst, the short answer is no. The demand for intelligent systems is real, and adoption is accelerating across critical industries. What is likely to fade is the belief that AI can be scaled through compute alone. The organisations that over-invest in accelerators without strengthening data pipelines, governance and sustainability will find themselves hitting limits quickly. AI itself is not the bubble; undisciplined approaches to AI are. The leaders in 2026 will be those that treat AI not as a trendy investment cycle, but as a long-term operational discipline.
Remus Lim, Senior Vice President, Asia Pacific & Japan, Cloudera
The widening gap between experimentation and sustainable, scalable deployment will be the biggest hurdle in 2026. Many enterprises are operating in AI silos, with departments running independent AI pilots and POCs, resulting in inconsistent governance, fragmented tools, and rising costs.
Regulatory pressures, cybersecurity threats, and unclear ROI will also slow progress. Many organizations remain stuck in the “middle stage” of AI maturity, where governance, scaling, and cost control become barriers. With economic headwinds driving a shift toward “AI for impact,” companies that lack strong data foundations or clear ROI metrics will struggle.
AI is unlikely to “burst,” but poorly planned or poorly governed AI investments may fail, with more than 40% of AI agent projects ultimately being scrapped by 2027 due to poor ROI. The defining challenge will be ensuring that AI is deployed with the right data, infrastructure, and governance, not chasing trends for their own sake.
Kenneth Lee Wee Ching, CEO, Global TechSolutions
For advanced packaging and the wider semiconductor ecosystem, the key hurdle in 2026 will be whether the ecosystem can keep up with sustained demand for advanced chips. AI, high-performance compute and data-intensive applications are putting unprecedented pressure on global supply chains – from wafer fabs and advanced packaging lines to materials, logistics and skilled talent. That stress doesn’t sit with one player; it affects all stakeholders in the ecosystem – hyperscalers, device makers, equipment OEMs, SMEs and regulators alike.
Keeping pace will require much tighter collaboration across this chain. Large manufacturers will need partners who can help them maximize uptime, extend the life of critical tools, and qualify capacity in multiple hubs. GTS’ role in that context is to keep the equipment that underpins AI and advanced compute performing at or above OEM spec for longer, with shorter downtimes and a smaller footprint. Whether AI’s valuation curve smooths out or not, the world will still need reliable, efficient manufacturing – and that is where we expect to contribute to the global semiconductor industry.
Troy Nyi Nyi, SVP & GM, APAC, SEON
Industrialized fraud met upstream. AI has scaled the fraud supply chain: deepfakes look natural, scripted bots simulate human rhythm across forms and sessions, and multi-account farms iterate at machine speed. Our stance is to stop this before it becomes a customer problem. SEON evaluates a digital interaction from the first touch – pre-KYC and at signup – by orchestrating behavioral patterns (velocity, navigation flow, typing cadence), device and network intelligence (fingerprints, IP/ASN integrity, emulation/spoofing signals) and contextual history. When intent looks genuine, users move; when signals diverge, we step-up or hold. That keeps attacks from entering the system, cuts downstream workload on payments and payouts, and preserves a journey that stays fast, fair and explainable.
Human-plus-machine, with explainability at the core. SEON’s explainable models surface patterns and rank related entities so investigators see connected accounts and likely mules in seconds; analysts stay in the loop to judge intent and proportionality, and every outcome is audit-ready. New capabilities – such as similarity ranking and concise investigation summaries – turn raw data into actionable insights, reducing manual review load while keeping decisions consistent across products and markets. The result is a system that adapts as adversaries evolve, pairs automation at scale with human judgement where it matters and maintains trust without slowing growth.
Alex Teo, Vice President & Managing Director of Southeast Asia, Siemens Digital Industries Software
The biggest technology challenges in 2026 will be managing the rising complexity of multi-domain, data-intensive workflows, and ensuring that technology delivers clarity rather than adding friction.
As data volumes continue to surge and systems become more complex and interconnected, it will become crucial for organizations to coordinate across disciplines, maintain precision at scale, and ensure consistent configuration across teams and tools.
AI is also expected to become even more deeply embedded, moving from pilot projects to being integrated into every aspect of operations. With AI now helping organizations to extract value from vast amounts of previously unusable industrial data, organizations will need to focus on transforming that data into context-aware insights that engineers, analysts, and decision makers can actually use. While it is impossible to predict if or when the broader AI hype may taper off, it is clear that AI is already delivering measurable value today in engineering and manufacturing. In 2026, AI will become even more essential for organizations that want to operate with greater speed, quality, and confidence.
Dominic Forrest, Chief Technology Officer at iProov
From iProov’s vantage point, the biggest hurdle for technology in 2026 comes down to one word: trust.
Current security models were never designed for a world where generative AI can be weaponized at scale. What we’ve seen over the past twelve months is how AI has dramatically lowered the cost and increased the quality of synthetic identity fraud and deepfakes. As a result, remote identity systems that rely on basic liveness checks are no longer enough to keep organisations secure. If you can’t be certain that a real, present human is the one initiating an action, every automated process beyond that becomes vulnerable too.
And what's emerging next is even more complex. With personal AI agents becoming mainstream, the security question shifts. We’re moving from “Is this a real human?” to “Is this agent acting under the control of the right human, right now?” Distinguishing a malicious AI agent from a legitimate, authorized one will be one of the defining challenges of the year, and it’s why establishing irrefutable trust at the point of identity will matter more than ever.
As for the idea of “AI bubble” bursting, I don’t see that happening. AI is already too deeply embedded in how organizations operate. But a correction is coming. Boards will start demanding measurable returns from the investments they’ve already made.
Kunal Jha, Regional Director for Netskope Asia
Organizations are in a balancing act between wanting to rapidly adopt AI and ensuring the use of AI is secure and de-risked. They also must navigate the complexity of multicloud and hybrid environments, and eliminate the security blind spots that complexity creates. Many entities still operating with legacy IT infrastructure quickly realise that they are not built for the AI era, and need reliable partners to help make important decisions about how and in what ways to upgrade their infrastructure. Finally, geopolitical pressures are tightening cybersecurity, privacy, and sovereignty regulations across Asia, increasing the compliance burden.
The common thread across these trends is the desire to stay secure without slowing the pace of innovation. I believe 2026 will be a year when organizations focus on building the conditions, and making the technology investments, that make this possible.
Johan Fantenberg, Director at Ping Identity
With AI-generated phishing, deepfakes, and synthetic identities on the rise, even the most vigilant teams will become vulnerable during periods of distraction and high demand. As agentic AI essentially becomes non-human employees within organizations, a single compromised agent could expose entire ecosystems.
In 2026, leaders will need to secure human and machine identities. This requires treating every interaction, access request, and transaction as a potential risk point. Resilience will be the ultimate competitive edge in the age of AI-driven commerce, and organizations can gain this advantage by adopting continuous verification at every stage of major touchpoints, passwordless access with biometric authentication, and verified trust frameworks that operate at machine speed and provide continuous risk-based assessment.
Erich Kron, CISO Advisor at KnowBe4
Q-Day, the day when quantum computers become sufficiently capable of cracking most of today's traditional asymmetric encryption, will likely happen in 2026. While privacy concerns have kept mandatory digital IDs largely at bay, digital identities tied to their real human identities are expected to grow in popularity and become increasingly necessary for accessing digital services. The security of these systems has never been more important.
Organizations must strengthen human authentication through passkeys and device-bound credentials while applying the same governance rigor to non-human identities like service accounts, API keys and AI agent credentials.
Nathan Cheng, Southeast Asia, Data & AI Lead at Rackspace Technology
The AI bubble won't burst – but the operating model gap will separate winners from casualties. As our CEO put it bluntly: "AI winter is not coming. If anything, the heat is just turning up." The numbers support this. Coding benchmarks like SWE-bench Verified are now built from real GitHub issues, testing whether models can read codebases, generate patches, and pass tests. Token prices for frontier coding models have dropped substantially while efficiency has increased, shifting the conversation from research novelty to unit economics – cost per bug fix, cost per feature, cost per refactor. What skeptics call a bubble is actually the painful but necessary transition from experimentation to production, where weak implementations get culled but the underlying infrastructure investment remains.
The real hurdle is organizational, not technological. Enterprises will struggle not because AI stops working, but because they haven't redesigned how their people and agents collaborate. The agenda for 2026 is clear: treat coding agents as teammates, benchmark models on your own repositories, and stand up at least one agent-first squad with real guardrails. The companies that fail won't be victims of a bubble bursting—they'll be victims of clinging to old operating models while competitors learn to run long-lived agents safely against production data. The gap won't be who has access to AI. It will be who figured out how to work with it.
Ananth Nag, Vice President, APAC, Rubrik
One of the biggest cybersecurity hurdles in 2026 will be whether organizations can keep up with the complex digital ecosystems AI creates. As organizations scale their use of AI models and autonomous agents, they are also creating an explosion of non-human identities (NHIs) that interact with applications, data, and other systems. Managing this new layer of identity has quickly become a pain point, as threat actors increasingly weaponize it to gain access to sensitive data – a critical limitation in enterprise security.
While organizations invest more heavily in security and governance systems to secure AI deployments, AI adoption and innovation will continue moving at a rapid pace. Business leaders will need to rethink how they introduce advanced measures. Continuous monitoring for NHIs and governance frameworks will help balance innovation with resilience and extract meaningful value from AI. Without these controls in place, even the most ambitious AI investments risk leading to inconsistent performance or full-system compromise.
Tatsuya Suzuki, Regional VP APJ Channel Sales, Akamai
Going into 2026, AI-orchestrated cyberattacks remain a top technology hurdle to overcome. From API abuse to automated fraud campaigns, the open-source nature and easy accessibility of AI make it a double-edged sword that can quickly bite back at organizations relying on traditional, legacy defense systems to fight autonomous AI agents attacking at unprecedented speed and scale. Industries such as semiconductors, finance, and high-tech manufacturing, which are already heavily targeted, will face heightened exposure unless they modernize their cyber operations around real-time, automated defense.
For APAC businesses, this translates into the need to stay resilient by constantly innovating their tools and processes to leverage the benefits of AI as a force-multiplier through modernizing API governance, investing in automated threat containment, and strengthening supply chain networks, starting at the edge and through to their internal workloads.
From an Akamai point of view, the AI bubble won’t burst – instead, it will mature. The industry is moving past the experimentation phase into AI being deeply embedded in both innovation and cyber risk, making its trajectory irreversible. What will fade are poorly governed, high-cost, isolated AI projects that cannot scale sustainably. In short, AI will not collapse; undisciplined AI will. The winners will be those who align smart architecture, strong governance, and financial oversight to build AI systems that are resilient, secure, and scalable.
Mark Weaser, Vice President, APAC at OutSystems
In 2026, the biggest AI hurdle will not lie in adoption, but in translating it into sustainable, meaningful business value and impact. Many APAC organizations have experimented with AI pilots, yet scaling to the production stage remains challenging – particularly when generic AI solutions fall short of initial expectations and fail to meet diverse enterprise needs.
Moreover, as costly large language models (LLMs) show diminishing improvements, success will no longer just hinge on results, but also on faster, reliable, and economic solutions like small language models (SLMs), which are sufficiently powerful and inherently more suitable. We will see enterprises slowly pivoting to hybrid agentic systems, combining LLMs’ strong reasoning capabilities with the domain expertise of SLMs. The real differentiator will therefore be orchestration – the ability to route the right task to the right model and coordinate complex workflows so that all components work towards a unified business goal.
2026 will mark a shift from AI speculation to real adoption. AI will continue to be front of mind for all companies, but the organizations that operationalize AI to drive concrete efficiency and game-changing outcomes will emerge as the true winners.
Yuval Fernbach, VP & CTO of MLOps, JFrog
A major hurdle is the potential loss of visibility and control as organizations race to embed AI into every workflow. The speed of adoption is outpacing the industry’s ability to govern it, and the most urgent manifestation of this problem is the rise of shadow AI: unapproved models, tools, and API calls operating outside formal oversight.
Some 49% of companies have no reliable way to control ML model usage inside their applications, and more than two-thirds cannot reliably track open-source packages with transitive ML dependencies, creating huge security blind spots. As developers integrate models from providers such as OpenAI, Anthropic, and Google directly into their workflows, organizations lose the ability to ensure responsible use, maintain audit trails, or verify model provenance. That becomes a major obstacle as global regulations increasingly demand transparency, accountability, and evidence of how AI systems are built and deployed.
The momentum behind AI is real and will continue, but as regulatory pressure increases and the consequences of ungoverned AI become clearer, organizations will be forced to slow down, reassess, and rebuild their foundations. The AI boom will continue, but governance is key. Projects built on uncertain data, unvetted models, or invisible decision pathways may stall or fail, while those grounded in transparency, oversight, and traceability will endure. The divide won’t be between AI adopters and non-adopters, but between those who can demonstrate responsible use and those who cannot.
Zoe Nicholson, VP, Partner Sales, APAC, Qualtrics
Trust is the currency of innovation, impact, and adoption with AI, and it is key that companies prioritize building and cultivating it in 2026. If businesses, customers, or employees do not trust the AI capabilities they’re presented with, then they will not use, engage with, or adopt them – and this will hinder the returns on investment and the impact businesses are looking for.
Partners can play a critical role in building trust. They bring the expertise customers need to deploy AI that delivers and accurately measures impact, support them in building programs that show real business outcomes and value, and work closely with teams to deeply understand their needs.
Beyond AI, leading organizations will focus on hyper-personalization in real time. They’ll achieve this by understanding what customers are doing and where they are (behavioral and contextual signals), and connecting all the scattered pieces of information from every touchpoint, whether it’s calls, chats, reviews, social media or transactions.
Prabhuraj Patil, Senior Director, Physical Access Control Solutions, Asean & India Subcontinent, HID
The biggest hurdle to technology in 2026 will be navigating the learning curve around operationalising AI within security and identity management workflows. As AI reshapes the security landscape and strengthens decision-making, the emphasis will shift towards building the skills, transparency and frameworks needed to help security teams use these capabilities confidently and effectively.
While the hype around AI may taper, the bubble is unlikely to burst, as AI’s value is already becoming increasingly tangible. In identity security, for example, AI is improving biometric authentication accuracy and enhancing threat detection and mitigation. The question is no longer whether AI is viable, but how organisations can deploy it responsibly – supported by clear policies, ethical guidelines, and privacy-first design.
With these foundations in place, organizations can fully unlock AI’s potential and accelerate broader, trusted adoption of AI-driven security technologies.