Technology / SaaS
April 21, 2026

AI Compute Migration in ASEAN: Shift to AI Infrastructure and the Rise of Sovereign Data Strategies

Analyzes ASEAN AI data center expansion, highlighting China-driven demand shift, power constraints, Singapore vs Malaysia trade-offs, and rising dominance of AI-optimized infrastructure economics.

60 Mins
Former Head of Digital
Singapore
Public
🛡️ MNPI Screened
🔒 PII Redacted
✓ Compliance Certified
📄 Full PDF Included
Standard
One-time purchase
$449
$499
10% OFF

No subscription required · Instant access after purchase

Buy Now

What's included
Full verbatim transcript (PDF)
Executive summary with key takeaways
Tagged companies, keywords & metadata
MNPI-screened & PII-redacted
Instant download after purchase
🔒 Secure checkout via Stripe · Instant delivery · Full compliance guarantee
Companies Discussed
Alphabet (GOOGL), Amazon (AMZN), AMD (AMD), Apple (AAPL), ARM (ARM), Baidu (BIDU), Equinix (EQIX), GDS (GDS), Microsoft (MSFT), NVIDIA (NVDA), Oracle (ORCL), Singtel (Z74), Tesla (TSLA)
Executive Summary
Topics Covered
Methodology
Free Preview — Executive Summary

This transcript examines AI-driven data center growth in ASEAN, where demand is increasingly redirected from China into Singapore and Malaysia, with over 70–80% of new capacity focused on AI workloads. Power availability has emerged as the primary constraint, shaping project scale and location decisions. Singapore offers reliability but at significantly higher costs, while Malaysia provides cost advantages with execution risks. AI data centers command higher returns due to premium pricing and longer tenancies, though infrastructure complexity, cooling requirements, and regulatory factors create significant barriers to scaling.

Topics Covered
  • China-linked demand shift driving ASEAN data center expansion
  • AI workloads vs traditional cloud demand in capacity planning
  • Power availability as the primary constraint on data center growth
  • Impact of sovereign AI policies and data localization on deployment
  • Singapore vs Malaysia trade-offs in cost, reliability, and scalability
  • AI data center design differences vs traditional cloud infrastructure
  • Cooling technologies and rack density constraints in AI infrastructure
  • Value capture across hyperscalers, colocation, and infrastructure layers
  • Role of hyperscalers vs colocation providers in infrastructure ownership
  • Key bottlenecks in execution including permitting, power, and supply chain
  • Build timelines and scaling challenges for new AI data centers
  • Return profiles of AI-focused vs traditional data centers
  • Risk of overcapacity vs strong structural demand
  • Investment outlook across geographies and infrastructure segments
Expert Sourcing

Experts are sourced from Nextyn's verified network of 900,000+ professionals. All hold or previously held senior roles directly relevant to the topic — minimum VP level, typically C-suite or former C-suite.

MNPI & Compliance Screen

Every transcript undergoes a two-pass MNPI review before listing. Material non-public information is redacted. All experts sign NDAs and MNPI disclosure forms prior to the call. PII is fully anonymised.

Call & Transcription

Calls are conducted by trained Nextyn research moderators using a structured question guide. Sessions run 45–90 minutes. Verbatim transcription is produced within 24 hours with speaker labels and timestamps.

Quality & Delivery

Final transcripts include an AI-assisted executive summary, tagged companies and tickers, expert metadata, and a compliance certificate. Delivered as a formatted PDF with instant download via Stripe.

Q: Can you walk us through the current GPU allocation framework at your organisation? How are you deciding between internal AI workloads and enterprise customer commitments?

A: Sure. So the fundamental tension right now is that our internal AI teams — the ones building our own foundation models and inference services — are consuming GPUs at a rate that nobody anticipated even 18 months ago. We're talking about 3-4x the original projections. And that creates a real squeeze on what's available for enterprise customers. The allocation committee meets weekly now, which tells you everything. It used to be quarterly. We have a scoring matrix that weighs revenue potential, strategic importance, and internal capability gaps. But honestly, internal teams almost always win because the economics of our own AI services are so compelling compared to renting compute to enterprises...
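The scoring matrix the expert mentions — weighing revenue potential, strategic importance, and internal capability gaps — can be sketched as a simple weighted ranking. This is a hypothetical illustration only: the weights, scales, and request names below are invented, since the transcript does not disclose the actual criteria values.

```python
from dataclasses import dataclass

@dataclass
class AllocationRequest:
    name: str
    revenue_potential: float     # 0-10: expected revenue from the workload
    strategic_importance: float  # 0-10: alignment with company AI strategy
    capability_gap: float        # 0-10: how much internal capability it builds

# Hypothetical weights. Tilting toward strategy and capability is one way
# internal teams could "almost always win", as the expert describes.
WEIGHTS = {
    "revenue_potential": 0.3,
    "strategic_importance": 0.4,
    "capability_gap": 0.3,
}

def score(req: AllocationRequest) -> float:
    """Weighted sum across the three criteria."""
    return (WEIGHTS["revenue_potential"] * req.revenue_potential
            + WEIGHTS["strategic_importance"] * req.strategic_importance
            + WEIGHTS["capability_gap"] * req.capability_gap)

requests = [
    AllocationRequest("internal-foundation-model", 6.0, 9.0, 9.0),
    AllocationRequest("enterprise-customer-A", 8.0, 4.0, 2.0),
]

# Rank highest-scoring first; GPU capacity is granted from the top down.
ranked = sorted(requests, key=score, reverse=True)
```

Under these invented weights the internal workload scores 8.1 versus 4.6 for the enterprise request, mirroring the dynamic described in the excerpt.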

🔒 FULL TRANSCRIPT LOCKED
Purchase to unlock the full transcript
48 more pages of expert insights, data points, and analysis
Buy This Transcript — $449 →
Expert Profile
Former Head of Digital at ASM
Duration
60 Mins
Call Date
April 21, 2026
Geography
Singapore
Transcript Tier
Standard
Need Custom Research?

Commission a bespoke expert call on any topic

Choose your expert profile, topic, and questions. We source, vet, conduct, and deliver. From $599.

Learn About Custom Transcripts →
SAVE MORE WITH BUNDLES

Go deeper. Buy the pack.

3 Transcripts

AI Infrastructure Deep-Dive Pack

GPU Supply Bottleneck
AMD MI300 Strategy
Cloud Capex Priorities
$999
$1,297
SAVE 23%
You save $298 compared to individual purchases
Buy AI Infrastructure Deep-Dive Pack →
Companies Discussed
NVIDIA (NVDA)
Microsoft (MSFT)
AMD (AMD)
Google (GOOG)

Get the full picture. Buy with confidence.

Every Transcript-IQ transcript is MNPI-screened, PII-redacted, and compliance-certified. Instant delivery. No subscription.