April 24, 2026
Advanced Micro Devices (AMD)
All-Time Highs, a $60B Meta Deal, and the Most Important Earnings Call of 2026

Analyst Ratings & Price Targets — As of April 24, 2026
- D.A. Davidson — Upgraded to Buy | PT raised to $375 (from $220) — April 24, 2026; driven by agentic AI-linked CPU demand surge — new street-high
- Wells Fargo — Overweight | PT: $345 — flags AMD as its top AI pick for 2026; cites MI-series GPU ramp and expanding hyperscaler customer base
- Evercore ISI — Outperform | PT: $358
- Wolfe Research — Outperform | PT: $300 — based on multi-year AI GPU agreements and improving EPYC server CPU visibility; agentic AI demand pulling forward CPU orders
- Stifel — Buy | PT raised to $320 (from $280) — April 20, 2026; stronger-than-expected AI & general compute demand
- Bank of America — Buy | PT raised to $310 (from $280) — April 18, 2026; projects data center growth above 60% YoY in 2026 and 2027
- Bernstein — Raised PT to $265 (from $235) — stronger server expectations + AMD-Meta deal upside
- Citigroup — Neutral | PT: ~$248
- Consensus (TipRanks) — Buy | Avg PT: $287.33 (3-month avg) | 33 Buys, 21 Holds, 0 Sells
- Consensus (MarketBeat, 40 analysts) — Moderate Buy | Avg PT: $290.19
- PT Range across all covering firms: $120 low – $375 high
Worth noting: on April 24, 2026 alone, AMD was up 15.46% intraday. D.A. Davidson’s upgrade to Buy with a $375 target — the new street-high — landed the same morning. The stock has moved from roughly $196 to $352 in a matter of weeks. That kind of velocity tends to separate the thesis from the noise very quickly.
Company Profile
Advanced Micro Devices (NASDAQ: AMD) is a fabless semiconductor designer operating across four segments: Data Center (EPYC server CPUs + Instinct AI GPUs), Client (Ryzen CPUs for consumer and commercial PCs), Gaming (semi-custom SoCs for PlayStation and Xbox + Radeon discrete GPUs), and Embedded (adaptive computing via the 2022 Xilinx acquisition). Fabless means AMD designs the chips but outsources manufacturing to TSMC — keeping capital requirements lean and letting AMD focus entirely on architecture.
The company is the second-largest server CPU vendor globally and the most credible large-scale Nvidia alternative in AI accelerators. That dual positioning — competitive in both CPU and GPU — is structurally rare, and it is the entire foundation of the AMD bull case.
Price Action — The Numbers Tell the Story
AMD’s 52-week range tells you almost everything. The stock went from a low of $91.87 to an all-time high above $310 in under 12 months — a move exceeding 237%. By April 24, 2026, AMD was trading up another 15%+ on fresh analyst upgrades and AI demand optimism, with intraday price action in the $340–$352 range. The move from roughly $196 to $352 in a matter of weeks is a near-doubling off a compression low — and it came with volume confirmation.
- 52-week low: $91.87 (April 30, 2025)
- All-time high: ~$352 intraday (April 24, 2026)
- YTD 2026 performance: ~+28% as of mid-April, then another 15%+ surge on April 24
- Recent trajectory: From ~$196 to ~$352 in a matter of weeks — near-doubling
- Beta: 1.33 — amplifies both upside and drawdowns relative to market
- Market cap: ~$494–$498B (late April 2026)
- P/E (trailing): ~115x — elevated, reflecting growth expectations not trailing earnings
- P/E (forward 2026): ~32x — premium to market, discount to Nvidia
- Key support zones: $310–$320 (prior ATH now potential support), $268–$270, $240, $193
- Key resistance: $352 (April 24 intraday ATH), $375 (D.A. Davidson street-high target)
The daily tape shows stair-step price action — shallow pullbacks, strong rebounds. Classic institutional accumulation signature. Whether that continues depends entirely on what May 5 delivers.
The Numbers — Most Recent Financials
AMD’s most recently reported quarter is Q4 FY2025. Q1 FY2026 results drop May 5, 2026, after market close.
- Q4 2025 Revenue: ~$10.3B — record quarter
- Q4 2025 Data Center Revenue: Record $5.38B — up 39% YoY
- Q4 2025 Non-GAAP EPS: $1.53 (beat expectations)
- Full-Year 2025 Revenue: $34.64B — up 34.34% YoY
- Full-Year 2025 Net Income: $4.34B — up 164% YoY
- Full-Year 2025 Free Cash Flow: $5.519B — up 129% YoY
- Embedded Segment Design Wins 2025: $17B (up ~20% YoY); cumulative wins since Xilinx exceed $50B
- Q1 FY2026 Company Guidance: ~$9.8B (+/- $300M) — implies ~32% YoY growth; non-GAAP gross margin ~55%
- Street Q1 FY2026 Estimate: ~$9.84B revenue; projected Data Center revenue ~$5.5B
- Q1 FY2026 EPS Estimate (TipRanks): $1.27, range $1.23–$1.37
- Long-Term Revenue CAGR Target: >35% (AMD Financial Analyst Day)
- Long-Term Non-GAAP EPS Target: >$20 (3–5 year)
- Data Center TAM View: AMD sees total AI data center TAM reaching $1 trillion over the next five years
One flag worth calling out: AMD guided that it is not forecasting any additional China MI308 revenue beyond the ~$100M already baked into Q1 FY2026. That’s the export control drag — real, quantified, and apparently manageable. The underlying business, stripped of that noise, is compounding at a rate that would have seemed implausible three years ago.
CEO Lisa Su — The Yottascale Vision
It’s impossible to separate AMD from Lisa Su. She took the CEO role in October 2014 when AMD was a near-bankruptcy story trading near $3. The turnaround thesis was simple and disciplined: abandon smartphones, focus on high-performance compute — CPUs, data centers, gaming, and eventually AI. The breakthrough came with the Zen chiplet architecture, which gave AMD both a performance edge and a cost advantage it hadn’t had in nearly a decade. By 2021, AMD’s market cap had surpassed Intel’s. Almost no one predicted that in 2014.
At CES 2026, where Su delivered the opening keynote, she declared the world has entered what she called the “Yottascale era” of computing — a period where AI systems will require yottaflops of processing power to meet growing demands. Su predicted a need for 10 yottaflops of compute power by decade’s end to support five billion users. For context, that is roughly 10,000 times the global AI compute capacity that existed in 2022. That’s the market AMD is positioning itself inside.
Su said AMD entered 2026 with “a lot of momentum” driven by demand for high-performance compute and an environment that rewards strong product cycles and deep customer relationships. On the AI bubble debate, she has been direct: Su said AMD remains confident in the AI infrastructure opportunity because it is “solving real-world problems,” and described AI infrastructure spending as tied to “productivity and intelligence,” with AMD seeing “significant new enterprise use cases” and deployments remaining in “very early innings.”
Su also made a point that rarely gets enough attention. She said CPU demand has exceeded her expectations and suggested the compute market may be larger than AMD’s earlier forecasts. At Purdue’s presidential lecture series, Su said the semiconductor space is “not mature,” and that proclamations about Moore’s Law being dead miss how innovation has shifted.
“This is about making the right bets at the right time.” — Dr. Lisa Su, AMD Chair & CEO, on the Meta deal (CNBC, February 24, 2026)
Su remains the central figure of the AMD narrative — and after twelve years, she has earned the benefit of the doubt on execution. Her track record is under-promise, over-deliver. She noted the time needed for customers to optimize workloads on AMD has shrunk from “a number of months” in early MI300 deployments to “a short number of days,” citing improvements in tools and libraries and AMD’s use of AI in software ecosystem development.
The $60B Meta Deal — What It Actually Means
This is the deal that structurally changed the AMD investment narrative. On February 24, 2026, AMD and Meta announced something that goes well beyond a standard supply agreement.
- The agreement commits Meta to purchasing up to 6 gigawatts of AMD Instinct GPUs across multiple chip generations, starting with custom MI450 accelerators. Independent analysts estimate the contract value at approximately $60 billion over five years — the single largest hardware procurement deal among the Magnificent Seven.
- Shipments supporting the first gigawatt deployment are scheduled to begin in the second half of 2026 powered by the custom AMD Instinct MI450-based GPU and 6th Gen AMD EPYC CPUs, codenamed “Venice,” running ROCm software and built on the AMD Helios rack-scale architecture.
- Building on deep roadmap alignment, Meta will be a lead customer for Venice and for “Verano,” its next-generation EPYC successor, designed with workload-specific optimizations to deliver leadership performance-per-dollar-per-watt. This is not just a GPU deal — AMD is selling the full stack.
- As part of the agreement, to further align strategic interests, AMD has issued Meta a performance-based warrant for up to 160 million shares of AMD common stock, structured to vest as specific milestones associated with Instinct GPU shipments are achieved.
- In October 2025, AMD also announced a partnership with OpenAI to supply 6 GW of GPUs beginning in the second half of 2026; OpenAI said it would build an initial 1 GW data center using AMD MI450 chips.
The AMD-Meta deal, combined with the OpenAI partnership, gives AMD 12 GW of committed GPU deployments. AMD has secured a “guaranteed” revenue stream estimated at $20 billion to $25 billion annually starting in late 2026. That’s not a vendor relationship. That’s a structural realignment of the AI chip supply chain. Meta’s commitment to porting its primary AI workloads to AMD’s ROCm software stack provides crucial industry validation that AMD’s software ecosystem is finally ready for prime time.
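The $20–$25 billion annual figure can be reproduced with back-of-envelope math from the deal coverage (contract values and the five-year horizon are analyst estimates, not disclosed terms):

```python
# Back-of-envelope on the committed GPU pipeline. Rough figures from
# deal coverage; contract values are analyst estimates, not disclosed terms.
meta_value_usd = 60e9   # ~$60B estimated Meta contract value
meta_gw = 6             # up to 6 GW committed by Meta
openai_gw = 6           # OpenAI partnership, ~6 GW

value_per_gw = meta_value_usd / meta_gw        # ~$10B per gigawatt
total_gw = meta_gw + openai_gw                 # 12 GW committed
implied_total = total_gw * value_per_gw        # ~$120B total
implied_annual = implied_total / 5             # over ~5 years -> ~$24B/yr

print(f"~${value_per_gw/1e9:.0f}B per GW, ~${implied_annual/1e9:.0f}B per year")
```

At roughly $10B per gigawatt, 12 GW over five years lands at about $24B annually — squarely inside the $20–$25B range quoted above.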
One nuance bears flagging. Analyst Matt Britzman of Hargreaves Lansdown observed that AMD’s willingness to issue a 10% equity stake suggests the company faces headwinds generating organic customer demand at this scale. That’s the honest bear-case read on the warrant structure. Su’s counter: she said warrants are “a very special instrument” AMD uses only for “transformational partnerships,” noting that most customers do not receive them, and that the warrants were designed to be “very, very performance-based,” aligning incentives so that both companies benefit if demand scales.
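For scale, the "10% equity stake" characterization checks out against a rough share count (the ~1.62 billion figure is an approximation, not taken from the filing):

```python
# Rough check of the "10% equity stake" framing of the Meta warrant.
# Share count is an approximation, not sourced from AMD's filing.
warrant_shares = 160e6        # performance-based warrant, up to 160M shares
shares_outstanding = 1.62e9   # approximate AMD share count, early 2026

stake = warrant_shares / shares_outstanding
print(f"Warrant as a share of the company: {stake:.1%}")
```

At that share count the warrant works out to just under 10% of the company, fully vested — which is why the dilution question dominates the bear-case read.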
Innovations & Product Roadmap
AMD’s hardware cadence has become one of the most closely watched in the industry. The MI-series progression — from MI300X to MI455X in under three years — is the clearest evidence AMD is no longer just reacting to Nvidia. It’s planning several moves ahead.
- MI300X (2023–2024): The inflection point. 192GB HBM3 memory — 2.4x Nvidia H100’s 80GB. First serious Nvidia H100 alternative deployed at scale by Microsoft Azure, Meta, and Oracle Cloud.
- MI350 Series (2025 — CDNA 4): Built on TSMC 3nm. Up to 288GB HBM3e. Delivered up to 4x generational AI compute improvement and 35x inferencing performance vs. MI300 series. Oracle Cloud Infrastructure among the first to deploy at scale.
- MI400 Full Lineup — CDNA 5 (CES 2026): The MI455X sits at the top of the lineup as AMD’s flagship AI training and inference accelerator. It packs 320 billion transistors across 12 TSMC N2 compute chiplets and 3 advanced 3nm chiplets, delivering up to 40 PFLOPS of FP4 performance and 20 PFLOPS at FP8 precision. The MI400 series features 432GB of HBM4 memory per GPU, a 50% increase over the MI350’s 288GB of HBM3e. The MI440X powers an 8-GPU enterprise AI platform for on-premise deployments. The MI430X targets sovereign AI, HPC, and hybrid computing with full FP32/FP64 support.
- Helios Rack-Scale Platform (H2 2026 — Q3 target): At the heart of AMD’s announcement is Helios, the company’s first rack-scale system solution for AI and HPC workloads. Built on AMD’s upcoming Zen 6-based EPYC Venice processors, Helios integrates 72 Instinct MI455X accelerators delivering a combined 31TB of HBM4 memory and an aggregate bandwidth of 1.4PB/s. AMD says the platform is capable of up to 2.9 exaFLOPS of FP4 compute for AI inference and 1.4 exaFLOPS at FP8 for AI training. This open-standards approach contrasts with Nvidia’s vertically integrated NVLink and NVSwitch fabric, giving OEMs and cloud providers more flexibility in system design.
- EPYC Venice (Zen 6, 2026): Venice has 8 CCDs each with 32 cores for a total of up to 256 cores per Venice package. AMD is optimistic Venice will extend its performance lead over Intel’s Xeon lineup.
- MI500 Series (2027 — CDNA 6): Su teased additional details about AMD’s forthcoming MI500 series GPUs: set to launch in 2027 on the CDNA 6 architecture, the chips would deliver up to a 1,000x increase in AI performance compared to the Instinct MI300X. Both the MI400 and MI500 series will be manufactured using TSMC’s 2nm process technology.
- ROCm Software Stack: AMD offers day-zero support for the most widely used frameworks, tools, and model hubs. ROCm is natively supported by top open-source projects like PyTorch, vLLM, SGLang, and Hugging Face — collectively downloaded more than 100 million times a month — and runs out of the box on Instinct. The software gap vs. Nvidia’s CUDA remains real, but it is closing faster than most expected.
- Ryzen AI 400 Series: Expanded AI PC portfolio targeting HP, Lenovo, Dell, and Asus. The Ryzen AI Halo developer platform opens new possibilities for AI-driven computing at the endpoint.
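As a consistency check, the Helios rack-level figures line up with 72 times the per-GPU MI455X specs quoted in this piece (an illustrative tally, not AMD's official math):

```python
# Consistency check: Helios rack figures vs. 72x the per-GPU MI455X specs
# quoted elsewhere in this piece. Illustrative, not AMD's official math.
GPUS = 72
hbm_per_gpu_gb = 432          # HBM4 per MI455X
fp4_per_gpu_pflops = 40       # peak FP4 per GPU
fp8_per_gpu_pflops = 20       # peak FP8 per GPU
bw_per_gpu_tbs = 19.6         # memory bandwidth per GPU

rack_hbm_tb = GPUS * hbm_per_gpu_gb / 1000       # ~31.1 TB (claim: 31TB)
rack_fp4_ef = GPUS * fp4_per_gpu_pflops / 1000   # ~2.88 EF (claim: 2.9)
rack_fp8_ef = GPUS * fp8_per_gpu_pflops / 1000   # ~1.44 EF (claim: 1.4)
rack_bw_pbs = GPUS * bw_per_gpu_tbs / 1000       # ~1.41 PB/s (claim: 1.4)

print(rack_hbm_tb, rack_fp4_ef, rack_fp8_ef, rack_bw_pbs)
```

Every headline rack number falls out of simple multiplication by 72, which suggests the claims are peak aggregates rather than measured system throughput.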
EPYC Server CPU — The Quiet Dominance Story
Everyone focuses on the GPU story. The CPU story is almost as good and far less discussed. AMD went from supplying less than 1% of server CPUs in 2017 to commanding nearly 40% of the market by Q1 2025. For Q1 FY2026, projections call for total revenue near $9.8–9.9 billion, up roughly 32–33% year over year, fueled largely by Data Center revenue anticipated to reach $5.5 billion.
Fortune 500 enterprise customers are now adopting EPYC faster than ever. Meta is now the lead customer for Venice, AMD’s 6th Gen EPYC. Microsoft Azure, Google Cloud, and Oracle have all disclosed EPYC-based server expansions in 2025–2026. Once holding near-zero share eight years ago, AMD now challenges Intel for server CPU leadership — and parity by 2026 is a credible projection.
Challengers & Competitive Landscape
Nvidia is the unavoidable conversation. Nvidia is now the world’s largest publicly traded company, with a $4.66 trillion valuation, and controls roughly 90% of the AI chip market. Its moat isn’t primarily hardware — CUDA has been the dominant framework for GPU computing for nearly 20 years, with 4M+ developers and every major ML framework optimized for CUDA first. Switching costs are measured in years, not dollars. In raw specs, the MI455X offers 2.25x the memory capacity (432GB vs 192GB) and 2.4x the memory bandwidth (19.6 TB/s vs 8 TB/s) compared to the Nvidia B200. But memory advantage doesn’t automatically translate to system-level wins. Nvidia’s Vera Rubin is slated for H2 2026 — the first direct rack-scale collision.
Intel is rebuilding. Its Jaguar Shores and Crescent Island GPU lineup represents a renewed attempt at AI accelerator relevance. But Intel remains at least two full GPU generations behind AMD in hyperscaler traction, and its server CPU share has been declining for years.
Custom Hyperscaler Silicon is the quiet risk that rarely gets enough attention. Google’s TPUs, Amazon’s Trainium2, and Microsoft’s Maia are all designed to reduce merchant GPU dependency. US hyperscalers — Meta, Alphabet, Microsoft, and Amazon — collectively target at least $630 billion in capital expenditures in 2026, focused on data centers and AI chips. A meaningful portion of that goes to internal silicon. If this trend accelerates, both AMD and Nvidia see cloud revenue growth compress. This is a structural long-term risk AMD’s bulls largely dismiss and bears correctly flag.
Slight tangent, but worth noting: the Meta deal, combined with AMD’s 2025 contract with OpenAI, propels AMD’s AI accelerator market share from a respectable 9% last year toward a projected 18% by the end of 2026. If that projection holds, it would represent the most significant shift in AI chip market share since the GPU era began. Market concentration is compressing — and that dynamic doesn’t stop at 18%.
Macro & Industry Context
Hyperscalers are collectively spending over $380–$630 billion on AI infrastructure in 2025–2026. AMD sits squarely in the middle of that spending surge. The DeepSeek efficiency argument — the idea that smarter models need less compute — has been tested by the market and largely rejected. Su’s counter-argument has proven correct: as AI models become more efficient, they get deployed more widely across more modalities, increasing aggregate compute demand rather than reducing it.
The China export control situation is a genuine overhang. AMD took a ~$1.5B revenue hit in FY2025 from restrictions on Instinct MI308X shipments. Additional restrictions in 2026 would create new noise. Conversely, any relaxation — however unlikely in the near term — would be a significant positive catalyst. It’s a binary risk that doesn’t resolve neatly and will continue showing up in quarterly guides.
One macro positive that’s been underappreciated: the Trump administration’s January 2026 executive order imposed a 25% tariff on select AI chips, including AMD’s MI325X, but explicitly carved out imports destined for US data centers, domestic R&D, and public-sector applications, limiting the direct commercial impact on AMD’s core hyperscaler business.
Forward Scenarios
🟢 Bull Case — PT: $400–$500
Helios ships on schedule in Q3 2026. The MI455X delivers on its performance claims. Meta’s 6GW deployment ramps ahead of milestones — triggering warrant conversion tranches and locking in multi-year revenue visibility. OpenAI’s 1GW MI450 data center goes live in H2 2026. ROCm enterprise adoption accelerates, with Fortune 500 AI deployments built natively on AMD. EPYC Venice extends server CPU share toward 45–50%. AMD reaches double-digit AI GPU market share by end of 2027. Revenue grows at 35%+ CAGR. Non-GAAP EPS approaches $15–$20 by 2028. Multiple expands on earnings upgrades.
🟡 Base Case — PT: $270–$345
AMD executes on the MI400 roadmap with minor delays. Helios broad availability slips to early 2027, but anchor customers get priority H2 2026 allocation. Data center revenue grows 40–55% annually. China export restrictions remain a drag but don’t worsen materially. EPYC holds server share near 40%. AMD reaches 8–12% of the AI accelerator market. Revenue grows to $45–$55B by FY2027. Stock trades in the $270–$345 range — consistent with current analyst consensus.
🔴 Bear Case — PT: $150–$190
Helios execution slips materially. If ecosystem partners fail to deliver UALink switching silicon in the second half of 2026, UALink-based systems will use UALink-over-Ethernet or stick to traditional configurations rather than large-scale fabrics — limiting rack-scale performance claims. Nvidia’s Vera Rubin outpaces AMD’s memory advantages in full-system benchmarks. Hyperscalers accelerate custom silicon faster than expected. China export controls tighten further. The AI infrastructure spending cycle peaks earlier than consensus models. Gaming console cycle extends its softness. Stock reverts to $150–$190 as multiple compression hits an expensive valuation sitting on elevated expectations.
Technical Overlay
- 52-week range: $91.87 – $352+ (April 24, 2026 intraday ATH)
- Key resistance: $352 (intraday ATH), $375 (D.A. Davidson street-high target)
- Support zones: $310–$320 (prior ATH, now potential base), $268–$270, $240, $193 (early April 2026 low)
- Beta: 1.33 — amplifies broader market moves sharply in both directions
- Market cap: ~$494–$498B (late April 2026)
- Trailing P/E: ~115x | Forward P/E: ~32x 2026 consensus
- Volume pattern: Stair-step accumulation with shallow pullbacks — institutional signature
The technical picture is extended on every near-term timeframe. A near-quadrupling off the 52-week low in under 12 months raises legitimate questions about how much of the next three years is already priced in. Bulls point to a sub-1 PEG ratio on forward growth estimates. Bears point to ~115x trailing earnings. Both are correct — which is exactly why the May 5 earnings call is the next decisive moment.
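The sub-1 PEG claim follows directly from numbers already in this section. Note the caveat: PEG conventionally uses earnings growth, and here AMD's long-term revenue CAGR target is used as a stand-in:

```python
# The bulls' sub-1 PEG claim, reproduced from figures in this section.
# Caveat: PEG conventionally uses EPS growth; AMD's >35% revenue CAGR
# target is used here as a stand-in, which flatters the ratio slightly.
forward_pe = 32      # ~32x forward 2026 consensus
growth_pct = 35      # AMD's >35% long-term revenue CAGR target

peg = forward_pe / growth_pct
print(f"PEG ~ {peg:.2f}")   # below 1 at these inputs
```

At these inputs the PEG lands around 0.91 — below 1, but only as long as the 35%+ growth assumption holds. Drop growth to 25% and the same multiple yields a PEG near 1.3.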
What to Watch
- May 5, 2026 — Q1 FY2026 Earnings: Watch for Data Center revenue growth rate vs. the $5.38B Q4 record. Street expects ~$5.5B. Any acceleration above 45% YoY would be a significant re-rating catalyst. Also watch gross margin guidance — the 55% non-GAAP target is the number that matters most operationally.
- Helios / MI455X Delivery Timeline: AMD denies delay reports and says Helios is on target for H2 2026. Any slippage confirmed on the May 5 call is a direct risk to the Meta deal milestones and the bull case timeline.
- UALink Switch Ecosystem: Practical UALink adoption will depend on ecosystem partners such as Astera Labs, Auradine, Enfabrica, and Xconn. Watch for partner announcements confirming switch silicon delivery in H2 2026.
- China Export Policy: Additional restrictions or — conversely — any relaxation would have outsized impact on AMD’s near-term revenue and margin profile.
- ROCm Enterprise Adoption: Developer growth on AMD’s open software stack is the long-term moat story. Watch for enterprise customer wins built natively on AMD, not migrated from Nvidia.
- EPYC Venice First Deployments: AMD is optimistic Venice will extend its performance lead over Intel’s Xeon CPUs and continue taking server market share. Hyperscaler adoption disclosures will be closely watched through H2 2026.
- Analyst Estimate Revisions: Following D.A. Davidson’s $375 upgrade on April 24, watch for a wave of upward estimate revisions. Consensus EPS trajectory in the weeks following May 5 will signal the next leg direction.
Bottom Line
The debate on AMD has shifted. It used to be: can AMD compete with Nvidia? That question is largely settled — it can, in specific workloads, at lower cost-per-token, with a hardware roadmap that now matches Nvidia’s annual cadence for the first time in the company’s history. The Meta and OpenAI deals de-risk years of revenue in a way that no supply agreement has done for any AMD customer before. The AMD-Meta 6GW agreement is more than a business deal — it is a structural realignment of the technology sector. It validates AMD as a peer-level competitor to Nvidia and demonstrates that the world’s largest AI players are willing to spend tens of billions of dollars to ensure a competitive hardware market.
The new debate is harder. At $350+ per share and a ~115x trailing multiple, how much of the next three years is already in the price? Trading at roughly 32x forward 2026 earnings, AMD sits at a premium to the broader market but at a discount to Nvidia — reflecting its challenger positioning in AI. That discount is exactly the bull case: AMD still has room to re-rate if the MI455X delivers, the Meta deal ramps, and ROCm closes the CUDA gap in enterprise workloads.
What actually determines the next major move isn’t the AI narrative — that’s consensus. It’s whether AMD can sustain 35%+ top-line growth through 2027 while managing export headwinds, custom silicon erosion, and Nvidia’s inevitable counter at the rack level. May 5 starts to answer that question. The rest plays out over the next 18 months — and that’s where the real money is either made or given back.
For informational purposes only.
