VanEck research director explains: Why the valuation is expensive and I'm still bullish on Nvidia challenger Cerebras' IPO

ChainNews ABMedia

US chip startup Cerebras Systems is expected to move forward with its IPO under the ticker CBRS. VanEck Digital Assets Research head Matthew Sigel recently posted that Cerebras could be the most closely watched semiconductor IPO since Arm's 2023 listing.

According to Sigel, Cerebras plans to sell 28 million shares at $115 to $125 per share, raising about $3.5 billion; VanEck's Onchain Economy ETF (NODE) has already applied to the underwriters for a full allocation.

It is worth noting that VanEck is not an underwriter of Cerebras' IPO. Sigel's point is that VanEck, as an investor, is seeking an IPO allocation rather than serving in an underwriting role. He also disclosed at the end of the article that VanEck has applied for an allocation of shares in the Cerebras IPO, and that at least one VanEck investment professional previously held Cerebras shares in the private market. VanEck therefore has a direct financial interest in the IPO's pricing and outcome.

Why the bullish view on Cerebras? OpenAI's fast-inference business has shifted away from Nvidia

Matthew Sigel believes Cerebras’ business model is fundamentally different from Arm’s. Arm’s value comes from architecture licensing and royalties, which can compound as its architecture penetrates more devices. Cerebras, on the other hand, is a highly focused product company whose core product is wafer-scale chips, specifically aimed at solving AI inference speed problems.

Sigel points out that the AI infrastructure market is splitting into two tracks: one focused on GPU clusters, pursuing batch processing and throughput-oriented training and inference workloads; the other is an inference market that is extremely sensitive to latency, such as AI agents, code assistants, real-time applications, and highly interactive products. In the latter, inference speed is not only a cost issue—it directly affects user experience, product pricing power, and retention rates.

He quotes Cerebras CEO Andrew Feldman as saying: “Clearly, NVIDIA doesn’t want to lose OpenAI’s fast inference business, and we’ve taken that piece of business away from them.”

Sigel believes this quote highlights Cerebras’ strategic significance: if OpenAI’s low-latency inference needs start being handled by Cerebras, CBRS would be more than just another AI infrastructure company—it could become a bellwether for a “new hardware category.”

Cerebras puts the model into SRAM on the chip, bypassing the GPU memory bandwidth bottleneck

Sigel attributes Cerebras’ technical edge to “wafer-scale chips” and “on-chip SRAM.” Unlike traditional GPU cluster systems that move data between multiple chips and external memory, Cerebras’ design allows the model to remain largely inside the on-chip SRAM, reducing memory bandwidth bottlenecks and latency fluctuations.

According to Sigel, this can make Cerebras’ inference speed up to 15 times faster than GPU-based approaches, while also delivering more stable and predictable latency performance. For AI agents, real-time coding assistants, voice interactions, and low-latency search and inference services, this kind of “deterministic latency” will become increasingly valuable.

He also notes that the Cerebras management team has raised prices due to excess demand, which indicates the company’s main challenge is not insufficient demand, but constrained supply.

OpenAI is the biggest anchor: a $20 billion contract supports the revenue floor

The most important investment thesis in Sigel’s article is the long-term agreement between Cerebras and OpenAI.

According to his account, the deal between Cerebras and OpenAI is a $20 billion take-or-pay contract covering 750MW of contract capacity through 2028. OpenAI also has an option to expand capacity to about 2GW, with an additional contract amount of about $34 billion, priced at about $27 million per MW.
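As a quick sanity check (the rounding here is ours), the per-MW pricing roughly reproduces both contract figures Sigel cites:

```python
# Back-of-the-envelope check of the contract figures cited above (rounded).
price_per_mw = 27e6                           # ~$27 million per MW of capacity

base_capacity_mw = 750                        # take-or-pay capacity through 2028
base_value = base_capacity_mw * price_per_mw
print(f"Base contract: ~${base_value / 1e9:.1f}B")        # ~$20.3B, in line with the ~$20B figure

option_capacity_mw = 2000 - base_capacity_mw  # expansion option takes total capacity to ~2GW
option_value = option_capacity_mw * price_per_mw
print(f"Expansion option: ~${option_value / 1e9:.1f}B")   # ~$33.8B, in line with the ~$34B figure
```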

Sigel says the contract is supported by a $1 billion working capital loan and a warrant structure, forming the base of Cerebras' future revenue. Cerebras' current RPO (remaining performance obligations) is about $24.6 billion. The company expects to recognize about 15% of that total during 2026 to 2027, roughly $3.7 billion, and about 43% during 2028 to 2029, roughly $10.6 billion.
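Those percentages and dollar figures are mutually consistent; a minimal check (our rounding):

```python
# Rough reconciliation of the RPO recognition schedule described above.
rpo = 24.6e9                          # remaining performance obligations, ~$24.6 billion

recognized_2026_2027 = rpo * 0.15     # ~15% expected to be recognized in 2026-2027
recognized_2028_2029 = rpo * 0.43     # ~43% expected to be recognized in 2028-2029

print(f"2026-2027: ~${recognized_2026_2027 / 1e9:.1f}B")  # ~$3.7B
print(f"2028-2029: ~${recognized_2028_2029 / 1e9:.1f}B")  # ~$10.6B
```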

Codex Spark becomes a proxy indicator for tracking OpenAI usage

Sigel also notes that outside investors cannot directly observe Cerebras’ utilization of production capacity, so he views OpenAI’s Codex Spark as the closest available public proxy indicator.

According to him, Codex Spark is a low-latency version of OpenAI's Codex coding agent, running on Cerebras infrastructure. The product had about 600,000 weekly active users in January 2026 and has since passed 4 million, roughly 7x growth in four months, while still being limited to ChatGPT Pro users.

Sigel believes that if Spark expands to all paid ChatGPT users, the potential user base could expand dramatically in a short time. This would not only increase utilization of the 750MW contract capacity, but also strengthen the likelihood that OpenAI exercises its additional 1.25GW expansion option.

AWS, Meta, Oracle, and IBM are also on Cerebras’ customer roster

In addition to OpenAI, Sigel also names AWS, Meta, Oracle, and IBM as important partners for Cerebras. Among them, AWS is especially critical, because Cerebras can enter the enterprise customer market through Amazon Bedrock integrations without relying entirely on building its own direct sales team.

He also mentions that AWS has made a $270 million equity investment in Cerebras, while Meta may use Cerebras for Llama model inference. This suggests that if Cerebras can prove that the OpenAI scenario is scalable, it still has the opportunity to expand from a single major customer base to broader enterprise and cloud platform demand in the future.

ZK proofs: Cerebras could become hardware for blockchain-verifiable compute

It is also worth noting that as VanEck’s research head, Sigel interprets Cerebras from an on-chain economics perspective.

He believes Cerebras' chip architecture, with its abundant on-chip SRAM, extremely high memory bandwidth, and highly parallel compute, may be suitable not only for AI inference but also as a hardware accelerator for generating zero-knowledge proofs (ZK proofs).

ZK proofs are an important technology for blockchain scaling and verifiable compute, and proof generation is both compute-intensive and latency-sensitive. Sigel believes that as verifiable compute becomes real-time, Cerebras could naturally move into these workloads, which is one of the reasons VanEck's Onchain Economy ETF (NODE) wants to allocate to CBRS.

Sigel also concedes that Cerebras’ current gross margin is below 40%, which appears lower than the typical 55% to 70% range seen among semiconductor and cloud peers. However, he thinks this comparison is not fully fair.

The reason is that Cerebras is not only selling chips—it may also provide cloud deployments, install systems on customers’ sites, and even help build dedicated data center capacity according to customer needs. This allows Cerebras to capture economic value across the chip and hosting layers, but it also means the reported gross margin will be lower than that of pure-play chip companies.

Sigel's base assumption is that as scale increases, the cost of wafer-scale chips improves, and the share of software and cloud services rises, Cerebras' gross margin could reach about 50% in 2030. In an optimistic scenario, where the current $24.6 billion backlog converts at roughly 60% gross margin, the company-wide figure could approach 60% as well.

Valuation is expensive, but backed by three bullish frameworks: revenue multiple, inference TAM, and value per MW

Sigel does not avoid the issue of the Cerebras IPO's high valuation. At $125 per share, CBRS trades at about 52x 2025 revenue, so on a trailing basis the valuation is not cheap.

But he argues that for a company whose future revenue could grow significantly from the contract backlog before 2027, the trailing multiple has limited meaning. Using 2026 estimated revenue, CBRS is about 16.7x forward revenue, close to the median among semiconductor peers.
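A minimal sketch of the growth implied by those two multiples, using only the figures cited (the share count is not disclosed here, so only the ratio matters):

```python
# If $125/share equals ~52x 2025 revenue but only ~16.7x estimated 2026 revenue,
# the ratio of the two multiples is the implied 2026/2025 revenue growth.
trailing_multiple = 52.0   # ~52x 2025 revenue
forward_multiple = 16.7    # ~16.7x estimated 2026 revenue

implied_growth = trailing_multiple / forward_multiple
print(f"Implied 2026 revenue ~ {implied_growth:.1f}x 2025 revenue")  # ~3.1x
```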

Sigel presents three valuation frameworks:

First, valuation based on a 2028 revenue multiple. The base case assumes Cerebras revenue in 2028 reaches $5.5 billion, applying a 10x revenue multiple, implying about $258 per share—up about 107% from the IPO midpoint price. The optimistic case assumes revenue of $6.5 billion and a 12.5x revenue multiple, implying about $381 per share.

Second, valuation based on the inference market TAM. Sigel cites Bloomberg Intelligence data saying the inference market size is about $201 billion in 2028 and about $292 billion in 2029. If Cerebras revenue in 2028 reaches $5.5 billion, that would be about 2.7% of the inference market; if it reaches $6.5 billion, about 3.2%. He believes this is not an overly aggressive market-share assumption.

Third, valuation based on value per MW of capacity. He uses CoreWeave's market valuation of about $20 million per MW as a reference: applied to Cerebras' 750MW of contracted OpenAI capacity, the OpenAI contract alone could support an enterprise value of about $15 billion. If OpenAI exercises the option to expand to 2GW, that corresponds to an enterprise value of about $40 billion.

Sigel emphasizes that CoreWeave is essentially a GPU compute infrastructure leasing company, while Cerebras owns proprietary chip IP, software stack, and data center capacity economics. In theory, it should not be valued below the per-MW valuation of a pure infrastructure company.
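The headline numbers from all three frameworks can be re-derived from the figures cited above; a minimal sketch (rounding is ours, and the per-share upside is quoted against the $125 midpoint):

```python
# Rough reproduction of the three frameworks' headline numbers cited above.

# 1) 2028 revenue multiple
base_ev = 5.5e9 * 10.0     # base case: $5.5B revenue x 10x   -> ~$55B
bull_ev = 6.5e9 * 12.5     # bull case: $6.5B revenue x 12.5x -> ~$81B
print(f"Framework 1: base ~${base_ev/1e9:.0f}B, bull ~${bull_ev/1e9:.0f}B; "
      f"$258 vs the $125 midpoint is ~{258/125 - 1:.0%} upside")   # ~106-107%

# 2) Share of Bloomberg Intelligence's ~$201B 2028 inference TAM
print(f"Framework 2: implied 2028 share {5.5/201:.1%} (base) to {6.5/201:.1%} (bull)")

# 3) Value per MW, using CoreWeave's ~$20M/MW as the yardstick
print(f"Framework 3: 750MW -> ~${750*20e6/1e9:.0f}B EV; 2GW -> ~${2000*20e6/1e9:.0f}B EV")
```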

Key risks: customer concentration, capacity ramp-up, TSMC supply chain, and the technology moat

Sigel also lists the main risks for Cerebras. First is customer concentration: in 2025, two Abu Dhabi-related customers, G42 and MBZUAI, together contributed about 86% of revenue, which on its face looks highly concentrated.

However, he believes the OpenAI contract will structurally dilute that UAE customer concentration during 2026 to 2028, and AWS and other partners will add further diversification.

More important risks include whether Cerebras can complete capacity ramp-up on time, whether TSMC’s wafer-scale manufacturing supply chain is stable, and whether next-generation GPUs or other AI architectures could narrow the latency gap and weaken Cerebras’ technology moat.

Sigel concludes that VanEck, as the issuer of SMH, one of the largest US semiconductor ETFs, conducts long-term research on the semiconductor industry. VanEck also previously built capacity deployment relationships with Cerebras through WhiteFiber (WYFI), giving it earlier exposure to Cerebras' real operational expansion.

He says VanEck's Onchain Economy ETF (NODE) has applied to the underwriters for a full allocation in the Cerebras IPO. The rationale is not only a bullish view on the AI inference market, but also the possibility that Cerebras becomes an important hardware layer for blockchain ZK proofs, real-time verifiable compute, and on-chain infrastructure.

This article, "VanEck research director explains: Why the valuation is expensive and I'm still bullish on Nvidia challenger Cerebras' IPO," appeared first on ChainNews ABMedia.
