
Verified 2026-05-01

AWS Trainium2

Amazon's second-generation training/inference silicon: 96 GB of HBM3e per chip; UltraServers scale to 64 chips over NeuronLink.

Trainium2 is AWS's second-generation in-house AI accelerator. Each chip pairs eight third-gen NeuronCores with 96 GB of HBM3e at 2.9 TB/s. Trn2 instances host 16 chips for 20.8 FP8 PFLOPS; Trn2 UltraServers connect 64 chips across four instances for 83.2 FP8 PFLOPS.
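As a quick sanity check on the figures above, the per-chip FP8 throughput can be derived from the instance totals (a minimal sketch; the per-chip number is inferred here, not an AWS-published spec):

```python
# Derive per-chip FP8 throughput from the Trn2 instance figures in the text.
TRN2_CHIPS = 16          # chips per Trn2 instance
TRN2_PFLOPS = 20.8       # FP8 PFLOPS per Trn2 instance
ULTRA_CHIPS = 64         # chips per UltraServer (four instances)

per_chip = TRN2_PFLOPS / TRN2_CHIPS      # inferred: 1.3 FP8 PFLOPS per chip
ultra_pflops = per_chip * ULTRA_CHIPS    # matches the quoted 83.2 PFLOPS

print(f"{per_chip:.1f} PFLOPS/chip, {ultra_pflops:.1f} PFLOPS/UltraServer")
```

The UltraServer figure is exactly linear in chip count (64 × 1.3 = 83.2), i.e. the quoted numbers are peak aggregates, not interconnect-adjusted throughput.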

Specs

compute: AWS Trainium2

AWS in-house AI silicon. Available through EC2 Trn2 instances and UltraServers; no retail SKU.