aiDAPTIV™ Solution Brief

Rising GPU costs, limited VRAM, and data privacy concerns make on-premises AI difficult to scale. Discover how Phison aiDAPTIV+ enables cost-effective, private LLM training and inference by extending GPU memory with high-performance SSDs.

SEAMLESS INTEGRATION

  • Optimized middleware that extends GPU memory capacity
  • 2x 2TB aiDAPTIVCache to support 70B models
  • Low latency
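To see why a cache tier of this size matters, a rough memory estimate for a 70B-parameter model is sketched below (rule-of-thumb figures using common mixed-precision training assumptions, not Phison's sizing methodology):

```python
# Back-of-envelope memory footprint for a 70B-parameter model.
params = 70e9
weights_fp16 = params * 2    # 2 bytes/param for fp16 inference weights
train_state = params * 16    # ~16 bytes/param for mixed-precision Adam training
                             # (fp32 master weights + two optimizer moments + fp16 copy)
print(f"inference weights: {weights_fp16 / 1e12:.2f} TB")
print(f"training state:    {train_state / 1e12:.2f} TB")
```

At roughly 1.1 TB of training state, a 70B model far exceeds any single GPU's VRAM, which is the gap the 4 TB aiDAPTIVCache tier is positioned to fill.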

HIGH ENDURANCE

  • Industry-leading endurance: up to 100 drive writes per day (DWPD) over five years
  • SLC NAND with industry-leading NAND error-correction algorithms

aiDAPTIV+ BENEFITS

  • Plug-and-play, seamless integration
  • No changes required to existing AI applications
  • Reuse existing hardware or add nodes

aiDAPTIV+ MIDDLEWARE

  • Automatic model slicing and GPU resource scheduling
  • Hold pending slices on aiDAPTIVCache
  • Swap pending slices with finished slices on the GPU
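The slice-swap flow above can be sketched as a simple scheduler (a conceptual model only; the slice names and the `gpu_capacity` parameter are illustrative, not the actual middleware API):

```python
from collections import deque

def run_sliced_model(slices, gpu_capacity):
    """Process model slices under a fixed GPU budget, staging the
    remainder on a cache tier (stand-in for aiDAPTIVCache)."""
    pending = deque(slices)  # slices waiting on the cache tier
    # Preload as many slices as the GPU can hold.
    gpu = [pending.popleft() for _ in range(min(gpu_capacity, len(pending)))]
    order = []               # record of processing order
    while gpu:
        finished = gpu.pop(0)        # slice whose compute just completed
        order.append(finished)
        if pending:                  # swap: evict finished, load next pending
            gpu.append(pending.popleft())
    return order

# A 6-slice model on a GPU that only fits 2 slices at a time:
print(run_sliced_model([f"slice{i}" for i in range(6)], gpu_capacity=2))
```

The point of the sketch is that the GPU never holds more than `gpu_capacity` slices at once, yet every slice is eventually processed in order.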

FOR SYSTEM INTEGRATORS

  • Access to ai100E SSD
  • Middleware library license
  • Full Phison support for system bring-up