Awards: Most Innovative AI Application · Silver Award, Best Learning and Talent Technology
aiDAPTIV™ technology enables LLM training behind your firewall, giving you full control over your private data and peace of mind through data sovereignty compliance.
Easily deploy aiDAPTIV™ in your home, office, classroom, or data center with a small footprint while using commonplace power and cooling.
Choose between command line access or an intuitive GUI with an all-in-one toolset for model ingest, fine-tuning, validation, and inference.
aiDAPTIV™ offloads data from expensive HBM and GDDR memory to cost-effective flash, removing the need for large numbers of high-cost, power-hungry GPU cards. It also reduces the DRAM required to process Mixture of Experts (MoE) models and keeps large datasets local, eliminating the cost of transferring data to be processed elsewhere.
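The offload pattern that aiDAPTIV™ automates can be illustrated in plain PyTorch. The sketch below is not Phison's middleware; it is a minimal, hypothetical example of tiering a layer's weights out to a flash-backed memory-mapped file and paging them back on demand. The NVMe path is an assumed placeholder.

```python
import math
import torch

# Hypothetical flash tier: a memory-mapped file on a local NVMe SSD stands
# in for the flash pool that the aiDAPTIV middleware manages transparently.
FLASH_PATH = "/mnt/nvme/weights.bin"   # assumed mount point, illustration only

def offload(tensor: torch.Tensor, path: str) -> torch.Size:
    """Spill a tensor to flash; the caller can then free the DRAM/GPU copy."""
    flat = tensor.detach().flatten().cpu()
    mapped = torch.from_file(path, shared=True, size=flat.numel(),
                             dtype=flat.dtype)
    mapped.copy_(flat)                  # write the weights out to the SSD
    return tensor.shape

def restore(path: str, shape: torch.Size,
            dtype: torch.dtype = torch.float32) -> torch.Tensor:
    """Map a tensor back in from flash just before it is needed again."""
    mapped = torch.from_file(path, shared=True, size=math.prod(shape),
                             dtype=dtype)
    return mapped.view(shape).clone()   # copy back into working memory

layer_w = torch.randn(4096, 4096)       # stand-in for one layer's weights
shape = offload(layer_w, FLASH_PATH)    # now resident on flash
del layer_w                             # DRAM freed until the layer is used
layer_w = restore(FLASH_PATH, shape)    # paged back in on demand
```

In production middleware this spill-and-restore happens per layer with prefetching, so the transfer latency hides behind compute rather than stalling the training step.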
Run LLMs on local PCs and edge devices with lower cost, less compute, and reduced power compared to cloud. aiDAPTIV™ improves response speed and enables more context for accurate, tailored answers while keeping data private and sovereign.
An AI Training PC with aiDAPTIV™ technology makes it easy for individuals and organizations to learn how to fine-tune LLMs, going beyond simple inference. It can help fill the shortage of skilled talent by letting you train LLMs locally with your own data.
Edge AI devices using aiDAPTIV™ achieve faster time to first token (TTFT) for LLM inference while enabling larger models and extended context lengths on compact hardware for longer, more accurate responses.
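TTFT is straightforward to measure yourself. The snippet below is a minimal sketch using the Hugging Face transformers library; the model name and prompt are assumed placeholders, and the timing method is generic rather than anything aiDAPTIV™-specific.

```python
import time
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

MODEL = "gpt2"  # placeholder; substitute the local model you actually run
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

inputs = tok("Summarize our Q3 maintenance log:", return_tensors="pt")
streamer = TextIteratorStreamer(tok, skip_prompt=True)

start = time.perf_counter()
# Run generation in a thread so we can observe tokens as they stream out.
Thread(target=model.generate,
       kwargs=dict(**inputs, streamer=streamer, max_new_tokens=32)).start()
first = next(iter(streamer))             # blocks until the first token arrives
ttft = time.perf_counter() - start
print(f"time to first token: {ttft * 1000:.1f} ms (first token: {first!r})")
```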
With an AI Notebook PC powered by a GPU and aiDAPTIV™ technology, you can learn how to train LLMs in your own home, office, or classroom. In addition, you can operate trained LLMs on-premises and benefit from a model augmented with your data, getting more tailored responses to inference prompts. aiDAPTIV™ also improves prompt recall time and allows for more context, producing lengthier, more precise answers.
LLM training on-premises with aiDAPTIV™ enables organizations and individuals to enhance general knowledge models with domain-specific data. This provides better usability, relevance, and accuracy for a wide range of specialized fields such as medical diagnostics, financial forecasting, legal analysis, and product development.
Use a command line interface, or leverage the intuitive all-in-one solution powered by aiDAPTIV™ technology, to perform LLM training.
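Whichever interface you choose, the underlying job is a supervised fine-tune of a base model on your own data. The sketch below is a generic, hypothetical example using the Hugging Face transformers and peft libraries, not the aiDAPTIV™ toolchain itself; the base model name and the "train.txt" corpus are placeholders.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE = "gpt2"                        # placeholder; any local causal LM
tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# Wrap the base model with low-rank adapters so only a small fraction of
# parameters are trained, keeping memory requirements modest.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# "train.txt" stands in for your domain-specific corpus.
data = load_dataset("text", data_files={"train": "train.txt"})["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```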
Supported Models
Built-in Memory Management Solution
Experience seamless PyTorch compatibility that eliminates the need to modify your AI application, and effortlessly add nodes as needed. System vendors have access to Pascari AI-Series SSDs and middleware licenses, plus full Phison support to facilitate smooth system integration.
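"No need to modify your AI application" means an ordinary PyTorch training loop runs as-is. The loop below is a minimal sketch of such unmodified code, under the stated assumption that the middleware tiers memory to flash beneath the framework; nothing in it references aiDAPTIV™ at all.

```python
import torch
from torch import nn

# A standard PyTorch training loop with no aiDAPTIV-specific calls.
# Under the assumption above, memory tiering happens transparently below it.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(),
                      nn.Linear(4096, 1024))
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):
    x = torch.randn(32, 1024)         # stand-in batch
    loss = loss_fn(model(x), x)       # toy reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```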
Seamless Integration with GPU Memory
The optimized middleware technology extends effective GPU memory by an additional 320 GB (for PCs) and up to 8 TB (for workstations and servers), using flash as a cache tier to support LLM training with low latency. The high-endurance SSDs also offer an industry-leading 100 DWPD, achieved through a specialized SSD design with an advanced NAND correction algorithm.
Our aiDAPTIV™ hardware and software technology enhances the inferencing experience by accelerating time to first token (TTFT) for faster responses. It also extends the supported token length, providing greater context for lengthier and more accurate answers.
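Why longer context demands more memory is simple arithmetic: the attention KV cache grows linearly with sequence length. Here is a back-of-the-envelope calculation; the model dimensions are assumed, Llama-2-70B-like figures used purely for illustration.

```python
# Rough KV-cache sizing for a single sequence (illustrative figures only).
layers    = 80         # transformer layers
kv_heads  = 8          # grouped-query KV heads
head_dim  = 128
bytes_per = 2          # fp16/bf16
seq_len   = 128_000    # an extended context window

# 2x for keys and values, per layer, per KV head, per position.
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per
print(f"KV cache: {kv_bytes / 2**30:.0f} GiB")   # ~39 GiB for one sequence
```

At roughly 39 GiB for one long sequence, the cache alone outgrows most single-GPU memory, which is exactly the gap a flash-backed extension targets.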
With aiDAPTIV™, the model size you can fine-tune is no longer limited by the HBM or GDDR memory capacity of your GPU card. The technology expands the memory footprint by intelligently incorporating flash memory and DRAM into a larger memory pool.
This enables larger training models, giving you the opportunity to affordably run workloads previously reserved for the largest corporations and cloud service providers.
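A rough footprint estimate shows why GPU memory capacity is the binding constraint. The per-parameter byte counts below are the commonly cited rule of thumb for full fine-tuning with the Adam optimizer in mixed precision, not aiDAPTIV™-specific figures.

```python
# Rule-of-thumb memory for full fine-tuning with Adam in mixed precision:
# fp16 weights (2 B) + fp16 grads (2 B) + fp32 master weights (4 B)
# + fp32 Adam moments (8 B) = ~16 bytes per parameter, before activations.
params = 70e9                        # assumed 70B-parameter model
bytes_per_param = 2 + 2 + 4 + 8
total = params * bytes_per_param
print(f"~{total / 1e12:.1f} TB for weights, grads, and optimizer state")
# ~1.1 TB: far beyond any single GPU's HBM, but within an 8 TB flash pool.
```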
Phison’s dedicated technical support team offers end-to-end assistance for aiDAPTIV™ technology throughout the entire product lifecycle, from initial implementation to ongoing operation and optimization. Our experts provide rapid troubleshooting, firmware adjustments, and performance tuning to ensure seamless integration and maximum product efficiency. With access to our support engineers and cutting-edge tools, our aiDAPTIV customers and partners can accelerate time-to-market while maximizing return on investment for their AI workloads.