Phison is expanding its aiDAPTIV+ software to the NVIDIA Jetson platform, enabling enhanced generative AI inference at the edge for robotics development.

Phison has created a specialized SSD for the NVIDIA Jetson platform and has integrated its aiDAPTIV+ software to enable fine-tune training, which lets users take an AI model and adapt it with their own data. The idea behind this integration is to reduce the performance cost of running the AI by offloading data that would otherwise occupy GPU VRAM onto the SSD.
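Phison has not published the internals of aiDAPTIV+, but the general technique of spilling model weights that don't fit in GPU VRAM onto fast NVMe storage can be sketched with off-the-shelf tooling. The snippet below is a minimal, hypothetical illustration using Hugging Face Transformers with Accelerate's disk offload; the model name and offload folder are placeholders, and this is not Phison's actual software.

```python
# Minimal sketch of VRAM-to-SSD offload for LLM inference.
# This is NOT aiDAPTIV+; it only illustrates the general idea of
# paging layers that don't fit in GPU memory out to fast NVMe storage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"   # placeholder model
OFFLOAD_DIR = "/mnt/nvme/offload"       # hypothetical folder on the SSD

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",           # let Accelerate split layers across GPU/CPU/disk
    offload_folder=OFFLOAD_DIR,  # layers that don't fit in VRAM are paged to the SSD
    torch_dtype=torch.float16,
)

prompt = "Describe the obstacle directly ahead."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```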
Phison explained to me that with these upgrades to the Jetson platform, it can deliver up to a 14x improvement in Time to First Token (TTFT), the time it takes for the AI to begin generating a response to a question. Moreover, these changes increased the supported token length by 8x, allowing the AI to generate longer responses. Ultimately, Phison's improvements unlock the Jetson platform for faster, longer responses, enhancing the quality of AI chat at the edge.
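For context, TTFT is simply the wall-clock delay between submitting a prompt and receiving the first generated token, while response length is counted in tokens. Below is a hedged sketch of how one might measure both with a streaming generation call; it assumes the `model` and `tokenizer` objects from the previous example, and the prompt is illustrative only.

```python
# Rough TTFT and output-length measurement for a streaming generation call.
# Assumes `model` and `tokenizer` from the sketch above.
import time
from threading import Thread
from transformers import TextIteratorStreamer

prompt = "Summarize the last sensor reading."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)

start = time.perf_counter()
thread = Thread(target=model.generate,
                kwargs=dict(**inputs, streamer=streamer, max_new_tokens=256))
thread.start()

ttft = None
generated_tokens = 0
for chunk in streamer:                      # yields text as tokens are produced
    if ttft is None:
        ttft = time.perf_counter() - start  # time to first token
    generated_tokens += len(tokenizer.encode(chunk, add_special_tokens=False))
thread.join()

print(f"TTFT: {ttft:.2f} s, generated ~{generated_tokens} tokens")
```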

"aiDAPTIV+ now supports edge computing and robotics use cases with Phison's validation of NVIDIA Jetson-based devices. aiDAPTIV+ strengthens inference and LoRA-based LLM training capabilities on these devices using the aiDAPTIVCache SSD, available in April 2025. This unlocks new processing capabilities in a variety of use cases including autonomous vehicles, healthcare diagnostics, industrial automation, retail analytics, environmental monitoring, telecommunications and smart surveillance and agriculture," states the press release
