Foxconn Joins the AI Race with FoxBrain—What You Need to Know

Foxconn, a company best known for assembling iPhones, has entered the AI arena by launching its own AI model, FoxBrain. It is built on Meta's Llama 3.1 architecture and is optimized for Traditional Chinese and Taiwanese language styles. Here's everything you need to know about Taiwan's first AI model:
FoxBrain: Technical Details and Training Process
FoxBrain was trained on 120 NVIDIA H100 GPUs interconnected via NVIDIA Quantum-2 InfiniBand networking. Training reportedly took about four weeks and was supported by NVIDIA's Taipei-1 supercomputer and technical assistance from NVIDIA.
The model is based on Meta's Llama 3.1 architecture, with 70 billion parameters and a 128k-token context window. On top of that foundation, the company applied a unique adaptive reasoning reflection technique to strengthen the model's autonomous reasoning capabilities.
FoxBrain was initially designed to support Foxconn's internal systems in data analysis, decision-making, document collaboration, mathematics, reasoning, problem-solving, and code generation. However, the company has announced plans to open-source the model and make it publicly available.
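Since no official repository or model card has been published yet, the snippet below is only a minimal sketch of how a Llama 3.1-architecture model of this size would typically be loaded with the Hugging Face transformers library once weights are released. The repo id "foxconn/FoxBrain" is a placeholder assumption, not a confirmed release name.

```python
# Hypothetical sketch: loading a Llama-3.1-style 70B model with Hugging Face
# transformers, assuming FoxBrain is eventually published under a public repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "foxconn/FoxBrain"  # placeholder repo id; not confirmed by Foxconn

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 70B parameters need roughly 140 GB in bf16
    device_map="auto",           # shard the weights across available GPUs
)

# Traditional Chinese prompt, matching the model's stated optimization target
prompt = "請用繁體中文解釋什麼是量子糾纏。"  # "Explain quantum entanglement in Traditional Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that a 70-billion-parameter model of this kind cannot run on a single consumer GPU; the device_map="auto" setting above is what lets transformers split the weights across multiple accelerators.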
FoxBrain: Performance and Competitive Positioning
According to Yung-Hui Li, director of the Artificial Intelligence Research Center at Hon Hai Research Institute, FoxBrain was trained with an optimized strategy that prioritized efficiency over simply scaling up computing power. The institute claims the model's performance approaches global standards, though it still trails DeepSeek's distillation model slightly.
Furthermore, the model is said to outperform Llama-3-Taiwan-70B, a model of the same scale, particularly in mathematical and logical reasoning. Beyond that, the company has revealed few details about the model's performance. FoxBrain is scheduled to be presented at NVIDIA GTC 2025 on March 20 in a session titled 'From Open Source to Frontier AI: Build, Customize, and Extend Foundational Models.'