By Kate Yuan
(JW Insights) Sep 7 -- Chinese AI startup Baichuan Intelligent Technology (百川智能) unveiled its open-source large language models (LLMs) Baichuan 2-7B, Baichuan 2-13B, and the fine-tuned Baichuan 2-13B-Chat, along with 4-bit quantized versions, on September 6. The models are free for commercial use.
Beijing-based Baichuan was founded in April 2023 by Wang Xiaochuan, former CEO of Sogou Inc, China's second-largest search engine. The company has launched three LLMs and opened them to the public after receiving approval from Chinese authorities last week.
Baichuan will also release a technical report on Baichuan 2 detailing its training process, aiming to help academic institutions, developers, and enterprise users gain a deeper understanding of how the models were built. The company has open-sourced the training checkpoints as well.
Baichuan 2-7B-Base and Baichuan 2-13B-Base, both trained on 2.6 trillion tokens of high-quality multilingual data, have significantly improved capabilities in mathematics, coding, security, logical reasoning, and semantic understanding.
Compared with the previous-generation 13B model, Baichuan 2-13B-Base shows improvements of 49% in mathematical ability, 46% in coding, 37% in security, 25% in logical reasoning, and 15% in semantic understanding, according to the company.
The scores of the two models are higher than those of Llama 2, an open-source model launched by US tech company Meta, on both MMLU and CMMLU, two authoritative LLM evaluation benchmarks, according to a China Daily report.
The open-source ecosystem is crucial for promoting the technical advancement and industrial application of LLMs, industry experts said. Open-source LLMs will help enterprises simplify model training and deployment and lower the threshold for applying LLMs, they added.