China AI Native Industry Insights – 20241213 – Shanghai AI Lab | Peking University | ByteDance | Giant Network | more

Explore Shanghai AI Lab’s new REEF tool for LLM fingerprinting, the launch of Peking University and ByteDance’s joint lab for AI systems, and Giant Network’s release of the “QianYing” voice game generation model. Discover more in Today’s China AI Native Industry Insights.

1. Shanghai AI Lab Introduces REEF: Fingerprinting LLMs to Detect Unauthorized Derivatives

🔑 Key Details
– New Fingerprinting Method: Shanghai AI Lab, along with researchers from CAS, Renmin University, and SJTU, developed REEF (Representation Encoding Fingerprints) for detecting unauthorized derivative models of large language models (LLMs).
– Based on Representation Invariance: REEF leverages the invariance of LLM representations post-finetuning to accurately identify derivative models without compromising original model performance.
– Robust Against Modifications: REEF remains effective even after pruning, merging, reordering, or scaling transformations, ensuring “wrapped models” cannot bypass detection.
– Highly Accurate and Efficient: In tests with TruthfulQA and other datasets, REEF achieved 0.9962 similarity for finetuned models and maintained robustness with minimal sample sizes (~200-300 samples).
– Open and Transparent: Balances openness and IP protection, providing an efficient way to trace responsibilities for derivative models.
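The core idea behind REEF is to compare a suspect model's internal representations with the original model's on the same set of prompts, using a similarity measure that survives the transformations listed above. Below is a minimal sketch of linear CKA (Centered Kernel Alignment), one such representation-similarity measure; the toy data, layer choice, and threshold are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation
    matrices of shape (n_samples, n_features). Returns a score in
    [0, 1]; 1 means the representations match up to an orthogonal
    transform and isotropic scaling."""
    # Center each feature dimension across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

# Toy check (synthetic activations, not real LLM states): a
# "derivative" whose representations differ only by rotation and
# scaling still scores ~1, while an unrelated model scores far lower.
rng = np.random.default_rng(0)
base = rng.normal(size=(300, 64))            # ~300 prompts, 64-dim hidden states
rotation, _ = np.linalg.qr(rng.normal(size=(64, 64)))
derivative = 3.0 * base @ rotation           # scaled + rotated copy
unrelated = rng.normal(size=(300, 64))

print(linear_cka(base, derivative))          # ~1.0
print(linear_cka(base, unrelated) < 0.5)     # True
```

This invariance to rotation and scaling is what makes such a fingerprint hard to strip with "wrapper" transformations, and the ~300-sample scale mirrors the minimal sample sizes reported above.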

💡 How It Helps
– For Model Owners: Offers a robust and efficient tool to protect intellectual property, ensuring unauthorized derivatives can be traced and identified.
– For AI Researchers: Provides a transparent framework for studying LLM derivations, enabling fair and collaborative AI advancements.
– For Industry Users: Enhances accountability by allowing companies to verify the origins of models used in sensitive applications.

🌟 Why It Matters
REEF sets a new standard in AI model protection by addressing a critical challenge: detecting unauthorized modifications while maintaining model performance and openness. Its robustness against finetuning and structural alterations ensures that model ownership can be effectively traced, fostering a more transparent and collaborative AI ecosystem. This innovation has the potential to reshape the way IP rights are managed in AI.

Original Chinese article: https://mp.weixin.qq.com/s?__biz=MzIzNjc1NzUzMw==&mid=2247766493&idx=4&sn=5fa4935a6e79a192fd7e5e7047f5a761&chksm=e9127720c9b313242c383679f0e91b048089715fffb0136d0cfe56f1de1f0811de627f2ae23e#rd

English translation via free online service: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=https%3A%2F%2Fmp.weixin.qq.com%2Fs%3F__biz%3DMzIzNjc1NzUzMw%3D%3D%26mid%3D2247766493%26idx%3D4%26sn%3D5fa4935a6e79a192fd7e5e7047f5a761%26chksm%3De9127720c9b313242c383679f0e91b048089715fffb0136d0cfe56f1de1f0811de627f2ae23e%23rd

Video Credit: the original article

2. Peking University and ByteDance Establish “Doubao Large Model System Software Joint Laboratory” to Tackle Key Issues in AI System Software

🔑 Key Details
– Lab Establishment: On December 12, Peking University and ByteDance officially launched the “Doubao Large Model System Software Joint Laboratory,” focusing on challenges in large model system software.
– Core Objectives: The lab aims to address key scientific and technical issues in intelligent system software for large models, with an emphasis on breakthrough research, real-world application, and talent cultivation.
– Collaborative Highlights: Peking University brings expertise in foundational research and system software, while ByteDance offers applied experience from its Doubao large model production environment.
– Innovative Achievements: Previous collaboration has led to significant outcomes, including research on GPU-based training systems for large models, published in high-impact journals, and successful deployments in production environments.
– Future Goals: The lab plans to focus on model architecture, training frameworks, and inference optimization, leveraging real-world feedback from ByteDance’s Doubao applications to drive impactful advancements.

💡 How It Helps
– For Researchers: Provides a collaborative platform to tackle foundational and applied challenges in AI system software, fostering groundbreaking innovations.
– For Industry Practitioners: Offers scalable solutions to enhance model training and inference efficiency, benefiting industries reliant on large-scale AI models.
– For Students: Serves as a hub for cultivating interdisciplinary talent with strong theoretical foundations and practical innovation skills.

🌟 Why It Matters
The Doubao Joint Laboratory represents a significant step in academia-industry collaboration, addressing the pressing challenges posed by large AI models. By uniting the academic strengths of Peking University with ByteDance’s real-world expertise, the lab aims to push boundaries in AI system software. This collaboration not only advances key technologies but also sets a benchmark for fostering innovation through deep integration of research and industrial practice.

Original Chinese article: https://mp.weixin.qq.com/s/Ku6I7rf1en0c3j76DuStaQ

English translation via free online service: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=https%3A%2F%2Fwww.aibase.com%2Fzh%2Fnews%2F13947

Video Credit: Kling AI

3. Giant Network Releases “QianYing” Voiced Game Generation Large Model

🔑 Key Details
– QianYing Overview: Giant Network launched “QianYing,” combining YingGame for video generation and YingSound for high-fidelity audio creation.
– YingGame: Enables diverse character motions, customizable appearances, and realistic physical simulations for open-world game videos.
– YingSound: Generates high-quality sound effects from video content, excelling in time alignment and semantic understanding.
– Technological Innovations: Uses multimodal feature fusion, motion enhancement, and reinforcement learning to deliver industry-leading precision.
– Strategic Vision: Positions Giant Network as a leader in “Game + AI,” accelerating efficient production and empowering creators with new tools.

💡 How It Helps
– For Game Developers: Reduces production complexity and boosts efficiency with AI-powered tools for video and sound generation, simplifying animation, sound design, and motion control.
– For Content Creators: Empowers non-developers to bring creative ideas to life by enabling interactive game creation with minimal technical expertise.
– For AI Researchers: Provides a robust platform to experiment with advanced multimodal AI applications in a highly interactive and demanding domain.

🌟 Why It Matters
The launch of QianYing marks a transformative step in game development, blending multimodal AI to redefine video and audio generation. By introducing interactive capabilities and advanced physical simulations, it lowers barriers for creators and democratizes game creation. This innovation not only reshapes production pipelines but also propels the gaming industry toward a future where creativity is the only limit, reinforcing Giant Network’s leadership in “AI + Games.”

Original Chinese article: https://mp.weixin.qq.com/s?__biz=MjM5MzE3NDI5Mg==&mid=2650761386&idx=1&sn=995d09e20b0aa0d66a12afa86ece8881&chksm=bf4d65d1e6fefcafe3ab8a90c687f026d160701a4505b34b9695b9bb486b3eb6cbaccea536b4#rd

English translation via free online service: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=https%3A%2F%2Fmp.weixin.qq.com%2Fs%3F__biz%3DMjM5MzE3NDI5Mg%3D%3D%26mid%3D2650761386%26idx%3D1%26sn%3D995d09e20b0aa0d66a12afa86ece8881%26chksm%3Dbf4d65d1e6fefcafe3ab8a90c687f026d160701a4505b34b9695b9bb486b3eb6cbaccea536b4%23rd

Video Credit: Giant Network (https://www.ga-me.com/news/view?id=330)

That’s all for today’s China AI Native Industry Insights. Join us at the AI Native Foundation Membership Dashboard for the latest insights on AI Native, follow our LinkedIn account at AI Native Foundation, or our Twitter account at AINativeF.
