China AI Native Industry Insights – 20250421 – x-humanoid | ByteDance | Kunlun Tech | more

Explore the triumph of the Tien Kung Ultra robot in the first-ever humanoid robot half marathon, delve into ByteDance’s Coze Space AI agent collaboration platform now in beta, and check out Kunlun’s innovative SkyReels-V2 for unlimited-length movie generation. Discover more in today’s China AI Native Industry Insights.
1. Tien Kung Ultra Robot Wins First-Ever Humanoid Robot Half Marathon
🔑 Key Details:
– Tien Kung Ultra won the world’s first humanoid robot half marathon on April 19, completing the 21 km course in a race organized by Beijing Central Radio and TV Station
– Designed for endurance, it features extended leg structures, reinforced high-torque hip joints, built-in cushioning, and quick-swap batteries
– The robot advanced from requiring cables and remote controls in January to autonomously navigating with beacon signals and human pacers by February
– Achieved full run capability after just three months, overcoming technical hurdles in thermal management, mechanical stability, real-time motion algorithms, and energy optimization
💡 How It Helps:
– Robotics Engineers: Demonstrates how motion capture, reinforcement learning, and sim-to-real techniques can produce agile, durable robots for dynamic environments
– AI Developers: Offers a working example of real-time sensor integration and balance correction at millisecond-level response rates (a minimal control-loop sketch follows this list)
– Industrial Innovators: Highlights potential to scale humanoid robotics for long-duration, real-world tasks beyond lab demos
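The article does not detail Tien Kung Ultra’s control software, but the millisecond-level balance correction mentioned above comes down to a fixed-rate sense-correct-actuate loop. The Python sketch below is purely illustrative of that loop shape under assumed names: read_imu, apply_hip_torque, and the PD gains are hypothetical stand-ins, not the robot’s actual stack.

```python
import random
import time

DT = 0.002           # 2 ms control period (500 Hz), illustrative only
KP, KD = 120.0, 8.0  # hypothetical PD gains for torso pitch correction

def read_imu():
    """Stand-in for an IMU driver: returns (pitch_rad, pitch_rate_rad_s)."""
    return random.uniform(-0.05, 0.05), random.uniform(-0.2, 0.2)

def apply_hip_torque(torque_nm):
    """Stand-in for the joint actuator interface."""
    pass  # a real controller would command the hip motors here

def balance_loop(iterations=500):
    # Turn measured pitch error into a corrective hip torque every DT seconds;
    # an RL policy or model-based controller would sit inside this same loop.
    for _ in range(iterations):
        start = time.perf_counter()
        pitch, pitch_rate = read_imu()
        torque = -(KP * pitch + KD * pitch_rate)  # PD correction toward upright
        apply_hip_torque(torque)
        # Sleep off the remainder of the period to hold a steady control rate.
        time.sleep(max(0.0, DT - (time.perf_counter() - start)))

balance_loop()
```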
🌟 Why It Matters:
Tien Kung Ultra’s half marathon success marks a major leap in humanoid robotics, proving that complex, high-endurance locomotion is achievable in short development cycles. It showcases the convergence of AI, mechanical engineering, and real-world readiness—moving robotics closer to practical deployment in fields like logistics, public safety, and field operations.
Original Chinese article: https://mp.weixin.qq.com/s/pI591BiNED0LTbaqWTuqYQ
English translation via free online service: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=https%3A%2F%2Fmp.weixin.qq.com%2Fs%2FpI591BiNED0LTbaqWTuqYQ
Video Credit: CCTV News official website
2. ByteDance Launches Coze Space: AI Agent Collaboration Platform Enters Beta Testing
🔑 Key Details:
– Coze Space enters beta testing as a collaborative workspace for users and AI agents
– Platform features automatic task analysis, tool integration (browser, code editor), and output in various formats
– Includes specialized agents like Huatai A-share Assistant and User Research Expert
– Supports dual modes: Exploration (faster completion) and Planning (for complex tasks)
– Integrates with MCP extensions including Feishu Tables, AutoNavi Maps, and image tools
💡 How It Helps:
– Knowledge Workers: Automating and delegating tasks to AI agents saves time and increases productivity
– Financial Analysts: Access to specialized stock analysis tools and daily market briefings
– UX Researchers: AI assistance with in-depth user research data analysis
– Developers: Future support for custom MCP development through the Coze platform
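Coze’s custom MCP developer interface is not documented in the article, so the sketch below only illustrates, in generic Python, what a Model Context Protocol tool typically consists of: a name, a description, a JSON Schema for its inputs, and a handler for tool calls. The get_weather tool is a hypothetical example, not a Coze or Feishu integration.

```python
import json

# An MCP server advertises each tool as a name, a description, and a JSON Schema.
WEATHER_TOOL = {
    "name": "get_weather",  # hypothetical example tool
    "description": "Return current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def handle_tool_call(request: dict) -> dict:
    """Dispatch a tools/call-style request to the matching local function."""
    if request["name"] == "get_weather":
        city = request["arguments"]["city"]
        # A real server would query a weather API here; this returns a stub.
        return {"content": [{"type": "text", "text": f"Sunny in {city} (stub)"}]}
    raise ValueError(f"unknown tool: {request['name']}")

print(json.dumps(WEATHER_TOOL, indent=2))
print(handle_tool_call({"name": "get_weather", "arguments": {"city": "Beijing"}}))
```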
🌟 Why It Matters:
Coze Space represents ByteDance’s strategic expansion into AI agent ecosystems, moving beyond simple Q&A into comprehensive task completion. The platform aims to transform workplace productivity by creating specialized AI collaborators that can independently execute complex workflows. As competition in AI workspaces intensifies, ByteDance is positioning Coze as a robust ecosystem with domain-specific capabilities rather than simply a chatbot interface.
Original Chinese article: https://mp.weixin.qq.com/s/0ZXgS9sX6y6PHxjDkITU8A
English translation via free online service: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=https%3A%2F%2Fmp.weixin.qq.com%2Fs%2F0ZXgS9sX6y6PHxjDkITU8A
Video Credit: The original article
3. Kunlun’s SkyReels-V2: Groundbreaking Open-Source Unlimited-Length Movie Generation Model
🔑 Key Details:
– Global First: SkyReels-V2 introduces the world’s first diffusion-forcing framework for unlimited-length video generation, currently supporting videos up to 40 seconds (see the conceptual sketch after this list).
– Technical Innovations: Combines MLLM, multi-stage pretraining, reinforcement learning, and diffusion-forcing to optimize motion quality, consistency, and visual fidelity.
– Performance Excellence: Outperforms both open and closed-source models on SkyReels-Bench and V-Bench 1.0, achieving an overall score of 83.9%.
– Multiple Applications: Offers story generation, image-to-video synthesis, cinematographer expertise, and multi-element video generation (SkyReels-A2).
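To make the diffusion-forcing term above concrete: the core idea is that each frame carries its own noise level, so frames generated earlier stay clean while a new chunk is denoised conditioned on them, letting the video roll forward chunk by chunk without a fixed length cap. The Python sketch below illustrates only that rolling loop with a stub denoiser; it is not the SkyReels-V2 code or API.

```python
import numpy as np

FRAME_SHAPE = (8, 8, 3)  # toy "latent frame" size, purely illustrative
CHUNK = 4                # frames generated per autoregressive step
STEPS = 10               # denoising steps per chunk

def denoiser(noisy_frames, noise_levels, context_frames):
    """Stand-in for the video diffusion transformer: predicts a cleaner chunk
    given per-frame noise levels and already-clean context frames."""
    return [f * 0.5 for f in noisy_frames]  # a real model would run a DiT here

def generate(num_frames):
    video = []  # clean frames produced so far
    while len(video) < num_frames:
        chunk = [np.random.randn(*FRAME_SHAPE) for _ in range(CHUNK)]  # pure noise
        for step in range(STEPS, 0, -1):
            levels = [step / STEPS] * CHUNK  # per-frame noise level schedule
            chunk = denoiser(chunk, levels, context_frames=video[-CHUNK:])
        video.extend(chunk)  # the chunk is now "clean"; roll the window forward
    return video[:num_frames]

print(len(generate(12)), "frames generated")
```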
💡 How It Helps:
– Content Creators: Enables film-style video generation with precise control over motion, subjects, and camera movements.
– Filmmakers: Provides professional cinematic capabilities through SkyCaptioner-V1, which understands film grammar and shot composition.
– Developers: All models (1.3B, 5B, 14B sizes) are fully open-sourced on GitHub to foster further research and applications.
🌟 Why It Matters:
SkyReels-V2 represents a paradigm shift in AI video generation, moving beyond short clips to potentially unlimited-length coherent narratives with cinematic quality. Its open-source approach democratizes access to advanced video synthesis technology, potentially transforming creative industries and opening new possibilities for digital storytelling and visual content creation.
Original Chinese article: https://mp.weixin.qq.com/s/xfgWnSBZYnI-TurjqNeUrw
English translation via free online service: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=https%3A%2F%2Fmp.weixin.qq.com%2Fs%2FxfgWnSBZYnI-TurjqNeUrw
Video Credit: The original article
4. Tencent Open-Sources InstantCharacter: New Plugin for Consistent Character Generation
🔑 Key Details:
– Advanced Plugin Release: Tencent Hunyuan launches InstantCharacter, an open-source plugin compatible with Flux that creates consistent characters in various environments.
– Single-Image Transformation: Requires just one reference image and a text prompt to generate character appearances in different scenes while maintaining identity.
– Technical Innovation: Utilizes a DiT model with an innovative adapter framework and transformer encoders to process open-domain character features (see the conceptual sketch after this list).
– Competitive Performance: Testing shows results comparable to leading models like GPT-4o.
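As a rough illustration of the adapter idea described above (not Tencent’s released implementation), the sketch below shows how features encoded from a single reference image could be injected into a frozen DiT block through an extra cross-attention layer with a residual connection. All module names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class CharacterAdapter(nn.Module):
    """Adds reference-character conditioning to a DiT block's token stream."""
    def __init__(self, dim=1024, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, hidden, char_tokens):
        # hidden:      (B, N, dim) tokens from the frozen base model
        # char_tokens: (B, M, dim) features encoded from the one reference image
        injected, _ = self.attn(query=hidden, key=char_tokens, value=char_tokens)
        return hidden + self.proj(injected)  # residual add keeps the base model intact

# Toy usage: 77 reference-image tokens injected into 256 image tokens.
adapter = CharacterAdapter()
out = adapter(torch.randn(1, 256, 1024), torch.randn(1, 77, 1024))
print(out.shape)  # torch.Size([1, 256, 1024])
```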
💡 How It Helps:
– Content Creators: Enables consistent character generation across multiple scenes for comics, film production, and visual storytelling.
– Developers: Open-source access with GitHub repository and HuggingFace demo implementation.
– Digital Artists: Provides flexible text editing capabilities to place characters in any environment or pose.
– AI Researchers: Access to methodology documented in accompanying research paper on arXiv.
🌟 Why It Matters:
This release addresses a significant challenge in multi-turn image generation by maintaining character consistency while allowing scene flexibility. By open-sourcing this technology, Tencent contributes to democratizing advanced AI image generation capabilities, potentially accelerating innovation in visual content creation tools for storytelling, entertainment, and creative industries.
Original Chinese article: https://mp.weixin.qq.com/s/t5kR44NShOJ1xfIopmG3_Q
English translation via free online service: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=https%3A%2F%2Fmp.weixin.qq.com%2Fs%2Ft5kR44NShOJ1xfIopmG3_Q
Video Credit: The original article
That’s all for today’s China AI Native Industry Insights. Join us at the AI Native Foundation Membership Dashboard for the latest insights on AI Native, or follow our LinkedIn account, AI Native Foundation, and our Twitter account, AINativeF.