DeepSeek Founder’s Exclusive Interviews – Commitment

We continue our exclusive series with the second invaluable interview featuring DeepSeek CEO, Liang Wenfeng. This interview, originally published in July 2024 by 36Kr and titled “Unveiling DeepSeek: An Unparalleled Story of Chinese Technological Idealism,” delves into the extraordinary impact of DeepSeek on China’s large model landscape and further explores the company’s unwavering commitment to The Power of Open Source.
Following the first interview, which focused on DeepSeek’s origins and High-Flyer Quant’s (幻方) strategic entry into AI, this piece examines DeepSeek’s disruptive role in igniting a price war within the Chinese large model market. It uncovers how DeepSeek, seemingly under the radar, became a catalyst for change, prompting tech giants like ByteDance, Tencent, Baidu, and Alibaba to drastically reduce their pricing.
Central to this story is DeepSeek’s pioneering work in model architecture innovation. The interview details the groundbreaking MLA (Multi-Head Latent Attention) architecture, which achieved unprecedented cost-effectiveness and garnered significant attention from Silicon Valley experts. You’ll gain insight into how this technological breakthrough, combined with a unique DeepSeekMoESparse structure, allows DeepSeek to operate profitably while simultaneously driving down costs for the entire industry.
Beyond the economics, this interview provides a rare glimpse into the culture and philosophy driving DeepSeek. You’ll hear directly from Liang Wenfeng about his dedication to original innovation, his belief in open source as a driver of progress, and his vision for a future where Chinese companies contribute to global technological advancements, rather than merely following trends. The interview also highlights the unique, bottom-up organizational structure at DeepSeek, fostering creativity and empowering a team largely composed of young, homegrown talent.
This conversation reveals a company defying conventional wisdom, prioritizing long-term research and technological breakthroughs over immediate commercial gains. It showcases a technological idealism remarkably rare in today’s tech environment, a commitment to contributing to the global AI ecosystem, and a deep belief in the potential of Chinese innovation. The interview reinforces the critical message: Be a contributor, not a free rider.
We strongly encourage you to read this second installment to fully understand DeepSeek’s remarkable journey, its commitment to pushing the boundaries of AI, and its vision for the future of AGI.
Unveiling DeepSeek: An Unparalleled Story of Chinese Technological Idealism
By Yu Lili · Edited by Liu Jing · 22 July 2024 · Source Link
Among China’s seven large model startups, DeepSeek often flies under the radar, yet consistently stands out in unexpected ways.
A year ago, its surprise factor came from its backing by High-Flyer Quant (High-Flyer Quant is the company that incubates and fully owns DeepSeek), a giant in quantitative hedge funds, making it the only company outside major corporations to stockpile tens of thousands of A100 chips. A year later, it became the catalyst for sparking the price war in China’s large model market.
In the midst of the AI boom last May, DeepSeek catapulted to fame. The catalyst was their release of an open-source model called DeepSeek V2, which delivered unprecedented cost-effectiveness: inference was priced at just 1 yuan per million tokens, roughly one-seventh the price of Llama3 70B and one-seventieth that of GPT-4 Turbo.
DeepSeek quickly earned the nickname “Pinduoduo of the AI industry”—a nod to the Chinese e-commerce platform known for affordability—prompting tech giants like ByteDance, Tencent, Baidu, and Alibaba to follow suit with price reductions. Thus, a price war erupted in China’s large model sector.
The smoke of fierce competition obscures a crucial fact: unlike many tech giants burning cash on subsidies, DeepSeek operates profitably.
This success is due to DeepSeek’s comprehensive innovation in model architecture. Their pioneering MLA (Multi-Head Latent Attention) architecture reduced GPU memory consumption to just 5%-13% of that required by the previously common MHA architecture. Additionally, their unique DeepSeekMoESparse structure drastically decreased computational workload, further reducing costs.
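To see where a memory reduction of that magnitude can come from, consider a back-of-the-envelope comparison of per-token KV-cache size. Standard multi-head attention (MHA) caches a full key and value vector for every head, while a latent-attention scheme caches only one small compressed vector per token and re-projects keys and values from it at compute time. The sketch below is purely illustrative; the dimensions are hypothetical, chosen only to land inside the 5%-13% range the article cites, and are not DeepSeek V2's real hyperparameters.

```python
# Illustrative per-token KV-cache comparison: MHA vs. a latent-attention scheme.
# All dimensions are made up for illustration, not DeepSeek V2's actual config.

def mha_cache_per_token(n_heads: int, head_dim: int) -> int:
    """Elements cached per token under MHA: one key and one value per head."""
    return 2 * n_heads * head_dim

def latent_cache_per_token(latent_dim: int) -> int:
    """Elements cached per token under a latent scheme: a single compressed
    latent vector, from which keys and values are re-projected when needed."""
    return latent_dim

mha = mha_cache_per_token(n_heads=32, head_dim=128)  # 8192 elements per token
mla = latent_cache_per_token(latent_dim=512)         # 512 elements per token
print(f"latent cache is {mla / mha:.1%} of the MHA cache")  # 6.2%
```

With these illustrative numbers the latent cache is about 6% of the MHA cache, which is why a smaller cache per token translates directly into cheaper long-context inference.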
In Silicon Valley, DeepSeek has been dubbed the “enigmatic presence from the East.” SemiAnalysis’s chief analyst praised the DeepSeek V2 paper as potentially the best of the year. Andrew Carr, a former OpenAI employee, called the paper “filled with astounding wisdom,” and applied its training settings to his own model. Jack Clark, former policy director at OpenAI and co-founder of Anthropic, noted that DeepSeek “has hired a group of enigmatic talents,” predicting that China-developed large language models “will become an undeniable force, just like drones and electric vehicles.”
In a tech wave largely dominated by Silicon Valley, such scenarios are rare. Several industry insiders indicated that the fervent reaction was due to architectural innovation, a rare endeavor among domestic large model firms and global open-source foundational models. An AI researcher pointed out that the Attention architecture has remained largely unchanged for years and has rarely been modified and tested at scale; the very idea of altering it is often stifled by a lack of confidence.
Meanwhile, prior to DeepSeek, domestic large models have seldom ventured into architectural innovation, partly because few have actively challenged the stereotype: that America is more proficient in 0-to-1 foundational breakthroughs, whilst China excels in 1-to-10 practical adaptations. Attempting such innovation is deemed high-risk—a new generation of models will emerge naturally in a few months, allowing Chinese firms to follow and focus on applications if needed. Innovating in model structures means forging a new path, facing numerous failures, and incurring significant time and financial costs.
Undoubtedly, DeepSeek is a contrarian. In a cacophony advocating that large model technologies would ultimately converge and that following the trend was a smarter shortcut, DeepSeek understood the value gained through an unorthodox journey and believed that Chinese AI entrepreneurs could contribute to global technological innovation beyond applications.
Many of DeepSeek’s decisions have been unconventional. As of now, it is the only one among the seven Chinese large model startups that has abandoned the “have it all” approach, dedicating itself solely to research and technology, without developing consumer applications. It is also the only company that hasn’t focused on commercialization, steadfastly opting for an open-source path without having raised venture funding. These choices often leave it overlooked, yet it is frequently celebrated by word of mouth within the community.
How exactly was DeepSeek forged? We interviewed the elusive founder, Liang Wenfeng.
This founder has been quietly dedicating himself to technological research since his time at High-Flyer Quant. A post-80s generation individual, Liang Wenfeng continues his unassuming ways at DeepSeek, engaging in activities typical of any researcher: reading papers, writing code, and participating in group discussions.
Unlike many quantitative fund founders with overseas hedge fund backgrounds from fields like physics or mathematics, Liang Wenfeng has always been domestically grounded, having studied in the artificial intelligence direction in the electronic engineering department of Zhejiang University.
Industry insiders and DeepSeek researchers tell us that Liang Wenfeng is exceptionally rare in China’s AI domain for his combination of robust infrastructure engineering prowess and model research acumen while deftly mobilizing resources. He can make sharp higher-level judgments and simultaneously outperform frontline researchers in detail. “He possesses an intimidating learning ability,” they say, yet he is “nothing like a boss, more like a geek.”
This is a particularly valuable interview. In it, this technological idealist provides a voice that is remarkably rare in China’s current tech community: he is one of the few who place moral principles before pragmatic interests, reminding us to recognise the inertia of our times and to put “original innovation” on the agenda.
A year ago, as DeepSeek entered the scene, our first interview with Liang Wenfeng was titled “The Wild Ride of High-Flyer Quant: The Path to Large Models for an Under-the-Radar AI Giant.” If back then his statement, “Be audaciously ambitious, and radically genuine,” seemed a lofty slogan, a year later, it has indeed become action.
Here is the dialogue:
How was the first shot of the price war fired?
Waves: Following the release of the DeepSeek V2 model, a brutal price war erupted in the large model sector. Some say you’re the ‘catfish’ of the industry.
Liang Wenfeng: We didn’t intend to become a ‘catfish’; we just inadvertently did.
Waves: Were you surprised by this outcome?
Liang Wenfeng: Very much so. We didn’t anticipate the sensitivity to pricing. We acted at our own pace and set prices based on cost calculations. Our principle is to avoid selling below cost and not to seek excessive profit. The price is just modestly above cost.
Waves: Five days later, Zhipu AI followed suit, then major players like ByteDance, Alibaba, Baidu, and Tencent responded.
Liang Wenfeng: Zhipu AI lowered prices for an entry-level product, whereas models comparable to ours remained pricey. ByteDance was actually the first to truly follow. They lowered their flagship model to our price, prompting others to follow. Given that major companies’ model costs are significantly higher, we never imagined they would take a loss, leading to an internet-era-style subsidy logic.
Waves: From the outside, it seems your price cut is about capturing users, a common tactic in price wars.
Liang Wenfeng: Capturing users isn’t our primary aim. On the one hand, we reduced prices because our next-gen model exploration reduced our costs. On the other, we believe both APIs and AI should be universally affordable.
Waves: Most Chinese firms previously replicated the Llama structure for applications. Why did you start with the model structure?
Liang Wenfeng: If the goal is applications, then using the Llama structure to quickly launch products is logical. But our target is AGI—requiring new model structure research to achieve superior performance with constrained resources. This is foundational research essential for scaling to larger models. Besides model structures, we’ve conducted extensive studies on data construction and human-like modeling, all of which reflect in our models. Moreover, Llama’s structure, in terms of training efficiency and inference cost, is possibly two generations behind global standards.
Waves: Where do these generational gaps stem from?
Liang Wenfeng: Firstly, there’s a gap in training efficiency. We estimate that the best domestic models may lag the best international models by roughly a factor of two in model architecture and training dynamics. This alone requires us to consume twice the computational power to achieve the same results. There may also be a factor-of-two difference in data efficiency, meaning we need twice the training data and computational power to achieve equivalent results. Combined, that amounts to a fourfold increase in computational consumption. What we need to do is continuously work to narrow these gaps.
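The compounding Liang describes is multiplicative, not additive: a twofold gap in architecture and training dynamics and a separate twofold gap in data efficiency multiply together. A minimal sketch of that arithmetic, using the factors stated in the interview:

```python
# Compounding the two efficiency gaps described in the interview.
# Each factor is "compute needed relative to the best international models."
arch_gap = 2.0  # twice the compute due to architecture / training dynamics
data_gap = 2.0  # twice the compute due to lower data efficiency
total_multiplier = arch_gap * data_gap
print(f"total compute multiplier: {total_multiplier:.0f}x")  # 4x
```

Closing either gap alone would halve the total multiplier, which is why the interview frames both as targets to be narrowed continuously.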
Waves: Why does DeepSeek focus solely on research and exploration when most Chinese companies pursue both models and applications?
Liang Wenfeng: We think it’s crucial now to engage in the global innovation wave. For many years, Chinese companies only commercialized technologies after others innovated, which shouldn’t be taken for granted. In this wave, our starting point is not profiting, but to advance technology’s frontier and propel the ecosystem forward.
Waves: The inertia of cognition left by the internet and mobile internet eras for most people is that America excels in technological innovation, while China is more adept at application development.
Liang Wenfeng: We believe that with economic growth, China should evolve into being contributors, not mere beneficiaries. Over the past three decades in the IT wave, we barely engaged in true technological innovation. We’ve grown accustomed to Moore’s Law’s gifts—better hardware and software landing every 18 months. This attitude extends to Scaling Law.
But these were cultivated through tireless Western tech communities. Because we weren’t part of this process, we ignored its essence.
The Real Gap Isn’t a Matter of Years, But the Difference Between Originality and Imitation
Waves: Why did DeepSeek V2 surprise many in Silicon Valley?
Liang Wenfeng: Among the numerous innovations happening every day in America, this one is quite ordinary. What surprises them is that it comes from a Chinese company, entering their game as an innovator. After all, most Chinese companies are accustomed to following rather than innovating.
Waves: In a Chinese context, isn’t this choice overly luxurious? Large models require significant investments—not everyone can afford to only research innovation before commercialization.
Liang Wenfeng: The cost of innovation is certainly not low, and the past inertia of simply adopting external models is also related to the historical context of our country. But now, whether it’s China’s economic scale or the profits of large companies like ByteDance and Tencent, they are not small on a global scale. What we lack in innovation is definitely not capital, but confidence and the ability to organise high-density talent for effective innovation.
Waves: Why is rapid commercialization such an easy priority for Chinese firms, including those well-funded?
Liang Wenfeng: For the past thirty years, we focused solely on profit, ignoring innovation. Innovation requires curiosity and creativity beyond commercial interests. We’re bound by this inertia, but it’s passing.
Waves: But you are ultimately a commercial organisation, not a non-profit research institution. If you choose innovation and then share it through open source, where is the moat? For example, the MLA architecture innovation in May—won’t it soon be copied by others?
Liang Wenfeng: In the face of disruptive technology, a moat formed by closed-source is temporary. Even if OpenAI keeps its code closed, it cannot stop others from surpassing them. So, we embed value within our team. Our colleagues grow in this process, accumulating a lot of know-how and forming an organisation and culture that can innovate—that is our moat.
Open sourcing and publishing papers don’t actually result in loss. For tech professionals, being followed is a great sense of achievement. In fact, open sourcing is more of a cultural action than a business action. Giving is actually an extra honour. When a company does this, it also has cultural appeal.
Waves: What is your view on market belief-driven perspectives like Zhu Xiaohu’s?
Liang Wenfeng: Zhu Xiaohu’s viewpoint is self-consistent, but his approach is more suited to companies that focus on making quick profits. If you look at the most profitable companies in America, they are all high-tech firms that have accumulated a lot over time.
Waves: But when it comes to large models, pure technical leadership alone is unlikely to form an absolute advantage. What bigger thing are you betting on?
Liang Wenfeng: What we see is that China’s AI cannot remain in a follower position forever. We often say there is a one- or two-year gap between China and America in AI, but the real gap is the difference between originality and imitation. If this doesn’t change, China will always be a follower. That’s why some explorations are inevitable.
Nvidia’s leadership is not just the result of one company’s efforts, but a joint effort from the entire Western tech community and industry. They are able to see the next generation of technological trends and have a roadmap in hand. The development of China’s AI needs this kind of ecosystem as well. Many domestic chip manufacturers have failed to grow because they lack a supporting tech community and only have second-hand information. Therefore, China inevitably needs people to stand at the forefront of technology.
More investment doesn’t necessarily produce more innovation
Waves: The current DeepSeek has a hint of the early idealism seen in OpenAI and remains open-source. Will you eventually choose to go closed-source? Both OpenAI and Mistral have moved from open-source to closed-source at some point.
Liang Wenfeng: We won’t go closed-source. We believe that establishing a robust technological ecosystem first is more important.
Waves: Are you planning any fundraising? Some media reports suggest that High-Flyer Quant intends to spin off DeepSeek for an independent listing. In Silicon Valley, AI start-ups inevitably end up linked to the tech giants.
Liang Wenfeng: We have no immediate financing plans. The problem we face has never been about money, but rather the embargo on high-end chips.
Waves: Many believe that pursuing AGI and quantitative investing are completely different endeavours. Quantitative work can be done discreetly, while AGI may require a more high-profile approach and forming alliances to increase your investment capacity.
Liang Wenfeng: More investment doesn’t necessarily produce more innovation. Otherwise, tech giants would monopolise all innovation.
Waves: Are you not focusing on applications because you don’t have the operational know-how?
Liang Wenfeng: We believe that right now is an explosive period for technological innovation rather than application innovation. In the long run, we hope to build an ecosystem where the industry directly uses our technologies and outputs, while we only handle foundational models and cutting-edge innovation. Other companies can then build B2B or B2C services on top of DeepSeek. If a complete industry chain takes shape from top to bottom, there’s no need for us to do applications ourselves. Of course, we can do applications if necessary, but research and technological innovation will always be our first priority.
Waves: But if someone wants an API, why would they pick DeepSeek instead of a tech giant?
Liang Wenfeng: The future is likely to be one of specialised division of labour, and foundational large models require ongoing innovation. Big tech companies have their own limits; they’re not necessarily the best fit for that.
Waves: But can technology really create a big enough gap? You yourself have said there is no absolute tech secret.
Liang Wenfeng: There’s no secret in technology, but duplication requires time and cost. Theoretically, there’s nothing strictly secret about NVIDIA’s graphics cards—it should be easy to copy in principle. But reorganising a team and catching up to the next generation of technology both take time. So in practice, the moat is still pretty wide.
Waves: After you lowered your prices, ByteDance was the first to follow suit, indicating they must have felt some threat. What’s your view on new approaches for startups to compete with big companies?
Liang Wenfeng: To be honest, we don’t really care. We just happened to do this. Offering cloud services is not our main goal. Our aim is still to achieve AGI.
So far, I haven’t seen any completely new way of competing, and the big players don’t appear to have a decisive edge either. They may already have a user base, but their revenue streams also become a burden, making them perpetually vulnerable to disruption.
Waves: What do you think the endgame will be for the other six large model start-ups besides DeepSeek?
Liang Wenfeng: Probably two or three will survive. At the moment, they’re all still burning cash, so those with a clear self-positioning and a stronger ability to operate efficiently have a better chance of making it. Other companies might reinvent themselves. Valuable elements won’t simply vanish – they’ll evolve into new forms.
Waves: During your High-Flyer Quant days, your competitive stance was often described as ‘marching to your own drum,’ with little heed paid to peer comparisons. What is your fundamental principle when thinking about competition?
Liang Wenfeng: I frequently consider whether something can enhance society’s operational efficiency, and whether you can find a role in that value chain that you’re good at. As long as the end result is a higher level of societal efficiency, it’s valid. Much of what happens in between is temporary, and if you focus too closely on it, you’ll lose sight of the bigger picture.
A Group of Young People Doing ‘Enigmatic’ Things
Waves: Jack Clark, former Head of Policy at OpenAI and co-founder of Anthropic, believes DeepSeek has hired “a group of enigmatic talents.” What kind of people are behind DeepSeek v2?
Liang Wenfeng: There are no “enigmatic talents” here. The team is primarily composed of fresh graduates from top universities, Ph.D. candidates in their fourth or fifth year (interns), and some young people who graduated just a few years ago.
Waves: A lot of large model companies are persistently recruiting talent overseas. Many believe that the top 50 talents in this field may not be in Chinese companies. Where do your team members come from?
Liang Wenfeng: Our V2 model doesn’t include anyone who’s come back from overseas; they’re all local. The top 50 talents may not be in China, but perhaps we can cultivate our own.
Waves: How did the MLA innovation come about? I heard the idea originally came from a young researcher’s personal interest?
Liang Wenfeng: After summarizing some of the mainstream evolutionary trends of the Attention architecture, he had a sudden inspiration to design an alternative approach. However, it was a long process from the initial idea to implementation. We formed a team specifically for this and spent several months getting it to work.
Waves: The emergence of this kind of divergent inspiration is closely related to your completely innovative organizational structure. Back in the High-Flyer Quant era, you rarely assigned goals or tasks from the top down. But with AGI, which is such a cutting-edge exploration filled with uncertainty, are there more management actions?
Liang Wenfeng: DeepSeek is also entirely bottom-up. And we generally don’t pre-assign roles; instead, there’s an organic division of labor. Everyone has their own unique background and experiences; they come with their own ideas, without needing to be pushed. During the exploration process, if they encounter problems, they’ll naturally gather people to discuss them. However, when an idea shows potential, we will also allocate resources from the top down.
Waves: I hear that DeepSeek is very flexible in allocating compute resources and personnel.
Liang Wenfeng: We have no cap on how each person can allocate compute resources or personnel. If someone has an idea, they can use the training cluster’s compute cards at any time without needing approval. Also, because there are no hierarchies or cross-departmental barriers, they can flexibly call on anyone, as long as the other person is also interested.
Waves: A loose management style also depends on your ability to recruit a group of passion-driven individuals. I hear that you are very good at recruiting from details, which allows you to select people who might be outstanding by non-traditional evaluation criteria.
Liang Wenfeng: Our criteria for selecting people have always been passion and curiosity, so many people have unique and interesting experiences. Many people’s desire to do research far exceeds their concern for money.
Waves: Transformer was born in Google’s AI Lab, and ChatGPT was born in OpenAI. What do you think is the difference in the value of innovation generated by a large company’s AI Lab versus a startup?
Liang Wenfeng: Whether it’s Google’s labs, OpenAI, or even the AI Labs of major Chinese tech companies, they are all very valuable. The fact that OpenAI ultimately made it happen also has historical contingency.
Waves: Is innovation largely a matter of chance? I see that the meeting rooms in the middle of your office area have doors on both sides that can be pushed open at any time. Your colleagues say that this is to leave room for chance. In the birth of Transformer, there was a story of someone who happened to pass by, heard about it, joined in, and eventually turned it into a general-purpose framework.
Liang Wenfeng: I think innovation is first and foremost a matter of belief. Why is Silicon Valley so innovative? First, it’s daring. When ChatGPT came out, the whole of China lacked confidence in doing cutting-edge innovation. From investors to big tech companies, everyone felt that the gap was too big and that it was better to focus on applications. But innovation first requires self-confidence. This confidence is usually more evident in young people.
Waves: But you don’t participate in financing, rarely speak out, and certainly have less social influence than those companies that are actively fundraising. How do you ensure that DeepSeek is the first choice for people who want to work on large models?
Liang Wenfeng: Because we are doing the most difficult thing. For top talent, the biggest attraction is definitely to solve the world’s most difficult problems. In fact, top talent is undervalued in China. Because there is so little hardcore innovation at the societal level, they don’t have the opportunity to be recognized. We are doing the most difficult thing, and that is attractive to them.
Waves: OpenAI’s recent release didn’t bring GPT-5, and many people feel that this is a clear slowdown in the technological curve, and many people are starting to question Scaling Law. What are your thoughts?
Liang Wenfeng: We are more optimistic, and the whole industry seems to be in line with expectations. OpenAI is not a god; it can’t always be at the forefront.
Waves: How long do you think it will take to achieve AGI? Before releasing DeepSeek V2, you released models for code generation and mathematics, and also switched from dense models to MOE. So what are the coordinates of your AGI roadmap?
Liang Wenfeng: It could be 2 years, 5 years, or 10 years, but it will be achieved in our lifetime. As for the roadmap, even within our company, there is no unified view. But we are indeed betting on three directions. One is mathematics and code, the second is multimodality, and the third is natural language itself. Mathematics and code are a natural testing ground for AGI, a bit like Go, a closed and verifiable system, where it is possible to achieve a very high level of intelligence through self-learning. On the other hand, perhaps multimodality and learning by participating in the real world of humans are also necessary for AGI. We remain open to all possibilities.
Waves: What do you think the endgame of large models will be?
Liang Wenfeng: There will be specialized companies providing basic models and basic services, and there will be a very long chain of specialized divisions of labor. More people will build on top of that to meet the diverse needs of society.
All Playbooks Are Outdated
Waves: There have been many changes in China’s large model startup scene over the past year. For instance, Wang Huiwen, who was very active at the beginning of last year, withdrew mid-way, and the companies that joined later are also starting to differentiate themselves.
Liang Wenfeng: Wang Huiwen took on all the losses himself, letting others leave safely. He chose what was hardest on himself but best for everyone else. This was incredibly generous of him, and I really respect him for that.
Waves: Where do you focus most of your energy now?
Liang Wenfeng: My main focus is on researching the next generation of large models. There are still many unresolved problems.
Waves: Other large model startups insist on having it both ways. After all, technology doesn’t bring permanent leadership, and it’s important to seize the window of opportunity to turn technological advantages into products. Is DeepSeek daring to focus on model research because its model capabilities aren’t strong enough?
Liang Wenfeng: All playbooks are outdated; they may not hold true in the future. Using the business logic of the Internet to discuss the future profit model of AI is like discussing General Electric and Coca-Cola when Ma Huateng was starting his business. It’s likely a case of ke zhou qiu jian (marking the boat to find the sword – a futile effort/chasing yesterday’s solutions).
Waves: High-Flyer Quant had a strong track record of technology and innovation and grew relatively smoothly. Is this why you’re more optimistic?
Liang Wenfeng: High-Flyer Quant has, to some extent, strengthened our confidence in technology-driven innovation, but it hasn’t all been smooth sailing. We went through a long accumulation process. The outside world sees the part of High-Flyer Quant after 2015, but we actually worked on it for 16 years.
Waves: Returning to the topic of original innovation. Now that the economy is entering a downturn and capital is entering a cold cycle, will this bring more constraints to original innovation?
Liang Wenfeng: I don’t think that’s necessarily the case. The adjustment of China’s industrial structure will rely more on hardcore technology innovation. When many people realize that the quick money they made in the past might have come from the luck of the times, they will be more willing to get down to doing real innovation.
Waves: So you are also optimistic about this?
Liang Wenfeng: I grew up in a fifth-tier city in Guangdong in the 1980s. My father was a primary school teacher. In the 1990s, there were many opportunities to make money in Guangdong. At that time, quite a few parents came to my home, basically because they felt that studying was useless. But looking back now, the mindsets have all changed. Because it’s harder to make money, even opportunities like driving a taxi might be gone. Things changed in just one generation.
Hardcore innovation will become more and more prevalent in the future. It may not be easy to understand now because the entire society needs to be educated by facts. When this society allows those who engage in hardcore innovation to achieve success and fame, the collective mindset will change. We just need a bunch of facts and a process.
Waves is 36Kr’s dedicated investment reporting platform.
36Kr is a prominent brand and a pioneering platform dedicated to serving New Economy participants in China. With the mission of empowering New Economy participants to achieve more, 36Kr continuously connects and serves six communities: startups, TMT giants, traditional enterprises, institutional investors, local governments, and individuals. 36Kr accelerates the flow of four major elements, namely information, talent, capital, and technology, to promote the development of a rapid, stable, and sustainable New Economy.

36Kr offers comprehensive services that span the entire corporate lifecycle, from investment and public relations to funding and IPO guidance, as well as market value and brand management, customer engagement, consulting and solution provision, government collaboration, and international expansion for businesses.