Alibaba's Qwen team of AI researchers has been among the most prolific and well-regarded in the international machine learning community, shipping dozens of powerful generalized and specialized generative models since last summer, most of them entirely open source and free.
But now, just 24 hours after shipping the open source Qwen3.5 small model series (a release that drew public praise from Elon Musk for its "impressive intelligence density"), the project's technical architect and several other Qwen team members have exited the company under unclear circumstances. The departures are raising questions around the world about the future direction of the Qwen team and its commitment to open source.
The departure of Junyang "Justin" Lin, the technical lead who steered Qwen from a nascent lab project to a global powerhouse with over 600 million downloads, alongside two colleagues, staff research scientist Binyuan Hui and intern Kaixin Li, marks a volatile inflection point for Alibaba Cloud and its role as an international open source AI leader.
These three Qwen team members announced their departures on X today, though they did not share the reasons or whether they were voluntary. Lin himself signed off with a simple post: "me stepping down. bye my beloved qwen."
Asked about the exits by VentureBeat, Alibaba provided the following message, dated March 5, 2026, from Alibaba Group CEO Eddie Wu to the staff of the Alibaba Cloud Tongyi Laboratory (which housed the Qwen team):
"Keep moving forward, let’s go together
Dear Tongyi Laboratory colleagues,
The company has accepted Lin Junyang’s resignation and we sincerely thank him for his contributions during his time with us. Jingren will continue to lead the Tongyi Laboratory and drive its ongoing initiatives. Additionally, the company will establish a Foundation Model Task Force, consisting of me, Jingren and Fanyu, who will jointly coordinate group-wide resources to accelerate foundation model development.
In technology, standing still means falling behind. Advancing foundation models is a core strategic priority for our future. While continuing to uphold our open-source model strategy, we will further scale up investment in AI research and development, accelerate the recruitment of top talent. Let's rise to this challenge."
More information about the departures, along with reporting from around the web (including unconfirmed speculation), follows below:
The departing researchers' final gift: pocket-sized intelligence
The Qwen3.5 small model series (ranging from 0.8B to 9B parameters) represents a final masterstroke in "intelligence density" from the founding team.
The models employ a Gated DeltaNet hybrid architecture that allows a 9B-parameter model to rival the reasoning capabilities of much larger systems.
By utilizing a 3:1 ratio of linear attention to full attention, the models maintain a massive 262,000-token context window while remaining efficient enough to run natively on standard laptops and smartphones — even in web browsers.
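The 3:1 interleaving described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the layer count, block names, and scheduling rule below are assumptions made for clarity, not Qwen3.5's published configuration.

```python
# Hypothetical sketch of a 3:1 hybrid attention schedule, in the spirit of
# the Gated DeltaNet hybrid design described for Qwen3.5. The layer count,
# names, and placement rule are illustrative assumptions, not the model's
# actual published architecture.

def hybrid_attention_schedule(num_layers: int, ratio: int = 3) -> list[str]:
    """Return a per-layer type list where every (ratio + 1)-th layer uses
    full (quadratic) attention and the remaining layers use linear attention."""
    schedule = []
    for i in range(num_layers):
        if (i + 1) % (ratio + 1) == 0:
            schedule.append("full_attention")
        else:
            schedule.append("linear_attention")  # e.g. a Gated DeltaNet block
    return schedule

# With 24 layers and a 3:1 ratio, 18 layers are linear and 6 are full.
schedule = hybrid_attention_schedule(24)
print(schedule.count("linear_attention"), schedule.count("full_attention"))
```

The design rationale is that linear-attention layers scale with sequence length as O(n) rather than O(n²), so keeping three of every four layers linear is what makes a 262,000-token context window tractable on laptops, phones, and browsers.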
Lin, a PKU humanities graduate and polyglot, has long advocated for this "algorithm-hardware co-design" to bypass compute constraints—a philosophy he detailed at the January 2026 Tsinghua AI Summit.
For the developer community, Qwen3.5 wasn't just another update; it was a blueprint for the "Agentic Inflection," where models shift from being chatbots to autonomous "all-in-one AI workers" capable of navigating UIs and executing complex code.
The enterprise dilemma
For the 90,000+ enterprises currently deploying Qwen via DingTalk or Alibaba Cloud, the leadership vacuum creates a crisis of confidence.
Many companies migrated to Qwen because it offered a "third way": the performance of a proprietary US model with the transparency of open weights.
Alibaba has recently consolidated its AI efforts into the "Qwen C-end Business Group," merging its model labs with consumer hardware teams. The goal is clear: transition Qwen from a research project into the operating system for a new era of AI-integrated glasses and rings.
However, the reported appointment of Hao Zhou, a veteran of Google DeepMind’s Gemini team, to lead the Qwen team indicates a shift from "research-first" to "metric-driven" leadership.
Industry analysts, including those cited by InfoWorld, warn that as Alibaba pushes to meet investor demands for revenue growth, the "open" in Qwen's open-weight models may become a secondary priority. A similar shift played out at Meta after the disappointing release of its Llama 4 model last spring: the company reorganized its AI division, hired Scale AI co-founder and CEO Alexandr Wang, and subsequently saw the departure of preeminent researcher Yann LeCun.
Enterprises relying on the Apache 2.0-licensed Qwen models now face the possibility that future flagships —such as the rumored Qwen3.5-Max—will be locked behind paid, proprietary APIs to drive Cloud DAU (Daily Active User) metrics.
The takeaway? If you value Qwen's open source efforts, download and preserve the models now, while you still can.
The "Gemini-fication" of Qwen?
The internal friction at Alibaba mirrors the tensions seen at OpenAI and Google: the "soul" of the machine is often at odds with the "scale" of the business.
Xinyu Yang, a researcher at rival Chinese AI lab DeepSeek, captured this sentiment in a stark post on X: "Replace the excellent leader with a non-core people from Google Gemini, driven by DAU metrics. If you judge foundation model teams like consumer apps, don’t be surprised when the innovation curve flattens."
This "Gemini-fication"—the shift toward a highly regulated, product-centric culture—threatens the very agility that allowed Qwen to surpass Meta’s Llama in derivative model creation. For the global AI community, the loss of Junyang Lin is symbolic.
He was the primary bridge between China’s deep engineering talent and the Western open-source ecosystem. Without his advocacy, there are fears that the project will retreat into a "walled garden" strategy similar to its Western rivals.
'Leaving wasn't your choice'
The technical brilliance of the Qwen3.5 release has been overshadowed by the heartbreak of its creators. On social media, the sentiment among the team members who built the model is one of mourning rather than celebration:
Chen Cheng, a Qwen contributor, explicitly alluded to a forced departure, writing in a post on X: "I'm truly heartbroken. I know leaving wasn't your choice… I honestly can't imagine Qwen without you."
Li suggested the exit signaled the end of broader ambitions, such as a planned Singapore-based research hub: "Qwen could have had a Singapore base, all thanks to Junyang. But now that he's gone, there's no reason left to stay here."
Tongyi Conference reports
While the public face of the Qwen3.5 launch was one of technical triumph, internal reports from a "Tongyi Conference" held by Alibaba on March 4 suggest an atmosphere of significant organizational tension.
According to unverified but widely discussed accounts from the meeting posted on X, executives defended the departures as the culmination of a fundamental disagreement over how AI should be built.
The primary catalyst appears to be a dismantling of the "vertically integrated" R&D model that Lin had championed. Under Lin, the Qwen team operated as an end-to-end, autonomous unit covering everything from pre-training and infrastructure to multimodal research. The new corporate directive splits this "closed loop" into horizontal modules managed directly by Alibaba Cloud's Tongyi Lab.
Leadership, including Wu, Cloud CTO Zhou Jingren, and the Chief HR Officer, argued that while Lin's centralized "efficiency" was undeniable, the project's scale (now involving hundreds of people) could no longer be governed by "one person's brain."
The most striking details from the conference involve the company's response to the team's loyalty to Lin. When asked if there was a path for Lin’s return, the Chief HR Officer reportedly struck a definitive tone, stating:
"We cannot put him on a pedestal… the company cannot accept irrational demands that spare no cost to retain him."
The executive then turned the question back on the staff, asking the audience to consider: "What do you think your own cost is?" This rhetoric signals a pivot from a talent-first, researcher-led culture to a more traditional, replaceable corporate structure.
CEO Wu addressed complaints regarding "choked" resources, claiming he was unaware of any intentional bottlenecks and asserting that Qwen remains his "highest priority."
However, in a surprising moment of candor, CTO Zhou Jingren reportedly admitted that even he had been "sidelined" at times, illustrating a fractured chain of command where technical needs frequently collided with "national situation" constraints and group-level political factors.
What happens to Qwen's open source AI efforts from here on out?
The known facts are simple: Qwen has never been technically stronger, yet its founding core has been dismantled. As Alibaba prepares to face investors for its fiscal Q3 earnings report on March 5, the narrative will likely focus on "efficiency" and "commercial scale."
For the enterprises currently excited about the 60% cost reductions promised by Qwen3.5, the immediate future is bright.
But for the larger AI community, the cost of that efficiency may be the loss of the most vibrant open-source lab in the East.
Wu's comments suggest, at least in principle, a commitment to continuing open source model development. But will Qwen ship as many open models going forward, and will they be as performant? Or will the pace of this effort slow and the results impress the international community less? For now, the only thing that's clear is that Qwen's leadership has changed abruptly, and a new era is in store.