Every summer, Shanghai plays host to a one-of-a-kind exhibition on Asia’s digital entertainment scene. The ChinaJoy Expo blends cutting-edge technology with gaming culture and serves it with a dollop of creative artistry. The event draws the who’s who of the digital entertainment sector, and this year, Soul’s CEO, Zhang Lu, once again brought her team to the Expo.
Instead of relying on flashy games, Soul Zhang Lu’s team opted to go with a fusion of artificial intelligence and social networking meant to create immersive digital environments. The aim was to demonstrate that through the use of AI, it’s possible to craft a new kind of social ecosystem, wherein emotional authenticity takes top priority.
Although ChinaJoy is primarily recognized as a gaming expo and one of the largest at that, in recent years, the exhibition has been more than just a celebration of video games. The event now serves as a convergence point for technology companies, creative studios, and cultural innovators. In fact, exhibitors arrive not only to showcase their latest products but also to preview the ideas and technologies that could define the next decade.
The participation of Soul Zhang Lu’s platform at ChinaJoy 2025 was only natural because AI was the dominant theme this year, and Soul has systematically infused AI into its social ecosystem. The halls were teeming with everything from automated game design tools to interactive storytelling engines.
So, ChinaJoy is best described as an eclectic and impressive display of innovative uses of artificial intelligence. And amid this technical exuberance, Soul Zhang Lu’s team stood out because they proposed a way in which the technology can reshape the very nature of human connections.
At their booth, Soul Zhang Lu’s engineers created a Gen AI Social Playground of sorts. This was not a static display but a live, interactive demonstration space where visitors could experience AI-powered conversations firsthand. As expected, this approach turned out to be a crowd-puller, with event-goers lining up to strike up a conversation with “Meng Zhishi,” Soul’s AI companion capable of natural, full-duplex voice communication.
While digital personas that can hold a conversation are not new, a full-duplex model is a rarity. The term refers to an AI model capable of speaking and listening at the same time, much as humans do when chatting. Full-duplex capability eliminates the jarring, unnatural rhythm that plagues most AI chatbots.
Typically, there is a lag between the human input and the machine output. Although this pause isn’t significantly long, it is enough to create a sense of disconnect. Also, human speech isn’t just about words. Often, emotions are conveyed through verbal nuances that are used to display attentiveness, presence, sentiments, agreement, and more.
For instance, no matter the language, verbal affirmations such as “right,” “umm,” “ah-ha,” and “exactly” are often used as interjections to show that the speaker is being heard. Unfortunately, most AI models fail dismally when it comes to these verbal cues. But not Soul Zhang Lu’s model, and that is what made it stand out.
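The interplay described above, of listening continuously while interjecting brief backchannel cues, can be sketched in a toy loop. Everything below is illustrative: the chunked transcript, the cue list, the `converse` function, and the fixed cadence are assumptions for demonstration, not Soul’s actual pipeline, which operates on live audio rather than text.

```python
# Toy sketch of full-duplex-style turn-taking with backchannel cues.
# All names and logic here are hypothetical, not Soul's implementation.

BACKCHANNELS = ("right", "umm", "ah-ha", "exactly")

def converse(speech_chunks, backchannel_every=2):
    """Consume incoming speech chunks, emitting a backchannel cue
    every `backchannel_every` chunks while still listening, then
    produce a full reply once the speaker finishes."""
    events = []
    heard = []
    for i, chunk in enumerate(speech_chunks, start=1):
        heard.append(chunk)  # keep listening; never drop the speaker's words
        if i % backchannel_every == 0:
            # Interject without taking over the turn, cycling through cues.
            cue = BACKCHANNELS[(i // backchannel_every - 1) % len(BACKCHANNELS)]
            events.append(("backchannel", cue))
    # Only after the speaker stops does the model take the floor.
    events.append(("reply", "You said: " + " ".join(heard)))
    return events

# Example: the model murmurs "right" and "umm" mid-story, then replies.
events = converse(["I", "played", "this", "game"], backchannel_every=2)
```

A half-duplex chatbot would produce only the final `reply` event; the interleaved `backchannel` events are what make the exchange feel attended-to.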
Meng Zhishi is quite capable of responding not just to the words but also to the unspoken emotions that form the undercurrent of human interactions, and ChinaJoy’s visitors got a practical demonstration of the unique conversational abilities of the chatbot.
For example, one of the users chose to narrate a nostalgic story about a childhood game, and Meng Zhishi responded with perfectly timed chuckles and even a pertinent follow-up question, thus creating a moment that felt far more personal than a scripted chatbot response.
While Soul Zhang Lu’s Meng Zhishi left everybody awestruck with her responses, what impressed most was the emotional intelligence the AI chatbot displayed. Because the model is capable of advanced sentiment analysis, the system can detect mood shifts based not just on the speaker’s actual words but also on tonal differences, and it can adapt its delivery to align with these shifts.
For instance, the chatbot responded with playful banter to a light-hearted remark, but when the conversation shifted to a serious confession, the user was offered empathy and reassurance. In essence, by embedding emotional intelligence into an AI model, Soul Zhang Lu’s team has managed to deal with a fundamental limitation of digital communication.
The fact is that, hitherto, AI systems simply did not have the ability to “read the room,” given the lack of physical cues in human-machine interactions. But with these tweaks, Soul’s engineers have tried to compensate for the paucity of physical cues by giving the model the ability to understand tonal and verbal cues.
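The mood-adaptive behavior described above can be illustrated with a deliberately crude text-only classifier. Real sentiment analysis also weighs tone, pacing, and acoustics; the function names, cue lists, and canned replies below are all hypothetical, chosen only to show the branching from detected mood to delivery style.

```python
# Hypothetical sketch: pick a reply register based on a detected mood.
# Keyword matching stands in for real (tonal + lexical) sentiment analysis.

PLAYFUL_CUES = {"haha", "lol", "funny"}
SERIOUS_CUES = {"worried", "sad", "confession", "lost"}

def detect_mood(utterance):
    """Return 'serious', 'playful', or 'neutral' from word cues alone."""
    words = set(utterance.lower().split())
    if words & SERIOUS_CUES:   # serious cues take priority over playful ones
        return "serious"
    if words & PLAYFUL_CUES:
        return "playful"
    return "neutral"

def respond(utterance):
    """Adapt delivery to the detected mood, as the article describes."""
    mood = detect_mood(utterance)
    if mood == "serious":
        return "I'm here for you. Do you want to talk about it?"
    if mood == "playful":
        return "Haha, good one!"
    return "Tell me more."
```

The point of the sketch is the structure, not the classifier: once a mood signal exists, switching the response register is a small branch, which is why embedding even coarse sentiment detection changes how a conversation feels.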
The best part is that Soul Zhang Lu’s team didn’t stop there. The platform also debuted its Mobius Avatar series, a set of visually striking, customizable digital personas. Make no mistake, these avatars are more than decorative trinkets. They are the visual aspect of Soul’s AI model and, as such, an integral part of the whole human-machine interaction experience.
Although Soul Zhang Lu has invested significant resources into giving the platform’s AI models the ability to interact in a human-like manner, the aim is not to replace human-human interactions. Instead, Soul is simply trying to meet Gen Z’s growing demand for emotional value from online interactions.
A Gen Z Social Attitudes Survey Report released by Soul revealed that youngsters are less interested in superficial content and more invested in platforms that provide genuine interaction, a sense of belonging, and shared values. And, it’s obvious that through these AI-powered offerings, Soul Zhang Lu is trying to give her users the emotional takeaway they seek.
