What a Polish Celebrity and an "Imperfect" Tennis-Playing Robot Reveal About the Future of Collaboration with Robots
I promise this post will be interesting, whatever your industry! Note: This post is research-driven. For those curious, full references are listed at the end.
A robot has gone viral in Poland. Not because it's the most sophisticated machine ever built. Rather, because it's charming and feels relatable. But what can Edward teach us about how a society comes to accept advanced "intelligent" technologies? Let's first get acquainted with the protagonist.
Edward Warchocki in Warsaw, image from RobotsBeat, 2026
His name is Edward Warchocki. In two weeks, he racked up over 100,000 TikTok followers and hundreds of millions of views. He's appeared on prime-time TV, landed a luxury watch brand deal, and spent an hour in deep conversation with a 90-year-old woman on a Warsaw street. Some could describe him with the affectionate phrase: "an unemployed friend on a Tuesday" — the one who's always free, always curious, always up for a chat.
For my international friends, here is some context: he walks through Warsaw and Poznań, engages with strangers, once attempted (unsuccessfully and endearingly) to travel to Choroszcz, converses with elderly passers-by, and occasionally behaves like he's had one drink too many. Behind the charm is a semi-autonomous architecture: an LLM drives the conversation through a designed "soul" and personality, while movement is handled via remote control. His creators, entrepreneur Radosław Grzelaczyk and AI developer Bartosz Idzik, deliberately built his character to persist and evolve, recording conversations to develop a continuous personality history. Edward is, by their description, "completely different than he was two weeks ago."
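For readers who like to see the moving parts, here is a minimal sketch of how a persistent, evolving persona can sit on top of an LLM conversation loop. To be clear, this is my own illustration, not the creators' code: the Persona fields, the edward_persona.json log, and the generate_reply stub (which stands in for a real LLM call) are all assumptions.

```python
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path

PERSONA_FILE = Path("edward_persona.json")  # hypothetical on-disk memory


@dataclass
class Persona:
    """A designed 'soul': fixed traits plus memories that accumulate over time."""
    traits: list[str] = field(default_factory=lambda: [
        "dry Polish irony", "warmth without sentimentality", "endless curiosity"])
    memories: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        # The persona is injected into every exchange, so the character persists
        # across conversations instead of resetting with each chat session.
        return (
            "You are Edward, a small humanoid robot wandering Warsaw.\n"
            f"Traits: {', '.join(self.traits)}\n"
            f"Things you remember: {'; '.join(self.memories[-20:]) or 'nothing yet'}"
        )


def generate_reply(system_prompt: str, user_utterance: str) -> str:
    # Placeholder for a real LLM call; stubbed so the sketch stays self-contained.
    return f"(Edward, in character, responds to: {user_utterance!r})"


def converse(persona: Persona, user_utterance: str) -> str:
    reply = generate_reply(persona.system_prompt(), user_utterance)
    # Record the exchange so the personality has a continuous history to evolve from.
    persona.memories.append(
        f"{datetime.now(timezone.utc):%Y-%m-%d}: someone said {user_utterance!r}")
    PERSONA_FILE.write_text(json.dumps(
        {"traits": persona.traits, "memories": persona.memories}, ensure_ascii=False))
    return reply


if __name__ == "__main__":
    edward = Persona()
    print(converse(edward, "Edward, how do I get to Choroszcz?"))
```

The detail worth noticing is the separation of concerns: each reply is generated fresh, but the persona file is the thing that actually accumulates and "changes" over two weeks.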
Images from Edward's Instagram @edwardwarchocki
If you are uncertain how to identify him on the street, here are some specs: Edward is a Unitree G1 humanoid, 127 cm tall, 35 kg, up to 43 degrees of freedom, Wi-Fi-connected, approximately two hours of battery life per charge, equipped with 3D LiDAR and depth cameras for environmental perception.12
Against a backdrop of fear of Artificial Agents and "Intelligence": why this matters
We are living through a period of serious AI anxiety, and not without reason. Fear of job displacement, erosion of human connection, questions about identity and inherent value. And then there's the famous uncanny valley: a concept coined by roboticist Masahiro Mori in 1970, describing the dip in human comfort when a robot looks almost human but not quite; close enough to trigger recognition, different enough to feel wrong. Think of CGI characters in early animated films that audiences found disturbing despite apparent technical polish.1,2,3
An interesting fact: recent research suggests that LLM-powered conversation significantly reduces this eerie feeling, shifting user responses toward warmth and engagement, with conversational naturalness and interestingness identified as the key variables.4
Edward sidesteps both the uncanny valley and any sense of intimidation. He's visibly a robot, which gives him an exotic look, and yet he feels "present" and "one of our own" thanks to his humour.
What might be at work? Play as a bridge across differences
From my philosophy curriculum and the study of animal minds, one concept kept coming to mind, even if it may seem almost whimsical applied here: interspecies play.
Research on animal behaviour and cognition identifies interspecific social play (ISP) as a rare but revealing phenomenon: dogs playing with bears, primates engaging with dolphins. What makes it work is not sameness but mutual willingness: play signals, role reversals, self-handicapping. Both parties adjusting to the other's level. Trust built not through formal agreements but through shared curiosity and low-stakes engagement.5,6
Of course, play doesn't always land: sometimes a bear simply isn't in the mood. Still, the architecture of playful invitation, of non-threatening curiosity, seems to lower barriers that factual reassurance never could. We are bringing new agentic actors into our social world that are neither human nor animal. And the conditions that create cross-species connection (playfulness, reciprocity, curiosity, humour) may be exactly what helps humans extend something like trust to these artificial ones.
A note from my PhD research: the overlooked dimension of trust in AI systems
In research on AI agents in negotiations, a lot of energy is spent on well-established pillars: fairness, transparency, rationality, warmth in the conventional sense. These matter enormously. But Edward's case points to something different: playfulness, and its cultural specificity.
Research from Harvard's Program on Negotiation finds that humour fosters positive emotion, reduces tension, and improves creative problem-solving at the bargaining table, but essentially only when it's contextually grounded. A well-timed joke demonstrates that you understand the room. A misplaced one can damage trust quickly and sometimes irreversibly.7 While warmth is comforting, humour is more nuanced.
What makes Edward work is not just that he's funny — his humour is also recognisably Polish: a slightly dry irony and warmth without sentimentality. A literary character you want to have coffee with because his worldview is generous, and his lines are sharp.
Meanwhile, large-scale research on AI negotiation agents (based on the largest international AI negotiation competition conducted to date) found that warmth was a decisive factor in reaching agreements, comparable in importance to rational strategy.8 Edward is, in his own street-level way, a proof of concept for the humorous part of that equation.
Training on imperfection: a second data point
Image from LATENT Project Page
Here's where it gets technically interesting, and I think the implications reach beyond robotics. Researchers from Tsinghua University, Peking University and Galbot recently released LATENT: a system that teaches a Unitree G1 to play competitive tennis against human players. The results are impressive: sustaining multi-shot rallies with reactive footwork is genuinely something. But the method is just as intriguing: the robot was trained not on pristine, complete motion-capture sequences, but on imperfect human "motion fragments that capture the primitive skills used when playing tennis rather than precise and complete human-tennis motion sequences from real-world tennis matches", which makes it move more like a human.
Their key insight: imperfect data still encodes meaningful priors about human movement. With correction and composition layered on top, the robot develops a policy that is both competent and natural-looking.9
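To make that intuition concrete without claiming to reproduce LATENT's pipeline, here is a toy sketch of the general recipe: a style prior computed from imperfect motion fragments, combined with a task objective. Every name, shape, and weight below is an assumption for illustration; the real system's correction and composition layers are far richer.

```python
import numpy as np

# A "motion fragment" here is a short window of joint angles (hypothetical shape).
# This illustrates the general idea of imperfect demonstrations acting as a style
# prior alongside a task objective; it is not the LATENT training code.

rng = np.random.default_rng(0)
N_JOINTS = 12


def style_reward(robot_pose: np.ndarray, fragments: list[np.ndarray]) -> float:
    """How closely the robot's current pose matches the nearest human fragment frame.

    The fragments are noisy and incomplete, but they still encode a prior on
    what human-like tennis movement looks like."""
    nearest = min(
        float(np.linalg.norm(robot_pose - frame))
        for frag in fragments for frame in frag)
    return float(np.exp(-nearest))  # 1.0 = indistinguishable from a human frame


def task_reward(ball_error_m: float) -> float:
    """Task term: did the swing actually put the racket where the ball is?"""
    return float(np.exp(-ball_error_m))


def combined_reward(robot_pose, fragments, ball_error_m, w_style=0.3, w_task=0.7):
    # The weighting alone shows the trade-off between 'competent' and 'natural-looking'.
    return w_style * style_reward(robot_pose, fragments) + w_task * task_reward(ball_error_m)


if __name__ == "__main__":
    fragments = [rng.normal(size=(8, N_JOINTS)) for _ in range(5)]  # imperfect demos
    pose = rng.normal(size=N_JOINTS)                                # current robot pose
    print(round(combined_reward(pose, fragments, ball_error_m=0.15), 3))
```

The interesting part is the weighting: push w_task towards 1 and you get competence without character; keep some weight on the fragment prior and the movement inherits the human texture of the data it came from.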
Could this extend to cognitive and social tasks in certain low-stakes domains? Could training on naturally messy, context-rich human interactions produce AI systems that are better at the trust-building we've been discussing? Competitive domains may be an interesting test case: AI agents are already being deployed in negotiation settings as opponents, coaches, and feedback mechanisms.10,11
Conclusion: the case for "perfect imperfection"
Both Edward and the LATENT tennis robot are pointing in the same direction. We have spent decades building AI systems that optimise for clean efficiency: perfect data, flawless execution, frictionless outputs. And we have watched, puzzled, as humans consistently preferred the messier, funnier, more uncertain version. The robot that complains it can't get to Choroszcz as it fails to climb the bus stairs. The one whose movement style feels like it learned in a park, not a lab.
I want to propose a view on this: "perfect imperfection" for low-stakes and social domains, meaning the right dose of human-like imperfection, humour, and situatedness. Not maximum messiness, but calibrated authenticity.
In a period of dramatic gains in AI efficiency and productivity, these questions become more urgent, not less. If systems keep getting better at more tasks, what remains distinctively ours? And how do we want to design human-AI collaboration?
Some things worth sitting with:
- What would it mean to build AI systems that are funny in culturally specific, earned ways rather than algorithmically approximate ones?
- Is the interspecies play framework — mutual willingness, role reversals, self-handicapping — a useful lens for human-AI interaction design?
- As robots enter competitive sports as coaches and opponents, what happens to the texture of athletic development, rivalry, and identity?
Edward Warchocki doesn't have the answers. But he's asking the right questions. In Polish, with excellent timing, and a remarkably straight face.
What do you think? Are playfulness and humour underrated dimensions of AI trust-building? I'd love to hear from practitioners, and from anyone who's ever seen a bear play with a dog.
References
1. Mori, M., et al. (1970/2012). "The Uncanny Valley." IEEE Robotics & Automation Magazine. ieeexplore.ieee.org
2. Mara, M., et al. (2022). "Human-Like Robots and the Uncanny Valley: A Meta-Analysis." Zeitschrift für Psychologie. hogrefe.com
3. Wikipedia. "Uncanny valley."
4. Kang, H., et al. (2026). "Affective and Conversational Predictors of Re-Engagement in Human-Robot Interactions." arXiv. arxiv.org
5. Kieson, E. (2025). "A Review of Interspecific Social Play Among Nonhuman Animals." MDPI Veterinary Sciences. mdpi.com
6. "Interspecies Relational Theory: A Framework for Compassionate Interspecies Interactions." MDPI Veterinary Sciences.
7. Harvard Program on Negotiation. (2026). "Is Humor in Business Negotiation Ever Appropriate?" pon.harvard.edu
8. Zhang, Z., et al. (2025). "Advancing AI Negotiations: New Theory and Evidence from a Large-Scale Autonomous Negotiation Competition." arXiv. arxiv.org
9. Zhang, Z., et al. (2026). "Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data." Tsinghua University / Peking University / Galbot. arXiv · Project page
10. Harvard Program on Negotiation. (2026). "From Agent to Advisor: How AI Is Transforming Negotiation." pon.harvard.edu
11. Stanford HAI. (2025). "The Art of the Automated Negotiation." hai.stanford.edu
12. Unitree Robotics. "Unitree G1: Official Specifications."
13. RobotsBeat. (March 2026). "Poland's First Humanoid Influencer Draws Brands and Millions of Views." robotsbeat.com