When Mia Zelu “attended” Wimbledon this year, her selfies from the stands, sunny captions, and on-trend tennis-core fits garnered thousands of likes and fan reactions. The twist? She doesn't exist.

Mia Zelu is an AI-generated influencer, synthetic from pixels to personality. And yet she navigated the online world like any other micro-influencer, earning real engagement, press coverage, and a fan base that responded to her content as if it came from a living person.

Mia Zelu is not human, but her influence is real.

We asked three experts from the D'Amore-McKim School of Business to unpack the strange new territory that Mia's viral moment represents—from the evolution of digital authenticity to the ethics of synthetic storytelling.

“What's fascinating,” says Professor Yakov Bart, “is that even when consumers are aware that an influencer is AI-generated, many still continue interacting with them. It's a willingness to suspend disbelief in exchange for highly entertaining experiences.”

Bart compares the phenomenon to product placement with fictional movie characters, except that these personas can now engage in real time. “AI enables immersive storytelling in ways we haven't experienced before. But the line between immersive and deceptive needs to be both visible and enforceable.”

So why did people feel like Mia was real?

According to Assistant Professor Nirajana Mishra, the answer lies in our very human brains. “People don't need influencers to be biologically human to feel authentic. We connect with consistent storytelling, appealing personalities, and human-like behavior. That's what Mia delivered.”

Mishra describes this as an era in which machine-generated content shapes how we define authenticity. “Sometimes AI can seem so believable that it feels just as authentic as something made by a real person. We have reached a point where a believable AI presence can be just as compelling as biological reality. That's both fascinating and disturbing.”

Where do we draw the ethical line?

Associate Teaching Professor Alex DePaoli believes context matters. “There's a difference between storytelling and simulation. If AI-generated personas are clearly disclosed, they can be a new form of entertainment, like pro wrestling. Fans know it's scripted, but still find it compelling.”

DePaoli sees a future where AI influencers are categorized differently: “They might not replace human influencers but carve out their own niche—performers in a digital theater.”

However, DePaoli warns: “If audiences don't know they're interacting with AI, we've crossed into deception. Disclosure is critical.”

Mia Zelu isn't just a viral anomaly; she's a sign of what's next. Regulation is already in motion. Under the EU's AI Act, avatars that could influence vulnerable groups are classified as high-risk, setting the stage for tighter oversight and potential consequences.

At the same time, the definition of authenticity is shifting. As Mishra puts it, we're moving from “being human” to “being believable.” Consistency and connection matter more than biological reality.

The challenge now? Making sure audiences know what's real and what's not, before it all starts to feel the same.