The Ethics of AI Influencers: Authenticity, Transparency, and Brand Responsibility

Think about why you connect with your favorite influencers. Maybe it’s their humor, their candid moments, or how real they are. But what happens to authenticity when your favorite “person” online isn’t a person at all? Despite being entirely digital, AI influencers such as Lil Miquela and Shudu are now reshaping the influencer landscape. As we enter a world where influencers don’t have to be human, major ethical questions arise. How authentic is the message when the influencer isn't real? What responsibility do brands and AI creators have to their audiences? And, perhaps the biggest question: can we hold AI influencers ethically accountable if they don’t actually exist?

Image sourced from AdWeek

Virtual Faces and Spaces

AI influencers are digital personas created by artists or backed by major brands. These virtual figures have flawless designs and carefully curated personalities, allowing them to function much like real human influencers. They post lifestyle content, promote brands, and engage with their audiences, contributing to an industry worth over $20 billion. Lil Miquela, for example, is an AI-created Brazilian-American model with a complicated backstory. Her persona comes complete with a love for fashion, music, and activism. Shudu, another virtual influencer, portrays a high-fashion Black model, designed by a white creator, Cameron-James Wilson, to represent “idealized beauty” in digital form.

AI influencers have a unique appeal for brands because they are controllable and risk-free, without the complications that human influencers can bring. They don’t age, make mistakes, or get involved in scandals. However, this allure raises concerns about authenticity and transparency, particularly for the audiences who connect with these figures and the brands that promote them.

Ethical Concerns: Authenticity and Transparency

Do followers know these influencers aren’t real? Should they have to be told? Authenticity is critical to the influencer economy. Many followers develop a sense of connection with influencers, believing they are receiving genuine opinions, recommendations, and glimpses into someone’s life. With AI influencers, however, it’s all a carefully scripted performance. For example, Lil Miquela once posted about being “hacked” as part of a PR storyline. While some fans were fascinated by this digital drama, others were uncomfortable with the manipulation. On a separate occasion, the influencer even spoke about her “sexual assault encounter” in an online vlog. The post drew intense backlash online, prompting megastar singer Kehlani to comment on Twitter: “@lilmiquela you're playing wit real stories... real trauma.”

Similarly, Shudu’s rise to fame sparked debates about cultural representation and appropriation. She is presented as a Black model representing beauty and luxury fashion, yet she is a digital creation by a white artist. For some, this raises uncomfortable questions. If Shudu embodies Black beauty without a real Black person behind her, can it be considered genuine representation, or is it exploitation? Shudu has been featured in campaigns with brands like Balmain, Ellesse, and Fenty Beauty. However, using an AI model as a stand-in for real representation can seem inauthentic or even exploitative, especially when no actual people of color benefit from it. Using AI influencers to appear more inclusive can come across as a superficial approach to diversity. What does it say about a company or brand if it needs an AI influencer to make a campaign more diverse?

Image sourced from Sprout Social

Brand and Consumer Guidelines

As the presence of AI influencers continues to grow, regulations are beginning to catch up. However, the regulatory landscape is still in its early stages, leaving many ethical concerns unaddressed. Current influencer guidelines, like disclosing paid promotions, are aimed at human influencers. But should AI influencers also disclose their involvement in paid promotions? Should brands clearly label AI content to avoid misleading followers?

In some cases, brands have taken steps toward transparency. For instance, Lil Miquela’s account openly labels her as a “robot” in her bio, indicating to followers that she’s not human. However, this practice is not standardized.

Where Do We Draw the Line for the Future?

As AI influencers become more sophisticated, they’re likely here to stay. This means we need to adapt ethically, thoughtfully, and quickly. Brands considering AI influencers should prioritize transparency by disclosing that these figures are entirely digital and that their posts are paid promotions. At the same time, AI personas that promote unrealistic standards or manipulate follower relationships can quickly erode consumer trust in a brand, especially since many of these influencers appear flawless and idealized, setting a standard that could harm followers’ self-image and mental health. Brands must proceed with caution when deciding whether to use AI influencers.

As AI technology continues to evolve, brands, creators, and followers must establish clear boundaries and hold each other accountable for promoting responsible and transparent digital interactions.


About Piper

Piper is a Senior pursuing a degree in Advertising and Public Relations. This is her second year in GrandPR, where she serves as the Media Coordinator as well as an Account Associate. Piper is also currently a Public Relations and Social Media intern at SpartanNash. After graduation, Piper hopes to further her career in social media and eventually work as a publicist in the sports or entertainment industry.
