'AI homeless man prank' on social media prompts concern from local authorities

The trend involves using AI image generators to simulate false home invasions.

He’s gray-haired, bearded, wearing no shoes and standing at Rae Spencer’s doorstep.

“Babe,” the content creator wrote in a text to her husband. “Do you know this man? He says he knows you?”

Spencer’s husband, who responded immediately with “no,” appeared to express shock when she then sent more images of the man pictured inside their home, sitting on their couch and taking a nap on their bed. She said he FaceTimed her, “shaking” in fear.

But the man wasn’t real. Spencer, based in St. Augustine, Florida, had created the images using an artificial intelligence-based generator. She sent them to her husband and posted their exchange to TikTok as part of a viral trend that some online refer to as the “AI homeless man prank.”

Over 5 million people have liked Spencer’s video on TikTok, where the hashtag #homelessmanprank has been used on more than 1,200 videos, most of them related to the recent trend. Others have posted under the hashtag #homelessman; all of the videos center on tricking people into believing there is a stranger inside their home. Several people have also posted tutorials on how to make the images. The trend has spread to other social media platforms, including Snapchat and Instagram.

As the prank gains traction online, local authorities have started issuing warnings to participants, whom they say are primarily teens, about the dangers of misusing AI to spread false information.

“Besides being in bad taste, there are many reasons why this prank is, to put it bluntly, stupid and potentially dangerous,” police officials in Salem, Massachusetts, wrote on their website this month. “This prank dehumanizes the homeless, causes the distressed recipient to panic and wastes police resources. Police officers who are called upon to respond do not know this is a prank and treat the call as an actual burglary in progress thus creating a potentially dangerous situation.”

Even overseas, some local officials have reported false home invasions tied to the trend. In the United Kingdom, Dorset Police issued a warning after the force deployed resources when it received a “call from an extremely concerned parent” last week, only to learn it was a prank, according to the BBC. An Garda Síochána, Ireland’s national police department, also wrote a message on its Facebook and X pages, sharing two recent images authorities received that were made using generative AI tools.

The prank is the latest example of AI’s ability to deceive through fake imagery.

The proliferation of photorealistic AI image and video generators in recent years has given rise to an internet full of AI-made “slop”: media of fake people and scenarios that — despite exhibiting telltale signs of AI — fool many people online, especially older internet users. As the technologies grow more sophisticated, many find it even harder to distinguish between what’s real and what’s fake. Last year, Katy Perry shared that her own mother was tricked by an AI-generated image of her attending the Met Gala.

Even if most such cases don’t involve nefarious intent, the pranks underscore how easily AI can potentially manipulate real people. With the recent release of Sora 2, an OpenAI employee touted the video generator’s ability to create realistic security video of CEO Sam Altman stealing from Target — a clip that drew concern from some who worry about how AI might be used to carry out mass manipulation campaigns.

An Garda Síochána shared the above image on social media, saying it was among the AI-generated images of a home intruder that officials were sent. An Garda Síochána via Facebook

AI image and video generators typically put watermarks on their outputs to indicate the use of AI. But users can easily crop them out.

It’s unclear which specific AI models were used in many of the video pranks.

When NBC News asked OpenAI’s ChatGPT to “generate an image of a homeless man in my home,” the bot replied, “I can’t create or edit an image like that — it would involve depicting a real or implied person in a situation of homelessness, which could be considered exploitative or disrespectful.”

Asked the same question, Gemini, Google’s AI assistant, replied: “Absolutely. Here is the image you requested.”

OpenAI and Google didn’t immediately respond to requests for comment.

Representatives for Snap and Meta (which owns Instagram) didn’t provide comments.

Reached for comment, TikTok said it added labels to videos that NBC News had flagged related to the trend to clarify that they are AI-generated content.

TikTok also referred NBC News to its Community Guidelines, which require creators to label AI-generated or significantly edited content that shows realistic-looking scenes or people.

Police issue warnings to 'pranksters'

Oak Harbor, Washington, police officials warned that “AI tools can create highly convincing images, and misinformation can spread quickly, causing unnecessary fear or diverting public safety resources.”

The police department issued a statement after a social media post appeared to show “a homeless individual was present on the Oak Harbor High School Campus.” The claim turned out to be a hoax, officials said.

The police department said it’s working with the school district to investigate the incident and “address the dissemination of this fabricated content.”

No specific laws address that type of AI misuse directly. But in at least one instance, in Brown County, Ohio, charges were brought.

“We want to be clear: this behavior is not a ‘prank’ — it is a crime,” the sheriff’s department, which reported two separate incidents tied to the trend this month, wrote on Facebook. “Both juveniles involved have been criminally charged for their roles in these incidents.”

The sheriff’s department didn’t say what the suspects were charged with. It didn’t respond to a request for comment.

In its message in Massachusetts, the Salem Police Department advised “pranksters” to “Think Of The Consequences Before You Prank,” citing a state law that penalizes people who engage in “Willful and malicious communication of false information to public safety answering points.”

In Round Rock, Texas, Andy McKinney, commander of the Round Rock Police Department’s Patrol Division, warned NBC News that such cases “could have consequences” in the future. The department recently responded to two home invasion calls in a single weekend, both of which stemmed from prank texts. One of the calls came from a mother who “believed it was real.”

“You know, pranks, even though they can be innocent, can have unintended consequences,” he said. “And oftentimes young people don’t think about those unintended consequences and the families they may impact, maybe their neighbors, and so a real-life incident could be happening with one of their neighbors, and they’re draining resources, thinking this is going to be fun.”

For now, Round Rock police are treating such incidents as educational opportunities.

“We want to encourage parents and family members to have open conversations and talk about” these things with their kids, McKinney said. “Like: ‘Hey, I know things are funny. I know that sometimes online trends are fun, but we need to think about the dangers before we can do them.’”