Introduction
In 2015, only 10% of organizations used or had plans to implement AI technology. Today, over 80% of companies consider AI a top priority in their business strategies. With AI now being adopted en masse in the workplace, in our personal lives, and across the internet, it’s clear that we’re entering a golden era of artificial intelligence.
There are many potentially positive applications of AI, including in healthcare, finance, and law. Large language models can alleviate a lot of administrative burden and free up human time for more complex and creative tasks. But just as easily as they can improve our lives, these tools can be abused by malicious actors to ruin the lives of countless innocent victims.
It’s important to keep in mind that AI is simply a tool that serves humans and is not a conscious entity (well, not yet anyway). Like all modern technology, these models can be used for good or ill depending on whose hands they fall into. In the hands of a research scientist, machine learning could be the key to curing cancer by analyzing the patterns in our genes. In the hands of a cyber-criminal, AI could become a weapon of mass destruction. No matter how hard we try to sanitize datasets, place guardrails in the training process, and regulate this technology, there will always be ways for people to take advantage of such powerful tools.
As of 2024, AI can produce content that is nearly impossible to distinguish from real human content. Not only can AI mimic and analyze human behavior, but it can also manipulate human behavior in dangerous ways. This is less far-fetched than it sounds, because humans are vulnerable in ways that machines aren’t: we are easily swayed by emotion, we act rashly out of fear, and we are prone to unconscious biases.
In just the last few years, there’s been a huge spike in hyper-realistic “deepfakes” all over the internet, many of them posted with malicious intent. In this article, we’ll go through some of the dangerous ways that AI-generated content can be used to manipulate human behavior by mirroring our reality.
1. Disinformation
Disinformation is false information deliberately spread to deceive people into believing things that aren’t true. Unfortunately, fake news has been around for a long while. What’s new is that, with the rise of deepfakes, false claims can now be supported by extremely realistic images, videos, or audio. The consequences can be severe, from inciting online outrage to triggering real-world incidents.
One infamous example of fake news leading to real-world violence is the 2016 Pizzagate incident, in which a man opened fire at a Washington, D.C. pizzeria after believing false online claims that it was hiding a child trafficking ring. Disinformation is especially dangerous now that information spreads so rapidly on social media and false evidence is easier to forge than ever before.
It’s a lot harder now to tell what’s real, making it difficult to trust anything you read online. While a healthy dose of skepticism can sharpen our critical thinking, this erosion of trust becomes dangerous in emergencies, when people dismiss warnings about real disasters as fakes. Like the story of the boy who cried wolf, people are less likely to believe the truth after being deceived several times.
2. Political Propaganda
Political figures and candidates are prime targets of deepfakes because of their power and influence. By mimicking a powerful authority figure, such as a presidential candidate, deepfakes can influence public opinion, manipulate voter behavior, and sway elections. Recent political deepfakes have depicted figures such as Moldova’s president, Slovakia’s liberal party leader, and an opposition lawmaker in Bangladesh, all aiming to undermine their political standing and public favor.
Software companies have tried to prevent users from abusing their AI tools, but current measures appear ineffective. A recent study published by the Center for Countering Digital Hate found that anyone can generate misleading images to support false political claims on some of the most popular AI image generation tools, including ChatGPT, Midjourney, Microsoft’s Image Creator, and DreamStudio, despite each having policies that prohibit doing so. With some simple prompt engineering, virtually anyone can bypass the guardrails and safety training these models ship with.
With elections taking place around the world in 2024, we will likely see far more fake political propaganda, along with increased government crackdowns on election-related deepfakes.

3. Financial Market Manipulation
Deepfakes make it possible to manipulate the stock market by inciting panic and outrage online. Stock markets are especially sensitive to news of global crises and geopolitical unrest. One notable example from 2023 is a fake image of an explosion near the Pentagon, the headquarters of the US Department of Defense, which quickly went viral and led to a brief 0.3% drop in the S&P 500 before it was debunked.
Though the market tends to eventually recover from short dips and spikes, coordinated attacks of AI-backed disinformation have the potential to cause lasting harm by eroding global trust and increasing market volatility. This highlights the importance of verifying your sources prior to making any big decisions, financial or otherwise, based on internet rumors.
There has also been a growing number of investment scams that use deepfakes of prominent business and finance figures to promote certain stocks. Recent targets include Elon Musk, former ASX CEO Dominic Stevens, and financial commentator Peter Switzer. According to Australia’s National Anti-Scam Centre, investment scams are the most financially devastating type of scam, accounting for $1.3 billion in losses in 2023.
4. Identity Theft
Are biometric login credentials still a secure way to protect what belongs to you? Recent reports describe cyber-criminals in the Asia-Pacific region collecting facial recognition data, identity documents, and intercepted text messages through malware on mobile devices. The attackers then use the collected data, combined with deepfake technology, to gain access to victims’ bank accounts.
Cyber-criminals have built far more sophisticated tools, and it’s now a race for financial institutions to stay ahead of these new threats. Chances are your banking app still relies on voice or facial recognition to keep your information safe. But biometric information, such as your face or voice, can be replicated easily with AI tools, leaving you vulnerable to data thieves.
On the bright side, you can protect yourself by using multi-factor authentication, making sure your passwords are secure, and being mindful of what you share online.
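For the technically curious, here’s a minimal sketch of how the one-time codes behind most authenticator apps work, using the open-source pyotp library. The secret and code below are illustrative only, not tied to any particular bank or app:

```python
# Minimal TOTP (time-based one-time password) sketch using pyotp
# (pip install pyotp). This is the mechanism behind most authenticator apps.
import pyotp

secret = pyotp.random_base32()  # shared once between the server and your device
totp = pyotp.TOTP(secret)

code = totp.now()                      # 6-digit code that rotates every 30 seconds
print("Current code:", code)
print("Accepted?", totp.verify(code))  # what the server checks at login
```

The point of the design: even if an attacker clones your face or voice with AI, they still need a code that changes every 30 seconds and never leaves your device.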
5. Social Engineering
Social engineering is a technique that utilizes psychological manipulation to gain access to a system, steal personal information, or coerce you into performing certain actions. The most common example of a social engineering attack is phishing, where attackers will attempt to steal your data by pretending to be a trustworthy person or organization.
In social engineering attacks, malicious actors first gain your trust, then strike when your guard is down. These attacks can come in many forms, such as emails, voice messages, or video calls. With the advent of deepfakes, it is now possible for scammers to pose as a friend on a video call and even replicate their voice. One man in China fell victim to one such scam and lost 4.3 million yuan, approximately $622,000 USD.
Another well-known case is the Hong Kong CFO scam, in which an employee at a large multinational company was conned into paying $25 million USD to scammers masquerading as the company’s chief financial officer and several colleagues on a video conference call. Needless to say, deepfake technology has made phishing scams terrifyingly effective.
A tell-tale sign that you might be the target of a social engineering attack is a demand for a large sum of money, often paired with a sense of urgency. To protect yourself from these scams, always verify the request through a second communication channel or seek a second opinion from someone you trust.

6. Cyberbullying, Sexual Harassment, and Blackmail
This may or may not come as a surprise to you, but the most common use of deepfake technology is to create pornographic content. According to a 2023 study, an estimated 98% of deepfakes are pornographic in nature, and 99% of those target women. The FBI even issued a warning in 2023 about an “uptick” in sextortion and blackmail schemes using fake images and videos created from victims’ social media posts.
One of the most notable recent cases occurred in January 2024, when sexually explicit deepfake images of pop singer Taylor Swift went viral on X, formerly known as Twitter. One image was viewed 47 million times before it was taken down.
It’s not just celebrities who fall victim to deepfake pornography; AI-powered cyberbullying has also become increasingly common in schools worldwide, with boys as young as 14 or 15 using this technology to bully their female classmates.
When pornographic content is intentionally shared to harass and humiliate young women, it can lead to psychological distress, damaged reputations, and compromised physical safety. The enduring nature of online content means that the consequences can be long-lasting, leaving the victims feeling helpless as they have no control over the spread or permanence of the images or videos.
If you or someone you know is a victim of AI-enabled sexual harassment, please reach out to your local authorities and seek legal advice in your region.
7. Character Assassination
On a personal level, misleading deepfake content can cause irreparable damage to your reputation. Anyone with a grudge against you can abuse AI technology to ruin your relationships or career. Journalists at major news outlets such as CNN, CBS, and the BBC have also been featured in fake videos designed to undermine their credibility and the public’s trust.
Victims of character assassination face challenges such as proving the content is fake, erasing all traces of it online, and preventing the perpetrator from reposting the defamatory material.
At Mirror Mirror, our goal is to help you protect yourself from attacks against your reputation by proactively searching for deepfakes using your face on your behalf. If you are a victim of AI-powered character assassination and want to take back control of your online integrity, you can contact us or read more about us here.
8. Scam Ads
If you’ve been on social media recently, you’ve probably come across at least a few AI-generated ads, perhaps without even realizing it. Many of these fraudulent ads are effective because they align with what the impersonated figures are already known for, such as MrBeast announcing a giveaway or Elon Musk promoting cryptocurrencies.
Even the identities of public-facing doctors have been stolen to create fake videos endorsing questionable health products on TikTok. This deceptive marketing can have dangerous consequences, especially if the products cause negative health outcomes or major financial loss.
If you happen to see an advertisement with an offer that seems too good to be true, remember to seek reviews of that product or company to see if it is legitimate. Alternatively, you can run a suspicious image through a deepfake detection tool, as sketched below.
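For readers comfortable with a little code, here’s a minimal sketch of what running a detector might look like using the Hugging Face transformers library. The model name is a placeholder, not a recommendation; substitute any image-classification model trained to separate real photos from AI-generated ones:

```python
# Minimal sketch: classify an image as real vs. AI-generated using an
# off-the-shelf image-classification model (pip install transformers torch).
from transformers import pipeline

# Placeholder model id -- swap in a real detector from the model hub.
detector = pipeline("image-classification", model="your-org/ai-image-detector")

results = detector("suspicious_ad.jpg")  # local file path or image URL
for result in results:
    print(f"{result['label']}: {result['score']:.2%}")

# Detectors lag behind generators, so treat a high "fake" score as a
# signal to investigate further, not as definitive proof.
```

No detector is perfect, so a “real” verdict shouldn’t override common sense about offers that look too good to be true.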

Conclusion
From disinformation campaigns and political propaganda to financial market manipulation and identity theft, the malicious use of AI-generated content poses serious risks to individuals and society. As the AI landscape continues to evolve rapidly, the best way to protect ourselves is to stay informed about the latest advancements and vulnerabilities in AI technology. Older adults are among the most vulnerable to disinformation and online scams, so it is especially important to warn your loved ones of the dangers.
Ultimately, the responsibility to mitigate the dangers of AI-generated content lies with both individuals and organizations. By promoting digital literacy, enforcing stricter regulations, and developing more robust security measures, we can work towards harnessing the power of AI for good while minimizing its potential for harm.