At what point does technological advancement become too much? While recent advances in artificial intelligence show great potential for enhancing our everyday lives, they also unveil unsettling truths that demand our attention. In the realm of deepfake technology in particular, rapid progress has sparked controversy as the adverse impacts of misuse begin to emerge.
Deepfake technology is a potent tool, capable of manipulating audio, images, or video to depict entire speeches, scenarios, or interactions that never happened with astonishing realism. While this tool can uplift certain industries, as is currently the case with the entertainment industry, its availability to the public coupled with inadequate safeguarding makes it susceptible to abuse.
The detrimental effects of deepfake misuse are multifaceted, ranging from the amplification of misinformation to causing financial and psychological harm. We will explore the concerning ramifications of this reality-distorting tool on individuals, businesses, and society at large.
1. It Blurs the Lines Between Reality and Illusion
Deepfakes pose a real societal threat to what we perceive to be true. From its early beginnings in the late 90s to its widespread recognition around 2017, deepfake technology has evolved from producing media that was unmistakably modified by computer algorithms to producing results that are nearly indistinguishable from reality.


Figure 1 illustrates the output of a deepfake algorithm with underwhelming realism: even a cursory glance reveals significant artifacts across the faces. In Figure 2, by contrast, there are far fewer imperfections. Without close inspection, a viewer could easily mistake the character in the video for the real Morgan Freeman.
A video could be made of you promoting a scam product, sharing highly offensive viewpoints you do not align with, or performing sexual or illicit acts you never did. These examples all lie within the current abilities of deepfake technology.
Today, it seems that seeing is no longer believing: according to a survey by iProov, 43% of people globally believe they cannot discern a real video from a deepfake. Moreover, deepfakes have yet to gain widespread recognition; another iProov survey found that 71% of people worldwide are unaware of what deepfakes are.
If most people are not aware that content they view online could be synthesized to depict realistic events that never occurred, it renders natural, visual detection even more difficult. As this technology continues to advance, the discernment of real from fake will certainly become more challenging.

Consider also that external influences can hinder people's ability to perceive things objectively. As Greater Good Magazine highlights, our perception of reality can be heavily distorted by several factors, including feelings of hunger or the clarity of a statement. This makes filtering out disinformation online even harder.
Moreover, researchers at the University of Southern California reveal that many are unconcerned about differentiating truth from the misinformation that plagues social media. As the gap narrows between illusion and reality, online impersonation may become increasingly effective and prevalent, exposing many to deepfake attacks on their reputation.
Society is largely unaware of the imminent threat that deepfake abuse poses. While technical and legal solutions are necessary, we will most likely require additional educational support to effectively combat the infiltration of harmful deepfake content online.
2. It is Everywhere and Most of it is Porn
The internet is being flooded with deepfake content, much of which is pornographic in nature. This highlights a serious concern regarding the misuse of deepfake technology against individuals online.
In 2023, Home Security Heroes found that the total number of deepfake videos online was over 95,000, representing a 550% increase over 2019. Shockingly, pornography makes up 98% of all deepfake videos online.
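To make these cited figures concrete, a quick back-of-the-envelope calculation (using the reported 95,000 videos, 550% growth, and 98% share as inputs) shows what they imply about the 2019 baseline and the volume of pornographic deepfakes:

```python
# Back-of-the-envelope check on the Home Security Heroes figures.
# A 550% increase means the 2023 count is 6.5x the 2019 count.
videos_2023 = 95_000
increase_pct = 550

videos_2019 = videos_2023 / (1 + increase_pct / 100)
print(round(videos_2019))  # roughly 14,615 videos in 2019

porn_share = 0.98
print(round(videos_2023 * porn_share))  # roughly 93,100 pornographic deepfakes
```

In other words, the absolute numbers grew by tens of thousands of videos in just four years, and all but a sliver of that growth was pornographic content.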
These statistics paint the picture that deepfake tools are already being used as a new kind of digital weapon. Not only do such tools give way to more online harassment, but they also increase the threat of reputation damage to high-profile individuals and personal brands alike.
In late January 2024, Taylor Swift became a target of deepfake abuse when AI-generated sexually explicit images of her went viral across several social media platforms, in particular X (formerly Twitter). The incident drew media attention and became a source of outrage, sparking conversation on the controversial role of deepfakes in society.
In response to the flood of non-consensual deepfake images, X moved to take them down and blocked related searches of her name until the viral wave passed. Even with the media coverage and Taylor Swift's influence, X struggled to act promptly: one of the shared images amassed 47 million views before being removed, according to The New York Times.
This raises the question: if this was the response for a celebrity, what does it mean for lesser-known individuals? In all likelihood, they will not receive the same level of attention or urgency as high-profile personalities, leaving them more vulnerable to the damaging effects of deepfake abuse.

For those subjected to the misuse of deepfake technology, it can have several serious implications. Internet personalities risk losing partnerships as well as trust across their fan base, which can both lead to loss of revenue.
For certain professionals requiring a clean and reputable online presence, their job security could be put in jeopardy. On a more personal level, it can mean emotional distress from public embarrassment or even damage to one’s close relationships.
The rampant misuse of deepfake technology, particularly in the creation of non-consensual pornographic material, underlines the dangers of powerful tools falling into the wrong hands. Establishing firm guardrails, likely through regulatory measures, is vital to mitigate the abuse of deepfake technology.
3. Anyone Can Easily Create and Upload it
Today, the barrier to entry for manipulating anyone's likeness is worryingly low. The internet has never made it easier to distribute media, resources, and tools. It should therefore come as no surprise that deepfake tools are abundantly available, whether freely open to everyone or hidden behind a paywall.
Where there is a will, there is more than one way for anyone to create synthetic content with ease from the comfort of their home. But tools are just tools until they are backed by malicious intent.

Consider the ease with which anybody can publicly accuse you online of acts you never committed. However, such accusations hold little weight without any kind of tangible, credible evidence.
Rather than mere accusations, picture a scenario where deepfake tools are used to fabricate an event, seamlessly manipulating your face and voice.
For many around the world, it is no longer a possibility but a harsh reality. Many popular content creators on social media platforms have already been subjected to the abuse of these tools by anonymous bad actors.
Moreover, these tools do not require much time or effort to collect the samples needed to replicate someone's voice or face. Forbes explains how it can be done on several free sites in as little as 30 seconds with a single sample.
For the best results, hundreds more samples are required. An alternative to using existing AI models is to train your own, though that is no easy endeavor. After extensive analysis, SCIP found that roughly 500 samples fed into a deepfake algorithm are enough to produce the best results.
A typical video consists of 24 to 30 frames per second, so one hour of content at 24 frames per second amounts to 86,400 images of your face. Output quality can be further improved with varied lighting, different angles, and higher resolution. For public figures with hours of content, there are already potentially hundreds of thousands of data samples circulating online, ready to be repurposed for a highly convincing impersonation attempt.
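The arithmetic above can be spelled out directly, showing how far one hour of footage exceeds the roughly 500 samples reportedly needed for top-quality results:

```python
# How many potential face samples does one hour of video yield?
fps = 24                 # typical frame rate for film-style content
seconds_per_hour = 3600

frames_per_hour = fps * seconds_per_hour
print(frames_per_hour)   # 86400 frames, each a potential training sample

# Compare with the ~500 samples reportedly needed for high-quality output
samples_needed = 500
print(frames_per_hour / samples_needed)  # one hour covers that many times over
```

Even a single hour of footage provides the required sample count more than 170 times over, which is why public figures are such easy targets.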
In any case, the availability of information and tools online means that there is no longer a high barrier to entry for the creation of harmful content against your online reputation.
4. Social Media Platforms Lack Incentives to Prevent it
Content moderation has been a largely neglected problem for social media platforms. YouTube alone has more than 500 hours of content uploaded to its site every minute (as of February 2022). With such a colossal volume of uploads, is it acceptable to trivialize moderation?
According to Section 230 of the Communications Decency Act of 1996, it is acceptable—for the moment. The act provides immunity for online platforms from civil liability based on third-party content and for the removal of content in certain circumstances.
However, given the drastic development and evolving use of the internet since 1996, the U.S. Department of Justice has since put it under review. So, we may soon see a shift in content regulation.
Given that Section 230 immunizes platforms from legal repercussions for abusive or illicit user-generated content, one may question the rationale behind content moderation efforts. As with many business decisions, it ultimately revolves around maximizing profits.
Most, if not all, platforms earn the majority of their revenue from advertisers, and an advertiser such as Nike would want to avoid associating its brand with a platform that mainly hosts offensive or graphic content.
Despite moderation efforts, deepfake media continues to circulate on these platforms. It is unlikely that platforms will firmly clamp down on the growing issue of deepfake misuse unless there is a significant legal, financial, or social push.
However, in February 2021, a coalition was formed between some of today’s major players in the technology sector, such as Adobe, Intel, and Microsoft. It is known as the Coalition for Content Provenance and Authenticity (C2PA). Its aim is to provide a way to easily determine the original source of a piece of media online.
OpenAI has announced that Sora, its recently released AI model that generates realistic video from text-based instructions, will make use of the C2PA metadata standard.
While this is a step in the right direction, the standard is only as useful as its adoption is widespread. If media platforms fail to embrace it, its benefits will remain unseen and the worsening problem of content moderation will prevail.
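To illustrate the core idea behind content provenance, here is a minimal sketch. This is not the actual C2PA format (the real standard embeds cryptographically signed manifests directly in the media file); the manifest fields and the HMAC-based signature below are illustrative stand-ins for real public-key signing:

```python
import hashlib
import hmac
import json

# Illustrative stand-in for a publisher's real private signing key
SIGNING_KEY = b"publisher-secret"

def make_manifest(media: bytes, source: str) -> dict:
    """Bind a signed provenance claim to a hash of the media bytes."""
    claim = {"source": source, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Check both the signature and that the media bytes are unmodified."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["sha256"] == hashlib.sha256(media).hexdigest())

video = b"...original video bytes..."
manifest = make_manifest(video, source="example-studio")
print(verify_manifest(video, manifest))         # True: media untampered
print(verify_manifest(video + b"x", manifest))  # False: bytes were altered
```

The design point is that any edit to the media invalidates the bound claim, so a viewer (or platform) can mechanically distinguish "published by the claimed source, unmodified" from everything else.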
5. It Can Hurt Your Bottom Line
Deepfake technology possesses the power to negatively affect your revenue streams by deceiving your audience through impersonation. With 43% of people globally admitting they believe they cannot distinguish a deepfake from a real video, there is strong potential for people to fall victim to realistic deepfake impersonation attempts.
Deepfake impersonation aside, several other methods of impersonation can hurt revenue streams. Across social media platforms, the blue checkmark has long represented the authenticity of an individual. That was until Elon Musk, who had acquired Twitter (now X), repurposed it in November 2022.
For just $8 per month, anybody could buy themselves a verification stamp, and the feature was, of course, abused. Several companies and personalities became targets of impersonation, with many fake accounts tweeting outrageous things.
Eli Lilly's share price fell 4.5% the day after an imposter account promised free insulin. The damage also extended to its rivals, according to Investor's Business Daily, with the stocks of Novo Nordisk and Sanofi taking 3.5% and 3.4% hits respectively.
A simple change in the meaning of a blue symbol we associate with ‘the real person’ caused PR mayhem and financial turmoil for victims of impersonation.

So, how do blue-checkmarked tweets relate to deepfakes? They shed light on the deceptive potential of impersonations through seemingly legitimate channels.
If a misspelled X username, a few lines of text, and a blue checkmark can substantially impact the stocks of multiple companies overnight, what could a realistic deepfake video do? After all, a video is arguably a more convincing representation of the real you than a small, blue icon.
Depending on the platform, people do not even need to see a verification checkmark to believe the content comes from the real you, because of how content is shared and reposted online. TechCrunch sheds light on how deepfake scams involving content creators, like MrBeast, have already infiltrated social media in this way.
Recent live deepfake scams, involving voice or video calls, further highlight the technology's alarming potential. In February of this year, a finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake tools on a video call.
Deepfake audio has also proven to be incredibly effective for imposters. It certainly was for another fraudster who received a payout of roughly $243,000 from the CEO of a UK energy firm after a phone call.
Deepfake-based impersonation presents a graver threat than other methods, and it will only worsen as deepfake tools advance and become more widely available.
Conclusion
Amidst the AI hype, we must stop and evaluate the negative impacts of developing such influential tools. Undoubtedly, deepfake technology is powerful and can be used for good. But when there are few guardrails or consequences for misusing a free tool, it lends itself to being easily abused.
With little barrier to entry, it presents fraudsters with an enticing new method of impersonating someone for profit, and, more dangerously, it fuels online harassment, all while leaving victims psychologically and financially harmed.
Eventually, we must ask ourselves, can any benefit of deepfake use outweigh the current societal consequences of deepfake abuse? Ultimately, the answer depends on the balance between technological progress and our capacity to safeguard societal well-being.