Over the past couple of years, AI has garnered a lot of mainstream attention as AI-powered tools have grown more advanced. This is particularly true for technologies like voice cloning, deepfakes, and mimicry, which were considered controversial even when they were still theoretical. These tools have numerous practical uses: voice cloning can personalize virtual assistants, recreate the voices of people with speech impairments, and add immersion to games and other entertainment experiences, while deepfakes serve many of the same functions and are also used in medicine to generate realistic medical images. Despite this, these technologies are best known for their nefarious uses.

The rise in public interest in AI has also led to an increase in conversations around the ethics and implications of various AI technologies. Because AI as we experience it now is still very new, there are few standards or regulations governing the ethical use of AI-powered tools. The closest we have is the Preventing Deepfakes of Intimate Images Act, a proposed bill that would outlaw the non-consensual sharing of digitally altered intimate images. To use these AI-powered technologies in a way that maximizes their potential for good, we first have to understand their ethics and social implications.

The ethics of voice cloning and mimicry

In March, OpenAI started offering limited access to its voice cloning tool, which can create a synthetic replica of a person’s voice from only a 15-second clip. According to OpenAI, the tool could be used to support non-verbal people, especially those with speech disabilities, provide reading assistance to non-readers and children, and help patients with degenerative speech conditions recover their voice. While these all seem like very useful applications of OpenAI’s tool, there were reservations about its potential misuse, including from OpenAI itself. In a blog post, the company revealed that although the tool had been developed in 2022, its capabilities were kept out of public view to minimize the risk of exploitation by bad-faith actors.

Voice cloning is one of the AI-powered technologies seeing an increase in interest and usage following the mainstream fascination with AI. The term refers to the use of AI to create an artificial copy of a person’s voice. According to Valuates Reports, the global voice cloning market is expected to grow from 461 million dollars in 2023 to over a billion dollars by 2029. The use cases of the technology are also vast, from personalized virtual assistants and customer service chatbots to mimicking the voice and speech patterns of people with speech disorders.

The potential for misuse is a major concern with voice cloning. Already, there has been an increase in scams that use voice cloning to target people’s loved ones. In one instance, a couple was scammed into paying ransom for a kidnapping that never happened. In others, voice cloning has been used to spread misinformation and political propaganda, such as a fake robocall that used Joe Biden’s voice to urge people not to vote. These events highlight the importance of using voice cloning responsibly and approaching it ethically. The importance of consent cannot be overstated, especially when it comes to the data used for voice cloning.

Deepfakes and their ethics

Deepfakes have been around almost as long as YouTube. The video-sharing platform has been home to thousands of parody videos that use deepfakes of celebrities and other notable people. Channels like Epic Rap Battles of History use the technology to create replicas of everyone from Abraham Lincoln to Jeff Bezos and parody them. The same is done with popular movies and TV shows, with one video replacing Elon Musk’s face with Dave Bowman’s in a parody of 2001: A Space Odyssey. These videos were usually lighthearted and almost always comedic until deepfakes started getting very accurate.

Deepfakes are essentially AI-generated replicas of a person’s likeness, usually in video format. Like many other AI technologies, deepfakes have a long list of practical uses. They can bring historical people and events to life, allowing us to experience them in a more interactive way. The technology can also be used in educational settings, especially for children or students with disabilities. Deepfakes are already being used to train nursing and medical students, helping them gain experience without real-life consequences.

As deepfakes get more realistic, so does their potential for harm. The term “deepfake” actually comes from a Reddit user who used the technology to put the faces of female celebrities, including Scarlett Johansson and Maisie Williams, on the bodies of women in pornographic videos. This highlights the dangers of deepfake technology in the hands of malicious people. Deepfakes have also been used in political attacks, as in the case of a viral video of Vice President Kamala Harris speaking gibberish that turned out to be a deepfake.

Because of all the ways deepfakes can be used to attack both individuals and society at large, an ethical approach to the technology is essential. We already know they can be used for identity theft, to circulate misinformation, and to perpetrate fraud. It is therefore important to make sure the public is aware of the ways deepfakes can be used maliciously. Companies that develop deepfake-creation software also have a responsibility to ensure that their technology is being used for good.

Conclusion

The recent rise in public interest in AI has helped showcase the ways AI can be used for good across many fields. From AI-powered tools that help doctors diagnose and treat cancer to machine learning algorithms that analyze financial data to identify patterns and trends, there are numerous ways to use AI for both individual and societal benefit. However, AI is also being used to cause harm. To prevent and mitigate this, companies producing AI tools and software must ensure their tools are used responsibly, and the public must be made aware of scams and other fraudulent activities that can be carried out using deepfakes and voice cloning.
