No, it’s not.

Artificial intelligence is advancing at a rapid pace. While that is fascinating, it also brings threats of varying degrees to data privacy, individual privacy, and security. A string of scandals has produced an online outcry to regulate AI technology and hold it accountable through legal repercussions. How this will eventually pan out remains to be seen.

One such instance is the software called “Deepfakes”, which recently went viral and replaces one person’s face with another’s using only images. The software works by collecting a considerable number of images of someone’s face at different angles, extracting a series of data points from them, and then mapping that face onto another person’s.
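For readers curious about the mechanics, here is a deliberately crude sketch in Python using OpenCV that locates a face in one photo and pastes it over a face in another. The real software trains neural networks on many images per face to produce convincing results; this toy example, with hypothetical file names, only illustrates the basic “detect a face and map it onto another” step described above.

```python
# Toy illustration only: a crude cut-and-paste face swap with OpenCV.
# Actual deepfake software trains deep networks on many images per face;
# this sketch merely shows the "locate a face, map it onto another face"
# idea. The image file names are hypothetical.
import cv2

# Haar cascade bundled with opencv-python for frontal face detection
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def largest_face(image):
    """Return (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

source = cv2.imread("source_face.jpg")   # face to copy (hypothetical file)
target = cv2.imread("target_photo.jpg")  # photo to paste it into (hypothetical file)

src_box, dst_box = largest_face(source), largest_face(target)
if src_box is not None and dst_box is not None:
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    # Crop the source face and resize it to fit over the target face region
    face_crop = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    target[dy:dy + dh, dx:dx + dw] = face_crop
    cv2.imwrite("swapped.jpg", target)
```

The result of a naive paste like this is obviously fake; what made Deepfakes alarming is that its learned models blend lighting, angle, and expression well enough to fool casual viewers.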

The tool was first used to prey on women by producing non-consensual pornography, often described as “revenge porn”. It first appeared on the popular community website Reddit, posted by a user named “deepfakes”, and was brought to wider attention by the tech outlet Motherboard. After significant backlash, and after platforms such as Gfycat, Twitter, and Discord banned similar content, Reddit banned the community as well.

The technology is no longer restricted to that one use case; we are now seeing a wide range of applications, including fake videos of politicians. The United States has already seen fake news incidents of a political nature in recent years, but those were relatively few in number. With easy access to software like this, anyone can now take part. And given how the Internet works, it is safe to say that offenders may never be caught.

Hollywood actress weighs in

Popular Hollywood actress Scarlett Johansson, among many others, became a target of the face-swapping software. Numerous pornographic videos surfaced showing the actress in compromising situations. While she has been very vocal against this, she maintains that fighting the software is not practical, since neither she nor anyone else can really stop someone from pasting an image of her face onto someone else’s body.

Although the software has largely been banned, the code and implementation guides still appear on the developer platform GitHub. It remains to be seen whether any action will be taken against them.

Google took a big step in the right direction in September 2018 by allowing victims of deepfakes to request that the pornographic material be removed from its search results. Parts of the user community are also helping victims by using their platforms to raise awareness among a larger audience.

After Deepfakes was banned, some of the internet audience dismissed it as harmless fun. But the lasting impact it has on its victims cannot be denied; if you are not a victim yourself, you may not grasp the severity.

What now?

Regulation is struggling to keep up with advancing technology as newer and stranger implications surface every day. What are the best policies? Who will regulate, and what exactly? What are the pitfalls? All of this is yet to be decided. One thing is certain, though: if we are going to praise AI, we have to be ready for its consequences as well.
