According to research conducted by IBM, facial recognition systems need to be trained on far more diverse data than they are today.

Current systems are trained on a narrow base of specific images, rules, concepts, and information, which is why the technology falls short of the mark. IBM drew on a collection of 100 million images and videos from Flickr and very recently released a set of annotations derived from it. The annotations include ten facial coding schemes and human-labelled attributes such as craniofacial measurements, facial symmetry, gender, and age.

According to IBM, the aim is to help developers and creators improve their products: by incorporating these findings into their applications, the resulting recognition systems should identify faces more accurately. Facial recognition is used worldwide, and to serve everyone everywhere it needs enough diversity in its underlying data to cover all populations.

The datasets currently in use appear narrow and unbalanced, so they cannot provide worldwide coverage. Experts in the field have pointed out that AI is not error-free: there is a growing risk of technological bias, which can lead to racist and sexist results.

It might be racist

Joy Buolamwini, a researcher at MIT, found during her work that a facial recognition camera would not detect her face, even though it worked fine for her lighter-skinned friends. Struck by this odd encounter, she ran an experiment: when she sat in front of the same system wearing a white 3D mask, it worked without issue. Given the worldwide use of biometrics, this was alarming.

Buolamwini then carried out experiments on the world's leading facial recognition systems, including those from IBM, Microsoft, and the Chinese startup Face++. As expected, the systems performed accurately on white faces, especially men's, but the overall results were extremely disappointing: an error rate of around 34% was recorded for darker skin tones, compared with less than 1% for light-skinned individuals. This casts serious doubt on the reliability of such technology at larger scale, particularly for boarding procedures in the aviation industry.
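A disparity like this is typically found by disaggregating a system's error rate by subgroup instead of reporting a single overall number. The sketch below illustrates that idea; the function name and the toy records are hypothetical and not taken from the study itself.

```python
# Illustrative sketch: computing a classifier's error rate per subgroup,
# the kind of disaggregated audit that exposes accuracy gaps.
# All data below is made up for demonstration purposes.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy audit records: (skin-tone group, true gender, predicted gender)
sample = [
    ("darker", "female", "male"),      # misclassified
    ("darker", "female", "female"),
    ("darker", "male", "male"),
    ("lighter", "male", "male"),
    ("lighter", "female", "female"),
    ("lighter", "male", "male"),
]

rates = error_rate_by_group(sample)
print(rates)
```

An aggregate accuracy figure would hide the gap entirely; only the per-group breakdown reveals that one subgroup bears most of the errors.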

This lends considerable weight to concerns about racial profiling carried out with the help of facial recognition technology. Its applications have expanded from boarding procedures to law enforcement, so it is now immensely important to ensure the technology works without bias.

It should be noted that the technology itself is not deliberately trained to be biased. The system learns from the logic, instructions, and data fed into it, which means the developers who build the framework bear the responsibility.

According to John R. Smith, AI Tech Manager at IBM, “The AI systems learn what they’re taught, and if they are not taught with robust and diverse data sets, accuracy and fairness could be at risk.”
