Artificial Intelligence’s White Guy Problem

The growing use of artificial intelligence presents the world with a myriad of challenges. Its use is creating new inequalities in the judicial and legal systems, in homes, and in workplaces. These systems are trained on data that encodes existing forms of discrimination, including sexism and racism. When that data is fed into the algorithms that drive artificial intelligence, it shapes the conclusions the systems draw: it affects how people are categorized and even which advertisements users are shown.

Some applications, like Google’s photo app, were found to be tagging photos of Black people as gorillas. Similar problems have been noted in Nikon’s camera software, which flagged pictures of Asian people as blinking. Likewise, HP’s web camera software had trouble detecting the faces of people with dark skin tones.

Artificial Intelligence in Decision Making

These problems are fundamentally caused by the data fed into the systems during development. A system uses that data to make predictions about future situations. The algorithms remain largely proprietary, trade secrets of their developers. Yet the use of such systems in the legal process is widespread in the United States: judges in some cases rely on machine-generated risk assessments, even though they do not understand the logic behind the scores they use to reach conclusions.
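To make that mechanism concrete, here is a minimal sketch in Python using synthetic data (the feature names and all numbers are hypothetical; real risk-assessment algorithms are proprietary, as noted above). It shows how a model trained on biased historical labels reproduces that bias in its own predictions:

```python
# Minimal, hypothetical sketch: a model trained on biased historical
# labels reproduces that bias in its predictions. All data is synthetic;
# no real risk-assessment system is being modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (e.g. length of prior record) and one sensitive
# attribute that should be irrelevant to the outcome.
prior_record = rng.poisson(2, n)
group = rng.integers(0, 2, n)  # 0 or 1

# Biased historical labels: past decisions flagged group 1 more often
# at the same prior-record level.
logits = 0.5 * prior_record + 0.8 * group - 2.0
labels = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([prior_record, group])
model = LogisticRegression().fit(X, labels)

# The model "learns" the historical bias: identical prior records,
# different predicted risk purely because of group membership.
same_record = np.array([[2, 0], [2, 1]])
print(model.predict_proba(same_record)[:, 1])
# Expect a noticeably higher predicted risk for group 1.
```

The model never sees a rule that says "score group 1 higher"; it simply learns whatever regularities, fair or not, are present in the historical decisions it was trained on.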

The use of artificial intelligence has revealed similar challenges in policing. Police departments use these systems for predictive policing, but the result is a vicious cycle that reinforces existing problems. For instance, the systems direct more policing to predominantly Black neighborhoods and less to predominantly white ones. More arrests then keep coming from the over-policed areas, while crimes elsewhere go unnoticed. If this is not fixed, historical injustices rooted in discrimination could be revived through digital systems. And the discrimination does not end there: Google’s ads for higher-paying jobs were found to be shown to men far more often than to women.
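The feedback loop described above can be illustrated with a small, hypothetical simulation (all numbers are made up; no real predictive-policing product is being modeled). Two neighborhoods have identical true crime rates, but one starts with more recorded arrests, and patrols are allocated in proportion to that record:

```python
# Hypothetical simulation of a predictive-policing feedback loop.
# Two neighborhoods with IDENTICAL true crime rates; neighborhood A
# starts with more recorded arrests (historical over-policing).
import numpy as np

rng = np.random.default_rng(1)
true_crime_rate = np.array([0.1, 0.1])   # identical in both areas
recorded = np.array([60.0, 40.0])        # biased historical record
patrols_per_day = 100

for day in range(30):
    # Allocate patrols in proportion to recorded arrests so far.
    share = recorded / recorded.sum()
    patrols = patrols_per_day * share
    # Arrests are only recorded where police are present: expected
    # arrests scale with patrol presence, not with the (equal) rates.
    recorded += rng.poisson(patrols * true_crime_rate)

print(recorded / recorded.sum())
# The initial 60/40 skew persists: the record keeps "confirming"
# the allocation even though crime rates never differed.
```

Because arrests can only be recorded where patrols are sent, the biased record keeps confirming itself: the over-patrolled area continues to supply the majority of recorded arrests, while crime in the other area goes largely unrecorded.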

Reliving the Dark Past

Artificial intelligence systems are likely to reflect old biases unless they are examined with the explicit aim of rooting out stereotypes and discrimination. The systems that get built strongly reflect the values of the companies that create them. The best way to resolve these challenges is inclusion within those companies, from the board level to management to the people who design and implement the artificial intelligence systems. Addressing the discrimination present in today's systems will also help anticipate the problems that future projects might raise, and allow solutions to be devised before those problems emerge.