Google Cloud Vision API is an artificial intelligence service that developers can use to automatically generate labels for images. Google's AI is trained to detect landmarks, logos, and faces. On Thursday, Google sent an email to developers telling them Cloud Vision will no longer apply gendered labels. As reported by Business Insider, the API will now label people in images with non-gendered terms such as "person" rather than "man" or "woman". According to Google, the change was made because visual cues alone are not enough to infer someone's gender. Furthermore, the company says its own ethical stance on AI and unfair bias played a part in the decision. "Given that a person's gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias."
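For developers already calling the API, the change shows up in the annotations returned by the client libraries rather than in the request itself. The snippet below is a minimal sketch using the google-cloud-vision Python client; the file name photo.jpg is a placeholder, and the exact labels returned depend on the image. With the update, person-related results come back as non-gendered labels such as "Person".

```python
# Minimal sketch: requesting image labels from the Cloud Vision API.
# Assumes the google-cloud-vision client library is installed and
# application credentials are configured; "photo.jpg" is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# After the change, people in the image surface under non-gendered
# labels such as "Person" rather than "Man" or "Woman".
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```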
Avoiding Bias
AI bias is widely discussed among tech companies building these solutions. Developers have flagged the potential for poorly trained AI to carry unfair assumptions and biases, such as misrepresenting gender or misidentifying people of color. Google acknowledges that its own algorithms are at risk of bias and says it will work to remove bias from its models. "We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief."