Joy Buolamwini uses art and research to show the social implications of AI

The Ghanaian-American computer scientist and digital activist spoke to us during her Cape Town visit.

While movies like I, Robot tend to exaggerate our fears of machines and the people behind them, there are tangible red flags in the world of emerging technology.

Take artificial intelligence (AI) for example.

Some worry that AI might eliminate millions of jobs, leaving millions of people without a livelihood.

Others advocate for its efficiency, arguing that by leaving menial tasks to a more adept AI-powered machine, we would free people to be more creative.

In this way, AI would indirectly create not only more jobs but better, healthier ones.

But how do we really test the benefits and disadvantages of an emerging technology that so many of us still struggle to understand? And more pressingly, how do we govern it?

We witnessed a taste of this problem when members of the United States Congress were tasked with questioning Facebook founder Mark Zuckerberg.

Congress needed to uncover just how much data Facebook tracks, keeps and distributes without its users’ knowledge. This came after several scandals surfaced, placing the misuse of consumer data at the centre of the controversy.

After 10 hours of questioning, headlines like “These are the most confusing questions Congress asked Zuckerberg” marked what turned out to be a peculiar exchange between elected policymakers and the tech mogul.


While we still don’t know for sure if the machines are always listening, the marathon question-and-answer session made one thing clear: there is a deepening void between the tech we integrate into our lives and the policymakers installed to protect us.

Computer scientist and digital activist Joy Buolamwini is working to bridge this gap, starting with an endeavour that speaks to her own personal struggles with artificial intelligence as a Ghanaian-American woman.

Buolamwini calls herself the Poet of Code. The moniker, much like a superhero’s cape and mask, represents the combination of two distinct powers: spoken word and code. With them, she leads an Avengers-style collective called the Algorithmic Justice League.

But rather than pitting its strengths against Marvel supervillain Thanos, the Algorithmic Justice League uses art and research to illuminate the social implications of artificial intelligence, starting with bias.

Like the fictional Thanos, artificial intelligence is poised to be the most impactful agent of change of the 21st century. In a recent article in the Guardian, philosopher and historian Yuval Noah Harari warned that we’re woefully unprepared to deal with the unprecedented change it could bring.

“Technology is never deterministic: it can be used to create very different kinds of society. In the 20th century, trains, electricity and radio were used to fashion Nazi and communist dictatorships, but also to foster liberal democracies and free markets.

“In the 21st century, AI will open up an even wider spectrum of possibilities. Deciding which of these to realise may well be the most important choice humankind will have to make in the coming decades,” he writes.

Buolamwini was first exposed to a blind spot in AI while working on a facial recognition project as an undergraduate at Georgia Tech. She found that the software could not detect her face until she donned a white mask or used one of her lighter-skinned peers as a model. The problem persisted years later when she joined the MIT Media Lab.

Enter her 2016 short film, The Coded Gaze: Unmasking Algorithmic Bias, which debuted at the Museum of Fine Arts, Boston. The poetic video essay unpacked the struggles of using facial recognition technology as a person of colour and pointed to skewed datasets as the potential cause.

Her logic is simple: if the people writing these systems train them on datasets skewed towards a particular demographic, the resulting algorithms will end up making decisions based on that biased data.

Through her research she found these skewed datasets to be widespread among coders across the globe, meaning that projects built on them, no matter how innovative, rest on unrepresentative demographic data.

In a 2016 blog post, she explained: “The faces that are chosen for the training set impact what the code recognises as a face. A lack of diversity in the training set leads to an inability to easily characterise faces that do not fit the normal face derived from the training set.”
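Her point can be reproduced in miniature. The sketch below is a hypothetical illustration rather than code from her research – the groups, labelling rules, sample sizes and model are all invented – showing a simple classifier trained on data where one group supplies 90 per cent of the examples, then scored on each group separately.

```python
# Toy demonstration of dataset bias: a model trained on a skewed dataset
# performs well on the majority group and poorly on the minority group.
# Everything here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    """Sample 2-D features and label them with a group-specific rule."""
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(int)

# Each group's labels depend on a different feature, standing in for
# populations whose faces the model must learn to read differently.
rule_a = lambda X: X[:, 0] > 0   # majority group
rule_b = lambda X: X[:, 1] > 0   # minority group

# Skewed training set: 90% group A, 10% group B.
Xa, ya = make_group(900, rule_a)
Xb, yb = make_group(100, rule_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on balanced held-out samples of each group.
for name, rule in [("majority group", rule_a), ("minority group", rule_b)]:
    X_test, y_test = make_group(1000, rule)
    print(f"{name}: {model.score(X_test, y_test):.0%} accuracy")
```

In a typical run the model scores roughly 90 per cent on the majority group but not far above chance on the minority group, for exactly the reason Buolamwini describes: the training set, not the algorithm, decides whose faces the system can read.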

Simple facial recognition is only the tip of the iceberg. Expanding on her work with support from the Ford Foundation, Buolamwini created “AI, Ain’t I A Woman?”, the first spoken-word visual poem focused on the failures of artificial intelligence on iconic women including Oprah Winfrey, Serena Williams and Michelle Obama.

The poem forms part of Gender Shades, an MIT Media Lab project that pilots an intersectional approach to inclusive product testing for AI.

The study used the dermatologist-approved Fitzpatrick skin-type classification system to characterise the gender and skin-type distribution of two facial analysis benchmarks, IJB-A and Adience – datasets widely used to train and test such systems.

The study found that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6 per cent for IJB-A and 86.2 per cent for Adience).

Using the faces of parliamentarians, including members of the South African parliament, Buolamwini and her team introduced a new facial analysis dataset that is balanced by gender and skin type.

Using this new benchmark, they evaluated three commercial gender classification systems and found darker-skinned women to be the most misclassified group, with error rates of up to 34.7 per cent. The maximum error rate for lighter-skinned males, by contrast, was 0.8 per cent.

This means that iconic women like Winfrey and Obama were regularly misclassified. The success stories of these powerful women defy pervasive and harmful stereotypes of black people, yet some of the most popular facial recognition technologies identify them as men. These technologies include those of IBM, Microsoft, Google and Face++.
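The methodological core of Gender Shades is disaggregation: instead of quoting one overall accuracy figure, the audit reports an error rate for every gender and skin-type subgroup, which is how a system can look near-perfect overall while failing a third of darker-skinned women. Below is a minimal sketch of that kind of audit, using invented prediction records rather than the actual Gender Shades data.

```python
# Minimal sketch of a disaggregated audit in the spirit of Gender Shades
# (not the MIT Media Lab's code). The prediction records are invented.
from collections import Counter

# Each record: (true gender, skin type, gender predicted by the system).
records = [
    ("female", "darker",  "male"),
    ("female", "darker",  "female"),
    ("female", "lighter", "female"),
    ("male",   "darker",  "male"),
    ("male",   "lighter", "male"),
    ("female", "darker",  "male"),
]

errors, totals = Counter(), Counter()
for true_gender, skin_type, predicted in records:
    group = (true_gender, skin_type)
    totals[group] += 1
    errors[group] += predicted != true_gender

# Report the error rate per intersectional subgroup, not just overall.
overall = sum(errors.values()) / len(records)
print(f"overall error rate: {overall:.0%}")
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group[0]}/{group[1]}: {errors[group]}/{totals[group]} = {rate:.0%}")
```

With this toy data, darker-skinned women show a 67 per cent error rate while every other subgroup shows zero – the same pattern, in exaggerated form, that the real study found.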

Buolamwini reached out to these companies as part of her research and was rebuffed by all except IBM, which replicated her study within its own walls. Not only did IBM affirm her results, it used her study to improve its own software.

This kind of response is ultimately what Buolamwini wants her work to achieve, but the ambiguity surrounding AI obscures the urgency of the problem.

“So for example, if you make more inclusive facial recognition systems it also means you can abuse those systems,” explains Buolamwini during a recent visit to Cape Town.

Her argument alludes to current fears not only in the United States but all over the world. Facial recognition technology is built into airports, police stations and the largest cloud platforms in the world, with very few federal rules to govern it. This leaves people of colour, women and children the most vulnerable to misclassification.

“There’s that argument that maybe it was better to be left out in the first place, but it truly matters how it’s being used,” she says.

“If you say you want to use it to find missing children, we already know that children of colour, brown and black kids, particularly girls, oftentimes get the least attention when it comes to missing persons. They’re even harder to find. So in those situations, you’re balancing what the benefits are versus what the perils can be.”


How do we fix this and build genuinely fair, transparent, and accountable facial analysis algorithms? Buolamwini says the lines of communication need to be opened with policymakers, lawmakers and techies alike – we need to build a common language.

“You might have seen some of the memes that came out of the hearings right, where you understand that the lawmakers themselves are trying to grapple with the technology,” she says, referring to the Zuckerberg fiasco.

“[But] there’s also a disconnect with people who are coming more from a technical background, who really don’t understand the law in and of itself, or even how you might be breaching certain issues that you were not aware of.

“So overall I think, across the aisle and across the board, we need to make sure that as we’re learning about the technology we’re learning about the law, and as we’re learning about the law, we’re learning about the technology.”

While creating the first inclusive dataset for facial recognition is a feat on its own, Buolamwini is using spoken-word poetry to make her research accessible to parties on either side of the aisle, prioritising data literacy for all.

"So oftentimes when I’m going to speak with policymakers I try to introduce AI by showing concrete specific examples that illustrate the ways in which it can have limitations and show that one example to help start the conversation so that they can imagine the other areas in which there can be limitations.

“I find that doing it that way, with concrete examples, helps to get the conversation going and helps to give a reference point as well.”

Buolamwini’s work comes as companies promise to use AI to make hiring an unbiased process, predict violent crime and even help find missing persons. None of these innovations can deliver unless we address the biases already trained into the technology.

“One of the things that excites me about technology is, for example, I mentioned how some of the companies did not do so well on a particular group of people, but within two months they were able to make a tenfold improvement when they were more inclusive,” she says.

“So unlike some of the social systems we have, there are ways, and not in every particular application of technology, but there are ways where you can be more inclusive in the tech than you are in the real world. However, we always have to be aware of how that technology is being used.”

photo credit: The title image is a modified version of an image created by mikemacmarketing Machine Learning & Artificial Intelligence via photopin (license)

Read more on art and AI

Zach Lieberman on using human gestures to bring his work to life

The chair that was designed by AI

Will AI be our new co-dependent relationship?

More on Design Activism