Talk by Dr. Gebru

Guest Lecture: Eugenics and the Promise of Utopia through Artificial General Intelligence.
Author

Bell

Published

April 15, 2023

Image credits: Rod Searcey/Stanford HAI

Instructions can be found at Learning from Timnit Gebru.

About Dr. Gebru

Timnit Gebru ’08 M.A. ’10 Ph.D. ’17, a leader of the movement for diversity in tech and artificial intelligence, spoke about the dangers of AI at a Symbolic Systems department-sponsored event on Wednesday night. Gebru is the Founder and Executive Director of the Distributed Artificial Intelligence Research Institute (DAIR) and co-founder of Black in AI, a nonprofit working to increase the visibility and inclusion of Black people in the field.

‘Utopia for Whom?’: Timnit Gebru on the dangers of Artificial General Intelligence

Dr. Gebru, a distinguished computer scientist and influential advocate for diversity, will be giving a virtual talk at Middlebury on April 24th, 2023. She will also be virtually joining our Machine Learning class.

Dr. Gebru’s many works include:

Her long-standing research in AI, ethics, and equality, along with her active presence on podcasts and social media, has made Dr. Gebru an influential voice in the field.

FATE in Computer Vision

This is an earlier talk from CVPR 2020’s Tutorial on Fairness, Accountability, Transparency, and Ethics in Computer Vision.

Website, Video, Slides

The Field

The field of computer vision has harmed the Black community through policing, racial profiling, and other oppressive uses. It also has extremely few Black researchers, and it has made less progress on equality and diversity than some other fields, such as machine learning.

People from Stanford perceived computer vision in terms of uses like self-driving cars, policing, and education. But the reality is much darker once the subjects are humans, e.g. in facial recognition.

Facial Recognition

As soon as we begin to treat humans as the subject, computer vision becomes a propagator of systemic racism. Two companies using facial recognition are mentioned:

  • Faception: profiling people with tags like “high IQ” and “terrorist” based only on their face
  • HireVue: an applicant tracking system (ATS) that screens resumes and applies facial and emotion analysis to job applications

Each of these examples demonstrates the immediate danger of amplifying existing bias, or introducing more. Personally, I think another danger is the potential for justification through obfuscation, i.e. saying “we used this model and it performs well” to justify the system while hiding the model’s biases (in the training data, or in the model itself). This could even conceal violations of relevant regulation such as equal opportunity laws, because investigation is hard and intent is difficult to prove.

Adding to the equation is the lack of regulation in AI and CV - there are no safety tests. AI models face minimal scrutiny, especially compared with the FDA’s drug approval process, even though their impact can be comparable.

Abstraction

Making people disappear into mathematical equations, data points, models…

Gender recognition APIs (another layer of abstraction) had significantly higher error rates for Black women, as high as 46.8%. But the three datasets used for this analysis (LFW, IJB-A, Adience) were themselves skewed, composed of around 80% lighter-skinned faces.
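To make the abstraction point concrete, here is a minimal sketch of the kind of disaggregated evaluation behind those numbers; the group labels, records, and figures below are invented for illustration and are not from the audit itself. The idea is simply to compute error rates per subgroup, and to check the demographic composition of the benchmark, rather than trusting a single aggregate score.

    # Minimal, hypothetical sketch of a disaggregated evaluation.
    # Aggregate accuracy can look fine while one subgroup's error rate is far higher.
    from collections import defaultdict

    # Each record: (subgroup, true label, predicted label) - toy values, not real audit data.
    records = [
        ("lighter_male",   "male",   "male"),
        ("lighter_male",   "male",   "male"),
        ("lighter_female", "female", "female"),
        ("darker_male",    "male",   "male"),
        ("darker_female",  "female", "male"),   # misclassified
        ("darker_female",  "female", "male"),   # misclassified
    ]

    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)

    print(f"overall error rate: {sum(errors.values()) / len(records):.1%}")
    for group, n in totals.items():
        print(f"{group}: {errors[group] / n:.1%} error "
              f"({n} samples, {n / len(records):.0%} of the dataset)")

The same per-group counting also exposes a skewed benchmark: if one subgroup makes up roughly 80% of the data, an aggregate score on that benchmark cannot surface an error rate as high as 46.8% confined to another subgroup.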

Lack of diversity is not AI-specific - in crash tests for cars, for example, the dummies are based on the average male body, putting women’s lives at risk. Even pharmaceuticals are disproportionately tested on White and male participants.

Visibility vs. Inclusion

We cannot ignore social and structural problems.

Data were scraped from minoritized groups (e.g. darker-skinned and transgender people) without their consent, even though the intention might have been to build more inclusive datasets.

Gender classification itself is problematic as well - among other issues, it risks enforcing gender conformity.

A Troubled System

Systems for policing, surveillance, and the like are widespread:

  • US adults indexed in facial recognition: 1 out of 2
  • Regulation: unregulated
  • Accuracy and bias: unaudited

Additionally, these systems:

  • Target the marginalized
  • Do not benefit the marginalized
  • Are not built by the marginalized

Self-defense

Defending oneself against these systems goes beyond the traditional advice of “use a VPN” or “use Tor”. To counter them, researchers have proposed fashion items that cover the face, long hair worn to cover the face, or infrared LEDs that disrupt cameras.

Takeaway

The tl;dr that everyone needs to understand about CV today:

CV today amplifies systemic racism and bias, inflicting real harm on communities; not enough work is being done to mitigate this, let alone to compensate those harmed.

Eugenics and the Promise of Utopia through AGI

A recording from IEEE SaTML 2023 of an identically titled talk by Dr. Gebru:

Eugenics and the Promise of Utopia through AGI

Proposed Question

My proposed question for Dr. Gebru is:

People have said things like “intelligent”, “creative”, and “artistic” about recent text and image generation models.

Having engaged with AI systems that can, say, recognize an expression or write like Shakespeare, where do you think the goalpost for creativity lies? Do we determine that by the outcome, as in a Turing test, or by the process and models used? How do we evaluate such statements?

The Argument

Utopia

Organizations like OpenAI promise that AI, and more specifically AGI, will make for a bright future (Utopia).

The term AGI is not well-defined; after listing numerous definitions from different organizations, Dr. Gebru states that AGI sounds like “God can do anything”.

But current AI (and purported AGI) offerings are built on exploited cheap labor, Mechanical Turk style. OpenAI paid Kenyan workers less than $2 per hour to filter out toxic content, leaving many with trauma and PTSD. Worse, these workers’ jobs are being replaced by the very AI systems they are being exploited to develop.

The question is thus:

For whom?

First-wave Eugenics

Mostly “negative” eugenics, i.e. expelling the bad.

Eugenics back in the day was portrayed as a scientific method, associated with keywords like progressive, empowering, and technology.

However, the outcome was dark and evil, and it cost many lives.

The idea of improving “the human stock” led to the expulsion of those deemed undesirable, the “defectives” and “feeble-minded”, based on IQ tests.

Since humankind was to be improved genetically, being poor was framed as the result of one’s inferior nature, a logic that also fed racism and notions of white superiority.

Second-wave Eugenics

Mostly “positive” eugenics, i.e. improving toward the good.

Arising in the 1990s, the second wave of eugenics sought to use genetic engineering and biotechnology.

What does eugenics have to do with AGI? The answer lies in the proponents of AGI, the “TESCREAL bundle” (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism): Dr. Gebru argues that these ideologies are actually “second-wave eugenics”.

Dr. Gebru goes on to show that the TESCREAL bundle carries ideologies of “transcending the human race” and the like, which aligns it with eugenics.

These ideologies also contain elements of discrimination (against those unwilling to join them), an imagined (AI) utopia, and eschatology with convictions of an AGI apocalypse.

One example Dr. Gebru pointed out is how the “Effective Altruism” movement used IQ to score people. The metric is called PELTIV (Potential Expected Long-Term Instrumental Value), but is at heart a discriminatory view of individuals based on IQ.

Additionally, influential billionaires are in these movements, and they have enough funding to actually impact the world.

Do one thing, not everything

Dr. Gebru compares Meta AI’s AI translation model, NLLB-200, with Lesan AI’s specialized model.

While Lesan AI performed better, Meta’s announcement of an all-encompassing AI translator model made Lesan AI’s investors doubt its necessity.

This is one example of a “general” AI model harming smaller, specific models through investment pressure, and harming the minorities they represent as well.

Although not emphasized, Dr. Gebru’s takeaway message was: instead of building AGIs, build specific systems with clearly defined scopes.

Reflections

I feel privileged to have been able to attend this talk.

The talk was based on a yet-to-be published paper, so instead of reading a paper, I had to look for tweets and forum discussions to learn more.

Around the same time I was researching this blog post, I came across a YouTube video titled The Rich Have Their Own Ethics: Effective Altruism & the Crypto Crash. The title echoes one of the takeaways from Dr. Gebru’s talk: that the rich needed their own ethics to convince themselves they are doing good things.

Forums

I also dug around the Effective Altruism forums a little bit.

Mosquito nets

Published back in 2015, this article in The Atlantic portrays effective altruism as a positive force in helping end malaria with mosquito nets. In the forum, however, there was a heated discussion about whether or not to donate mosquito nets, because people were unsure about the long-term benefit of doing so. (In the short term, they agreed the consequences are “unambiguously positive”.)

To me, this suggests an unhealthy bias towards thinking long-term. Additionally, since there will be many more people in the future than exist today, an action framed as benefiting the future but carrying short-term harms may prompt one to ignore the latter.
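As a back-of-the-envelope illustration of that worry, the sketch below runs the naive expected-value arithmetic; every figure in it is invented by me for illustration and comes from neither the talk nor the forum. Once a hypothetical future population is assumed to be large enough, even a tiny assumed probability of influencing it dominates a certain, present-day benefit.

    # Toy expected-value comparison (all figures are invented for illustration).
    present_people_helped = 1_000_000      # certain, near-term benefit (e.g. mosquito nets)
    future_people = 10**16                 # hypothetical "vast future" population
    prob_of_affecting_future = 1e-6        # tiny assumed chance of influencing that future

    ev_present = 1.0 * present_people_helped
    ev_future = prob_of_affecting_future * future_people

    print(f"near-term expected value: {ev_present:,.0f} people helped")
    print(f"long-term expected value: {ev_future:,.0f} people helped (on paper)")
    # The speculative long-term term wins by a factor of 10,000 here, which is how
    # short-term harms can end up being discounted in this style of reasoning.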

IQ scores

With regard to scoring people by IQ, the only criticism I encountered in the forum was that keeping personal records violated GDPR, a data protection law.

Another concern about recording IQ scores was dismissed on the grounds that the practice of keeping tabs on people is so common, and that such behaviour is creepy “by design”.

On Dr. Gebru

Dr. Gebru was mentioned once when the SaTML talk was posted; the poster specifically advised viewers to ignore the talk’s criticism of the EA community.

Another post asked for advice on engaging with people like Dr. Gebru. Interestingly, the poster acknowledged her work on social justice, but dismissed those worries with “Algorithmic bias today is not the same as x-risk from unaligned AI in 30 years.” (x-risk meaning human extinction risk?) The post and its comment section, however, were unconvinced and somewhat adversarial. This conforms to what one of Dr. Gebru’s collaborators has characterized as “cultish behavior”.

On Campus

The Effective Altruism movement has had a visible on-campus presence for quite a while now.

One of their posters illustrates doing good effectively; another advertises a reading group on the dangers of AGI.

After the talk, my (unqualified and unasked-for) advice to them is twofold:

  • Focus on one thing at a time, like choosing a charity to donate to or sharing tips on buying more sustainably.
    • Trying to maximize utility, for the good of all mankind, for the long term, and avoiding an AGI apocalypse all at once might lead to harmful results.
  • Find a new name, given the declining reputation of the Effective Altruism movement.
    • I trust their intention to make the world a better place, and I also don’t think they will actively engage in something that is harmful for the present but beneficial to the future.
    • Therefore, associating themselves with “rich people wanting their own ethics” might not be the best idea.