The

BIASES
of
artificial intelligence






by Biljana Petreska, 2022

*More precisely, of a branch of artificial intelligence called machine learning.
Let's find out...

BIAS n. A systematic distortion.



Click on 2 spheres of your choice to
discover what's hiding behind them...


Next
Thank you!

Now, imagine that
YOU are the prediction algorithm.

Your goal is to figure out
whether, behind all these spheres, there are
more indigo clouds or
more green clouds.


So click on 8 more spheres.

Done!

indigo clouds


You chose the indigo clouds. Yet there are more green clouds on the screen.

Try again...
Today,
the prediction algorithm
is you!

So, based on your own observations,
there are more:



green clouds
Artificial intelligence algorithms learn
by observing data.

After observing lots of data,
these algorithms can predict
whether it will be sunny or rainy,
whether it's better to take the bus or the train,
whether to watch series A or rather series B...
I see
Artificial intelligence algorithms
don't think; all they do is count.

They observe lots of data,
then make predictions that make sense
given the data they have observed.

Got it
Here's an example.
To complete your sentences, the algorithm
observes a very large number of sentences
and counts their words.

The algorithm will then suggest
the most frequent words
found in sentences similar to yours.
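The word counting described above can be sketched as a tiny bigram model. This is a toy illustration with a made-up corpus, not the algorithm behind any real keyboard: it simply counts which word most often follows another.

```python
from collections import Counter

def train_bigrams(corpus):
    """Count how often each word follows another in the corpus."""
    counts = {}
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts.setdefault(prev, Counter())[nxt] += 1
    return counts

def predict_next(counts, word):
    """Suggest the word most frequently seen after `word`."""
    if word not in counts:
        return None  # never observed: the algorithm can only guess
    return counts[word].most_common(1)[0][0]

corpus = [
    "the cloud likes green plants",
    "the cloud likes indigo clovers",
    "the cloud likes green plants a lot",
]
model = train_bigrams(corpus)
print(predict_next(model, "likes"))  # -> "green" (seen twice vs once)
```

No reasoning happens anywhere in this sketch: the "prediction" is nothing more than picking the highest count.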


Let's keep observing.
Click on 10 more spheres.

Done!

green clouds


You chose the green clouds. Yet there are more indigo clouds in the picture.

Try again...
The more observations we have,
the closer we can hope
to get to reality.

Now, there are more:

indigo clouds

What do these observations
allow you to conclude?

What prediction would an algorithm make:
are there more indigo clouds
or more green clouds hiding
behind all the question marks?

Click the button to find out.

Reveal everything
In fact, there are exactly the same number of
indigo clouds and green clouds.

But, depending on the data we observe,
we can be led astray.


You have just seen what is called
DATA BIAS.
Oh really?
Since it is impossible to observe everything,
the algorithm makes decisions
based on data that is
an imperfect representation of reality.

As a general rule,
the more data the algorithm observes,
the better its predictions will be.
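This "more data, better predictions" rule can be illustrated with a toy experiment (made-up numbers, purely illustrative): a population that is exactly half indigo and half green, estimated from samples of growing size.

```python
import random

random.seed(42)
# Toy population: exactly as many indigo clouds as green ones.
population = ["indigo"] * 500 + ["green"] * 500
random.shuffle(population)

def estimated_indigo_share(n_observations):
    """Estimate the share of indigo clouds from the first n observations."""
    sample = population[:n_observations]
    return sample.count("indigo") / n_observations

# A small sample can easily drift away from the true 50/50 split;
# observing everything recovers it exactly.
for n in (10, 100, 1000):
    print(n, estimated_indigo_share(n))
```

With 10 observations the estimate may land far from 0.5; with all 1000 it is exactly 0.5. The bias comes from the sample, not from the counting.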

Tell me more...
Artificial intelligence algorithms
don't think; all they do
is observe the world
and count.
A bit like me
Artificial intelligence algorithms
are getting more and more powerful!

But that's not because we have found
even smarter algorithms.
It's rather because they observe
ever more data:
thanks to BIG DATA.
Another bias

an indigo clover


If you look closely, green clouds generally prefer green plants.

Try again...
We're lucky to have more data.
Each cloud displays its favorite plant
on its social media profile.

You are invited to dinner by
a green cloud.
Is it better to bring it
an indigo clover or a green plant?


a green plant
Ouch, ouch, ouch!

This green cloud belongs to
a minority of green clouds
who hate green plants!

This is SOCIETAL BIAS:
the data reproduces all
the biases and prejudices of our society.
Oh really?
Now imagine that, in fact,
the green clouds are women.

So you're going to give
green plants to all women
just because they are women?

You could be accused of being sexist!
I didn't know...
Imagine that the clouds' color
is actually a skin color.
You determined a person's taste
based on their skin color.

You could be accused of being racist!

In short, you are a biased,
discriminatory algorithm.
It wasn't me!
I agree:
it's not your reasoning
that is biased,
but rather the data
you observed.

Still, the damage is done!
How so?
That's how Facebook's algorithm
serves sexist job ads,
reproducing the stereotypes:
men as truck drivers, women as nurses.

Google's algorithm for detecting
hate speech online is racist:
tweets written by African Americans are
more likely to be flagged as toxic.
Oh no!
The OpenAI and Microsoft algorithm
generates texts that are sexist AND racist,
because to learn, it crawls
the data published on the Web,
which is often sexist and racist.

And yet, all the algorithm does
is count,
just as you did for the green plant.
What can we do?
To counter DATA bias,
and racism,
researchers take care
to collect data
that includes different minorities.

To counter SOCIETAL bias,
and sexism,
researchers create artificial data
to compensate for the differences.
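Creating artificial data is one of several debiasing tricks; a related, simpler one is reweighting. This is a hypothetical sketch (the group labels and numbers are invented): under-represented groups get proportionally larger weights, so each group contributes equally to the counts the algorithm sees.

```python
from collections import Counter

# Hypothetical biased sample: one group is observed 9x more often.
observations = ["green"] * 90 + ["indigo"] * 10
counts = Counter(observations)
total = sum(counts.values())

# Inverse-frequency weights: rarer groups get larger weights.
weights = {group: total / (len(counts) * n) for group, n in counts.items()}

# Weighted counts: each group now contributes (almost exactly) equally.
weighted = {g: counts[g] * weights[g] for g in counts}
print(weighted)
```

The raw counts said "90 vs 10"; the weighted counts say "50 vs 50". The data is still imperfect, but the imbalance no longer drives the prediction.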
What can I do?

It is impossible to remove every bias!

So we must stay critical of the results
that algorithms give us,
and support the creation of laws
that will protect us from
dangerous abuses.
Dangerous?
Would you accept an algorithm that:

• decides whether you get into university
• decides how much you can borrow
• decides to send you to prison
• or denies you an organ donation

based on your gender or skin color?


Another danger

Social networks
suggest people to us
who share the same preferences
and the same opinions as we do.

They lock us into a filter bubble:
a view of the world that hides
opinions and preferences
different from our own.
Darn

Imagine that the plant-clouds
are afraid of climate change,
while the clover-clouds trust
the Earth's capacity to heal itself.

The plant-clouds don't know
the clover-clouds' arguments.
The clover-clouds don't understand
the plant-clouds' fear.
Oh!





Two disconnected filter bubbles...
That's a shame

Yes


You answered yes. But what do you know about the coral cloud?

We saw with data bias that the algorithm's predictions are incorrect when too little data is observed.

If the algorithm has never observed a piece of data, it can only guess at random.

Try again...
To finish, let's meet, for the very first time,
a new coral cloud!

Can an algorithm predict
whether this cloud likes green plants
or rather prefers indigo clovers?

No
That's correct!
If the algorithm has never observed this data,
it can only guess at random.

Based on their shape or their color,
what prediction would you make?

These new clouds prefer...


I have an idea
The coral cloud has the same shape as the green cloud:
the algorithm could predict that the coral cloud
has a preference for green plants.

But every decision has consequences...

If the social network always offers it
green plants and never indigo clovers,
the coral cloud will end up preferring green plants.
Is it being influenced?
A kind of self-fulfilling prophecy.

The coral cloud will never know
that an algorithm manipulated its opinion,
and will sincerely believe that it likes green plants.

Under the social influence of the plant-clouds,
it will feel confirmed in its choice.
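This feedback loop can be sketched as a toy simulation (the step size and number of days are made up, purely illustrative): every repeated recommendation nudges the coral cloud's preference toward what it is shown.

```python
# Toy feedback loop: the coral cloud starts undecided (0.5), and every
# "green plant" recommendation nudges its preference a little further.
preference_for_green = 0.5
for day in range(50):
    recommended = "green plant"  # the algorithm keeps repeating its first guess
    if recommended == "green plant":
        preference_for_green = min(1.0, preference_for_green + 0.01)

# After 50 days of identical recommendations, the cloud sincerely
# "prefers" green plants, and the early prediction looks correct.
print(preference_for_green)
```

The prediction ends up true only because acting on it made it true: a self-fulfilling prophecy in five lines.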
Me too?
Time for conclusions.

With too little data,
even the smartest algorithm in the world
makes bad decisions:
that's data bias.

But even with enough data,
an algorithm can make bad decisions:
that's societal bias.
Tell me more




Let's cultivate our diversity and
burst our filter bubbles.



Thank you!
Let's draw a network! Each connection represents a friendship between two people: draw to connect, scratch to disconnect. When you're done doodling and playing around, let's continue
Now, social connections are for more than just making pretty pictures. People look to their social connections to understand their world. For example, people look to their peers to find out what % of their friends (not counting themselves) are, say, binge-drinkers. Draw/erase connections, and see what happens!
cool, got it However, networks can fool people. Just like how the earth seems flat because we're on it, people may get wrong ideas about society because they're in it.
optional extra bonus notes! ↑
↓ links and references

For example, a 1991 study showed that “virtually all [college] students reported that their friends drank more than they did.” But that seems impossible! How can that be? Well, you're about to invent the answer yourself, by drawing a network. It's time to... FOOL EVERYONE
PUZZLE TIME!
Fool everyone into thinking the majority of their friends (50% threshold) are binge-drinkers (even though binge-drinkers are outnumbered 2-to-1!)
FOOLED: out of 9 people Congrats! You manipulated a group of students into believing in the prevalence of an incredibly unhealthy social norm! Good going! ...uh. thanks? What you just created is called The Majority Illusion, which also explains why people think their political views are consensus, or why extremism seems more common than it actually is. Madness. But people don't just passively observe others' ideas and behaviors, they actively copy them. So now, let's look at something network scientists call... “Contagions!”
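The Majority Illusion can be verified in a few lines. Below is a hypothetical 4-person network (the names are invented for illustration): only one person binge-drinks, yet every non-drinker sees binge-drinkers as 100% of their friends.

```python
# A hypothetical star-shaped network: only Dee binge-drinks,
# but Dee is friends with everyone else.
friends = {
    "Ana": ["Dee"],
    "Ben": ["Dee"],
    "Cal": ["Dee"],
    "Dee": ["Ana", "Ben", "Cal"],
}
drinkers = {"Dee"}

def perceived_drinker_share(person):
    """Fraction of this person's friends who binge-drink."""
    fs = friends[person]
    return sum(f in drinkers for f in fs) / len(fs)

actual_share = len(drinkers) / len(friends)  # only 1 in 4 people drinks
fooled = [p for p in friends
          if p not in drinkers and perceived_drinker_share(p) >= 0.5]
print(actual_share, fooled)  # 0.25, yet Ana, Ben and Cal each see 100%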
Let's put aside the "threshold" thing for now. Below: we have a person with some information. Some misinformation. "Fake news", as the cool kids say. And every day, that person spreads the rumor, like a virus, to their friends. And they spread it to their friends. And so on.
Start the simulation!
(p.s: you can't draw while the sim's running)
Note: despite the negative name, "contagions" can be good or bad (or neutral or ambiguous). There's strong statistical evidence that smoking, health, happiness, voting patterns, and cooperation levels are all "contagious" -- and even some evidence that suicides and mass shootings are, too. well that's depressing
Indeed it is. Anyway, PUZZLE TIME!
Draw a network & run the simulation, so that everyone gets infected with the "contagion".
(new rule: you can't cut the thick connections)
fan-flipping-tastic
This madness-spreading is called an "information cascade". Mr. Newton fell for such a cascade in 1720. The world's financial institutions fell for such a cascade in 2008.

However: this simulation is wrong. Most ideas don't spread like viruses. For many beliefs and behaviors, you need to be "exposed" to the contagion more than just once in order to be "infected". So, network scientists have come up with a new, better way to describe how ideas/behaviors spread, and they call it... “Complex Contagions!”
Let's bring back "thresholds" and the binge-drinking example! When you played with this the first time, people didn't change their behavior.

Now, let's simulate what happens if people start drinking when 50%+ of their friends do! Before you start the sim, ask yourself what you think should happen.

Now, run the sim, and see what actually happens!
Unlike our earlier "fake news" contagion, this contagion does not spread to everyone! The first few people get "infected", because although they're only exposed to one binge-drinker, that binge-drinker is 50% of their friends. (yeah, they're lonely) In contrast, the person near the end of the chain did not get "infected", because while they were exposed to a binge-drinking friend, they did not pass the 50%+ threshold.
The relative % of "infected" friends matters. That's the difference between the complex contagion theory, and our naive it-spreads-like-a-virus simple contagion theory. (you could say "simple contagions" are just contagions with a "more than 0%" infection threshold)
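The difference between the two theories comes down to a single threshold parameter. This sketch uses a toy chain network (not the article's interactive sim): a person is infected once the infected share of their friends reaches the threshold — a tiny threshold gives the "spreads-like-a-virus" simple contagion, 0.5 gives the complex one.

```python
def spread(friends, seeds, threshold):
    """Infect anyone whose share of infected friends reaches the threshold."""
    infected = set(seeds)
    changed = True
    while changed:
        changed = False
        for person, fs in friends.items():
            share = sum(f in infected for f in fs) / len(fs)
            if person not in infected and share >= threshold:
                infected.add(person)
                changed = True
    return infected

# A toy chain A-B-C-D, where D also has two "lonely" friends E and F.
net = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
       "D": ["C", "E", "F"], "E": ["D"], "F": ["D"]}

simple = spread(net, {"A"}, threshold=1e-9)   # "more than 0%": any exposure
complex_ = spread(net, {"A"}, threshold=0.5)  # needs 50%+ of one's friends
print(sorted(simple))    # ['A', 'B', 'C', 'D', 'E', 'F'] -- reaches everyone
print(sorted(complex_))  # ['A', 'B', 'C'] -- D (3 friends, 1 infected) stops it
```

B and C each have two friends, so one infected friend is already 50% and they flip; D has three friends, so one infected friend is only 33% and the complex contagion dies there — exactly the chain effect described above.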
However, contagions aren't necessarily bad — so enough about crowd madness, what about... ...crowd wisdom?
Here, we have a person who volunteers to... I don't know, rescue people in hurricanes, or tutor underprivileged kids in their local community, or something cool like that. Point is, it's a "good" complex contagion. This time, though, let's say the threshold is only 25% — people are willing to volunteer, but only if 25% or more of their friends do so, too. Hey, goodwill needs a bit of social encouragement.

← Get everyone "infected" with the good vibes!
NOTE: Volunteering is just one of many complex contagions! Others include: voter turnout, lifestyle habits, challenging your beliefs, taking time to understand an issue deeply — anything that needs more than one "exposure". Complex contagions aren't necessarily wise, but being wise is a complex contagion.
(So what's a real-life simple contagion? Usually bits of trivia, like, "the possum has 13 nipples") Now, to really show the power and weirdness of complex contagions, let's revisit... ...an earlier puzzle
Remember this? This time, with a complex contagion , it'll be a bit tougher...
Try to "infect" everyone with complex wisdom!
(feel free to just hit 'start' and try as many solutions as you want) HOT DANG
Now, you may think that you just need to keep adding connections to spread any contagion, "complex" or "simple", good or bad, wise or mad. But is that really so? Well, let's revisit... ...another earlier puzzle
If you hit "start" below, the complex contagion will just spread to everyone. No surprise there. But now, let's do the opposite of everything we've done before: draw a network to prevent the contagion from spreading to everyone!
You see? While more connections will always help the spread of simple ideas, more connections can hurt the spread of complex ideas! (makes you wonder about the internet, hm?) And this isn't just a theoretical problem. This can be a matter of life... ...or death.
The people at NASA were smart cookies. I mean, they'd used Newton's theories to get us to the moon. Anyway, long story short, in 1986, despite warnings from the engineers, they launched the Challenger, which blew up and killed 7 people. The immediate cause: it was too cold that morning.
The less immediate cause: the managers ignored the engineers' warnings. Why? Because of groupthink. When a group is too closely knit, (as they tend to be at the top of institutions) they become resistant to complex ideas that challenge their beliefs or ego.
So, that's how institutions can fall to crowd madness. But how can we "design" for crowd wisdom? In short, two words: Bonding & Bridging
← Too few connections, and an idea can't spread.
Too many connections, and you get groupthink.
Draw a group that hits the sweet spot: just connected enough to spread a complex idea!
Simple enough! The number of connections within a group is called bonding social capital. But what about the connections... ...between groups? As you may have already guessed, the number of connections between groups is called bridging social capital. This is important, because it helps groups break out of their insular echo chambers!
Build a bridge, to "infect" everyone with complex wisdom:
Like bonding, there's a sweet spot for bridging, too. (extra challenge: try drawing a bridge so thick that the complex contagion can't pass through it!) Now that we know how to "design" connections within and between groups, let's... ...do BOTH at the same time! FINAL PUZZLE!
Draw connections within groups (bonding) and between groups (bridging) to spread wisdom to the whole crowd:
Congrats, you've just drawn a very special kind of network! Networks with the right mix of bonding and bridging are profoundly important, and they're called... “Small World Networks”
"Unity without uniformity". "Diversity without division". "E Pluribus Unum: out of many, one".
No matter how it's phrased, people across times and cultures often arrive at the same piece of wisdom: a healthy society needs a sweet spot of bonds within groups and bridges between groups. That is:
Not this...
(because ideas can't spread)
nor this...
(because you'll get groupthink)
...but THIS: Network scientists now have a mathematical definition for this ancient wisdom: the small world network. This optimal mix of bonding+bridging describes how our neurons are connected, fosters collective creativity and problem-solving, and even once helped US President John F. Kennedy (barely) avoid nuclear war! So, yeah, small worlds are a big deal. ok, let's wrap this up...
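That "mathematical definition" can be made concrete. The sketch below builds a toy ring of 30 people (in the spirit of Watts & Strogatz, though not their exact rewiring algorithm) and measures the two small-world ingredients: average path length (degrees of separation) and clustering (how many of your friends know each other), before and after adding a few bridges.

```python
from collections import deque
from itertools import combinations

def ring_lattice(n, k):
    """Ring lattice: each node links to its k nearest neighbors on each side."""
    g = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            g[i].add((i + d) % n)
            g[i].add((i - d) % n)
    return g

def avg_path_length(g):
    """Average shortest-path distance over all pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

def clustering(g):
    """Average fraction of a node's friend-pairs that are themselves friends."""
    cs = []
    for u, nb in g.items():
        nb = list(nb)
        if len(nb) < 2:
            continue
        links = sum(1 for a, b in combinations(nb, 2) if b in g[a])
        cs.append(links / (len(nb) * (len(nb) - 1) / 2))
    return sum(cs) / len(cs)

ring = ring_lattice(30, 2)          # pure "bonding": clustered but far apart
small_world = ring_lattice(30, 2)
for a, b in [(0, 15), (7, 22), (4, 26)]:  # a few "bridges" across the ring
    small_world[a].add(b)
    small_world[b].add(a)

print(avg_path_length(ring), clustering(ring))
print(avg_path_length(small_world), clustering(small_world))
# Three bridges sharply cut the average distance, while clustering stays high.
```

A handful of bridges buys short paths without sacrificing tight-knit groups: low degrees of separation plus high clustering, the small-world sweet spot.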
(pst... wanna know a secret?) Sandbox Mode controls — Contagion: simple / complex; The Contagion's Color; Tools: Draw Network, Add Person, Add "Infected", Drag Person, Delete Person, Clear All. (...or, use keyboard shortcuts!) [1]: Add Person     [2]: Add "Infected"
[Space]: Drag     [Backspace]: Delete
IN CONCLUSION: it's all about...
Contagions & Connections
Contagions: Like how neurons pass signals in a brain, people pass beliefs & behaviors in a society. Not only do we influence our friends, we also influence our friends' friends, and even our friends' friends' friends! (“be the change you wanna see in the world” etc etc) But, like neurons, it's not just signals that matter, it's also...
Connections: Too few connections and complex ideas can't spread. Too many connections and complex ideas get crushed by groupthink. The trick is to build a small world network, the optimal mix of bonding and bridging: e pluribus unum.
(wanna make your own simulations? check out Sandbox Mode, by clicking the (★) button below!)
So, what about our question from the very beginning? Why do some crowds turn to...
...wisdom and/or madness?
From Newton to NASA to
network science, we've covered a lot here
today. Long story short, the madness of crowds
is not necessarily due to the individual people, but due
to how we're trapped in a network's sticky web.
That does NOT mean abandoning personal responsibility, for
we're also the weavers of that web. So, improve your contagions:
be skeptical of ideas that flatter you, spend time understanding
complex ideas. And, improve your connections: bond with similar
folk, but also build bridges across cultural/political divides.
We can weave a wise web. Sure, it's harder than doodling
lines on a screen... ...but so, so worth it.
“The great triumphs and tragedies of history are caused, not by people being fundamentally good or fundamentally bad, but by people being fundamentally people.”
~ Neil Gaiman & Terry Pratchett
<3
I want
to know more
created by

BILJANA PETRESKA VON RITTER


inspired by the sublime game

"Crowds" de NICKY CASE


the music is

"Moonshine" by KETSA


thanks to

HEP VAUD, MODULO, and the beta-testers

“virtually all [college] students reported that their friends drank more than they did.”

“Biases in the perception of drinking norms among college students” by Baer et al (1991)

“The Majority Illusion”

“The Majority Illusion in Social Networks” by Lerman et al (2016).
Related: The Friendship Paradox.

“strong statistical evidence that smoking, health, happiness, voting patterns, and cooperation levels are all contagious”

From Nicholas Christakis and James Fowler's wonderfully-written, layperson-accessible book, Connected (2009).

“some evidence that suicides are [contagious], too”

“Suicide Contagion and the Reporting of Suicide: Recommendations from a National Workshop” by O'Carroll et al (1994), endorsed by the frickin' Centers for Disease Control & Prevention (CDC).

“some evidence that mass shootings are [contagious], too”

“Contagion in Mass Killings and School Shootings” by Towers et al (2015).

Also see: the Don't Name Them campaign, which urges that news outlets DO NOT air mass murderers' names, manifestos, and social media feeds. This spreads the contagion. Instead, news outlets should focus on the victims, first responders, civilian heroes, and the grieving, healing community.

“The world's financial institutions fell for such a cascade in 2008.”

“Lemmings of Wall Street” by Cass Sunstein, is a quick, non-technical read. Published in Oct 2008, right in the wake of the crash.

“the complex contagion theory.”

“Threshold Models of Collective Behavior” by Granovetter (1978) was the first time, as far as I know, anyone described a "complex contagion" model. (although he didn't use that specific name)

“Complex Contagions and the Weakness of Long Ties” by Centola & Macy (2007) coined the phrase "complex contagion", and showed the important differences between that and "simple contagion".

“Evidence for complex contagion models of social contagion from observational data” by Sprague & House (2017) empirically showed that complex contagions do, in fact, exist. (at least, in the social media data they looked at)

Finally, “Universal behavior in a generalized model of contagion” by Dodds & Watts (2004) proposes a model that unifies all kinds of contagions: simple and complex, biological and social!

“the possum has 13 nipples”

arranged in a ring of 12 nipples, plus one in the middle

“groupthink”

This Orwell-inspired phrase was coined by Irving L. Janis in 1971. In his original article, Janis investigates cases of groupthink, lists its causes, and — thankfully — some possible remedies.

“bonding and bridging social capital”

These two types of social capital — "bonding" and "bridging" — were named by Robert Putnam in his insightful 2000 book, Bowling Alone. His discovery: across almost all empirical measures of social connectedness, Americans are more alone than ever. Golly.

“bridging social capital has a sweet spot”

“The Strength of Weak Ties” by Granovetter (1973) showed that connections across groups help spread simple contagions (like information), but “Complex Contagions and the Weakness of Long Ties” by Centola & Macy (2007) showed that connections across groups may not help complex contagions, and in fact can hurt their spread!

“the small world network”

The idea of the "small world" was popularized by Travers & Milgram's 1969 experiment, which showed that, on average, any two random people in the United States were just six friendships apart — "six degrees of separation"!

The small-world network got more mathematical meat on its bones with “Collective dynamics of small-world networks” by Watts & Strogatz (1998), which proposed an algorithm for creating networks with both low average path length (low degree of separation) and high clustering (friends have lots of mutual friends) — that is, a network that hits the sweet spot!

You can also play with the visual, interactive adaptation of that paper by Bret Victor (2011).

“[small world networks] describe how our neurons are connected”

“Small-world brain networks” by Bassett & Bullmore (2006).

“[small world networks] give rise to collective creativity”

“Collaboration and Creativity: The Small World Problem” by Uzzi & Spiro (2005). This paper analyzed the social network of the Broadway scene over time, and discovered that, yup, the network's most creative when it's a "small world" network!

“[small world networks] give rise to collective problem-solving”

See “Social Physics” by MIT Professor Alex "Sandy" Pentland (2014) for a data-based approach to collective intelligence.

“[small world networks] helped John F. Kennedy (barely) avoid nuclear war!”

Besides the NASA Challenger explosion, the most notorious example of groupthink was the Bay of Pigs fiasco. In 1961, US President John F. Kennedy and his team of advisors thought — for some reason — it would be a good idea to secretly invade Cuba and overthrow Fidel Castro. They failed. Actually, worse than failed: it led to the Cuban Missile Crisis of 1962, the closest the world had ever been to full-scale nuclear war.

Yup, JFK really screwed up on that one.

But, having learnt some hard lessons from the Bay of Pigs fiasco, JFK re-organized his team to avoid groupthink. Among many things, he: 1) actively encouraged people to voice criticism, thus lowering the "contagion threshold" for alternate ideas. And 2) he broke his team up into sub-groups before reconvening, which gave their group a "small world network"-like design! Together, this arrangement allowed for a healthy diversity of opinion, but without being too fractured — a wisdom of crowds.

And so, with the same individuals who decided the Bay of Pigs, but re-arranged collectively to decide on the Cuban Missile Crisis... JFK's team was able to reach a peaceful agreement with Soviet leader Nikita Khrushchev. The Soviets would remove their missiles from Cuba, and in return, the US would promise not to invade Cuba again. (and also agreed, in secret, to remove the US missiles from Turkey)

And that's the story of how all of humanity almost died. But a small world network saved the day! Sort of.

You can read more about this on Harvard Business Review, or from the original article on groupthink.

“we influence [...] our friends' friends' friends!”

Again, from Nicholas Christakis and James Fowler's wonderful book, Connected (2009).

“be skeptical of ideas that flatter you”

yes, including the ideas in this explorable explanation.

★ Sandbox Mode ★

The keyboard shortcuts (1, 2, space, backspace) work in all the puzzles, not just Sandbox Mode! Seriously, you can go back to a different chapter, and edit the simulation right there. In fact, that's how I created all these puzzles. Have fun!