Hi, my name is Deborah Raji. I'm a Mozilla fellow, and I work with the Algorithmic Justice League. The Algorithmic Justice League is a research organization that works very hard to make sure that AI is developed in a way that is inclusive and effective for everyone. Right now, a lot of our work has also involved doing audits ourselves of these deployed systems. We analyze situations, like I mentioned, in anything from healthcare to hiring to facial recognition: we come into those situations and try to understand how the deployment of that system impacts different marginalized groups.

One example is a project called Gender Shades, where we looked at facial recognition systems that were deployed in the real world and asked the question: is this a system that works for everyone? Although these systems were operating at almost 100% accuracy for, for example, lighter-skinned male faces, they were performing at less than 70% accuracy for darker-skinned women. This was a huge story that escalated in the press, and that project is a lot of what we're known for.
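As an aside on method: the heart of an audit like Gender Shades is disaggregated evaluation, computing accuracy separately for each demographic subgroup rather than as one aggregate number. Here is a minimal sketch of that idea; all records, group labels, and numbers below are hypothetical illustrations, not the actual Gender Shades code or data.

```python
# Disaggregated accuracy sketch: overall accuracy can look fine while
# hiding large gaps between subgroups. All records here are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += predicted == actual  # bool counts as 0/1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical gender-classifier outputs, grouped by skin type and
# gender in the spirit of the Gender Shades methodology.
sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassified
    ("darker-skinned female", "female", "female"),
]

for group, acc in accuracy_by_group(sample).items():
    print(f"{group}: {acc:.0%}")
```

On this toy input the aggregate accuracy is 75%, but the breakdown shows 100% for one group and 50% for the other, which is exactly the kind of gap an aggregate number hides.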
So you might have a company that builds a tool for doctors or for teachers, whereas the affected population in that situation would actually be the students or the patients. Those people very rarely have any kind of influence on the types of features that are emphasized in the development of the AI system, or the type of data that's collected. As a result, those experiencing the weight of the decisions these tools make end up almost erased from the entire process of development, unless actively sought out.
Yeah, so there are a lot of situations in which humans are making very important decisions, an example being hiring, or a judge making a decision in a criminal case, and there's certainly a lot of bias involved in that: a lot of the perspective of the person making the decision influences the nature of the outcome. In the same way, if you replace that human decision maker with an algorithm, there's bound to be some level of bias involved. The other aspect of this is that we tend to trust algorithms and see them as neutral in a way that we don't with humans.
Yeah, so I got into this field almost accidentally. I studied robotics engineering in university, and I was playing a lot with AI as part of my experience in coding, in hackathons, and in building projects. I realized very quickly that a lot of the datasets, for example, do not include a lot of people that look like me. A lot of the datasets that we use to pretty much teach these algorithmic systems what a face looks like, what a hand looks like, what a human looks like, don't actually include a lot of people of color and other demographics. That was probably the biggest red flag that I saw in the industry immediately.
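The dataset skew described above can be surfaced with a very simple representation check before training. The sketch below is hypothetical; the dataset annotations and field names are invented for illustration.

```python
# Representation check: what share of training examples does each
# demographic group contribute? Dataset and field names are hypothetical.
from collections import Counter

def representation(examples, key="skin_type"):
    """examples: iterable of dicts carrying a demographic annotation."""
    counts = Counter(example[key] for example in examples)
    n = sum(counts.values())
    return {group: count / n for group, count in counts.items()}

# Invented face-dataset annotations.
faces = [
    {"skin_type": "lighter"}, {"skin_type": "lighter"},
    {"skin_type": "lighter"}, {"skin_type": "darker"},
]

for group, share in representation(faces).items():
    print(f"{group}: {share:.0%} of training examples")
```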
I think a lot of the time we think of AI systems as these sci-fi sentient robot overlords, but they're really just a bunch of decisions being made by actual humans. Our understanding of AI systems as this separate thing makes it really hard to hold anyone accountable when a bad decision is made.