Technology Day at IUP
On January 15th, 2020, I had the pleasure of conducting a hands-on workshop demonstrating Microsoft Sway, a cloud-based presentation program. I presented the 50-minute session at Indiana University of Pennsylvania's Technology Day.
Below is the session plan built in Sway.
Resource:
Photo by Robynne Hu on Unsplash
Algorithm Bias
Let’s think for a moment about how we navigate our daily routine. From the time our feet hit the floor in the morning until our heads hit the pillow at night, and even as we sleep, digital devices are collecting, analyzing, and storing our personal data. We rely on artificial intelligence (AI) for school assignments, work emails, health tracking, and social interactions.
Think about all the times a streaming service has suggested a movie or TV series, or a topic you searched on Google has morphed into a series of ads in your social media feed. Those recommendations and ads are based on algorithms that examine your digital environment: what you have searched, watched, or bought. Artificial intelligence is about designing software that can analyze and assess that digital environment and make choices for us, including choices about online learning.
Artificial intelligence lies behind these algorithmic data sets, and algorithmic data sets can be biased, which can result in a loop of cultural, social, and economic unfairness. AI is increasingly used in education, training, and learning, and it ought to reflect the diversity of all its users. It should level the playing field for students and workers, especially as schools and businesses around the world embrace differences, fresh perspectives, and non-traditional skill sets. Human interactions with AI should be safe, secure, valuable, and useful to everyone. So why aren't they?
Keywords: artificial intelligence, technochauvinism, algorithm bias, adult education
Technochauvinism
The mainstream belief about the role of technology in society is greatly influenced by utopian visions of small, homogeneous groups of people coding perfect algorithms and developing inclusive artificial intelligence. We have, in a way, been programmed by Silicon Valley to accept that the technological solution is always the better solution, a mindset Broussard (2018) calls “technochauvinism.” For a real-world example of technochauvinism, we can look at Twitter. Twitter believes it is better to use an algorithm to push conversation snippets from other users into your feed. The algorithm uses data about what you post and then surfaces what it “thinks” you are interested in as recommendations. We are led to believe that technology is neutral and its results objective. It is not, and because of Twitter’s current algorithm, the conversation feed tends to trend on negative buzzwords (Broussard, 2018).
In his review of Meredith Broussard’s book Artificial Unintelligence: How Computers Misunderstand the World, Wilson (2018) discusses the relevance of keeping a human connection in tandem with the development and implementation of algorithms and their uses in AI. Wilson (2018) concurs with Broussard’s argument that the gap between what we imagine technology in the classroom or workplace can do and what technology such as computers or mobile devices can actually do is vast (Broussard, 2018). Twitter could hire a community manager who uses technological tools to help improve the conversation, in turn creating a more inclusive and global user experience. Technology is advancing at a rapid pace, yet technology companies are still struggling to lessen the cultural divide. Every day, technology is developed with the human element left out of the equation, creating machine learning without a human “in the loop,” where automated systems exclude students, social groups, and communities whose personal demographics do not meet the algorithm’s parameters.
Algorithm Bias
“Discrimination is an increasing concern when we use algorithms and it really does not matter if the algorithm intentionally or unintentionally engages in discrimination: the outcome on the people who are affected is the same” (Datta, Tschantz & Datta, 2015, as cited in Jackson, 2018). Jackson (2018) explores the various ways in which algorithms fuel biased profiling among vulnerable populations, thereby reinforcing rather than overriding existing biases.
How algorithms are designed, developed, and deployed in data capture and analysis can affect people’s lives in concealed and subtle ways, with significant consequences for their home, work, and family life. Jackson (2018) cites Amazon and its Prime shipping service. Amazon used a data set of neighborhoods based on income and zip code, and it was accused of “prime-lining” because the algorithm excluded low-income minority neighborhoods from service: “low-income” turned out to be a proxy for race. This is an example of unintentional bias, and of how easy it is to engage in biased behavior even when bias is deliberately excluded at the outset. Jackson also cites an example of intentional, by-design algorithmic bias: a credit card company lowered a businessman’s credit limit because an algorithm noted that he frequented stores in predominantly African American neighborhoods with poor repayment histories. The algorithm used his purchasing information to profile and predict what it presented as an unbiased representation of his financial habits.
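To make the proxy problem concrete, here is a minimal Python sketch. The neighborhoods, incomes, and cutoff are invented for illustration and are not from Jackson’s article; the point is only that a rule that never mentions race can still sort by it.

# Hypothetical illustration of a proxy variable: the eligibility
# rule below never mentions race, yet in this invented data the
# excluded zip codes are exactly the high-minority ones.
neighborhoods = [
    # (zip code, median income, minority share): invented numbers
    ("15701", 62_000, 0.12),
    ("15705", 58_000, 0.18),
    ("15801", 31_000, 0.64),
    ("15902", 28_000, 0.71),
]

INCOME_CUTOFF = 45_000  # a facially "neutral" business rule

for zip_code, income, minority_share in neighborhoods:
    eligible = income >= INCOME_CUTOFF  # race never appears here
    print(f"{zip_code}: eligible={eligible}, minority share={minority_share:.0%}")

Both excluded zip codes are also the two with the highest minority share; income has quietly acted as a proxy for race.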
Algorithms are sets of unambiguous specifications or rules for performing calculations, data processing, automated reasoning, and other tasks that reduce decision-making to a number (Jackson, 2018). That number becomes a data set, the data set is analyzed by a computer program and rendered into a repository, and repositories of data allow those who control them to explore the patterns that emerge and to identify behavioral traits common to certain groups of people but not to others (Jackson, 2018). Algorithms use data to create and infer meaning, embedding patterns in the software behind AI, mobile devices, and social media. Once embedded, algorithms grow and spread: they pull in current data, combine it with old data, and generate new data, and as they do, unintentional or intentional bias develops and spreads like a virus. Anyone can be affected.
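A toy simulation can show that viral feedback loop. The numbers here are my own invention, not Jackson’s: a system that over-recommends whichever group already dominates its data, and then feeds its own output back in as new data, compounds a small initial skew round after round.

# Hypothetical feedback loop: each round the system over-selects
# the already-dominant group A, and its output becomes new input.
share_a = 0.55          # small initial skew toward group A
total_records = 1_000   # size of the starting data set

for round_num in range(1, 6):
    new_records = 500
    new_a = new_records * min(1.0, share_a * 1.2)  # 20% over-selection
    total_a = share_a * total_records + new_a
    total_records += new_records
    share_a = total_a / total_records
    print(f"round {round_num}: group A share = {share_a:.1%}")

In this sketch a 55% skew climbs to roughly 68% in five rounds: old biased data begets new biased data.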
Artificial Intelligence & Society
Algorithms are increasingly being used to make sensitive decisions. For instance, algorithms calculate and assess which completed employment applications move to the next step in the hiring process, and simple errors in data entry can disqualify a well-qualified applicant from obtaining work.
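As a hypothetical sketch of that brittleness (the fields, threshold, and applicants below are invented, not drawn from Reynolds): a rigid automated screener rejects a strong candidate over a single mistyped field, and no human ever reviews the decision.

# Hypothetical rigid resume screener: one data-entry typo
# (experience recorded as 0 years) disqualifies a good applicant.
REQUIRED_SKILL = "python"
MIN_YEARS = 3

applicants = [
    {"name": "A. Rivera", "skills": ["python", "sql"], "years": 6},
    # Data-entry error: six years of experience mistyped as 0.
    {"name": "B. Chen", "skills": ["python", "java"], "years": 0},
]

for person in applicants:
    advance = REQUIRED_SKILL in person["skills"] and person["years"] >= MIN_YEARS
    print(f"{person['name']}: advance to next round = {advance}")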
Algorithms are also being used to decide which individuals receive loans on the basis of their zip codes, or who should receive bail based on the neighborhood they will return to once released (Reynolds, 2017). Research clearly shows a history of algorithms and AI containing hidden biases, which begs the question, “are we presenting a level playing field for everyone?” (Yanisky-Ravid & Hallisey, 2019). Yanisky-Ravid and Hallisey (2019) propose a new AI Data Transparency Model that focuses on disclosure of the data rather than on the initial software program and its programmers.
Artificial intelligence (AI) can be a great tool. Its benefits come from its ability to evaluate, learn, and adopt a dynamic strategy to create an immersive experience. Yanisky-Ravid and Hallisey (2019) argue that algorithm development needs transparency and a framework to identify and eliminate algorithmic bias. Programmers are attempting to remove bias from AI, but without proper training and a diverse team they can inadvertently inject their own cognitive biases. For now, Yanisky-Ravid and Hallisey (2019) contend, we need to strive to identify the risks of faulty data by hiring data managers to conduct critical audits of the data used to train AI systems.
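One concrete audit such a data manager might run is a disparate-impact check. The sketch below uses the four-fifths rule, a common screening heuristic from U.S. employment practice, with invented numbers; it is my illustration, not a procedure the article prescribes.

# Hypothetical disparate-impact audit using the four-fifths rule:
# flag any group selected at under 80% of the top group's rate.
outcomes = {
    # group: (number selected, number of applicants); invented data
    "group_a": (90, 200),
    "group_b": (40, 150),
}

rates = {group: selected / total for group, (selected, total) in outcomes.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    status = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")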
Discussion & Conclusion
The AI discussed in the reviewed articles is the same AI that impacts the education field through individualized learning applications. AI can be brought into both the traditional and distance classroom through simulators, tutorial programs, and interactive games. These AI systems are developed to adapt to students’ diverse needs and create personalized education, leading educators to rethink the teaching-learning process, since automated assistance offers students a new and attractive perspective as AI parameters facilitate learning (Ocaña-Fernández, Valenzuela-Fernández, & Garro-Aburto, 2019). Herein lies the issue: an enormous mass of global citizens is in an unprivileged position with respect to AI technologies, and many ethnic groups who do have access to AI are finding algorithmic biases in face-recognition software; some AI programs embedded in augmented reality have failed to recognize dark skin tones (Buell, 2018). These issues make clear the problems that arise when technology is developed in a bubble, without regard for how diverse the world is. We need to get individuals to think twice about what is going on in the rapidly advancing brains that power artificial intelligence before it is too late (Buell, 2018).
It is time we start thinking about constructing platforms that can identify bias, not only by collecting people’s experiences but also by auditing existing software (Wilson, 2018). We need to start creating a framework that facilitates more inclusive training sets and enables developers to design ethically, instead of exploiting the blind spots and vulnerabilities of people’s perception and allowing companies to influence what people do, and the decisions they make, without their realizing the implications.
References
Buell, S. (2018). MIT researcher: Intelligence has a race problem and we need to fix it. intelligence-race-dark-skin-bias/
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. Cambridge, MA: MIT Press.
Jackson, J. (2018). Algorithmic bias. Journal of Leadership, Accountability and Ethics, 15(4), 55-65.
Ocaña-Fernández, Y., Valenzuela-Fernández, L. A., & Garro-Aburto, L. L. (2019). Artificial intelligence and its implications in higher education. Propósitos y Representaciones.
Reynolds, M. (2017). Bias test to keep algorithms ethical. New Scientist, 234(3119), 10.
Wilson, T. (2018). Artificial unintelligence: How computers misunderstand the world [Book review]. Information Research, 23(2).
Yanisky-Ravid, S., & Hallisey, S. K. (2019). “Equality and privacy by design”: A new model of artificial intelligence data transparency via auditing, certification, and safe harbor regimes. Fordham Urban Law Journal, 46(2), 428–486.
Photo by h heyerlein on Unsplash