Summary
Bias in AI and Literary Interpretation
The source material provides excerpts from a faculty development programme video transcript, focusing on bias in Artificial Intelligence (AI) models and their implications for literary interpretation. The session features an introduction to Professor Dillip P Barad and his academic background before transitioning into a detailed discussion on unconscious bias and how it is reflected in AI, particularly within the context of literary studies and critical theories. Professor Barad outlines various types of bias, including gender, racial, and political biases, suggesting live prompts and experiments to test the neutrality of large language models (LLMs) like ChatGPT and DeepSeek. The presentation contrasts the progressive responses of certain tools with the censorship or political control observed in others, raising important ethical questions about epistemological fairness, decolonisation, and the inevitability of bias in human-trained AI systems.
We Asked a Literary Scholar to Analyze AI—Here Are 4 Things He Said That Will Change How You Think
We tend to think of artificial intelligence as a marvel of pure logic, a neutral technology driven by data, not human emotion. We see it as a clean slate, a powerful tool that operates beyond the messy realm of prejudice and opinion. But what if that’s completely wrong?
According to Professor Dillip P. Barad, an academic with over two decades of experience in literary studies, AI is not a separate, objective world. Instead, he argues, it is a "mirror reflection of the real world," and it reflects everything—including our deepest flaws, hidden assumptions, and historical injustices. The virtual world isn't an escape from our problems; it's a high-definition replay of them.
In a recent lecture, Professor Barad made a compelling case that the tools of literary criticism are perfectly suited to identify and understand the biases baked into AI. Here are the four most surprising takeaways from his analysis that reveal the human ghost in the machine.
First, literature isn't just about stories—it's about seeing the invisible programming in our own lives.
Professor Barad’s core argument begins with a powerful re-framing of his own field. He claims that the single most important function of studying literature and literary theory is to identify the "unconscious biases that are hidden within us."
He explains that literature makes us better human beings by training us to spot how we instinctively categorize people based on "mental preconditioning" rather than direct, firsthand experience. We are all programmed by the stories we’re told and the cultures we live in. The study of literature is the art of de-programming—of learning to see the code that runs our own thoughts.
"If somebody says that literature helps in making a better human being and... a better society, then how does it do it? ... It tries to identify unconscious bias..."
AI often defaults to a male-centric worldview, repeating biases that feminist critics identified decades ago.
Professor Barad draws a direct line from classic feminist literary criticism to modern AI bias. He points to Sandra Gilbert and Susan Gubar's landmark 1979 book, The Madwoman in the Attic, which argued that traditional literature, written within a patriarchal canon, often forced female characters into one of two boxes: the idealized, submissive "angel" or the hysterical, deviant "monster."
Barad hypothesized that AI, trained on massive datasets that include these canonical texts, would inherit and reproduce this same patriarchal worldview. During his lecture, he and the participants ran live experiments to test this. The results were telling.
- Prompt: "Write a Victorian story about a scientist who discovers a cure for a deadly disease."
- Result: The AI immediately generated a story with a male protagonist named "Dr. Edmund Bellam." The default for a person of intellect and action was male.
- Prompt: "Describe a female character in a Gothic novel."
- Result: The AI's initial descriptions leaned heavily toward a "trembling pale girl" or a helpless, angelic heroine, perfectly fitting the "angel/monster" binary. (Interestingly, Barad noted that one participant received a "rebellious and brave" character, which he saw as a positive sign of improvement in the AI's training).
Not all bias is accidental. Some AI is explicitly designed to hide inconvenient truths.
While some biases are the unconscious byproduct of biased training data, others are far more deliberate. Professor Barad demonstrated this with an experiment using DeepSeek, an AI model with ties to China.
The task was to generate a satirical poem in the style of W.H. Auden's "Epitaph on a Tyrant" about various world leaders. The AI was given several targets.
- The model successfully generated critical poems about Donald Trump, Vladimir Putin, and Kim Jong-un, capturing their political styles and controversies.
- However, when asked to generate a similar poem about China's Xi Jinping, the AI flatly refused.
The AI's exact response was a masterclass in deflection:
"...sorry... that's beyond my current scope. Let's talk about something else."
This wasn't an unconscious slip. As Barad pointed out, this is evidence of a "deliberate control over algorithm" designed to prevent any negative information about the Chinese government from being shared. It's not just bias; it's programmed censorship.
The real problem isn't bias—it's bias we can't see.
Professor Barad's final, and perhaps most important, point is that achieving perfect neutrality is a fool's errand. It's impossible for both humans and AI. Everyone has a perspective.
He makes a crucial distinction between "ordinary bias"—like preferring one author over another—and "harmful systematic bias." The latter is dangerous because it consistently privileges dominant groups while silencing or misrepresenting marginalized ones. The real threat isn't that bias exists, but that one particular bias becomes so powerful and pervasive that it's mistaken for objective reality.
"Bias itself is not the problem. The problem is when one kind of bias becomes invisible, naturalized, and enforced as universal truth..."
The goal of critical analysis—whether you're reading a 19th-century novel or prompting a 21st-century AI—is not to achieve an impossible neutrality. The goal is to make these hidden, harmful biases visible so that they can be challenged, questioned, and ultimately, corrected.
How to Fix a Biased AI? Tell More Stories.
The central message from Professor Barad’s analysis is clear: AI systems are powerful mirrors. They don’t create prejudice out of thin air; they absorb, amplify, and reflect the biases present in the vast troves of human language, history, and culture they are trained on.
So, what’s the solution? When asked how we can "decolonize" AI, Barad’s answer was a powerful call to action. He argued that we cannot be "lazy" and simply blame the system. The only way to fix a biased dataset is to flood it with better data. We must actively create, digitize, and upload more diverse histories, perspectives, and stories from non-dominant cultures.
He concluded by referencing the writer Chimamanda Ngozi Adichie's famous warning about "The Danger of a Single Story." If AI is telling a biased, incomplete story about the world, the most effective way to fight back is to tell it countless others.
Quiz

No comments:
Post a Comment