Artificial intelligence (AI) refers to a set of technologies that enable computers to perform advanced functions typically thought to require human intelligence. These functions might include recognizing faces, analyzing data, driving cars, creating art, interpreting and generating written and spoken language, and more. AI systems are trained on vast amounts of data, allowing them to identify patterns and relationships that humans may not be able to see. The AI learning process often involves algorithms: sets of rules or instructions that guide an AI's analysis and decision-making. Through continuous learning and adaptation, AI systems have become increasingly adept at performing tasks, from recognizing images to translating languages and beyond.
This guide contains a selection of resources that can help teachers and students learn about AI and AI literacy and navigate AI in classroom settings, giving us all a strong foundation for using AI technologies ethically and responsibly.
Suggested introductory articles:
Sources: AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology (Keun-woo Lee, Kelly Mills, Pati Ruiz, Merijke Coenraad, Judi Fusco, Jeremy Roschelle and Josh Weisgrau; Digital Promise, June 18, 2024); What is Artificial Intelligence (AI)? (Google Cloud); What is AI? Everyone thinks they know but no one can agree. And that’s a problem (Will Douglas Heaven; MIT Technology Review, July 10, 2024)
You've probably been hearing a lot about artificial intelligence (AI)—but what is it exactly? As stated in the previous tab, artificial intelligence refers to technologies that perform functions typically thought to require human intelligence. Though it might feel like AI has only recently entered our lives, it has actually been around since the 1950s. The Dartmouth Summer Research Project on Artificial Intelligence (known as the Dartmouth Workshop) was a 1956 summer workshop widely considered to be the founding event of artificial intelligence as a field.

Image: Participants at the Dartmouth Workshop. The Minsky Family (1956).
In the 1980s, "expert systems" (programs that answer questions or solve problems about a specific domain of knowledge) became widely used by corporations around the world to streamline processes like ordering computer systems and identifying compounds in spectrometer readings. In the 2000s, AI systems were trained on big data, leading to new systems that could perform tasks such as facial recognition, natural language processing, answering trivia questions (remember IBM's Watson?), and more.

Image: IBM's Watson competing on Jeopardy in 2011.
Since 2020, we have been in an "AI Boom" era, following the release of large language models such as ChatGPT that exhibit human-like knowledge, attention, and creativity. For more on the current state of AI, check out the following video:
Video: How will AI change the world? TED-Ed (2022). Stuart Russell discusses the current limitations of artificial intelligence and the possibility of creating human-compatible technology.
Sources: "History of artificial intelligence" & "Dartmouth Workshop" (Wikipedia)
There is a long history of depicting artificial beings in literature. Even in antiquity, thinkers and alchemists imagined artificial beings endowed with intelligence or consciousness by master craftsmen. The works listed below are in chronological order beginning in the 1800s and highlight some pivotal moments of AI in fiction.
For more films, see Wikipedia's list of artificial intelligence films or this list on Letterboxd.
2001: A Space Odyssey by Stanley Kubrick (director)
Silent Running by Douglas Trumbull (director)
Westworld by Michael Crichton (director)
Blade Runner by Ridley Scott (director)
The Matrix by Lana Wachowski, Lilly Wachowski (directors)
Moon by Duncan Jones (director)
Her by Spike Jonze (director)
Ex Machina by Alex Garland (director)
After Yang by Kogonada (director)
A generative AI system creates new text, images, or other media in response to prompts. As a student, it is important to exercise caution when using AI software, especially for coursework or when sharing data with these tools. To help you engage with these tools (especially generative ones) ethically and responsibly, see the AI literacy box below and read through the acceptable uses of generative AI services at IU prepared by University Information Technology Services (UITS). Always ask your professor or TA before using AI for a class assignment.
The remainder of this box contains a collection of software (all with free trial or free tier options). We have focused on AI that can assist with productivity, task management, studying, and organizing your work and personal life, rather than generative AI.
Adobe Firefly As part of IU's software license with Adobe, you have access to the Firefly web app and generative AI features inside apps like Photoshop and Illustrator as well as Adobe Stock.
See more scheduling assistants and comparison charts here.
Sources: The best AI productivity tools in 2024 (Miguel Rebelo, Zapier Blog)
Note: Though this can help you summarize research more quickly, it is important to be cautious and read papers yourself if you plan to cite them or include them in your research.
See a comparison chart of AI tools for research here.
Note: Always double-check generated citations.
Chatbots have the potential to revolutionize our lives, make work more efficient, and free up time so that people can focus on other tasks. However, it is important to be very cautious when using chatbots. Not only are they newly developed and continually evolving, but we have already seen bias in many other AI systems (see the "Centering Justice" tab for more). Before using an AI chatbot, make sure you understand the risks and be sure to use the AI Literacy Framework above to evaluate outputs. See below for articles on the risks of chatbots:
Claude Built for work and trained to be safe, accurate, and secure. Claude can answer nuanced questions and create a variety of content. Trained by Anthropic using Constitutional AI. While Claude is fast and well-organized, it is not connected to the internet and does not automatically provide sources. Free tier available.
Perplexity A research chatbot that is good at providing sources (which it lists in an easily-accessible sidebar). Though Perplexity gives nuanced answers in an easy-to-follow list, it does tend to rely on Reddit posts as sources, which most people cannot cite for their projects. Free tier available.
ChatGPT Offers meaningful answers with a good amount of context on a variety of topics. While ChatGPT is good at most tasks like research and writing emails, it can be slow at times and it can be tedious to get ChatGPT to cite its sources. Developed by OpenAI and free to use.
InterviewBy.ai Practice job interview questions tailored to your job description. Get instant AI feedback and suggestions to improve your answers. Free plan includes 3 questions/month.
Transcription
Visuals and Photography
AI Detectors
Though many people have grown up surrounded by AI technologies that have affected everything from traffic patterns to the products available at grocery stores, the recent release of ChatGPT brought AI to the forefront of our lives. The increased accessibility of AI brings with it a need for AI literacy. In this context, literacy does not simply refer to the ability to use AI technologies but to the combination of knowledge and skills that allow users to critically understand and evaluate AI tools in an increasingly digital world. In our daily lives, we implement information, media, financial, and health literacy when performing all kinds of tasks. When practicing AI literacy, one might ask questions like: How does this technology work? What kind of data was this system trained on? What biases are present in this technology? How does this impact my world and the world around me?
Source: AI Literacy, Explained (Alyson Klein; EducationWeek, May 10, 2023)
Whether we realize it or not, we utilize different literacies every day. For example, when we read the news we might use an information literacy framework to determine whether or not we can trust the media we encounter. We can ask questions about who created or funded an article, about why the message of the piece is being sent, and about what kind of research went into the piece.
Similarly, we can apply an AI literacy framework when utilizing AI-enabled technologies or engaging with the outputs of such systems. Various scholars and institutions have developed AI literacy frameworks to encourage users to think critically about their use of AI tools. These frameworks propose practices and benchmarks that define how users can understand and evaluate AI-enabled tools as well as how educators can support AI literacy development.
Below we give a basic overview of just a few of these AI literacy frameworks. In doing so, we aim to highlight how this one concept can be applied differently: while the goal of AI literacy is largely consistent, there is currently no single unified definition or assessment framework for it. Of course, we encourage you to explore more deeply the research and documentation of each framework to gather a more complete understanding of the principles and practices they propose.
Ng et al. (2021) conducted a literature review on AI literacy, synthesizing existing research to create a consolidated definition of AI literacy and to shed light on associated teaching and ethical concerns. The researchers identified four overarching aspects of AI literacy from the literature:
By establishing this foundational understanding of AI literacy, the authors lay the “groundwork for future research such as competency development and assessment criteria on AI literacy.” Indeed, since the study’s publication, multiple organizations and academic institutions have developed their own AI literacy models and curricula. For more information, consult the original study, cited below.
Source: Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). “Conceptualizing AI literacy: An exploratory review.” Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
With its extensive computational infrastructure, the University of Florida is positioned at the forefront of AI research and development. In an effort to utilize these resources and to advance the university’s mission, a task force made up of students, faculty, staff, and administrators developed the AI Across the Curriculum program.
This five-year quality enhancement plan is “designed to provide students with the resources and skills to become successful digital citizens and global collaborators, acquire basic awareness and general knowledge of AI, have the opportunity to apply and use AI in relevant, discipline-specific ways, and develop foundational expertise in AI.”
Their AI literacy framework is largely based on the categories identified by Ng et al. (2021). In order to assess student learning and performance across these domains of AI literacy, the plan includes quantitative and qualitative assessment protocols, including a rubric of six student learning outcomes:
Source: Migliaccio, K., Southworth, J., Reed, D., Miller, D., Leite, M. C., & Donovan, M. (2024). AI Across the Curriculum: University of Florida 2024-2029 Quality Enhancement Plan. University of Florida. https://ai.ufl.edu/media/aiufledu/resources/25_01.08_UF-QEP_AI-Across-the-Curriculum.pdf
This framework “consists of three interconnected Modes of Engagement: Understand, Evaluate, and Use. The framework emphasizes that understanding and evaluating AI are critical to making informed decisions about if and how to use AI in learning environments.” Expanding on these overarching goals, the authors also propose six actionable AI-literacy practices, which include:
For a more detailed and comprehensive view of this framework, please refer to Digital Promise’s website.
Source: Mills, K., Ruiz, P., Lee, K., Coenraad, M., Fusco, J., Roschelle, J., & Weisgrau, J. (2024, May). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. https://doi.org/10.51388/20.500.12265/218
Recognizing the ever-evolving development of AI tools and their impact on the higher education environment, academic and technology teams at Barnard College developed an AI literacy framework to provide “a structure for learning to use AI, including explanations of key AI concepts and questions to consider when using AI.” The framework consists of a four-part pyramid structure that breaks down AI literacy into four levels:
Each level has its own set of core competencies, key concepts, and reflection questions. For a more detailed and comprehensive view of this framework, read about it here.
Source: Hibbert, M., Altman, E., Shippen, T., & Wright, M. (2024). “A Framework for AI Literacy.” EDUCAUSE Review. https://er.educause.edu/articles/2024/6/a-framework-for-ai-literacy
Similar to an AI literacy framework, the ROBOT test, developed by librarians at McGill University (Amanda Wheatley and Sandy Hervieux), offers a helpful mnemonic for evaluating AI systems and outputs. "Being AI Literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI, especially news articles."
ROBOT stands for: Reliability, Objective, Bias, Ownership, Type.
From the personal computer to the internet, information institutions have a long history of adapting to emerging technologies. The development of AI technologies has proven to be another technological turning point, one to which all industries – not just libraries, museums, and archives – must adapt.
Just as AI literacy frameworks have been implemented to help individuals better understand and utilize AI-enabled tools, library professionals have developed AI decision-making frameworks to aid libraries, archives, and museums in adopting this new technology at the institutional level. Below we highlight a few of these frameworks. To get even more involved, consider joining a professional committee or discussion group on AI implementation, such as the Association of College and Research Libraries' Artificial Intelligence Discussion Group.
Professionals at the Library of Congress have identified machine learning “as a potential way to provide more metadata and connections between collection items and users.” Recognizing that the impact of AI use in library operations is an ongoing subject of study, they developed “a planning framework to support the responsible exploration and potential adoption of AI at the Library.” Broadly speaking, the framework includes three planning phases:
Moreover, each phase supports the evaluation of three elements of machine learning:
They have also developed supplementary materials – including worksheets, questionnaires, and workshops – “to engage stakeholders and staff and identify priorities for future AI enhancements and services.”
The folks at the Library of Congress recognize that no single institution can tackle the task of AI implementation alone. In order to successfully adapt to the challenges this new technology poses, the authors emphasize that professionals across the library sector must co-develop, communicate, and share information regarding requirements, policy, governance, infrastructure and costs.
To read more about this planning framework and how the Library of Congress has applied it, check out this post on the Library of Congress blog!
Dr. Leo S. Lo offers a framework for approaching AI integration in libraries through the lens of practical ethics. He encourages librarians to apply various ethical theories to questions surrounding AI:
All of these ethical frameworks have their own pros and cons, and they certainly cannot give us final definitive answers to ethical dilemmas. However, they offer a structured approach to making decisions in accordance with library values, user rights, and professional ethics. Lo provides a seven-step framework with which libraries can make institutional decisions regarding AI:
To read more about this framework and its applications, check out the slide deck for Dr. Lo’s presentation.
Librarians at Montana State University have developed “tools and strategies that support responsible use of AI in the field” through an IMLS-funded project titled, Responsible AI in Libraries and Archives. One result of this research initiative was Viewfinder, a toolkit “designed to facilitate ethical reflection about AI in libraries and archives from different stakeholder perspectives.”
This toolkit provides a structure for a workshop for organizational AI-implementation project teams. The toolkit guides participants in considering which ethical and professional values might matter to different stakeholders in different scenarios involving AI-enabled tools. This framework encourages participants to reflect on the many factors at play in implementing AI tools at the organizational level and to gather a more holistic understanding of a particular implementation initiative.
Source: Mannheimer, S., Clark, J. A., Young, S. W. H., Shorish, Y., Kettler, H. S., Rossmann, D., Bond, N., & Sheehey, B. (2023). Responsible AI in Libraries. https://doi.org/10.17605/OSF.IO/RE2X7
For more books, see the following subject headings in IUCAT:
Developing middle schoolers' artificial intelligence literacy through project-based learning: Investigating cognitive & affective dimensions of learning about AI by Minji Jeon
Since the results of an AI chatbot are not retrievable by other users, it is important to provide sufficient context when using AI-generated content in your research. For example, many style guides recommend discussing how you used AI in the methods section of your paper or describing how you used the tool in your introduction or footnotes. In your text, you can provide the prompt you used and then any portion of the relevant text that was generated in response.
MLA suggests that you:
According to Chicago, "you do need to credit ChatGPT and similar tools whenever you use the text that they generate in your own work. But for most types of writing, you can simply acknowledge the AI tool in your text (e.g., “The following recipe for pizza dough was generated by ChatGPT”)."
Citing AI-generated content will look different depending on the style you are using. We have provided guidelines for APA, MLA, Chicago, and IEEE in the next tab.
Template: Author. (Date). Title (Month Day version) [Additional description]. Source
Author: The author of the model.
Date: The year of the version.
Title: The name of the model. The version number is included after the title in parentheses.
Bracketed text: Additional description of the model (e.g., [Large language model]).
Source: When the publisher and author names are identical, omit the publisher name in the source element of the reference and proceed directly to the URL.
Example
Quoted in Your Prose
When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).
Reference Entry
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Author: We do not recommend treating the AI tool as an author. This recommendation follows the policies developed by various publishers, including the MLA’s journal PMLA.
Title of Source: Describe what was generated by the AI tool. This may involve including information about the prompt in the Title of Source element if you have not done so in the text.
Title of Container: Use the Title of Container element to name the AI tool (e.g., ChatGPT).
Version: Name the version of the AI tool as specifically as possible. For example, the examples in this post were developed using ChatGPT 3.5, which assigns a specific date to the version, so the Version element shows this version date.
Publisher: Name the company that made the tool.
Date: Give the date the content was generated.
Location: Give the general URL for the tool.
Example 1: Paraphrasing Text
Paraphrased in Your Prose
While the green light in The Great Gatsby might be said to chiefly symbolize four main things: optimism, the unattainability of the American dream, greed, and covetousness (“Describe the symbolism”), arguably the most important—the one that ties all four themes together—is greed.
Works-Cited-List Entry
“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
Example 2: Quoting Text
Quoted in Your Prose
When asked to describe the symbolism of the green light in The Great Gatsby, ChatGPT provided a summary about optimism, the unattainability of the American dream, greed, and covetousness. However, when further prompted to cite the source on which that summary was based, it noted that it lacked “the ability to conduct research or cite sources independently” but that it could “provide a list of scholarly sources related to the symbolism of the green light in The Great Gatsby” (“In 200 words”).
Works-Cited-List Entry
“In 200 words, describe the symbolism of the green light in The Great Gatsby” follow-up prompt to list sources. ChatGPT, 13 Feb. version, OpenAI, 9 Mar. 2023, chat.openai.com/chat.
For examples of citing creative visual works, quoting creative textual works, and citing secondary sources used by an AI tool, see the MLA Style Center Generative AI page.
According to Chicago, "you do need to credit ChatGPT and similar tools whenever you use the text that they generate in your own work. But for most types of writing, you can simply acknowledge the AI tool in your text (e.g., “The following recipe for pizza dough was generated by ChatGPT”)."
To sum things up, you must credit ChatGPT when you reproduce its words within your own work, but unless you include a publicly available URL, that information should be put in the text or in a note—not in a bibliography or reference list. Other AI-generated text can be cited similarly.
If you do need a citation:
Author: The name of the tool that you are using (such as ChatGPT).
Publisher: Name the company that made the tool (such as OpenAI)
Date: Give the date the content was generated.
Location: Give the general URL for the tool. Because readers can’t necessarily get to the cited content (see below), that URL isn’t an essential element of the citation.
A numbered footnote or endnote might look like this:
1. Text generated by ChatGPT, OpenAI, March 7, 2023, https://chat.openai.com/chat.
If you’re using author-date instead of notes, any information not in the text would be placed in a parenthetical text reference:
(ChatGPT, March 7, 2023).
According to the IEEE guide, "the use of content generated by artificial intelligence (AI) in an article (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section of any article submitted to an IEEE publication. The AI system used shall be identified, and specific sections of the article that use AI-generated content shall be identified and accompanied by a brief explanation regarding the level at which the AI system was used to generate the content. The use of AI systems for editing and grammar enhancement is common practice and, as such, is generally outside the intent of the above policy. In this case, disclosure as noted above is recommended."
Sources: How to cite ChatGPT (Timothy McAdoo, APA Style Blog); Ask The MLA: How do I cite generative AI in MLA style? (MLA Style Center); How to Cite AI-Generated Content (Purdue University); Citation, Documentation of Sources: Artificial Intelligence (The Chicago Manual of Style Online); Submission and Peer Review Policies: Guidelines for Artificial Intelligence (AI)-Generated Text (IEEE Author Center)
Harker, J. (2023, March). Science journals set new authorship guidelines for AI-generated text. National Institute of Environmental Health Sciences. https://factor.niehs.nih.gov/2023/3/feature/2-artificial-intelligence-ethics
Shope, M. L. (2023). Best Practices for Disclosure and Citation When Using Artificial Intelligence Tools. GLJ Online (Georgetown Law Journal Online), 112, 1–22.
APA publishes high-quality research that undergoes a rigorous and ethical peer review process. Journal policies for authors are provided for transparency and clarity, including ethical expectations, AI guidance, and reuse.
COPE (Committee on Publication Ethics) Position Statement on Authorship and AI Tools
The use of artificial intelligence (AI) tools such as ChatGPT or Large Language Models in research publications is expanding rapidly. COPE joins organisations, such as WAME and the JAMA Network among others, to state that AI tools cannot be listed as an author of a paper. AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.
AI has many applications at all levels of education. Though we may know AI best for the way it has reshaped student learning (through the proliferation of generative AI technologies that can create text, code, and other types of content), AI is also utilized by teachers and administrators. Predictive AI tools can analyze patterns in student data to forecast outcomes such as graduation rates and student learning milestones. These insights allow educators to intervene proactively but require careful evaluation for potential bias. See some more uses of AI in education below:

Chart: Examples of AI Applications in Education (Digital Promise).
AI also has many potential benefits when implemented in an educational setting. Of course, there are many risks as well:

Graphic: Potential risks and benefits of AI in education (TeachAI).
In this box, we have selected frameworks, toolkits, books, and articles that will help teachers and students implement and utilize AI in their classrooms.
For more books, see the following subject headings in IUCAT:
Much research has been done on bias in AI. As AI becomes more ubiquitous, it is important to understand that our own unconscious, implicitly biased associations can affect AI models, resulting in biased outputs. Though we might think of technology as neutral, AI has a long history of perpetuating biases present in our society such as racism, ableism, ageism, sexism, homophobia, and more. We should also think about the climate impact of AI when discussing its ethics. This box contains articles, books, and reports to help you learn more.
Video: Artificial Intelligence: Last Week Tonight with John Oliver. (2023, HBO)
The Uneven Distribution of AI’s Environmental Impacts (Shaolei Ren and Adam Wierman, Harvard Business Review)
As artificial intelligence technologies rapidly proliferate across society, a growing body of research highlights significant risks and unintended consequences. From environmental impacts to social inequities, AI systems present complex challenges that require thoughtful examination and proactive governance—if not outright abolition. Recent data shows that 78% of organizations now use AI in at least one business function, with 47% reporting at least one negative consequence from AI use. These concerns span technical, ethical, social, and economic dimensions, often intersecting in ways that compound their effects on individuals and communities.
In this box, you can find resources that discuss concerns and critiques of AI. While by no means exhaustive, this guide touches on some of the most relevant, researched, and discussed concerns regarding the rise of AI: environmental concerns; ethical concerns; security and privacy concerns; social concerns; reliability concerns; and labor concerns. Since AI technology is such a rapidly developing field, many of this box's resources come from outside the academic publishing realm, which has a much slower publication process than news reports, op-eds, and magazine articles. While the number of academic articles on the ethical concerns of AI is growing, few books have been published, given the intense time commitment required to research, write, and publish an academic book.
Video: AI researcher Sasha Luccioni's TED Talk on AI ethics
Key Resources
Here you'll find a few resources to help you think more critically and thoughtfully about generative AI.
The environmental consequences of AI development represent one of the most pressing yet underexamined challenges of the technology revolution. Environmental critiques of generative AI focus on the massive energy consumption, carbon emissions, and material extraction required to build and operate AI systems. The proliferating data centers that house AI servers produce electronic waste and are voracious consumers of electricity, which in most places is still produced from fossil fuels.
AI systems require enormous computational resources for both training and operation, with estimates suggesting that training a single large language model can consume over 1,287 megawatt hours of electricity and generate 552 tons of carbon dioxide—equivalent to the lifetime emissions of five average cars. Data centers powering AI applications already account for 1-1.5% of global electricity use, and this figure is growing rapidly as major tech companies report surges in greenhouse gas emissions directly attributable to AI expansion. A request made through ChatGPT consumes 10 times the electricity of a Google Search. The environmental impact extends beyond energy consumption to include the extraction of rare earth minerals necessary for AI hardware. The data used to train AI models often originates from sources without the explicit consent of individuals, undermining privacy and ethical standards, while the development of AI entails significant human and environmental costs.
Beyond energy consumption, AI infrastructure demands massive water resources for cooling, with data centers using approximately 7,100 liters of water per megawatt-hour of energy consumed. Google's US data centers alone consumed an estimated 12.7 billion liters of fresh water in 2021. The manufacturing of specialized hardware like GPUs adds additional embedded carbon emissions, while the rapid obsolescence of AI hardware contributes to growing electronic waste streams. The urgency of AI deployment has also led to shortcuts in energy planning, with some facilities using diesel generators to supplement grid power, directly contradicting clean energy goals.
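As a rough illustration of how these figures compound, the short calculation below combines the training-energy and cooling-water estimates quoted above. Treating the two estimates as applying to the same training run is a simplification made only for illustration; actual facilities and models vary widely.

```python
# Back-of-envelope arithmetic using the estimates quoted in the text above.
# Combining the training-energy and cooling-water figures directly is a
# simplification for illustration; real facilities differ substantially.
training_energy_mwh = 1_287      # estimated electricity to train one large language model
co2_tons = 552                   # estimated CO2 emissions for that training run
water_liters_per_mwh = 7_100     # estimated cooling water per megawatt-hour

water_million_liters = training_energy_mwh * water_liters_per_mwh / 1_000_000
print(f"Estimated cooling water for one training run: ~{water_million_liters:.1f} million liters")
print(f"Implied lifetime emissions per 'average car' (552 t / 5 cars): ~{co2_tons / 5:.0f} tons")
```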
The scale of environmental impact is staggering and growing rapidly. Despite some efficiency improvements, the exponential growth in AI capabilities and deployment far outpaces these gains. The growth in AI's energy demands (Google reported that 60% of its AI-related electricity use in 2022 stemmed from inference) underscores the pressing need to tackle its environmental impact. Critics argue that without significant regulatory intervention and a fundamental shift toward sustainable development practices, AI's environmental footprint will continue to exacerbate climate change and environmental degradation.
The environmental toll disproportionately affects communities in the Global South, where much of the raw material extraction takes place, while the benefits primarily accrue to wealthy tech companies and consumers in developed nations. Integrating LLMs into search engines may multiply the carbon emissions associated with each search by as much as five times, at a moment when climate change is already having catastrophic effects. This creates what some scholars term "green colonialism," where AI benefits are predominantly reaped by global corporations headquartered in the Global North, while emissions and strain on local infrastructures are disproportionately offloaded onto more vulnerable regions.
Bias and discrimination critiques examine how AI systems consistently reproduce and amplify existing social biases around race, gender, class, and other identities, automating discrimination at scale while operating through opaque processes that obscure accountability. Ruha Benjamin argues that automation has the potential to hide, speed up, and even deepen discrimination while appearing neutral and even benevolent compared to the racism of a previous era. Human beings design these systems, and the training data and the ways the systems learn to make so-called "intelligent decisions" mirror the thinking of the human beings behind them—yet this process occurs within "black box" systems whose complexity creates significant barriers to understanding how decisions are made, limiting both external oversight and individual recourse when harmful outcomes occur.
In one example, a widely used healthcare algorithm that affects millions of people across the country favored White patients over sicker Black patients by using past spending to predict future healthcare needs, unwittingly reproducing racial disparities because, on average, Black patients incur fewer costs for a variety of reasons, including systemic racism. This case exemplifies how bias manifests across protected characteristics, primarily stemming from historical data that reflects past discrimination, with past medical expenditures—which are historically lower among Black patients—erroneously used as a proxy for determining access to extra medical support.
Safiya Noble's groundbreaking research on search algorithms demonstrates how seemingly neutral technologies encode and perpetuate racist and sexist biases at unprecedented scale. Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities, arguing that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color. Noble argues that search algorithms are racist and perpetuate societal problems because they reflect the negative biases that exist in society and the people who create them, rejecting the idea that search engines are inherently neutral—a problem compounded by the concentration of AI development within a small number of organizations with limited diversity of perspectives and that aren't centering justice in their organizational principles or approaches to construction, deployment, or access.
The systemic nature of algorithmic bias extends across multiple domains of social life, from criminal justice to employment to healthcare, affecting millions of people through automated hiring, lending, content moderation, and criminal justice decisions. Examples include a gang database that is 87% Black and Latinx with many names belonging to babies under the age of 1, some of whom were supposedly "self-described gang members"; a beauty contest judged by robots programmed through deep learning in which all the winners were White and only one finalist had visibly dark skin; and a recidivism risk algorithm that wrongly predicted which arrested individuals would reoffend, with the formula more likely to flag Black defendants and less likely to flag White defendants as future criminals. These discriminatory outcomes are often obscured by the technical complexity and proprietary nature of AI systems, making them difficult to detect, challenge, or remedy, while the scale at which these systems operate amplifies their harmful impacts across entire populations.
Benjamin uses the term "New Jim Code" to underline how central bias is in seemingly objective technological systems, examining not only the emergence of a racist Internet but also how it is produced by a tech sector and commercial products that are themselves shaped by historical prejudices, biases, and inequalities. This "black box" problem reveals that addressing these challenges requires not just technical solutions like bias detection tools, but fundamental changes to development processes, regulatory frameworks, and organizational accountability structures that can penetrate the opacity of algorithmic decision-making and confront the systemic issues of transparency and accountability that allow discrimination to flourish under the veneer of technological neutrality.
The development and maintenance of AI systems relies fundamentally on a vast, largely invisible global workforce of data laborers who face severe exploitation and harmful working conditions. Behind every AI model are millions of workers in the Global South—primarily in Kenya, Venezuela, the Philippines, and India—who perform the exhausting and brutalizing work of data labeling and content moderation necessary for the existence of these systems, for wages as low as $1.32 to $2 per hour, compared to $10-25 per hour for similar work in the United States. These workers, often recruited from impoverished populations and migrant communities, are subjected to what 97 Kenyan AI workers described in an open letter to President Biden as "modern-day slavery."
Content moderators and data labelers are required to process graphic descriptions of sexual abuse, hate speech, violence, and other disturbing content for nine-hour shifts, labeling 150-250 passages daily to train AI systems like ChatGPT. The psychological toll is severe, with workers reporting recurring visions, depression, marital breakdowns, and lasting trauma from exposure to the "darkest recesses of the internet." Many work in overcrowded, dusty environments under constant algorithmic surveillance, with strict timers that inhibit basic needs like bathroom breaks. As independent contractors without formal employment protections, they lack access to healthcare, psychological support, or compensation for occupational hazards.
This exploitative model follows historical patterns of colonial extraction, where valuable resources are extracted from the Global South to benefit wealthy corporations and consumers in the Global North. Tech companies deliberately outsource this harmful work to regions with high unemployment and weak labor protections, treating workers as disposable while avoiding responsibility for their wellbeing. Even as AI systems become more sophisticated and may automate some of this work, the benefits flow to the companies and consumers who profit from AI, not to the workers whose labor and trauma made these systems possible.
Beyond direct exploitation of AI workers, broader job displacement affects millions globally. Current data shows 14% of workers have already experienced job displacement due to AI, while projections suggest AI could replace the equivalent of 300 million full-time jobs globally by 2030. The International Monetary Fund estimates that almost 40% of global employment faces exposure to AI, with advanced economies seeing higher rates of potential impact at 60% of jobs. This creates risks of further widening wealth gaps, as productivity gains concentrate among technology owners rather than displaced workers.
AI systems create new vulnerabilities for cyberattacks and unprecedented privacy risks through their dependence on vast datasets containing personal information and their ability to infer sensitive attributes from seemingly innocuous data, while expanding surveillance capabilities and compromising personal data protection. These systems can predict highly personal characteristics such as sexual orientation, political views, or health status based on indirect behavioral patterns. The concentration of personal data in AI training datasets creates attractive targets for cybersecurity threats, while the opaque nature of many AI systems makes it difficult for individuals to understand how their information is being collected, processed, and potentially shared without their explicit consent.
Emerging security vulnerabilities specific to machine learning models include "model inversion" attacks that can extract personal information from trained models and "membership inference" attacks that can determine whether specific individuals' data was used in training. These risks are compounded by the challenge of data deletion—once personal information is incorporated into AI models, it becomes nearly impossible to fully remove, creating compliance challenges with privacy regulations like GDPR that guarantee rights to data erasure. The potential for AI to enable sophisticated social engineering attacks, discriminatory pricing, and other forms of algorithmic harm adds additional layers of concern for both individual and collective privacy.
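To illustrate the intuition behind a membership inference attack, here is a minimal, hypothetical sketch (assuming Python with scikit-learn; it is a toy demonstration of the general idea, not a reproduction of any specific published attack). The premise: a model often assigns higher confidence to records it was trained on than to records it has never seen, and that gap can reveal whether a particular record was in the training data.

```python
# Toy sketch of the membership-inference intuition (not a specific published
# attack): models are often more confident on records they were trained on
# than on records they have never seen. Assumes scikit-learn is installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def true_label_confidence(clf, features, labels):
    """Probability the model assigns to each record's true label."""
    probs = clf.predict_proba(features)
    return probs[np.arange(len(labels)), labels]

# A crude "attack": guess that high-confidence records were in the training set.
print("mean confidence on training records:", true_label_confidence(model, X_train, y_train).mean().round(3))
print("mean confidence on unseen records: ", true_label_confidence(model, X_test, y_test).mean().round(3))
```

The gap between the two printed averages is what membership inference attacks probe; models that overfit their training data tend to leak more about who was in it.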
AI expands the reach of existing surveillance practices—introducing new capabilities like biometric identification and predictive social media analytics—which could also disproportionately affect the privacy of communities that have historically been subject to enhanced policing based on factors like their zip code, income, race, country of origin, or religion. The adoption of new AI tools can expand an organization's attack surface and present several security threats, with AI systems relying on datasets that might be vulnerable to tampering, breaches and other attacks.
AI-powered surveillance technologies, such as facial recognition systems and location tracking tools, raise concerns about mass surveillance and infringement of individuals' privacy rights. These technologies can enable pervasive monitoring and tracking of individuals' activities, behaviors, and movements, leading to erosion of privacy and civil liberties. Many AI systems collect data quietly, without drawing attention, which can lead to serious privacy breaches, with these covert techniques often going unnoticed by users, raising ethical concerns about transparency and consent. Malicious actors may exploit vulnerabilities to steal sensitive data, manipulate AI-driven decisions, or compromise the integrity and reliability of AI systems, posing significant risks to privacy and security. The situation is made worse by the growth of "shadow AI," where employees use AI tools unofficially, with the most common deployments being those that enterprises aren't even aware of, ranging from shadow IT in departments, to individuals feeding corporate data to AI to simplify their roles.
The integration of biometric data into artificial intelligence applications presents considerable ethical dilemmas: while facial recognition technology can enhance security measures, it often operates without the explicit consent of individuals, leading to unwarranted surveillance. Notable examples include membership inference, model inversion, and training data leaking from the engineering process, with models potentially disclosing sensitive data that was unintentionally stored during training. AI systems are susceptible to security vulnerabilities and attacks, including data breaches, adversarial attacks, and model poisoning; adversarial attacks involve manipulating input data to deceive AI systems, leading to incorrect predictions or classifications. AI tools can also help threat actors more successfully exploit security vulnerabilities, with attackers able to use AI to automate the discovery of system vulnerabilities or generate sophisticated phishing attacks. The lack of transparency in many AI systems that operate as black boxes makes it challenging to understand how decisions are made or to hold them accountable for their actions, undermining trust and confidence in their outcomes, particularly in contexts where privacy and fairness are paramount.
Cultural and linguistic critiques of AI focus on how these systems reinforce dominant cultural perspectives and threaten linguistic diversity and indigenous knowledge systems. Language was often one of the first aspects colonizers imposed on the people they subjugated, and today, many Indigenous languages are being forgotten and disregarded in the development of natural language processing (NLP) technologies. Widely spoken African languages like Yoruba, Igbo, and others are not well recognized by NLP features, with Google's language application incorrectly translating the Yoruba word "Esu," which means "a benevolent trickster god," into "devil". Most AI systems are trained primarily on English-language data, reinforcing English linguistic hegemony and marginalizing other languages and knowledge systems, threatening linguistic diversity and cultural preservation.
Data colonialism represents a particularly insidious form of cultural appropriation, where AI training datasets often include indigenous knowledge, traditional stories, and cultural artifacts without permission or attribution. Indigenous data sovereignty asserts the inherent rights of Indigenous peoples to govern the collection, ownership, and application of data about their communities, knowledge systems, and territories. For Indigenous communities around the world, AI is not merely a technological development but a potential new form of colonization–one that risks marginalizing their languages, cultures and agency unless meaningful safeguards are established. This commodification and decontextualization of indigenous materials undermines the cultural integrity and intellectual property rights of marginalized communities.
The concept of Indigenous data sovereignty has emerged as a critical framework for resisting digital colonialism. The First Nations Information Governance Centre in Canada has long advocated the OCAP principles–ownership, control, access and possession–as the cornerstone of Indigenous data sovereignty. However, while OCAP has been widely cited in health and demographic research, its integration into national AI policy frameworks remains limited. AI systems tend to reproduce dominant cultural perspectives and ways of knowing, potentially erasing alternative epistemologies and worldviews, creating what scholars term "epistemic colonialism." This concept challenges the dominant paradigm of technological determinism in AI development, which often prioritizes rapid innovation over social responsibility, cultural diversity, and ethical accountability.
Economic critiques of AI center on how these technologies concentrate wealth and power among a small elite while displacing workers and exacerbating inequality. AI development is dominated by a handful of massive corporations, concentrating unprecedented economic and social power, which exacerbates existing inequalities and creates new forms of technological dependency. AI's current trajectory is fundamentally incompatible with the proliferation of good jobs rooted in human flourishing, as the AI industry embeds itself into nearly every sector of the economy, with firms increasingly shaping a job market contingent on worker displacement and exploitation. The concentration of AI power in private corporations raises fundamental questions about democratic control over technologies that increasingly shape public discourse, education, and social relations.
The economic model of surveillance capitalism, as theorized by Shoshana Zuboff, demonstrates how AI companies extract value from human behavior while providing little compensation to the individuals whose data fuels these systems. Surveillance capitalism is defined as the unilateral claiming of private human experience as free raw material for translation into behavioral data, which are then computed and packaged as prediction products and sold into behavioral futures markets. The competitive dynamics of surveillance capitalism have created powerful economic imperatives driving firms to produce better behavioral-prediction products, ultimately discovering that this requires not only amassing huge volumes of data, but actually intervening in our behavior. This represents a fundamental shift in capitalism, where human experience itself becomes the raw material for accumulation.
While tech workers and AI companies profit enormously from these developments, AI threatens to displace workers across many sectors, often those already economically vulnerable, without adequate social safety nets or retraining programs. Instead of working to protect workers from the uncertainty coming from this new market, AI companies are undermining hard-won labor protections, exploiting legal loopholes to avoid corporate accountability, and lobbying governments to support policies that prioritize corporate profits over fair and just treatment for workers. The promise of AI to enhance productivity often translates to job displacement rather than improved working conditions or shared prosperity. Overwhelmingly, AI companies are embedding "productivity" tools designed to help businesses optimize their bottom line across the entire labor supply chain, requiring work itself to become legible to AI systems, making working life more routinized, surveilled, and hierarchical.
Epistemic critiques of AI examine how these systems fundamentally undermine critical thinking, human autonomy, and democratic decision-making processes, while simultaneously creating new vectors for manipulation and social fragmentation. Knowledge workers often refrain from critical thinking when they lack the skills to inspect, improve, and guide AI-generated responses, with the use of GenAI tools shifting the knowledge workers' perceived critical thinking effort, particularly for recall and comprehension, where the focus shifts from information gathering to information verification. A new study from researchers at MIT's Media Lab found that of three groups asked to write SAT essays using ChatGPT, Google's search engine, and nothing at all, ChatGPT users had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels."
The concentration of AI power among a few corporations creates new forms of epistemic authority that operate without democratic oversight, while simultaneously enabling unprecedented manipulation of public discourse. Surveillance capitalism represents an unprecedented concentration of knowledge and the power that accrues to such knowledge: they know everything about us, but we know little about them; they predict our futures, but for the sake of others' gain. This asymmetric knowledge dynamic becomes particularly dangerous when AI systems generate convincing misinformation and deepfakes at scale, with recent surveys revealing that 80% of Americans are concerned about AI being used for cyberattacks and 74% worry about AI creating deceptive political advertisements. AI systems increasingly shape what information we encounter, how we understand complex issues, and even how we think about problems, while the deployment of AI in content moderation and social media algorithms shapes public opinion and democratic participation in ways that are often invisible to users.
The epistemic violence extends beyond individual cognitive degradation to encompass systematic manipulation of democratic discourse itself. AI-driven engagement optimization tends to amplify sensational or polarizing content, potentially contributing to social fragmentation and radicalization, while the ability of AI systems to generate personalized content creates opportunities for highly targeted manipulation campaigns tailored to individual psychological profiles. Knowledge homogenization represents another critical epistemic concern, as AI systems tend to reproduce dominant perspectives and ways of understanding the world, while echo chambers created by AI recommendation systems reinforce existing beliefs and limit exposure to diverse perspectives essential for democratic deliberation.
The speed at which AI systems operate and the opacity of their decision-making processes make it difficult for citizens to understand, critique, or hold accountable the systems that increasingly govern their lives and mediate their access to information. Despite much discussion and debate in the past two years, the potential of GenAI both to enhance and to undermine users' individual and social critical thinking is as yet inadequately theorized, even as these systems reshape the very foundations of democratic participation through algorithmic curation of public discourse. Thus, it is crucial to take seriously the task of developing critical thinking skills capable of navigating an information ecosystem increasingly dominated by artificial intelligence. This raises fundamental concerns about the preservation of human agency and independent thinking in an AI-mediated world where the boundaries between authentic and synthetic content, between human insight and algorithmic output, become increasingly blurred.
Current AI systems suffer from significant reliability issues including hallucinations, inconsistent performance across different contexts, and lack of robust evaluation methods. Large language models can confidently present false information, while computer vision systems may fail catastrophically when encountering inputs that differ from their training data. The complexity of modern AI architectures makes it extremely difficult to predict or test for all possible failure modes, particularly in real-world deployment scenarios.
AI hallucinations are incorrect or misleading results that AI models generate, caused by factors including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. These inaccuracies are so common that they've earned their own moniker, "hallucinations," and they can be particularly problematic for AI systems used to make important decisions, such as medical diagnoses or financial trading. On OpenAI's PersonQA benchmark, which tests knowledge about public figures, o3 hallucinated 33% of the time compared to o1's 16% rate, while o4-mini performed even worse at 48%; on broader general knowledge questions, hallucination rates for o3 and o4-mini reached 51% and 79%, respectively.
The fundamental design of AI systems contributes to reliability problems, as generative AI models function like advanced autocomplete tools designed to predict the next word or sequence based on observed patterns, with their goal being to generate plausible content, not to verify its truth. The technology behind generative AI tools isn't designed to differentiate between what's true and what's not true; even if generative AI models were trained solely on accurate data, their generative nature means they could still produce new, potentially inaccurate content by combining patterns in unexpected ways.
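To make the "advanced autocomplete" point concrete, here is a deliberately simplified toy next-word generator in plain Python (a bigram model, nothing like the architecture or scale of a production LLM). It only learns which word tends to follow which in its tiny training text, so it can assemble fluent-looking sequences with no mechanism at all for checking whether they are true.

```python
# A toy "autocomplete" in pure Python (nothing like a production LLM): it only
# learns which word tends to follow which, so it produces plausible-looking
# continuations without any notion of whether they are true.
import random
from collections import defaultdict

corpus = (
    "the green light symbolizes hope . "
    "the green light symbolizes greed . "
    "the model predicts the next word ."
).split()

# Count which words follow each word (a bigram table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # pick a plausible continuation
    return " ".join(words)

print(generate("the"))
```

Even at this toy scale, the output is shaped entirely by patterns in the training text, which is why biased or inaccurate training data flows straight through into generated content.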
Interpretability and explainability remain major technical challenges, especially for deep learning systems whose decision-making processes are opaque even to their developers. This creates problems for debugging, auditing, and building user trust, particularly in high-stakes applications where understanding the reasoning behind AI decisions is crucial. The lack of standardized benchmarks and evaluation methodologies makes it difficult to compare AI systems or assess their suitability for specific applications, while rapid changes in AI capabilities outpace the development of appropriate testing frameworks.
Performance degradation represents another critical reliability concern: small tweaks can lead to big consequences, where swapping an LLM, adjusting a prompt, or tweaking a temperature setting often produces large, unexpected output differences. Many teams experience regressions in the accuracy of their model simply by upgrading to the latest LLM version, and if output is part of a downstream workflow, these issues can quickly cascade into financial consequences and delayed development cycles that force teams to re-test applications. The reliability challenges extend beyond hallucinations to include context failures masquerading as model issues: RAG pipelines rely on embeddings to find relevant context, but those embeddings can drift if the source data changes, the chunking technique is tweaked, or the embedding model updates. Environmental factors in edge deployments, such as temperature fluctuations, humidity, dust, vibrations, and electromagnetic interference, can all affect sensor readings and hardware performance, leading to corrupted input data for ML models trained on clean, ideal data that might "hallucinate" when confronted with inputs from a compromised sensor.
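To make the embedding-drift point concrete, below is a minimal, hypothetical sketch of RAG-style retrieval using toy embedding functions (plain Python with NumPy; real systems use learned embedding models and vector databases). The retrieved chunk is simply whichever one scores highest against the query, so quietly swapping the embedding function can change which context reaches the model without raising any error.

```python
# Minimal sketch (toy embeddings, not a real RAG stack): retrieval returns the
# chunk whose embedding is most similar to the query's, so changing the
# embedding function can silently change which context the model sees.
import numpy as np

chunks = [
    "Refunds are processed within 14 days.",
    "Returns require the original receipt.",
    "Shipping takes 3 to 5 business days.",
]

def embed_v1(text):
    # Toy embedding: character-frequency vector over a-z.
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / (np.linalg.norm(vec) or 1.0)

def embed_v2(text):
    # "Upgraded" toy embedding: word-length histogram instead.
    vec = np.zeros(12)
    for word in text.split():
        vec[min(len(word), 11)] += 1
    return vec / (np.linalg.norm(vec) or 1.0)

def retrieve(query, embed):
    scores = [float(embed(query) @ embed(chunk)) for chunk in chunks]
    return chunks[int(np.argmax(scores))]

query = "How long do refunds take?"
print("v1 retrieves:", retrieve(query, embed_v1))
print("v2 retrieves:", retrieve(query, embed_v2))
```

This is why teams re-test retrieval quality after changing embedding models, chunking strategies, or source data: the pipeline keeps returning an answer either way, but it may no longer be the right context.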
The TED AI Show by Bilawal Sidhu
Tech Won't Save Us by Paris Marx
In Machines We Trust by MIT Technology Review & Jennifer Strong
The Bot Canon by Hannah Keefer