Generative AI

A course from the Orbis Cascade Alliance on using GenAI for library work

Ethics: Bias, Privacy, Copyright, and Environmental Impacts

Authors: Jason Cabaniss (Seattle University) and Shannon Kealey (Willamette University)

Recording and Materials

The recording and slides for this lesson will be posted here after April 24, 2026.

In This Lesson

General

As higher education investigates and implements artificial intelligence on campuses, we will present a handful of ethical concerns, ranging from transparency in AI use to the environmental impacts of large-scale data centers.

We lead off with the impact on labor, specifically the gig workers who are paid to 'train' AI models. A 2024 ABC News piece interviewed workers who slowly realized the tasks they were given were being used to train AI models and LLMs. At the time they were happy to be earning money, but the companies were less than forthcoming about the nature of the work. With the growing number of products promoting AI, can we be sure that the labor used to build these systems was treated ethically? Were workers paid a fair wage?

ABC News: “‘Overlooked’ data workers who train AI speak out about harsh conditions”

In her book Empire of AI, Karen Hao profiles gig workers in economically and politically unstable countries outside the United States who rely on the low wages tech companies pay for AI model training and content moderation. In many cases, content moderation roles require individuals to view graphic text, photographs, and videos, which can lead to severe psychological distress and burnout.

These topics are not meant to deter us from using any artificial intelligence, large language model, or machine learning tools in our workplace. Instead, we wish to invite conversation and debate about the tradeoffs involved, given the impacts on copyright, privacy, society, and the environment.

Transparency

The American Psychological Association, American Medical Association, Modern Language Association, American Chemical Society, and other scholarly associations and societies are aligned in calling for transparency around generative AI use.

The Artificial Intelligence Disclosure (AID) Framework by Kari Weaver provides a common language for faculty, students, and administrators to communicate how generative AI tools are being used in a variety of contexts. Disclosure is foundational to the ethical use of generative AI. Most generative AI tools are not transparent about the sources of information that train the large language model, nor do they share the algorithms that power the output, so it is essential for academic users of generative AI to be as transparent as possible about which tools are used and how.

Other disclosure frameworks exist, developed in-house by non-profit schools and universities and profit-driven organizations alike. We share this particular framework because its developer, Kari Weaver, Learning, Teaching, and Instructional Design Librarian at the University of Waterloo, is working with the European Council of Research Integrity to build on this framework to become the international standard for the disclosure of AI use in publishing and research. She has also developed an AID Disclosure Statement Form, similar to citation generators, which can be used to quickly generate a disclosure statement.

Ideally, in teaching contexts at an academic institution, faculty would use a common framework like Weaver’s to communicate the approved uses of generative AI tools for each individual assignment. Students would then use the framework to disclose which generative AI tools were used and how.

Similarly, faculty and administrators would use a common framework like Weaver’s to disclose uses of generative AI in their lesson or lecture planning, course or assignment design, interoffice communications, and so on. Developing a culture of transparency and disclosure around generative AI use on college campuses could reduce breaches of academic integrity among students and curb ineffective or unethical uses by faculty and administrators.

Further reading: Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary?

Privacy

Privacy in higher education, and especially in libraries, grows more important as more of our lives shift to the digital sphere. While our employers provide notice that our communications, browsing habits, and any activity on a work computer are subject to audit, there are further considerations as many operating systems roll out AI features.

In her op-ed “AI agents are coming for your privacy,” Meredith Whittaker, president of the Signal Foundation, reports that Apple and Microsoft are building AI tools into their operating systems in ways that bypass the security features of apps like Signal and WhatsApp, which are marketed as encrypted messaging services. “Security researchers recently exposed Siri transmitting voice transcripts of WhatsApp messages to Apple servers as a part of the rollout of Apple Intelligence, an AI system developed by the firm.”

For libraries, the impact means ensuring the security of user data when new products are implemented. It also means monitoring vendors’ use of patrons’ personal data and search histories within databases. This requires diligence and collaboration on the part of front-line reference staff and e-resources managers to monitor changes to platforms and communicate issues and concerns to vendors.

Environmental Impact

The projected energy needs associated with AI range from slightly more than what we already consume to dire pictures of energy and water waste. Tech companies are constructing massive data centers in suburban and rural communities so that the technology behind AI can function around the clock. By some estimates, the electricity and water required to maintain the servers and equipment associated with AI tools equal the needs of a large city.

MIT Technology Review published an investigation breaking down estimated energy use across language models, image generators, and video generators. The investigation estimates that a basic text prompt requires the energy equivalent of running a microwave for 8 seconds. That does not sound like much, but with ChatGPT reportedly handling up to 2.5 billion queries per day, it adds up to a great deal of energy. And this does not account for image and video generation, which require more energy depending on the complexity of the request. For example, some video generators require the energy equivalent of “running a microwave for over an hour.” Imagine that at the scale of millions of video generations a day.
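The scale of that claim is easy to check with back-of-the-envelope arithmetic. The sketch below multiplies the microwave-seconds estimate by the reported daily query volume; the microwave wattage is an assumed typical value for illustration, not a figure from the investigation:

```python
# Rough daily energy estimate for text queries, based on the figures above.
# MICROWAVE_WATTS is an assumption (typical household microwave), not sourced.

MICROWAVE_WATTS = 1100        # assumed microwave power draw, in watts
SECONDS_PER_QUERY = 8         # MIT Technology Review's per-prompt estimate
QUERIES_PER_DAY = 2.5e9       # reported ChatGPT daily query volume

joules_per_query = MICROWAVE_WATTS * SECONDS_PER_QUERY
kwh_per_query = joules_per_query / 3.6e6      # 1 kWh = 3.6 million joules
kwh_per_day = kwh_per_query * QUERIES_PER_DAY

print(f"Per query: {kwh_per_query:.5f} kWh")
print(f"Per day:   {kwh_per_day / 1e6:.1f} GWh")   # on the order of 6 GWh/day
```

Under these assumptions, text queries alone would consume roughly six gigawatt-hours per day, before any image or video generation is counted.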

Reports on data centers indicate that they run on electricity supplied by coal and other fossil fuels. It is likely that data centers will transition toward renewable energy, and that the technology will evolve to reduce water usage or make it more efficient. For now, though, the energy demands require the consumption of more fossil fuels.

For higher education, the energy use and environmental impact of AI are rarely visible on a daily basis. Until data centers move toward reduced water usage and renewable energy, libraries must determine whether the use of AI tools aligns with institutional environmental initiatives or efforts to reduce their carbon footprints.

Bias / Racism / Sexism

The information-literate user, promoter, or critic of generative AI tools must understand that racist, sexist, and other biases are embedded both in generative AI algorithms and in the source information behind many generative AI tools. Because most generative AI tools are opaque rather than transparent about their algorithms and source material, bias must be evaluated indirectly, by analyzing queries and output. Scholars such as Dr. Safiya Noble, Dr. Joy Buolamwini, and Dr. Tiera Tanksley have done such work and have published and presented widely about their findings, which have implications for academic, healthcare, and government uses of generative AI.

Copyright

The landscape of copyright law and generative AI is evolving so rapidly that users of generative AI tools must refresh their understanding regularly. On March 2, 2026, the Supreme Court refused “to take up the issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system.” (Reuters)