
Critical Artificial Intelligence (A.I.) Literacy

A guide for thinking critically about the use of generative artificial intelligence tools, including ChatGPT

Ethical Concerns

Ethical concerns about generative AI arise from the way the technology is developed. The training data for the Large Language Models (LLMs) that run generative AI tools comes from the open internet. As a result, all of the ethical concerns about bias, misinformation, disinformation, fraud, privacy, and copyright infringement that apply to the internet also apply to content produced by generative AI. In many ways, these concerns echo the well-documented problems with bias in internet search engine algorithms. And many argue that these ethical issues could have and should have been addressed as the technologies were being developed.

Bias

Since generative AI tools are trained on data from the open internet, they often replicate the racial, gender, disability, and other biases present in information on the internet.  

Copyright Infringement

In a decision on February 14, 2022, the U.S. Copyright Office stated that "copyright law only protects 'the fruits of intellectual labor' that 'are founded in the creative powers of the [human] mind.'" As a result, works created solely by generative AI, without human authorship, cannot be copyrighted; they pass immediately into the public domain.

Recently, The New York Times and other entities have filed lawsuits alleging that their copyrighted material was taken from the internet and used as training data for the Large Language Models (LLMs) that run generative AI tools, and that this copyrighted material has appeared verbatim in text generated by the tools. We will need to watch closely as the courts sort out this issue of copyright and generative AI, as these decisions will directly affect the laws regulating the technology.

Inaccuracy and Misinformation

Generative AI tools often provide incorrect information and "hallucinate" answers to prompts. The technology functions like a super-charged autocomplete tool: an algorithm predicts the next word based on patterns in the training data. For this reason, any information derived from generative AI tools should always be checked for accuracy. Yet checking this information can be difficult because it is presented without context or attribution. Some generative AI tools now provide sources for the information they produce, which may allow for context and fact-checking, provided the material actually comes from those sources and the source citations are accurate.

Disinformation and Fraud

Trained on data from the open internet, generative AI tools have the potential to spread propaganda, disinformation, and conspiracy theories.  Bad actors can use these tools to mislead and deceive through AI-generated text, images, and videos.

Privacy

Individuals should use caution and avoid sharing personal information with generative AI tools. Chatting with AI bots in human-like "conversations" can lead to unintentional oversharing of personal information. The information you provide may be used as training data for the tool and may later appear in responses given to other users.

Originality, Creativity, Voice

What do the products of generative AI tools tell us about human imagination and innovation?  What is uniquely human in what humans create?

Plagiarism and Academic Integrity

Plagiarism

A definition of plagiarism that includes generative AI:

Plagiarism is "presenting work or ideas from another source, with or without consent of the original author, by incorporating it into your work without full acknowledgement."

University of Oxford. Plagiarism. https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism


Academic Integrity

Widener University's Academic Integrity Policy includes this statement about the use of generative AI:

"The use of AI-generated content is only permissible with the expressed consent of the instructor and, if permitted, requires proper citation. Lack of proper citation and instructor approval constitutes a violation of the academic integrity policy." 

Course Policies on the Use of Generative AI

Many faculty are addressing the presence of generative AI by writing course policies and syllabi statements that demonstrate their awareness of the technology and set clear expectations for students about how it should and should not be used in the class.

View over 100 sample syllabi policies here:

AI Detector Tools

AI detector tools vary a great deal in accuracy and consistency and have generated many "false positives," flagging students' original writing as AI-generated. While traditional plagiarism detector tools scan student writing for copied and uncited sections of text, AI detector tools cannot do the same because of the way LLMs work: generative AI produces new patterns of language in each writing sample, with no original source to match against, so AI-generated text cannot easily be detected by a tool.

Related article: