Ethical concerns about generative AI arise from the way the technology is developed. The training data for the LLMs (Large Language Models) that run the generative AI tools comes from the open internet. As a result, all of the ethical concerns about bias, misinformation, disinformation, fraud, privacy, and copyright infringement that exist about the internet are also applicable to the content produced by generative AI. In many ways, these ethical concerns echo the well-documented problems with bias in internet search engine algorithms. And many argue that these ethical issues could have and should have been addressed as the technologies were being developed.
Since generative AI tools are trained on data from the open internet, they often replicate the racial, gender, disability, and other biases present in information on the internet.
In a decision on February 14, 2022, the U.S. Copyright Office stated that "copyright law only protects 'the fruits of intellectual labor' that 'are founded in the creative powers of the [human] mind.'" As a result, works created by generative AI cannot be copyrighted. They pass immediately into the public domain.
Recently, lawsuits have been filed by The New York Times and other entities because their copyrighted material has been taken from the internet and used as training data for the LLMs (Large Language Models) that run generative AI tools. This copyrighted material has appeared verbatim in text generated by the tools. We will need to watch closely as the courts sort out this issue of copyright and generative AI, as these decisions will directly affect the laws regulating the technology.
Generative AI tools often provide incorrect information and "hallucinate" their answers to prompts. The technology functions like a super-charged autocomplete tool where the next word is predicted by an algorithm. For this reason, any information derived from generative AI tools should always be checked for accuracy. Yet checking this information can be difficult because it is out of context. Some generative AI tools are now providing sources for the information they produce, which may allow for context and fact-checking if the material does actually come from those sources and if the source citations are accurate.
Trained on data from the open internet, generative AI tools have the potential to spread propaganda, disinformation, and conspiracy theories. Bad actors can use these tools to mislead and deceive through AI-generated text, images, and videos.
Generative AI systems consume huge amounts of energy, much more than conventional internet technologies. They also require large quantities of fresh water to cool their processors. These environmental impacts often fall disproportionately on socioeconomically disadvantaged regions and localities throughout the world, and information about this environmental impact is difficult to obtain directly from generative AI companies. In February 2024, the U.S. Congress proposed the Artificial Intelligence Environmental Impacts Bill of 2024 to encourage voluntary reporting of this data. Generative AI companies including Microsoft, Google, and Amazon have recently signed deals with nuclear power plants in Pennsylvania and Washington to secure emissions-free energy generation for their AI data centers.
Individuals should use caution and avoid sharing personal information with generative AI tools. Chatting with AI bots in human-like "conversations" can lead to unintentional oversharing of such personal information. The information you provide may be used as training data for the tool and could later appear in prompt responses given to other users.
What do the products of generative AI tools tell us about human imagination and innovation? What is uniquely human in what humans create?
Plagiarism
A definition of plagiarism that includes generative AI:
Plagiarism is "presenting work or ideas from another source, with or without consent of the original author, by incorporating it into your work without full acknowledgement."
Academic Integrity
Widener University's Academic Integrity Policy includes this statement about the use of generative AI:
"The use of AI-generated content is only permissible with the expressed consent of the instructor and, if permitted, requires proper citation. Lack of proper citation and instructor approval constitutes a violation of the academic integrity policy."
AI detector tools vary a great deal in accuracy and consistency and have generated many "false positives," flagging students' original writing as being produced by generative AI. While traditional plagiarism detector tools scan student writing for copied and uncited sections of text, AI detector tools cannot do the same because of the way LLMs work. Generative AI creates new patterns of language in each writing sample that are devoid of context and source and, therefore, cannot easily be detected by a tool.
The Technology and Instructional Resources Committee (TIRC) at Widener University issued an advisory statement regarding the potential use of software to detect AI-generated content. The committee states that "this advisory statement does not alter Widener’s academic integrity standards; rather, it advises against using unreliable technology to enforce these standards."
TIRC Statement on AI detection software (September 18, 2024)
To the best of our knowledge there exists no software that is capable of consistent and reliable detection of AI generated content. The level of confidence in the AI detection software outcomes does not meet the high bar of proving that someone committed an academic integrity violation. As a result, Widener University does not employ AI detection tools to monitor or evaluate student work at this time.
Many faculty are addressing the presence of generative AI by writing course policies and syllabi statements that demonstrate their awareness of the technology and set clear expectations for students about how it should and should not be used in the class.
View over 100 sample syllabi policies here: