
Artificial Intelligence: Ethical Considerations

Legal Considerations

When working with AI Tools, it is becoming more common for researchers to discover content in a subscription database (such as one of the Zondervan Library's subscription databases) and then upload that content into an AI Tool for analysis and review. However, uploading subscription-based articles into an AI Tool is unethical and likely illegal: it violates copyright law and licensing agreements, and it poses risks to privacy and to the integrity of academic publishing. While using AI to assist with personal research is common, providing it with proprietary content violates the terms of the license agreement with the publisher.

Plagiarism & Academic Integrity

AI Tools have presented new challenges for academic communities with regard to academic integrity and plagiarism. Plagiarism is the act of presenting someone else's words or ideas as one's own. While AI Tools are not people, using an AI Tool to complete an assignment can still constitute plagiarism. It is vital to understand Taylor University's policy and each professor's policy on AI use. Read syllabi carefully, and when in doubt, check with the professor before using an AI Tool.

Another aspect to consider is false sources and false citations produced by AI Tools. Generative AI Tools are not researchers; they will generate a response even if they have to cobble together information from various places or sources. Providing a false citation is also considered plagiarism, so be sure to double-check any source or citation provided by an AI Tool.

Misinformation

While AI Tools can be useful for a myriad of tasks, it is important to understand that they are not infallible and are known to produce hallucinations: false information generated by the tool. AI Tools rely on datasets for their information, which can be outdated or inaccurate. Any answer generated by an AI Tool needs to be critically evaluated. A 2025 article from The New York Times details the rise in hallucinations, and an open-access article examines the issue from a research perspective.

Bias

Another issue with AI Tools is the potential for bias. As mentioned above, AI Tools are trained on datasets, which can include biased information. In addition, the human testers involved in AI Tool development and feedback are not neutral and can influence the tool, intentionally or unintentionally.

Privacy Considerations

Another consideration with AI Tools is the issue of privacy. Any and all information shared with an AI Tool can become part of that AI Tool's dataset and "training" without concern for privacy. This carries implications for one's own personal and sensitive information, as well as concerns about sharing others' information and scholarship. If an author's scholarship sits behind a paywall in a library database, it is unethical to upload it into an AI Tool, thereby making it part of the dataset, without the author's express permission (see the Legal Considerations tab for more information). It is also necessary to consider FERPA guidelines for the protection of your own and other students' information before sharing it with an AI Tool.

Make sure you understand the privacy policy of any AI Tool you interact with, and consider all the ramifications of using that tool.

For more information, consult the university's copyright and AI policy.

Environmental Impact

AI Tools also have a significant impact on the environment. See the article "Explained: Generative AI's environmental impact" from MIT News on the ways generative AI is affecting the environment.