
Artificial Intelligence: Generative AI

A Guide Created to Discuss AI and Its Role at Tarrant County College


Generative AI

This page defines Generative AI, overviews essential considerations for its ethical use, and provides resources to support the development of critical AI literacy. 

Generative Artificial Intelligence (GenAI, Generative AI, or GAI) refers to computer programs that can produce text, images, and other material based on prompts given by users. These programs have been trained on large volumes of pre-existing material and programmed to generate responses based on the structures and patterns in that data. Generative AI has existed for quite some time in forms that are fairly commonplace (e.g., autocorrect and predictive text in messaging and word processing applications).

At TCC, when we say "Generative AI," we are generally referring to newer, more powerful tools such as chatbots, image generators, and video generators.

Definition from the dynamic AI Glossary [TCC AI Taskforce Glossary link Coming Soon].

Essential Considerations in the Ethical Use of Generative AI


The ethical use of any technology requires careful consideration of both how the technology is developed and how it may be used. Below are important areas to examine to ensure that our practices align with laws, policies, and values as we innovate in the ever-evolving AI landscape.

Challenges

Navigating the AI landscape requires evaluation of AI tools and practices.  Challenges can include:

  • bias - Technology can reflect biases, including human biases and biases in training data sets, and therefore its outputs can also reflect bias.
  • unreliability - Output can contain false, inaccurate, or misleading information due to biased data, incomplete data, and errors in programming or program execution.
  • data insecurity - Exposure of unprotected data or unintentional exposure of protected data can violate privacy and confidentiality. One model for considering data privacy is the "GUT Check" for "synthetic media" described by University of Utah Health Compliance Services Information Privacy Administrator Bebe Vanek, JD.
  • intellectual property - Content created or augmented by technologies can build on content that is copyright-protected or otherwise legally restricted.
  • academic and professional misconduct - Using technologies to create content in ways that conflict with ethical principles and codes of conduct violates academic and professional integrity.
  • skill and cost barriers - Barriers to access include cost as well as the development of essential literacies. Cost barriers may be associated with technology subscriptions, hardware, and data processing. Maintaining competence amid evolving technologies requires the continued growth of the ethical and technological literacies essential to navigating the global landscape.

Opportunities

When used legally, ethically, and effectively, AI tools can empower innovation. Opportunities include:

  • Personalization - Technologies can support the personalization of learning and experiences by tailoring content to individual interests and needs.
  • Collaboration - Technologies can support collaboration through the use of shared digital spaces and experiences. 
  • Critical thinking
  • Creativity - Technologies can support creative processes by offering various entry points for inspiration and exploration. 
  • Communication
  • Accessibility

When considering using AI tools or AI outputs, always think critically about the tool's purpose and what ethical concerns you may have, in addition to legal concerns surrounding copyright, policy, and data security. Remember that AI can make mistakes, so verify that its output is credible.


Maintaining Academic Integrity and Citing Generative AI

Maintaining Academic Integrity

Students should always check their instructor's ICR for a statement about AI and ensure they understand any permitted AI use or assistance in completing coursework. Unapproved use of AI in coursework puts student academic integrity at risk.  For more information about Academic Integrity, students and employees can consult FLB (Local) and the TCC Student Code of Conduct.

Citing Generative AI

Below are links that explain how to cite Generative AI use in common styles: 

Academic Publisher Policies

Examples of academic publications' policies on the use of AI tools:

Building AI Literacy

AI Literacy

AI literacy is situated within the Association of College & Research Libraries (ACRL) Framework for Information Literacy (James and Filgo 2023) and is considered an essential Digital Literacy (Bender 2024). As defined by Long and Magerko (2020), AI Literacy is the ability to: 

  • critically evaluate AI technologies
  • communicate and collaborate effectively with AI
  • use AI effectively as a tool personally and professionally

Bender, S. M. (2024). Awareness of artificial intelligence as an essential digital literacy: ChatGPT and Gen-AI in the classroom. Changing English. https://doi.org/10.1080/1358684X.2024.2309995

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). https://doi.org/10.1145/3313831.3376727

James, A., & Filgo, E. (2023). Where does ChatGPT fit into the Framework for Information Literacy? The possibilities and problems of AI in library instruction. College & Research Libraries News, 84(9), 334. https://doi.org/10.5860/crln.84.9.334

Introduction to "Practical AI" 

If you are interested in how Generative AI works, here is a jargon-free explanation of Large Language Models (LLMs) from Ars Technica.

If you are new to the practice of using generative AI tools like ChatGPT, these short videos provide a useful introduction.

Practical AI for Instructors and Students (10 to 12 minutes each)
Wharton School, University of Pennsylvania

From University of Arizona Libraries, licensed under a Creative Commons Attribution 4.0 International License.

Evaluating AI Tools and Output

When using artificial intelligence, it is important to evaluate the tool itself and the tool’s output critically. Ask yourself these questions:

  • What is the purpose of the tool?
  • How is this tool funded? Does the funding impact the credibility of the output?
  • What, if any, ethical concerns do you have about this tool? 
  • Does the tool ask you to upload existing content such as an image or paper? If so, are there copyright concerns? Is there a way to opt out of including your uploaded content in the training corpus? 
  • What is the privacy policy? If you are assigning this tool in a class, be sure to consider any FERPA concerns. Faculty may also reach out to their Deans for guidance.
  • What corpus or data was used to train the tool or is the tool accessing? Consider how comprehensive the data set is (for example, does it consider paywalled information like that in library databases and electronic journals?), if it is current enough for your needs, any bias in the data set, and algorithmic bias.
  • If reproducibility is important to your research, does the tool support it?
  • Is the information the tool creates or presents credible? Because generative AI generates content as well as or instead of returning search results, it is important to read across sources to determine credibility.
  • If any evidence is cited, are the citations real or "hallucinations" (made-up citations)?

From University of Texas Libraries, licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Explore More

Resources for staying up to date on the growth of AI: 

Perspectives on AI and Education

Banner Image Credits

Banner created in Canva using embedded text and photos.