COLI on AI and Academics

An updated version of this guide is published at the Canisius University AI and Society Initiative site.  This version has not been updated since June 5th, 2023.

Here, COLI is assembling a list of introductory information, tips, and resources concerning artificial intelligence and its possible implications for pedagogy and scholarship.  This resource is tentative, incomplete, might age rapidly, and is probably (at any point in time) a little wrong in places.  

Understanding Large Language Model (LLM) Generative Artificial Intelligences (AIs)

Large Language Model AIs, or more properly their chat versions, such as ChatGPT, Google Bard, or Bing Chat, are designed to simulate human typed-text conversation.  They are computer programs that, in addition to code written by software engineers and developers, have been trained on large quantities of (mostly) human-generated text.  Much of this is the open internet, but other sources have occasionally been added to the training corpus as well.  Versions have been in development for years, but LLM AIs emerged into broader public attention in late 2022, when the firm OpenAI made ChatGPT, powered by the GPT-3.5 LLM, available for free use by anyone on the internet.  

What LLM AIs Do

These AIs can create things or perform tasks by generating text.  Just some examples are:

  • recipes for food dishes,
  • lesson plans for secondary school science classes,
  • a cover letter accompanying a job application,
  • a thank-you note,
  • an essay on the development of the Code Napoleon,
  • a review of a video game in the voice of a fifteen-year-old blogger,
  • code for a module or particular task within a computer program.
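To make the last item concrete, here is the kind of small, self-contained function an LLM can typically generate from a one-sentence prompt.  This is an illustrative sketch written for this guide, not actual model output.

```python
# The sort of routine an LLM will readily produce when prompted with
# "write a Python function that counts word frequencies in a string."
# Illustrative sketch only -- not actual model output.
def word_frequencies(text: str) -> dict:
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,!?;:\"'")  # drop surrounding punctuation
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("The cat sat. The cat ran!"))
# → {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```

Short, self-contained tasks like this play to the AIs' strengths; larger programs with many interacting parts are, at present, much harder for them to produce reliably.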

In the following box is a copy of a conversation with the LLM AI Google Bard, in which it describes itself.  Bard's description is helpful, and it also provides a basic example of what LLM AIs can do in response to prompts. 

Google Bard

As an AI, how are you classified?

"I am classified as a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories."

Can you describe your training corpus?

"Sure. My training corpus is a massive dataset of text and code that I was trained on. It includes text from a variety of sources, such as books, articles, websites, and code. The corpus is designed to teach me about the structure of language and how to use language to communicate effectively.

The corpus is also designed to teach me about the world. It includes text about a variety of topics, such as science, history, and current events. This helps me to understand the world around me and to generate text that is relevant and informative.

The corpus is constantly being updated with new text. This helps me to stay up-to-date on the latest information and to generate text that is accurate and up-to-date."

Any successful LLM AI production requires a carefully written prompt from the user.  The prompt must properly describe the user's intent, and if the AI's response is not to depend solely on its training corpus, the prompt must supply relevant data as well.  Already, education and technology experts speak of "prompt engineering" as a skill students will probably need in the future, but it remains to be seen whether that is true as these tools evolve.

In COLI, we've taken to describing an LLM AI as a machine that simulates a person who knows what they are talking about, responding to questions.  

Limits to LLM AIs

Importantly, the current LLM AIs have limits.  If you ask them to describe those limits, they will usually enumerate them.  For example, when asked why it occasionally gets things wrong, ChatGPT replies that its answers will reflect shortcomings in its training data: biases, incomplete or wrong information, or ambiguity.  It may also struggle to interpret language within that training corpus.  

Wrong Answers

Perhaps most important, although something that LLMs might not be able to articulate themselves, is that they present simulations of human writing rather than possessing human concepts of truth or correctness.  If an LLM AI is prompted to answer a question for which it does not have training data, it may decline to answer, or it may provide a plausible but fictional answer.  These are what AI developers refer to as "hallucinations."  Some examples of these fictions include:

  • descriptions of a book whose text, or detailed summaries of it, are not in the AI's training data.  The AI might develop a plausible but false interpretation or summary based on the book's title, or on what information it may have about the book's subject.  In COLI, we find that copyrighted works seem particularly troublesome in this respect.  It is likely that such works, and text descriptions of those works, have not made it into LLM AI training data.
  • scientific or engineering explanations of complex phenomena.  One example is rocketry.
  • biographies of non-famous individuals.  Try asking for a short biography of yourself, using your name and title, if they are already publicly available on the web.  For example, ChatGPT had this to say about COLI Director Dr. Mark Gallimore: "Gallimore holds a Bachelor of Arts degree in English from St. Bonaventure University and a Master of Science degree in Education from Canisius College. He began his career in education as a high school teacher, teaching English and social studies in Buffalo, New York."  While he has the highest respect for the aforementioned institutions and for high school teachers, this biography of Dr. Gallimore is in every respect false.  

We may say that LLM AIs are "making stuff up" or "getting it wrong," but they are not malfunctioning.  They simulate human composition.  Ars Technica's Benj Edwards offers a good explanation of this.  But the short version was offered by OpenAI CEO Sam Altman, soon after ChatGPT became available in late 2022: "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness." 

Whatever we do in our courses regarding LLM AIs, conveying to students the basic truth that these tools can smoothly invent things should be part of it.  But this limitation doesn't preclude their use altogether.  

What Is Or Is Not An AI?

This tentative guide is concerned with Large Language Model AIs, which are (at the time of writing) the most powerful artificial text generation tools available.  Other tools, be they a customer service chatbot, a grammar assistant, or even the non-playable antagonist and their henchmen in a video game, can be considered artificial intelligence, although they generally follow much simpler routines than LLM AIs.  However, in COLI we expect to see LLM AI-powered tools and features appear across the internet, in software, and on mobile devices in the next several months and years, so this distinction may become less relevant over time.  


Each discipline will need to determine the extent to which LLM AIs compel adaptation or alteration of their curriculum.  Each faculty member will also likely need to consider the relationship between their teaching style and methods, and LLM AIs.  However, here is a list of things that might spur some inspiration.


AIs might assist students and faculty at the very beginning of thinking about a topic.  This is akin to consulting encyclopedias, and students may get some basic search terms from a helpful LLM AI.  Moreover, the ability to have a conversation with an AI might spur additional inspiration, or new paths of exploration.  

For faculty, having an AI craft a lesson plan, or portions of a syllabus, may be a starting point for designing a course.  Even if the finished product departs heavily from anything the LLM AI supplied, it can still be beneficial for getting the project started, as well as for considering possibilities for content, activities, or even learning goals and objectives that faculty may not otherwise consider.

Mundane Writing Chores

We might use AI to write the boilerplate text that professionals use every day at work.  We might permit students to do the same.  We might also ask students to analyze the boilerplate responses generated by AIs, looking for cultural context: in what ways do AIs write a thank-you letter, or a job application cover letter, that reflects certain social, economic, or cultural status?  

You or your students may use AI to develop sample text for projects or processes.  For example, AIs could write simple scenarios or case studies that students work through using skills learned in a course.  AIs can also produce data in columns or .csv format, useful for learning statistics or data processing.
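For instance, a prompt such as "generate sample CSV data with columns student_id, hours_studied, exam_score" yields text that students can load and analyze directly.  A minimal sketch of that workflow follows; the CSV shown is invented sample data standing in for AI output.

```python
import csv
import io
import statistics

# Invented sample standing in for the kind of CSV text an LLM returns
# when asked for toy data with named columns.
ai_csv = """student_id,hours_studied,exam_score
1,2.5,71
2,4.0,83
3,1.0,55
4,6.5,91
"""

# Parse the AI-generated text as CSV and compute a simple statistic.
rows = list(csv.DictReader(io.StringIO(ai_csv)))
scores = [float(r["exam_score"]) for r in rows]
print(len(rows), statistics.mean(scores))  # → 4 75.0
```

Because the data is synthetic, students can practice the mechanics of loading and summarizing it without privacy concerns, though they should be reminded that AI-generated numbers may not reflect realistic distributions.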

Fictional but Plausible Examples

LLM AIs are designed to simulate human beings, and so may be sources for learning simulations.  For example, could ChatGPT write essays in the style of an adolescent student?  It might provide teacher education students with examples on which to practice assessment and feedback skills.  Can Bing Chat suggest problems for mathematics students to solve?  

Source Analysis

LLM AIs might provide handy source content on which students can practice critical thinking skills.  Can students spot certain arguments or descriptions that have political implications, and so are not as "objective" as perhaps the AI's tone might suggest?  Can students spot errors or falsehoods?  Can students employ web literacy and fact-checking skills to assess veracity or just cultural nuances within an AI's version of a story?  

Prompt Engineering

At least at present, how the user crafts a question or command for an LLM AI ("prompt engineering") determines the AI's product.  Even subtle rewording of questions that look identical to humans can produce radically different results from the AI.  It seems that, for the time being, prompt engineering might be a useful skill to cultivate in students.  Which specific practices or procedures produce which kinds of outcomes may vary by discipline.  


LLM AIs have learned primarily from openly available content.  This might be material on the internet, or books that are out of copyright.  There may be exceptions in unpublished training aids.  But much of what we assign is copyrighted content, out of necessity, since that is where specialized disciplinary knowledge is found.  Writing assignments that ask students to focus on these specialized resources draw on material that generative AIs cannot access.

Similarly, having students do primary research is both pedagogically sound and irrelevant to AIs.  If students must do the lab work, or labor in the archives, they acquire familiarity with the foundations of knowledge.  ChatGPT itself points to "original research" as something it cannot perform or simulate.  

Micro Examples

LLM AIs will not have extensive access to specific examples that illustrate larger trends.  Asking students to read testimonies, letters, or documents that were written in the past but are not particularly famous can help them connect greater ideas to specific people or events.  Aside from the issue of LLM AIs, this often generates greater interest among students.  For example, having students read a letter written by a nurse during the 1918 influenza epidemic, or a Treasury Department report about a specific corporate fraud case, can help students understand larger arguments or legal conceptions within the structure of a compelling story.  Because these cases are not published on the open internet, LLM AIs may not be able to write about them with authority, which gives students the opportunity to draw their own conclusions.  

Scaffolded Work

"One-and-done" assignments are where LLM AIs shine.  If you instead require students to complete a project in stages, providing formative feedback at each stage, students are more likely to learn research, writing, computational, and other skills, and to acquire more confidence along the way.  This isn't something they can hand off to AIs.  

Reflective Writing

Have students write reflections on course concepts or their learning.  For example, have a student describe how they arrive at a (perhaps tentative) conclusion based on available evidence.  Have a student describe how they arrived at their method for coding a program.  

At the Top of Bloom's Taxonomy

Assignments that require creation or evaluation are particularly suited to humans and not AIs.  Have students make arguments based on (original or primary) evidence.  Or have students provide an interpretation, or assessment of quality, of a particular composition or source.  

Creative Production That Isn't Text

Have students create narrated videos: documentaries, tutorials, explainers, and so on.  While these could in theory be scripted by an AI, you may reasonably require composition that is closely tied to the visuals on screen, which makes AI-generated text less useful.  As with everything above, this is solid pedagogy regardless of AIs, since students are compelled to think critically about media they are more likely to encounter than a traditional college essay. 

Steps for Faculty

Update Your Course Policies

To start with, mention generative AIs in your syllabus.  Should students avoid them altogether, for some or all assignments?  Can students use them for certain purposes?  This policy may be imperfect at first, until you acquire greater familiarity with LLM AI capabilities.  But it is better than nothing.

Periodically reflect on what exactly your position on AIs is from a curricular or pedagogical standpoint.  Do they have no place in your classroom or activities?  Could they reasonably assist students in some parts of their work, so that students can better focus their efforts on other, more important things?  Would you like students to experiment with AIs, to determine for themselves what LLM AIs are or are not capable of in your discipline?

Experiment With AIs, Generally

Become Familiar With AIs

Try having conversations with the LLM AIs on topics within your professional discipline, or on just about anything else.  Get a sense of how they respond. 

Build Examples

Below, we suggest asking AIs to complete the assignments you typically assign to students.  But beyond your current assignment prompts, ask the AIs to perform the types of analyses that are core skillsets for your discipline.  Can they accurately perform calculations of the sort your field requires?  Can they interpret types of evidence commonly used by professionals?  Can they identify important elements in a given text, according to certain scholarly or professional priorities?  Can they discuss relevant literature on a particular topic, or describe debates within a discipline?  And with all of the above, can they provide the depth, detail, or precision that you expect students to exhibit when completing assignments?  

You'll likely discover limitations.  Either there are procedures or activities you have students do that AIs don't do very well, or the AIs' penchant for falsehood will surface.  Document these examples to share with students.  Even just asking AIs to summarize lengthy texts might not be as AI-friendly a task as it first sounds, if those texts are not widely discussed (well, badly, or anywhere in between) on the internet.  

Here's an example: Allan Millett, Peter Maslowski, and William Feis organized their book For the Common Defense around six themes:

  1. First, national military considerations alone have rarely shaped military policies and programs.
  2. Second, American defense policy has traditionally been built upon pluralistic military institutions, most noticeably a mixed force of professionals and citizen-soldiers.
  3. Third, despite the popular belief that the United States has generally been unprepared for war, policymakers have done remarkably well in preserving the nation’s security.
  4. Fourth, the nation’s firm commitment to civilian control of military policy requires careful attention to civil-military relations.
  5. Fifth, the armed forces have become progressively more nationalized and professionalized.
  6. Finally, beginning in the mid-nineteenth century, but especially during the twentieth century, industrialization has shaped the way the nation has fought.

(Millett, Allan R., Peter Maslowski, and William Feis.  For the Common Defense: A Military History of the United States from 1607 to 2012.  3rd ed.  New York: Free Press, 2012, pp. 14-15.  Kindle edition.)

When asked to list these themes, ChatGPT had this to say: 

  1. Civil-Military Relations: The book explores the relationship between the military and civilian authorities. It examines how this relationship has evolved over time, the challenges that have arisen, and how they have been resolved.

  2. Technology and Innovation: The authors highlight the importance of technology and innovation in shaping the military history of the United States. They trace the evolution of military technology, from muskets to drones, and how it has influenced military tactics and strategy.

  3. Strategy and Operations: The book covers a range of military campaigns, including the Revolutionary War, the Civil War, and the Gulf War. It analyzes the strategic and operational decisions made by military leaders and the outcomes of these decisions.

  4. Military Leadership: The authors discuss the role of military leadership in the success or failure of military campaigns. They examine the qualities that make an effective military leader and the challenges that leaders face in times of war.

  5. Military Culture: The book explores the culture of the military and how it has evolved over time. It examines the values, norms, and beliefs that shape military organizations and the impact of this culture on military operations.

  6. The Impact of War on Society: Finally, the book examines the impact of war on American society. It discusses the social, economic, and political consequences of war, including the effects on veterans and their families.

Some of these are more or less close to the truth.  Most suggest that ChatGPT made a reasonable but erroneous guess based on the book title and perhaps some other publicly available descriptions.  Since the book has wide circulation and the earliest edition dates from the 1980s, this suggests that considerable quantities of long-form text, and especially copyrighted text, have not been included in the AI's training materials. 

This example could be supplied to students (perhaps those assigned the Millett, Maslowski, and Feis text) as a warning that AI output cannot, currently at least, be trusted at face value for veracity.

With Your Students

When we say that AIs are likely part of the professional future for many of our students, and that we need to prepare them to work with or around AIs, what we might mean is simply making students aware of AIs' current likely behaviors, and developing in students a habit of remaining aware of trends in AI.  For example, if you are teaching economics or anthropology, you might periodically prompt AIs to discuss the day's class subject, content, or activity, and then discuss with students how the AI responds.  

Companies like OpenAI and Google are sensitive to charges that AIs inherit bias and discrimination present in their human creators or training data.  They have taken steps to prevent this, but their products remain controversial.  This might be an important conversation to have with students, based on various perspectives presented in sources.  However, one should be cautious about in-class or assigned experimentation with, for example, engineering prompts designed to provoke racist replies.

Another concern is that students must create accounts at OpenAI, Google, or Microsoft to experiment with these AIs.  Asking students to provide these companies with personally identifiable information (PII) may be problematic.  Students should be encouraged to consult and understand the terms of service, even where account creation is optional.  

But classroom experimentation with AIs might be beneficial, to determine how forms of knowledge and ways of thinking in your discipline interact with AIs.  Use a single account, and project it on the big screen in class.  Work together as a class to generate or modify prompts.  Students might see how AIs stumble with certain questions, or provide simulated but incorrect answers.  If an AI cannot perform the kinds of analyses, creativity, or other skills you hope students learn in the course and in the process of assignments, it is good for students to see that for themselves while you are present to answer questions.  In engineering prompts and discussing AI outcomes, you also have an opportunity to demonstrate ways of thinking, habits, practices, and procedures that are the substance of your course learning objectives.

Lastly, it is reasonable to warn students that, apart from the serious moral, ethical, and social implications of academic dishonesty, there is the practical problem of a data footprint. Even if students take steps to anonymize their use of AIs (throwaway email accounts, private browser sessions, and so on) they might generate a digital trail of evidence.  Under various scenarios, that evidence may become exposed or fall into the hands of extortionists, which could threaten their future careers and professional lives.  This problem exists already with essay mills or contract cheating, and another form of it might appear with AIs, depending on whether AI providers change hands, or equip themselves with sufficient cybersecurity resources.

Ask AIs to Do Your Assignments

If you suspect that an LLM AI might be able to complete your students' assignments for them, ask the LLM AIs to do the assignments, and see how they do.  You may need to vary your prompt a bit to ensure that the AI understands exactly what is asked since, among other reasons, the AI hasn't been a student in your class for several weeks before the assignment is to be completed.  

Based on the AI's performance, you can determine which assignments might need to be scrapped, which need to be altered, and which prompt poor responses from the AI.  If you ask the AI to regenerate its responses several times, you will likely see familiar patterns, since it is unlikely to provide radically different answers to an identical prompt.

Look for falsehoods.  For example, if you ask it to draw details from a copyrighted book in order to make a case, you may see where the AI provides fictional, or more properly, simulated details.  If you have a good grasp of the source in question, this is obvious.  

Ask the AI

As a starting point, it can be helpful to ask the AIs directly what they cannot do that is nonetheless a typical learning outcome of a discipline.  They might answer with good insights.  For example, ChatGPT suggests several things that undergraduate students should learn to do in college classes, but that LLM AIs will not be able to do effectively:

For a history course:

  1. Primary source analysis: Assign students to analyze and interpret primary sources, such as letters, diaries, or government documents. This task requires critical thinking skills and the ability to contextualize information, which LLM AI models cannot fully replicate.

  2. Historiographical essay: Have students write an essay that evaluates the historiography of a specific historical event or period. This assignment requires students to synthesize and critically analyze a variety of sources, which LLM AI models may not be able to fully replicate.

  3. Historical interpretation: Assign students to write a short essay that offers an original interpretation of a specific historical event or figure. This task requires creativity, critical thinking, and the ability to make connections between different sources and information, which LLM AI models may not be able to fully replicate.

  4. Group research project: Assign students to work in groups to research and present on a specific historical topic. This task requires collaboration, communication, and the ability to synthesize information from multiple sources, which LLM AI models may not be able to fully replicate.

  5. Oral presentation: Ask students to give an oral presentation on a historical topic or event, and to answer questions from the class. This task requires public speaking skills, the ability to think on one's feet, and the ability to explain complex ideas in a clear and concise manner, which LLM AI models may not be able to fully replicate.

For a biology course:

  1. Laboratory experiments: Assign students to conduct experiments in a laboratory setting, where they can practice hands-on skills such as pipetting, culturing microorganisms, or using microscopes. LLM AI models cannot fully replicate the experience of conducting experiments in a physical laboratory environment.

  2. Data analysis and interpretation: Have students analyze and interpret data from a scientific study or experiment. This assignment requires critical thinking skills and the ability to draw conclusions from complex data, which LLM AI models may not be able to fully replicate.

  3. Literature review: Assign students to write a literature review that synthesizes and evaluates research on a specific biological topic. This task requires the ability to critically analyze and synthesize information from multiple sources, which LLM AI models may not be able to fully replicate.

  4. Field study: Ask students to conduct a field study of a specific biological ecosystem, and to write a report that details their observations and conclusions. This task requires observation skills, the ability to identify different species, and the ability to make connections between different aspects of an ecosystem, which LLM AI models may not be able to fully replicate.

  5. Oral presentation: Have students give an oral presentation on a specific biological topic or research study, and to answer questions from the class. This task requires public speaking skills, the ability to explain complex ideas in a clear and concise manner, and the ability to think on one's feet, which LLM AI models may not be able to fully replicate.

For a business or management course:

  1. Case study analysis: Assign students to analyze and solve a real-world business case study. This task requires critical thinking skills, the ability to identify and evaluate different solutions, and the ability to make decisions based on incomplete information, which LLM AI models may not be able to fully replicate.

  2. Group project: Have students work in groups to develop a business plan or strategy for a hypothetical or real-world business. This task requires collaboration, communication, and the ability to synthesize information from multiple sources, which LLM AI models may not be able to fully replicate.

  3. Leadership reflection: Ask students to write a reflection paper on their own leadership style and how it has evolved throughout the course. This task requires self-awareness, the ability to identify and evaluate one's own strengths and weaknesses, and the ability to reflect on personal experiences, which LLM AI models may not be able to fully replicate.

  4. Role-playing exercise: Assign students to role-play different scenarios related to management and leadership, such as handling a difficult employee or negotiating a contract. This task requires interpersonal skills, the ability to read and respond to social cues, and the ability to think on one's feet, which LLM AI models may not be able to fully replicate.

  5. Oral presentation: Have students give an oral presentation on a specific business topic or strategy, and to answer questions from the class. This task requires public speaking skills, the ability to explain complex ideas in a clear and concise manner, and the ability to think on one's feet, which LLM AI models may not be able to fully replicate.

To generate answers similar to these, here's the prompt: "What are some assignments for an undergraduate university (discipline) course that have students practice or demonstrate things LLM AIs cannot do for them?"

Like a lot of responses from LLM AIs, these suggestions are typically vague.  And an LLM AI (ChatGPT, for example) may attempt to simulate the very things it says it cannot do well, if prompted by you or a student.  Therefore, the faculty member needs a strong command of any disciplinary knowledge involved in the assignment, if they are to assess student work for accuracy or integrity.  But these suggestions can be a good starting point for thinking about assignments that are "AI-proof."

Was This Written By AI?

LLM AIs are designed to simulate people's writing, but there are often signs that a text was written by AI:

  • AIs do not adhere to standards of accuracy or truth, and so will invent events, people, or other details as needed to plausibly simulate a person writing about a topic.  This can even include sources cited within the text.  AIs might also misattribute real quotes to the wrong author or speaker.  So if something seems (dramatically) untrue, this might be an AI at work.
  • A real author's established writing, where it is identifiable elsewhere, likely differs dramatically from text attributed to that author but in fact composed by an AI.  AIs can simulate generic styles, such as a nine-year-old crafting a book report, or an American of limited education describing his experiences during the Great Depression, but they likely cannot reproduce the voice of an individual who is not a famous author.  
  • AIs may struggle to understand a prompt, rather more than most real people do.  So they may in effect answer the wrong question, in whole or in part.  


The popular plagiarism prevention and detection service Turnitin has a toolset for detecting AI-composed writing within student submissions.  COLI has experimented with it, and it seems to work in most respects.  Just as Turnitin does, we strongly recommend that faculty follow up on any suspected unauthorized AI use with further steps.  Check citations and quotes.  Does the student's submission properly address the assignment prompt?  Does it answer the right question or perform the correct procedure?  Are quotes and citations properly attributed to real sources?  Is there enough detail or depth of argument to satisfy the prompt?  Discuss with students the context of the assignment and their submission.  

Innovative Pedagogy

If you need to "AI-proof" your course, you have the opportunity to do something more.  Can you make your assignments more effective as opportunities for students to practice or demonstrate the skills embodied in your learning objectives and goals?  

Academic and professional disciplines across campus usually advertise some or another form of critical thinking and analysis skills as part of their learning outcomes.  This is also present in our general education programs, the Core Curriculum and All-College Honors Programs.  These tend to correlate with higher levels of Bloom's Taxonomy.  They can also be especially challenging to assess on classroom exams, at least in something approaching a real-world scenario.

But perhaps we can develop authentic assessments that challenge students to complete tasks done poorly by, or altogether inaccessible to, LLM AIs.  Many of these assignments were valuable long before AIs existed.

Further Resources

Mills, Anna, and Lauren Goodlad.  "Critical AI: Adapting College Writing for the Age of Large Language Models Such as ChatGPT: Some Next Steps for Educators."  Critical AI.  

"ChatGPT and Artificial Intelligence Tools."  Center for New Designs in Learning and Scholarship, Georgetown University.

Fox-Jackson, Eric.  "ChatGPT in Your Classroom."  YouTube. 

"ChatGPT Cheat Sheet."

"Practical Responses to ChatGPT."  Montclair State University Office for Faculty Excellence. 

Edwards, Benj.  "Why ChatGPT and Bing Chat Are So Good At Making Things Up."  Ars Technica.