Here, COLI is assembling a list of introductory information, tips, and resources concerning artificial intelligence and its possible implications for pedagogy and scholarship.  This resource is tentative, certainly incomplete, might age rapidly, and is probably (at any point in time) wrong in places.

Understanding Large Language Model (LLM) Generative Artificial Intelligences (AIs)

...

Google Bard

As an AI, how are you classified?

"I am classified as a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories."

Can you describe your training corpus?

"Sure. My training corpus is a massive dataset of text and code that I was trained on. It includes text from a variety of sources, such as books, articles, websites, and code. The corpus is designed to teach me about the structure of language and how to use language to communicate effectively.

The corpus is also designed to teach me about the world. It includes text about a variety of topics, such as science, history, and current events. This helps me to understand the world around me and to generate text that is relevant and informative.

The corpus is constantly being updated with new text. This helps me to stay up-to-date on the latest information and to generate text that is accurate and up-to-date."

...

This tentative guide is concerned with Large Language Model AIs, which are (at the time of writing) the most powerful artificial text generation tools available.  Other tools, whether a customer service chatbot, a grammar assistant, or even the non-playable antagonist and their henchmen in a video game, can be considered artificial intelligence, although they generally follow much simpler routines than LLM AIs.  However, in COLI we expect that over the next several months and years, LLM AI-powered tools and features will appear across the internet, in software, and on mobile devices.  So this distinction may become less relevant over time.

Pedagogy 

Each discipline will need to determine the extent to which LLM AIs compel adaptation or alteration of its curriculum.  Each faculty member will also likely need to consider the relationship between LLM AIs and their own teaching style and methods.  In the meantime, here is a list of things that might spur some inspiration.

...

Try having conversations with the LLM AIs on topics within your professional discipline, or on just about anything else.  Get a sense of how they respond.
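
If you would rather experiment programmatically than in the web interfaces, the sketch below shows a minimal conversational loop using OpenAI's Python library.  This is only a sketch under stated assumptions: it presumes the openai package (v1 or later) is installed, an OPENAI_API_KEY environment variable is set, and it uses gpt-3.5-turbo as one example model; Bard and Bing Chat have their own interfaces and terms.

    # A minimal conversational loop against the OpenAI chat API.
    # Assumptions: the `openai` package (v1+) is installed and the
    # OPENAI_API_KEY environment variable is set; the model name is
    # just one example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = []       # running conversation, oldest message first

    while True:
        prompt = input("You: ")
        if not prompt:
            break
        history.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=history,  # send the whole history so the AI has context
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print("AI:", reply)

Note that the whole message history is resent on every turn: the model itself is stateless, and the appearance of a continuing conversation comes entirely from that accumulated context.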

Build Examples

Below, we suggest asking the AIs to complete the assignments you typically assign to students.  But beyond your current assignment prompts, ask the AIs to perform the types of analyses that are core skillsets for your discipline.  Can it accurately perform calculations of a sort?  Can it interpret types of evidence commonly used by professionals?  Can it identify important elements in a given text, according to certain scholarly or professional priorities?  Can it discuss relevant literature on a particular topic, or describe debates within a discipline?  And with all of the above, can it provide depth, detail, or precision that you expect students to exhibit when completing assignments?

You'll likely discover limitations.  Either there are procedures or activities you have students do that the AIs don't perform very well, or the AIs' penchant for falsehood will surface.  Document these examples to share with students.  You might (as we suggest below) ask the AIs to try your assignments, but even before that, simply asking them to summarize lengthy texts that are less likely to be the topic of considerable analysis (good, bad, and everything in between) on the internet might not be as AI-friendly as it first sounds.
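
One reason summarization is less AI-friendly than it sounds: lengthy texts exceed the models' context windows, so the text must be summarized in pieces.  Here is a rough sketch of that workaround, again using OpenAI's Python library; the character-based chunk size is a crude assumption, since real limits are counted in model-specific tokens.

    # Rough "summarize the summaries" workaround for texts longer than
    # the model's context window.  CHUNK_CHARS is a crude character-based
    # guess; actual limits are measured in model-specific tokens.
    from openai import OpenAI

    client = OpenAI()
    CHUNK_CHARS = 8000  # assumption: comfortably under the context limit

    def summarize(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": "Summarize the following text:\n\n" + text}],
        )
        return response.choices[0].message.content

    def summarize_long(text: str) -> str:
        # Summarize each chunk, then summarize the concatenated summaries.
        chunks = [text[i:i + CHUNK_CHARS]
                  for i in range(0, len(text), CHUNK_CHARS)]
        partials = [summarize(chunk) for chunk in chunks]
        return summarize("\n\n".join(partials))

Each pass loses detail, which is one more place for simulated specifics to creep in.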

Here's an example: Allan Millett, Peter Maslowski, and William B. Feis organized their book For the Common Defense around six themes:

...

When asked to list these themes, ChatGPT had this to say: 

  1. Civil-Military Relations: The book explores the relationship between the military and civilian authorities. It examines how this relationship has evolved over time, the challenges that have arisen, and how they have been resolved.

  2. Technology and Innovation: The authors highlight the importance of technology and innovation in shaping the military history of the United States. They trace the evolution of military technology, from muskets to drones, and how it has influenced military tactics and strategy.

  3. Strategy and Operations: The book covers a range of military campaigns, including the Revolutionary War, the Civil War, and the Gulf War. It analyzes the strategic and operational decisions made by military leaders and the outcomes of these decisions.

  4. Military Leadership: The authors discuss the role of military leadership in the success or failure of military campaigns. They examine the qualities that make an effective military leader and the challenges that leaders face in times of war.

  5. Military Culture: The book explores the culture of the military and how it has evolved over time. It examines the values, norms, and beliefs that shape military organizations and the impact of this culture on military operations.

  6. The Impact of War on Society: Finally, the book examines the impact of war on American society. It discusses the social, economic, and political consequences of war, including the effects on veterans and their families.

Some of these are closer to the truth than others.  Most suggest that ChatGPT made reasonable but erroneous guesses based on the book title and perhaps some other publicly available descriptions.  Since the book has wide circulation and its earliest edition dates from the 1980s, this suggests that considerable quantities of long-form text, and especially copyrighted text, have not been included in the AI's training materials.

...

Companies like OpenAI and Google are sensitive to charges that AIs inherit bias and discrimination present in their human creators or training data.  They have taken steps to prevent this, but their products remain controversial.  This might be an important conversation to have with students, based on various perspectives presented in sources.  However, one should be cautious about in-class or assigned experimentation with, for example, engineering prompts designed to provoke racist replies.

Another concern is that students must create accounts at OpenAI, Google, and Microsoft to experiment with these AIs.  Asking students to provide these companies with personally identifiable information (PII) may be problematic.  Students should be encouraged to consult and understand the terms of service, even if creating an account is optional.

But classroom experimentation with AIs might be beneficial, to determine how forms of knowledge and ways of thinking in your discipline interact with AIs.  Use a single account, and project it on the big screen in class.  Work together as a class to generate or modify prompts.  Students might see how AIs stumble with certain questions, or provide simulated but incorrect answers.  If an AI cannot perform the kinds of analyses, creativity, or other skills you hope students learn in the course and in the process of assignments, it is good for students to see that for themselves while you are present to answer questions.  In engineering prompts and discussing AI outcomes, you also have an opportunity to demonstrate ways of thinking, habits, practices, and procedures that are the substance of your course learning objectives.

Ask AIs to Do Your Assignments

If you suspect that an LLM AI might be able to complete your students' assignments for them, ask the LLM AIs to do the assignments, and see how they do.  You may need to vary your prompt a bit to ensure that the AI understands exactly what is being asked, since, among other reasons, the AI hasn't been sitting in your class for the weeks leading up to the assignment.

Based on the AI's performance, you can determine which assignments might need to be scrapped, which need to be altered, and which prompt poor responses from the AI.  If you ask the AI to regenerate those responses several times, you will likely see familiar patterns, since it is unlikely to produce radically different answers to an identical prompt.
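
If you are regenerating responses by hand, note that the chat API can request several completions of the identical prompt in one call, which makes the recurring patterns easy to compare side by side.  A minimal sketch, assuming OpenAI's Python library and a placeholder assignment prompt:

    # Request several independent completions of the same assignment
    # prompt so recurring patterns across regenerations stand out.
    from openai import OpenAI

    client = OpenAI()
    assignment = "Write a 500-word essay on ..."  # placeholder: your real prompt

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": assignment}],
        n=5,              # five completions of the identical prompt
        temperature=1.0,  # default sampling; lower values converge even more
    )
    for i, choice in enumerate(response.choices, start=1):
        print(f"--- Attempt {i} ---\n{choice.message.content}\n")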

Look for falsehoods.  For example, if you ask it to draw details from a copyrighted book in order to make a case, you may see where the AI provides fictional, or more properly, simulated details.  If you have a good grasp of the source in question, this is obvious.  


Ask the AI

As a starting point, it can be helpful to ask the AIs directly what they cannot do that is nonetheless a typical learning outcome of a discipline.  They might answer with good insights.  For example, ChatGPT suggests several things that undergraduate students should learn to do in college classes, but that LLM AIs will not be able to do effectively:

...

To generate answers similar to these, here's the prompt: "What are some assignments for an undergraduate university (discipline) course that have students practice or demonstrate things LLM AIs cannot do for them?"

Like a lot of responses from LLM AIs, these suggestions are typically vague.  And an LLM AI (ChatGPT, for example) may attempt to simulate the things it tells you it cannot do very well, if prompted by you (or a student).  Therefore, the faculty member needs a strong command of any disciplinary knowledge involved in the assignment if they are to assess student work for accuracy or integrity.  But these suggestions can be a good starting point for thinking about assignments that are "AI proof."
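
If you want to gather such suggestions for several courses at once, the prompt above is easy to parameterize.  A minimal sketch, again assuming OpenAI's Python library; the discipline list is a placeholder:

    # Fill the "(discipline)" slot of the prompt for several courses
    # and collect the AI's suggestions in one pass.
    from openai import OpenAI

    client = OpenAI()
    PROMPT = ("What are some assignments for an undergraduate university "
              "{discipline} course that have students practice or "
              "demonstrate things LLM AIs cannot do for them?")

    for discipline in ["history", "chemistry", "philosophy"]:  # placeholders
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": PROMPT.format(discipline=discipline)}],
        )
        print(f"=== {discipline} ===")
        print(response.choices[0].message.content)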

...