Here, COLI is assembling a list of resources concerning artificial intelligence and its possible implications for pedagogy and scholarship. This resource is tentative, certainly incomplete, might age rapidly, and is probably wrong in places.
...
Google Bard

As an AI, how are you classified? "I am classified as a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories."

Can you describe your training corpus? "Sure. My training corpus is a massive dataset of text and code that I was trained on. It includes text from a variety of sources, such as books, articles, websites, and code. The corpus is designed to teach me about the structure of language and how to use language to communicate effectively. The corpus is also designed to teach me about the world. It includes text about a variety of topics, such as science, history, and current events. This helps me to understand the world around me and to generate text that is relevant and informative. The corpus is constantly being updated with new text. This helps me to stay up-to-date on the latest information and to generate text that is accurate and up-to-date."
...
Importantly, the current LLM AIs have limits. If you ask them to describe those limits, they will usually enumerate them. For example, when asked why it occasionally gets things wrong, ChatGPT replies that its answers will reflect shortcomings in its training data: biases, incomplete or wrong information, or ambiguity. It may also struggle to interpret language within that training corpus.
Wrong Answers
Perhaps most important, although something that LLMs might not be able to articulate themselves, is that they present simulations of human writing, rather than possessing human concepts of truth or correctness. If an LLM AI is prompted to answer a question for which it does not have training data, it may decline to answer, or it may provide a plausible but fictional answer. These are what AI developers refer to as "hallucinations." Some examples of these fictions could be:
...
This amounts to LLM AIs "making stuff up" or "getting it wrong." That may be how it looks to us, but strictly speaking the tools are not malfunctioning: they are doing exactly what they were developed to do, which is simulate human composition. How AIs do or do not develop to overcome these limits will be important for education. At any point, in what ways can students and faculty responsibly use these AIs? In what ways are these AIs not appropriate, given their limitations? Ars Technica's Benj Edwards offers a good explanation of this. But the short version was offered by OpenAI CEO Sam Altman, soon after ChatGPT became available in late 2022: "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness."
Whatever we do in our courses regarding LLM AIs, conveying this basic truth to students, that these tools can smoothly invent things, should be part of it. But that doesn't preclude their use altogether. They may still be useful for brainstorming, or in cases where we can provide the AI with the facts of the situation as part of the prompt.
Prompt Engineering
At least in the present, how the user crafts a question or command for an LLM AI, a practice called prompt engineering, determines the AI's product. Even subtle re-wording of questions that seem identical to humans can produce radically different results from the AI. For the time being, then, prompt engineering might be a useful skill to cultivate in students. Which specific practices or procedures produce which kinds of outcomes may vary by discipline. But the subtleties of prompt engineering also mean that using LLM AIs in academic work, honestly or otherwise, may occasionally be frustrating.
What Is Or Is Not An AI?
This tentative guide is concerned with Large Language Model AIs, which are (at the time this is written) the most powerful artificial text generation tools available. Other tools, be it a customer service chatbot, a grammar assistant, or even the non-playable antagonist and their henchmen in a video game, can be considered artificial intelligence, although they generally follow much simpler routines than LLM AIs. However, in COLI we expect LLM AI-powered tools and features to appear across the internet, in software, and on mobile devices over the next several months and years. So this distinction may become less relevant over time.
Pedagogy
Each discipline will need to determine the extent to which LLM AIs compel adaptation or alteration of their curriculum. Each faculty member will also likely need to consider the relationship between their teaching style and methods, and LLM AIs. However, here is a list of things that might spur some inspiration.
...
To generate answers similar to these, here's the prompt: What are some assignments for an undergraduate university (discipline) course that have students practice or demonstrate things LLM AIs cannot do for them?
Like a lot of responses from LLM AIs, these suggestions are typically vague. And an LLM AI (ChatGPT, for example) may, if prompted by you or a student, attempt to simulate the very things it claims it cannot do well. Therefore, the faculty member needs a strong command of any disciplinary knowledge involved in the assignment if they are to assess student work for accuracy or integrity. Still, these suggestions can be a good starting point for thinking about assignments that are "AI proof."
...
A good list of links concerning AI and pedagogy: https://www.chronicle.com/newsletter/teaching/2023-03-16