Here, COLI is assembling a list of resources concerning artificial intelligence and its possible implications for pedagogy and scholarship.
...
- descriptions of a book whose text, or detailed summaries of it, are not in the AI's training data. The AI may produce a plausible but false interpretation or summary based on the book's title, or on whatever information it has about the book's subject.
- scientific or engineering explanations of complex phenomena.
- biographies of non-famous individuals. (Try asking for a short biography of yourself, using your name and title, if they are already publicly available on the web. You may receive a fantastic, if false, biography.)
Pedagogy
Sources
LLM AIs are trained primarily on openly available content: material on the public internet, or books that are out of copyright. (There may be exceptions in unpublished training data.) But much of what we assign is copyrighted content, out of necessity, since that is where specialized disciplinary knowledge is found. Writing assignments that ask students to engage closely with these specialized resources draw on material that generative AIs cannot access.
...
Update Your Course Policies
To start with, mention generative AIs in your syllabus. Should students avoid them altogether, for some or all assignments? Can students use them for certain purposes? This policy may be imperfect at first, until you acquire greater familiarity with LLM AI capabilities. But it is better than nothing.
Periodically reflect on what exactly your position on AIs is from a curricular or pedagogical standpoint. Do they have no place in your classroom or activities? Could they reasonably assist students in some parts of their work, so that students can better focus their efforts on other, more important things? Would you like students to experiment with AIs, to develop for themselves a sense of what LLM AIs are or are not capable of in your discipline?
...
Another concern is that students must create accounts with OpenAI, Google, or Microsoft to experiment with these AIs. Asking students to provide these companies with personally identifiable information (PII) may be problematic. Students should be encouraged to consult and understand the terms of service, even if creating an account is optional.
But classroom experimentation with AIs might be beneficial, to determine how forms of knowledge and ways of thinking in your discipline interact with AIs. Use a single account, and project it on the big screen in class. Work together as a class to generate or modify prompts. Students might see how AIs stumble with certain questions, or provide plausible but incorrect answers. If an AI cannot perform the kinds of analysis, creativity, or other skills you hope students will learn in the course and through its assignments, it is good for students to see that for themselves while you are present to answer questions. In engineering prompts and discussing AI outputs, you also have an opportunity to demonstrate the ways of thinking, habits, practices, and procedures that are the substance of your course learning objectives.
Ask AIs to Do Your Assignments
...
As a starting point, it can be helpful to ask the AIs directly what they cannot do that nevertheless constitutes typical learning outcomes in a discipline. They might answer with good insights. For example, ChatGPT suggests several things that undergraduate students should learn to do in history classes, but that LLM AIs will not be able to do effectively:
...
However, be aware that an LLM AI (ChatGPT, for example) may attempt to simulate the very things it tells you it cannot do well, if prompted by you (or a student). Therefore, the faculty member needs a strong command of any disciplinary knowledge involved in the assignment, if they are to assess student work for accuracy or integrity.
...