Here, COLI is assembling a list of resources concerning artificial intelligence and its possible implications for pedagogy and scholarship.
Challenges
The landscape of LLM-based generative AIs is rapidly changing. For example, the arrival of Bing Chat and Google Bard in winter 2023 introduced AIs that provide citations; the GPT-3.5-powered ChatGPT released in 2022 did not.
Sources
https://cndls.georgetown.edu/ai-composition-tools/
ChatGPT Cheat Sheet: https://drive.google.com/file/d/1UOfN0iB_A0rEGYc2CbYnpIF44FupQn2I/view?usp=sharing
OpenAI's ChatGPT blog: https://openai.com/blog/chatgpt/
"Practical Responses to ChatGPT" https://www.montclair.edu/faculty-excellence/practical-responses-to-chat-gpt
A good list of links on AI and pedagogy: https://www.chronicle.com/newsletter/teaching/2023-03-16
Pedagogy
Specialized Sources
LLM AIs are trained primarily on openly available content: material on the public internet, or books that are out of copyright. (There may be exceptions in unpublished training data.) But much of what we assign is copyrighted content, out of necessity, since that is where specialized disciplinary knowledge is found. Writing assignments that ask students to engage closely with these specialized resources draw on material that generative AIs cannot access.
Similarly, having students do primary research is both pedagogically sound and beyond the reach of AIs. If students must do the lab work, or labor in the archives, they acquire familiarity with the foundations of knowledge. ChatGPT itself points to "original research" as something it cannot perform or simulate.
Micro Examples
LLM AIs will not have extensive access to specific examples that illustrate larger trends. Asking students to read testimonies, letters, or documents from the past that are not particularly famous can help them connect broader ideas to specific people or events. Quite apart from the issue of LLM AIs, this approach often generates greater interest among students. For example, having students read a letter written by a nurse during the 1918 influenza epidemic, or a Treasury Department report about a specific corporate fraud case, can help them understand larger arguments or legal concepts within the structure of a compelling story. Because LLM AIs may not be able to write with authority about these cases, which are not published on the open internet, students have the opportunity to draw their own conclusions.
Scaffolded Work
"One-and-Done" assignments are where LLM AIs shine. If you require students to complete a project in stages, providing formative feedback at each stage, students are more likely to learn research, writing computational, and other skills, and acquire more confidence along the way. This isn't something they can hand off to AIs.
Reflective Writing
Have students write reflections on course concepts or on their own learning. For example, have a student describe how they arrived at a (perhaps tentative) conclusion based on available evidence, or how they arrived at their method for coding a program. While AIs can suggest interpretations or examples of computer code, they cannot offer a personalized account of how a particular student arrived at one method or conclusion over another.
At the Top of Bloom's Taxonomy
Assignments that require creation or evaluation are particularly suited to humans rather than AIs. Have students make arguments based on original or primary evidence, or have them offer an interpretation, or an assessment of quality, of a particular composition or source.
Creative Production That Isn't Text
Have students create narrated videos: documentaries, tutorials, explainers, and so on. While these could in theory be scripted by an AI, you may reasonably require composition that is closely tied to the visuals on screen, which makes AI-generated text less useful. As with the strategies above, this is solid pedagogy regardless of AIs, since it compels students to think critically about media they are more likely to encounter than a traditional college essay.