Here, COLI is assembling a list of resources concerning artificial intelligence and its possible implications for pedagogy and scholarship.
...
- descriptions of a book whose text (or detailed summaries of it) is not in the AI's training data. The AI may produce a plausible but false interpretation or summary based on the book's title, or on whatever information it has about the book's subject.
- scientific or engineering explanations of complex phenomena.
- biographies of non-famous individuals. (Try asking for a short biography of yourself, using your name and title, if that information is already publicly available on the web. You may receive a fantastic, if false, biography.)
Pedagogy
Sources
LLM AIs have been trained primarily on openly available content: material on the internet, or books that are out of copyright. (There may be exceptions among unpublished training materials.) But much of what we assign is copyrighted content, out of necessity, since that is where specialized disciplinary knowledge is found. Writing assignments that require students to engage closely with these specialized resources draw on material that generative AIs cannot access.
...
Look for falsehoods. For example, if you ask the AI to draw details from a copyrighted book in order to make a case, you may see where it provides fictional (or, more properly, simulated) details. If you have a good grasp of the source in question, these will be obvious.
Determine AI's effectiveness
Beyond your current assignment prompts, ask the AIs to perform the types of analyses that are core skillsets for your discipline. Can it accurately perform calculations of a sort? Can it interpret types of evidence commonly used by professionals? Can it identify important elements in a given text, according to certain scholarly or professional priorities? Can it discuss relevant literature on a particular topic, or describe debates within a discipline? And with all of the above, can it provide depth, detail, or precision that you expect students to exhibit when completing assignments?
As a starting point, it can be helpful to ask the AIs directly about typical learning outcomes of a discipline that they cannot accomplish. They might answer with good insights. For example, ChatGPT suggests several things that undergraduate students should learn to do in history classes, but that LLM AIs will not be able to do effectively:
However, be aware that an LLM AI (ChatGPT, for example) may attempt to simulate these things if prompted by you (or a student). Therefore, the faculty member needs a strong command of any disciplinary knowledge involved in the assignment in order to assess student work for accuracy or integrity.