Here, COLI is assembling a list of resources concerning artificial intelligence and its possible implications for pedagogy and scholarship.
...
- recipes for food dishes,
- lesson plans for secondary school math or science classes,
- a cover letter accompanying a job application,
- a thank-you note,
- an essay on the development of the Code Napoleon,
- a simulated review of a video game by a fifteen-year-old blogger,
- code for a module or a particular task within a computer program.
Each of these examples, and any other successful production, requires a carefully written prompt from the user. The prompt must properly describe the user's intent.
Importantly, AIs have limits. If you ask them to describe those limits, they will usually enumerate them. For example, when asked why it occasionally gets things wrong, ChatGPT replies that its answers reflect shortcomings in its training data: biases, incomplete or incorrect information, and ambiguity. It may also struggle to interpret language within that training corpus.
...
- descriptions of a book whose text, or detailed summaries of it, are not in the AI's training data. The AI might develop a plausible but false interpretation or summary based on the book's title, or on whatever information it has about the book's subject.
- scientific or engineering explanations of complex phenomena.
- biographies of non-famous individuals. (Try asking for a short biography of yourself, using your name and title, even if that information is publicly available on the web. You may receive a fantastical, and false, biography.)
Pedagogy
Sources
LLM AIs have been trained primarily on openly available content: material on the public internet, or books that are out of copyright. There may be exceptions in unpublished training material. But much of what we assign is copyrighted content, out of necessity, since that is where specialized disciplinary knowledge is found. Writing assignments that ask students to focus on these specialized resources will therefore draw on material that generative AIs cannot access.
...