- OpenAI, the company behind ChatGPT, has released a guide for teachers who use the conversational artificial intelligence model in their classrooms.
- The guide includes examples of how K-12 teachers and college faculty use ChatGPT in their classrooms, as well as an FAQ with information on using the tool for assessments, safety guardrails, and potential biases and other limitations.
- The guide comes as educators, parents, school advocates, lawmakers and others are gathering information on best practices and considering standards for classroom use while ensuring students aren't shortchanged in their development of critical thinking skills.
Use of ChatGPT in education has increased rapidly over the past year as the tool and other AI models have been introduced to the public. A survey by the Walton Family Foundation and Impact Research, a public opinion research and consulting firm, found that 63% of teachers surveyed in late June and early July said they used the technology, up from 50% in February.
The quick adoption of AI tools, which some say have the potential to revolutionize teaching and learning much as the internet did, along with other educators' refusal to incorporate them, has led to a scramble to establish best practices and frameworks for their use.
A May report by the U.S. Department of Education's Office of Educational Technology recommends educators use a "humans in the loop" approach when exploring AI capabilities and protecting against risks.
On Capitol Hill, lawmakers are considering what role Congress should take in mitigating the technology's risk and leveraging the benefits, including in education.
"AI will either be a shortcut for students’ critical thinking or an incredible sparring partner to strengthen them — what actions can we take to ensure it is the latter?" said a white paper published by Sen. Bill Cassidy, R-Louisiana, who is also ranking member of the Senate Committee on Health, Education, Labor and Pensions.
Internationally, the United Nations Educational, Scientific and Cultural Organization is urging all governments to regulate generative AI in schools to ensure a human-centered approach is being used. The organization warned in a Sept. 7 statement that the "education sector is largely unprepared for the ethical and pedagogical integration of these rapidly evolving tools."
OpenAI's educator guide, released Aug. 31, provides examples of how teachers are using the tool to accelerate student learning. Included in the guide are sample ChatGPT prompts for developing lesson plans and for helping students learn by having them demonstrate their knowledge.
It also offers several disclaimers and warnings, including that the model may not always generate correct information, and that while AI can be an aid for assessments, humans need to be involved in making decisions and judgments about measuring students' knowledge.
"Models today are subject to biases and inaccuracies, and they are unable to capture the full complexity of a student or an educational context," the FAQ said.
Additionally, in response to a question about how educators can detect if students are presenting AI-generated work as their own, OpenAI's FAQ said, “ChatGPT has no ‘knowledge’ of what content could be AI-generated. It will sometimes make up responses to questions like ‘did you write this [essay]?’ or ‘could this have been written by AI?’ These responses are random and have no basis in fact."
Plagiarism detection service Turnitin said in May that its tool produced a "higher incidence of false positives" — the incorrect identification of fully human-written text as AI-generated — when less than 20% of a document was AI writing. At that time, the company also announced it would display an asterisk when the detector finds less than 20% of AI writing in a text.
The OpenAI FAQ advises teachers to ask students to share specific conversations from ChatGPT so they can discuss students' skills in asking questions, analyzing responses, and integrating information.