Dive Brief:
- A large majority of both students (86%) and teachers (85%) reported using artificial intelligence during the 2024-25 school year, according to survey data released Wednesday by the Center for Democracy & Technology.
- However, this surging use of AI in educational settings, CDT said, is linked to increased risks for students. The more a school uses AI, the survey found, the more likely students are to experience data breaches and ransomware attacks, sexual harassment and bullying, AI systems that fail to work as designed, and concerning interactions between students and AI tools.
- In an Oct. 7 letter emailed to U.S. Education Secretary Linda McMahon, CDT and nine other education civil rights, library and technology groups cited these risks and called for the U.S. Department of Education to integrate its July guidance on responsible AI use as it administers grants and research programs for AI implementation in schools.
Dive Insight:
In their letter to McMahon, the groups said the survey's findings demonstrate the need to address risks related to widespread use of AI in schools, especially as the Trump administration continues to prioritize AI in education. Even so, district leaders have raised concerns about how the Education Department will successfully support AI implementation in schools given the closure of the Office of Educational Technology earlier this year.
CDT found, for example, that half of students said using AI in class makes them feel less connected to their teacher, and 38% said it's easier to talk to AI than to their parents. Yet just 11% of teachers reported receiving training on how to respond if they think a student is using AI in a way that could harm their well-being.
CDT highlighted concerns about problematic relationships some students are developing with AI tools. During the 2024-25 school year, 42% of students said that they or their friends had used AI for mental health support, as a friend or companion, or as a way to escape from real life. Some 19% also reported using AI to have a romantic relationship.
Children’s media safety and mental health organizations have been sounding the alarm about these kinds of trends, warning against the use of AI companions for anyone under the age of 18. Advocates say that AI companions pose serious mental health risks to children and teens, especially for those with conditions like depression, anxiety disorders, ADHD or bipolar disorder.
To address concerns about AI companions, one mental health expert recently recommended that schools offer digital literacy programs to educate students about these tools. The expert also advised schools to make clear to students that AI companions are not humans and that students should discuss important issues with a trusted adult.
CDT also said AI can pose risks to a school's cybersecurity. The more teachers relied on AI for school-related work, the more likely they were to report that their school had experienced a large-scale data breach.
CDT conducted its online surveys between June and August 2025 with 1,030 students in grades 9-12, 806 grade 6-12 teachers and 1,018 parents with children in grades 6-12.
Additionally, 36% of surveyed students reported an issue in their school involving deepfakes (fake videos, photos or audio recordings created by AI to seem real) during the 2024-25 school year. Although concerns persist, fewer than a quarter of teachers said their schools had released policies to address deepfakes, specifically those depicting sexually explicit imagery without a person's consent.
In May, President Donald Trump signed into law the Take It Down Act, which criminalizes the use of AI to create deepfake images without the subject's consent.