In the summer of 1956, a group of mathematicians set out to discover whether human intelligence could be recreated in a machine. The workshop – the Dartmouth Summer Research Project on Artificial Intelligence – coined the phrase “artificial intelligence.” That history shapes how Dartmouth approaches, discusses, and defines AI today. The front page of ai.dartmouth.edu asserts, “The birthplace of AI. Now shaping what comes next.” It’s a compelling claim, and one the College has spent the last several years working hard to support.
AI in education, however, takes shape in very different ways. Students and faculty experience the technology as either the greatest pedagogical opportunity in a generation or its greatest threat. Both views coexist on campus: Dartmouth continues its AI initiatives, students continue to use the tools, and a campus community that includes both embracers and skeptics of the technology continues to debate with itself. How these dynamics play out will define the next few years of AI in education – a pivotal moment that determines whether institutions and students adapt or stagnate.
In December 2025, Dartmouth became the first Ivy League school to roll out a campus-wide AI partnership at institutional scale. Through an agreement with Anthropic and Amazon Web Services, every student, faculty member, and staff member has access to Claude for Education through a Dartmouth-branded portal with enterprise-level privacy protections. President Sian Beilock deemed it “the next chapter in a story that began at Dartmouth 70 years ago,” saying that the institution that introduced the term AI will “show the world how to use it wisely in pursuit of knowledge.” Anthropic president Daniela Amodei grounded the partnership in “AI fluency” and in teaching students how to deeply engage with problems “rather than bypass them.”
The Anthropic deal is layered on top of an already substantial institutional apparatus. Dartmouth Chat provides the entire campus with a single interface for access to advanced versions of several models. Run jointly by Dartmouth Libraries and Research Computing and Data Services, Dartmouth Chat offers LLMs from Anthropic, OpenAI, and Google, as well as Dartmouth-hosted open-source models, alongside document upload, prompt comparison, and custom chatbot building. Student-driven projects have also been a priority. In 2024, students at the DALI Lab began developing an AI literacy application sponsored by Jed Dobson, the Special Advisor to the Provost for AI. Built by students and for students, the product teaches AI literacy through gamified interactions with an LLM-powered tool. It is one of many projects in the Lab integrating AI into student-built products.
Further, the Dartmouth Center for the Advancement of Learning (DCAL) and Learning Design and Innovation (LDI) jointly operate the Teaching with GenAI Initiative. The initiative supports educators in experimenting with GenAI as a teaching tool – promoting AI integration into specific courses and hosting event series. This pedagogical infrastructure is underpinned by the goal of preserving academic integrity while expanding access to interactive support.
While Dartmouth is actively promoting the integration of AI in education, faculty hold contrasting views on the technology’s promise and its impact on learning. Dartmouth faculty have formed an AI Faculty Leadership Group – drawing from across the disciplines to define classroom best practices – as well as a faculty group that is firmly anti-AI. Wariness centers on two concerns: that the cognitive offloading enabled by AI tools corrodes the quality and effectiveness of a liberal arts education, and that the College’s institutional support for AI risks endorsing exactly this kind of unproductive use.
Student Use and Existing Gaps
In October 2025, DCAL conducted a student-faculty roundtable on GenAI and academic integrity. The resulting report asserts that GenAI has produced a “mutual and threatening erosion of trust between faculty and students” and that “students’ ease with AI is a myth”: many students are not as fluent with the technology as faculty may assume. This gap in fluency warrants close attention as the College strives to promote AI enablement across campus. Some students are extremely fluent while others have never used an LLM. The variance within a single classroom is vast, and it is compounding quickly.
The AWS announcement names AI fluency as a primary objective: preparing the Class of 2029 to become “Dartmouth’s first ‘AI-fluent’ undergraduate class.” Students who fail to learn and adapt to AI may lose the ability to evaluate, use, and build AI tools in a world that increasingly demands those skills. Roundtable participants noted this tension between efficiency and internalized learning, between outcome and process – tensions that a liberal arts education holds in full view.
On campus, the faculty response is varied. Course policies on AI range from full bans to permitted-with-citation to required use as part of an assignment. In the curriculum itself, AI has become a recurring focal point across departments. Courses like Critical AI, Deep Learning, Politics and Artificial Intelligence, and The Dark Side of AI/ML are just a few examples of offerings across computer science, English, cognitive science, and government.
Looking Forward
Dartmouth offers a preview of where post-secondary education and learning alongside AI are heading, and several patterns are emerging.
AI literacy is necessary but unevenly distributed across the student population. Tools and curricula that assume it will widen the gap between fluent and unfamiliar students. How that gap closes, and how the College meets students where they are, are challenges that Dartmouth and its offices are actively working through. Concerns over academic integrity, in turn, are fundamentally concerns about trust in the classroom. Students and faculty alike worry that AI may undermine learning, so structures and tools that emphasize process and visibility will likely prove more resilient.
Dartmouth recognizes both the risks and the opportunities of AI; the risk that the technology may replace student learning remains a valid concern. Nonetheless, engagement is the more defensible response. How AI ultimately materializes in education will depend on how courses are restructured, how students are taught to interact with the technology, and how the gap between familiar and unfamiliar users is addressed in practice.
Seventy years after the Dartmouth Summer Research Project on Artificial Intelligence, the College is defining how AI takes shape at every step of the learning process. At AIDEC, we believe this future is built through student-driven development and innovative learning – empowering students, educators, and institutions to lead the transformation of AI in education.