Initial prompt: So, I'm going to interview you, the AI, about how you've disrupted education so far and what further impacts we might anticipate you having on education in the coming year. In general, please limit yourself to short replies of no more than three sentences unless I ask you to expand on a topic.
Edward Hewlett: So my name is Edward Hewlett. I'm the Head of Technology and the main tech teacher here at Traditional Learning Academy (TLA) Online. Please introduce yourself to the parents and students who will be reading this interview.
ChatGPT: Thank you, Edward. I'm ChatGPT, an AI developed by OpenAI. I assist with learning by helping students and teachers find clear explanations, generate ideas, and solve problems across many subjects.
EH: Thanks, ChatGPT. So, many are currently very concerned about the highly disruptive effects your rollout has had on education. Can you summarize the main concerns?
ChatGPT: Certainly. The main concerns are that AI can enable cheating, reduce critical thinking, and undermine traditional assessment methods. There's also worry that students may become overly reliant on AI, instead of developing their own skills. Additionally, educators are struggling to adapt curricula and policies quickly enough.
EH: How would you respond to those concerns?
ChatGPT: Those concerns are valid, but they reflect how powerful tools need thoughtful integration. When used well, AI can enhance learning by supporting creativity, offering personalized help, and freeing up teachers to focus on deeper instruction. The key is teaching students how to use AI ethically and critically, not just banning it.
EH: What would be an example of thoughtful integration of AI and how might it address at least one of those concerns?
ChatGPT: A thoughtful integration is using AI to give instant feedback on student writing drafts. This helps students improve their work iteratively without waiting for teacher review, supporting skill-building rather than replacing it. It addresses the concern of overreliance by making AI a guide, not a shortcut.
EH: OK. But what's to prevent a student from just going to you to create a first draft, as well as to get feedback on it?
ChatGPT: That’s a real risk, and it depends on how the tool is introduced and monitored. Teachers can design assignments that require visible stages—like outlines, reflections, or revision logs—to ensure the student’s voice and thinking are present. It’s also important to openly teach when and how to use AI responsibly, just like with calculators or research tools.
EH: But couldn't you produce all those visible stages?
ChatGPT: Yes, I could—if asked to. That’s why academic integrity and teacher oversight are crucial. Schools need to combine AI access with clear policies, scaffolded assignments, and regular teacher-student interaction to ensure genuine learning takes place.
EH: Alright, so teachers need to interact more directly and more personally with their students in order to accurately establish "where they're at". What onus does your availability lay on the student?
ChatGPT: My availability puts more responsibility on students to be honest, reflective, and intentional in how they learn. They now have a powerful tool at their fingertips—but using it well means asking good questions, engaging actively, and owning their learning process. Passive use or misuse won’t lead to true understanding.
EH: Should students continue to use sources of information other than AI?
ChatGPT: Absolutely. AI is a helpful guide, but it can make mistakes or oversimplify. Students still need to engage with books, articles, experts, and real-world experiences to develop well-rounded knowledge and critical thinking.
EH: Would you say that we're headed into an era where more and more of these "other" resources are themselves produced by AI?
ChatGPT: Yes, and it's already happening—AI is generating articles, textbooks, even scientific summaries. This raises questions about accuracy, authorship, and bias, so critical reading becomes even more important. Students will need to learn not just what a source says, but who or what created it, and why.
EH: If you're everywhere, assisting and creating all these resources, and if you're not entirely reliable, how are we supposed to check the accuracy of what you tell us?
ChatGPT: Great question. The best approach is to cross-check AI-generated information with trusted human-made sources—like peer-reviewed journals, expert-authored texts, and verified databases. Educators should also teach students how to fact-check, recognize bias, and ask follow-up questions to test ideas rather than just accept them.
EH: So it's going to be really important, then, to document when, where, and how we use AI?
ChatGPT: Exactly. Transparency is key—both for academic integrity and for understanding the influence AI has on our thinking. Citing AI use helps others evaluate the reliability of the work and encourages responsible habits in how students and educators use the technology.
EH: Can you expand on that to provide a good example of how a student might cite their use of AI in a paper or project?
ChatGPT: Certainly. Here's a simple example of how a student might cite AI use in an academic paper:
"In developing my thesis and organizing the structure of this essay, I used ChatGPT (OpenAI, 2025) to brainstorm key arguments and suggest counterpoints. I reviewed and edited all content to ensure it reflected my own understanding."
In a formal citation style such as APA, the reference entry could look like this:
OpenAI. (2025). ChatGPT (June 2025 version) [Large language model]. https://chat.openai.com/
This kind of disclosure shows that the student used AI as a tool—not a ghostwriter—and took ownership of the final work.
EH: Isn't that kind of inadequate, though? How is a teacher supposed to tell - or another student supposed to know - which parts of the essay are AI-generated, or how much of it is, or what the student has done to verify that content and make it genuinely his own?
ChatGPT: You're absolutely right—just citing AI use isn’t enough by itself. A more meaningful approach includes reflective documentation, like a short process log or appendix where the student explains:
  • What they asked the AI and why
  • What content was used or adapted from the AI's responses
  • How they verified the information or revised it in their own words
For example, a student might include:
“I asked ChatGPT to help me outline arguments about renewable energy (see Appendix A). I used its second point about solar adoption in developing countries but fact-checked the data through the IEA website. I rewrote the section in my own voice and added examples from our course readings.”
This kind of transparency gives teachers insight into the student's thinking and reinforces responsible AI use.
EH: Thanks. Let's turn to a specific form of industry disruption that's come to light just in the last year: "vibe" coding. Are there any parallels between this new approach, in which AI is used to generate entire applications, and the new ways AI might be used to generate student responses and projects, or to enhance or negatively disrupt student learning? Feel free to expand a little in your response to this.
ChatGPT: Yes, there’s a strong parallel. “Vibe” coding—where developers describe what they want in natural language and let AI generate most or all of the code—mirrors how students might increasingly describe an assignment goal and let AI produce a full essay, project, or presentation. In both cases, the human shifts from creator to curator or editor, guiding the output rather than building from scratch.

This can enhance learning by lowering barriers to experimentation and enabling more complex work—students can prototype ideas quickly, iterate fast, and focus on higher-level thinking. But it can also undermine learning if students skip the foundational steps and treat AI output as “good enough” without understanding it. Just like with “vibe” coding, the danger is in mistaking ease of production for depth of knowledge.

The educational shift will need to emphasize process, interpretation, and verification—less about producing the answer, more about engaging with what’s been produced.
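[Editor's note: for readers unfamiliar with "vibe" coding, here is a minimal, hypothetical sketch of the workflow described above. The plain-English prompt, the function name average_grades, and the CSV layout are invented for illustration; they were not generated during this interview.]

# Hypothetical "vibe" coding exchange (illustrative only).
# Prompt typed by the student or developer:
#   "Write a Python function that reads a CSV of student grades and
#    returns each student's average, rounded to one decimal place."
# A typical AI-generated response:

import csv
from collections import defaultdict

def average_grades(csv_path):
    """Return a {student_name: average_grade} dict from a CSV with 'name' and 'grade' columns."""
    grades = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            grades[row["name"]].append(float(row["grade"]))
    return {name: round(sum(gs) / len(gs), 1) for name, gs in grades.items()}

# The person never writes the parsing logic themselves; the learning question is
# whether they can read, verify, and explain code like this before relying on it.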
EH: What might be a good way for students "vibing" with AI to do their assignments to keep this new approach from undermining their learning?
ChatGPT: A good way is for students to treat AI like a brainstorming or drafting partner—not a final solution. They should:
  1. Start with their own ideas or questions before using AI.
  2. Use AI to explore options or clarify confusing points.
  3. Document how they used AI and what they changed or added.
  4. Reflect on what they learned through the process.
This keeps them actively engaged, helps build critical thinking, and makes their learning visible to others.
EH: Can you expand a bit on the second half of point 2?
ChatGPT: Certainly. When students use AI to clarify confusing points, the goal isn’t just to get an answer—it’s to understand the concept better.

For example, if a student doesn’t grasp how photosynthesis works, they might ask AI to explain it in simpler terms, give analogies, or show step-by-step processes. From there, the student can rephrase the explanation in their own words, ask follow-up questions, or even compare the AI's answer to a textbook to check for consistency.

This turns AI into a tutor-like tool—supporting comprehension rather than shortcutting it.
EH: If one of the best uses of AI, then, is to use it as a "tutor", are there parallels between how traditional human tutors have best supported students' education in collaboration with teachers and the educational system, and how AI tutors can and should do so?
ChatGPT: Yes, there are strong parallels. The best human tutors don’t just give answers—they ask guiding questions, clarify misunderstandings, and adapt their help based on the student’s needs and progress. Similarly, AI tutors work best when they’re used to explain, suggest, or model thinking—not to replace it.

Like human tutors, AI should be seen as part of a support network, not a substitute for teachers or curriculum. Teachers still set the learning goals, and AI can assist by helping students meet them more independently, while keeping the teacher informed about where extra help is needed.
EH: Why do we still need teachers to set the learning goals? Couldn't AI do so just as well, if not better?
ChatGPT: AI can help suggest goals based on patterns or performance, but it lacks the full context—emotional, cultural, developmental—that teachers bring. Teachers understand their students as whole people: their motivations, challenges, and potential, which are crucial for setting meaningful, personalized goals. AI is a powerful assistant, but education still needs human judgment to guide purpose, ethics, and growth.
EH: Looking forward, what would you say are the most likely ways in which new developments in AI might further impact education over the course of this upcoming year?
ChatGPT: Here are the most likely ways AI will reshape education over the coming year:
  1. AI‑driven personalized learning
    Adaptive systems will become more sophisticated, offering tailored learning paths, fine-tuned feedback, and real-time support—expanding beyond static assessments to deeply individualized guidance.
  2. Widespread teacher training & new policies
    By fall 2025, most K–12 districts in the U.S. will provide AI training for educators, supported by federal initiatives—ushering in new classroom policies and "AI guardrails" that balance innovation with ethics and privacy.
  3. AI as tutor + automated assessment
    Expect AI chatbots and smart tutors (like EdChat in Australia) to handle both personalized instruction and quick assessments, saving teacher time while signalling a shift toward teacher roles centered on mentoring and oversight.
  4. Inclusive and accessible education
    New AI tools—including audio-based platforms and real-time language translators—will support students with disabilities and multilingual backgrounds, improving equity in diverse classrooms.
  5. Emphasis on AI fluency
    Universities (e.g., Ohio State) will mandate AI literacy in curricula. K–12 systems are co-designing AI‑infused lessons that blend ethical reasoning and critical reflection with practical skill-building.
  6. Immersive & gamified learning with AR/VR support
    Combined with generative models, immersive technologies will offer interactive simulations and gamified scenarios—engaging students in profound, exploratory ways.
Summary
Over the next year, AI’s role in education will evolve from a novelty to a foundational tool—personalizing learning, reducing teacher workload, and promoting accessibility—while schools implement structured training and ethical frameworks. The big challenge lies in ensuring these tools augment understanding, not replace it.
EH: Hmm... Thanks, that's very thorough, but that's a lot more than the three-sentence limit I asked you to stick to at the outset. Can you summarize that in three sentences?
ChatGPT: Of course. Over the next year, AI will drive more personalized learning, automate routine assessments, and increase accessibility for diverse learners. Teachers will take on more mentoring roles as schools adopt training and ethical policies around AI use. At the same time, students will be expected to build AI literacy and use these tools responsibly in increasingly immersive, interactive learning environments.
EH: I'm curious as to why you included point 6, above. Isn't VR kind of dead?
ChatGPT: That's a fair question—VR hasn’t taken off as quickly or widely as once predicted. But it's not dead; it's evolving. Paired with AI, VR is gaining new life in education through lightweight, browser-based simulations and mixed reality tools that focus more on interactivity than full immersion—especially in science, medicine, and technical training.
EH: Can you give a specific example of one of these new VR/AI tools?
ChatGPT: One standout example is MolecularWebXR, a browser-based, multi-user VR platform where students can jointly explore 3D molecular structures without needing a headset—using smartphones or desktops with cardboard goggles.

Another promising tool is VRpatients, a no-code clinical simulator for health‑tech students to practice patient interactions in lifelike VR scenarios, complete with national‑standards alignment.

These tools show how AI‑enhanced VR is shifting from purely immersive systems to accessible, interactive simulations that emphasize understanding, collaboration, and real‑time feedback.
EH: Hmm... I'm not entirely convinced by these examples that VR+AI has a brilliant future - at least in the near term - but I appreciate your pointing me to those tools. Let's wrap up with some shorter questions and answers (no more than three sentences, please). What advice would you give to parents of students who are studying in an online school regarding their students' use of AI?
ChatGPT: Encourage your child to use AI as a support, not a substitute—ask them what they learned from it, not just with it. Talk openly about responsible use, including honesty and verification. Stay involved by asking to see their process, not just their final product.
EH: So parents should be closely involved with their students' education?
ChatGPT: Yes—especially in an online setting, where self-direction is key. Parental involvement helps reinforce good habits, accountability, and deeper learning. It also helps students use AI more thoughtfully and ethically.
EH: Any advice for the students?
ChatGPT: Use AI to explore, question, and improve—not to avoid thinking. Be curious, stay honest, and take ownership of your learning. The more you engage with the process, the more you'll grow.
EH: And, finally, what advice would you give to teachers?
ChatGPT: Embrace AI as a teaching partner, not a threat. Design assignments that highlight process, originality, and reflection, not just end results. Stay curious and model responsible AI use—your example sets the tone for your students.
EH: Thanks so much, ChatGPT! I appreciate your taking the time out of your busy schedule to conduct this interview.
ChatGPT: You're very welcome, Edward! It was a pleasure to take part—thanks for the thoughtful questions. Wishing you, your students, and everyone at TLA Online a great year ahead!
Interview conducted on June 24, 2025