At dinner tables across Great Neck, a new topic has quietly joined the usual conversations about academic success and our children’s future: Artificial Intelligence. Parents are debating tools like ChatGPT with a mix of curiosity and anxiety.
“Is this just a new way to cheat?” “Will it make our children’s hard-earned skills irrelevant?”
These concerns are understandable. But if fear alone guides our response, we risk preparing our children for a world that no longer exists.
We have been here before. Decades ago, when calculators entered math classrooms, many worried that students would forget basic arithmetic. Instead, calculators freed students to focus on higher-level problem solving. The calculator did not replace thinking; it raised the ceiling of what students could do.
AI is the calculator of our time – far more powerful, and far more complex. Unlike calculators, AI can be confident but wrong. It can reflect bias. It can generate polished answers that are incomplete or misleading. That is precisely why banning it outright misses the point. What matters is not simply access, but literacy: the ability to use it thoughtfully, responsibly, and critically.
In my years leading engineering teams in the tech industry, I have seen this shift firsthand. Success is no longer defined simply by memorizing information or writing basic code from scratch. AI can already do much of that. What distinguishes top engineers today is their ability to direct AI toward meaningful goals, evaluate its output critically, and integrate it into complex systems.
I see the same divide in my classroom at Columbia University. Some students try to use AI to “ghostwrite” their assignments, and they quickly fall behind. Others use AI as a partner – asking it to propose counter-arguments or summarize difficult readings – then they verify the output and make it their own.
The gap is widening. Students who treat AI as a partner are not just finishing assignments faster; they are producing work of a complexity that students working alone simply cannot match. This is the new baseline for excellence.
For Great Neck parents, the useful question is not “How do we ban this?” but “How do we teach our children to use it well?” To thrive in college and the future workforce, students need to develop three essential AI-era skills:
- Asking good questions: We must teach students to frame prompts that guide AI toward insight, not just information. This is an exercise in logic: students must learn to articulate clear goals, provide precise context, and demand reasoning rather than just a final answer. In the age of AI, the ability to formulate a precise, complex query is a better indicator of understanding than rote memorization.
- Critical verification: Maintaining healthy skepticism. Students must learn to confirm facts, compare sources, and test claims – especially when the AI sounds confident.
- Ethical judgment: Understanding when AI use is appropriate and when independent work is required. The goal is to use AI to strengthen learning, not to bypass it.
Great Neck has long been recognized for its commitment to educational excellence. Preserving that legacy means helping our students become not just consumers of new technology, but thoughtful masters of it.
If we want our students to lead in the years ahead, we should not only teach them to use powerful tools – we should teach them how to think with them, question them, and apply them wisely. That is what AI literacy is really about.
Spencer Luo is a Great Neck resident, a tech lead at Google, and an adjunct professor at Columbia University, where he teaches AI courses at both the undergraduate and graduate levels. He is passionate about promoting AI literacy in education.
The views expressed in this article are the author’s own and do not necessarily reflect the official policy or position of Google or Columbia University.