Spencer Luo column: Your kid is using ChatGPT. Now what?

Spencer Luo
Photo provided by Spencer Luo

After my last column ran, I heard from a few Columbia University students. One conversation in particular stuck with me. “The students who use AI as a shortcut get caught fast. The ones who use it well? Their professors can’t even tell.”

I keep coming back to that line, because those students aren’t so different from the kids on Long Island right now. And parents here are asking me the same question from the other side: “My kid is using ChatGPT for homework. Should I be worried?”

My honest answer: it depends on how they’re using it.

At work, I watch engineers use AI constantly. The good ones treat it like a sharp but unreliable coworker. They’ll ask it to poke holes in their own design, then decide for themselves what holds up. The ones who worry me aren’t the junior engineers; they’re the experienced ones who stop questioning the output because it sounds right. Ten years of expertise, and they’re deferring to a tool that can’t tell you why it made a choice.

Kids do the same thing. The issue isn’t that your child opened an AI chatbot. It’s whether they’re thinking or just copying. There’s a big gap between typing “write me a 500-word essay on the Civil War” and asking “what were the economic arguments against abolition and why did they fail?” One is outsourcing. The other is actually thinking through the problem.

I’ve seen a college student use AI to stress-test whether her thesis held up against the strongest counterargument. She rewrote half her paper after that. That’s not cheating. That’s what good researchers do.

So how can you tell which camp your kid falls into? A few things I’d watch for.

  • Can they explain it without the screen? Ask your kid to walk you through their homework. No laptop, no notes. If they can explain the reasoning, they probably used AI the right way. If they stumble, the machine did the work for them. Simplest test there is.
  • Are they asking better questions? Look at what they’re typing into the chat, not just what comes out. Vague prompts get vague answers. But a kid who’s asking specific, layered questions is actually exercising their brain. The input matters more than the output.
  • Do they get frustrated anymore? This one’s counterintuitive. In my classes at Columbia, the students who are actually learning still hit walls. They get stuck, they complain, they push through. The ones coasting on AI have suspiciously smooth sailing. If your kid never seems to struggle with homework anymore, that’s not a sign they got smarter over the summer.

Here’s what I keep coming back to. I teach students who will be building the next generation of these tools. And the skill that separates the best from the rest isn’t technical. It’s knowing when the machine is wrong and having the confidence to say so.

Parents on Long Island have always pushed their kids to think for themselves. That part hasn’t changed. What’s new is that now there’s a very convincing machine offering to do the thinking for them.

I’m still figuring that balance out myself. Most of us in the industry are.

Spencer Luo is a Great Neck resident, a tech lead at Google, and an adjunct professor at Columbia University, where he teaches AI courses at both the undergraduate and graduate levels. He writes about AI and how it fits into everyday life.

The views expressed in this article are the author’s own and do not necessarily reflect the official policy or position of Google or Columbia University.