From the article:
Kate Conroy
I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded.
That’s a neat way to have students show their work. Sounds like hell to validate though.
Sounds like a great application for AI! Oh, wait…
It’s not that hard. Just scroll through the editing history. You can even look at timestamps to see if the student actually spent any time thinking and editing or just re-typed a ChatGPT result word for word all in one go. Creating a plausible fake editing history isn’t easy.
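A rough version of that timestamp check could even be scripted. This is a minimal sketch, assuming you've already pulled revision timestamps for a document (e.g. the `modifiedTime` of each revision from the Google Drive API's `revisions.list`); the function name `looks_pasted` and the thresholds are made up for illustration, not anything from this thread:

```python
from datetime import datetime, timedelta

def looks_pasted(revision_times, min_revisions=3, min_span=timedelta(minutes=20)):
    """Flag a document whose revision history suggests a single paste.

    revision_times: list of datetime objects, one per saved revision.
    The thresholds (min_revisions, min_span) are arbitrary assumptions;
    tune them to your own students' editing habits.
    """
    if len(revision_times) < min_revisions:
        return True  # too few revisions to show any drafting
    span = max(revision_times) - min(revision_times)
    return span < min_span  # everything written in one short burst

# An essay that "appears all at once" in the history:
one_shot = [datetime(2024, 5, 1, 20, 0), datetime(2024, 5, 1, 20, 2)]
# An essay drafted and revised over two evenings:
drafted = [datetime(2024, 5, 1, 20, 0) + timedelta(hours=h)
           for h in (0, 1, 24, 25, 26)]

print(looks_pasted(one_shot))  # True
print(looks_pasted(drafted))   # False
```

Of course a heuristic like this only pre-sorts the pile; as the next comment notes, actually judging an edit history still takes a human.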
I spent a significant amount of time at my last job looking through editing history in Google products. It’s slow and annoying. It’s not easy.
Kid me would go to more effort to make GPT write a bunch of “in progress” versions than to just write the damn essay.
They’re not wrong, but I heard similar things when search engines first appeared. To be fair, that wasn’t wrong either.
I feel like teachers are going to have to set aside time for essay writing in class instead of as homework.
I have always been opposed to the concept of “homework”, so I would support this.
True, but at some point they’ll need to use a computer to write the essay. At that point, it’s pretty easy to slip over to an AI prompt.
Teachers can just watch the handful of children with disabilities who require a computer.
I think this works. I was in fact one of them, although honestly, at a certain point, they trusted me and weren’t even watching all that hard.
I meant more that, at some point, kids do need to practice long-form writing on a computer. At that point, you have to watch everybody.
In college (25+ years ago) we were warned that we couldn’t trust Wikipedia and shouldn’t use it. And, yes, it was true back then that you had to be careful with what you found on Wikipedia, but it was still an incredible resource for finding resources.
My 8 year old came home this year saying they were using AI, and I used it as an opportunity to teach her how to properly use an LLM, and how to be very suspicious of what it tells her.
She will need the skills to efficiently use an LLM, but I think it’s going to be on me to teach her that because the schools aren’t prepared.
Wikipedia didn’t start out hallucinating. Also unlike LLMs, Wikipedia isn’t being marketed as being capable of doing things it can’t do.
It’s not that good of a comparison.
The first thing you learn when using Wikipedia is to check the sources and Ctrl+F.