AI Apology Fails in New Zealand Court: Why Robots Can't Fake True Remorse


We have finally reached the bottom of the barrel, folks. I thought we hit rock bottom years ago with the monkey JPEGs, but we dug deeper. We found a new low in the form of a **New Zealand court case** that is shaking the foundations of **legal ethics**.
Here is the news that is currently driving search traffic: A lawyer in New Zealand got caught using **Artificial Intelligence** to write an apology letter for a client. That’s right. A man broke the law, stood before a judge, and couldn’t even be bothered to generate his own lies. He had to ask a **ChatGPT-style bot** to do it for him. The judge, possessing actual cognitive function, noticed the writing style was a little too fancy. It didn't sound like a human; it sounded like an algorithm trying to simulate sadness.
This is where we are now. We are so hollow, so empty, and so lazy that we have outsourced our conscience to a chatbot.
Let’s be honest about these **court apologies**. They are usually fake anyway. You aren’t sorry you did the crime; you are sorry you got caught. But for hundreds of years, we at least had the decency to put on a show. It was human theater. It required effort. Now? We type “Make me sound sad” into a text box, and seconds later, we get three paragraphs of **AI-generated remorse**.
This scandal is the perfect symbol for our times. The Right loves to scream about personal responsibility and law and order, yet their corporate tech heroes are building tools designed to help people avoid accountability. And the Left? They love to talk about empathy and restorative justice. Well, you can’t have healing when the apology comes from a server farm. There is no empathy in code. It is just math pretending to have feelings.
The judge in New Zealand said this raises a big question about "true remorse." That is the understatement of the century. It kills the idea of remorse. If I send you a birthday card that a robot wrote and mailed, did I remember your birthday? No. A computer had a scheduled task.
Think about the implications for **AI in relationships** and politics. Politicians already use interns to draft their talking points; soon they will just have robots arguing with robots. Your spouse will use AI to write wedding vows. We are stripping the messy, ugly, hard parts out of being alive. We want the "not guilty" verdict without the guilt.
So here is my advice, which I doubt you'll take because it requires effort: Next time you mess up, try actually feeling bad. Sit with it. Then, use your own mouth and your own brain to say you are sorry. It might be clumsy, but at least you won't be a meat puppet reading a script written by a ghost in the machine.
***
### References & Fact-Check

* **Original Event**: A New Zealand lawyer admitted to using generative AI to draft a sworn affidavit for a client, prompting judicial concern over the authenticity of the contrition.
* **Primary Source**: [Question of True Remorse When A.I. Helps Write Your Court Apology](https://www.nytimes.com/2026/02/17/world/asia/new-zealand-court-ai-apology.html) – *The New York Times*
* **Key Topics**: **Artificial Intelligence in Law**, **AI Ethics**, **New Zealand Legal System**, **Restorative Justice**
This story is an interpreted work of social commentary based on real events.