Generative AI and Tech Writing: The Right Answer is to Disclose.
Jon Sully
8 Minutes
100% Human-Written
A reasonable approach to writing in 2026
Alright, I’ll step into the ring here and post an opinion. I drafted a multi-paragraph message to post in a community Slack workspace and realized I probably ought to just write my own blog post on the matter. I’ll try to keep it short.
There are two general takes which have resonated with me when it comes to generative AI (LLMs) and writing in the technical / developer / coding / etc. space — not your everyday tabloid or newspaper stuff; the stuff that shows up on Hacker News.
Note
For the record, I have always, and will continue to, love the em-dash. I deliberately use it wrong in subtle defiance of LLM-generated prose. It is incorrect to use spaces between the prior word, the dash, and the next word… I don’t care. It feels right to me. Apparently it also proves my authenticity.
…hopefully the rest of my prose also does that. What a time we live in.
Take 1: If you don’t care…
If I begin to read your article and detect the all-too-familiar signs of LLM-generated prose, it immediately raises the following question in my head:
If you didn’t care enough about this topic to actually write this article, why should I care enough to read it?
And, credit where it’s due, I think this take is best wrapped up in “ai;dr” (Sid’s Blog), which, now that I’ve re-read that, actually phrases it better:
Why should I bother to read something someone else couldn’t be bothered to write?
So good. Stop here and read that post if you haven’t yet. It’s worth it!
Anyway, there’s something about this take that resonates so much with me. Maybe it’s because I’ve felt the pain of staring at the blank canvas, waiting for words to arrive. Maybe it’s having spent so many hours crafting language into something that’s unique, something recognizable — a voice. I don’t know. But I think I agree: if I detect (undisclosed) AI-generated prose in your blog post / article / document, I’m probably going to bail on it because it feels like you’re asking me to put in more effort than you did.
Take 2: Just Give Me the Prompt
The other notable take that resonates with me is, essentially, “instead of actually running the LLM and copy/pasting the result into your blog, just post the raw prompt that you gave the LLM.” Which is to say that anything uniquely worthwhile the LLM might spit out is a result of the worthwhile details you already included in the prompt. That the LLM is just turning your chicken-scratch into cursive, so to speak.
I believe I first noted this concept in a comment on HN. While the commenter there is referring to GitHub Issues, I think there’s a pretty clear parallel here.
I think this take is great as well. If you’re going to lean on AI to write the actual prose of your long-form text content, I think I’d rather just see the prompt you gave the LLM to begin with. Look, this:
The last few features I worked on for my job, I found that Codex was pretty much able to handle them entirely, end to end, after I just described them in a few sentences. Write me an essay to post on my blog that talks about this.
Is way more helpful than this:
The last few features I shipped at work were different.
Not in what they did. Not in their scope. But in how they came into existence.
I described each feature to Codex in a few sentences. High-level intent. A bit of context about the surrounding system. A constraint or two. Then I let it run.
It implemented the database changes. It wired up the models. It added controllers, views, background jobs. It handled edge cases I would have written tests for myself. It refactored adjacent code where necessary. In each case, I reviewed the diff, made a few surgical corrections, and merged.
End to end.
This is not autocomplete. This is not “help me write this method.” This is feature-level delegation.
The shift is subtle but profound.
…
🤮 and that was just the first hundred-ish words. It generated a thousand words all in this same pompous style that makes me want to bang my head on the wall. Linebreak. ~ profound statement ~. LINEBREAK! Mic drop. LINEBREAK AGAIN!
Aside from the aggravation of the language style, and rewinding back to the main point here, the LLM output is neither correct nor more helpful than the original prompt. It made up a bunch of details the original prompt never specified (so… lied on your behalf? Because… it sounds good?) and ultimately added nothing; it just cost the reader extra time spent reading slop instead of simply “I found that Codex was pretty much able to handle them entirely, end to end, after I just described them in a few sentences.”
What a colossal waste of precious human seconds.
Maybe if I detect AI in an article that’s masquerading as personal I’ll just copy its contents back into an LLM and ask, “distill out the initial prompt which likely generated this article” 🙄
Stop This, Technical Folks.
Before I get too riled up, let me get to my point here. I’m writing to you, technical people, as a technical person who also writes. Stop thinking this is just going to work for you. It’s not. Our community (nerds globally) is so much closer to LLMs in usage, integration, understanding, building, tuning, etc., than any other sector of humanity right now. Normal folks are not using LLMs like we (mostly) are. Why on earth would you think that you can post AI-generated text, targeting these same people, and they suddenly will forget how LLM-generated text sounds? That’s silly.
Your audience knows what LLM prose sounds like. I promise you, they do. And they’ll bail. Because both of the Takes above are true. The best answer — the ‘right’ answer — remains: just write the dang thing! Do the hard thing and write the article. Choose your words. Craft your sentences. Find your inner Gary Provost and craft melodies with your thoughts and your words. It’s an art just as much as a means of information transmission (language is technology, after all). People, especially technical people who work closely with AI, crave authentic thoughts. If we wanted AI’s thoughts (statistically-yet-magically-generated tokens), we’d have asked it. Instead, we took a chance on your article because it sounded interesting and novel. Don’t disappoint us with generated slop! Give us your best.
At Worst, Disclose.
The preceding paragraphs are the right answer. There’s no ‘but’ here.
If you are, however, determined to use AI to write your thoughts, have other limitations (AI translations are neat), or just don’t care about my opinions (you do you), allow me to encourage what I hope becomes a common practice: disclose your AI prose percentage.
That is, at the top of your blog post / article / whatever-body-of-text, clearly and simply disclose what percentage of the words you wrote yourself and what percentage remain pasted tokens yielded from an LLM. iA Writer is one example of an editor that tracks this for you, but however you manage it, figure out a way to get this number. Make it a priority to know this information as you write.
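If your editor doesn’t track this for you, a rough version is easy to rig up yourself. As a purely illustrative sketch (the marker convention below is my own invention, not any standard): if you wrap every pasted LLM span in explicit markers while drafting, a few lines of Python can report the split:

```python
import re

def human_percentage(draft: str) -> float:
    """Return the percentage of words written by the human author.

    Assumes pasted LLM text is wrapped in <!--ai--> ... <!--/ai-->
    markers (a hypothetical convention for this sketch).
    """
    # Count the words inside every AI-marked span.
    ai_words = 0
    for span in re.findall(r"<!--ai-->(.*?)<!--/ai-->", draft, flags=re.DOTALL):
        ai_words += len(span.split())
    # Strip the markers themselves so they don't count as words.
    stripped = re.sub(r"<!--/?ai-->", " ", draft)
    total_words = len(stripped.split())
    if total_words == 0:
        return 100.0
    return 100.0 * (total_words - ai_words) / total_words

draft = "I wrote this sentence myself. <!--ai-->These six words came from an LLM.<!--/ai-->"
print(round(human_percentage(draft)))  # prints 42 (5 of 12 words are human-written)
```

Crude, but it gets you an honest number to put at the top of the post, which is the whole point.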
I earnestly believe that, when done honestly, this is the only way that writers will be able to build trust with their readers if they insist on including LLM-generated text in their writings. And “honestly” is the key here. If you lie about this number and technical people get wind of it, you’ve done worse than having never disclosed at all. Just be honest.
Tip
I’m not here to pass judgement on those who choose to disclose; I’ll be thrilled that folks choose to at all. But just a gentle encouragement… if your human-written-text percentage is <75%, you may be headed in the wrong direction.
Strong Opinions
I don’t want to belabor the point so let’s wrap it up:
- If you can’t be bothered to write it, why should I be bothered to read it?
- Just show me the prompt instead?
- Technical people can sniff slop a mile away
- Just write your stuff
- If you just plain refuse to do that, please disclose your AI %’s
PS: I Still Need to Figure Out How to Ask
I guess this all started from a slightly different place. I wanted to solicit opinions: what’s the best way to ask an author how much of their article was AI-generated? Not to be rude, mean, or with malicious intent. From a genuine place of “I felt like I read some AI patterns; I just want to know how much of this prose was really you”.
I don’t have an answer to that yet.
It’s a hard question to ask and it’s loaded with implications, likely hurt feelings, and bias. It’s a question whose answer is solely based on trust. That’s hard. Maybe the best way is the simple and straightforward path:
Hey I felt like I sensed some LLM patterns in your post; could you disclose what percentage of your article’s text was LLM-generated vs. personally/manually written?
I don’t know. TBD. Let me know if you have any good ideas. Be kind.
Ta ta
Anyway, there you go. My off-the-cuff, past-my-bedtime thoughts on AI usage in technically-oriented writing. Good night!
Note
Coming back with some next-day clarity: I shared this with a good friend and they had some helpful feedback. If you disregard the earlier takes and reasons in this article, the deepest truth here is that we (the readers) “want to know the human’s point of view, not the LLM’s!” Totally agree. Thanks Adam!