If you wrote it with an LLM, it wasn’t worth writing
I’m seeing a lot of excitement around LLMs these days. I’m personally less than excited; I think they’re mostly poised to generate vast quantities of worthless noise, piling further on top of the already vast quantities of worthless noise which fill the Internet (I do think a personal, local LLM could be a useful way to query my own data, if AI startup dicks would stop buying up all the GPUs). I particularly think they’re going to make blogs a lot worse by removing the one very low bar to publishing a useless post: having to write the damn thing.
My basic thesis is: if you can get ChatGPT to write your blog post by describing in a few sentences what you want to “say”, well, why not just post the prompt? If all you have is “a union for software developers is a bad idea”, just post that – because all the LLM is going to do is stir up 1,000 bland words from its training set that say the same thing.
When your midterm exam in American History instructs you to write 1500 words on the Reconstruction era in the South, is it fair to say that actually the professor is the one who wrote all 30 essays, since she’s the one who made the prompt?
Maybe you’re more advanced: you write up an outline of your desired post and feed it into the LLM so it covers your 5 core points. The problem is that it’s still not you writing. It’s just the Internet zeitgeist filling in the blanks for you, your Anti-Skub article being populated with the summation of a decade’s discussion on the /r/antiskub subreddit. Just post the damn outline and be intellectually honest!
If you don’t care enough about the contents to actually write them yourself, if you’re happy to abdicate responsibility for the tedious reality of writing and skip straight to the Hacker News upvotes, then I don’t think it was worth writing in the first place – because it’s clearly already been said well enough by the rest of the world that a Large Language Model can fill in the blanks for you.