One of my former (and very long-term) freelance gigs, How Stuff Works, has replaced writers with ChatGPT-generated content and also laid off its excellent editorial staff.
It seems that going forward, when articles I wrote are updated by ChatGPT, my byline will still appear at the top, with a note at the bottom saying that AI was used. So it will look as if I wrote the article using AI.
To be clear: I did not write articles using ChatGPT.
#AI #LLM #ChatGPT
This seems really short-sighted. Why would I go to How Stuff Works when I can just ask the LLM myself?
Maybe there’s just no possible business model for them anymore with the advent of LLMs, but at least if they focused on the “actually written by humans!” angle there’d be some hook to draw people in.
It’s a combination of three things:
1- most people still google things;
2- the more content you have the more organic traffic you’re likely to attract from Google;
3- displaying ads on your website makes you money.
Websites full of LLM-generated content are just the natural continuation of MFAs (Made For AdSense): back in the 2006–2008 period there were lots of tools on sale that promised to automatically create websites for you and fill them with randomized content optimized for AdSense.
The thing is, the LLM doesn’t actually know anything, and it will confidently lie about what it doesn’t know.
So you go to How Stuff Works now and you get bullshit lies instead of real information. You’ll also get nonsense that looks like language at first glance but is gibberish pretending to be an article, because sometimes the language model changes topics midway through and doesn’t correct itself. It can’t correct itself; it doesn’t actually know what it’s saying.
See, these language models are pre-trained; that’s the P in ChatGPT. They just regurgitate the training data, put together in ways that sort of look like more of the same training data.
There are some hard coded filters and responses, but other than that, nope, just a spew of garbage out from the random garbage in.
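A toy way to see the “regurgitation” point: a tiny Markov-chain text generator. This is emphatically not how GPT works internally (GPT is a neural network, not word-pair counts), but it illustrates the same failure mode in miniature: the program emits text that statistically resembles its training data without knowing anything about what the words mean.

```python
import random
from collections import defaultdict

def train(text):
    """The entire 'model' is just counts of which word followed which."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=10, seed=0):
    """Emit words by sampling from whatever followed the previous word
    in training. No meaning, no facts, just plausible-looking sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the word never appeared mid-sentence in training
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model knows nothing the model just repeats the training data"
m = train(corpus)
print(generate(m, "the"))
```

The output always looks locally like the corpus, and it can happily wander between topics mid-sentence, because each step only looks at the previous word. Real LLMs have vastly more context than one word, but the objective is the same kind of “continue plausibly,” which is why they can drift or confabulate without noticing.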
And yet, all sorts of people think this shit is ready to take over writing duties for everyone, saving money and winning court cases.
Yeah, this is why I can’t really take anyone seriously when they say it’ll take over the world. It’s certainly cool, but it’s always going to be limited in usefulness.
Some areas I can see it being really useful are:
generating believable text - scams, placeholder text, and general structure
distilling existing information - especially if it can actually cite sources, but even then I’d take it with a grain of salt
That’s about it.