I love fake product reviews. You can see the marketing speak just dripping off of them. I swear people in marketing can’t control themselves when it comes to speaking like an ad.
That is exactly the type of content LLMs were designed to excel at generating.
Hm. It’s also exactly the kind of disingenuousness that humans have spent a couple million years evolving to try to detect, though.
I wonder if the LLMs are going to win this. Maybe more likely: When everyone realizes that the entire Internet is being flooded with even more bullshit, we’ll just stop trusting it, and the LLMs will more or less have put themselves out of a job.
It would be funny if the propensity for humans to lie to each other meant that we were basically already inoculated from this terrifying new category of machines that we’ve designed to lie to us too.
I agree, but by now there’s probably no reason to make people write those kinds of things. It’s likely that no human oversight is needed at all. Astroturfing can now be almost completely automated.
One good thing about perfect bullshit generators is that they might help us abolish bullshit things like cover letters and marketing copy. But that’s a very small gain considering the massive loss of trust in the web and the glitchy, spammy, scammy experience it’s becoming.
On the contrary, I believe our inherent ability to trust each other is one of the main pillars of civilization, and undisclosed use of LLMs heavily undermines it.
Maybe I’m just quibbling over semantics, but that kinda is the good reason to make humans write those kinds of things. Astroturfing is bad, so needing to pay an entire human to do it imposes a cost that limits its spread and application. I guess that’s also what you’re saying.
I’ve been passively wondering how long it will be until I have to start adding before:2023 to get remotely useful web search results on any topic. Don’t know what to try yet if I need to look up something from after that.
Oh yeah, definitely. I just meant that as an ironic silver lining, the damage would probably be worse if there wasn’t already some level of dishonesty and deception in society, because then we’d be too pure to have any defences against LLMs.
Yep. And it’s gonna get so much worse once LLMs are mainstream. Perhaps they have been for some time. After all, the Dead Internet theory precedes the onslaught of ChatGPT.
Yes, that’s very sad. And what would we get in return for losing the Web to the bots? Nothing but automatic expensive BS at scale.
Sorry for the misunderstanding. You’re right. Distrust is also essential to critical thinking. Maybe once everyone learns that you absolutely shouldn’t trust anything on the internet (and especially not anything produced by ChatGPT), it will be easier to combat the spread of fake news and nefarious propaganda. But I doubt it. Even smart people seem to fall into this trap, lured by the plausibility of the output, as Mozilla showed recently.