They warn us that future artificial intelligence will wipe out humanity. This may be a lie with ulterior motives.
I’ve been concerned about AI as an existential risk (x-risk) for years, since before big tech had a word to say on the matter. It’s possible both for it to be a genuine threat and for large companies to be trying to take advantage of that.
Those concerns mostly apply to artificial general intelligence, or “AGI”. What’s being developed now is another can of worms entirely: a bunch of generative models. They’re far from intelligent; the concerns associated with them are 1) energy use and 2) human misuse, not that they’re going to go rogue.
I’m well aware, but we don’t get to build an AGI first and figure out safety afterwards. And we can’t even keep these systems on target: see any number of “funny” errors people have posted, up to the paper whose name I can’t recall offhand that collected examples of even simpler systems being misaligned.
Interesting video. At the core it can be summed up as:
I think that’s a good idea in general, not just because of AI
Thanks for the TL;DR!