- cross-posted to:
- [email protected]
Robin Williams’ daughter Zelda says AI recreations of her dad are ‘personally disturbing’: ‘The worst bits of everything this industry is’
Oh noes, somebody using AI wrong and getting bad results. What else is new? ChatGPT works on tokens (i.e. words or word segments converted to integers), not on characters. Any character-based question will naturally be problematic, since the AI literally doesn’t see the characters you are asking it about. Same with digits and math. The surprising part here isn’t that ChatGPT gets this wrong, that bit is obvious, but how many questions in that area it manages to answer correctly anyway.
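To make the point concrete, here is a toy greedy tokenizer. The vocabulary is entirely hypothetical (real systems use a learned BPE vocabulary of tens of thousands of entries), but the effect is the same: the model receives integer IDs, so the individual letters are invisible to it.

```python
# Hypothetical vocabulary -- NOT OpenAI's real one. Longer pieces are
# preferred, so common words collapse into one or two IDs.
VOCAB = {"straw": 101, "berry": 102, "s": 1, "t": 2, "r": 3, "a": 4,
         "w": 5, "b": 6, "e": 7, "y": 8}

def tokenize(text):
    """Greedy longest-match tokenization into integer IDs."""
    ids = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):  # try longest match first
            piece = text[i:i + length]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                i += length
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

print(tokenize("strawberry"))  # [101, 102]
```

Asking a model trained on this encoding "how many r's are in strawberry?" means asking it about the internals of IDs 101 and 102, which it never sees as letters.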
Whenever I read “just” I can’t help but think of Homer Simpson’s: It Only Transports Matter?. Seriously, there is nothing “just” about this. What ChatGPT is capable of is utterly mind boggling. Humans have worked on teaching computers to understand natural language ever since the very first computers, 80 or so years ago, without much success. Even a simple automatic spell checker that actually worked was elusive. ChatGPT is so f’n good at natural language that people don’t even realize how hard a problem that is; they just accept that it works and don’t think about it, because it’s basically 100% correct at understanding language.
ChatGPT is a text auto-complete engine. The developers didn’t set out to build a machine that can think, reason, replicate the brain or even build a chatbot. They built one that tells you what word comes next. And then they threw lots of data at it. Everything ChatGPT is capable of is basically an accident, not design. As it turns out, to predict the next word correctly you have to have a very rich understanding of the world, and GPT figures that out all by itself just by going through lots and lots of text.
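The “predict the next word” objective can be sketched in a few lines. This is a bigram counter, not a neural network, but the training target has the same shape: given the context, output the most likely next token. The tiny corpus is made up for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus -- stand-in for "lots and lots of text"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # 'cat'
```

GPT replaces the counting table with a transformer conditioned on the whole preceding context, but it is still scored on exactly this task: guess the next token.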
That’s the part that makes modern AI interesting and scary: We don’t really know why any of this works. We just keep throwing data at the AI and see what sticks. And for the last 10 years, a lot of it stuck. Find a problem space that you have lots of data for, throw it at AI and get interesting results. No human sat around and taught DALL-E how to draw and no human taught ChatGPT how to write English, it’s all learned from the data. Worse yet, the lesson learned over the last decade is essentially that human expertise is largely worthless in teaching AIs, you get much better results by simply throwing lots of data at them.
That is utterly meaningless. OpenAI is constantly tweaking that thing for business reasons, including downgrading it to consume fewer resources and censoring it so it doesn’t produce anything nasty (Meta didn’t get the memo). The same happened with Bing Chat, and the same thing just happened with DALL-E3, which until a few days ago could generate celebrity faces and now blocks all requests in that direction.
When you compare GPT-3.5 with the new/paid GPT-4, i.e. a newly trained version with more data, it ends up being far superior to the previous one. Same with DALLE2 vs DALLE3.
Also note that modern AIs don’t learn. They are trained on a dataset once and that’s it. The models are completely static after that. Nothing of what you type into them will be remembered by them. The illusion of a short-term memory comes from the whole conversation history getting fed into the model on each turn. The training step is completely separate from chatting with the model.
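The history-resending trick can be sketched like this. The role/content message format is a simplification of the style chat APIs use; the names here (`chat_turn`, `echo`) are illustrative, not a real API.

```python
history = []

def chat_turn(user_message, model):
    """One round of chat: append the message, re-send EVERYTHING."""
    history.append({"role": "user", "content": user_message})
    # The model is stateless, so the entire history is serialized
    # into one prompt every single turn.
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = model(prompt)
    history.append({"role": "assistant", "content": reply})
    return reply

# Stand-in "model": just reports how much context it was handed
echo = lambda prompt: f"(saw {len(prompt.splitlines())} lines of context)"

print(chat_turn("My name is Alice.", echo))  # (saw 1 lines of context)
print(chat_turn("What is my name?", echo))   # (saw 3 lines of context)
```

The second turn “remembers” the first only because the client re-sent it; the model itself retained nothing between calls. This is also why long conversations eventually hit a context-length limit.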