Nemeski@lemm.ee to Technology@lemmy.world · English · 5 months ago
Tim Cook is “not 100 percent” sure Apple can stop AI hallucinations (www.theverge.com)
QuantumSoul@lemmy.dbzer0.com · 5 months ago
They do have internal concepts, though: https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
Probably not the concepts a human has, but some thought process is needed for better text generation, and it therefore emerges in their neural net.
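A rough sketch of the linear-probe idea behind that post: if a network's hidden states linearly encode some world feature, a plain linear classifier trained on those states can read it back out. This toy uses synthetic activations standing in for a real model's hidden layer, not the post's actual chess setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, hidden_dim = 2000, 256

# Pretend world feature, e.g. "is this board square occupied" (0 or 1).
feature = rng.integers(0, 2, size=n_samples)

# Synthetic hidden states: the feature is embedded along one random
# direction and buried in noise -- the linear structure a probe looks for.
direction = rng.normal(size=hidden_dim)
activations = rng.normal(size=(n_samples, hidden_dim)) \
    + np.outer(feature * 2.0 - 1.0, direction)

# Train the probe on half the data, evaluate on the held-out half.
split = n_samples // 2
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:split], feature[:split])
accuracy = probe.score(activations[split:], feature[split:])
print(f"probe accuracy: {accuracy:.3f}")  # near 1.0 => linearly decodable
```

High probe accuracy is the evidence the post leans on: the "concept" is sitting in the activations in a linearly readable form, even though the model was only ever trained to predict the next token.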
Natanael@slrpnk.net · 5 months ago
The problem is that they hold many different internal concepts with conflicting information, and have no mechanism for determining truthfulness or accuracy or for pruning bad information, so they sample from all of it at random when answering.
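A toy illustration of that sampling problem (made-up logits, not any specific model): if the training data contained conflicting claims, the next-token distribution puts mass on all of them, and standard softmax sampling picks between them with no truth check before a token is emitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logits for completions of "The capital of Australia is ...".
# Conflicting training data => plausible-looking scores for wrong answers.
tokens = ["Canberra", "Sydney", "Melbourne"]
logits = np.array([2.0, 1.6, 0.3])

def sample(logits, temperature=1.0):
    """Standard temperature-scaled softmax sampling, as used by most LLM decoders."""
    z = logits / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample(logits)
for token, p in zip(tokens, probs):
    print(f"{token}: {p:.2f}")
print("sampled:", tokens[idx])  # a wrong answer comes out almost half the time
```

Nothing in the decoding loop distinguishes the correct continuation from the confidently wrong ones; it only sees relative probabilities, which is exactly the "no mechanism for truthfulness" point.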