• bamboo@lemm.ee · 20 days ago

    Is the distrust in the quality of the output? If so, I think the main thing Apple has going for it is that they use many fine-tuned models for narrow, context-constrained tasks. ChatGPT can be arbitrarily prompted and is expected to give good output for everything, sometimes long output. Being able to do that is… hard. Most of Apple’s applications, however, are much, much narrower. Take the writing assistant, which rephrases at most a few paragraphs: the output is relatively short, and the model has to do exactly one task. Or take Siri: the model has to take a command and then select one or more intents to call. It’s likely that choosing which intents to call and what arguments to provide are handled by separate models, each optimized for its own case (rough sketch at the end of this comment). Errors can still occur, but there are far fewer chances for them to creep in.

    I think part of Apple’s motivation for partnering with OpenAI specifically for certain complex Siri questions is that this is an area they aren’t comfortable putting Apple branding on, due to output quality concerns; by offering it through a partner, they can pass the blame onto that partner. Someday, if LLMs are better understood and their output can be controlled and verified for open-ended questions, Apple might dump OpenAI and advertise their in-house replacement as accurate and reliable in a way ChatGPT isn’t.
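    To make the “separate models per task” idea concrete, here’s a rough sketch of what that two-stage Siri-style pipeline might look like. Everything in it is hypothetical for illustration (the types, and the keyword/regex stubs standing in for the fine-tuned models); it is not Apple’s actual App Intents API:

    ```swift
    import Foundation

    // A narrow task the assistant can perform, analogous to an intent.
    // (Hypothetical type, not an Apple API.)
    enum Intent: String, CaseIterable {
        case setTimer, sendMessage, playMusic
    }

    // Stage 1: a small classifier whose only job is to pick intents.
    // In a real system this would be a fine-tuned model; here it is
    // stubbed with keyword matching so the sketch runs.
    struct IntentSelector {
        func selectIntents(for command: String) -> [Intent] {
            let lowered = command.lowercased()
            var matches: [Intent] = []
            if lowered.contains("timer") { matches.append(.setTimer) }
            if lowered.contains("message") || lowered.contains("text") {
                matches.append(.sendMessage)
            }
            if lowered.contains("play") { matches.append(.playMusic) }
            return matches
        }
    }

    // Stage 2: one extractor per intent, each responsible only for
    // pulling out that intent's arguments. Each could be its own
    // small model optimized for exactly that case.
    struct ArgumentExtractor {
        let intent: Intent

        func extractArguments(from command: String) -> [String: String] {
            switch intent {
            case .setTimer:
                // Stand-in for a duration-tagging model: a simple regex.
                if let match = command.range(of: #"\d+\s*(minutes?|seconds?|hours?)"#,
                                             options: .regularExpression) {
                    return ["duration": String(command[match])]
                }
                return [:]
            case .sendMessage:
                return ["recipient": "<resolved by a contact-matching model>"]
            case .playMusic:
                return ["query": "<resolved by a media-search model>"]
            }
        }
    }

    // Pipeline: classify first, then hand off to the matching extractor.
    let command = "Set a timer for 10 minutes"
    let selector = IntentSelector()
    for intent in selector.selectIntents(for: command) {
        let args = ArgumentExtractor(intent: intent).extractArguments(from: command)
        print("\(intent.rawValue): \(args)")
    }
    ```

    The point of the split is that each stage has exactly one narrow job, so each model can stay small, and a bad intent pick versus a bad argument fill are separate, independently testable failure modes.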

    • LostWanderer@lemmynsfw.com · 19 days ago

      I think it’s due to a combination of the tech still being relatively young (it has made leaps and bounds) and its thoughtless hallucinations that pass as valid answers. If the training data is poisoned by disinformation or misinformation, any output is potentially useless at best and harmful at worst. The quality of LLM results depends entirely on the people in charge of creating them and on the sources of their data. Having written that out, I realize it’s the people in control of LLM development I mistrust, because it’s so easy to implement this tech incorrectly, and for the people in charge to be completely irresponsible about it. And since the techbros behind this latest push to turn LLMs into AI are so gung-ho about it, the guard rails have been pushed aside. That makes it all the easier for my fears to become manifest.

      Once again, what Apple is likely trying to do with their LLM implementation sounds all well and good. However, I can’t help but wonder how terribly wrong it could all go.