- cross-posted to:
- [email protected]
tacking on a bunch of LLMs sure is a way to “make the web more human”.
please stop. just fucking stop shoving this shit into everything.
Too late… everybody is doing this shizzle. I can’t take it anymore.
I didn’t want to pay for their search engine before, and this garbage sure as hell isn’t going to change my mind.
Every company is still doing this even though studies have shown it puts customers off.
Barf.
I posted some of my experience with Kagi’s LLM features a few months ago here: https://literature.cafe/comment/6674957. TL;DR: the summarizer and document discussion is fantastic, because it does not hallucinate. The search integration is as good as anyone else’s, but still nothing to write home about.
The Kagi assistant isn’t new, by the way; I’ve been using it for almost a year now. It’s now out of beta and has an improved UI, but the core functionality seems mostly the same.
As far as actual search goes, I don’t find it especially useful. It’s better than Bing Chat or whatever they call it now because it hallucinates less, but the core concept still needs work. It basically takes a few search results and feeds them into the LLM for a summary. That’s not useless, but it’s certainly not a game-changer. I typically want to check its references anyway, so it doesn’t really save me time in practice.
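The flow described above (grab a few search results, feed them into the LLM, get a summary back) can be sketched roughly like this. Everything here is a stand-in for illustration: `call_llm` is a placeholder, not any real model API, and the result format is made up.

```python
# Rough sketch of an LLM search-summary pipeline, as described above.
# `call_llm` is a stub; a real system would send the prompt to a model API.

def call_llm(prompt: str) -> str:
    # Placeholder for an actual language-model call.
    return "summary of: " + prompt[:60]

def summarize_search(query: str, results: list[dict], top_k: int = 3) -> str:
    """Feed the top-k search results into the LLM for a summary."""
    snippets = [f"[{i + 1}] {r['title']}: {r['snippet']}"
                for i, r in enumerate(results[:top_k])]
    prompt = (f"Summarize these results for the query '{query}', "
              "citing sources by number:\n" + "\n".join(snippets))
    return call_llm(prompt)

results = [
    {"title": "Doc A", "snippet": "alpha"},
    {"title": "Doc B", "snippet": "beta"},
    {"title": "Doc C", "snippet": "gamma"},
    {"title": "Doc D", "snippet": "delta"},
]
print(summarize_search("example", results))
```

Note the weakness this structure bakes in: the summary can only be as good as the handful of results that happen to land in the prompt, which is why checking the references yourself remains necessary.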
Kagi’s search is primarily not LLM-based and I still find the results and features to be worth the price, after being increasingly frustrated with Google’s decay in recent years. I subscribed to the “Ultimate” Kagi plan specifically because I wanted access to all the premium language models, since subscribing to either ChatGPT or Claude would cost about the same as Kagi, while Kagi gives me access to both (plus Mistral and Gemini). So if you’re interested in playing around with the latest premium models, I still think Kagi’s Ultimate plan is a good deal.
That said, I’ve been disappointed with the development of LLMs this year across the board, and I’m not convinced any of them are worth the money at this point. This isn’t so much a problem with Kagi as it is with all the LLM vendors. The models have gotten significantly worse for my use cases compared to last year, and I don’t quite understand why; I guess they are optimizing for benchmarks that simply don’t align with my needs. I had great success getting zsh or Python one-liners last year, for example, whereas now it always seems to give me wrong or incomplete answers.
My biggest piece of advice when dealing with any LLM-based tools, including Kagi’s, is: don’t use it for anything you’re not able to validate and correct on your own. It’s just a time-saver, not a substitute for your own skills and knowledge.
Kagi was founded as an AI company so this is not surprising. I unsubscribed from them after learning that. Also, their CEO is a weirdo who harasses people critical of their product and he thinks the GDPR is optional.
It’s funny, I’ve been thinking a lot about how people acknowledge faults or shortcomings and still choose to ignore them, whether because they agree, don’t care, or think it doesn’t matter. Or they disagree but there’s no better alternative, or it’s the least bad alternative. I dunno.
In public internet spaces like Facebook, Discord, and the others, I’ve been seeing a lot of this happening recently with Linkin Park’s new singer. Some are happy and ignorant, some know and don’t care, some know and are saddened. There is a lot of vitriol between the people who know and are saddened and the people who don’t know or don’t care. This is just one example from this week, but it happens every week, with every story. It can probably be applied to literally anything. People’s level of information is heavily filtered through their predisposed beliefs (as in, if they already have an opinion, chances are it won’t change when they’re presented with new information).
In our spaces I see it with Brave. I see it with Kagi. We all saw it with Unity en masse and something actually happened about that, but even so people are still using Unity today, albeit I would guess out of necessity, or now ignorance since time has passed (not saying ignorance here is a fault). Before then we saw it with Audacity. Can’t forget Reddit, where a significant chunk of users are now participating here instead. And… yet… Reddit still exists, nearly in full.
It’s such a crazy phenomenon, how opinions are formed from emotional judgements based on the level of information people have, and given our current state of information sharing there are microcosms of willful ignorance. And some aren’t ignorant; it just doesn’t matter to them.
Welp, and there goes any reason to try it. God I hate AI.
Do you really hate algorithms (since AI doesn’t really exist yet) or do you hate the hype and marketing?
Yes
Well, the web shouldn’t be human. But if they were going to try to make it human, LLMs would not be the way.
I’ve used some of these features when I’m trying to skim many articles for my grad school work. It’s not terrible.
There is a use case for this stuff. Especially in a search engine.
Short of hosting your own LLM, Kagi is one of the few I’d hope can get it right and respect privacy. (So far unverified on the AI side tho)
It’s not terrible
it sucks at summarising information, though https://www.crikey.com.au/2024/09/03/ai-worse-summarising-information-humans-government-trial/
It’s often not a choice between an AI-generated summary and a human-generated one, though. It’s a choice between an AI-generated summary and no summary.
so, no summary at all, or one that does a shit job of pointing out the important bits, or gets them wrong and therefore isn’t a proper summary? Choices, choices.
Kagi actually has an interesting implementation for their search summary, and while it’s not perfect, it’s miles better than the alternatives in my experience. It uses a combination of Anthropic’s Claude for language processing and incorporates Wolfram Alpha for things that need numerical accuracy. Compared to Google AI or Copilot, I’ve been seeing good results.
While it isn’t perfect at summarizing, I’ve found their implementation to be “good enough”, and it can summarize pieces near-instantly, which I think is where it actually becomes useful. Humans may be better, but I don’t have the money or time to pay a human to summarize pages just to see if they’re worth delving into further.
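The “Claude for language, Wolfram Alpha for numbers” combination is essentially a routing decision. A hypothetical sketch of that idea, with both backends stubbed out (these are not real API calls, and the heuristic is made up for illustration):

```python
# Hypothetical sketch of routing queries between a computation engine
# and a language model, as described in the comment above.
# Both backends are stubs, not real Wolfram Alpha / Claude calls.
import re

def looks_numerical(query: str) -> bool:
    # Crude heuristic: any digits or arithmetic operators present.
    return bool(re.search(r"[\d+\-*/=^]", query))

def compute_engine(query: str) -> str:
    return f"exact answer for: {query}"   # stand-in for a numerical backend

def language_model(query: str) -> str:
    return f"fluent answer for: {query}"  # stand-in for an LLM backend

def answer(query: str) -> str:
    backend = compute_engine if looks_numerical(query) else language_model
    return backend(query)
```

The point of the split is that an LLM will happily produce a fluent but wrong number, while a computation engine won’t; routing number-heavy queries away from the model is one way to avoid that failure mode.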
Well that’s a bummer. I believe it.
Hot take: the web should not be more human.
And I’m pretty progressive on technological matters. There should still be a clear separation, though.