Hi,
Just like the title says:
I’m trying to run:
With:
- koboldcpp:v1.43 using HIPBLAS on a 7900XTX / Arch Linux
Running:
--stream --unbantokens --threads 8 --usecublas normal
I get very limited output with lots of repetition.
I mostly left the settings at their defaults.
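For reference, here is a sketch of a fuller launch command. The model filename is a placeholder, and `--gpulayers` / `--contextsize` are standard koboldcpp options in builds of that era; adjust the values to your model.

```shell
# Hypothetical launch command for koboldcpp with ROCm/HIPBLAS.
# - The model path is a placeholder; point it at your actual GGUF file.
# - --gpulayers offloads transformer layers to the GPU; a 24 GB 7900 XTX
#   can usually hold a 13b model entirely, so a high value is reasonable.
# - --contextsize raises the context window if the model supports it.
python koboldcpp.py --model ./models/your-model.gguf \
    --usecublas normal \
    --gpulayers 43 \
    --contextsize 4096 \
    --threads 8 \
    --stream --unbantokens
```

Note that repetitive output is usually tamed with the repetition penalty and sampler settings in the Kobold Lite web UI (or via the API), not with a launch flag, so that is worth checking first.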
Does anyone know how I can make things run better?
EDIT: Sorry for multiple posts, Fediverse bugged out.
Ah thank you for the trove of information. What would be the best general knowledge model according to you?
Well, I’m not that up to date anymore. I think MythoMax 13b is pretty solid, also for knowledge. But I can’t be bothered to read up on things twice a week anymore; that news is probably already three weeks old and there will be a (slightly) better one out by now. And it gets outperformed by pretty much every one of the big 70b models, but I can’t run those on my hardware, so I wouldn’t know.
This benchmark ranks them by several scientific tests. You can hide the 70b models, and scarlett-33b seems to be a good contender, or the older Platypus models directly below it. But be cautious: sometimes these models look better on paper than they really are.
Also regarding ‘knowledge’: I don’t know about your application. Just in case you’re not aware of this… Language models hallucinate and regularly just make up stuff. Even expensive and big models will do this. The models we play with, even more so. Just be aware of it.
And lastly: There is another good community here on Lemmy: [email protected] You can find a few tutorials and more people there, too. And have a look at the ‘About’ section or stickied posts there. They linked more benchmarks and info.
Alright, thanks for the info & additional pointers.