[BUG]: Perplexity model name does not match models available via the API #2274
Comments
This chore is so annoying. I really wish they would just host a /models endpoint so we don't have to keep doing this...
Totally get that!
Would be way more effort than it's worth, because they also keep changing the docs page too! 😆 ~$3B company and can't make a /models endpoint like everyone else, crazy!
Brainstorming here: maybe a feature where the user can type in a model name in addition to the preset dropdown?
Ideally, yeah - basically a combobox for the providers that don't allow us to pull models dynamically!
How are you running VectorAdmin?
Docker (local)
What happened?
The model name listed under Perplexity is wrong and does not match any model on the Perplexity model card.
Are there known steps to reproduce?
Follow these steps:
Chat Settings > Workspace LLM Provider > Perplexity AI > Workspace Chat Model > llama-3.1-sonar-huge-128k-chat
Now try chatting and it will say the model is invalid. That's because
llama-3.1-sonar-huge-128k-chat
is not the right name for the 405B model. It is llama-3.1-sonar-huge-128k-online,
as seen in the docs here - https://docs.perplexity.ai/guides/model-cards
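The reported fix amounts to a one-entry rename. A minimal sketch, where the key and value come straight from this issue but the dict and helper names are made up for illustration:

```python
# Map the incorrect preset name shipped in the UI to the name that
# Perplexity's model cards actually document (per this issue report).
MODEL_NAME_FIXES = {
    # wrong (shipped)                  -> correct (per docs.perplexity.ai)
    "llama-3.1-sonar-huge-128k-chat": "llama-3.1-sonar-huge-128k-online",
}

def corrected_model_name(name: str) -> str:
    """Return the documented model name for a possibly stale preset."""
    return MODEL_NAME_FIXES.get(name, name)
```

Unknown names pass through unchanged, so the helper is safe to apply to every preset, not just the broken one.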