Is your feature request related to a problem? Please describe.
No, it isn't.
Describe the solution you'd like
"Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, and also to do speech-to-speech translation, analysis of spoken audio,"
This feature would take LocalAI to the next level. Once added, big-AGI could add support for it, so you wouldn't need a separate voice-calling backend the way they currently do.
https://huggingface.co/fixie-ai/ultravox-v0_5-llama-3_1-8b
https://github.com/vllm-project/vllm/blob/661a34fd4fdd700a29b2db758e23e4e243e7ff18/examples/offline_inference_audio_language.py#L23
Describe alternatives you've considered
Nothing
Additional context
"Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and voice user message). The input to the model is given as a text prompt with a special <|audio|> pseudo-token, and the model processor will replace this magic token with embeddings derived from the input audio. Using the merged embeddings as input, the model will then generate output text as usual.
In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output. No preference tuning has been applied to this revision of the model."
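
For reference, here is a minimal sketch of how the model is exercised through the `transformers` pipeline, adapted from the usage shown on the model card linked above. The audio file path, system prompt, and `max_new_tokens` value are placeholders, and the exact pipeline arguments may differ between Ultravox revisions:

```python
# pip install transformers peft librosa
# Minimal sketch, assuming the pipeline-based usage shown on the Ultravox model card.
import transformers
import librosa

pipe = transformers.pipeline(
    model="fixie-ai/ultravox-v0_5-llama-3_1-8b",
    trust_remote_code=True,
)

# Placeholder path: load any audio clip (resampled to 16 kHz) as the "voice user message".
audio, sr = librosa.load("question.wav", sr=16000)

turns = [
    {
        "role": "system",
        "content": "You are a friendly and helpful assistant.",
    },
]

# The model processor injects embeddings derived from the audio where the
# <|audio|> pseudo-token would appear in the prompt; the model then
# generates text output as usual.
output = pipe(
    {"audio": audio, "turns": turns, "sampling_rate": sr},
    max_new_tokens=64,
)
print(output)
```

The linked vLLM example does the same thing for offline inference: the audio is passed as multimodal data alongside a text prompt that contains the audio placeholder token.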
Thank you for building such an easy-to-use and awesome project!