```
fish_speech/models/text2semantic/inference.py --text "The text you want to convert" --prompt-text "Your reference text" --prompt-tokens "fake.npy" --checkpoint-path "checkpoints/fish-speech-1.5-yth-lora-2A100/" --num-samples 2 --compile
```
Hi, I trained my model, but when I followed these steps during the inference phase, I did not hear any vocal output. Why?
Step 1: Training
Step 2: Inference
Prepare the model for inference:
Then, as described in the documentation (https://speech.fish.audio/inference/#1-generate-prompt-from-voice):
I cannot hear any valid voice in the result (https://drive.google.com/file/d/1w3MPQ6jL0Mc5qneBF2fgR9G7-aoTiBtP/view?usp=sharing).
Also, the next step, generating semantic tokens from text as described in the documentation (https://speech.fish.audio/inference/#2-generate-semantic-tokens-from-text), does not generate voice correctly for me:
and
which returns:
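Not part of the original report, but a small diagnostic sketch that may help narrow this down: it loads the semantic-token file (assumed here to be the `fake.npy` passed via `--prompt-tokens` above; the function name is mine) and checks whether the generated tokens are degenerate. If the text2semantic model collapsed to repeating a single token, the decoded audio will be silence or noise regardless of the vocoder.

```python
import numpy as np


def token_diversity(path: str) -> int:
    """Load a semantic-token .npy file and return the number of distinct tokens."""
    codes = np.load(path)
    print("shape:", codes.shape, "dtype:", codes.dtype)
    return int(np.unique(codes).size)


# Usage (hypothetical path, matching the command above):
#   n = token_diversity("fake.npy")
# A result of 1-2 distinct tokens suggests the model output collapsed.
```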
The resulting fake.wav audio is attached at [Google Drive link]. Could you guide me on why it does not generate a valid response?
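A second diagnostic, not from the original report, that may help confirm what "no vocal voice" means here: it measures the peak and RMS level of the generated WAV file (assuming 16-bit PCM output; the function name and threshold are mine). This distinguishes a genuinely silent file from a playback problem.

```python
import wave

import numpy as np


def audio_stats(path: str) -> tuple[float, float]:
    """Return (peak, rms) of a 16-bit PCM WAV file, normalized to [-1, 1]."""
    with wave.open(path, "rb") as f:
        frames = f.readframes(f.getnframes())
    audio = np.frombuffer(frames, dtype=np.int16).astype(np.float32) / 32768.0
    if audio.size == 0:
        return 0.0, 0.0
    return float(np.abs(audio).max()), float(np.sqrt(np.mean(audio ** 2)))


# Usage: peak, rms = audio_stats("fake.wav")
# A peak below roughly 0.001 means the file is essentially silent, i.e. the
# decoder produced no voice, rather than the player failing to open it.
```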