
Speech is fundamental to most interactions. This involves both making Furhat speak and listening for speech from the user, which Furhat can then respond to. To have Furhat say something, use: furhat.say("Some text"). Behind the scenes, say calls a say-state that sends an event to the synthesizer to synthesize the text, and returns when the synthesizer is done. This means that your application will continue receiving events while speaking. Speech synthesis actions are added to a speech synthesis queue.
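As a quick illustration, a say call normally lives inside a flow state. This is a minimal sketch assuming the standard Kotlin skill flow API (furhatos.flow.kotlin); the state name and texts are hypothetical:

    import furhatos.flow.kotlin.*

    // Hypothetical greeting state: furhat.say blocks until the synthesizer
    // is done, but the flow keeps receiving events while the speech plays.
    val Greeting: State = state {
        onEntry {
            furhat.say("Some text")
            // Only reached once the line above has finished being spoken.
            furhat.say("Nice to meet you")
        }
    }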


The synthesis action is blocking (synchronous) by default, which means that the synthesis is completed before the next action takes place. If you want to continue with the next action immediately, you can pass the async parameter: furhat.say("Some other text", async = true) // whatever comes after here will be executed immediately

You can also check if Furhat is currently speaking: furhat.isSpeaking()

You can make Furhat stop speaking with the following action: furhat.stopSpeaking()
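Here is a sketch of how these calls can work together, assuming a hypothetical state that cuts a long monologue short when the user walks away (onUserLeave is a standard flow trigger; the texts are placeholders):

    import furhatos.flow.kotlin.*
    import furhatos.gestures.Gestures

    val Explaining: State = state {
        onEntry {
            // async = true queues the speech and returns immediately.
            furhat.say("Let me walk you through the whole menu in detail", async = true)
            // This runs while Furhat is still talking.
            furhat.gesture(Gestures.Smile)
        }

        onUserLeave {
            // If the user leaves mid-sentence, stop the ongoing synthesis.
            if (furhat.isSpeaking()) {
                furhat.stopSpeaking()
            }
        }
    }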

The furhat.say and furhat.ask (see further down) commands also support more complex speech actions, called Utterances. Utterances allow you to combine speech with behaviors (such as Gestures) or events that should be mixed with the spoken utterance, as well as randomly selected parts and audio files. You can define utterances as objects and then use them in a furhat.say: val greeting = utterance { ... }. An example of the richness you can achieve with natural speech is available on YouTube.

Audio files have to be fetched from a URL and need to be in a supported audio format. Specifically, if you provide the audio URL, the system looks for a phonetics file (".pho") at the same URL (see the lip sync tool below); if such a file is provided, the system will use it for lip sync. If no such file is provided, lip sync is automatically generated by default.
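Putting these pieces together, the following sketch defines an utterance object that mixes a randomly selected opening, a gesture, plain speech, and an audio file. The random and Audio elements and the URL are illustrative assumptions based on the utterance features described above, not taken verbatim from this text:

    import furhatos.flow.kotlin.*
    import furhatos.gestures.Gestures

    val greeting = utterance {
        // Randomly pick one of the openings each time the utterance is spoken.
        random {
            +"Hi there,"
            +"Hello,"
        }
        +Gestures.Smile
        +"nice to meet you."
        // Hypothetical URL; a .pho file at the same URL would drive lip sync,
        // otherwise lip sync is generated automatically.
        +Audio("http://example.com/sounds/chime.wav", "<chime>")
    }

The utterance is then used like any other speech action: furhat.say(greeting)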
