Starts a session. The robotMode parameter specifies which mode to open duix in; the valid values are 'bot' and null.
null: direct-drive mode. After calling the speak method, the digital human speaks the sentence verbatim. For example, calling speak('Who are you?') makes the digital human say "Who are you?".
'bot': conversational mode. After calling the speak method, the digital human answers the question. For example, calling speak('Who are you?') might get the reply "I'm Aixia, the silicon-based digital person, nice to meet you."
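A minimal sketch of the two modes. Only robotMode and speak() come from the text above; the duix instance and any other start options are assumptions.

```js
// Hypothetical sketch: start may take additional options not shown here.

// Direct-drive mode: the digital human repeats the text verbatim.
duix.start({ robotMode: null });
duix.speak('Who are you?'); // says "Who are you?"

// Conversational mode (a separate session): the digital human answers.
duix.start({ robotMode: 'bot' });
duix.speak('Who are you?'); // e.g. "I'm Aixia, the silicon-based digital person..."
```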
# stopRecord({success: callback on success, fail: callback on failure})
Ends the recording; after the recording succeeds, the speech recognition result (text) is returned.
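A sketch of the callback shape, assuming the recognized text is passed as the success callback's argument (the parameter is not specified above):

```js
duix.stopRecord({
  success: (text) => {
    // Assumption: the recognized text is the callback's argument.
    console.log('ASR result:', text);
  },
  fail: (err) => {
    console.error('stopRecord failed:', err);
  },
});
```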
# openAsr({result: callback invoked when a real-time recognition result is received})
Turns on real-time speech recognition (note: this method must be called after the show event is triggered). Once enabled, the result callback fires continuously, returning recognition results.
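A sketch wiring openAsr to the show event mentioned above; the duix.on event API and the shape of the result payload are assumptions:

```js
duix.on('show', () => {
  duix.openAsr({
    result: (res) => {
      // Assumption: res carries the recognized text for the current utterance.
      console.log('realtime ASR:', res);
    },
  });
});
```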
Commands with which the server drives the digital human. If conversational mode ('bot') was specified when calling start, this event fires every time the server makes a verbal reply; it carries the text and other information of the digital human's reply for that round, which can be used for subtitles.
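A hypothetical subtitle hookup; the event name 'reply' and the data.text field are assumptions, since the section does not name them:

```js
const subtitleEl = document.getElementById('subtitle');
duix.on('reply', (data) => {
  // Assumption: data.text holds this round's reply text.
  subtitleEl.textContent = data.text;
});
```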
Fixed a bug in DigitalHuman.js (lines 166 and 169): an event-name error prevented the wsClose and wsError events from firing.
Modified the webpack configuration to output the SDK version once by default, which is convenient for debugging in both development and production environments.
0.0.39
Added pause and resume methods (see the sketch below).
Fixed the occasional word-swallowing (dropped audio) issue.
When playback ends, the pause event is no longer triggered; only the ended event fires.
Added a feature: the picture and sound are paused when the page is not visible, and playback continues when it becomes visible again.
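A minimal sketch of the new pause/resume methods; the no-argument signatures, the buttons, and the duix.on event API are assumptions:

```js
const pauseBtn = document.querySelector('#pause');   // hypothetical buttons
const resumeBtn = document.querySelector('#resume');
pauseBtn.addEventListener('click', () => duix.pause());
resumeBtn.addEventListener('click', () => duix.resume());
// Per the change above, only 'ended' fires when playback finishes.
duix.on('ended', () => console.log('playback finished'));
```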
0.0.38
Fixed an occasional bug where, after calling the say method, loading would get stuck and playback could not start.
Added a feature: when options.body.autoplay=false, calling say no longer auto-plays the silent video.
0.0.37
Added getCanvas() method.
Added getAudioContext() method (see the sketch below).
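A sketch of the two getters added in 0.0.37; that they return an HTMLCanvasElement and a Web Audio AudioContext is an assumption:

```js
const canvas = duix.getCanvas();
const still = canvas.toDataURL('image/png'); // e.g. capture a still frame
console.log('snapshot:', still.slice(0, 32), '...');

const audioCtx = duix.getAudioContext();
console.log('audio sample rate:', audioCtx.sampleRate);
```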
0.0.36
Modified the startup method; the system can now be accessed normally on mobile phones via an IP address.
Changed reconnectInterval in AIFace.js to 1 to enable reconnection after a disconnect.
Fixed a bug in AIFace.js (line 48): close => onClose.
0.0.34
Added the wsClose event, fired when the AIFace connection closes.
The silent video now alternates between forward and reverse playback, fixing the jump that occurred when the silent video's start and end do not connect seamlessly (e.g. the Jordanian male model).
Removed some debug logs.
Fixed a bug where the load event was not triggered.
0.0.32
Fixed a bug where the canplaythrough event was not triggered when the audio was too short.
0.0.31
Further optimized the client buffer strategy and reduced memory usage; the Jordan model's memory usage is now stable at about 700 MB.
Fixed some bugs.
0.0.30
Modified the client buffer policy to reduce client memory usage.
Added options.body.autoplay to control whether the silent video plays automatically after loading. The default is true. If set to false, the duix.playSilence() method can be called after the bodyload event is triggered to start playback actively (see the sketch below).
Optimized the TTS cache scheme; the cache can now be retained for a longer time.
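A sketch of manual playback with autoplay disabled. options.body.autoplay, playSilence(), and the bodyload event come from the entry above; passing the options to start is an assumption, and the remaining options are omitted:

```js
duix.start({
  body: { autoplay: false }, // silent video will not start on its own
  // ...other options omitted
});
duix.on('bodyload', () => {
  duix.playSilence(); // actively start the silent video
});
```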
0.0.27
Added the body.autoplay configuration to control whether the body plays automatically after loading.
Removed the real-time texture code path; the buffer must now be used, though the buffer size can be set to 0.
Changed the default buffer strategy to auto; the buffer size is predicted from the loading speed of the first half second of face loading.
Adjusted the decoding interval to reduce instantaneous CPU load, fixing the problem of the page being forcibly refreshed on some mobile phones due to momentary CPU spikes.
0.0.26
Fixed an error that occurred when quality.fps and quality.quarter were not passed.
Added the bodyprocess event to report body loading progress.