# DUIX H5 SDK documentation

# Install

```shell
# Install Duix
npm i duix-guiji -S
```

# Get Started

Get the demo from here. You can also try it out on CodePen.

```js
import DUIX from 'duix-guiji'

const duix = new DUIX({
    container: '.container',
    token: 'xxxxxxxxx'
})
```

How to get a token?

# Options

Calling new DUIX(options), as in the example above, returns a DUIX instance, where options is a configuration object with the following fields:

| Name | Type | Description | Default | Example |
| --- | --- | --- | --- | --- |
| container | string | Digital human container. The rendered digital human is mounted into this DOM element. | `.remote-container` | `#human` |
| token | string | DUIX token, used for authentication. | | see How to get a token? |

# Methods

# start({robotMode: 'bot' | 'null'})

Start a session. The robotMode parameter specifies which mode DUIX opens in; the accepted values are 'bot' and 'null':

- 'null': direct-drive mode. After speak is called, the digital human speaks the sentence verbatim. For example, calling speak('who are you?') makes the digital human say "who are you?".
- 'bot': conversational mode. After speak is called, the digital human answers your question. For example, calling speak('who are you?') might produce "I'm Aixia, the silicon-based digital person, nice to meet you."
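
A minimal sketch of starting a session, assuming the duix instance from Get Started and using the intialSucccess event (spelled as in the Events table below) to know when the SDK is ready:

```js
// Wait until initialization succeeds, then start a session.
duix.on('intialSucccess', () => {
    // 'bot' = conversational mode, 'null' = direct-drive mode.
    duix.start({ robotMode: 'bot' })
})
```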

# setVideoMuted()

Set whether the digital human's video is muted: true mutes it, false unmutes it.
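
A one-line sketch; the boolean argument follows the description above:

```js
duix.setVideoMuted(true)  // mute the digital human's video
duix.setVideoMuted(false) // unmute it again
```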

# speak(content: string)

Drive the digital human to speak. Both text-driven and audio-file-driven speech are supported.
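
A sketch of the text-driven form; how an audio file is passed is not specified here, so only text is shown:

```js
// In direct-drive ('null') mode the text is spoken verbatim;
// in conversational ('bot') mode it is answered as a question.
duix.speak('who are you?')
```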

# startRecord()

Start recording.

# stopRecord({success: Function, fail: Function})

Stop recording. When the recording succeeds, the speech-recognition result (text) is passed to the success callback; fail is called on failure.
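
A sketch of the record/stop flow; the callback arguments are assumptions (success is assumed to receive the recognized text):

```js
duix.startRecord()

// ... later, e.g. when the user releases a push-to-talk button:
duix.stopRecord({
    success: (text) => {
        console.log('recognized:', text)
        duix.speak(text) // e.g. forward the recognized question to the digital human
    },
    fail: (err) => {
        console.error('recognition failed:', err)
    }
})
```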

# openAsr({result: Function})

Turn on real-time speech recognition (note: this method must be called after the show event has fired). Once enabled, the result callback is invoked repeatedly with the recognition results.

# closeAsr()

Turn off real-time speech recognition.
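
A sketch combining both calls, assuming the result callback receives the recognized text; per the note above, openAsr waits for the show event:

```js
duix.on('show', () => {
    duix.openAsr({
        result: (text) => {
            console.log('ASR result:', text)
        }
    })
})

// Later, when real-time recognition is no longer needed:
duix.closeAsr()
```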

# stop()

Stop the current session.

# on(eventname, callback)

Listen for events.

# Parameters:
# eventname

For the event name, see the table below.

# callback

The callback function invoked when the event fires.

# Events

| Name | Description |
| --- | --- |
| error | Fired when there is an uncaught error. |
| bye | Fired when the session ends. |
| intialSucccess | Fired when digital-human initialization succeeds. |
| show | Fired when the digital human is displayed. |
| progress | Reports the digital human's loading progress. |
| command | Carries the server's commands that drive the digital human. If conversational mode was specified when calling start, this event fires every time the server produces a verbal reply; it contains the text and other information of the digital human's reply in that round, which can be used for subtitles. |
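
A sketch of wiring up these events; the showSubtitle helper and the shape of the command payload are hypothetical:

```js
duix.on('error', (err) => console.error('DUIX error:', err))
duix.on('progress', (p) => console.log('loading progress:', p))
duix.on('bye', () => console.log('session ended'))

// In conversational ('bot') mode, each server reply arrives here.
// The payload shape is an assumption for illustration.
duix.on('command', (cmd) => {
    showSubtitle(cmd.text) // hypothetical subtitle renderer
})
```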

# Versions:

1.0.30

  1. Changed the DUIX constructor; only two parameters are now required.
  2. Improved iOS compatibility; fixed real-time recognition issues on iOS 15.1 and above.
  3. Added unified logging.
  4. The start method now accepts the robotMode parameter, so the conversational/direct-drive mode can be switched without re-creating the instance.

1.0.27

  1. Fixed an RTC echo problem.

1.0.26

  1. After removing the SDK's internal recording, automatic playback was changed to emitting an event and letting playback happen externally.

1.0.25

  1. Removed automatic playback after the SDK's internal speech-to-text conversion; it now emits an event so playback can be handled externally.

1.0.24

  1. Added support for private-deployment configuration.

1.0.23

  1. Optimized the one-click build and packaging script.

1.0.22

  1. Added disconnect-time handling to XMPP.
  2. Added several new configuration items to the RTC audio parameters.

1.0.19

  1. Changed the SDK's underlying architecture to WebRTC.

0.0.45 (not released to npm yet)

  1. Fixed the screen-off listener event being triggered multiple times.
  2. Fixed the face continuing to play on resume after the digital human was paused and stopped.
  3. Fixed the audio continuing to play on resume after the digital human was stopped.
  4. Fixed playback starting immediately when switching to the background and back while playback was paused.

0.0.44

  1. This major version adds an authentication feature.
  2. Optimized the test code to make testing easier.
  3. Fixed some bugs.

0.0.43

  1. Added the getAudioDest method to get a MediaStream from the AudioContext.

0.0.42

  1. Request.js => getArrayBuffer: added a method to actively abort the request.
  2. DigitalHuman.js => _sayVoice: added a return when a network cancellation is detected.
  3. DigitalHuman.js => stop: added a cancel call so a network request that succeeds after stop cannot cause the stop to fail.

0.0.41

  1. Request.js: added an axios timeout.
  2. Request.js => getArrayBuffer: added a failure return for audio requests; DigitalHuman.js => _sayVoice: fires an event when an audio request fails; DUIX.js: added a new audioFailed event, fired when an audio request fails.

0.0.40

  1. Fixed a bug in DigitalHuman.js (lines 166 and 169) where an event-name error prevented wsClose and wsError from firing.
  2. Modified the webpack configuration to log the SDK version once by default, which helps debugging in development and production environments.

0.0.39

  1. Added pause and resume methods.
  2. Fixed the occasional swallowing issue.
  3. When playback ends, the pause event is no longer triggered; only the ended event fires.
  4. New feature: the picture and sound pause when the page is hidden and resume when it becomes visible again.

0.0.38

  1. Fixed an occasional bug where, after calling the say method, loading got stuck and playback never started.
  2. New behavior: when options.body.autoplay = false, calling say no longer auto-plays the silent video.

0.0.37

  1. Added the getCanvas() method.
  2. Added the getAudioContext() method.

0.0.36

  1. Modified the startup method; the system can now be accessed normally on mobile phones via an IP address.
  2. AIFace.js: reconnectInterval changed to 1 to enable reconnection after a disconnect.
  3. Bug fix in AIFace.js (line 48): close => onClose.

0.0.34

  1. Added the wsClose event, fired when the AIFace connection closes.
  2. Added the wsError event, fired when the AIFace connection errors.

0.0.33

  1. The silent video now plays alternately forward and backward, fixing the jump that occurred when the silent video does not loop seamlessly (such as the Jordanian male model).
  2. Removed some debug logs.
  3. Fixed a bug where the load event was not triggered.

0.0.32

  1. Fixed a bug where the canplaythrough event was not triggered when the audio was too short.

0.0.31

  1. Further optimized the client buffer strategy to reduce memory usage; the Jordan model's memory usage is now stable at about 700 MB.
  2. Fixed some bugs.

0.0.30

  1. Modified the client buffer policy to reduce client memory usage.
  2. Added options.body.autoplay to control whether the silent video plays automatically after loading (default: true). If set to false, duix.playSilence() can be called after the bodyload event fires to start playback manually.
  3. Optimized the TTS cache scheme; the cache can now be retained for a longer time.

0.0.27

  1. Added the body.autoplay configuration to control whether the body plays automatically after loading.
  2. Removed the real-time texture code; the buffer is now always used, and the buffer size can be set to 0.
  3. Changed the default buffer strategy to auto; the buffer size is predicted from the loading speed of the first half second of face loading.
  4. Adjusted the decoding interval to reduce instantaneous CPU load, fixing forced page refreshes caused by momentary CPU spikes on some phones.

0.0.26

  1. Fixed an error when quality.fps and quality.quarter are not passed.
  2. Added the bodyprocess event to report the body loading progress.