CloudHub JSSDK

CloudHubRTC Data object document

ClientConfig description

Defines the interface of the config parameter in createClient. Type: Object.

codec: string - The video codec. Supported values are "VP8" and "H264"; the default is "VP8".
Note: a channel supports only one codec, which is determined by the first user to enter the channel.
mode: string - The channel's use scenario. Two scenarios are available: communication ("rtc") and live broadcast ("live"); the default is "rtc".
live: live-broadcast scenario. There are two user roles, host and audience, which can be set via the Client.setClientRole method. A host can send and receive audio/video streams; an audience member can only receive them, not send.
rtc: communication scenario, for ordinary one-to-one calls or group chats. Any user in the channel can send and receive audio/video streams.
Note:
1) A channel supports only one scenario, which is determined by the first user to enter the channel.
2) We recommend keeping each channel to a single scenario. Do not switch a channel between the communication and live scenarios; if you need to change scenarios, use a new channel.
3) The communication scenario (rtc) limits how many users may join: by default a channel allows 100 users. Contact CloudHub business staff if you need to adjust this limit.
4) Set mode when calling createClient to specify the channel's use scenario.

Example:

var config = {
    codec: "VP8", // "VP8" (default) or "H264"
    mode: "rtc"   // "rtc" (default) or "live"
};
var client = CloudHubRTC.createClient(config);

StreamSpec Description

Defines the interface of the spec parameter in createStream. Type: Object.

uid: string - User ID.
type: string - The stream type. Possible values:
1) video: audio/video device stream.
2) screen: screen sharing / program sharing.
attributes?: object - Optional custom stream information, retrievable on the receiving end via getAttributes.
sourceID?: string - Source ID. Its requirements differ by type:
1) video: in single-stream mode this value need not be passed and is internally set to "default_source_id"; in multi-stream mode, pass the video device ID, and if omitted the default video device ID is used.
2) screen: this value need not be passed; it is always "screen".
Note: in multi-stream mode, when the stream type is video (i.e. type: "video"), a single stream may not carry both audio and video; audio and video must be published as separate streams, with the audio stream's sourceID set to "audio" and the video stream's sourceID set to the video device ID.
audio?: boolean - Specifies whether the stream contains audio. Default is true; when type is "screen", audio defaults to false.
If type is "screen" and audio is set to true, note the following:
1) This feature is only supported in Chrome 73 and above on the Windows platform.
2) After enabling audio, the user must also check "Share audio" in the browser's screen-sharing dialog for it to actually take effect.
3) In multi-stream mode, when the stream type is video (i.e. type: "video") and sourceID != "audio", the SDK forces audio to false.
microphoneId?: string - Microphone device ID; if not passed, the default audio device ID is used. This value applies only when type is "video".
cameraId?: string - Camera device ID; in single-stream mode, if not passed, the default video device ID is used.
1) This value applies only when type is "video".
2) In multi-stream mode, when the stream type is video (i.e. type: "video"), this value need not be passed; the SDK sets it to the value of sourceID (i.e. the device ID).
speakerId?: string - Speaker device ID; if not passed, the default speaker device ID is used.
mirror?: boolean - Specifies whether the local video stream is mirrored when displayed locally.
true: mirrored.
false: (default) not mirrored. We recommend mirroring when using the front camera and not mirroring when using the rear camera.
audioSource?: MediaStreamTrack - Specifies the audio source of the stream. Not recommended unless you have special needs.

1) When audioSource is specified, the microphoneId configuration is ignored.
2) The switchDevice method is no longer supported after audioSource is specified.
videoSource?: MediaStreamTrack - Specifies the video source of the stream. Not recommended unless you have special needs.
1) If the video source comes from a canvas, redraw the canvas content at least once per second to keep the video stream publishing normally while the canvas content is unchanged.
2) When videoSource is specified, the cameraId configuration is ignored.
3) The switchDevice method is no longer supported after videoSource is specified.

Note:
1) The stream ID is generated from uid, type, and sourceID, in the format uid:type:sourceID.
2) In multi-stream mode, when the stream type is video (i.e. type: "video"), a single stream may not carry both audio and video; audio and video must be published as separate streams, with the audio stream's sourceID set to "audio" and the video stream's sourceID set to the video device ID.

Example:

var stream = CloudHubRTC.createStream({
    uid: "uid",
    type: "video",
    audio:true,
    video:true
});
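
The stream-ID rule in the note above can be sketched as a small helper. buildStreamId is our illustrative name, not an SDK API; it only assumes the single-stream defaults described for sourceID:

```javascript
// Illustrative only: composes a stream ID in the documented
// "uid:type:sourceID" format. Not part of the CloudHubRTC SDK.
function buildStreamId(uid, type, sourceID) {
  if (type === "screen") {
    // screen shares always use the fixed sourceID "screen"
    sourceID = "screen";
  } else if (!sourceID) {
    // single-stream video defaults to "default_source_id"
    sourceID = "default_source_id";
  }
  return uid + ":" + type + ":" + sourceID;
}

console.log(buildStreamId("uid1", "video"));          // "uid1:video:default_source_id"
console.log(buildStreamId("uid1", "screen"));         // "uid1:screen:screen"
console.log(buildStreamId("uid1", "video", "audio")); // multi-stream audio-only stream
```

In multi-stream mode the third argument would be the video device ID (or "audio" for the audio-only stream), matching the note above.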

LocalAudioStatsMap

A map of LocalAudioStats objects; each stream ID (sid) corresponds to one LocalAudioStats. After you call getLocalAudioStats, the sid and LocalAudioStats data of each local stream are provided through this interface. Type: Object.

[sid: string]: LocalAudioStats - The LocalAudioStats corresponding to the stream ID.
For LocalAudioStats, see the LocalAudioStats section of this document.

Example:

{sid1:LocalAudioStats1, sid2:LocalAudioStats2}

LocalVideoStatsMap

A map of LocalVideoStats objects; each stream ID (sid) corresponds to one LocalVideoStats. After you call getLocalVideoStats, the sid and LocalVideoStats data of each local stream are provided through this interface. Type: Object.

[sid: string]: LocalVideoStats - The LocalVideoStats corresponding to the stream ID.
For LocalVideoStats, see the LocalVideoStats section of this document.

Example:

{sid1:LocalVideoStats1, sid2:LocalVideoStats2}

RemoteAudioStatsMap

A map of RemoteAudioStats objects; each stream ID (sid) corresponds to one RemoteAudioStats. After you call getRemoteAudioStats, the sid and RemoteAudioStats data of each remote stream are provided through this interface. Type: Object.

[sid: string]: RemoteAudioStats - The RemoteAudioStats corresponding to the stream ID.
For RemoteAudioStats, see the RemoteAudioStats section of this document.

Example:

{sid1:RemoteAudioStats1, sid2:RemoteAudioStats2}

RemoteVideoStatsMap

A map of RemoteVideoStats objects; each stream ID (sid) corresponds to one RemoteVideoStats. After you call getRemoteVideoStats, the sid and RemoteVideoStats data of each remote stream are provided through this interface. Type: Object.

[sid: string]: RemoteVideoStats - The RemoteVideoStats corresponding to the stream ID.
For RemoteVideoStats, see the RemoteVideoStats section of this document.

Example:

{sid1:RemoteVideoStats1, sid2:RemoteVideoStats2}

MediaStreamTrack

Media stream track

This interface represents a single media track in a stream, such as an audio track or a video track.

For more information, see the browser's MediaStreamTrack documentation.

LocalStreamStats

Connection statistics for the local audio/video stream

After a local stream is published successfully, connection statistics are provided through this interface.

structure:

{audio:{...audiostats}, video:{...videostats} }
video.rtt: number - Average round-trip delay of the video.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
video.bytes: number - Video bytes sent.
video.packets: number - Video packets sent.
video.packetsLost: number - Number of sent video packets lost.
video.frameRate: number - Sent video frame rate.
video.resolutionWidth: number - Sent video width.
video.resolutionHeight: number - Sent video height.
video.timestamp: number - Video timestamp.
audio.rtt: number - Average round-trip delay of the audio.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
audio.bytes: number - Audio bytes sent.
audio.packets: number - Audio packets sent.
audio.packetsLost: number - Number of sent audio packets lost.
audio.timestamp: number - Audio timestamp.
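
Because packets and packetsLost are cumulative counters, a send-side loss rate can be derived from them. A minimal sketch, assuming the two counters are disjoint (lost packets are not included in packets); lossRatePercent is a hypothetical helper, not an SDK API:

```javascript
// Illustrative only: percentage loss rate from the cumulative
// packets / packetsLost counters documented above.
function lossRatePercent(stats) {
  var total = stats.packets + stats.packetsLost;
  if (total === 0) return 0; // nothing sent yet
  return (stats.packetsLost * 100) / total;
}

// Sample LocalStreamStats-shaped object (values are made up).
var localStats = {
  audio: { rtt: 40, bytes: 12000,  packets: 95,  packetsLost: 5, timestamp: Date.now() },
  video: { rtt: 42, bytes: 480000, packets: 200, packetsLost: 0, timestamp: Date.now() }
};

console.log(lossRatePercent(localStats.audio)); // 5
console.log(lossRatePercent(localStats.video)); // 0
```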

RemoteStreamStats

Remote audio/video stream connection statistics

After a remote audio/video stream is received successfully, connection statistics are provided through this interface.

structure:

{audio:{...audiostats}, video:{...videostats} }
video.rtt: number - Average round-trip delay of the video.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
video.bytes: number - Video bytes received.
video.packets: number - Video packets received.
video.packetsLost: number - Number of received video packets lost.
video.frameRate: number - Received video frame rate.
video.resolutionWidth: number - Received video width.
video.resolutionHeight: number - Received video height.
video.timestamp: number - Video timestamp.
audio.rtt: number - Average round-trip delay of the audio.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
audio.bytes: number - Audio bytes received.
audio.packets: number - Audio packets received.
audio.packetsLost: number - Number of received audio packets lost.
audio.timestamp: number - Audio timestamp.

LocalAudioStats

Local audio connection statistics

rtt: number - Average round-trip delay of the audio.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
bytes: number - Audio bytes sent.
packets: number - Audio packets sent.
packetsLost: number - Number of sent audio packets lost.
timestamp: number - Audio timestamp.

LocalVideoStats

Connection statistics for local video

rtt: number - Average round-trip delay of the video.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
bytes: number - Video bytes sent.
packets: number - Video packets sent.
packetsLost: number - Number of sent video packets lost.
frameRate: number - Sent video frame rate.
resolutionWidth: number - Sent video width.
resolutionHeight: number - Sent video height.
timestamp: number - Video timestamp.

StreamPlayError

Stream playback error information

When a call to stream.playVideo / stream.playAudio fails to play the video/audio stream, this object describes the possible reasons for the failure.

Normally, when stream.playAudio fails and errflag is anything other than "ABORT_ERROR", you can guide the user to trigger playback with a gesture (by calling the stream.resumeAudio method).

errflag: string - The playback error flag. Possible values:
"ABORT_ERROR": the operation was aborted.
"NOT_SUPPORTED_ERROR": the media format is not supported.
"NOT_ALLOWED_ERROR": playback is not allowed, e.g. the browser blocks autoplay and requires a click before playback; see Handling the browser autoplay policy.
"REQ_TIMEOUT": playback timed out. By default there is no timeout; to enable one, set CloudHubRTC.DEVICE_ADAPTER_CONFIG.MEDIA_ELE_PLAY_TIMEOUT_TS (in milliseconds). A value of -1 means no timeout. For example, CloudHubRTC.DEVICE_ADAPTER_CONFIG.MEDIA_ELE_PLAY_TIMEOUT_TS = 30000 sets the timeout to 30 seconds.
"INVALID_ARG": invalid parameters.
"UNDEFINED_ERROR": undefined error.
name: string - The error name returned by the browser.
See the name field of the error object returned by the browser's video/audio play() method.
message - The error description.
See the message field of the error object returned by the browser's video/audio play() method.

Note:

  1. Due to browser restrictions, audio/video play() may fail.
  2. Browsers restrict video/audio autoplay; see Handling the browser autoplay policy.
  3. For more information, see About audio/video playback.
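
Putting the errflag values together, an error handler for playVideo / playAudio might branch as below. describePlayError and the error-object shape passed to it are our assumptions for illustration; only the errflag values themselves come from the table above:

```javascript
// Illustrative only: maps the documented errflag values to an action.
// The surrounding callback shape is assumed, not taken from the SDK.
function describePlayError(err) {
  switch (err.errflag) {
    case "NOT_ALLOWED_ERROR":
      // Autoplay was blocked: resume on a user gesture,
      // e.g. call stream.resumeAudio() inside a click handler.
      return "prompt-user-gesture";
    case "REQ_TIMEOUT":
      return "retry-play";      // playback timed out; try again
    case "NOT_SUPPORTED_ERROR":
    case "INVALID_ARG":
      return "report-bug";      // likely an integration problem
    case "ABORT_ERROR":
      return "ignore";          // playback was deliberately aborted
    default:
      return "log";             // UNDEFINED_ERROR and anything else
  }
}

console.log(describePlayError({ errflag: "NOT_ALLOWED_ERROR", name: "NotAllowedError" }));
// "prompt-user-gesture"
```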

NetworkQualityStats

Network quality statistics.

After the user joins the channel, the SDK triggers a stream-network-quality callback every two seconds and provides network quality statistics through this interface.

structure:

{audio:{...audiostats}, video:{...videostats} }
video.bps: number - Video bandwidth (bit/s).
video.packetsLostRate: number - Video packet loss rate [0-100].
video.rtt: number - Average round-trip delay of the video.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
video.frameRate: number - Video frame rate.
video.resolutionWidth: number - Video resolution width.
video.resolutionHeight: number - Video resolution height.
video.netquality: number - Video network quality, calculated from the packet loss rate, average round-trip delay, and network jitter. Possible values:
0: quality unknown
1: excellent quality
2: subjectively similar to excellent, but the bitrate may be slightly lower
3: users perceive flaws, but communication is not affected
4: communication is possible but not smooth
5: network quality is very poor; communication is barely possible
6: the network connection is down; communication is completely impossible
video.timestamp: number - Video timestamp.
audio.bps: number - Audio bandwidth (bit/s).
audio.packetsLostRate: number - Audio packet loss rate [0-100].
audio.rtt: number - Average round-trip delay of the audio.
Note: for the uplink, the reported RTT is the round trip divided by 2, i.e. (client-to-server delay + server-to-client delay) / 2.
audio.netquality: number - Audio network quality, calculated from the audio packet loss rate, average round-trip delay, and network jitter. Possible values:
0: quality unknown
1: excellent quality
2: subjectively similar to excellent, but the bitrate may be slightly lower
3: users perceive flaws, but communication is not affected
4: communication is possible but not smooth
5: network quality is very poor; communication is barely possible
6: the network connection is down; communication is completely impossible
audio.timestamp: number - Audio timestamp.
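
The netquality codes above can be mapped to short labels for display. The label strings are our own shorthand for the descriptions in the table, not SDK values:

```javascript
// Illustrative only: human-readable labels for the documented
// netquality codes (0-6). Label wording is ours.
var NETQUALITY_LABELS = [
  "unknown",      // 0: quality unknown
  "excellent",    // 1: excellent quality
  "good",         // 2: feels like excellent, bitrate may be slightly lower
  "poor",         // 3: flaws noticeable, communication unaffected
  "bad",          // 4: can communicate, but not smooth
  "very bad",     // 5: communication barely possible
  "disconnected"  // 6: no communication possible
];

function netqualityLabel(code) {
  return NETQUALITY_LABELS[code] || "unknown";
}

console.log(netqualityLabel(1)); // "excellent"
console.log(netqualityLabel(6)); // "disconnected"
```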

MediaDeviceInfo

Media device information

After the user joins the channel, this structure provides details of the media devices, such as microphones, cameras, and speakers.

structure:

    {
        hasdevice: {
            audioinput: true,
            audiooutput: true,
            videoinput: true
        },
        devices: {
            audioinput: [...],
            audiooutput: [...],
            videoinput: [...]
        },
        useDevices: {
            audioinput: "default",
            audiooutput: "default",
            videoinput: "619bae7a5444acbba3268db78018f7649a92c13fcf2e1849d709ec0d0a16fc2a"
        }
    }
hasdevice: object - Whether each kind of media input/output device (e.g. microphone, camera, speaker) is present.
devices: object - Information on all detected media input/output devices.
For example, when multiple microphone devices are detected, the default microphone's info looks like: [{deviceId: "default", groupId: "groupId", kind: "audioinput", label: "default - MacBook Pro Microphone (Built-in)"}]
useDevices: object - The media input/output devices currently in use.
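
The hasdevice flags can be checked before creating a stream, so you only request audio or video when a matching device exists. pickStreamCapabilities is a hypothetical helper, not an SDK API:

```javascript
// Illustrative only: derives which stream capabilities are usable
// from the hasdevice flags in the MediaDeviceInfo structure above.
function pickStreamCapabilities(info) {
  return {
    audio: !!info.hasdevice.audioinput, // microphone present
    video: !!info.hasdevice.videoinput  // camera present
  };
}

// Sample MediaDeviceInfo-shaped object (values are made up).
var info = {
  hasdevice:  { audioinput: true, audiooutput: true, videoinput: false },
  devices:    { audioinput: [], audiooutput: [], videoinput: [] },
  useDevices: { audioinput: "default", audiooutput: "default", videoinput: "" }
};

console.log(pickStreamCapabilities(info)); // { audio: true, video: false }
```

The resulting flags could then feed the audio field of StreamSpec when calling createStream.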