CloudHub JSSDK
CloudHubRTC Data Object Documentation
ClientConfig Description
Defines the interface of the config parameter in createClient. Type: Object.
name | description |
---|---|
codec:string | The encoding method. Two codecs are supported, “VP8” and “H264”; the default is “VP8”. Note: One channel can only use one codec, which is determined by the first user to enter the channel. |
mode:string | The use scenario of the channel. There are two scenarios: the communication scenario rtc and the live scenario live; the default is rtc. live: live scenes, with the two user roles of host and audience; roles can be set via the Client.setClientRole method. A host can send and receive audio/video streams, while the audience can only receive and cannot send. rtc: communication scenario, for common one-to-one calls or group chats; any user in the channel can send and receive audio/video streams. Note: 1) One channel can only use one scenario, which is determined by the first user to enter the channel. 2) We recommend that one channel keeps a single scenario; do not use a channel for communication today and for live scenes tomorrow. If you need to change the scenario, please use a new channel. 3) The communication scenario rtc does not allow too many users to enter; by default one channel allows 100 users. Please contact CloudHub business staff if you need to adjust this limit. 4) Specify mode when creating the client to set the use scenario of the channel. |
Example:
var config = { codec: "VP8", mode: "rtc" };
var client = CloudHubRTC.createClient(config);
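A variant for a live channel is sketched below (the codec and mode values come from the table above; in live mode the host/audience role is set afterwards via Client.setClientRole, whose arguments are not documented in this section):
var liveConfig = { codec: "H264", mode: "live" };
var liveClient = CloudHubRTC.createClient(liveConfig);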
StreamSpec Description
Defines the interface of the spec parameter in createStream. Type: Object.
name | description |
---|---|
uid: string | User ID. |
type: string | The type of the stream, with the following values: 1) video: audio/video device stream. 2) screen: screen sharing / program sharing. |
attributes?: object | Optional. Custom stream information, which the receiving end can fetch via getAttributes. |
sourceID?: string | Source ID. The expected value depends on type: 1) video: in single-stream mode this value does not need to be passed and is fixed to “default_source_id”; even if passed, it will be set to “default_source_id”. In multi-stream mode, pass the video device ID; if not passed, the default video device ID is used. 2) screen: this value does not need to be passed; it is fixed to “screen”. Note: in multi-stream mode, when the stream type is video (i.e. type -> video), publishing audio and video in the same stream is not allowed; audio and video must be published as separate streams, where the sourceID of the audio stream must be “audio” and the sourceID of the video stream must be the video device ID. |
audio?: boolean | Specifies whether the stream contains audio. The default is true; when type is screen, audio defaults to false. If type is screen and you set audio to true, note the following: 1) This feature is only supported on Windows in Chrome 73 and above. 2) After enabling audio, the user must also check “Share audio” in the sharing pop-up for it to actually take effect. 3) In multi-stream mode, when the stream type is video (i.e. type -> video) and sourceID != “audio”, the SDK forces audio to false. |
microphoneId?: string | The microphone device ID; if not passed, the default audio device ID is used. This value only takes effect when type is video. |
cameraId?: string | The camera device ID; in single-stream mode, if not passed, the default video device ID is used. 1) This value only takes effect when type is video. 2) In multi-stream mode, when the stream type is video (i.e. type -> video), this value does not need to be passed; the SDK sets it to the value of sourceID (i.e. the device ID). |
speakerId?: string | The speaker device ID; if not passed, the default speaker device ID is used. |
mirror?: boolean | Specifies whether the local video stream is mirrored when played locally. true: mirrored. false: (default) not mirrored. We recommend mirroring when using the front camera and not mirroring when using the rear camera. |
audioSource?: MediaStreamTrack | Specifies the audio source of the stream. This configuration is not recommended unless you have special needs. 1) When audioSource is specified, the microphoneId configuration is ignored. 2) The switchDevice method is no longer supported after specifying audioSource. |
videoSource?: MediaStreamTrack | Specifies the video source of the stream. This configuration is not recommended unless you have special needs. 1) If the video source comes from a canvas, you need to redraw the canvas content every 1 second while it is unchanged to keep the video stream publishing normally. 2) When videoSource is specified, the cameraId configuration is ignored. 3) The switchDevice method is no longer supported after specifying videoSource. |
Note:
1、The stream ID is generated from uid, type, and sourceID; its format is: uid:type:sourceID.
2、In multi-stream mode, when the stream type is video (i.e. type -> video), publishing audio and video in the same stream is not allowed; audio and video must be published as separate streams, where the sourceID of the audio stream must be “audio” and the sourceID of the video stream must be the video device ID.
Example:
var stream = CloudHubRTC.createStream({
uid: "uid",
type: "video",
audio:true,
video:true
});
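A screen-sharing stream can be created the same way (a sketch based on the table above; per the audio row, capturing system audio only works on Windows Chrome 73+ and still requires the user to tick “Share audio” in the sharing pop-up):
var screenStream = CloudHubRTC.createStream({
  uid: "uid",
  type: "screen", // sourceID is fixed to "screen" and need not be passed
  audio: false    // set true only on supported platforms to capture system audio
});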
LocalAudioStatsMap
A set of LocalAudioStats objects; each stream ID (sid) corresponds to one LocalAudioStats. After calling getLocalAudioStats, the sid and LocalAudioStats data of local streams are provided through this interface. Type: Object.
name | description |
---|---|
[sid:string]:LocalAudioStats | The LocalAudioStats corresponding to the stream ID. For its fields, see LocalAudioStats in this document. |
Example:
{sid1:LocalAudioStats1, sid2:LocalAudioStats2}
LocalVideoStatsMap
A set of LocalVideoStats objects; each stream ID (sid) corresponds to one LocalVideoStats. After calling getLocalVideoStats, the sid and LocalVideoStats data of local streams are provided through this interface. Type: Object.
name | description |
---|---|
[sid:string]:LocalVideoStats | The LocalVideoStats corresponding to the stream ID. For its fields, see LocalVideoStats in this document. |
Example:
{sid1:LocalVideoStats1, sid2:LocalVideoStats2}
RemoteAudioStatsMap
A set of RemoteAudioStats objects; each stream ID (sid) corresponds to one RemoteAudioStats. After calling getRemoteAudioStats, the sid and RemoteAudioStats data of remote streams are provided through this interface. Type: Object.
name | description |
---|---|
[sid:string]:RemoteAudioStats | The RemoteAudioStats corresponding to the stream ID. For its fields, see RemoteAudioStats in this document. |
Example:
{sid1:RemoteAudioStats1, sid2:RemoteAudioStats2}
RemoteVideoStatsMap
A set of RemoteVideoStats objects; each stream ID (sid) corresponds to one RemoteVideoStats. After calling getRemoteVideoStats, the sid and RemoteVideoStats data of remote streams are provided through this interface. Type: Object.
name | description |
---|---|
[sid:string]:RemoteVideoStats | The RemoteVideoStats corresponding to the stream ID. For its fields, see RemoteVideoStats in this document. |
Example:
{sid1:RemoteVideoStats1, sid2:RemoteVideoStats2}
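All four maps share the same shape, so they can be consumed the same way. A minimal sketch is shown below (the callback-style signature of getRemoteVideoStats is an assumption; check the Client documentation for the actual way the map is returned):
client.getRemoteVideoStats(function (statsMap) {
  // statsMap is a RemoteVideoStatsMap: { [sid]: RemoteVideoStats }
  Object.keys(statsMap).forEach(function (sid) {
    console.log("stream", sid, statsMap[sid]);
  });
});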
MediaStreamTrack
Media stream track
This interface represents a single media track in a stream, such as an audio track or a video track.
For more information, please refer to the browser's MediaStreamTrack documentation.
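The audioSource / videoSource fields of StreamSpec accept a MediaStreamTrack. A hedged sketch of obtaining one from a canvas with the standard captureStream browser API (the element ID is hypothetical; the periodic redraw follows the videoSource note above):
var canvas = document.getElementById("my-canvas"); // hypothetical element ID
var canvasStream = canvas.captureStream(15);       // standard browser API, 15 fps
var stream = CloudHubRTC.createStream({
  uid: "uid",
  type: "video",
  videoSource: canvasStream.getVideoTracks()[0]
});
// Per the videoSource note: if the canvas content is static, redraw it
// about once per second to keep the stream publishing normally.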
LocalStreamStats
Local audio/video stream connection statistics
After the local stream is published successfully, connection statistics are provided through this interface.
structure:
{audio:{...audiostats}, video:{...videostats} }
name | description |
---|---|
video.rtt:number | Average video round-trip time. For the upstream, the reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
video.bytes:number | Number of video bytes sent. |
video.packets:number | Number of video packets sent. |
video.packetsLost:number | Number of sent video packets lost. |
video.frameRate:number | Frame rate of the sent video. |
video.resolutionWidth:number | Width of the sent video. |
video.resolutionHeight:number | Height of the sent video. |
video.timestamp:number | Video timestamp. |
audio.rtt:number | Average audio round-trip time. For the upstream, the reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
audio.bytes:number | Number of audio bytes sent. |
audio.packets:number | Number of audio packets sent. |
audio.packetsLost:number | Number of sent audio packets lost. |
audio.timestamp:number | Audio timestamp. |
RemoteStreamStats
Remote audio/video stream connection statistics
After a remote stream is subscribed successfully, connection statistics are provided through this interface.
structure:
{audio:{...audiostats}, video:{...videostats} }
name | description |
---|---|
video.rtt:number | Average video round-trip time. The reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
video.bytes:number | Number of video bytes received. |
video.packets:number | Number of video packets received. |
video.packetsLost:number | Number of received video packets lost. |
video.frameRate:number | Frame rate of the received video. |
video.resolutionWidth:number | Width of the received video. |
video.resolutionHeight:number | Height of the received video. |
video.timestamp:number | Video timestamp. |
audio.rtt:number | Average audio round-trip time. The reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
audio.bytes:number | Number of audio bytes received. |
audio.packets:number | Number of audio packets received. |
audio.packetsLost:number | Number of received audio packets lost. |
audio.timestamp:number | Audio timestamp. |
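LocalStreamStats and RemoteStreamStats share the {audio, video} structure above, so reading them looks the same. A minimal sketch (how the stats object is obtained is not shown here; see the Client documentation):
function logStreamStats(stats) {
  var v = stats.video, a = stats.audio;
  console.log("video:", v.resolutionWidth + "x" + v.resolutionHeight,
    v.frameRate + " fps", "rtt", v.rtt);
  console.log("audio: lost", a.packetsLost, "of", a.packets, "packets");
}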
LocalAudioStats
Local audio connection statistics
name | description |
---|---|
rtt:number | Average audio round-trip time. For the upstream, the reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
bytes:number | Number of audio bytes sent. |
packets:number | Number of audio packets sent. |
packetsLost:number | Number of sent audio packets lost. |
timestamp:number | Audio timestamp. |
LocalVideoStats
Connection statistics for local video
name | description |
---|---|
rtt:number | Average video round-trip time. For the upstream, the reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
bytes:number | Number of video bytes sent. |
packets:number | Number of video packets sent. |
packetsLost:number | Number of sent video packets lost. |
frameRate:number | Frame rate of the sent video. |
resolutionWidth:number | Width of the sent video. |
resolutionHeight:number | Height of the sent video. |
timestamp:number | Video timestamp. |
StreamPlayError
Stream playback error information
When a call to the stream.playVideo / stream.playAudio method fails to play the video/audio stream, this object helps you understand the possible reasons.
Generally, when stream.playAudio fails with an errflag other than “ABORT_ERROR”, you can guide the user to trigger playback with a gesture (by calling the stream.resumeAudio method).
name | description |
---|---|
errflag: string | The error flag, with the following values: “ABORT_ERROR”: the operation was aborted. “NOT_SUPPORTED_ERROR”: the media format is not supported. “NOT_ALLOWED_ERROR”: playback is not allowed, e.g. the browser does not allow autoplay and requires a click before playback; please refer to handling the browser autoplay policy. “REQ_TIMEOUT”: playback timed out. By default there is no timeout; if you need one, set CloudHubRTC.DEVICE_ADAPTER_CONFIG.MEDIA_ELE_PLAY_TIMEOUT_TS (in milliseconds); setting it to -1 means no timeout. For example, CloudHubRTC.DEVICE_ADAPTER_CONFIG.MEDIA_ELE_PLAY_TIMEOUT_TS = 30000 sets the timeout to 30 seconds. “INVALID_ARG”: invalid parameters. “UNDEFINED_ERROR”: undefined error. |
name: string | The error name returned by the browser. See the name field in the error object returned by the browser video/audio play method. |
message: string | The error description. See the message field in the error object returned by the browser video/audio play method. |
Note:
- Due to browser restrictions, the audio/video play call may fail.
- Browsers limit video/audio autoplay; please refer to handling the browser autoplay policy.
- For more information, please refer to the notes about audio/video play.
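A hedged sketch of reacting to a playback error (the element ID and the error-callback form of stream.playVideo are assumptions; the errflag values and stream.resumeAudio come from this section):
stream.playVideo("player", function (err) {  // "player" is a hypothetical element ID
  if (!err) return;                          // playback started normally
  if (err.errflag === "NOT_ALLOWED_ERROR") {
    // Autoplay was blocked: resume after a user gesture
    document.body.addEventListener("click", function () {
      stream.resumeAudio();
    }, { once: true });
  } else {
    console.warn("play failed:", err.errflag, err.name, err.message);
  }
});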
NetworkQualityStats
Network quality statistics.
After a user joins the channel, the SDK triggers a stream-network-quality callback every two seconds and provides network quality statistics through this interface.
structure:
{audio:{...audiostats}, video:{...videostats} }
name | description |
---|---|
video.bps: number | Video bandwidth (bit/s). |
video.packetsLostRate: number | Video packet loss rate, in the range [0, 100]. |
video.rtt: number | Average video round-trip time. For the upstream, the reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
video.frameRate: number | Video frame rate. |
video.resolutionWidth: number | Video resolution width. |
video.resolutionHeight: number | Video resolution height. |
video.netquality: number | Video network quality, calculated from the packet loss rate, the average round-trip time, and network jitter. Possible values: 0: quality unknown. 1: excellent quality. 2: subjectively similar to excellent, but the bitrate may be slightly lower. 3: users perceive flaws, but communication is not affected. 4: communication is possible but not smooth. 5: network quality is very poor; communication is barely possible. 6: the network is disconnected; communication is completely impossible. |
video.timestamp:number | Video timestamp. |
audio.bps: number | Audio bandwidth (bit/s). |
audio.packetsLostRate: number | Audio packet loss rate, in the range [0, 100]. |
audio.rtt: number | Average audio round-trip time. Note: for the upstream, the reported RTT already halves the round trip, i.e. it is computed as (C2S delay + S2C delay) / 2. |
audio.netquality: number | Audio network quality, calculated from the audio packet loss rate, the average round-trip time, and network jitter. Possible values: 0: quality unknown. 1: excellent quality. 2: subjectively similar to excellent, but the bitrate may be slightly lower. 3: users perceive flaws, but communication is not affected. 4: communication is possible but not smooth. 5: network quality is very poor; communication is barely possible. 6: the network is disconnected; communication is completely impossible. |
audio.timestamp:number | Audio timestamp. |
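A minimal sketch of listening for this callback (the client.on(event, handler) registration style and the payload shape are assumptions based on the event name and structure above):
client.on("stream-network-quality", function (stats) {
  // stats is assumed to be a NetworkQualityStats object: { audio: {...}, video: {...} }
  if (stats.video && stats.video.netquality >= 4) {
    console.warn("poor video network quality:", stats.video.netquality);
  }
});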
MediaDeviceInfo
Media device information
After a user joins the channel, this interface provides the details of media devices such as microphones, cameras, and speakers.
structure:
{
  hasdevice: {
    audioinput: true,
    audiooutput: true,
    videoinput: true
  },
  devices: {
    audioinput: [...],
    audiooutput: [...],
    videoinput: [...]
  },
  useDevices: {
    audioinput: "default",
    audiooutput: "default",
    videoinput: "619bae7a5444acbba3268db78018f7649a92c13fcf2e1849d709ec0d0a16fc2a"
  }
}
name | description |
---|---|
hasdevice: obj | Whether each kind of media input/output device (e.g. microphone, camera, speaker) is present. |
devices: obj | Information about all detected media input/output devices. For example, when multiple microphone devices are detected, the default microphone entry looks like: [{deviceId: “default”, groupId: “groupId”, kind: “audioinput”, label: “default - MacBook Pro Microphone (Built-in)”}] |
useDevices: obj | Information about the media input/output devices currently in use. |
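A minimal sketch of consuming a MediaDeviceInfo object (how the object is obtained, e.g. through the DeviceManager, is not covered here; the field names come from the structure above):
function checkDevices(info) {
  if (!info.hasdevice.audioinput) {
    console.warn("no microphone detected");
  }
  info.devices.videoinput.forEach(function (d) {
    console.log("camera:", d.deviceId, d.label);
  });
  console.log("camera in use:", info.useDevices.videoinput);
}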