Using WebRTC Streams

If your camera or platform supports WebRTC, you can connect Ouva to your streams.

Although peer-to-peer communication in WebRTC is standardized, the handshake step is not: there is no standard way to implement communication with a signaling server. For this reason, a specific integration with your existing signaling server is needed on our side.

For the integration, we require the following information:

  • Signaling server URL

  • Signaling server <-> peer communication schema, including the authentication scheme, handshake method, and messages.

  • A message to our backend services that contains the bed details, including credentials, signaling server address, and remote peer ID.

You can then define a new sensor in the dashboard with the following URI; our backend services will negotiate with the signaling server and streaming will begin:

webrtc://signalling-server:port/?camera-name=<camera-name>&other-param1=value&...
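As a sketch, the sensor URI can be assembled and checked with the standard library. The host, port, and camera name below are placeholders, not real values from any deployment:

```python
from urllib.parse import parse_qs, urlparse

def build_sensor_uri(host, port, camera_name, **extra_params):
    """Build the webrtc:// sensor URI for the dashboard.

    host, port, and camera_name are placeholders; substitute the
    details of your own signaling server and camera.
    """
    params = {"camera-name": camera_name, **extra_params}
    query = "&".join(f"{key}={value}" for key, value in params.items())
    return f"webrtc://{host}:{port}/?{query}"

uri = build_sensor_uri("signalling-server", 8443, "icu-bed-12")
# Round-trip check: the camera name can be recovered from the query string.
camera = parse_qs(urlparse(uri).query)["camera-name"][0]
```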

Below you can find the details of an example message flow between our WebRTC client and a sample signaling server.

  • The Ouva backend is triggered (from the API or the dashboard) for a patient admission.

  • The related backend service sends an enable-session message with the associated session ID to the service that manages the streams.

  • The WebRTC client and signaling server use Socket.IO for communication. The current implementation requires Socket.IO protocol revision 5; please use a server SDK compatible with this revision.
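For orientation, an event packet in Socket.IO protocol revision 5 is framed as shown in this sketch. It assumes the default namespace with no acknowledgement ID or binary attachments; in practice your Socket.IO client SDK produces these frames for you:

```python
import json

def encode_event(event, *args):
    """Frame a Socket.IO event packet (protocol revision 5).

    "4" is the Engine.IO MESSAGE packet type and "2" is the Socket.IO
    EVENT packet type; the payload is a JSON array holding the event
    name followed by its arguments.
    """
    return "42" + json.dumps([event, *args])
```

For example, `encode_event("get node identity")` yields the frame `42["get node identity"]`.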

Sequence for WebRTC communication:

  1. Call Socket.IO connect.

  2. Receive: sid, pingInterval (handled by the Socket.IO server).

  3. Send get node identity event

  4. Receive node identity. The response data is a single string.

  5. Send login node <node_identity> event.

  6. Receive response. The response data is a single string, either yes or no.

  7. Send link user endpoint <camera_id> <camera_id> <user_id> event. camera_id is the camera-name value given in the sensor URI; user_id is a randomly generated string for each connection.

  8. Send login user <user_id> event with the same user_id.

  9. Receive offer <camera_id> <data> event. data is a JSON object in the following format:

{
    "sdp": "sdp string",
    "type": "webrtc offer type"
}
  10. Create a peer connection, set the offer parameters, and create an answer.

  11. Send peer signaling state <camera_id> have-remote-offer.

  12. Send receiving media <camera_id> <camera_id> audioVideo.

  13. Send receiving media <camera_id> <camera_id> audioVideo.

  14. Send peer signaling state <camera_id> stable.

  15. Send the answer. The event is send <camera_id> answer, and the event data is a JSON object:

{
    "sdp": "answer sdp string",
    "type": "answer webrtc type"
}

The WebRTC client and server should handle ping/pong communication on their own. At this point, the peer-to-peer connection should be established and streaming should begin.
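The client-side sequence above can be sketched as an ordered list of events. The event strings follow the flow documented here; the camera ID, node identity, and SDP values are placeholders, the "answer" type string is an assumption, and the real answer SDP would come from your WebRTC stack after applying the offer:

```python
import json
import secrets

def handshake_sequence(camera_id, node_identity, answer_sdp):
    """Return the client-to-server events for one camera, in order.

    node_identity is the server's reply to "get node identity";
    answer_sdp is produced by the WebRTC stack from the received offer.
    """
    user_id = secrets.token_hex(8)  # random string, regenerated per connection
    answer = json.dumps({"sdp": answer_sdp, "type": "answer"})
    return [
        "get node identity",
        f"login node {node_identity}",
        f"link user endpoint {camera_id} {camera_id} {user_id}",
        f"login user {user_id}",
        f"peer signaling state {camera_id} have-remote-offer",
        f"receiving media {camera_id} {camera_id} audioVideo",
        f"receiving media {camera_id} {camera_id} audioVideo",
        f"peer signaling state {camera_id} stable",
        (f"send {camera_id} answer", answer),  # final event carries the answer JSON
    ]

events = handshake_sequence("icu-bed-12", "node-1", "<answer sdp>")
```

Whether each string is sent as a bare event name or as an event with arguments depends on your signaling server's conventions.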

You can find a sample client implementation in this repository. Please see the readme for more information.
