Make a WebSocket Stream
Nvision with Streaming Videos
A WebSocket stream is another way to submit images to the Nvision service. The easiest way to use it is through our provided SDK; see the JavaScript SDK.
To use Nvision with streaming videos, your application needs to implement the following:
Stream video frames to the Nvision service: develop a WebSocket stream using the SDK.
Get results from the WebSocket via a webhook: create a webhook URL to receive the HTTP callback.
Set up callback endpoints: in the Nvision service page.
Stream video frames to Nvision service
Some applications need to run as a headless agent on edge compute. For example, you might develop a headless agent running on a Raspberry Pi that reads video frames from a CCTV camera and submits them to the Nvision service, as shown in the following diagram.

Installation
Initialize your NPM project, then install @nipacloud/nvision and opencv4nodejs using the npm or yarn command.
yarn init
yarn add @nipacloud/nvision opencv4nodejs
Headless Agent
const OpenCV = require("opencv4nodejs");
const nvision = require("@nipacloud/nvision");

const objectDetectionStreamClient = nvision.objectDetection({
    streamingKey: "<YOUR_STREAMING_KEY>"
}).stream();

// Display the returned image if a webhook is not set
objectDetectionStreamClient.on("message", (data) => {
    console.log("raw_data size:", data.raw_data.length);
    const img = OpenCV.imdecode(data.raw_data);
    OpenCV.imshow("visualization", img);
    OpenCV.waitKey(1);
});

objectDetectionStreamClient.on("sent", (bytesSent) => {
    console.log("video frame sent:", bytesSent, "bytes");
});

objectDetectionStreamClient.connect().then(() => {
    const cvCam = new OpenCV.VideoCapture(0);
    // Read a frame from the camera, JPEG-encode it, and send it once per second
    setInterval(() => {
        const cvCamFrameMat = cvCam.read();
        const jpgEncoded = OpenCV.imencode(".jpg", cvCamFrameMat);
        // Make a prediction request
        objectDetectionStreamClient.predict({
            rawData: new Uint8Array(jpgEncoded.buffer),
            confidenceThreshold: 0.1,
            outputCroppedImage: false,
            outputVisualizedImage: true
        });
    }, 1000);
});
Get results from the WebSocket via WebHook
To get results, you need to configure a webhook endpoint for your service. A webhook is a user-defined HTTP callback endpoint.
Installation
yarn init
yarn add koa koa-bodyparser
Webhook Server
const Koa = require("koa");
const bodyparser = require("koa-bodyparser");

const koa = new Koa();
koa.use(bodyparser());

// Log the prediction results and acknowledge with 204 No Content
koa.use((ctx) => {
    console.log(ctx.request.body);
    ctx.status = 204;
});

koa.listen(3000);
To test the webhook integration, we use ngrok to create a secure tunnel on the local machine, along with public URLs for exposing the local web server. By running the command ngrok http 3000, you will get public URLs as follows:

Set up callback endpoints
Because the WebSocket protocol is used, we provide a custom callback endpoint configuration that allows you to have independent backends for receiving and analyzing prediction results.
WebSocket Streaming Callback URL
Now, input your exposed URL into the WebSocket streaming callback URL field in the service's settings, e.g. https://d1706502.ngrok.io/

Lastly, when an image is processed, the Nvision service will make an HTTP request to the specified endpoint with the prediction results structured as follows.
Method: POST
Body: The request body will be provided as
{
    "detected_objects": [
        {
            "confidence": number,
            "parent": string,
            "bounding_box": {
                "left": number,
                "right": number,
                "top": number,
                "bottom": number
            },
            "name": string
        }
    ]
}
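Inside your webhook handler, the payload above can be filtered and summarized before further processing. As a hedged illustration (the helper name and the 0.5 threshold are our own, not part of the Nvision SDK), a handler might do something like this:

```javascript
// Summarize a callback payload: keep detections above a confidence
// threshold and count them by class name. The function name and the
// default threshold are illustrative choices, not part of the SDK.
function summarizeDetections(payload, threshold = 0.5) {
    const counts = {};
    for (const obj of payload.detected_objects) {
        if (obj.confidence < threshold) continue;
        counts[obj.name] = (counts[obj.name] || 0) + 1;
    }
    return counts;
}

// Example payload in the shape documented above
const payload = {
    detected_objects: [
        {
            confidence: 0.92,
            parent: "vehicle",
            name: "car",
            bounding_box: { left: 10, right: 120, top: 20, bottom: 140 }
        },
        {
            confidence: 0.30,
            parent: "vehicle",
            name: "truck",
            bounding_box: { left: 0, right: 50, top: 0, bottom: 60 }
        }
    ]
};

console.log(summarizeDetections(payload)); // { car: 1 }
```

The low-confidence truck detection is dropped, so only the car is counted.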
Example output
Output logs from the streaming agent (left) and the webhook callback (right)
