JavaScript SDK

The JavaScript SDK wraps the Nvision image processing service as Promise-based function calls and a WebSocket client, with TypeScript definitions provided.

Quickstarts

Before you begin using this SDK, these quickstarts will help you get started with the Nvision API.

Installation

Install @nipacloud/nvision into your project via npm or Yarn.

npm install @nipacloud/nvision
# or with Yarn
yarn add @nipacloud/nvision

Importing the SDK

Import the SDK module into your source file:

const nvision = require("@nipacloud/nvision")
// or use the ES6 import
import nvision from "@nipacloud/nvision"

Additional setup for front-end app

If you use the SDK in a front-end application, you need to import the browser variant provided at @nipacloud/nvision/dist/browser/nvision.js:

const nvision = require("@nipacloud/nvision/dist/browser/nvision")
// or use the ES6 import
import nvision from "@nipacloud/nvision/dist/browser/nvision"

If you use the SDK in a webpack-based project, you can add a module resolution alias in your webpack configuration.

module.exports = {
  ...,
  "resolve": {
    "alias": {
      "@nipacloud/nvision": "@nipacloud/nvision/dist/browser/nvision.js"
    }
  }
}

If the alias is set up correctly, you can import the module using the usual module name @nipacloud/nvision without the full path.

Using the object detection service

To use the object detection service, create a service object with the objectDetection() factory function:

const nvision = require("@nipacloud/nvision");

const objectDetectionService = nvision.objectDetection({
    apiKey: "<YOUR_RESTFUL_KEY>",
    streamingKey: "<YOUR_STREAMING_KEY>"
});

You do not have to provide both apiKey and streamingKey. If you only make REST API calls, you can provide just apiKey; likewise, provide only streamingKey if you only use WebSocket streaming.

Making an API call

You can make an API call using the predict() function of the service object:

Function signature
predict ({
    rawData: string,
    confidenceThreshold?: number,
    outputCroppedImage?: boolean,
    outputVisualizedImage?: boolean
}): Promise<{
    service_id: string,
    detected_object: {
        bounding_box: {
            bottom: number,
            left: number,
            right: number,
            top: number
        },
        parent: string,
        name: string,
        confidence: number,
        cropped_image: string
    }
}>


Example

import nvision from "@nipacloud/nvision";

const objectDetectionService = nvision.objectDetection({
    apiKey: "<YOUR_RESTFUL_KEY>"
});

objectDetectionService.predict({
    rawData: "BASE64_ENCODED_IMAGE"
}).then((result) => {
    // Output the result object to the console
    console.log(result);
});
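The resolved result can be post-processed on the client, for example to drop low-confidence detections. A hedged sketch (the detection shape follows the signature above; whether detected_object arrives as a single object or an array is an assumption, so it is normalized to an array first):

```javascript
// Keep only detections at or above a confidence threshold.
// Normalizes detected_object to an array before filtering.
function filterDetections(result, minConfidence) {
    return [].concat(result.detected_object)
        .filter((detection) => detection.confidence >= minConfidence);
}

// Sample response shaped like the documented return object.
const sampleResult = {
    service_id: "svc-demo",
    detected_object: [
        { name: "person", parent: "human", confidence: 0.92,
          bounding_box: { top: 10, left: 20, right: 120, bottom: 210 } },
        { name: "dog", parent: "animal", confidence: 0.35,
          bounding_box: { top: 5, left: 8, right: 60, bottom: 90 } }
    ]
};

const confident = filterDetections(sampleResult, 0.5);
console.log(confident.map((d) => d.name)); // [ 'person' ]
```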

Streaming video frames through WebSocket

You can open a WebSocket connection using the stream() function, which returns the streaming client object:

stream (): {
    on: (event: string, listener: (eventArgs: any) => unknown) => void;
    once: (event: string, listener: (eventArgs: any) => unknown) => void;
    connect: () => Promise<void>;
    predict: ({
        sourceId?: string, 
        frameId?: string,
        rawData: string | ArrayBuffer, 
        confidenceThreshold?: number,
        outputCroppedImage?: boolean,
        outputVisualizedImage?: boolean
    }) => Promise<void>;
};



Example

This example uses opencv4nodejs to capture webcam frames, then submits them through the WebSocket channel.

const OpenCV = require("opencv4nodejs");
const nvision = require("@nipacloud/nvision");
const objectdetectionStreamClient = nvision.objectDetection({
    streamingKey: "<YOUR_STREAMING_KEY>"
}).stream();

// display visualized image 
objectdetectionStreamClient.on("message", (data) => {
    console.log("raw_data size:", data.raw_data.length);
    const img = OpenCV.imdecode(data.raw_data);
    OpenCV.imshow("visualization", img);
    OpenCV.waitKey(1);
});

objectdetectionStreamClient.on("sent", (bytesSent) => { 
    console.log("video frame sent: ", bytesSent, "bytes")
});

objectdetectionStreamClient.connect().then(() => {
    const cvCam = new OpenCV.VideoCapture(0);
    setInterval(() => {
        const cvCamFrameMat = cvCam.read();
        const jpgEncoded = OpenCV.imencode(".jpg", cvCamFrameMat);

        // make prediction request
        objectdetectionStreamClient.predict({
            rawData: new Uint8Array(jpgEncoded.buffer),
            confidenceThreshold: 0.1,
            outputCroppedImage: false,
            outputVisualizedImage: false
        });
    }, 1000);
});
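The bounding_box in each detection is given as edge coordinates (top/left/right/bottom). If you draw boxes yourself instead of requesting outputVisualizedImage, you can derive an x/y/width/height rectangle, which most drawing APIs expect; a sketch independent of any drawing library:

```javascript
// Convert the documented bounding_box edges into an
// x/y/width/height rectangle for drawing.
function toRect(boundingBox) {
    return {
        x: boundingBox.left,
        y: boundingBox.top,
        width: boundingBox.right - boundingBox.left,
        height: boundingBox.bottom - boundingBox.top
    };
}

const rect = toRect({ top: 10, left: 20, right: 120, bottom: 210 });
console.log(rect); // { x: 20, y: 10, width: 100, height: 200 }
```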
