You can use Nvision's Object Detection service to detect labels in an image. This service localizes and identifies multiple objects in an image, such as people, animals, vehicles, and furniture. See Machine Learning Services.
The service returns a response in JSON format.
If you have not yet created Nvision service account credentials, do so now; see the Set up the Nvision service quickstart for instructions.
Once your service has been created, go to the service overview page and copy your service key from the API Key section.
Detect objects in an image
Image Content
The Nvision API can perform object detection on a local image file when you send the image as a base64-encoded string in your request body.
Base64 is a binary-to-text encoding that represents binary data as an ASCII string, for example: /9j/4AAQSkZJRgABAQEBLAEsAAD...
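As a quick illustration of the encoding (using a few stand-in bytes rather than a full image file), the Python standard library's base64 module produces exactly this kind of string:

```python
import base64

# JPEG files begin with the bytes FF D8 FF E0; base64-encoding them
# yields the familiar "/9j/..." prefix seen in encoded JPEG images.
jpeg_header = bytes([0xFF, 0xD8, 0xFF, 0xE0])
encoded = base64.b64encode(jpeg_header).decode("utf-8")
print(encoded)  # → /9j/4A==
```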
JSON Request Body
The API is accessible via an HTTP POST request to https://nvision.nipa.cloud/api/v1/object-detection.
The available configurations differ by service type. A configuration is structured as a key-value mapping: the configuration name is defined in the parameter field and the corresponding value is defined in the value field, in string format.
For the object detection service, there are three available configurations:
ConfidenceThreshold: defines the minimum confidence score a prediction must have to be included in the results.
Value options: [0, 1]
Default: "0.1"
OutputCroppedImage: whether to return images cropped to each detected bounding box.
Value options: "true" or "false"
Default: "false"
OutputVisualizedImage: whether to return the raw image with the detected bounding boxes drawn on it.
Value options: "true" or "false"
Default: "false"
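Putting the pieces together, a request body might look like the sketch below. The raw_data field appears in the cURL example later on this page; the configurations field name and its list-of-mappings shape are assumptions based on the parameter/value description above.

```python
import json

# Hypothetical request body: "raw_data" holds the base64-encoded image, and
# each configuration is a parameter/value pair with the value as a string.
request_body = {
    "raw_data": "/9j/4AAQSkZJRgABAQEBLAEsAAD...",  # truncated for display
    "configurations": [
        {"parameter": "ConfidenceThreshold", "value": "0.5"},
        {"parameter": "OutputVisualizedImage", "value": "true"},
    ],
}
print(json.dumps(request_body, indent=2))
```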
Make a RESTful Call
You can call this API through REST calls or native SDKs.
Using the cURL command line
Using the client libraries
Nvision SDKs provide an interface for calling Nvision services in your own programming language.
export API_KEY="<<YOUR_API_KEY>>"
# save the json request body as a file named request.json
curl -X POST \
https://nvision.nipa.cloud/api/v1/object-detection \
-H 'Authorization: ApiKey '$API_KEY \
-H 'Content-Type: application/json' \
-d @request.json | json_pp
# or read a local image from filepath
echo -n '{"raw_data": "'"$(base64 image.jpg)"'"}' | \
curl -X POST \
https://nvision.nipa.cloud/api/v1/object-detection \
-H 'Authorization: ApiKey '$API_KEY \
-H "Content-Type: application/json" \
-d @- | json_pp
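If you prefer plain Python over cURL, the same call can be sketched with the standard library alone. The endpoint URL and raw_data field come from the cURL example above; YOUR_API_KEY is a placeholder for your own key, and the image bytes here are a stand-in for reading a real file.

```python
import base64
import json
import urllib.request

# Stand-in bytes; in practice use open("image.jpg", "rb").read()
image_bytes = bytes([0xFF, 0xD8, 0xFF, 0xE0])
body = json.dumps({"raw_data": base64.b64encode(image_bytes).decode("utf-8")})

req = urllib.request.Request(
    "https://nvision.nipa.cloud/api/v1/object-detection",
    data=body.encode("utf-8"),
    headers={
        "Authorization": "ApiKey YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; the response body is JSON.
print(req.full_url, req.get_method())
```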
pip install nvision
yarn init
yarn add @nipacloud/nvision
import json
import base64
from nvision import ObjectDetection

model = ObjectDetection(api_key='YOUR_API_KEY')

# read the image file and base64-encode it
with open('image.jpg', 'rb') as file:
    image = file.read()
image = base64.b64encode(image).decode('utf-8')
# make a RESTful call to the Nvision API
response = model.predict(image)
# get the predictions (in JSON format) from the response
print(json.dumps(response.json(), indent=4, sort_keys=True))
const nvision = require("@nipacloud/nvision");

const objectDetectionService = nvision.objectDetection({
    apiKey: "<YOUR_RESTFUL_KEY>"
});

objectDetectionService.predict({
    rawData: "BASE64_ENCODED_IMAGE"
}).then((result) => {
    // Output the result object to console
    console.log(result);
});