Introduction

The IBM Watson™ Visual Recognition service uses deep learning algorithms to identify scenes, objects, and faces in images you upload to the service. You can create and train a custom classifier to identify subjects that suit your needs.

The code examples on this tab use the client library that is provided for Java.

Maven

<dependency>
  <groupId>com.ibm.watson.developer_cloud</groupId>
  <artifactId>java-sdk</artifactId>
  <version>6.11.0</version>
</dependency>

Gradle

compile 'com.ibm.watson.developer_cloud:java-sdk:6.11.0'

GitHub

The code examples on this tab use the client library that is provided for Node.js.

Installation

npm install --save watson-developer-cloud

GitHub

The code examples on this tab use the client library that is provided for Python.

Installation

pip install --upgrade "watson-developer-cloud>=2.5.1"

GitHub

The code examples on this tab use the client library that is provided for Ruby.

Installation

gem install ibm_watson

GitHub

The code examples on this tab use the client library that is provided for Go.

go get -u github.com/watson-developer-cloud/go-sdk/...

GitHub

Authentication

You authenticate to the API by using IAM. You can pass either a bearer token in an Authorization header or an apikey. Tokens support authenticated requests without embedding service credentials in every call. API keys use basic authentication. Learn more about IAM.

If you pass in the apikey, the SDK manages the lifecycle of the tokens. If you pass a token, you maintain the token lifecycle. Learn more about IAM authentication with the SDK.

IAM authentication. Replace {apikey} with your service credentials.

curl -u "apikey:{apikey}" "https://gateway.watsonplatform.net/visual-recognition/api/v3/{method}"
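
API keys use HTTP basic authentication, as the curl example shows with `-u "apikey:{apikey}"`. As an illustration only (the helper name is not part of any SDK), this Python sketch shows how that credential becomes an Authorization header:

```python
import base64

def basic_auth_header(apikey):
    # curl -u "apikey:{apikey}" sends this header: the literal user name
    # "apikey" and the key, joined by a colon and Base64-encoded.
    token = base64.b64encode(("apikey:" + apikey).encode("utf-8")).decode("ascii")
    return "Authorization: Basic " + token

print(basic_auth_header("my-secret-key"))
```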

SDK managing the IAM token. Replace {apikey} and {version}.

IamOptions options = new IamOptions.Builder()
    .apiKey("{apikey}")
    .build();

VisualRecognition visualRecognition = new VisualRecognition("{version}", options);

SDK managing the IAM token. Replace {apikey} and {version}.

var VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

var visualRecognition = new VisualRecognitionV3({
  version: '{version}',
  iam_apikey: '{apikey}'
});

SDK managing the IAM token. Replace {apikey} and {version}.

from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='{version}',
    iam_apikey='{apikey}'
)

SDK managing the IAM token. Replace {apikey} and {version}.

require "ibm_watson"

visual_recognition = IBMWatson::VisualRecognitionV3.new(
  version: "{version}",
  iam_apikey: "{apikey}"
)

SDK managing the IAM token. Replace {apikey} and {version}.

import "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"

visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(&visualrecognitionv3.VisualRecognitionV3Options{
  Version: "{version}",
  IAMApiKey: "{apikey}",
})

Service endpoint

The Visual Recognition v3 API is hosted only in the Dallas location and has a single service endpoint. The URL is different when you use IBM Cloud Dedicated.

API endpoint

https://gateway.watsonplatform.net/visual-recognition/api
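
To illustrate how the endpoint, a method path, and the required version parameter combine into a full request URL, here is a hypothetical Python helper (not part of any SDK):

```python
from urllib.parse import urlencode

ENDPOINT = "https://gateway.watsonplatform.net/visual-recognition/api"

def method_url(path, version, **params):
    # Append the method path to the service endpoint and encode the
    # query string, always including the required version parameter.
    query = urlencode(dict(params, version=version))
    return ENDPOINT + path + "?" + query

print(method_url("/v3/classify", "2018-03-19"))
# https://gateway.watsonplatform.net/visual-recognition/api/v3/classify?version=2018-03-19
```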

Versioning

API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. When we change the API in a backwards-incompatible way, we release a new version date.

Send the version parameter with every API request. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.

Specify the version to use on API requests with the version parameter when you create the service instance. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.

This documentation describes the current version of Visual Recognition, 2018-03-19. In some cases, differences in earlier versions are noted in the descriptions of parameters and response models.
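
The version-selection rule above can be sketched in Python. The helper and the list of release dates are illustrative only, not service code:

```python
from bisect import bisect_right

# Example release dates only; ISO dates sort lexicographically,
# so bisect finds the most recent version on or before a date.
RELEASED_VERSIONS = ["2016-05-20", "2018-03-19"]

def resolve_version(requested):
    i = bisect_right(RELEASED_VERSIONS, requested)
    if i == 0:
        raise ValueError("no API version released on or before " + requested)
    return RELEASED_VERSIONS[i - 1]

print(resolve_version("2018-03-19"))  # exact match
print(resolve_version("2019-01-01"))  # later date: most recent version is used
```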

Error handling

The Visual Recognition service uses standard HTTP response codes to indicate whether a method completed successfully. HTTP response codes in the 2xx range indicate success. A response in the 4xx range is some sort of failure, and a response in the 5xx range usually indicates an internal system error that cannot be resolved by the user. Response codes are listed with the method.
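
The response-code ranges described above can be summarized in a small Python sketch (an illustration, not SDK code):

```python
def response_category(code):
    # Standard HTTP ranges as described in the documentation.
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "client error"   # e.g. 404 Not Found, 413 Request Too Large
    if 500 <= code < 600:
        return "server error"   # internal system error
    return "other"

print(response_category(200))  # success
print(response_category(413))  # client error
```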

ErrorResponse

Name Description
code
integer
The HTTP response code.
error
string
General description of an error.

ErrorAuthentication

Name Description
status
string
The status of error.
statusInfo
string
Information about the error.

ErrorHTML

Name Description
Error
string
HTML description of the error.

ErrorInfo

Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.

Name Description
code
integer
HTTP response code.
description
string
Human-readable error description. For example, File size limit exceeded.
error_id
string
Codified error string. For example, limit_exceeded.

The Java SDK generates an exception for any unsuccessful method invocation. All methods that accept an argument can also throw an IllegalArgumentException.

Exception Description
IllegalArgumentException An illegal argument was passed to the method.

When the Java SDK receives an error response from the Visual Recognition service, it generates an exception from the com.ibm.watson.developer_cloud.service.exception package. All service exceptions contain the following fields:

Field Description
statusCode The HTTP response code returned.
message A message that describes the error.

When the Node SDK receives an error response from the Visual Recognition service, it creates an Error object with information that describes the error that occurred. This error object is passed as the first parameter to the callback function for the method. The contents of the error object are as shown in the following table.

Error

Field Description
code The HTTP response code returned.
message A message that describes the error.

The Python SDK generates an exception for any unsuccessful method invocation. When the Python SDK receives an error response from the Visual Recognition service, it generates a WatsonApiException that contains the following fields.

Field Description
code The HTTP response code returned.
message A message that describes the error.
info A dictionary of additional information about the error.

When the Ruby SDK receives an error response from the Visual Recognition service, it generates a WatsonApiException that contains the following fields.

Field Description
code The HTTP response code returned.
error A message that describes the error.
info A dictionary of additional information about the error.

The Go SDK generates an error for any unsuccessful service instantiation and method invocation. You can check for the error immediately. The contents of the error object are as shown in the following table.

Error

Field Description
code The HTTP response code returned.
message A message that describes the error.

Example error handling

try {
    // Invoke a Visual Recognition method
} catch (NotFoundException e) {
    // Handle Not Found (404) exception
} catch (RequestTooLargeException e) {
    // Handle Request Too Large (413) exception
} catch (ServiceResponseException e) {
    // Base class for all exceptions caused by error responses from the service
    System.out.println("Service returned status code " + e.getStatusCode() + ": " + e.getMessage());
}

Example error handling

visualRecognition.method(params,
  function (err, response) {
    // The error, if any, is the first argument of the callback
    if (!err) {
      return;
    }
    if (err.code == 404) {
      // Handle Not Found (404) error
    } else if (err.code == 413) {
      // Handle Request Too Large (413) error
    } else {
      console.log('Unexpected error: ', err.code);
      console.log('error:', err);
    }
  });

Example error handling

from watson_developer_cloud import WatsonApiException
try:
    # Invoke a Visual Recognition method
    pass
except WatsonApiException as ex:
    print("Method failed with status code " + str(ex.code) + ": " + ex.message)

Example error handling

require "ibm_watson"
begin
  # Invoke a Visual Recognition method
rescue WatsonApiException => ex
  print "Method failed with status code #{ex.code}: #{ex.error}"
end

Example error handling

import "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"

// Instantiate a service
visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(&visualrecognitionv3.VisualRecognitionV3Options{})

// Check for error
if visualRecognitionErr != nil {
  panic(visualRecognitionErr)
}

// Call a method
response, responseErr := visualRecognition.MethodName(&methodOptions)

// Check for error
if responseErr != nil {
  panic(responseErr)
}

Data handling

Additional headers

Some Watson services accept special parameters in headers that are passed with the request. You can pass request header parameters in all requests or in a single request to the service.

To pass header parameters with every request, use the setDefaultHeaders method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, use the addHeader method as a modifier on the request before you execute the request.

To pass header parameters with every request, specify the headers parameter when you create the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, use the headers method as a modifier on the request before you execute the request.

To pass header parameters with every request, use the set_default_headers method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, include headers as a dict in the request.

To pass header parameters with every request, use the add_default_headers method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, specify the headers method as a chainable method in the request.

To pass header parameters with every request, use the SetDefaultHeaders method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, specify the Headers as a map in the request.

Example header parameter in a request

ReturnType returnValue = visualRecognition.methodName(parameters)
        .addHeader("Custom-Header", "{header_value}")
        .execute();

Example header parameter in a request

visualRecognition.methodName({
  parameters,
  headers: {
    'Custom-Header': '{header_value}'
  }
},
  function (err, response) {
    if (err) {
      console.log('error:', err);
    } else {
      console.log(response);
    }
  }
);

Example header parameter in a request

response = visual_recognition.method_name(
    parameters,
    headers={
        'Custom-Header': '{header_value}'
    })

Example header parameter in a request

response = visual_recognition.headers(
  "Custom-Header" => "{header_value}"
).method_name(parameters)

Example header parameter in a request

response, _ := visualRecognition.MethodName(
  &methodOptions{
    Headers: map[string]string{
      "Accept": "application/json",
    },
  },
)

Response details

The Visual Recognition service might return information to the application in response headers.

To access information in the response headers, use one of the request methods that returns details with the response: executeWithDetails(), enqueueWithDetails(), or rxWithDetails(). These methods return a Response<T> object, where T is the expected response model. Use the getResult() method to access the response object for the method, and use the getHeaders() method to access information in response headers.

Example request to access response headers

Response<ReturnType> response = visualRecognition.methodName(parameters)
        .executeWithDetails();
// Access response from methodName
ReturnType returnValue = response.getResult();
// Access information in response headers
Headers responseHeaders = response.getHeaders();

To access information in the response headers, specify the headers attribute on the third parameter (response) that is passed to the callback function.

Example request to access response headers

visualRecognition.methodName({
  parameters
},
  function (err, result, response) {
    if (err) {
      console.log('error:', err);
    } else {
      console.log(response.headers);
    }
  }
);

The return value from all service methods is a DetailedResponse object. To access information in the result object or response headers, use the following methods.

DetailedResponse

Method Description
get_result() Returns the response for the service-specific method.
get_headers() Returns the response header information.
get_status_code() Returns the HTTP status code.

Example request to access response headers

import json

visual_recognition.set_detailed_response(True)
response = visual_recognition.method_name(parameters)
# Access response from method_name
print(json.dumps(response.get_result(), indent=2))
# Access information in response headers
print(response.get_headers())
# Access HTTP response status
print(response.get_status_code())

The return value from all service methods is a DetailedResponse object. To access information in the response object, use the following properties.

DetailedResponse

Property Description
result Returns the response for the service-specific method.
headers Returns the response header information.
status Returns the HTTP status code.

Example request to access response headers

response = visual_recognition.method_name(parameters)
# Access response from method_name
print response.result
# Access information in response headers
print response.headers
# Access HTTP response status
print response.status

The return value from all service methods is a DetailedResponse object. To access information in the result object or response headers, use the following methods.

DetailedResponse

Method Description
GetResult() Returns the response for the service-specific method.
GetHeaders() Returns the response header information.
GetStatusCode() Returns the HTTP status code.

Example request to access response headers

import "github.com/watson-developer-cloud/go-sdk/core"
response, _ := visualRecognition.MethodName(&methodOptions{})

// Access result
core.PrettyPrint(response.GetResult(), "Result ")

// Access response headers
core.PrettyPrint(response.GetHeaders(), "Headers ")

// Access status code
core.PrettyPrint(response.GetStatusCode(), "Status Code ")

Data labels

You can delete data that is associated with a specific customer if you associate the customer and the data when you send the information to a service. First you label the data with a customer ID, and then you can delete the data by that ID.

  • Use the X-Watson-Metadata header to associate a customer ID with the data. By adding a customer ID to a request, you indicate that it contains data that belongs to that customer.

    Specify a random or generic string for the customer ID. Do not include personal data, such as an email address. Pass the string customer_id={id} as the argument of the header.

  • Use the Delete labeled data method to remove data that is associated with a customer ID.

Labeling data is used only by methods that accept customer data. For more information about Visual Recognition and labeling data, see Information security.

For more information about how to pass headers, see Additional headers.
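
A minimal Python sketch of building the X-Watson-Metadata header described above (the helper name is hypothetical):

```python
def customer_metadata_header(customer_id):
    # Pass the string customer_id={id} as the argument of the header.
    # Use a random or generic string for the ID; never personal data
    # such as an email address.
    return {"X-Watson-Metadata": "customer_id={}".format(customer_id)}

print(customer_metadata_header("my_customer_ID"))
```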

Data collection

By default, all Watson services log requests and their results. Logging is done only to improve the services for future users. The logged data is not shared or made public.

To prevent IBM usage of your data for an API request, set the X-Watson-Learning-Opt-Out header parameter to true.

You must set the header on each request that you do not want IBM to access for general service improvements.

You can set the header by using the setDefaultHeaders method of the service object.

You can set the header by using the headers parameter when you create the service object.

You can set the header by using the set_default_headers method of the service object.

You can set the header by using the add_default_headers method of the service object.

You can set the header by using the SetDefaultHeaders method of the service object.

Example request

curl -u "apikey:{apikey}" -H "X-Watson-Learning-Opt-Out: true" "{url}/{method}"

Example request

Map<String, String> headers = new HashMap<String, String>();
headers.put("X-Watson-Learning-Opt-Out", "true");

visualRecognition.setDefaultHeaders(headers);

Example request

var VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

var visualRecognition = new VisualRecognitionV3({
  version: '{version}',
  iam_apikey: '{apikey}',
  headers: {
    'X-Watson-Learning-Opt-Out': 'true'
  }
});

Example request

visual_recognition.set_default_headers({'x-watson-learning-opt-out': "true"})

Example request

visual_recognition.add_default_headers(headers: {"x-watson-learning-opt-out" => "true"})

Example request

import "net/http"

headers := http.Header{}
headers.Add("x-watson-learning-opt-out", "true")
visualRecognition.Service.SetDefaultHeaders(headers)

Synchronous and asynchronous requests

The Java SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the ServiceCall interface.

  • To call a method synchronously, use the execute method of the ServiceCall interface. You can call the execute method directly from an instance of the service.
  • To call a method asynchronously, use the enqueue method of the ServiceCall interface to receive a callback when the response arrives. The ServiceCallback interface of the method's argument provides onResponse and onFailure methods that you override to handle the callback.

The Ruby SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the Concurrent::Async module. When you use the synchronous/asynchronous methods, an IVar object is returned. You access the DetailedResponse object by calling ivar_object.value.

For more information about the IVar object, see the IVar class docs.

  • To call a method synchronously, either call the method directly, or use the .await chainable method of the Concurrent::Async module.

    Calling a method directly (without .await) returns a DetailedResponse object.

  • To call a method asynchronously, use the .async chainable method of the Concurrent::Async module.

You can call the .await and .async methods directly from an instance of the service.

Example synchronous request

ReturnType returnValue = visualRecognition.method(parameters).execute();

Example asynchronous request

visualRecognition.method(parameters).enqueue(new ServiceCallback<ReturnType>() {
        @Override public void onResponse(ReturnType response) {
            . . .
        }
        @Override public void onFailure(Exception e) {
            . . .
        }
    });

Example synchronous request

response = visual_recognition.method_name(parameters)

or

response = visual_recognition.await.method_name(parameters)

Example asynchronous request

response = visual_recognition.async.method_name(parameters)

Methods

Classify an image

Classify an image with the built-in or custom classifiers.

GET /v3/classify
Request

Custom Headers

  • Accept-Language: The language of the output class names. The full set of languages is supported for the built-in classifier IDs: default, food, and explicit. The class names of custom classifiers are not translated.

    The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.

    Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

    Default: en

Query Parameters

  • version: The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

  • url: The URL of an image (.jpg, .png). The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

  • owners: Which categories of classifiers to apply. Use IBM to classify against the default general classifier, and use me to classify against your custom classifiers. To analyze the image against both classifier categories, set the value to both IBM and me. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

    The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.

    Allowable values: [IBM,me]

    Constraints: collection format: csv

  • classifier_ids: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

    The following built-in classifier IDs require no training:

    • default: Returns classes from thousands of general tags.
    • food: Enhances specificity and accuracy for images of food items.
    • explicit: Evaluates whether the image might be pornographic.
  • threshold: The minimum score a class must have to be returned.

    Constraints: 0 ≤ value ≤ 1

    Default: 0.5
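
The precedence and threshold rules above can be sketched in Python. This is an illustration of the documented behavior, not service code:

```python
def select_classifiers(classifier_ids=None, owners=None):
    # classifier_ids overrides owners; the built-in default classifier
    # is used when both parameters are empty.
    if classifier_ids:
        return list(classifier_ids)
    if owners:
        return list(owners)   # "IBM", "me", or both
    return ["default"]

def filter_classes(classes, threshold=0.5):
    # Only classes scoring at or above the threshold are returned.
    return [c for c in classes if c["score"] >= threshold]

print(select_classifiers(owners=["me"]))                           # ['me']
print(select_classifiers(classifier_ids=["food"], owners=["me"]))  # ['food']
print(filter_classes([{"class": "apple", "score": 0.64},
                      {"class": "fruit", "score": 0.3}]))
```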

Example requests
          Response

          Results for all images.

          Status Code

          • 200: success

          • 400: Invalid request due to user input, for example:

            • Bad header parameter
            • Invalid output language
            • No input images
            • The size of the image file in the request is larger than the maximum supported size
          • 401: No API key, or the key is not valid.

          Example responses

          Classify images

          Classify images with built-in or custom classifiers.

          POST /v3/classify
          (visualRecognition *VisualRecognitionV3) Classify(classifyOptions *ClassifyOptions) (*core.DetailedResponse, error)
          ServiceCall<ClassifiedImages> classify(ClassifyOptions classifyOptions)
          classify(params, callback())
          classify(self, images_file=None, accept_language=None, url=None, threshold=None, owners=None, classifier_ids=None, images_file_content_type=None, images_filename=None, **kwargs)
          classify(images_file: nil, accept_language: nil, url: nil, threshold: nil, owners: nil, classifier_ids: nil, images_file_content_type: nil, images_filename: nil)
          Request

          Use the ClassifyOptions.Builder to create a ClassifyOptions object that contains the parameter values for the classify method.

          Custom Headers

          • Accept-Language: The language of the output class names. The full set of languages is supported for the built-in classifier IDs: default, food, and explicit. The class names of custom classifiers are not translated.

            The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.

            Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

            Default: en

          Query Parameters

          • version: The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          Form Parameters

          • images_file: An image file (.jpg, .png) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • url: The URL of an image to analyze. Must be in .jpg or .png format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB.

            You can also include images with the images_file parameter.

          • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to ignore the classification score and return all values.

            Default: 0.5

          • owners: The categories of classifiers to apply. Use IBM to classify against the default general classifier, and use me to classify against your custom classifiers. To analyze the image against both classifier categories, set the value to both IBM and me.

            The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

            The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.

            Constraints: collection format: csv

          • classifier_ids: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

            The following built-in classifier IDs require no training:

            • default: Returns classes from thousands of general tags.
            • food: Enhances specificity and accuracy for images of food items.
            • explicit: Evaluates whether the image might be pornographic.

            Constraints: collection format: csv

          parameters

          • imagesFile: An image file (.jpg, .png) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • imagesFilename: The filename for imagesFile.

          • acceptLanguage: The language of the output class names. The full set of languages is supported for the built-in classifier IDs: default, food, and explicit. The class names of custom classifiers are not translated.

            The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.

            Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

            Default: en

          • url: The URL of an image to analyze. Must be in .jpg or .png format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB.

            You can also include images with the imagesFile parameter.

          • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to ignore the classification score and return all values.

            Default: 0.5

          • owners: The categories of classifiers to apply. Use IBM to classify against the default general classifier, and use me to classify against your custom classifiers. To analyze the image against both classifier categories, set the value to both IBM and me.

            The built-in default classifier is used if both the classifierIds and owners parameters are empty.

            The classifierIds parameter overrides owners, so make sure that classifierIds is empty.

          • classifierIds: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifierIds and owners parameters are empty.

            The following built-in classifier IDs require no training:

            • default: Returns classes from thousands of general tags.
            • food: Enhances specificity and accuracy for images of food items.
            • explicit: Evaluates whether the image might be pornographic.
          • imagesFileContentType: The content type of imagesFile. Values for this parameter can be obtained from the HttpMediaType class.

          The classify options.

          The classify options.

          parameters

          • images_file: An image file (.jpg, .png) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • accept_language: The language of the output class names. The full set of languages is supported for the built-in classifier IDs: default, food, and explicit. The class names of custom classifiers are not translated.

            The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.

            Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

            Default: en

          • url: The URL of an image to analyze. Must be in .jpg or .png format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB.

            You can also include images with the images_file parameter.

          • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to ignore the classification score and return all values.

            Default: 0.5

          • owners: The categories of classifiers to apply. Use IBM to classify against the default general classifier, and use me to classify against your custom classifiers. To analyze the image against both classifier categories, set the value to both IBM and me.

            The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

            The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.

          • classifier_ids: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

            The following built-in classifier IDs require no training:

            • default: Returns classes from thousands of general tags.
            • food: Enhances specificity and accuracy for images of food items.
            • explicit: Evaluates whether the image might be pornographic.
          • images_file_content_type: The content type of images_file.

          • images_filename: The filename for images_file.

          parameters

          • images_file: An image file (.jpg, .png) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • accept_language: The language of the output class names. The full set of languages is supported for the built-in classifier IDs: default, food, and explicit. The class names of custom classifiers are not translated.

            The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.

            Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

            Default: en

          • url: The URL of an image to analyze. Must be in .jpg or .png format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB.

            You can also include images with the images_file parameter.

          • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to ignore the classification score and return all values.

            Default: 0.5

          • owners: The categories of classifiers to apply. Use IBM to classify against the default general classifier, and use me to classify against your custom classifiers. To analyze the image against both classifier categories, set the value to both IBM and me.

            The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

            The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.

          • classifier_ids: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

            The following built-in classifier IDs require no training:

            • default: Returns classes from thousands of general tags.
            • food: Enhances specificity and accuracy for images of food items.
            • explicit: Evaluates whether the image might be pornographic.
          • images_file_content_type: The content type of images_file.

          • images_filename: The filename for images_file.

          parameters

          • An image file (.jpg, .png) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • The language of the output class names. The full set of languages is supported for the built-in classifier IDs: default, food, and explicit. The class names of custom classifiers are not translated.

            The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.

            Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

            Default: en

          • The URL of an image to analyze. Must be in .jpg, or .png format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB.

            You can also include images with the images_file parameter.

          • The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to ignore the classification score and return all values.

            Default: 0.5

          • The categories of classifiers to apply. Use IBM to classify against the default general classifier, and use me to classify against your custom classifiers. To analyze the image against both classifier categories, set the value to both IBM and me.

            The built-in default classifier is used if both classifier_ids and owners parameters are empty.

            The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.

          • Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.

            The following built-in classifier IDs require no training:

            • default: Returns classes from thousands of general tags.
            • food: Enhances specificity and accuracy for images of food items.
            • explicit: Evaluates whether the image might be pornographic.
          • The content type of images_file.

          • The filename for images_file.

          Example requests
          Response

          Results for all images.


          Status Code

          • success

          • Invalid request due to user input, for example:

            • Bad JSON input
            • Bad query parameter or header
            • Invalid output language
            • No input images
            • The size of the image file in the request is larger than the maximum supported size
            • Corrupt .zip file
          • No API key, or the key is not valid.

          • The .zip file is too large.

          Example responses

          Detect faces in an image

          Important: On April 2, 2018, the identity information in the response to calls to the Face model was removed. The identity information refers to the name of the person, score, and type_hierarchy knowledge graph. For details about the enhanced Face model, see the Release notes.

          Analyze and get data about faces in images. Responses can include estimated age and gender. This feature uses a built-in model, so no training is necessary. The Detect faces method does not support general biometric facial recognition.

          Supported image formats include .gif, .jpg, .png, and .tif. The maximum image size is 10 MB. The minimum recommended pixel density is 32X32 pixels per inch.

          GET /v3/detect_faces
          Request

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          • The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

          Example requests
          Response

          Results for all faces.

          Status Code

          • success

          • Invalid request

          Example responses

          Detect faces in images

          Important: On April 2, 2018, the identity information in the response to calls to the Face model was removed. The identity information refers to the name of the person, score, and type_hierarchy knowledge graph. For details about the enhanced Face model, see the Release notes.

          Analyze and get data about faces in images. Responses can include estimated age and gender. This feature uses a built-in model, so no training is necessary. The Detect faces method does not support general biometric facial recognition.

          Supported image formats include .gif, .jpg, .png, and .tif. The maximum image size is 10 MB. The minimum recommended pixel density is 32X32 pixels per inch.


          POST /v3/detect_faces
          (visualRecognition *VisualRecognitionV3) DetectFaces(detectFacesOptions *DetectFacesOptions) (*core.DetailedResponse, error)
          ServiceCall<DetectedFaces> detectFaces(DetectFacesOptions detectFacesOptions)
          detectFaces(params, callback())
          detect_faces(self, images_file=None, url=None, images_file_content_type=None, images_filename=None, **kwargs)
          detect_faces(images_file: nil, url: nil, images_file_content_type: nil, images_filename: nil)
          Request

          Use the DetectFacesOptions.Builder to create a DetectFacesOptions object that contains the parameter values for the detectFaces method.

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          Form Parameters

          • An image file (.gif, .jpg, .png, or .tif) or a .zip file with images. Limit the .zip file to 100 MB. You can include a maximum of 15 images in a request.

            Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

            You can also include images with the images_file parameter.

          parameters

          • An image file (.gif, .jpg, .png, or .tif) or a .zip file with images. Limit the .zip file to 100 MB. You can include a maximum of 15 images in a request.

            Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • The filename for imagesFile.

          • The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

            You can also include images with the images_file parameter.

          • The content type of imagesFile. Values for this parameter can be obtained from the HttpMediaType class.

          The detectFaces options.

          parameters

          • An image file (.gif, .jpg, .png, or .tif) or a .zip file with images. Limit the .zip file to 100 MB. You can include a maximum of 15 images in a request.

            Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            You can also include an image with the url parameter.

          • The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

            You can also include images with the images_file parameter.

          • The content type of images_file.

          • The filename for images_file.
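The input limits above can be captured in a pre-flight check. This is a hypothetical helper, not an SDK function; it encodes the documented format list and the 15-image limit.

```python
import os

SUPPORTED_FACE_FORMATS = {".gif", ".jpg", ".png", ".tif"}

def validate_face_images(filenames):
    """Pre-flight check for POST /v3/detect_faces (illustrative sketch).

    Enforces the documented limits: .gif/.jpg/.png/.tif images only,
    and at most 15 images per request.
    """
    if len(filenames) > 15:
        raise ValueError("detect_faces accepts at most 15 images per request")
    for name in filenames:
        ext = os.path.splitext(name)[1].lower()
        if ext not in SUPPORTED_FACE_FORMATS:
            raise ValueError("unsupported image format: " + name)
    return True
```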

          Example requests
          Response

          Results for all faces.

          Status Code

          • success

          • Invalid request

          Example responses

          Create a classifier

          Train a new multi-faceted classifier on the uploaded image data. Create your custom classifier with positive or negative examples. Include at least two sets of examples, either two positive example files or one positive and one negative file. You can upload a maximum of 256 MB per call.

          Encode all names in UTF-8 if they contain non-ASCII characters (.zip and image file names, and classifier and class names). The service assumes UTF-8 encoding if it encounters non-ASCII characters.


          POST /v3/classifiers
          (visualRecognition *VisualRecognitionV3) CreateClassifier(createClassifierOptions *CreateClassifierOptions) (*core.DetailedResponse, error)
          ServiceCall<Classifier> createClassifier(CreateClassifierOptions createClassifierOptions)
          createClassifier(params, callback())
          create_classifier(self, name, negative_examples=None, negative_examples_filename=None, **kwargs)
          create_classifier(name:, positive_examples:, negative_examples: nil, positive_examples_filename: nil, negative_examples_filename: nil)
          Request

          Use the CreateClassifierOptions.Builder to create a CreateClassifierOptions object that contains the parameter values for the createClassifier method.

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          Form Parameters

          • The name of the new classifier. Encode special characters in UTF-8.

          • A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32X32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.

          parameters

          • The name of the new classifier. Encode special characters in UTF-8.

          • A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32X32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • The filename for positiveExamples.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.

          • The filename for negativeExamples.

          The createClassifier options.

          parameters

          • The name of the new classifier. Encode special characters in UTF-8.

          • A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32X32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.

          • The filename for positive_examples.

          • The filename for negative_examples.
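The _positive_examples naming convention and the at-least-two-sets rule can be sketched as a helper that maps class names to multipart field names. This is a hypothetical helper, not part of any SDK.

```python
def build_training_fields(positive_examples, negative_examples=None):
    """Map class names to form field names for POST /v3/classifiers
    (illustrative sketch).

    positive_examples: dict of class name -> path to a .zip of images.
    Each class becomes a <classname>_positive_examples field; the call
    needs at least two example sets in total.
    """
    total_sets = len(positive_examples) + (1 if negative_examples else 0)
    if total_sets < 2:
        raise ValueError(
            "include two positive example files, or one positive and one negative")
    fields = {name + "_positive_examples": path
              for name, path in positive_examples.items()}
    if negative_examples:
        fields["negative_examples"] = negative_examples
    return fields
```

For example, build_training_fields({"goldenretriever": "golden.zip"}, "cats.zip") yields a goldenretriever_positive_examples field plus negative_examples, which creates the class goldenretriever.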

          Example requests
          Response

          Information about a classifier.


          Status Code

          • success

          • Invalid request due to user input, for example:

            • Bad query parameter or header
            • No input images
            • The size of the image file in the request is larger than the maximum supported size
            • Corrupt .zip file
            • Cannot find the classifier
          • No API key, or the key is not valid.

          • The .zip file is too large.

          Example responses

          Retrieve a list of classifiers

          GET /v3/classifiers
          (visualRecognition *VisualRecognitionV3) ListClassifiers(listClassifiersOptions *ListClassifiersOptions) (*core.DetailedResponse, error)
          ServiceCall<Classifiers> listClassifiers(ListClassifiersOptions listClassifiersOptions)
          listClassifiers(params, callback())
          list_classifiers(self, verbose=None, **kwargs)
          list_classifiers(verbose: nil)
          Request

          Use the ListClassifiersOptions.Builder to create a ListClassifiersOptions object that contains the parameter values for the listClassifiers method.

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          • Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.

          parameters

          • Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.
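The verbose flag is optional, and omitting it returns the brief list. A minimal sketch of how the query values might be assembled (hypothetical helper, not part of any SDK):

```python
def list_classifiers_params(verbose=None):
    """Query values for GET /v3/classifiers (illustrative sketch).

    verbose is sent only when explicitly set; omitting it returns
    the brief list of classifiers.
    """
    params = {"version": "2018-03-19"}
    if verbose is not None:
        params["verbose"] = "true" if verbose else "false"
    return params
```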

          The listClassifiers options.


          Example requests
          Response

          A container for the list of classifiers.


          Status Code

          • success

          • Invalid request due to user input, such as a bad parameter.

          • No API key, or the key is not valid.

          Example responses

          Retrieve classifier details

          Retrieve information about a custom classifier.


          GET /v3/classifiers/{classifier_id}
          (visualRecognition *VisualRecognitionV3) GetClassifier(getClassifierOptions *GetClassifierOptions) (*core.DetailedResponse, error)
          ServiceCall<Classifier> getClassifier(GetClassifierOptions getClassifierOptions)
          getClassifier(params, callback())
          get_classifier(self, classifier_id, **kwargs)
          get_classifier(classifier_id:)
          Request

          Use the GetClassifierOptions.Builder to create a GetClassifierOptions object that contains the parameter values for the getClassifier method.

          Path Parameters

          • The ID of the classifier.

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          parameters

          • The ID of the classifier.
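Because classifier_id is a path parameter, it must be URL-encoded when the request URL is built by hand. A sketch follows; the base URL matches the one in the Authentication section, and the helper itself is hypothetical.

```python
from urllib.parse import quote

BASE = "https://gateway.watsonplatform.net/visual-recognition/api"

def classifier_url(classifier_id, version="2018-03-19"):
    """Build the GET /v3/classifiers/{classifier_id} URL (illustrative sketch)."""
    return "{}/v3/classifiers/{}?version={}".format(
        BASE, quote(classifier_id, safe=""), version)
```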

          The getClassifier options.


          Example requests
          Response

          Information about a classifier.


          Status Code

          • success

          • Invalid request due to user input, such as a bad parameter.

          • No API key, or the key is not valid.

          • Cannot find the requested classifier in this account.

          Example responses

          Update a classifier

          Update a custom classifier by adding new positive or negative classes (examples) or by adding new images to existing classes. You must supply at least one set of positive or negative examples. For details, see Updating custom classifiers.

          Encode all names in UTF-8 if they contain non-ASCII characters (.zip and image file names, and classifier and class names). The service assumes UTF-8 encoding if it encounters non-ASCII characters.

          Tip: Don't make retraining calls on a classifier until the status is ready. When you submit retraining requests in parallel, the last request overwrites the previous requests. The retrained property shows the last time the classifier retraining finished.
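The tip above, waiting for the status to be ready before retraining, can be sketched as a polling loop. Here get_status stands in for any callable that fetches the classifier's current status, for example via GET /v3/classifiers/{classifier_id}; the helper itself is hypothetical.

```python
import time

def wait_until_ready(get_status, timeout=600, interval=10):
    """Poll a classifier's status until it is 'ready' (illustrative sketch).

    Retraining requests submitted in parallel overwrite one another,
    so callers should serialize updates behind a check like this one.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "ready":
            return True
        if status == "failed":
            raise RuntimeError("classifier training failed")
        time.sleep(interval)
    raise TimeoutError("classifier did not reach 'ready' within %ss" % timeout)
```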


          POST /v3/classifiers/{classifier_id}
          (visualRecognition *VisualRecognitionV3) UpdateClassifier(updateClassifierOptions *UpdateClassifierOptions) (*core.DetailedResponse, error)
          ServiceCall<Classifier> updateClassifier(UpdateClassifierOptions updateClassifierOptions)
          updateClassifier(params, callback())
          update_classifier(self, classifier_id, negative_examples=None, negative_examples_filename=None, **kwargs)
          update_classifier(classifier_id:, positive_examples: nil, negative_examples: nil, positive_examples_filename: nil, negative_examples_filename: nil)
          Request

          Use the UpdateClassifierOptions.Builder to create a UpdateClassifierOptions object that contains the parameter values for the updateClassifier method.

          Path Parameters

          • The ID of the classifier.

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
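Since the `version` query parameter must be a `YYYY-MM-DD` date string, a quick client-side check can catch malformed values before a request is sent. This is an illustrative helper, not part of any SDK:

```python
from datetime import datetime

def validate_version(version):
    """Check that an API `version` query parameter is a YYYY-MM-DD date string."""
    datetime.strptime(version, "%Y-%m-%d")  # raises ValueError if malformed
    return version
```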

          Form Parameters

          • A .zip file of images that depict the visual subject of a class in the classifier. The positive examples create or update classes in the classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.
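The form-parameter naming convention and the .zip limits above can be sketched as client-side helpers. This is an illustrative pre-flight check of the documented limits (at least 10 images, at most 10,000 images or 100 MB per .zip), not a substitute for the service's own validation:

```python
import io
import zipfile

def positive_examples_field(class_name):
    """Build the form-parameter name for a positive-examples .zip.

    Appending `_positive_examples` to the class name (for example,
    `goldenretriever_positive_examples`) creates or updates the class
    `goldenretriever`.
    """
    return class_name + "_positive_examples"

def check_examples_zip(zip_bytes, minimum=10):
    """Rough client-side check that an examples .zip holds enough images."""
    if len(zip_bytes) > 100 * 1024 * 1024:
        raise ValueError(".zip file exceeds 100 MB")
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        images = [n for n in zf.namelist()
                  if n.lower().endswith((".jpg", ".png"))]
    if not minimum <= len(images) <= 10000:
        raise ValueError("expected 10-10,000 images, found %d" % len(images))
    return len(images)
```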

          parameters

          • The ID of the classifier.

          • A .zip file of images that depict the visual subject of a class in the classifier. The positive examples create or update classes in the classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • The filename for positiveExamples.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.

          • The filename for negativeExamples.

          The updateClassifier options.

          parameters

          • The ID of the classifier.

          • A .zip file of images that depict the visual subject of a class in the classifier. The positive examples create or update classes in the classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.

          • The filename for positive_examples.

          • The filename for negative_examples.

          parameters

          • The ID of the classifier.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.

          • The filename for negative_examples.

          • A .zip file of images that depict the visual subject of a class in the classifier. The positive examples create or update classes in the classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • The filename for positive_examples.

          parameters

          • The ID of the classifier.

          • A .zip file of images that depict the visual subject of a class in the classifier. The positive examples create or update classes in the classifier. You can include more than one positive example file in a call.

            Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

            Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum is 10,000 images or 100 MB per .zip file.

            Encode special characters in the file name in UTF-8.

          • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

            Encode special characters in the file name in UTF-8.

          • The filename for positive_examples.

          • The filename for negative_examples.

          Example requests
          Response

          Information about a classifier.

          Status Code

          • 200: success

          • 400: Invalid request due to user input, for example:

            • Bad query parameter or header
            • No input images
            • The size of the image file in the request is larger than the maximum supported size
            • Corrupt .zip file
            • Cannot find the classifier
          • 401: No API key, or the key is not valid.

          • 413: The .zip file is too large.

          Example responses

          Delete a classifier

          DELETE /v3/classifiers/{classifier_id}
          (visualRecognition *VisualRecognitionV3) DeleteClassifier(deleteClassifierOptions *DeleteClassifierOptions) (*core.DetailedResponse, error)
          ServiceCall<Void> deleteClassifier(DeleteClassifierOptions deleteClassifierOptions)
          deleteClassifier(params, callback())
          delete_classifier(self, classifier_id, **kwargs)
          delete_classifier(classifier_id:)
          Request

          Use the DeleteClassifierOptions.Builder to create a DeleteClassifierOptions object that contains the parameter values for the deleteClassifier method.

          Path Parameters

          • The ID of the classifier.

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          parameters

          • The ID of the classifier.

          The deleteClassifier options.

          parameters

          • The ID of the classifier.

          parameters

          • The ID of the classifier.

          parameters

          • The ID of the classifier.

          Example requests
          Response

          Status Code

          • 200: success

          • 400: Invalid request due to user input, such as a bad parameter.

          • 401: No API key, or the key is not valid.

          • 404: Cannot find the requested classifier in this account.

          Example responses

          Retrieve a Core ML model of a classifier

          Download a Core ML model file (.mlmodel) of a custom classifier that returns "core_ml_enabled": true in the classifier details.
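Because the endpoint returns the raw binary model (the SDK response types are streams: `InputStream`, `ReadableStream`, and so on), the usual pattern is to copy the stream to a `.mlmodel` file in chunks. A minimal sketch, assuming `response_stream` is any file-like object yielding the response body:

```python
def save_core_ml_model(response_stream, path, chunk_size=8192):
    """Write a Core ML model response stream to disk.

    `response_stream` is any file-like object yielding the binary body of
    GET /v3/classifiers/{classifier_id}/core_ml_model.
    """
    with open(path, "wb") as out:
        while True:
            chunk = response_stream.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
    return path
```

Chunked copying keeps memory use flat even for large model files.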

          GET /v3/classifiers/{classifier_id}/core_ml_model
          (visualRecognition *VisualRecognitionV3) GetCoreMlModel(getCoreMlModelOptions *GetCoreMlModelOptions) (*core.DetailedResponse, error)
          ServiceCall<InputStream> getCoreMlModel(GetCoreMlModelOptions getCoreMlModelOptions)
          getCoreMlModel(params, callback())
          get_core_ml_model(self, classifier_id, **kwargs)
          get_core_ml_model(classifier_id:)
          Request

          Use the GetCoreMlModelOptions.Builder to create a GetCoreMlModelOptions object that contains the parameter values for the getCoreMlModel method.

          Path Parameters

          • The ID of the classifier.

          Query Parameters

          • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

          parameters

          • The ID of the classifier.

          The getCoreMlModel options.

          parameters

          • The ID of the classifier.

          parameters

          • The ID of the classifier.

          parameters

          • The ID of the classifier.

          Example requests
          Response

          Response type: io.ReadCloser

          Response type: InputStream

          Response type: NodeJS.ReadableStream|FileObject|Buffer

          Response type: file

          Response type: String

          Status Code

          • 200: The request succeeded.

          • 400: Invalid request due to user input, such as a bad parameter.

          • 401: No API key, or the key is not valid.

          • 404: Cannot find the requested classifier in this account.

          Example responses