Check the status of a specified inference ID

Check Inference Status

Introduction and Purpose

The /inference/{inference-id}/status endpoint allows you to check the status of a previously submitted inference request. By providing the unique inference ID associated with your request, you can retrieve information about the progress and outcome of the inference event.


A valid API key must be sent with the request in the x-vody-api-key header.


Usage Example

Here's an example of how to use this API endpoint to check the status of an inference event:

curl --request GET \
     --url 'https://<base-url>/inference/{inference-id}/status' \
     --header 'accept: application/json' \
     --header 'x-vody-api-key: 1234abcd-1234-1234-a39a-3ff12ce386d5'


  • Method: GET
  • URL: /inference/{inference-id}/status
  • Headers:
    • accept: application/json
    • x-vody-api-key: YOUR_API_KEY
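The same request can be sketched in Python with the standard library. The helper below only builds the GET request with the required headers; the base URL is a placeholder, since the actual Vody API host is not shown here.

```python
import urllib.request

# Placeholder host: substitute your actual Vody API base URL.
BASE_URL = "https://api.vody.example"

def build_status_request(inference_id: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) the status GET request for an inference ID."""
    url = f"{BASE_URL}/inference/{inference_id}/status"
    return urllib.request.Request(
        url,
        method="GET",
        headers={
            "accept": "application/json",
            "x-vody-api-key": api_key,
        },
    )

req = build_status_request("1234abcd-a593-4e42-8c7b-4fca9a149777", "YOUR_API_KEY")
# Send it with urllib.request.urlopen(req) once BASE_URL points at the real host.
```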


Upon sending a GET request to the /inference/{inference-id}/status endpoint, you will receive a JSON response containing information about the inference event's status.

Successful Response (200)

  "id": "1234abcd-a593-4e42-8c7b-4fca9a149777",
  "created_at": "2023-08-12T17:22:55.882273+00:00",
  "updated_at": "2023-08-12T17:22:58.495959+00:00",
  "status": "completed",
  "model": "color-classification",
  "response_code": "200",
  "request_item_count": "2",
  "response_item_count": "2",
  "response_time": "2.613686"

id: The unique identifier for the inference event.
created_at: The timestamp indicating when the inference event was created.
updated_at: The timestamp indicating when the inference event was last updated.
status: The status of the inference event. Possible values include: "in_progress", "completed", "failed".
model: The model used for the inference event, in this case, "color-classification".
response_code: The HTTP response code associated with the inference event.
request_item_count: The number of items (images) in the original inference request.
response_item_count: The number of items (images) included in the response.
response_time: The time it took to generate the response for the inference event, in seconds.
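As a sketch, the response can be parsed with Python's standard json module. Note that in the sample response the numeric fields (response_code, request_item_count, response_time) are serialized as strings, so cast them before doing arithmetic. The payload below is abbreviated from that sample.

```python
import json

# Abbreviated copy of the sample 200 response shown above.
payload = """
{
  "id": "1234abcd-a593-4e42-8c7b-4fca9a149777",
  "status": "completed",
  "response_code": "200",
  "request_item_count": "2",
  "response_item_count": "2",
  "response_time": "2.613686"
}
"""

event = json.loads(payload)
is_done = event["status"] in ("completed", "failed")  # terminal states
elapsed = float(event["response_time"])               # numeric fields arrive as strings
```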

Error Response (404)

If the provided inference ID is not found or is invalid, you will receive an error response indicating that the inference event does not exist.

  "error": "Not Found",
  "message": "Inference event with ID 1676dc31-b6d2-4878-9861-c75a54ff56c6 not found.",
  "code": 404

Rate Limiting

To ensure fair usage and maintain the API's performance, rate limiting policies are in place. These policies define the maximum number of requests that you can make within a specific time frame. If you exceed the allowed limit, the API will respond with a rate limit exceeded error. Please refer to the API documentation or contact our support team to learn more about the rate limits imposed by the Vody Inference API.
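Since the exact limits are not documented here, a common client-side strategy is exponential backoff: when the API signals that the limit was exceeded, retry after progressively longer delays. A sketch, assuming the rate-limit error surfaces as HTTP 429 (an assumption; check the API documentation for the actual status code):

```python
import time

def get_status_with_backoff(fetch, max_retries=4, base_delay=0.5):
    """Retry `fetch` with exponential backoff when it signals rate limiting.

    `fetch` is any callable returning (status_code, body); in practice it
    would perform the HTTP GET shown earlier in this document.
    """
    for attempt in range(max_retries):
        code, body = fetch()
        if code != 429:  # not rate limited, return immediately
            return code, body
        time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise RuntimeError("rate limit still exceeded after retries")
```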

Best Practices

  • Keep track of the inference IDs generated when submitting inference requests. This ID is crucial for retrieving the status of your inference events.
  • Regularly check the status of your inference events to monitor their progress and ensure timely processing.
  • Handle potential error responses gracefully by including appropriate error handling in your API integration code.

By utilizing the /inference/{inference-id}/status endpoint, you can seamlessly monitor the progress and outcome of your inference events, enabling better management of your color classification tasks.
