Retrieve model inference results for a specified inference ID


Retrieve Inference Results

The /inference/{inference-id}/results endpoint lets you retrieve the JSON output of a previously submitted inference request. Provide the unique inference ID associated with your request to fetch its results.

Authentication

A valid API key must be sent in the x-vody-api-key header for authentication.

Endpoint

This endpoint returns the output data of the inference event identified by the provided inference ID.

Usage Example

Here's an example of how to use this endpoint to retrieve the results of an inference event:

curl --request GET \
     --url https://prod-api.vody.com/inference/0924dcfe-a593-4e42-8c7b-4fca9a149777/results \
     --header 'accept: application/json' \
     --header 'x-vody-api-key: 1234abcd-bdde-1234-a39a-3ff12ce386d5'

Request

  • Method: GET
  • URL: https://prod-api.vody.com/inference/{inference-id}/results
  • Headers:
    • accept: application/json
    • x-vody-api-key: YOUR_API_KEY
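
The request above can also be built programmatically. The following is a minimal Python sketch using only the standard library; the helper names (`build_results_request`, `get_results`) are illustrative, not part of any Vody SDK, and the base URL is taken from the Request section above.

```python
import json
import urllib.request

BASE_URL = "https://prod-api.vody.com"  # base URL from the Request section

def build_results_request(inference_id: str, api_key: str) -> urllib.request.Request:
    """Construct the GET request for the results endpoint without sending it."""
    return urllib.request.Request(
        f"{BASE_URL}/inference/{inference_id}/results",
        headers={"accept": "application/json", "x-vody-api-key": api_key},
        method="GET",
    )

def get_results(inference_id: str, api_key: str) -> dict:
    """Send the request and return the parsed JSON body."""
    req = build_results_request(inference_id, api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating request construction from sending makes the URL and headers easy to inspect and unit-test before any network call is made.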

Response

A GET request to the /inference/{inference-id}/results endpoint returns a JSON response containing the output of the model inference event for the given inference ID.

Rate Limiting

To ensure fair usage and maintain the API's performance, rate limits are enforced: you may make only a set number of requests within a given time window. If you exceed the limit, the API responds with a rate-limit-exceeded error. Refer to the API documentation or contact our support team for the specific limits imposed by the Vody Inference API.
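
One common way to handle a rate-limit error is exponential backoff. The sketch below assumes the API signals rate limiting with HTTP 429; the actual status code (and any Retry-After header) should be confirmed against the Vody documentation. The `send` callable stands in for whatever function issues the request.

```python
import time

def call_with_backoff(send, max_retries=5, base_delay=1.0):
    """Retry `send` (a zero-argument callable returning (status, body))
    while the API reports the rate limit was exceeded.

    Assumption: a rate-limited request returns HTTP 429.
    """
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("rate limit still exceeded after retries")
```

Doubling the delay between attempts spreads retries out so a burst of traffic does not immediately hit the limit again.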

Best Practices

  • Keep track of the inference IDs generated when submitting inference requests. The ID is required to retrieve the results of your inference events.
  • Regularly check the status of your inference events to monitor their progress and ensure timely processing.
  • Handle potential error responses gracefully by including appropriate error handling in your API integration code.
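
The last point can be sketched as follows. This hypothetical helper (`fetch_results_safely` is not part of any Vody SDK) distinguishes HTTP errors returned by the API, such as an unknown inference ID or a rate-limit error, from network-level failures, and reports each without crashing the caller.

```python
import json
import urllib.error
import urllib.request

def fetch_results_safely(url: str, api_key: str):
    """Return (data, error_message): data on success, else a human-readable error."""
    req = urllib.request.Request(
        url, headers={"accept": "application/json", "x-vody-api-key": api_key}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp), None
    except urllib.error.HTTPError as exc:
        # The API returned a 4xx/5xx status (e.g. unknown ID, rate limit exceeded)
        return None, f"HTTP {exc.code}: {exc.reason}"
    except urllib.error.URLError as exc:
        # Network-level failure (DNS resolution, connection refused, timeout)
        return None, f"network error: {exc.reason}"
```

Returning an explicit error value (rather than letting exceptions propagate) makes it straightforward to log failures and decide whether a retry is warranted.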

By utilizing the /inference/{inference-id}/results endpoint, you can seamlessly retrieve the outcome of your inference events.
