Most PeelAPI operations are asynchronous. When you make a request to an endpoint, the API immediately returns a job ID and processes your request in the background.

Why async?

AI model processing can take anywhere from a few seconds to several minutes depending on:
  • Model complexity and the specific endpoint
  • Request parameters and input size
  • Current system load
  • Processing requirements
Rather than keeping your HTTP connection open, PeelAPI follows an async pattern that lets you:
  • Submit multiple jobs in parallel
  • Check status at your own pace
  • Stream real-time progress updates
  • Build responsive UIs that don’t block

Job lifecycle

Every job moves through a series of states:
State         Description
queued        Job is queued but not yet started
processing    Actively processing your request
succeeded     Completed successfully; outputs are ready
failed        Encountered an error; check the error field for details
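
In client code, the only distinction that matters is terminal versus in-flight states. A minimal Python sketch; the state strings come from the table above, and the helper name is our own:

TERMINAL_STATES = {"succeeded", "failed"}

def is_terminal(status: str) -> bool:
    # "queued" and "processing" jobs are still in flight;
    # "succeeded" and "failed" are terminal and will not change again.
    return status in TERMINAL_STATES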

Checking job status

Polling with GET /jobs/:id

The simplest approach is to poll the job endpoint until it reaches a terminal state:
curl -H "Authorization: Bearer $PEEL_API_KEY" \
  https://api.peelapi.com/v1/jobs/job_abc123
Example response while the job is still processing:
{
  "data": {
    "id": "job_abc123",
    "status": "processing",
    "created_at": "2025-10-21T10:30:00Z",
    "outputs": null,
    "error": null
  },
  "errors": null
}
Once the job has succeeded:
{
  "data": {
    "id": "job_abc123",
    "status": "succeeded",
    "progress": 1.0,
    "outputs": {
      "results": [
        {
          "url": "https://static.peelapi.com/.../output_1.png"
        },
        {
          "url": "https://static.peelapi.com/.../output_2.png"
        }
      ]
    },
    "error": null
  },
  "errors": null
}
Polling best practices:
  • Start with 2-3 second intervals
  • Use exponential backoff (double the interval after each check)
  • Max out at 30-60 second intervals for long jobs
  • Stop polling once status is succeeded or failed
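
Putting these practices together, here is a minimal polling sketch in Python. It assumes the third-party requests package and the PEEL_API_KEY environment variable used in the curl examples; the field names (status, outputs, error) mirror the responses above. Treat it as a starting point, not an official client.

import os
import time
import requests

API_BASE = "https://api.peelapi.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['PEEL_API_KEY']}"}

def wait_for_job(job_id: str, interval: float = 2.0, max_interval: float = 60.0) -> dict:
    """Poll GET /jobs/:id with exponential backoff until a terminal state."""
    while True:
        resp = requests.get(f"{API_BASE}/jobs/{job_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        job = resp.json()["data"]

        if job["status"] == "succeeded":
            return job["outputs"]          # e.g. {"results": [{"url": ...}, ...]}
        if job["status"] == "failed":
            raise RuntimeError(f"Job {job_id} failed: {job['error']}")

        # Still queued or processing: back off, doubling up to the cap.
        time.sleep(interval)
        interval = min(interval * 2, max_interval)

outputs = wait_for_job("job_abc123")
for result in outputs["results"]:
    print(result["url"])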

Streaming with GET /jobs/:id/stream

For real-time updates, use Server-Sent Events (SSE) streaming:
curl -N -H "Authorization: Bearer $PEEL_API_KEY" \
  https://api.peelapi.com/v1/jobs/job_abc123/stream
The stream emits events as the job progresses:
event: status
data: {"status":"processing","output":null}

event: log
data: {"level":"info","message":"Processing input","created_at":"2025-10-21T10:30:05Z"}

event: log
data: {"level":"info","message":"Running model inference","created_at":"2025-10-21T10:30:12Z"}

event: result
data: {"images":[{"w256":"https://static.peelapi.com/.../output_1_w256.png","w512":"https://static.peelapi.com/.../output_1_w512.png","w1024":"https://static.peelapi.com/.../output_1_w1024.png","full":"https://static.peelapi.com/.../output_1.png"}],"request_duration_s":45,"model_id":"portrait-1-pro","style_id":123}

event: end
data: succeeded
Event types:
Event      Description
status     Job state changed; includes progress (0.0-1.0)
log        Human-readable progress message
result     Output is ready (for endpoints with multiple results)
end        Job reached terminal state (succeeded or failed)

Streaming is ideal for:
  • Incremental result delivery for batch operations
  • Real-time log displays
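
For completeness, here is a minimal Python sketch that consumes the stream by hand with the requests package. It parses only the event: and data: fields shown above and does not handle reconnection or retries; it illustrates the stream format rather than an official client.

import json
import os
import requests

API_BASE = "https://api.peelapi.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['PEEL_API_KEY']}"}

def stream_job(job_id: str):
    """Consume GET /jobs/:id/stream and yield (event, data) pairs."""
    url = f"{API_BASE}/jobs/{job_id}/stream"
    with requests.get(url, headers=HEADERS, stream=True, timeout=None) as resp:
        resp.raise_for_status()
        event, data_lines = None, []
        for raw in resp.iter_lines(decode_unicode=True):
            if raw:                        # accumulate fields of the current event
                if raw.startswith("event:"):
                    event = raw[len("event:"):].strip()
                elif raw.startswith("data:"):
                    data_lines.append(raw[len("data:"):].strip())
            elif event is not None:        # a blank line ends one SSE event
                yield event, "\n".join(data_lines)
                event, data_lines = None, []

for event, data in stream_job("job_abc123"):
    if event == "log":
        print(json.loads(data)["message"])
    elif event == "result":
        print("result:", json.loads(data))
    elif event == "end":
        print("finished with status:", data)   # "succeeded" or "failed"
        break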