Update Pipeline Memory

Update memory allocation for all jobs in a pipeline

Overview

Update the memory allocation for all jobs in a pipeline. This is useful when pipelines fail due to insufficient memory (out-of-memory, or OOM, errors).

Path Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| pipelineId | string | Yes | The UUID of the pipeline |

Request Body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| amount | integer | Yes | Memory allocation in MB (max: 12000 MB / 12 GB) |

Memory Limits

| Limit | Value |
| --- | --- |
| Default | 2048 MB (2 GB) |
| Maximum (API) | 12000 MB (12 GB) |

Note: If your pipeline still fails with 12 GB of memory, contact Dadosfera support. Higher memory limits are available upon request for specific use cases.

Examples

Increase Pipeline Memory to 4GB

PUT /platform/pipeline/1b33ad2f-33d3-4837-9eeb-83c82c8b909d/memory

{
  "amount": 4096
}

Response (200 OK):

{
  "status": true,
  "data": {
    "message": "Pipeline memory updated successfully",
    "pipeline_id": "1b33ad2f_33d3_4837_9eeb_83c82c8b909d",
    "memory_mb": 4096
  }
}

Python Example

import requests

BASE_URL = "https://maestro.dadosfera.ai"
PIPELINE_ID = "1b33ad2f-33d3-4837-9eeb-83c82c8b909d"
API_TOKEN = "<your-api-token>"

# Authentication header; a bearer token is assumed here, adjust to your
# account's auth scheme if it differs
headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Update pipeline memory to 4GB
response = requests.put(
    f"{BASE_URL}/platform/pipeline/{PIPELINE_ID}/memory",
    headers=headers,
    json={"amount": 4096}
)

print(response.json())

Set Maximum Memory (12GB)

For very large data extractions:

PUT /platform/pipeline/1b33ad2f-33d3-4837-9eeb-83c82c8b909d/memory

{
  "amount": 12000
}

Error Responses

400 Bad Request - Exceeds Maximum

{
  "message": "Memory limit exceeded. Maximum allowed: 12000MB (12GB)",
  "statusCode": 400
}

404 Not Found

{
  "detail": {
    "status": false,
    "exception_type": "PipelineNotFound",
    "traceback": "Pipeline with id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d not found",
    "data": null
  }
}
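
When calling the endpoint from a script, it can help to branch on these responses. The sketch below is a minimal example built on the Python snippet above; the error payloads are the ones documented in this section, and the `headers` dictionary is the same assumed bearer-token header.

import requests

BASE_URL = "https://maestro.dadosfera.ai"

def update_pipeline_memory(pipeline_id: str, amount_mb: int, headers: dict) -> dict:
    """Update a pipeline's memory, raising a descriptive error on failure."""
    response = requests.put(
        f"{BASE_URL}/platform/pipeline/{pipeline_id}/memory",
        headers=headers,
        json={"amount": amount_mb},
    )
    if response.status_code == 400:
        # e.g. "Memory limit exceeded. Maximum allowed: 12000MB (12GB)"
        raise ValueError(response.json().get("message", "Invalid request"))
    if response.status_code == 404:
        raise LookupError(f"Pipeline {pipeline_id} not found")
    response.raise_for_status()
    return response.json()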

When to Increase Memory

Consider increasing memory when you see these errors in pipeline logs:

| Error Type | Recommended Action |
| --- | --- |
| OutOfMemoryError | Increase memory by 2x |
| Container killed due to memory | Increase memory |
| Java heap space | Increase memory |
| Very large tables (millions of rows) | Proactively increase memory |
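
As a rough illustration of the "increase by 2x" guidance, the sketch below doubles a pipeline's current allocation and caps it at the 12000 MB API maximum. It assumes you already know the current allocation (for example, from your own configuration records); reading it back from the API is not covered here.

MAX_MEMORY_MB = 12000  # API maximum

def next_memory_allocation(current_mb: int) -> int:
    """Double the allocation after an OOM failure, capped at the API maximum."""
    return min(current_mb * 2, MAX_MEMORY_MB)

# Example: a pipeline running with the 2048 MB default that hit OutOfMemoryError
print(next_memory_allocation(2048))  # 4096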

Memory Allocation Strategy

| Table Size | Recommended Memory |
| --- | --- |
| < 100K rows | 2048 MB (default) |
| 100K - 1M rows | 4096 MB |
| 1M - 10M rows | 6144 MB |
| 10M+ rows | 8192 - 12000 MB |
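
If you size pipelines programmatically, the strategy table can be expressed as a small helper. The thresholds below simply mirror the table; they are guidance, not limits enforced by the API.

def recommended_memory_mb(row_count: int) -> int:
    """Suggest a memory allocation (in MB) based on approximate table size."""
    if row_count < 100_000:
        return 2048       # default
    if row_count < 1_000_000:
        return 4096
    if row_count < 10_000_000:
        return 6144
    return 8192           # 10M+ rows: start at 8192 MB, up to the 12000 MB cap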

Need More Than 12GB?

If your pipeline still fails with the maximum 12 GB memory allocation:

  1. Contact Dadosfera Support - Higher limits are available for enterprise customers
  2. Consider splitting the extraction - Break into multiple smaller jobs
  3. Use incremental sync - Extract data in smaller batches over time
  4. Filter columns - Reduce memory by selecting only necessary columns

Notes

  • Memory changes apply to all jobs in the pipeline
  • To update memory for a specific job only, use PUT /platform/jobs/:jobId/memory (see the sketch after these notes)
  • Changes take effect on the next pipeline execution
  • Higher memory allocation may increase execution costs
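
A minimal sketch of the per-job endpoint mentioned above. It assumes the job endpoint accepts the same {"amount": ...} body as the pipeline endpoint and uses the same bearer-token header as the Python example; the job ID is a hypothetical placeholder.

import requests

BASE_URL = "https://maestro.dadosfera.ai"
JOB_ID = "<job-uuid>"  # hypothetical job ID
headers = {"Authorization": "Bearer <your-api-token>"}  # same assumed auth as above

# Update memory for a single job only (assumes the same request body shape)
response = requests.put(
    f"{BASE_URL}/platform/jobs/{JOB_ID}/memory",
    headers=headers,
    json={"amount": 4096},
)
print(response.json())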