PUT https://maestro.dadosfera.ai/platform/pipeline/:pipelineId/memory
Update memory allocation for all jobs in a pipeline
Overview
Update the memory allocation for all jobs in a pipeline. This is useful when a pipeline fails due to insufficient memory (OOM errors).
Path Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| pipelineId | string | Yes | The UUID of the pipeline |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| amount | integer | Yes | Memory allocation in MB (maximum: 12000 MB / 12 GB) |
Memory Limits
| Limit | Value |
|---|---|
| Default | 2048 MB (2 GB) |
| Maximum (API) | 12000 MB (12 GB) |
Note: If your pipeline still fails with 12 GB of memory, contact Dadosfera support. Higher memory limits are available upon request for specific use cases.
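Because the API rejects values above 12000 MB with a 400 error, a client can validate the amount before sending the request. The sketch below is illustrative; the function name and constants are not part of the API:

```python
MAX_MEMORY_MB = 12000   # documented API maximum (12 GB)
DEFAULT_MEMORY_MB = 2048  # documented default (2 GB)

def validate_memory_amount(amount: int) -> int:
    """Return `amount` unchanged if it is within the documented limits, else raise."""
    if not isinstance(amount, int) or amount <= 0:
        raise ValueError("amount must be a positive integer (MB)")
    if amount > MAX_MEMORY_MB:
        # Mirrors the API's 400 response message
        raise ValueError(f"Memory limit exceeded. Maximum allowed: {MAX_MEMORY_MB}MB (12GB)")
    return amount
```

Validating client-side gives a clearer error earlier, but the server enforces the same limit either way.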
Examples
Increase Pipeline Memory to 4GB
```
PUT /platform/pipeline/1b33ad2f-33d3-4837-9eeb-83c82c8b909d/memory

{
  "amount": 4096
}
```

Response (200 OK):
```json
{
  "status": true,
  "data": {
    "message": "Pipeline memory updated successfully",
    "pipeline_id": "1b33ad2f_33d3_4837_9eeb_83c82c8b909d",
    "memory_mb": 4096
  }
}
```

Python Example
```python
import requests

BASE_URL = "https://maestro.dadosfera.ai"
PIPELINE_ID = "1b33ad2f-33d3-4837-9eeb-83c82c8b909d"

# Authentication headers; replace the token with your own credentials
# (the exact auth scheme depends on your Dadosfera setup)
headers = {"Authorization": "Bearer <YOUR_API_TOKEN>"}

# Update pipeline memory to 4GB
response = requests.put(
    f"{BASE_URL}/platform/pipeline/{PIPELINE_ID}/memory",
    headers=headers,
    json={"amount": 4096}
)
print(response.json())
```

Set Maximum Memory (12GB)
For very large data extractions:
```
PUT /platform/pipeline/1b33ad2f-33d3-4837-9eeb-83c82c8b909d/memory

{
  "amount": 12000
}
```

Error Responses
400 Bad Request - Exceeds Maximum
```json
{
  "message": "Memory limit exceeded. Maximum allowed: 12000MB (12GB)",
  "statusCode": 400
}
```

404 Not Found
```json
{
  "detail": {
    "status": false,
    "exception_type": "PipelineNotFound",
    "traceback": "Pipeline with id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d not found",
    "data": null
  }
}
```

When to Increase Memory
Consider increasing memory when you see these errors in pipeline logs:
| Error Type | Recommended Action |
|---|---|
| OutOfMemoryError | Increase memory by 2x |
| Container killed due to memory | Increase memory |
| Java heap space | Increase memory |
| Very large tables (millions of rows) | Proactively increase memory |
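The "increase by 2x" rule above can be automated by scanning pipeline logs for the OOM signatures and doubling the allocation, capped at the 12000 MB API maximum. A minimal sketch (the signature strings match the error table; the function itself is hypothetical, not part of the API):

```python
MAX_MEMORY_MB = 12000  # documented API maximum

# Error signatures from the table above that indicate an OOM failure
OOM_SIGNATURES = (
    "OutOfMemoryError",
    "Container killed due to memory",
    "Java heap space",
)

def next_memory_allocation(current_mb: int, log_text: str) -> int:
    """Double the allocation (capped at 12000 MB) if the log shows an OOM signature."""
    if any(sig in log_text for sig in OOM_SIGNATURES):
        return min(current_mb * 2, MAX_MEMORY_MB)
    return current_mb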
Memory Allocation Strategy
| Table Size | Recommended Memory |
|---|---|
| < 100K rows | 2048 MB (default) |
| 100K - 1M rows | 4096 MB |
| 1M - 10M rows | 6144 MB |
| 10M+ rows | 8192 - 12000 MB |
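The sizing table above maps directly to a lookup helper, which can be handy when provisioning pipelines programmatically. This is a sketch of the table's thresholds, not an official SDK function:

```python
def recommended_memory_mb(row_count: int) -> int:
    """Return the recommended memory allocation (MB) per the sizing table."""
    if row_count < 100_000:
        return 2048  # default
    if row_count < 1_000_000:
        return 4096
    if row_count < 10_000_000:
        return 6144
    return 8192  # scale up toward the 12000 MB maximum as needed
```

For 10M+ rows, treat 8192 MB as a starting point and raise toward 12000 MB if OOM errors persist.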
Need More Than 12GB?
If your pipeline still fails with the maximum 12 GB memory allocation:
- Contact Dadosfera Support - Higher limits are available for enterprise customers
- Consider splitting the extraction - Break into multiple smaller jobs
- Use incremental sync - Extract data in smaller batches over time
- Filter columns - Reduce memory by selecting only necessary columns
Notes
- Memory changes apply to all jobs in the pipeline
- To update memory for a specific job only, use PUT /platform/jobs/:jobId/memory
- Changes take effect on the next pipeline execution
- Higher memory allocation may increase execution costs
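The per-job endpoint mentioned in the notes can be called the same way as the pipeline endpoint. A sketch, assuming the same request body and the same auth headers as the pipeline example (the helper names are illustrative):

```python
import requests

BASE_URL = "https://maestro.dadosfera.ai"

def job_memory_url(job_id: str) -> str:
    # Per-job endpoint from the notes: PUT /platform/jobs/:jobId/memory
    return f"{BASE_URL}/platform/jobs/{job_id}/memory"

def update_job_memory(job_id: str, amount_mb: int, headers: dict) -> dict:
    """Update memory for a single job without touching the rest of the pipeline."""
    resp = requests.put(job_memory_url(job_id), headers=headers, json={"amount": amount_mb})
    resp.raise_for_status()
    return resp.json()
```

Use this when only one job in the pipeline is hitting OOM errors, so the remaining jobs keep the cheaper default allocation.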
