PUT https://maestro.dadosfera.ai/platform/jobs/{jobId}/memory
Configure memory allocation for a specific job
Overview
Update the memory allocation for a specific job. Use this endpoint to increase memory for jobs that process large data volumes or run complex transformations.
Memory Limits
| Limit | Value |
|---|---|
| Minimum | No minimum (use default) |
| Maximum | 750,000 MB (750 GB) |
| Default | System default based on connector type |
Job ID Format
The jobId follows the format `{pipeline_id}_{job_index}`.
Example: `1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0` (job index 0, the first job in pipeline `1b33ad2f_33d3_4837_9eeb_83c82c8b909d`).
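For scripting purposes, the jobId can be assembled mechanically from these two parts. The helper below is purely illustrative and not part of the API:

```python
# Illustrative only: build a jobId from a pipeline ID and a zero-based job index.
def build_job_id(pipeline_id: str, job_index: int) -> str:
    return f"{pipeline_id}_{job_index}"

print(build_job_id("1b33ad2f_33d3_4837_9eeb_83c82c8b909d", 0))
# 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0
```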
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| amount | integer | Yes | Memory allocation in MB |
Examples
Set Memory to 4GB

```
PUT /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/memory

{
  "amount": 4096
}
```

Response (200 OK): The endpoint triggers an internal DAG update to propagate the memory configuration.
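The same request can be issued from any HTTP client. Below is a minimal sketch using Python's requests library; the Bearer authentication header and the MAESTRO_API_TOKEN environment variable are assumptions, so substitute whatever credentials your Dadosfera environment uses.

```python
# Minimal sketch: update a job's memory allocation via the API.
# Authentication details below are assumptions, not part of this endpoint's spec.
import os
import requests

BASE_URL = "https://maestro.dadosfera.ai"
job_id = "1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0"

response = requests.put(
    f"{BASE_URL}/platform/jobs/{job_id}/memory",
    json={"amount": 4096},  # memory allocation in MB
    headers={"Authorization": f"Bearer {os.environ['MAESTRO_API_TOKEN']}"},
    timeout=30,
)

if response.ok:
    print("Memory updated; the change takes effect on the next pipeline execution.")
else:
    # 400: amount exceeds the maximum; 404: jobId not found
    print(response.status_code, response.json().get("detail"))
```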
Set Memory to 16GB for Large Tables

```
PUT /platform/jobs/a9f82c14_7e21_4b5a_8c9d_12345abcdef0_0/memory

{
  "amount": 16384
}
```

Set Memory to 32GB for Complex Transformations

```
PUT /platform/jobs/c5d92e18_4f31_4a2b_9c8d_67890fghijk0_0/memory

{
  "amount": 32768
}
```

Error Responses
400 Bad Request - Memory Exceeds Maximum
```json
{
  "detail": "The amount of memory provided is greatner than the max memory supported!!"
}
```

404 Not Found

```json
{
  "detail": "job_id invalid_job_id not found"
}
```

When to Increase Memory
Consider increasing memory allocation when:
| Scenario | Recommended Memory |
|---|---|
| Small tables (< 1M rows) | Default (no change) |
| Medium tables (1-10M rows) | 4,096 - 8,192 MB |
| Large tables (10-100M rows) | 8,192 - 16,384 MB |
| Very large tables (> 100M rows) | 16,384 - 32,768 MB |
| Complex transformations | +50% of base recommendation |
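As a rough illustration, the sizing guidance above could be encoded in a small helper. The thresholds simply mirror the table, and the function itself is hypothetical, not something the platform provides:

```python
# Hypothetical sizing helper that mirrors the table above.
# Returns a suggested "amount" in MB, or None to keep the system default.
def suggested_memory_mb(row_count: int, complex_transformations: bool = False) -> int | None:
    if row_count < 1_000_000:
        base = None          # small table: keep the default
    elif row_count < 10_000_000:
        base = 8_192         # medium table: 4,096 - 8,192 MB
    elif row_count < 100_000_000:
        base = 16_384        # large table: 8,192 - 16,384 MB
    else:
        base = 32_768        # very large table: 16,384 - 32,768 MB

    if base is not None and complex_transformations:
        base = min(int(base * 1.5), 750_000)  # +50%, capped at the 750,000 MB maximum
    return base
```

The helper picks the upper end of each range; in practice, start lower and adjust based on monitored job executions.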
Notes
- Memory changes take effect on the next pipeline execution
- The memory allocation is stored in the job's `input.memory_allocation_mb` field
- After updating memory, the pipeline DAG is automatically refreshed
- To update memory for all jobs in a pipeline at once, use `PUT /platform/pipeline/:pipelineId/memory` (see the sketch after these notes)
- Monitor job execution to determine optimal memory allocation
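For completeness, a pipeline-wide update might look like the sketch below. It assumes the pipeline endpoint accepts the same `{"amount": ...}` body as the job endpoint, which should be confirmed against that endpoint's own documentation.

```python
# Sketch of the pipeline-wide variant (request body shape is assumed to match the job endpoint).
import os
import requests

pipeline_id = "1b33ad2f_33d3_4837_9eeb_83c82c8b909d"
response = requests.put(
    f"https://maestro.dadosfera.ai/platform/pipeline/{pipeline_id}/memory",
    json={"amount": 8192},
    headers={"Authorization": f"Bearer {os.environ['MAESTRO_API_TOKEN']}"},
    timeout=30,
)
response.raise_for_status()
```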
