# Add columns or update job input fields
## Overview
Partially update a job's input configuration. This endpoint is particularly useful for adding new columns to an existing JDBC job without replacing the entire configuration.
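All requests in this document target the job input endpoint, which the examples below spell out in full:

```http
PATCH /platform/jobs/:jobId/input
```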
## Job ID Format
The `jobId` follows the format `{pipeline_id}_{job_index}`.

Example: `1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0` (the first job in pipeline `1b33ad2f_33d3_4837_9eeb_83c82c8b909d`)
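In client code, the ID can be assembled directly from its documented parts. A trivial, hypothetical helper:

```python
# Hypothetical helper: build a jobId from its documented parts.
def job_id(pipeline_id: str, job_index: int) -> str:
    return f"{pipeline_id}_{job_index}"

print(job_id("1b33ad2f_33d3_4837_9eeb_83c82c8b909d", 0))
# -> 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0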
## Column Selection Behavior
When updating `column_include_list`, this endpoint uses union logic:
- New columns are added to the existing list
- Existing columns are preserved
- Duplicate columns are automatically handled (not added twice)
This makes it safe to call multiple times without losing previously selected columns.
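As an illustration, the documented merge behaves like the union below. This is a sketch of the semantics only, not the server's actual code:

```python
# Columns already configured on the job.
existing = ["id", "name", "email"]
# Columns sent in the PATCH body; "email" is a duplicate.
incoming = ["email", "new_column_1"]

# Union: preserve existing order, append only columns not already present.
merged = existing + [c for c in incoming if c not in existing]
print(merged)  # ['id', 'name', 'email', 'new_column_1']
```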
## Request Body
The request body is a JSON object with the fields you want to update. Only the specified fields will be modified.
| Field | Type | Description |
|---|---|---|
| `column_include_list` | array | List of column names to add to the job (union with existing) |
| `load_type` | string | Change load type: `full_load`, `incremental`, `incremental_with_qualify` |
| `incremental_column_name` | string | Column for incremental tracking |
| `incremental_column_type` | string | Data type of the incremental column |
| `primary_keys` | array | Primary key columns (for `incremental_with_qualify`) |
| `memory_allocation_mb` | integer | Memory allocation in MB |
| Any other input field | varies | Other job input configuration fields |
## Examples
### Add New Columns to a JDBC Job
Add columns to an existing job's column selection:
```http
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input

{
  "column_include_list": ["new_column_1", "new_column_2", "new_column_3"]
}
```

Response (200 OK):

```json
{
  "status": true,
  "detail": "Successfully updated input fields for job_id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0",
  "updated_fields": ["column_include_list"]
}
```

If the job previously had columns `["id", "name", "email"]`, after this call it will have:

```json
["id", "name", "email", "new_column_1", "new_column_2", "new_column_3"]
```
### Add Columns and Update Memory
```http
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input

{
  "column_include_list": ["address", "phone"],
  "memory_allocation_mb": 4096
}
```

Response (200 OK):

```json
{
  "status": true,
  "detail": "Successfully updated input fields for job_id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0",
  "updated_fields": ["column_include_list", "memory_allocation_mb"]
}
```

### Change Incremental Configuration
Update the incremental column used for tracking:
```http
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input

{
  "incremental_column_name": "modified_at",
  "incremental_column_type": "timestamp with time zone"
}
```

Response (200 OK):

```json
{
  "status": true,
  "detail": "Successfully updated input fields for job_id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0",
  "updated_fields": ["incremental_column_name", "incremental_column_type"]
}
```

### Add Primary Keys for Deduplication
Add primary keys when using `incremental_with_qualify`:
```http
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input

{
  "primary_keys": ["id", "tenant_id"]
}
```

Response (200 OK):

```json
{
  "status": true,
  "detail": "Successfully updated input fields for job_id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0",
  "updated_fields": ["primary_keys"]
}
```

### Update Multiple Fields
```http
PATCH /platform/jobs/a9f82c14_7e21_4b5a_8c9d_12345abcdef0_0/input

{
  "column_include_list": ["order_total", "discount_amount"],
  "load_type": "incremental",
  "incremental_column_name": "updated_at",
  "incremental_column_type": "timestamp with time zone"
}
```

Response (200 OK):

```json
{
  "status": true,
  "detail": "Successfully updated input fields for job_id a9f82c14_7e21_4b5a_8c9d_12345abcdef0_0",
  "updated_fields": ["column_include_list", "load_type", "incremental_column_name", "incremental_column_type"]
}
```

## Error Responses
### 404 Not Found

```json
{
  "detail": "job_id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0 not found"
}
```

### 500 Internal Server Error

```json
{
  "detail": "Unable to complete transaction due to database error"
}
```

## Pipeline Refresh
When certain fields are updated (such as `load_type` or the incremental configuration), the endpoint automatically triggers a pipeline refresh to update the Airflow DAG configuration. This ensures the pipeline uses the new settings on its next execution.

The following fields trigger a pipeline refresh:

- `load_type`
- `incremental_column_name`
- `incremental_column_type`
- `primary_keys`
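If you want to predict client-side whether a given PATCH body will cause a refresh, a small helper built from the field list above is enough. This is a hypothetical convenience, not part of the API:

```python
# Fields documented above as triggering an Airflow DAG refresh.
REFRESH_TRIGGER_FIELDS = {
    "load_type",
    "incremental_column_name",
    "incremental_column_type",
    "primary_keys",
}

def will_trigger_refresh(patch_body: dict) -> bool:
    """Return True if any field in the PATCH body is a refresh trigger."""
    return bool(REFRESH_TRIGGER_FIELDS & patch_body.keys())

print(will_trigger_refresh({"memory_allocation_mb": 4096}))  # False
print(will_trigger_refresh({"load_type": "incremental"}))    # True
```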
## Use Cases
### Adding Columns to an Existing Pipeline
When you need to sync additional columns from a source table:
1. Get the current job configuration to see the existing columns
2. Use PATCH to add the new columns
3. The next pipeline run will include the new columns
Step 1: Check the existing configuration (optional):

```http
GET /platform/jobs/jdbc/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0
```

Step 2: Add the new columns:

```http
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input

{
  "column_include_list": ["new_col_1", "new_col_2"]
}
```

### Switching from Full Load to Incremental
When you want to change a job from full load to incremental:
```http
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input

{
  "load_type": "incremental",
  "incremental_column_name": "updated_at",
  "incremental_column_type": "timestamp with time zone"
}
```

Note: For more complex sync-mode migrations with validation, consider using the `POST /platform/jobs/jdbc/:jobId/sync-mode` endpoint instead.
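A Python sketch of the same switch, including handling of the documented 404 shape and a check that every submitted field was applied. Base URL and timeout are assumptions:

```python
import requests

BASE_URL = "https://your-platform-host"  # assumption: your deployment's base URL
JOB_ID = "1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0"

payload = {
    "load_type": "incremental",
    "incremental_column_name": "updated_at",
    "incremental_column_type": "timestamp with time zone",
}

resp = requests.patch(f"{BASE_URL}/platform/jobs/{JOB_ID}/input", json=payload, timeout=30)

if resp.status_code == 404:
    # Documented error shape: {"detail": "job_id ... not found"}
    raise SystemExit(resp.json()["detail"])

resp.raise_for_status()
body = resp.json()
# Confirm every field we sent was applied; a load_type change is documented
# to trigger an automatic Airflow DAG refresh as well.
assert set(body["updated_fields"]) == set(payload)
print(body["detail"])
```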
## Notes
- This endpoint uses partial update semantics: only the specified fields are modified
- The `column_include_list` field uses union logic: columns are added, not replaced
- Changes to certain fields trigger an automatic pipeline DAG refresh
- To replace the entire job input configuration, use `PUT /platform/jobs/:jobId/input` instead
