Reset job incremental state
Overview
Reset the incremental state for a job, forcing the next execution to perform a full extraction. This endpoint works for all connector types: JDBC, Singer, and S3.
When to Use
Use this endpoint when you need to:
- Re-sync all data after fixing data quality issues
- Recover from errors that corrupted the incremental state
- Backfill historical data that was missed
- Change the incremental column and need fresh state
- Debug pipeline issues by starting from scratch
Job ID Format
The jobId follows the format: {pipeline_id}_{job_index}
Example: 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0 (first job in pipeline 1b33ad2f_33d3_4837_9eeb_83c82c8b909d)
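For illustration, a minimal Python sketch of how a jobId is composed; the variable names are local to the example:
pipeline_id = "1b33ad2f_33d3_4837_9eeb_83c82c8b909d"
job_index = 0  # zero-based index: the first job in the pipeline
# jobId format: {pipeline_id}_{job_index}
job_id = f"{pipeline_id}_{job_index}"
print(job_id)  # 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0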
What Gets Reset
| Connector Type | State Reset |
|---|---|
| JDBC | Clears incremental_column_value - next run extracts all records |
| Singer | Clears state_file bookmarks - next run performs full sync |
| S3 | Clears ConnectorState - next run processes all files regardless of LastModified |
Examples
Reset JDBC Job State
POST /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/reset-state
Response (200 OK):
{
"status": true,
"detail": "Successfully reset state for job_id 1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0",
"message": "Next job execution will start from beginning (full load/extraction)"
}
Reset Singer Job State
POST /platform/jobs/a9f82c14_7e21_4b5a_8c9d_12345abcdef0_0/reset-state
Response (200 OK):
{
"status": true,
"detail": "Successfully reset state for job_id a9f82c14_7e21_4b5a_8c9d_12345abcdef0_0",
"message": "Next job execution will start from beginning (full load/extraction)"
}
Reset S3 Job State
POST /platform/jobs/c5d92e18_4f31_4a2b_9c8d_67890fghijk0_0/reset-state
Response (200 OK):
{
"status": true,
"detail": "Successfully reset state for job_id c5d92e18_4f31_4a2b_9c8d_67890fghijk0_0",
"message": "Next job execution will start from beginning (full load/extraction)"
}
Error Responses
404 Not Found
{
"detail": "job_id invalid_job_id not found"
}
500 Internal Server Error
{
"detail": "Unable to complete state reset due to database error"
}
Behavior by Load Type
JDBC Connectors
| Load Type | Before Reset | After Reset |
|---|---|---|
| full_load | No effect (always full) | No change |
| incremental | Tracks incremental_column_value | Value cleared, next run extracts all |
| incremental_with_qualify | Tracks incremental_column_value | Value cleared, next run extracts all + deduplicates |
Singer Connectors
All Singer connectors maintain a state_file with bookmarks for incremental sync. After reset:
- All bookmarks are cleared
- Next sync performs full extraction for all streams
- New bookmarks are created after successful sync
S3/Azure Connectors
S3 and Azure connectors track which files have been processed using the ConnectorState. After reset:
- File processing state is cleared
- Next run processes all files in the configured prefix
- New state is recorded based on LastModified timestamps
Partial Reset (JDBC Only)
Instead of a full reset, you can perform a partial reset by setting the incremental column value to a specific point in the past. This re-extracts only data from that timestamp onwards, avoiding a complete full load.
Using PATCH to Set a Past Value
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input
{
"connector": "jdbc",
"incremental_column_value": "2024-01-01T00:00:00Z"
}
This will:
- Set the incremental state to January 1st, 2024
- Next run extracts only records where incremental_column > '2024-01-01T00:00:00Z'
- Much faster than a full reset if you only need recent data
When to Use Partial Reset
| Scenario | Recommended Approach |
|---|---|
| Need last 30 days of data | Partial reset with date 30 days ago |
| Need all historical data | Full reset (POST /reset-state) |
| Fix data from specific date | Partial reset with that date |
| Complete re-sync | Full reset (POST /reset-state) |
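As a sketch of how this decision could be scripted, the hypothetical helper below issues a partial reset when a start timestamp is supplied and a full reset otherwise. The helper name, auth header, and response handling are illustrative, not part of the API:
import requests
from typing import Optional
BASE_URL = "https://maestro.dadosfera.ai"
headers = {"Authorization": "Bearer <your_api_token>"}  # assumed auth header; adjust to your deployment
def reset_job_state(job_id: str, since: Optional[str] = None) -> dict:
    """Partial reset (JDBC only) when `since` is given, otherwise a full reset."""
    if since is not None:
        # Partial reset: re-extract only records newer than `since`
        response = requests.patch(
            f"{BASE_URL}/platform/jobs/{job_id}/input",
            headers=headers,
            json={"connector": "jdbc", "incremental_column_value": since},
        )
    else:
        # Full reset: next run performs a complete extraction
        response = requests.post(f"{BASE_URL}/platform/jobs/{job_id}/reset-state", headers=headers)
    response.raise_for_status()
    return response.json()
# Need only data from a given date onwards -> partial reset
reset_job_state("1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0", since="2024-01-01T00:00:00Z")
# Need all historical data -> full reset
# reset_job_state("1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0")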
Example: Re-sync Last 7 Days
import requests
from datetime import datetime, timedelta
BASE_URL = "https://maestro.dadosfera.ai"
JOB_ID = "1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0"
headers = {"Authorization": "Bearer <your_api_token>"}  # placeholder: set the auth headers required by your deployment
# Calculate date 7 days ago
seven_days_ago = (datetime.utcnow() - timedelta(days=7)).isoformat() + "Z"
# Partial reset - set incremental value to 7 days ago
response = requests.patch(
f"{BASE_URL}/platform/jobs/{JOB_ID}/input",
headers=headers,
json={
"connector": "jdbc",
"incremental_column_value": seven_days_ago
}
)
print(f"Set incremental state to: {seven_days_ago}")Note: Partial reset is only available for JDBC connectors with
incrementalorincremental_with_qualifyload types. For Singer and S3 connectors, use the full reset endpoint.
Use Cases
Re-sync After Schema Change
If you added new columns to the source table and want to backfill data:
# 1. Add new columns to the job
PATCH /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/input
{
"column_include_list": ["new_column_1", "new_column_2"]
}
# 2. Reset state to trigger full extraction
POST /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/reset-state
# 3. Execute the pipeline (or wait for scheduled run)
POST /platform/pipeline/execute
{
"pipeline_id": "1b33ad2f_33d3_4837_9eeb_83c82c8b909d",
"customer_id": "your_customer_id"
Recover From Data Quality Issue
If source data was corrupted and later fixed:
# Reset state to re-extract all data
POST /platform/jobs/1b33ad2f_33d3_4837_9eeb_83c82c8b909d_0/reset-state
Re-process All S3 Files
If you need to reprocess all files in an S3 bucket:
# Reset state - next run will process all files
POST /platform/jobs/c5d92e18_4f31_4a2b_9c8d_67890fghijk0_0/reset-state
Notes
- State reset is immediate - no pipeline execution is triggered
- The next scheduled or manual pipeline run will perform full extraction
- For incremental_with_qualify jobs, the qualify (deduplication) step will still run
- Consider the data volume impact before resetting - full extraction may take significantly longer
- This operation cannot be undone - the previous incremental state is permanently cleared
