Does my data ever leave my database?
No. The in-DB agent runs in your environment, so raw data stays in your warehouse. The control plane exchanges only metadata and configuration to validate pipelines and coordinate governance, validation, and visibility. See Security.
Do you store our database credentials?
No. Credentials remain in your environment; the control plane never needs raw credentials.
What leaves our environment when we use BoltPipeline?
Only metadata/logs required for validation, lineage, and drift detection. No raw data or PII leaves your DB.
Where is the agent deployed and what are the network requirements?
The agent runs beside your warehouse. It needs outbound egress to the control plane and private or approved paths to the warehouse; no inbound openings are required.
How does AI validation work?
We analyze SQL structure, semantics, lineage, and runtime signals to surface issues and suggest fixes before deployment. We also generate validation artifacts and documentation.
Do you train on our SQL or data?
No. We don’t train on your raw data; we use SQL structure/metadata and runtime signals.
What level of lineage do you provide?
We provide column-level lineage for precise impact analysis, reviews, drift and schema-change events, and audit trails.
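As a minimal illustration (table and column names here are hypothetical, not from BoltPipeline itself), a transformation like the following yields lineage at the column level:

```sql
-- Hypothetical tables: orders(order_id, customer_id, amount),
-- customers(customer_id, region).
CREATE TABLE orders_summary AS
SELECT
    c.region,                      -- lineage: customers.region -> orders_summary.region
    SUM(o.amount) AS total_amount  -- lineage: orders.amount -> orders_summary.total_amount
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
GROUP BY c.region;
```

Because lineage is tracked per column, dropping or retyping orders.amount upstream flags orders_summary.total_amount, and every consumer of it, during impact analysis.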
Does BoltPipeline run our production ETL jobs?
BoltPipeline governs pipeline execution within your environment and integrates with your orchestrator and warehouse. Execution remains under your control, with validation and certification enforced by the platform.
How does this fit with dbt/Airflow/Dagster?
We complement your workflow and can export artifacts or plug into CI/CD checks.
Can we block merges or deploys on failed validation?
Yes. Teams wire checks into CI to block merges until issues are resolved or risk is approved by policy.
How do we roll back if a change causes issues?
Use your standard rollback process. We scope the impact via lineage and drift signals and supply the artifacts used during approval.
Do you offer a local quick start?
Yes. Run the agent locally against a dev database; it's ideal for POCs and evaluations.
What exactly is the BoltPipeline Agent?
The BoltPipeline Agent is the runtime component of the platform that runs inside your environment. It implements, validates, and governs data pipelines based on SQL intent, while coordinating with the Command Center for visibility and control.
Does the BoltPipeline Agent execute pipelines?
The Agent generates, validates, and enforces certified pipeline logic within your environment; execution itself is triggered by your existing runtime systems and orchestrators. It integrates with those tools to ensure pipelines run correctly, consistently, and with the appropriate validations enforced.
How does the Agent work with Airflow or other orchestrators?
BoltPipeline integrates with existing orchestrators and schedulers rather than replacing them. The Agent ensures pipeline logic, validation, and correctness are enforced regardless of how execution is scheduled.
Is the Agent long-running or on-demand?
The BoltPipeline Agent operates as a managed runtime within your environment, executing pipeline-related work and reporting operational signals back to the platform as needed.
Can multiple BoltPipeline Agents run across environments?
Yes. BoltPipeline supports multiple Agents operating across environments and systems, coordinated centrally through the Command Center.
Do engineers need to manually monitor schema or data drift?
No. Drift detection is fully automated. The BoltPipeline Agent continuously evaluates schema and data characteristics as pipelines run and change. Engineers are alerted only when drift occurs and guided on the downstream impact and remediation.
Do I need to manually configure or run data profiling?
No. Data profiling is implicitly performed as part of pipeline implementation and execution. The BoltPipeline Agent automatically profiles relevant data characteristics without requiring engineers to define or maintain separate profiling jobs.
Does BoltPipeline require manual checks to ensure pipeline correctness?
No. Pipeline validation, correctness checks, and certification are automated by the platform. Engineers do not need to manually chase validation failures—BoltPipeline evaluates pipelines continuously and enforces correctness as changes occur.
Can BoltPipeline detect incorrect joins or data mismatches?
Yes. BoltPipeline analyzes pipeline logic and data relationships to identify incorrect joins, missing keys, and mismatched data assumptions. When issues are detected, the platform highlights the affected pipelines and provides context to help engineers correct them safely.
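To illustrate the kind of mismatch this catches (the schema below is hypothetical), a join on a non-unique key silently duplicates rows and corrupts aggregates:

```sql
-- Hypothetical schema: payments can hold several rows per order_id
-- (e.g., split payments), so this join fans out each order row and
-- SUM(o.amount) overstates order totals.
SELECT
    o.order_id,
    SUM(o.amount) AS order_total   -- inflated when payments are not 1:1 with orders
FROM orders o
JOIN payments p ON p.order_id = o.order_id
GROUP BY o.order_id;
```

This mirrors the class of issue described above; the exact detection mechanics depend on the keys and relationships the platform infers or is told about.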
How does BoltPipeline handle complex SQL with multiple joins?
BoltPipeline understands SQL intent and evaluates joins, relationships, and dependencies as part of pipeline implementation. This allows the platform to surface correctness issues and downstream impact even in complex, multi-join pipelines.
Does BoltPipeline catch issues before pipelines run?
Yes. BoltPipeline analyzes SQL intent and pipeline structure before execution. As SQL is translated into a data pipeline, the platform evaluates joins, relationships, schemas, data assumptions, and correctness constraints to surface issues prior to runtime.
What happens after pipelines are deployed?
After deployment, BoltPipeline continuously monitors pipelines for structural and data changes. If schemas are altered, columns are dropped, or data characteristics drift over time, the platform detects these changes automatically and alerts teams without requiring manual monitoring.
Does BoltPipeline rely on runtime failures to detect problems?
No. BoltPipeline is designed to catch issues as early as possible. Most correctness and structural issues are detected during pipeline implementation, before execution. Ongoing monitoring then ensures changes over time are detected without relying on pipeline failures.
Does BoltPipeline see or store my data?
No. BoltPipeline does not ingest, copy, or store customer data. All pipeline execution happens inside your database. The control plane receives metadata, lineage, and validation signals only.
Does BoltPipeline require database credentials?
No long-lived credentials are stored by BoltPipeline. Authentication is handled within your environment, and access is scoped according to your security and RBAC policies.
What do you mean by “artifacts”?
In BoltPipeline, artifacts are ready-to-use outputs generated from your SQL:
- Executable code — SQL scripts, database procedures, or orchestration snippets (e.g., Airflow/Dagster) you can run in your environment; see the sketch at the end of this answer.
- Validation & quality results — schema checks, join verification, test results, drift detection, and review-ready reports.
- Profiling outputs — stats, distributions, and quality baselines to understand data shape and change.
- Lineage maps — table/column-level traces that show where data came from, how it transforms, and what’s impacted.
All artifacts are certified, versioned, and designed to plug into CI/CD and your existing tools (run inside your DB or schedule via your orchestrator).
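As a sketch of what an executable-code artifact could look like (all names here are hypothetical; the actual generated output depends on your warehouse and configuration):

```sql
-- Sketch of an executable-code artifact (hypothetical names):
-- an idempotent daily load that can run in-database or be
-- scheduled by your orchestrator.
BEGIN;
DELETE FROM analytics.daily_revenue
WHERE day = CURRENT_DATE;
INSERT INTO analytics.daily_revenue (day, revenue)
SELECT CURRENT_DATE, SUM(amount)
FROM raw.orders
WHERE order_date = CURRENT_DATE;
COMMIT;
```

Because the artifact is plain SQL, it can be checked into version control and gated by the same CI/CD validation checks described above.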
How does an engineer interact with BoltPipeline day to day?
Engineers interact with BoltPipeline by writing and maintaining SQL files that describe business logic and intent. These SQL files may include lightweight, intelligent tags to express expectations such as keys, constraints, or behaviors. BoltPipeline analyzes this SQL and automatically builds the corresponding data pipeline artifacts.
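A sketch of what such a tagged SQL file could look like (the tag syntax below is illustrative only, not BoltPipeline's actual format; see the Quickstart documentation for the real syntax):

```sql
-- Illustrative tags expressing expectations about the result:
-- @key: customer_id
-- @not_null: customer_id, email
SELECT
    c.customer_id,
    c.email,
    SUM(o.amount) AS lifetime_value
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.email;
```

The SQL itself stays ordinary; the tags simply declare expectations (here, a unique key and non-null columns) that the platform can validate and enforce.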
What does an engineer need to provide to BoltPipeline?
Engineers provide a set of SQL statements that describe transformations and intent. BoltPipeline does not require a new DSL or framework—teams continue working in SQL, optionally using simple tags to express expectations. See the Quickstart documentation for examples.
What does BoltPipeline generate from my SQL?
BoltPipeline automatically generates executable pipeline artifacts from SQL, including deployable scripts, stored procedures, validation outputs, lineage, and execution-ready plans. These artifacts are certified, versioned, and designed for direct use in production workflows.
Do engineers need to manually wire validation, profiling, or drift detection?
No. Validation, profiling, correctness checks, and drift detection are automatically derived as part of pipeline implementation. Engineers do not need to create or maintain separate jobs for these capabilities.
What happens when my SQL or schemas change?
When SQL logic or underlying schemas change, BoltPipeline re-evaluates pipeline correctness before execution and continues monitoring after deployment. If columns are dropped, schemas evolve, or assumptions change, the platform detects drift automatically and surfaces the impact without requiring manual intervention.
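For example (hypothetical schema), an upstream change like the following would be surfaced automatically along with the downstream columns it affects:

```sql
-- Hypothetical upstream change made outside the pipeline:
ALTER TABLE raw.orders DROP COLUMN discount_code;
-- Column-level lineage flags every pipeline that reads
-- raw.orders.discount_code, before the next run fails at execution time.
```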
How complex is the BoltPipeline Agent setup?
BoltPipeline is designed to be straightforward to adopt incrementally. Setup details depend on your environment and execution model, and are covered step-by-step in the Quickstart documentation.