Get Started with BoltPipeline
From sign-up to your first certified pipeline. No new language to learn — just SQL.
How it works
Three phases. Zero guesswork.
BoltPipeline is a governed data pipeline platform that takes your SQL from development to production with built-in certification, role-based promotion controls, and automated operations.
Upload
- Upload your SQL — one statement per semicolon
- BoltPipeline parses, classifies, and builds the dependency graph
- Execution order is determined automatically
Certify
- Structural validation: naming, dependencies, contracts
- Issues surfaced before code reaches production
- Fix and re-upload until certification passes
Promote
- Promote through Dev → Integration → Production
- Each promotion: request → approval → execution
- Automated scheduling, monitoring, and failure handling
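As a rough sketch of the planning phase: once each statement's target table and the tables it reads are known, execution order is a topological sort of the dependency graph. The table names and read/write metadata below are illustrative only, not BoltPipeline's internal representation:

```python
from graphlib import TopologicalSorter

# Illustrative per-statement metadata: target table -> tables it reads.
statements = {
    "silver.dim_customer": {"reads": {"bronze.customers"}},
    "silver.dim_orders":   {"reads": {"bronze.orders"}},
    "gold.customer_sales": {"reads": {"silver.dim_customer", "silver.dim_orders"}},
}

def execution_order(statements):
    targets = set(statements)
    # Keep only dependencies that are themselves produced by this pipeline;
    # source tables (e.g. bronze.*) have no upstream step to wait for.
    graph = {
        target: {dep for dep in meta["reads"] if dep in targets}
        for target, meta in statements.items()
    }
    return list(TopologicalSorter(graph).static_order())

order = execution_order(statements)
# gold.customer_sales is guaranteed to come after both silver tables
```

`graphlib` is in the Python standard library; the same idea applies regardless of how the reads and writes are extracted from the SQL.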
Every step is governed by role-based access controls. Only authorized team members can request, approve, or execute promotions — ensuring your data pipelines move through environments with proper guardrails and compliance.
Your team
Four roles, clear responsibilities
Viewer
- View pipelines, lineage, and execution history
- View certification results and pipeline design
- Read-only access across all environments
Developer
- Upload SQL to create and version pipelines
- Request promotions across environments
- View certification details and resolve issues
Operator
- Execute approved pipeline promotions
- Register, monitor, and manage agents
- Push approved agent upgrades (automated)
- Monitor pipeline execution and operations
Admin
- Approve or reject promotion requests
- Configure tenant-level platform parameters
- Manage users, roles, security, and settings
- Full access to all operator and developer actions
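The division of duties above can be pictured as a simple permission matrix. This is an illustrative sketch built from the role summaries on this page, not BoltPipeline's actual enforcement model:

```python
# Illustrative permission matrix; action names are hypothetical labels
# for the responsibilities listed above.
PERMISSIONS = {
    "viewer":    {"view"},
    "developer": {"view", "upload_sql", "request_promotion"},
    "operator":  {"view", "execute_promotion", "manage_agents"},
    "admin":     {"view", "upload_sql", "request_promotion",
                  "execute_promotion", "manage_agents",
                  "approve_promotion", "manage_users"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role is allowed to perform an action."""
    return action in PERMISSIONS.get(role, set())
```

For example, `can("developer", "request_promotion")` is true, while `can("viewer", "upload_sql")` is false — the read-only role never mutates anything.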
Onboarding
Pipelines
Pipelines are SQL-based data transformations that BoltPipeline plans, certifies, and promotes through environments with full governance at every step.
1. Get your SQL into BoltPipeline
Write your transformation SQL as a .sql file with one statement per semicolon. BoltPipeline parses each statement, classifies the operations, and builds the execution dependency graph automatically.
Manual upload
Upload your .sql file directly from the Console. Best for getting started quickly or one-off changes.
GitHub repo sync (coming soon)
Connect your GitHub repository and BoltPipeline automatically picks up changes when you push. No manual uploads needed.
2. Certification
Your pipeline is automatically validated against structural rules, naming conventions, dependency order, and data contract compliance. Any issues are surfaced immediately — fix and re-upload until certification passes.
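To make the certification idea concrete, here is a minimal sketch of structural validation — a naming-convention check plus a layer-ordering check. The rules below are invented for illustration; the real rule set lives in the Console documentation after sign-up:

```python
import re

# Hypothetical conventions: tables are named layer.schema.table, and a
# table may only read from its own layer or an earlier one.
NAMING = re.compile(r"^(bronze|silver|gold)\.[a-z_]+\.[a-z_]+$")
LAYER_ORDER = {"bronze": 0, "silver": 1, "gold": 2}

def certify(pipeline):
    """pipeline maps each target table to the set of tables it reads.
    Returns a list of issues; an empty list means certification passes."""
    issues = []
    for target, reads in pipeline.items():
        if not NAMING.match(target):
            issues.append(f"naming: {target!r} is not layer.schema.table")
            continue
        for src in reads:
            if LAYER_ORDER.get(src.split(".")[0], 99) > LAYER_ORDER[target.split(".")[0]]:
                issues.append(f"dependency: {target!r} reads from later layer {src!r}")
    return issues
```

The fix-and-re-upload loop is exactly this: run the checks, read the issue list, resubmit until it comes back empty.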
3. Promote through environments
Once certified, promote your pipeline from Dev → Integration → Production. Each promotion follows a governed workflow: a developer requests, an admin approves, and an operator executes. The pipeline lifecycle is fully tracked and auditable.
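The governed workflow is essentially a small state machine with role gates. This sketch captures the request → approval → execution shape described above; state names and the class itself are illustrative, not BoltPipeline's API:

```python
# Illustrative promotion state machine: developer requests, admin approves,
# operator executes. Any out-of-order or wrong-role transition is rejected.
ENVIRONMENTS = ["dev", "integration", "production"]

class Promotion:
    def __init__(self, pipeline: str, target_env: str):
        if target_env not in ENVIRONMENTS:
            raise ValueError(f"unknown environment: {target_env}")
        self.pipeline, self.target_env = pipeline, target_env
        self.state = "draft"

    def _transition(self, role, required_role, from_state, to_state):
        if role != required_role:
            raise PermissionError(f"{role} may not move a promotion to {to_state}")
        if self.state != from_state:
            raise RuntimeError(f"cannot go from {self.state} to {to_state}")
        self.state = to_state

    def request(self, role): self._transition(role, "developer", "draft", "requested")
    def approve(self, role): self._transition(role, "admin", "requested", "approved")
    def execute(self, role): self._transition(role, "operator", "approved", "executed")
```

Because every transition records who did what and in which order, the lifecycle is auditable by construction.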
SQL file format
Your SQL is a standard .sql file with lightweight metadata tags added as SQL comments. Dependencies and execution order are computed automatically.
-- Metadata tags: pipeline name, schedule, target table, keys, SCD type
-- BoltPipeline reads these comments to generate the full pipeline
INSERT INTO silver.public.dim_customer (customer_id, customer_name, ...)
SELECT DISTINCT c.customer_id, c.customer_name, c.customer_segment
FROM bronze.public.customers c;
INSERT INTO silver.public.dim_orders (order_id, customer_id, order_amount)
SELECT o.order_id, o.customer_id, o.order_amount
FROM bronze.public.orders o
WHERE o.order_status = 'COMPLETED';
What you provide
- Pipeline name and schedule
- Target table for each step
- Primary and natural keys (for SCD)
- SCD type (0, 1, or 2) when needed
What BoltPipeline generates
- Dependency graph and execution order
- SCD merge logic and staging tables
- Audit columns (created, updated, hash, etc.)
- Column-level lineage and profiling
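To show what "generated SCD merge logic" means, here is a minimal SCD type 2 merge over plain Python dicts — a sketch of the kind of logic the platform produces, not its actual implementation. The audit column names (`_hash`, `_valid_from`, `_valid_to`, `_is_current`) are hypothetical:

```python
import hashlib
from datetime import datetime, timezone

def row_hash(row, tracked):
    """Hash the tracked attributes so changes can be detected cheaply."""
    payload = "|".join(str(row[c]) for c in tracked)
    return hashlib.sha256(payload.encode()).hexdigest()

def scd2_merge(current, incoming, key, tracked):
    """Close changed versions and open new ones (SCD type 2)."""
    now = datetime.now(timezone.utc).isoformat()
    out = []
    incoming_by_key = {r[key]: r for r in incoming}
    for row in current:
        new = incoming_by_key.get(row[key])
        if row["_is_current"] and new and row["_hash"] != row_hash(new, tracked):
            row = {**row, "_is_current": False, "_valid_to": now}  # close old version
        out.append(row)
    open_keys = {r[key] for r in out if r["_is_current"]}
    for k, new in incoming_by_key.items():
        if k not in open_keys:  # brand-new key, or a key we just closed
            out.append({**new, "_hash": row_hash(new, tracked),
                        "_valid_from": now, "_valid_to": None, "_is_current": True})
    return out
```

In production this would be a MERGE against a staging table; the dict version just makes the close-and-reopen mechanics visible.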
Full directive reference available after sign-up in the Console documentation.
Onboarding
Agents
Agents are lightweight runtimes deployed in your infrastructure that execute pipeline operations securely. They operate autonomously with offline support and automated upgrades — and communication is strictly one-way: agents pull instructions from BoltPipeline, never the other way around.
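The one-way pull model can be sketched in a few lines: the agent initiates every exchange, so no inbound port is ever opened on your infrastructure. The endpoint paths and client interface here are hypothetical:

```python
import time

class Agent:
    """Sketch of a pull-only agent loop. `client` is assumed to be an
    already-authenticated mTLS HTTP client (hypothetical interface)."""

    def __init__(self, client, poll_interval=30):
        self.client = client
        self.poll_interval = poll_interval

    def run_once(self):
        work = self.client.get("/v1/agent/instructions")    # agent -> platform only
        results = [self.execute(item) for item in work]
        if results:
            self.client.post("/v1/agent/results", results)  # still agent-initiated
        return results

    def execute(self, item):
        # Placeholder for running one pipeline operation locally.
        return {"id": item["id"], "status": "succeeded"}

    def run(self):
        while True:  # agents run unattended
            self.run_once()
            time.sleep(self.poll_interval)
```

Since the platform never connects inward, firewall rules stay simple: allow outbound HTTPS to BoltPipeline, nothing else.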
Prerequisites — have these ready before bootstrapping
The bootstrap token is time-sensitive (15–30 min). Complete these steps first.
Store database credentials in a secret manager
Store your database connection details in AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault. The agent reads credentials at runtime and never stores them on local disk.
Configure the agent
Create a configuration file with your secret manager reference, target environment, and operational settings. No sensitive values go in this file — only references to your secret manager.
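A useful habit is to lint the config for accidental inline secrets before deploying it. This check is illustrative — the key names and reference prefixes are hypothetical, not BoltPipeline's config schema:

```python
# Reject config values that look like raw secrets rather than
# secret-manager references (ARNs, Vault paths, Key Vault URIs, etc.).
SUSPICIOUS_KEYS = {"password", "secret", "token", "private_key", "api_key"}
REFERENCE_PREFIXES = ("arn:", "vault:", "keyvault:")

def validate_config(config: dict) -> list[str]:
    problems = []
    for key, value in config.items():
        if key in SUSPICIOUS_KEYS and not str(value).startswith(REFERENCE_PREFIXES):
            problems.append(
                f"{key}: looks like an inline secret; use a secret-manager reference"
            )
    return problems
```

An empty result means the file contains only references, which is the invariant this step asks for.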
Prepare the agent host
Set up the working directories and ensure the host has network access to your data warehouse and to BoltPipeline. Detailed setup instructions are provided during onboarding.
Bootstrap — registering a new agent
Only start this process after completing the prerequisites above.
1. Download the bootstrap package
From the Console, go to Settings → Agent Bootstrap and select the target environment. Each bootstrap package is tied to a specific environment — the agent will only execute pipelines in that environment.
The bootstrap JWT expires in 15–30 minutes. Do not download until your agent host is fully prepared and you are ready to start the agent immediately.
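Assuming the bootstrap token is a standard JWT carrying an `exp` claim (which the expiry behavior suggests), you can check how much time remains before starting the agent. This sketch decodes the payload without verifying the signature — fine for a local freshness check, never for authentication:

```python
import base64
import json
import time

def seconds_until_expiry(jwt: str) -> float:
    """Read the `exp` claim from a JWT payload (no signature verification)."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] - time.time()
```

If the result is already negative, download a fresh bootstrap package rather than starting the agent with a dead token.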
2. Place bootstrap files on the agent host
Extract the bootstrap package into the agent's identity directory. The package contains a one-time registration token (15–30 min expiry) and the CA certificate for secure communication.
3. Start the agent
Run the agent immediately. On first startup, the agent will:
- Auto-generate its own RSA keypair (private key never leaves the host)
- Store the private key and certificate in your secret manager
- Register with BoltPipeline using the bootstrap JWT — only the public key is sent
- Receive its mTLS client certificate (signed by the platform CA) and begin polling
Security — how agent identity works
BoltPipeline uses mutual TLS (mTLS) with a full PKI chain to secure all agent-to-platform communication. You do not need to create or manage certificates — the platform handles everything automatically during bootstrap.
- Private key never leaves the host — generated locally, only the public key is sent during registration
- Credentials stored in your secret manager — AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault
- Certificates auto-rotate — the platform handles renewal automatically
- Each agent has a unique identity — compromise of one agent does not affect others
Treat agent identity like database credentials. Each agent has a unique mTLS identity stored in your secret manager. Never share keys across hosts or commit them to version control.
Autonomous operation
Once registered, agents run unattended. They support offline mode when connectivity is interrupted, queue results locally, and sync when back online. Container upgrades are approved by operators and executed automatically.
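The queue-locally-and-sync pattern can be sketched as a small on-disk spool: results are journaled as files while offline and flushed in order when connectivity returns. File layout and the `send` interface are hypothetical:

```python
import json
import pathlib

class ResultQueue:
    """Sketch of offline-mode result spooling for an unattended agent."""

    def __init__(self, spool_dir):
        self.spool = pathlib.Path(spool_dir)
        self.spool.mkdir(parents=True, exist_ok=True)

    def enqueue(self, result: dict):
        # One file per run; survives agent restarts because it is on disk.
        path = self.spool / f"{result['run_id']}.json"
        path.write_text(json.dumps(result))

    def flush(self, send) -> int:
        """Call `send(result)` for each queued file; delete only on success."""
        sent = 0
        for path in sorted(self.spool.glob("*.json")):
            try:
                send(json.loads(path.read_text()))
            except OSError:  # still offline; keep the file and retry later
                break
            path.unlink()
            sent += 1
        return sent
```

Deleting the file only after a successful send is what makes the queue safe across crashes: at worst a result is delivered twice, never lost.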
Monitor and manage
View agent status, heartbeat, and session information from the Agents page. Admins can configure platform-level parameters inherited by all agents — execution defaults, resource limits, and upgrade policies.
Your data stays yours — BoltPipeline never has access to your data or database credentials. The platform carries only metadata. Agents communicate one-way — they pull instructions from BoltPipeline, but the platform cannot reach into your agents or infrastructure directly.
Onboarding
Operations
Once pipelines are promoted and agents are running, operations is where everything comes together — automated execution with proper guardrails, governance, and monitoring.
1. Automated execution
Promoted pipelines are scheduled and executed automatically by agents in the correct dependency order. There is no manual work required — BoltPipeline orchestrates the entire process.
2. Monitor and approve
Your day-to-day is monitoring execution health and approving lifecycle transitions. Pipelines move through their lifecycle with proper guardrails — you review, approve, and BoltPipeline handles the rest.
3. Resolve and iterate
When something fails, BoltPipeline surfaces the error with full context — step-level logs, dependency state, and failure reason. Fix the SQL, upload a new version, re-certify, and re-promote. The governed lifecycle ensures nothing reaches production without passing validation.
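A failure report "with full context" amounts to capturing the step name, the reason, the dependency state, and the relevant log line in one structured record. The field names below are illustrative, not BoltPipeline's actual error schema:

```python
import traceback

def run_step(step_name, fn, dependency_state):
    """Run one pipeline step; on failure, return a structured error report
    instead of losing the context in a raw traceback."""
    try:
        return {"step": step_name, "status": "succeeded", "result": fn()}
    except Exception as exc:
        return {
            "step": step_name,
            "status": "failed",
            "reason": str(exc),
            "dependency_state": dependency_state,
            "log": traceback.format_exc().strip().splitlines()[-1],
        }
```

With a record like this, the fix-upload-recertify-repromote loop starts from the exact failing step rather than a log hunt.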
Ready to start?
Request a trial or explore the platform further.