Deploy to Production
Deploying with Docker Compose
The simplest way to run Shaper in production is with Docker Compose on a virtual machine:
- Install Docker and Docker Compose on the server.
- Configure your DNS to point to the server’s IP address.
- Create a basic `docker-compose.yml` file:

```yaml
services:
  shaper:
    image: taleshape/shaper:0
    environment:
      - SHAPER_TLS_DOMAIN=yourdomain.com
    volumes:
      - shaperdata:/data
    ports:
      - 443:443
      - 80:80

volumes:
  shaperdata:
    driver: local
```
- Run `docker compose up -d`
With this setup, Shaper uses Let’s Encrypt to automatically obtain and renew TLS certificates for your domain and serves the web interface over HTTPS, with no need for a separate reverse proxy.
Data is stored in the `shaperdata` volume, which persists across container restarts and updates.
You likely also want to mount a file at `/var/lib/shaper/init.sql` to set up credentials, attach remote databases, and so on. Shaper automatically executes this SQL on startup. See the Loading Data docs for more.
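As a rough sketch, such an `init.sql` might contain standard DuckDB statements like the following; the extension, connection string, and secret values here are illustrative placeholders rather than Shaper-specific APIs:

```sql
-- Hypothetical init.sql sketch; adjust to your environment.
INSTALL postgres;
LOAD postgres;

-- Attach a remote Postgres database as a read-only source
-- (the connection string is a placeholder):
ATTACH 'host=db.internal dbname=analytics user=reader' AS analytics (TYPE postgres, READ_ONLY);

-- Create a DuckDB secret for reading raw data from S3
-- (key and region values are placeholders):
CREATE SECRET raw_data (
  TYPE s3,
  KEY_ID 'AKIA...',
  SECRET 'change-me',
  REGION 'us-east-1'
);
```

To mount the file, add a bind mount such as `- ./init.sql:/var/lib/shaper/init.sql:ro` to the `volumes` list of the compose file.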
Secure your Shaper Instance
By default, no authentication is required to access the Shaper web interface. When exposing Shaper to the internet, you must enable authentication to prevent unauthorized access to your data.
To do so, click “Admin” in the bottom-left corner, then click “Create User”.
Backing up your data
Shaper can automatically create daily snapshots of your data and upload them to S3-compatible object storage.
Shaper uses two file-based databases: SQLite and DuckDB. Internal data such as users and dashboards is stored in SQLite, while data you persist via the ingest API or `tasks.sql` lives in DuckDB. When snapshots are enabled, both databases are snapshotted and uploaded to the configured S3 bucket.

By default, Shaper automatically restores the latest snapshot on startup if no local data is found.
Setting up snapshots
- Create an S3 bucket in your preferred object storage. Make sure to set a lifecycle policy that automatically deletes old snapshots after a retention period of your choosing.
- Create an access key and secret key with permissions to read and write objects in the bucket.
- Set the following environment variables (see the compose example after this list):
  - `SHAPER_SNAPSHOT_S3_BUCKET`: The name of the S3 bucket.
  - `SHAPER_SNAPSHOT_S3_ENDPOINT`: The endpoint URL of the S3 service (e.g. `https://s3.amazonaws.com` for AWS).
  - `SHAPER_SNAPSHOT_S3_ACCESS_KEY`: The access key.
  - `SHAPER_SNAPSHOT_S3_SECRET_KEY`: The secret key.
  - `SHAPER_SNAPSHOT_S3_REGION`: The region of the S3 bucket (e.g. `us-east-1` for AWS).
  - `SHAPER_SNAPSHOT_TIME`: The time of day when the snapshot should be created, in `HH:MM` format. Defaults to `01:00`.
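For example, building on the compose file from above, the snapshot configuration might look like this; the bucket name, keys, and region are placeholders to replace with your own values:

```yaml
services:
  shaper:
    image: taleshape/shaper:0
    environment:
      - SHAPER_TLS_DOMAIN=yourdomain.com
      # Snapshot settings (all values below are placeholders):
      - SHAPER_SNAPSHOT_S3_BUCKET=my-shaper-snapshots
      - SHAPER_SNAPSHOT_S3_ENDPOINT=https://s3.amazonaws.com
      - SHAPER_SNAPSHOT_S3_ACCESS_KEY=AKIA...
      - SHAPER_SNAPSHOT_S3_SECRET_KEY=change-me
      - SHAPER_SNAPSHOT_S3_REGION=us-east-1
      - SHAPER_SNAPSHOT_TIME=01:00
    # volumes and ports as in the compose file above
```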
How snapshots work
SQLite and DuckDB snapshots are created one after the other, SQLite first.
When running in a multi-node setup, only one node creates snapshots to avoid conflicts.
SQLite snapshots
Shaper uses `VACUUM INTO` to write a consistent copy of the SQLite database to a temporary file. The temporary file is then uploaded to the S3 bucket with a name like `shaper-snapshots/shaper-sqlite-2006-01-02_15-04-05.db`.
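`VACUUM INTO` is plain SQLite, so the snapshot step amounts to something like the following; the target path is illustrative:

```sql
-- Standard SQLite: write a compacted, transactionally consistent
-- copy of the live database to a new file (path is a placeholder):
VACUUM INTO '/tmp/shaper-sqlite-snapshot.db';
```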
The SQLite snapshot includes the NATS consumer offsets in the `consumer_state` table, so after a restore Shaper automatically resumes from the last processed message if the NATS stream contains newer data.
DuckDB snapshots
Shaper uses `EXPORT DATABASE` and `IMPORT DATABASE` to create and restore DuckDB snapshots directly to and from S3, from within DuckDB.
The data is stored in S3 as zstd-compressed Parquet files plus two SQL scripts, in a folder like `shaper-snapshots/shaper-duckdb-2006-01-02_15-04-05/`.
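In plain DuckDB terms, the snapshot and restore steps correspond to something like the following; the bucket and folder names are placeholders:

```sql
-- Create a snapshot: export all tables as zstd-compressed Parquet
-- files plus schema and load scripts into an S3 folder:
EXPORT DATABASE 's3://my-shaper-snapshots/shaper-snapshots/shaper-duckdb-2006-01-02_15-04-05'
  (FORMAT parquet, COMPRESSION zstd);

-- Restore the snapshot into an empty database:
IMPORT DATABASE 's3://my-shaper-snapshots/shaper-snapshots/shaper-duckdb-2006-01-02_15-04-05';
```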
To do this from within DuckDB, Shaper creates a DuckDB S3 secret for the configured S3 bucket. This secret is always available, which means you can also query the snapshots interactively from DuckDB if needed.
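For example, you can inspect a snapshot from any DuckDB session that has access to the secret; the bucket, folder, and file names below are illustrative:

```sql
-- List the Parquet files in a snapshot folder:
SELECT * FROM glob('s3://my-shaper-snapshots/shaper-snapshots/shaper-duckdb-2006-01-02_15-04-05/*.parquet');

-- Read one exported table directly (the table name is a placeholder):
SELECT count(*)
FROM 's3://my-shaper-snapshots/shaper-snapshots/shaper-duckdb-2006-01-02_15-04-05/events.parquet';
```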
When restoring a DuckDB snapshot, Shaper first runs all `INSTALL`, `LOAD`, and `ATTACH` statements found in the `init.sql` file, since some views and macros might depend on them.