You can export data:
- via an ETL pipeline
- via an API (to surface data into your own dApp)
- via a CSV file download
- by connecting to the Data Lake from a Jupyter Notebook
| Option | Limit | Summary | Common Use Case |
| --- | --- | --- | --- |
| CSV Download | 5 MB/query | Click the download button after executing a SQL query. | Download SQL query results as a CSV from the UI. |
| Analytics API | 10 API calls/sec | 1. Supports any SQL input.<br>2. Exports data in CSV format. | Get data from ad-hoc queries:<br>• exploratory use cases<br>• tables that don't update often (usually metadata) |
| Transactional/GraphQL API | 1k API calls/sec | 1. Serves as a GraphQL API.<br>2. Build your own API with your desired transformation logic. | Serve data streams directly to dApps/apps that need low-latency (10 ms) responses. |
| ETL pipeline* | None | 1. ZettaBlock-managed ELT pipelines (integrity & uptime guaranteed).<br>2. Self-serve ELT connectors (you provide your DB credentials).<br>3. Exports to BigQuery, Snowflake, S3, Databricks, MongoDB, Postgres, and other popular databases and warehouses. | The default choice for analytics use cases, e.g. ingesting new Polygon transactions every 24 hours. |
*To discuss the option of exporting data via ETL Pipeline, contact our team directly.
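As a rough illustration of using the Analytics API from Python, the sketch below POSTs a SQL query and parses the CSV response, with a client-side throttle to stay under the 10 calls/sec limit from the table above. The endpoint URL and auth header are placeholders, not ZettaBlock's actual API; take the real values from the ZettaBlock API reference.

```python
import csv
import io
import time
import urllib.request


class RateLimiter:
    """Client-side throttle: at most max_calls per period seconds."""

    def __init__(self, max_calls: int, period: float = 1.0):
        self.min_interval = period / max_calls
        self.last_call = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep min_interval between calls.
        sleep_for = self.last_call + self.min_interval - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last_call = time.monotonic()


# Analytics API limit from the table above: 10 calls/sec.
limiter = RateLimiter(max_calls=10)


def run_query(sql: str, api_key: str) -> list[dict]:
    """POST a SQL query and parse the CSV response into dicts.

    The URL and header below are hypothetical -- check the
    ZettaBlock docs for the real endpoint and auth scheme.
    """
    limiter.wait()
    req = urllib.request.Request(
        "https://api.zettablock.example/v1/analytics/query",  # placeholder
        data=sql.encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))
```

Keeping the throttle on the client side means a burst of queries queues locally instead of being rejected by the server's rate limiter.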
ZettaBlock supports both import and export of data.
Bring your own public or private data into ZettaBlock. You can import data from your own data sources, such as MongoDB and Snowflake, or from local files such as CSVs, and analyze it alongside the existing data on ZettaBlock.
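Before importing a local CSV, it can save a failed upload to sanity-check the file first. The stdlib sketch below verifies that required columns are present and that every row has the same width as the header; the column names are illustrative, not a schema ZettaBlock requires.

```python
import csv
import io


def validate_csv(text: str, required_cols: set[str]) -> int:
    """Check a CSV for required columns and uniform row widths
    before uploading it as a data source; returns the data row count."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    missing = required_cols - set(header)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    n = 0
    for row in reader:
        if len(row) != len(header):
            raise ValueError(f"ragged row {n + 1}: {row}")
        n += 1
    return n


# Example with made-up transaction columns:
sample = "tx_hash,block_number,value\n0xabc,100,1.5\n0xdef,101,2.0\n"
print(validate_csv(sample, {"tx_hash", "block_number"}))  # → 2
```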
To find out how to connect your own data, visit this documentation page.