Export and Import Data

You can export data:

  • via an ETL pipeline
  • via an API (to surface data in your own dApp)
  • via a CSV file download
  • by connecting to the Data Lake via a Jupyter Notebook
| Option | Limit | Summary | Common Use Case |
| --- | --- | --- | --- |
| CSV Download | 5 MB/query | Click the download button after executing the SQL query. | Download SQL query results as a CSV from the UI. |
| Analytics API | 10 API calls/sec | Supports any SQL input and exports data in CSV format. | Get data from ad-hoc queries: exploratory use cases, or tables that rarely update (usually metadata). |
| Transactional/GraphQL API | 1,000 API calls/sec | Serves as a GraphQL API; build your own API with the desired transformation logic. | Serve data streams directly to dApps/apps that need low-latency (10 ms) responses. |
| ETL pipeline* | No limit | ZettaBlock-managed ELT pipelines (integrity and uptime guaranteed), or self-serve ELT connectors (you provide the DB credentials); can export to BigQuery, Snowflake, S3, Databricks, MongoDB, Postgres, and other popular databases and warehouses. | The default choice for analytics use cases, e.g., ingesting new Polygon transactions every 24 hours. |

*To discuss exporting data via an ETL pipeline, contact our team directly.
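For the Analytics API row above, here is a minimal Python sketch of submitting a SQL query and parsing the CSV response. The endpoint URL, header name, payload shape, and table name are all placeholders, not the documented API; check the API reference for the real values.

```python
# A minimal sketch of pulling query results through the Analytics API.
# The endpoint, header, payload field, and table name below are
# hypothetical -- substitute the values from the API reference.
import csv
import io

import requests

API_KEY = "YOUR_API_KEY"                       # your ZettaBlock API key
ENDPOINT = "https://api.zettablock.com/query"  # hypothetical endpoint URL

# Any SQL the Analytics API accepts; results come back as CSV.
sql = "SELECT hash, block_number FROM polygon.transactions LIMIT 10"

resp = requests.post(
    ENDPOINT,
    headers={"X-API-KEY": API_KEY},  # hypothetical header name
    json={"query": sql},             # hypothetical payload field
    timeout=30,
)
resp.raise_for_status()

# Parse the CSV body into rows. If you loop over many queries,
# stay under the 10 calls/sec limit.
rows = list(csv.DictReader(io.StringIO(resp.text)))
print(rows[:3])
```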

1. Downloading data via a CSV file:

Step 1: Write the SQL query and run it.

Step 2: Download the results in CSV format.
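Once downloaded, the file is a plain CSV you can inspect locally. A minimal sketch, assuming the file was saved as `query_results.csv` (a hypothetical filename):

```python
# Load the downloaded CSV for local analysis. Downloads are capped at
# 5 MB per query, so the frame is small enough to hold in memory.
import pandas as pd

df = pd.read_csv("query_results.csv")  # hypothetical filename
print(df.shape)
print(df.head())
```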

2. Data Connectors

ZettaBlock supports both importing and exporting data.

Bring your own public or private data into ZettaBlock. You can easily import data from your own data sources such as MongoDB and Snowflake, or from local files such as CSVs, and analyze it alongside the existing data on ZettaBlock.
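If your source data lives in a database, one way to prepare it as a local file is to dump a collection to CSV first. A minimal Python sketch, assuming a MongoDB source; the connection string, database, and collection names are placeholders:

```python
# Dump a MongoDB collection to a CSV that can then be imported as a
# local file. Connection string, database, and collection names are
# hypothetical.
import csv

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection string
docs = client["mydb"]["events"].find(limit=1000)   # hypothetical db/collection

with open("events.csv", "w", newline="") as f:
    writer = None
    for doc in docs:
        doc.pop("_id", None)  # ObjectId does not serialize cleanly to CSV
        if writer is None:
            # Use the first document's keys as the header row.
            writer = csv.DictWriter(f, fieldnames=list(doc), extrasaction="ignore")
            writer.writeheader()
        writer.writerow(doc)
```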

To find out how to connect your own data, visit this documentation page.
