
Data Exports API

Create and download bulk exports of analytics data in CSV or JSON format.


Overview

The Exports API allows you to:

  • Export analytics data for external analysis
  • Choose between CSV and JSON formats
  • Apply filters to export specific segments
  • Handle large exports asynchronously

Base path: /exports


Export Types

| Type | Description |
| --- | --- |
| pages | Page-level metrics (path, views, entrances, conversions) |
| sources | Traffic by UTM source |
| mediums | Traffic by UTM medium |
| campaigns | Traffic by UTM campaign |
| terms | Traffic by UTM term (keywords) |
| countries | Traffic by country |
| devices | Traffic by device type |
| browsers | Traffic by browser |
| os | Traffic by operating system |
| conversions | Conversion events |
| microconversions | Microconversion events |
| time_series | Daily time series data |
| landing_pages | Landing page metrics |

Export Formats

| Format | Content-Type | Best For |
| --- | --- | --- |
| csv | text/csv | Spreadsheets, BI tools |
| json | application/json | Programmatic analysis |

Create Export

POST /exports?site_id={site_id}

Create a new export job.

Request Body:

{
  "export_type": "sources",
  "format": "csv",
  "date_from": "2025-01-01",
  "date_to": "2025-01-10",
  "filters": {
    "country": "ES",
    "utm_medium": "cpc"
  }
}
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| export_type | enum | Yes | Type of data to export |
| format | enum | No | csv (default) or json |
| date_from | date | Yes | Start date (YYYY-MM-DD) |
| date_to | date | Yes | End date (YYYY-MM-DD) |
| filters | object | No | Filter conditions |
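As a quick sketch, the request body above can be assembled and sanity-checked client-side before sending. The allowed values come from the Export Types table; build_export_payload is a hypothetical helper, not part of any official client:

```python
from datetime import date

# Valid export_type values, per the Export Types table.
EXPORT_TYPES = {
    "pages", "sources", "mediums", "campaigns", "terms", "countries",
    "devices", "browsers", "os", "conversions", "microconversions",
    "time_series", "landing_pages",
}

def build_export_payload(export_type, date_from, date_to,
                         fmt="csv", filters=None):
    """Build a create-export request body, validating the obvious mistakes."""
    if export_type not in EXPORT_TYPES:
        raise ValueError(f"Unknown export_type: {export_type}")
    if fmt not in ("csv", "json"):
        raise ValueError("format must be 'csv' or 'json'")
    # Dates must parse as YYYY-MM-DD and be ordered.
    start, end = date.fromisoformat(date_from), date.fromisoformat(date_to)
    if start > end:
        raise ValueError("date_from must not be after date_to")
    payload = {
        "export_type": export_type,
        "format": fmt,
        "date_from": date_from,
        "date_to": date_to,
    }
    if filters:
        payload["filters"] = filters
    return payload

print(build_export_payload("sources", "2025-01-01", "2025-01-10",
                           filters={"country": "ES"}))
```

The resulting dict is what you pass as the JSON body of POST /exports.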

Filter Options

{
  "filters": {
    "utm_source": "google",
    "utm_medium": "cpc",
    "utm_campaign": "spring_sale",
    "utm_term": "running shoes",
    "utm_content": "banner_a",
    "country": "ES",
    "device_type": "mobile",
    "browser": "Chrome",
    "os": "iOS",
    "content_grouping": "blog",
    "conversion_type": "purchase",
    "advanced_filters": "path:contains:/products/"
  }
}

Response (201 Created):

{
  "job": {
    "id": 1,
    "job_id": "550e8400-e29b-41d4-a716-446655440000",
    "site_id": "my-site",
    "export_type": "sources",
    "format": "csv",
    "date_from": "2025-01-01",
    "date_to": "2025-01-10",
    "filters": {"country": "ES", "utm_medium": "cpc"},
    "status": "completed",
    "estimated_rows": 850,
    "actual_rows": 847,
    "file_size_bytes": 45230,
    "file_size_human": "44.2 KB",
    "progress_percent": 100,
    "download_url": "https://api.sealmetrics.com/api/v1/exports/download/abc123...",
    "download_expires_at": "2025-01-11T14:30:00Z",
    "started_at": "2025-01-10T14:30:00Z",
    "completed_at": "2025-01-10T14:30:02Z",
    "created_at": "2025-01-10T14:30:00Z"
  },
  "estimate": {
    "estimated_rows": 850,
    "estimated_size_bytes": 45000,
    "estimated_size_human": "43.9 KB",
    "recommended_format": "csv",
    "will_use_background_job": false,
    "estimated_duration_seconds": 2
  },
  "message": "Export completed. Ready for download."
}

Export Status Values

| Status | Description |
| --- | --- |
| pending | Job created, waiting to start |
| estimating | Calculating export size |
| generating | Export in progress |
| completed | Ready for download |
| failed | Export failed (check error_message) |
| expired | Download link expired |

Estimate Export Size

POST /exports/estimate?site_id={site_id}

Get size estimate without creating a job.

Request Body: Same as create export.

Response:

{
  "estimated_rows": 125000,
  "estimated_size_bytes": 8500000,
  "estimated_size_human": "8.1 MB",
  "recommended_format": "csv",
  "will_use_background_job": true,
  "estimated_duration_seconds": 45
}
| Field | Description |
| --- | --- |
| estimated_rows | Approximate number of rows |
| estimated_size_bytes | Approximate file size |
| will_use_background_job | true if more than 10,000 rows (async processing) |
| estimated_duration_seconds | Approximate processing time |
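The estimate tells you up front which endpoint to use. A minimal sketch of that decision, assuming the 10,000-row threshold documented above (choose_endpoint is an illustrative helper, not part of the API):

```python
# Documented threshold above which exports run as background jobs.
STREAM_ROW_LIMIT = 10_000

def choose_endpoint(estimate):
    """Pick an endpoint given a parsed /exports/estimate response."""
    if (estimate.get("will_use_background_job")
            or estimate.get("estimated_rows", 0) >= STREAM_ROW_LIMIT):
        return "POST /exports"         # background job, poll for status
    return "POST /exports/stream"      # small enough to stream directly

print(choose_endpoint({"estimated_rows": 850, "will_use_background_job": False}))
print(choose_endpoint({"estimated_rows": 125000, "will_use_background_job": True}))
```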

List Exports

GET /exports?site_id={site_id}

List all export jobs for the specified site.

Query Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| limit | integer | 20 | Max results (1-100) |
| offset | integer | 0 | Skip N results |
| include_expired | boolean | false | Include expired exports |

Response:

{
  "exports": [
    {
      "id": 5,
      "job_id": "550e8400-e29b-41d4-a716-446655440000",
      "export_type": "sources",
      "format": "csv",
      "status": "completed",
      "actual_rows": 847,
      "file_size_human": "44.2 KB",
      "download_url": "https://...",
      "created_at": "2025-01-10T14:30:00Z"
    }
  ],
  "total": 12
}
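With limit, offset, and total, you can page through every export. A sketch of the paging loop, with the HTTP call injected as a callable (fetch_page stands in for a real GET /exports request returning the parsed JSON body):

```python
def iter_exports(fetch_page, page_size=20):
    """Yield every export across pages of the list endpoint.

    fetch_page(limit=..., offset=...) must return a parsed response body
    shaped like {"exports": [...], "total": N}.
    """
    offset = 0
    while True:
        body = fetch_page(limit=page_size, offset=offset)
        exports = body.get("exports", [])
        yield from exports
        offset += len(exports)
        # Stop when a page comes back empty or we've seen `total` items.
        if not exports or offset >= body.get("total", 0):
            break
```

In real use, fetch_page would wrap a requests.get call with your X-API-Key header and site_id parameter.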

Get Export Status

GET /exports/{job_id}?site_id={site_id}

Get status of a specific export job.

Response: Same as single job in create response.


Download Export

GET /exports/download/{download_token}

Download completed export file.

note

This endpoint uses a secure download token and does not require authentication headers. The token is included in the download_url field.

Response Headers:

Content-Type: text/csv
Content-Disposition: attachment; filename="sealmetrics_sources_20250101_20250110.csv"
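The Content-Disposition header carries a suggested filename, which you can use when saving the download. A small sketch of extracting it (filename_from_disposition is an illustrative helper; it assumes the quoted-filename form shown above):

```python
import re

def filename_from_disposition(header, default="export.csv"):
    """Pull the quoted filename out of a Content-Disposition header."""
    match = re.search(r'filename="([^"]+)"', header or "")
    return match.group(1) if match else default

print(filename_from_disposition(
    'attachment; filename="sealmetrics_sources_20250101_20250110.csv"'))
# → sealmetrics_sources_20250101_20250110.csv
```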

Stream Export (Small Datasets)

POST /exports/stream?site_id={site_id}

Stream export directly for small datasets (<10K rows).

Request Body: Same as create export.

Response: File stream with appropriate content type.

Error (413 - Too Large):

{
  "detail": "Export too large for streaming (125,000 rows). Use POST /exports to create a background job."
}
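A common pattern is to try streaming first and fall back to a background job on 413. A sketch of just the fallback logic, with the two HTTP calls injected as callables (stream_export, create_export_job, and TooLarge are illustrative names, not part of any official client):

```python
class TooLarge(Exception):
    """Raised by the stream call when the API answers 413."""

def export_with_fallback(stream_export, create_export_job, payload):
    """Try POST /exports/stream; on 413, fall back to POST /exports."""
    try:
        return ("streamed", stream_export(payload))
    except TooLarge:
        # Too many rows for streaming: create a background job instead.
        return ("job", create_export_job(payload))
```

Here stream_export would wrap the streaming request and raise TooLarge when response.status_code == 413, while create_export_job would return the job object to poll.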

Cancel/Delete Export

DELETE /exports/{job_id}?site_id={site_id}

Cancel a pending export or delete a completed one.

Response: 204 No Content


Small vs Large Exports

| Rows | Processing | Response |
| --- | --- | --- |
| Less than 10,000 | Synchronous | Immediate download URL |
| 10,000 or more | Background job | Poll status until complete |

For large exports:

  1. Create export (returns status: pending)
  2. Poll GET /exports/{job_id} until status: completed
  3. Download using the download_url

  • Download links expire after 24 hours
  • After expiration, you must create a new export
  • Expired exports show status: expired
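Before reusing a stored download_url, it's worth checking the 24-hour window locally. A sketch using the download_expires_at timestamp from the job responses above (download_still_valid is an illustrative helper):

```python
from datetime import datetime, timezone

def download_still_valid(job, now=None):
    """True if the job is completed and its download link hasn't expired."""
    # download_expires_at is ISO 8601 with a trailing Z, per the examples above.
    expires = datetime.fromisoformat(
        job["download_expires_at"].replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return job.get("status") == "completed" and now < expires
```

If this returns False, create a new export rather than retrying the stale URL.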

Rate Limits

| Plan | Concurrent Exports | Max Rows/Export |
| --- | --- | --- |
| Free | 1 | 100,000 |
| Pro | 3 | 1,000,000 |
| Enterprise | 10 | Unlimited |
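The table above doesn't say how the API responds when the concurrency limit is exceeded, so a simple client-side guard is to count in-flight jobs from GET /exports before creating a new one. A sketch under that assumption (can_create_export is an illustrative helper; the status values come from the Export Status table):

```python
# Per-plan concurrent-export limits from the table above.
PLAN_CONCURRENCY = {"free": 1, "pro": 3, "enterprise": 10}

# Statuses that count as "in flight", per the Export Status table.
ACTIVE_STATUSES = {"pending", "estimating", "generating"}

def can_create_export(plan, current_exports):
    """current_exports: the 'exports' list from GET /exports."""
    active = sum(1 for e in current_exports
                 if e.get("status") in ACTIVE_STATUSES)
    return active < PLAN_CONCURRENCY[plan.lower()]
```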

Code Examples

Python - Export and Download

import requests
import time

API_KEY = "sm_your_api_key"
BASE_URL = "https://api.sealmetrics.com/api/v1"

def export_data(site_id, export_type, date_from, date_to, filters=None):
    """Create export and return download URL."""

    # Create export
    response = requests.post(
        f"{BASE_URL}/exports",
        headers={"X-API-Key": API_KEY},
        params={"site_id": site_id},
        json={
            "export_type": export_type,
            "format": "csv",
            "date_from": date_from,
            "date_to": date_to,
            "filters": filters or {}
        }
    )
    response.raise_for_status()
    data = response.json()

    job = data["job"]
    job_id = job["job_id"]

    # Poll until complete (for large exports)
    while job["status"] in ("pending", "estimating", "generating"):
        time.sleep(2)
        response = requests.get(
            f"{BASE_URL}/exports/{job_id}",
            headers={"X-API-Key": API_KEY},
            params={"site_id": site_id}
        )
        response.raise_for_status()
        job = response.json()

    if job["status"] == "failed":
        raise RuntimeError(f"Export failed: {job.get('error_message')}")

    return job["download_url"]


def download_file(download_url, output_path):
    """Download export file."""
    response = requests.get(download_url, stream=True)
    response.raise_for_status()

    with open(output_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)


# Usage
url = export_data(
    site_id="my-site",
    export_type="sources",
    date_from="2025-01-01",
    date_to="2025-01-10",
    filters={"country": "ES"}
)
download_file(url, "sources_export.csv")

JavaScript - Stream Small Export

async function streamExport(accountId, exportType, dateFrom, dateTo) {
  const response = await fetch(
    `${BASE_URL}/exports/stream?site_id=${accountId}`,
    {
      method: 'POST',
      headers: {
        'X-API-Key': API_KEY,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        export_type: exportType,
        format: 'csv',
        date_from: dateFrom,
        date_to: dateTo
      })
    }
  );

  if (response.status === 413) {
    throw new Error('Export too large for streaming. Use background job.');
  }
  if (!response.ok) {
    throw new Error(`Export failed with status ${response.status}`);
  }

  const blob = await response.blob();
  return blob;
}

Check Estimate Before Export

def estimate_first(site_id, export_config):
    """Check size before creating export."""

    response = requests.post(
        f"{BASE_URL}/exports/estimate",
        headers={"X-API-Key": API_KEY},
        params={"site_id": site_id},
        json=export_config
    )
    response.raise_for_status()
    estimate = response.json()

    print(f"Estimated rows: {estimate['estimated_rows']:,}")
    print(f"Estimated size: {estimate['estimated_size_human']}")
    print(f"Background job: {estimate['will_use_background_job']}")
    print(f"Est. duration: {estimate['estimated_duration_seconds']}s")

    if estimate['estimated_rows'] > 500000:
        confirm = input("Large export. Continue? (y/n): ")
        if confirm.lower() != 'y':
            return None

    # Proceed with export (export_config keys must match export_data's parameters)
    return export_data(site_id, **export_config)