Usage

GET /v1/usage returns request and cost telemetry for your account. It is the same data the Usage tab of the developer dashboard renders — call it directly when you want to feed it into your own observability stack, your finance dashboard, or a weekly digest.

The endpoint accepts a date range and a group_by axis. By default it returns the last 30 days, bucketed by day.

Why monitor usage

  • Spend. Every scan and every export deducts from the same budget pool as the regsn.app UI. Knowing where the cents are going stops surprises. The dashboard’s Settings → Billing page is the UI for the same pool.
  • Capacity. Per-key rate limits are documented in Rate limits. If you’re routinely hitting 429s, the per-key buckets in /v1/usage tell you which key needs to be sharded or which endpoint to back off.
  • Audit. last_used_at on a key only tells you the key is alive; /v1/usage tells you what it’s been doing.

Request

```shell
curl "https://api.regsn.app/v1/usage?group_by=day&from=2026-04-15T00:00:00Z" \
  -H "Authorization: Bearer $REGSN_API_KEY"
```

| Param      | Type                       | Default       | Notes           |
|------------|----------------------------|---------------|-----------------|
| `from`     | ISO 8601                   | now − 30 days | Start of range. |
| `to`       | ISO 8601                   | now           | End of range.   |
| `group_by` | `day` / `endpoint` / `key` | `day`         | How to bucket.  |

[!TIP] /v1/usage is dual-auth: it accepts either a bearer key (for API consumers, returns that account’s usage) or a Clerk session token (for the dashboard, returns the signed-in user’s usage). For programmatic use, just pass your bearer key.
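Outside the SDK, this is a plain GET with query parameters and a bearer header. A minimal sketch using only Python's standard library; the key is read from the `REGSN_API_KEY` environment variable, as in the curl example:

```python
import os
import urllib.parse
import urllib.request

def build_usage_request(base="https://api.regsn.app", **params):
    """Build a GET /v1/usage request with the given query parameters."""
    query = urllib.parse.urlencode(params)
    url = f"{base}/v1/usage?{query}" if query else f"{base}/v1/usage"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {os.environ.get('REGSN_API_KEY', '')}"},
    )

# `from` is a Python keyword -- pass it via dict unpacking, as with the SDK.
req = build_usage_request(group_by="day", **{"from": "2026-04-15T00:00:00Z"})
# urllib.request.urlopen(req) would then fetch the JSON body documented below.
```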

Response

```json
{
  "totals": {
    "request_count": 1284,
    "scan_count": 47,
    "export_count": 12,
    "total_cost_cents": 412
  },
  "buckets": [
    {
      "key": "2026-04-15",
      "request_count": 73,
      "scan_count": 2,
      "export_count": 1,
      "cost_cents": 18
    },
    {
      "key": "2026-04-16",
      "request_count": 41,
      "scan_count": 1,
      "export_count": 0,
      "cost_cents": 9
    }
  ],
  "filters": {
    "from": "2026-04-15T00:00:00.000Z",
    "to": "2026-05-15T00:00:00.000Z",
    "group_by": "day"
  }
}
```

All cost figures are in US cents. Divide by 100 for dollars.

group_by=day

buckets[i].key is an ISO date (YYYY-MM-DD). Buckets are sorted oldest-first. Days with zero activity are omitted.
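Because zero-activity days are omitted, a time-series chart needs the gaps filled client-side. A small sketch, using the bucket field names from the response above (the zero-filling itself is a suggestion, not an API feature):

```python
from datetime import date, timedelta

ZERO = {"request_count": 0, "scan_count": 0, "export_count": 0, "cost_cents": 0}

def fill_missing_days(buckets, start, end):
    """Expand group_by=day buckets so every date in [start, end] appears."""
    by_key = {b["key"]: b for b in buckets}
    out = []
    d = start
    while d <= end:
        key = d.isoformat()  # matches the bucket's YYYY-MM-DD key
        out.append(by_key.get(key, {"key": key, **ZERO}))
        d += timedelta(days=1)
    return out

days = fill_missing_days(
    [{"key": "2026-04-16", "request_count": 41, "scan_count": 1,
      "export_count": 0, "cost_cents": 9}],
    date(2026, 4, 15),
    date(2026, 4, 17),
)
# days[0] and days[2] are zero rows; days[1] is the real bucket.
```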

group_by=endpoint

buckets[i].key is the request method + path, e.g. POST /v1/scans. Sorted by cost_cents descending — the endpoints costing you the most appear first.

group_by=key

buckets[i].key is the API-key UUID (or (none) for dashboard-session traffic). Sorted by cost_cents descending. Great for spotting which key is your hottest consumer.
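Turning key buckets into a spend report is a one-liner over cost_cents. A sketch (bucket shape from the response above; the key values here are made up):

```python
def spend_share(buckets):
    """Return (key, share-of-total-cost) pairs for group_by=key buckets."""
    total = sum(b["cost_cents"] for b in buckets) or 1  # avoid div-by-zero
    return [(b["key"], b["cost_cents"] / total) for b in buckets]

shares = spend_share([
    {"key": "hypothetical-key-uuid", "cost_cents": 300},  # hottest consumer first,
    {"key": "(none)", "cost_cents": 100},                 # as the API already sorts
])
# shares[0] -> ("hypothetical-key-uuid", 0.75)
```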

CSV export

The endpoint returns JSON only. To produce a CSV for finance, transform client-side:

Python

```python
import csv

from regsn import RegSn

client = RegSn()

# `from` is a Python keyword -- pass it via dict unpacking.
data = client.usage.get(group_by="day", **{"from": "2026-04-01T00:00:00Z"})

with open("usage.csv", "w", newline="") as f:
    w = csv.DictWriter(
        f,
        fieldnames=["key", "request_count", "scan_count", "export_count", "cost_cents"],
    )
    w.writeheader()
    for row in data["buckets"]:
        w.writerow(row)
```

JavaScript

```javascript
import { RegSn } from '@regsn/api';
import { writeFile } from 'node:fs/promises';

const client = new RegSn();
const data = await client.usage.get({ group_by: 'day', from: '2026-04-01T00:00:00Z' });

const header = 'key,request_count,scan_count,export_count,cost_cents';
const rows = data.buckets.map(b =>
  `${b.key},${b.request_count},${b.scan_count},${b.export_count},${b.cost_cents}`
);

await writeFile('usage.csv', [header, ...rows].join('\n'));
```

The dashboard’s Download CSV button on the Usage tab uses exactly this pattern against the same response.

Cost attribution

/v1/usage joins your request log against snapshots and export_jobs, so each bucket’s cost_cents is the sum of:

  • Scan costs — engine + analyst + verifier billing, surfaced as actual_cost_cents (or fallback cost_cents) on the snapshot.
  • Export costs — provider fees for AI-generated artifacts (NotebookLM video, ElevenLabs audio, etc.).

Read endpoints (GET /v1/snapshots, GET /v1/scans/{id}, etc.) contribute to request_count but not to cost_cents — reads are free.
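Since reads are free, the average cost per billable action falls straight out of the totals. A sketch of that arithmetic using the sample totals from the response above:

```python
def avg_cost_per_billable_cents(totals):
    """Mean cost per scan/export; reads add to request_count but cost nothing."""
    billable = totals["scan_count"] + totals["export_count"]
    return totals["total_cost_cents"] / billable if billable else 0.0

totals = {"request_count": 1284, "scan_count": 47,
          "export_count": 12, "total_cost_cents": 412}
avg = avg_cost_per_billable_cents(totals)  # 412 / 59, about 7 cents each
```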

Pulling a weekly digest

```python
from datetime import datetime, timedelta, timezone

from regsn import RegSn

client = RegSn()
now = datetime.now(timezone.utc)
week_ago = now - timedelta(days=7)

# By endpoint for the last 7 days.
# `from` is a Python keyword -- pass it via dict unpacking.
data = client.usage.get(
    group_by="endpoint",
    to=now.isoformat(),
    **{"from": week_ago.isoformat()},
)

print(f"Total spend: ${data['totals']['total_cost_cents'] / 100:.2f}")
print(f"Scans: {data['totals']['scan_count']}, Exports: {data['totals']['export_count']}\n")
for b in data["buckets"][:5]:
    print(f"  {b['key']:35s} ${b['cost_cents']/100:>7.2f} ({b['request_count']} req)")
```

Topping up budget

If total_cost_cents is approaching your monthly budget, you’ll start seeing 402 budget_exhausted on POST /v1/scans and POST /v1/exports. Top up in the dashboard at api.regsn.app; the limit lifts immediately on the next request. The same top-up controls live on the product’s Settings → Billing page.

There is no separate “API budget” — the API and the regsn.app UI share one budget pool per account.
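Because the pool is shared, a pre-flight headroom check before a scan batch can read the same endpoint. A sketch; the monthly budget figure and alert threshold are your own configuration, not values the API returns:

```python
def budget_headroom_cents(totals, monthly_budget_cents):
    """Remaining budget this period; negative means already exhausted."""
    return monthly_budget_cents - totals["total_cost_cents"]

# `totals` as returned by GET /v1/usage for the current billing period.
totals = {"total_cost_cents": 412}
headroom = budget_headroom_cents(totals, monthly_budget_cents=500)
if headroom < 100:  # alert threshold ($1.00) -- pick your own
    print(f"Low budget: {headroom} cents left; expect 402s soon")
```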

[!WARNING] Don’t auto-retry 402s in a tight loop. The retries count against the scans rate limit but none of them will succeed (or be charged) until the budget is topped up. Surface the failure, alert your operations team, and retry once the budget is topped up.
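One way to encode that advice is a small dispatch on status code. The return values here are this sketch’s own convention, not part of the API:

```python
def handle_billing_status(status_code, attempt, topped_up=False):
    """Decide what to do after a POST /v1/scans response.

    402 budget_exhausted: don't retry automatically -- surface it.
    429: back off exponentially (see the Rate limits page).
    """
    if status_code == 402:
        # Retrying cannot succeed until the budget is topped up.
        return "retry" if topped_up else "alert"
    if status_code == 429:
        return f"backoff:{2 ** attempt}s"
    return "ok"

handle_billing_status(402, attempt=0)                  # -> "alert"
handle_billing_status(402, attempt=0, topped_up=True)  # -> "retry"
handle_billing_status(429, attempt=2)                  # -> "backoff:4s"
```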

See also

  • Authentication — last_used_at for key liveness; /v1/usage for traffic.
  • Rate limits — per-key bucket sizes; pair with group_by=key to find hotspots.
  • Settings (product) — the dashboard view of billing and budget.
  • Errors — 402 budget_exhausted.