Usage
GET /v1/usage returns request and cost telemetry for your account. It is the same data the Usage tab of the developer dashboard renders — call it directly when you want to feed it into your own observability stack, your finance dashboard, or a weekly digest.
The endpoint accepts a date range and a group_by axis. By default it returns the last 30 days, bucketed by day.
Why monitor usage
- Spend. Every scan and every export deducts from the same budget pool as the regsn.app UI. Knowing where the cents are going stops surprises. The dashboard's Settings → Billing page is the UI for the same pool.
- Capacity. Per-key rate limits are documented in Rate limits. If you're routinely hitting 429s, the per-key buckets in `/v1/usage` tell you which key needs to be sharded or which endpoint to back off.
- Audit. `last_used_at` on a key only tells you the key is alive; `/v1/usage` tells you what it's been doing.
Request
curl "https://api.regsn.app/v1/usage?group_by=day&from=2026-04-15T00:00:00Z" \
-H "Authorization: Bearer $REGSN_API_KEY"

| Param | Type | Default | Notes |
|---|---|---|---|
| from | ISO 8601 | now − 30 days | Start of range. |
| to | ISO 8601 | now | End of range. |
| group_by | day / endpoint / key | day | How to bucket. |
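If you are building the request without an SDK, the parameters in the table map directly onto the query string. A minimal sketch using only the standard library (the `usage_url` helper is ours, purely for illustration):

```python
from urllib.parse import urlencode

BASE = "https://api.regsn.app/v1/usage"

def usage_url(group_by=None, start=None, end=None):
    """Build a /v1/usage URL; omitted params fall back to the
    server defaults (last 30 days, bucketed by day)."""
    params = {}
    if start is not None:
        params["from"] = start   # ISO 8601 range start
    if end is not None:
        params["to"] = end       # ISO 8601 range end
    if group_by is not None:
        params["group_by"] = group_by
    query = urlencode(params)    # percent-encodes the colons in timestamps
    return BASE + ("?" + query if query else "")

print(usage_url(group_by="endpoint", start="2026-04-15T00:00:00Z"))
```

Pass the resulting URL to any HTTP client along with the Authorization header shown in the curl example above.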
> [!TIP]
> `/v1/usage` is dual-auth: it accepts either a bearer key (for API consumers, returns that account's usage) or a Clerk session token (for the dashboard, returns the signed-in user's usage). For programmatic use, just pass your bearer key.
Response
{
  "totals": {
    "request_count": 1284,
    "scan_count": 47,
    "export_count": 12,
    "total_cost_cents": 412
  },
  "buckets": [
    {
      "key": "2026-04-15",
      "request_count": 73,
      "scan_count": 2,
      "export_count": 1,
      "cost_cents": 18
    },
    {
      "key": "2026-04-16",
      "request_count": 41,
      "scan_count": 1,
      "export_count": 0,
      "cost_cents": 9
    }
  ],
  "filters": {
    "from": "2026-04-15T00:00:00.000Z",
    "to": "2026-05-15T00:00:00.000Z",
    "group_by": "day"
  }
}

All cost figures are in US cents. Divide by 100 for dollars.
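Since every cost field uses the same unit, converting is a single division; the per-bucket costs also sum to the range total. A quick sanity check on a toy payload shaped like the response above:

```python
# Toy payload mirroring the response shape (values are illustrative).
response = {
    "totals": {"total_cost_cents": 27},
    "buckets": [
        {"key": "2026-04-15", "cost_cents": 18},
        {"key": "2026-04-16", "cost_cents": 9},
    ],
}

# Per-bucket costs should reconcile with the range total.
bucket_sum = sum(b["cost_cents"] for b in response["buckets"])
assert bucket_sum == response["totals"]["total_cost_cents"]

# Divide by 100 for dollars.
print(f"${bucket_sum / 100:.2f}")  # → $0.27
```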
group_by=day
buckets[i].key is an ISO date (YYYY-MM-DD). Buckets are sorted oldest-first. Days with zero activity are omitted.
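Because zero-activity days are omitted, charting the raw buckets can silently skip gaps. A sketch of densifying the range client-side before plotting (the `fill_days` helper is ours, not part of the SDK):

```python
from datetime import date, timedelta

def fill_days(buckets, start, end):
    """Return one bucket per day in [start, end], inserting
    zeroed buckets for days the API omitted."""
    by_day = {b["key"]: b for b in buckets}
    zero = {"request_count": 0, "scan_count": 0,
            "export_count": 0, "cost_cents": 0}
    out, d = [], start
    while d <= end:
        key = d.isoformat()          # matches the YYYY-MM-DD bucket keys
        out.append(by_day.get(key, {"key": key, **zero}))
        d += timedelta(days=1)
    return out

buckets = [{"key": "2026-04-15", "request_count": 73, "scan_count": 2,
            "export_count": 1, "cost_cents": 18}]
dense = fill_days(buckets, date(2026, 4, 15), date(2026, 4, 17))
print([b["key"] for b in dense])  # → ['2026-04-15', '2026-04-16', '2026-04-17']
```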
group_by=endpoint
buckets[i].key is the request method + path, e.g. POST /v1/scans. Sorted by cost_cents descending — the endpoints costing you the most appear first.
group_by=key
buckets[i].key is the API-key UUID (or (none) for dashboard-session traffic). Sorted by cost_cents descending. Great for spotting which key is your hottest consumer.
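Since the buckets arrive sorted by cost_cents descending, the hottest consumer is always buckets[0]. A small sketch that flags any single key responsible for more than half of total spend (the 50% threshold is our choice, not an API concept):

```python
def hottest_key(data, share=0.5):
    """Return the top key's id if it exceeds `share` of total spend,
    else None. Expects a group_by=key response shape."""
    total = data["totals"]["total_cost_cents"]
    if not data["buckets"] or total == 0:
        return None
    top = data["buckets"][0]          # sorted cost-descending by the API
    return top["key"] if top["cost_cents"] / total > share else None

# Illustrative payload; the UUID is made up.
data = {
    "totals": {"total_cost_cents": 100},
    "buckets": [
        {"key": "9f1c2d3e-4a5b-6c7d-8e9f-0a1b2c3d4e5f", "cost_cents": 80},
        {"key": "(none)", "cost_cents": 20},
    ],
}
print(hottest_key(data))  # → 9f1c2d3e-4a5b-6c7d-8e9f-0a1b2c3d4e5f
```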
CSV export
The endpoint returns JSON only. To produce a CSV for finance, transform client-side:
Python
from regsn import RegSn
import csv
client = RegSn()
# `from` is a Python keyword â pass it via dict unpacking.
data = client.usage.get(group_by="day", **{"from": "2026-04-01T00:00:00Z"})
with open("usage.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["key", "request_count", "scan_count", "export_count", "cost_cents"])
    w.writeheader()
    for row in data["buckets"]:
        w.writerow(row)

JavaScript
import { RegSn } from '@regsn/api';
import { writeFile } from 'node:fs/promises';
const client = new RegSn();
const data = await client.usage.get({ group_by: 'day', from: '2026-04-01T00:00:00Z' });
const header = 'key,request_count,scan_count,export_count,cost_cents';
const rows = data.buckets.map(b =>
`${b.key},${b.request_count},${b.scan_count},${b.export_count},${b.cost_cents}`
);
await writeFile('usage.csv', [header, ...rows].join('\n'));

The dashboard's Download CSV button on the Usage tab uses exactly this pattern against the same response.
Cost attribution
/v1/usage joins your request log against snapshots and export_jobs, so each bucketâs cost_cents is the sum of:
- Scan costs — engine + analyst + verifier billing, surfaced as `actual_cost_cents` (or fallback `cost_cents`) on the snapshot.
- Export costs — provider fees for AI-generated artifacts (NotebookLM video, ElevenLabs audio, etc.).
Read endpoints (GET /v1/snapshots, GET /v1/scans/{id}, etc.) contribute to request_count but not to cost_cents â reads are free.
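One consequence: total_cost_cents divides entirely across scans and exports, so a rough per-operation cost falls out of the totals object alone. A sketch using the example totals from the response above:

```python
# Example totals from the /v1/usage response shown earlier.
totals = {"request_count": 1284, "scan_count": 47,
          "export_count": 12, "total_cost_cents": 412}

# Reads are free, so only scans and exports carry cost.
billable = totals["scan_count"] + totals["export_count"]
avg_cents = totals["total_cost_cents"] / billable if billable else 0
print(f"~{avg_cents:.1f}¢ per billable operation")  # → ~7.0¢ per billable operation
```

This is a blended average across scans and exports; use group_by=endpoint when you need the two priced separately.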
Pulling a weekly digest
from regsn import RegSn
from datetime import datetime, timedelta, timezone
client = RegSn()
now = datetime.now(timezone.utc)
week_ago = now - timedelta(days=7)
# By endpoint for the last 7 days.
# `from` is a Python keyword â pass it via dict unpacking.
data = client.usage.get(
    group_by="endpoint",
    to=now.isoformat(),
    **{"from": week_ago.isoformat()},
)
print(f"Total spend: ${data['totals']['total_cost_cents'] / 100:.2f}")
print(f"Scans: {data['totals']['scan_count']}, Exports: {data['totals']['export_count']}\n")
for b in data["buckets"][:5]:
    print(f"  {b['key']:35s} ${b['cost_cents']/100:>7.2f} ({b['request_count']} req)")

Topping up budget
If total_cost_cents is approaching your monthly budget, you'll start seeing 402 budget_exhausted on POST /v1/scans and POST /v1/exports. Top up in the dashboard at api.regsn.app; the limit lifts immediately on the next request. The same top-up controls live on the product's Settings → Billing page.
There is no separate "API budget" — the API and the regsn.app UI share one budget pool per account.
> [!WARNING]
> Don't auto-retry 402s in a tight loop. You'll hit the `reads` or `scans` rate limit and still not get charged. Surface the failure, alert your operations team, and retry once the budget is topped up.
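A sketch of that handling, with hypothetical `post_scan` and `alert_ops` callables standing in for your HTTP call and alerting hook: 402 is treated as terminal while 429 gets exponential backoff.

```python
import time

def submit_scan(post_scan, alert_ops, max_retries=3):
    """Submit a scan; returns the response body, or None on failure.
    `post_scan` returns (status_code, body); `alert_ops` pages a human."""
    for attempt in range(max_retries):
        status, body = post_scan()
        if status == 402:                 # budget exhausted: do NOT retry
            alert_ops(f"budget exhausted: {body}")
            return None
        if status == 429:                 # rate limited: back off, then retry
            time.sleep(2 ** attempt)
            continue
        return body
    return None
```

The key point is the asymmetry: a 429 resolves by waiting, but a 402 only resolves when a human tops up the budget, so retrying it just burns rate limit.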
See also
- Authentication — `last_used_at` for key liveness; `/v1/usage` for traffic.
- Rate limits — per-key bucket sizes; pair with `group_by=key` to find hotspots.
- Settings (product) — the dashboard view of billing and budget.
- Errors — `402 budget_exhausted`.