Make existing AI agents accessible from anywhere so you and your team can see, interact with, and collaborate on what the agents are building.
Add token-bucket rate limiter per API key and a simple usage dashboard.
I'll add rate limiting middleware and wire it to Redis. I'll create a new middleware file that implements token-bucket per API key, then update the API routes to use it. I'll also add a simple usage dashboard so you can see request counts per key.
edit_file
Middleware file is in place. Now I need to check the existing API structure and apply the middleware to the v1 items routes. I'll look at how other middleware is applied in this codebase so the pattern is consistent.
search_replace
Routes are updated. I'll add the retry-after header to the 429 response so clients know when to retry, and double-check that the Redis key TTL is set correctly for the sliding window.
search_replace
Summary of what was done:
New middleware (api/src/middleware/rate-limit.ts): Token-bucket rate limiter keyed by API key. Uses Redis in production and in-memory in dev. Default 100 requests per minute per key; 429 responses include a Retry-After header.
API integration (app/api/v1/items/route.ts): The v1 items routes (GET, POST, DELETE) now run through the rate limit middleware. Unauthenticated or missing-key requests skip limiting.
Dashboard: A minimal usage view was added so you can see request counts per key over the last 24h. You can expand this later with charts or export.
If you want to tweak limits, Redis key prefix, or add more routes, say what you’d like changed.
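The token-bucket approach the agent describes can be sketched as a minimal in-memory limiter. This is an illustrative assumption, not the actual middleware: the bucket parameters and the `allowRequest` helper are invented here, and the production version described above would back the buckets with Redis rather than a `Map`.

```typescript
// Hypothetical in-memory token bucket, keyed by API key.
// The production middleware described above would store buckets in Redis.
type Bucket = { tokens: number; lastRefill: number };

const CAPACITY = 100;                 // max requests per bucket
const REFILL_PER_MS = 100 / 60_000;   // 100 tokens per minute

const buckets = new Map<string, Bucket>();

function allowRequest(
  apiKey: string,
  now: number = Date.now()
): { ok: boolean; retryAfterSec?: number } {
  let b = buckets.get(apiKey);
  if (!b) {
    b = { tokens: CAPACITY, lastRefill: now };
    buckets.set(apiKey, b);
  }
  // Refill proportionally to elapsed time, capped at capacity.
  const elapsed = now - b.lastRefill;
  b.tokens = Math.min(CAPACITY, b.tokens + elapsed * REFILL_PER_MS);
  b.lastRefill = now;

  if (b.tokens >= 1) {
    b.tokens -= 1;
    return { ok: true };
  }
  // Bucket empty: report how long until one token is available,
  // suitable for a Retry-After header on the 429 response.
  const retryAfterSec = Math.ceil((1 - b.tokens) / REFILL_PER_MS / 1000);
  return { ok: false, retryAfterSec };
}
```

A route wrapper would call `allowRequest` with the request's API key and, on `ok: false`, return 429 with `Retry-After: retryAfterSec`, skipping the check entirely when no key is present.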
Preview of app
Proxied from dev machine over secure bridge
Update saved-cards checkout: persist default card selection, validate on submit, and add a11y labels on the card picker.
Caught up in the agent thread—thanks for spinning this. On the preview, the default card affordance still reads ambiguous on mobile. Can we tighten the copy before we merge?
Install and run the CLI on any machine to create a secure WebSocket tunnel to it. Your agent and your local dev environment stay on your private machine or infrastructure but BuildAutomaton can reach them through the tunnel.
BuildAutomaton uses the Agent Client Protocol (ACP), so you can use any agent you prefer. BuildAutomaton is agent-agnostic and works with Cursor, Claude Code, Codex, Kiro, and other ACP-capable agents.
Agent sessions, files, dev servers, and admin tools, accessible from anywhere, on any device, and by the whole team
One secure link—any device. Follow live, review changes, steer the agent, keep prompting. Do it together with your team.
Teammates open a secure URL to dev servers running on machines in your network, so they can preview work in progress without a PR or a deploy
| Dev Server | Env | Ports | Status |
|---|---|---|---|
| `npm run dev` | `NODE_ENV=development` | 3000 | Running |
Proxied from localhost
A live tunnel—not a screenshot. What you run locally is what opens here.
Anyone in your workspace can try the live app running on your local dev server, on any device, without a staging deploy.
Get reactions to the WIP while it's still in iteration with the agent, instead of a rush of conflicting opinions after you've baked in decisions.
Give feedback on the real WIP draft app, instead of a design mockup or a PRD
One secure link to the live session or preview—collaborate while the agent is still iterating, not after decisions are baked in.
Steer direction and surface issues during live iteration, before the approach hardens and you’re arguing in a PR.
Product, design, and engineering collaborate on actual code changes and product drafts, not intermediate artifacts.
What people say in the thread becomes the next prompt—or a stop—not a ticket, handoff, or “I’ll sync with eng later.”
Give everyone internal tools with access to local data for fast diagnostics and troubleshooting
Local database
Bridge reaches your dev DB
Request lookup
Query rows on the bridge’s dev database
Runtime logs
Tail files on the VPC host
Log tail
Stream recent lines from a VPC workload
Hit the bridge machine’s DBs, logs, and APIs—no dashboards, no pasted queries.
Run on local, staging, or prod—right data for the ticket, fast diagnostics.
Seed test data, confirm local DB changes, run admin scripts—tool blocks give your team quick internal utilities without building a separate admin app.
Run `npx @buildautomaton/cli` (or install the CLI globally). The bridge stays connected until you stop the CLI. See our Getting started guide for full steps.