10 changes: 10 additions & 0 deletions .deepsec/.gitignore
@@ -0,0 +1,10 @@
node_modules/
.env*.local

# Scan output — regenerated by `deepsec scan` / `process`. INFO.md
# and SETUP.md (manually edited) stay tracked.
data/*/files/
data/*/runs/
data/*/reports/
data/*/project.json
data/*/tech.json
23 changes: 23 additions & 0 deletions .deepsec/AGENTS.md
@@ -0,0 +1,23 @@
# Agent setup

This is a deepsec scanning workspace. Each registered project has its
own setup prompt at `data/<id>/SETUP.md` — open the relevant one when
asked to set a project up.

## Common tasks

- **Set up a project for scanning**: read `data/<id>/SETUP.md` and
follow it (read `node_modules/deepsec/SKILL.md`, then fill
`data/<id>/INFO.md` from the target codebase).
- **Add a new project**: run `deepsec init-project <root>` — it
scaffolds `data/<id>/` and prints/writes the setup prompt for the
new project.
- **Write a custom matcher** (only after a real true-positive shows you
a pattern worth keeping): read
`node_modules/deepsec/dist/docs/writing-matchers.md`.

## Reference

The deepsec skill is at `node_modules/deepsec/SKILL.md` (after
`bun install`). The full docs ship at
`node_modules/deepsec/dist/docs/`.
74 changes: 74 additions & 0 deletions .deepsec/README.md
@@ -0,0 +1,74 @@
# deepsec

This directory holds the [deepsec](https://www.npmjs.com/package/deepsec)
config for the parent repo. It's checked into git so teammates inherit
project context (auth shape, threat model, custom matchers); generated
scan output is gitignored.

Currently configured project: `capgo` (target: `..`).

## Setup

1. `bun install` — installs deepsec.
2. Add an AI Gateway / Anthropic / OpenAI token to `.env.local`. If
you already have `claude` or `codex` CLI logged in on this
machine, you can skip the token for non-sandbox runs (`process` /
`revalidate` / `triage`); deepsec auto-detects and reuses the
subscription. See
`node_modules/deepsec/dist/docs/vercel-setup.md` after install.
3. Keep `data/capgo/INFO.md` short and project-specific. Refresh it
when auth, API-key, storage, or plugin endpoint architecture changes.

## Daily commands

```bash
bun deepsec scan
bun deepsec process --concurrency 5
bun deepsec revalidate --concurrency 5 # cuts FP rate
bun deepsec export --format md-dir --out ./findings
```

`--project-id` is auto-resolved while there's only one project in
`deepsec.config.ts`. Once you've added a second project, pass
`--project-id capgo` (or whichever id you want) explicitly.

`scan` is free (regex only). `process` is the AI stage; cost depends on
the selected agent/model and the number of files investigated. Run
state goes to `data/capgo/`.

## Adding another project

To scan another codebase from this same `.deepsec/`:

```bash
bun deepsec init-project ../some-other-package # path relative to .deepsec/
```

This appends an entry to `deepsec.config.ts` and writes
`data/<id>/{INFO.md,SETUP.md,project.json}`. Open the new SETUP.md
in your agent to fill in INFO.md.
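For orientation, a sketch of what the config might look like once a second project is registered — the second entry's `id` and `root` are hypothetical examples; `init-project` writes the real values and keeps the insert marker in place:

```typescript
// deepsec.config.ts — illustrative state after a second `init-project` run.
// The `some-other-package` entry is a hypothetical example.
import { defineConfig } from 'deepsec/config'

export default defineConfig({
  defaultAgent: 'codex',
  projects: [
    { id: 'capgo', root: '..' },
    { id: 'some-other-package', root: '../some-other-package' },
    // <deepsec:projects-insert-above>
  ],
})
```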

## Layout

```
deepsec.config.ts Project list (one entry per scanned repo)
data/capgo/
INFO.md Repo context — checked into git, hand-curated
config.json Project-specific scanner settings
project.json Generated (gitignored)
files/ One JSON per scanned source file (gitignored)
runs/ Run metadata (gitignored)
reports/ Generated markdown reports (gitignored)
AGENTS.md Pointer for coding agents
.env.local Tokens (gitignored)
```

## Docs

After `bun install`:

- Skill: `node_modules/deepsec/SKILL.md`
- Full docs: `node_modules/deepsec/dist/docs/{getting-started,configuration,models,writing-matchers,plugins,architecture,data-layout,vercel-setup,faq}.md`

Or browse on
[GitHub](https://github.com/vercel-labs/deepsec/tree/main/docs).
299 changes: 299 additions & 0 deletions .deepsec/bun.lock

Large diffs are not rendered by default.

63 changes: 63 additions & 0 deletions .deepsec/data/capgo/INFO.md
@@ -0,0 +1,63 @@
# capgo

## What this codebase does

Capgo is a live-update platform for Capacitor apps. The repo contains the Vue 3
console, the Capgo CLI workspace, Supabase Edge Functions for self-hosting, and
Cloudflare Worker entry points for production traffic. The hottest backend paths
are the plugin endpoints used by mobile devices: `/updates`, `/stats`, and
`/channel_self`; private console APIs and public customer APIs share core Hono
logic from `supabase/functions/_backend/`.

## Auth shape

- `middlewareAuth` validates Supabase JWTs with `getClaimsFromJWT` and writes
`AuthInfo` into Hono context for user-session routes.
- `middlewareV2` accepts either JWT auth or Capgo API keys from `capgkey`,
`authorization`, or `x-api-key`; it enforces key modes and failed-auth/IP
throttling.
- `middlewareKey` is API-key-only auth for public/CLI routes and may resolve
limited subkeys through `x-limited-key-id`.
- `middlewareAPISecret` protects trigger and cron functions with the `apisecret`
header and timing-safe comparison.
- User-facing Supabase reads should prefer `supabaseClient`,
`supabaseWithAuth`, or RLS-backed API-key clients; `supabaseAdmin` is for
internal jobs, trusted server-side writes, and carefully reviewed exceptions.

## Threat model

Highest-impact failures are unauthorized app, bundle, channel, org, billing, or
RBAC mutations; malicious bundle upload/download paths; and bypasses that let a
device or API key read/update another app. Hot unauthenticated plugin endpoints
must preserve plan/on-prem response contracts while staying replica-safe and
bounded under high request volume. Admin dashboard features are read-only except
for impersonation; platform admin must not become a general write capability.

## Project-specific patterns to flag

- New private/public routes without `middlewareAuth`, `middlewareV2`,
`middlewareKey`, or a deliberate public-device comment.
- User-facing handlers that use `supabaseAdmin` instead of a user/API-key client
or that pass unsanitized user input into PostgREST filters.
- Plugin `/updates`, `/stats`, or `/channel_self` code that calls the primary DB
  in the request path or in background work, queries non-replicated
  views/functions, or changes the cached `429` `on_premise_app` /
  `need_plan_upgrade` response shapes.
- PostgreSQL functions, RPCs, RLS policies, or triggers missing
`SET search_path = ''`, explicit privileges, or bounded indexed access.
- File/TUS/R2 paths that derive storage keys, bundle paths, or preview hostnames
from request data without app ownership and plan checks.

## Known false-positives

- `supabase/functions/_backend/plugins/{updates,stats,channel_self}.ts` are
intentionally unauthenticated device endpoints; validate plan/rate-limit/body
checks instead of requiring JWT/API-key auth.
- `supabase/functions/_backend/triggers/**` uses service-role/admin access by
design, but it should stay behind `middlewareAPISecret` or webhook signature
validation.
- `tests/**`, `supabase/seed.sql`, and local Playwright fixtures contain demo
credentials and isolated test data.
- `cloudflare_workers/snippet/index.js` intentionally inspects and caches some
plugin error bodies at the edge; changing those bodies can be a production bug.
- `src/auto-imports.d.ts`, generated Supabase type files, native platform
directories, and build outputs are mostly generated noise for security review.
54 changes: 54 additions & 0 deletions .deepsec/data/capgo/SETUP.md
@@ -0,0 +1,54 @@
# Agent setup for `capgo`

This is a deepsec scanning workspace. Project `capgo` targets `..`.
Use this prompt only when `data/capgo/INFO.md` needs to be refreshed.

## What to do

1. **Read the deepsec skill.** After `bun install`, the file is at
`node_modules/deepsec/SKILL.md`. It maps every doc topic to a file
under `node_modules/deepsec/dist/docs/`. Read `getting-started.md`,
`configuration.md`, and `writing-matchers.md` (skim the rest).

2. **Fill in `data/capgo/INFO.md`.** It's auto-injected into the AI
prompt for every batch — keep it short and selective.

**Length budget: 50–100 lines total.** Verbose context dilutes
signal in the scanner's prompt window. The goal is "what would a
reviewer miss if they didn't read this?", not exhaustive enumeration.

**Per-section rubric**:
- Pick 3–5 representative items per section. **Don't list every
file, helper, or callsite** — pick the patterns.
- Name primitives by their public name (e.g. `withAuthentication`,
`auth.can()`, `isTeamAdmin`). **No line numbers.** Don't enumerate
more than 5 paths in any list.
- Skip generic CWE categories — built-in matchers already cover
"SSRF", "SQL injection", "XSS". Cover what's *project-specific*:
internal auth helpers, custom middleware names, fork-specific
stubs, intended-public endpoints.
- One short paragraph or 3–5 short bullets per section. Not both.

Source material (read in this order, stop when you have enough):
- `../README.md`
- any `AGENTS.md` / `CLAUDE.md` in `..`
- `../package.json` (or `go.mod`, `pyproject.toml`, etc.)
- 5–10 representative code files (entry points, auth helpers) — not
a full code tour.

3. **(Optional) Add custom matchers** for repo-specific patterns the
built-in matchers won't catch. Read
`node_modules/deepsec/dist/docs/writing-matchers.md` first; the
workflow there starts from a confirmed finding and grows the matcher
from it. Don't add matchers speculatively — wait for a real TP.

## When you're done

The user will run:

```bash
bun deepsec scan --project-id capgo
bun deepsec process --project-id capgo
```

You can delete this file once setup is complete.
9 changes: 9 additions & 0 deletions .deepsec/data/capgo/config.json
@@ -0,0 +1,9 @@
{
"priorityPaths": [
"supabase/functions/_backend/",
"cloudflare_workers/",
"supabase/migrations/",
"src/services/",
"cli/src/"
]
}
9 changes: 9 additions & 0 deletions .deepsec/deepsec.config.ts
@@ -0,0 +1,9 @@
import { defineConfig } from 'deepsec/config'

export default defineConfig({
defaultAgent: 'codex',
projects: [
{ id: 'capgo', root: '..' },
// <deepsec:projects-insert-above>
],
})
12 changes: 12 additions & 0 deletions .deepsec/package.json
@@ -0,0 +1,12 @@
{
"name": "deepsec-workspace",
"version": "0.1.0",
"private": true,
"description": "deepsec scanning workspace",
"type": "module",
"workspaces": [],
"packageManager": "bun@1.3.11",
"dependencies": {
"deepsec": "^2.0.8"
}
}
120 changes: 120 additions & 0 deletions .github/scripts/ai-gateway-proxy.mjs
@@ -0,0 +1,120 @@
#!/usr/bin/env node
import { Buffer } from 'node:buffer'
import http from 'node:http'
import process from 'node:process'

const port = Number(process.env.PROXY_PORT ?? 8787)
const upstream = new URL(process.env.AI_GATEWAY_UPSTREAM ?? 'https://ai-gateway.vercel.sh')
const maxRequestBytes = Number(process.env.MAX_AI_PROXY_REQUEST_BYTES ?? 10 * 1024 * 1024)
const allowedPaths = new Set([
  '/v1/chat/completions',
  '/v1/embeddings',
  '/v1/models',
  '/v1/responses',
])

// Both tokens arrive newline-separated on stdin so they never appear in
// argv or the process environment.
const chunks = []
for await (const chunk of process.stdin) {
  chunks.push(Buffer.from(chunk))
}

const [gatewayToken, clientToken] = Buffer.concat(chunks).toString('utf8').split('\n').map(part => part.trim())
if (!gatewayToken || !clientToken) {
  console.error('AI Gateway proxy did not receive both required tokens')
  process.exit(1)
}

function writePlain(res, status, body) {
  res.statusCode = status
  res.setHeader('content-type', 'text/plain; charset=utf-8')
  res.end(body)
}

const server = http.createServer(async (req, res) => {
  if (req.url === '/health') {
    writePlain(res, 200, 'ok')
    return
  }

  try {
    if (req.method !== 'POST') {
      writePlain(res, 405, 'method not allowed')
      return
    }

    if (req.headers.authorization !== `Bearer ${clientToken}`) {
      writePlain(res, 401, 'unauthorized')
      return
    }

    const requestUrl = new URL(req.url ?? '/', 'http://127.0.0.1')
    if (requestUrl.pathname.includes('/../') || requestUrl.pathname.includes('/./') || !allowedPaths.has(requestUrl.pathname)) {
      writePlain(res, 404, 'not found')
      return
    }

    const bodyChunks = []
    let bodyBytes = 0
    for await (const chunk of req) {
      const buffer = Buffer.from(chunk)
      bodyBytes += buffer.length
      if (bodyBytes > maxRequestBytes) {
        writePlain(res, 413, 'request too large')
        req.destroy()
        return
      }
      bodyChunks.push(buffer)
    }
    const body = Buffer.concat(bodyChunks)
    const headers = {}

    for (const [key, value] of Object.entries(req.headers)) {
      const lowerKey = key.toLowerCase()
      if (lowerKey === 'host' || lowerKey === 'content-length' || lowerKey === 'connection')
        continue
      headers[key] = value
    }

    headers.authorization = `Bearer ${gatewayToken}`

    const upstreamUrl = new URL(requestUrl.pathname, upstream)
    upstreamUrl.search = requestUrl.search

    const upstreamResponse = await fetch(upstreamUrl, {
      method: req.method,
      headers,
      body: body.length > 0 ? body : undefined,
      redirect: 'manual',
    })

    for (const [key, value] of upstreamResponse.headers.entries()) {
      const lowerKey = key.toLowerCase()
      if (lowerKey === 'content-encoding' || lowerKey === 'transfer-encoding' || lowerKey === 'connection')
        continue
      res.setHeader(key, value)
    }

    res.statusCode = upstreamResponse.status
    if (!upstreamResponse.body) {
      res.end()
      return
    }

    const reader = upstreamResponse.body.getReader()
    while (true) {
      const { done, value } = await reader.read()
      if (done)
        break
      // Respect backpressure: pause reading until the socket drains.
      if (!res.write(value))
        await new Promise(resolve => res.once('drain', resolve))
    }
    res.end()
  }
  catch (error) {
    console.error('AI Gateway proxy request failed:', error instanceof Error ? error.message : String(error))
    writePlain(res, 502, 'AI Gateway proxy request failed')
  }
})

server.listen(port, '127.0.0.1', () => {
  console.log(`AI Gateway proxy listening on 127.0.0.1:${port}`)
})