

Lambda is trickier than long-running hosts: you can’t wrap the handler with capy run at invoke time, because Lambda invokes your function directly (not a shell). Two viable patterns, depending on how you package the function.

Pattern 1 - Container image with capy run as the Lambda entrypoint

Lambda container images support a custom runtime interface, so capy run could sit between Lambda's invocation layer and your handler - but per-invocation wrapping is more involved than with serverless frameworks. The common approach instead: decrypt at init time, not per invocation. capy run does the decrypt once when the container cold-starts, sets plaintext env vars in the process, and then the Lambda runtime invokes your handler normally for each request.
```dockerfile
FROM public.ecr.aws/lambda/nodejs:22

# Install Capy CLI into the image
RUN npm install -g @capy/cli

COPY package.json ./
RUN npm install --production
COPY . ${LAMBDA_TASK_ROOT}

# Wrap the Lambda runtime client with capy run.
# capy run resolves SECRETS_BLOB + PROJECT_KEY from Lambda env at cold start,
# sets plaintext vars, then execs the Lambda runtime which loads the handler.
ENTRYPOINT ["capy", "run", "--", "/var/runtime/bootstrap"]
CMD ["app.handler"]
```
Set the SECRETS_BLOB and PROJECT_KEY env vars on the Lambda configuration (via AWS console, CDK, SAM, or Terraform). One service fetch per cold start; warm invocations reuse the in-memory plaintext.
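With SAM, for example, the two bootstrap variables can be set on a container-image function like this (a sketch; SecretsBlobParam / ProjectKeyParam are illustrative parameter names, while SECRETS_BLOB / PROJECT_KEY are the names capy run reads):

```yaml
# template.yaml fragment - container-image function carrying only the
# encrypted blob and project key; plaintext secrets never reach AWS config.
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      Environment:
        Variables:
          SECRETS_BLOB: !Ref SecretsBlobParam
          PROJECT_KEY: !Ref ProjectKeyParam
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: .

Parameters:
  SecretsBlobParam:
    Type: String
  ProjectKeyParam:
    Type: String
    NoEcho: true
```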

Pattern 2 - Zip deploy with build-time inline

For zip-package Lambdas (SAM, Serverless Framework, CDK using NodejsFunction), the build step can bundle plaintext values into the function code:
```shell
capy run -- sam build            # or
capy run -- serverless package   # or
capy run -- cdk deploy
```
During the build, capy run decrypts .env and injects the values into process.env. Your IaC tool reads process.env and sets the Lambda's Environment.Variables config; Lambda stores those values in plaintext on AWS's side. Example with SAM:
```yaml
# template.yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: app.handler
      Runtime: nodejs22.x
      Environment:
        Variables:
          DATABASE_URL: !Ref DatabaseUrlParam
          STRIPE_SECRET: !Ref StripeSecretParam

Parameters:
  DatabaseUrlParam:
    Type: String
  StripeSecretParam:
    Type: String
```
Deploy with capy run wrapping the deploy command so the parameters are populated from decrypted env:
```shell
capy run -- sam deploy \
  --parameter-overrides \
    DatabaseUrlParam=$DATABASE_URL \
    StripeSecretParam=$STRIPE_SECRET
```
AWS stores the parameter values in plaintext on the Lambda config. This matches the Vercel / Cloudflare build-time inline tradeoff: your cloud provider sees plaintext, but the Capy service never does, and the app has zero runtime crypto overhead.
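The CDK path mentioned above works the same way: the stack reads process.env at synth time, so it only sees plaintext when the deploy runs under capy run. A sketch (stack and secret names are assumptions carried over from the SAM example; adjust to your project):

```typescript
// Sketch: CDK stack for the build-time inline pattern. Deploy with:
//   capy run -- cdk deploy
// so DATABASE_URL / STRIPE_SECRET exist in process.env during synth.
import { Stack, StackProps } from "aws-cdk-lib";
import { Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";

export class MyFunctionStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new NodejsFunction(this, "MyFunction", {
      entry: "src/app.ts",
      handler: "handler",
      runtime: Runtime.NODEJS_22_X,
      environment: {
        // Read at synth time from the env capy run populated; these land as
        // plaintext Environment.Variables on the Lambda config in AWS.
        DATABASE_URL: process.env.DATABASE_URL ?? "",
        STRIPE_SECRET: process.env.STRIPE_SECRET ?? "",
      },
    });
  }
}
```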

Which pattern to pick

  • Container image + capy run entrypoint: preserves the “secrets never touch AWS config plaintext” property. Costs a cold-start service fetch (~100-300ms added once per cold start).
  • Zip deploy + build-time inline: AWS has plaintext on the Lambda config, no runtime overhead. Simpler. Same trust model as pasting into the Lambda console directly - except the plaintext only lives in AWS, not in git / local env files.
Most teams already trust AWS with Lambda config env vars, so pattern 2 is usually the right call. Reach for pattern 1 when you’ve explicitly decided AWS shouldn’t see plaintext.

Revocation

  • Pattern 1: revoke deploy token → new cold starts fail, existing warm Lambdas keep serving until idle-killed.
  • Pattern 2: AWS has plaintext env vars - revoking the Capy deploy token does nothing for already-deployed functions. Rotate the Lambda env directly via your IaC tool (re-deploy with new values) or rotate the project key and redeploy.