Documentation
Learn how to read scan results, understand the visualization, and interpret security findings.
The SLOP Protocol
SLOP (Segmentation, Labeling, Organization, Parallelism) is the architecture that powers aurasecurity's multi-scanner orchestration and visualization.
Segmentation: Each scanner runs in isolation. Gitleaks can't corrupt Trivy's output. One tool failing doesn't break the pipeline.
Labeling: Every finding is tagged with its source tool, file path, and line number. Full transparency on where results came from.
Organization: Findings are deduplicated, categorized by severity, and organized into actionable groups. Clean, structured output.
Parallelism: All scanners run simultaneously. 8 tools don't mean 8x the wait time. Results stream in real-time.
💡 Why SLOP Matters
Traditional security tools give you a wall of text. SLOP transforms raw scanner output into structured, navigable data that can be visualized in 3D, exported to any format, and integrated into your CI/CD pipeline.
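As a minimal sketch of the Segmentation and Parallelism steps, the snippet below runs two of the scanners as isolated background jobs, each writing its own report file. The tool names and their flags are real, but the report layout and directory are illustrative assumptions, not aurasecurity's actual internals:

```shell
# Sketch only: run two scanners as isolated, parallel jobs.
# Tool names/flags are real; the reports/ layout is an assumption.
mkdir -p reports
(gitleaks detect --report-path reports/gitleaks.json || true) &   # Segmentation: own process, own file
(trivy fs --format json --output reports/trivy.json . || true) &  # a failure here can't touch gitleaks' report
wait                                                              # Parallelism: both run simultaneously
```

Because each job writes to its own file and failures are contained per job, one crashing tool leaves the other reports intact — the property the Segmentation step describes.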
Visual Guide
The 3D dashboard represents your security posture as an interactive scene. Here's what each element means:
Central Hub
The glowing hexagonal shield at the center represents the Security Auditor - the orchestration engine that coordinates all scanners and aggregates results.
Green Nodes
Clean modules or repos with no findings. These components passed all security checks. The goal is to have all nodes green.
Yellow/Orange Nodes
Modules with medium or high severity findings. These require attention but aren't critical. Review and prioritize fixes.
Red Nodes
Critical security issues detected. These require immediate attention - exposed secrets, critical CVEs, or severe misconfigurations.
Connection Lines
Lines between nodes show relationships - data flow, dependencies, or audit connections. Thicker lines indicate stronger coupling.
Orbital Rings
Circular paths show scanning progress and module groupings. Nodes orbit the hub based on their scan status and category.
Navigation
- Click a node to select it and view details
- Drag to rotate the scene
- Scroll to zoom in/out
- Hover over nodes to see tooltips
Severity Levels
Findings are categorized by severity to help you prioritize remediation efforts:
Critical
Immediate action required. Exposed secrets, critical CVEs with known exploits, or severe misconfigurations that could lead to full compromise.
High
Address soon. API keys, high-severity vulnerabilities, or security issues that could be exploited with some effort.
Medium
Plan to fix. Deprecated dependencies, missing security headers, or issues that increase attack surface.
Low
Best practice improvements. Code style, minor optimizations, or informational findings that improve security posture.
⚠️ Exit Codes
By default, aura-security scan exits with code 2 for critical findings and code 1 for high findings. Use --fail-on medium to also fail on medium and above, or --no-fail to always exit 0.
Finding Types
aurasecurity detects multiple categories of security issues:
🔑 Secrets & Credentials
Exposed API keys, passwords, tokens, private keys, and other sensitive credentials in your codebase. Detected by Gitleaks.
📦 Dependency Vulnerabilities
Known CVEs in your project dependencies. Detected by Grype, Trivy, and language-specific tools (npm audit, pip-audit, etc.).
🐛 Code Issues (SAST)
Static analysis findings like SQL injection, XSS, insecure deserialization. Detected by Semgrep.
☁️ Infrastructure as Code
Misconfigurations in Terraform, CloudFormation, Kubernetes manifests. Detected by Checkov.
🐳 Dockerfile Issues
Best practice violations in Dockerfiles - unpinned versions, running as root, missing security directives. Detected by Hadolint.
Good vs Bad: Real Examples
Here's what secure and insecure repositories look like when scanned with aurasecurity:
3D Visualization Comparison
See the difference at a glance - red means danger, green means safe.
🎨 Reading the 3D View
Node Color: Red = critical issues, Orange = high, Yellow = medium, Green = clean
Orbiting Shapes: Each shape around a node represents a finding category. More shapes = more issues.
Click to Drill Down: Click any node to see severity breakdown, click severity to see individual findings.
Scan Statistics
OWASP Juice Shop - an intentionally insecure application for security training.
Our own repository - we practice what we preach.
What Made juice-shop Fail?
Here are the actual findings from the scan:
💡 Key Takeaway
A clean scan doesn't mean "no vulnerabilities exist" - it means "no known vulnerabilities were detected by the scanners." Always combine automated scanning with manual security review for critical applications.
Interpreting Results
When You See Critical Findings
- Stop and assess - Don't deploy code with critical findings
- Check if it's a false positive - Test files, examples, and documentation may trigger detections
- Rotate compromised credentials immediately - If real secrets are exposed, assume they're compromised
- Fix at the source - Use environment variables, secret managers, or .gitignore
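As a sketch of the "fix at the source" step: read the credential from the environment at run time and keep common secret files out of version control. API_KEY and the ignore patterns below are placeholder examples, not something aurasecurity prescribes:

```shell
# Read the secret from the environment instead of hardcoding it
# (API_KEY is a placeholder name for illustration).
if [ -z "${API_KEY:-}" ]; then
  echo "API_KEY is not set; export it or inject it from your secret manager" >&2
fi

# Keep typical local secret files out of the repo (example patterns).
printf '%s\n' '.env' '*.pem' >> .gitignore
```

Pairing the environment lookup with a .gitignore entry covers both halves of the fix: the secret never lands in source, and local secret files never land in commits.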
When You See High/Medium Findings
- Prioritize by exploitability - A CVE with a public exploit is more urgent than a theoretical issue
- Check your dependencies - Often the fix is just updating a package version
- Document accepted risks - Some findings may be acceptable in your context
Using the CLI Output
CI/CD Integration
Use exit codes to gate your pipeline:
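A hedged sketch of such a gate, using the exit codes documented above (2 = critical, 1 = high, 0 = clean); the gate helper and its messages are our own, not part of the tool:

```shell
# Translate aura-security's documented exit codes into a gate decision.
# A real pipeline would also exit non-zero on the fail branches.
gate() {
  case "$1" in
    0) echo "pass: no blocking findings" ;;
    1) echo "fail: high severity findings" ;;
    2) echo "fail: critical findings" ;;
    *) echo "fail: scanner error (exit $1)" ;;
  esac
}

# In CI you would run the scan, then gate on its status:
#   aura-security scan; gate "$?"
gate 2   # → fail: critical findings
```

The catch-all branch matters: a scanner that crashes (any other exit code) should block the pipeline just like a real finding, rather than passing silently.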