Transparency by Design

Security Performance Lab

Every configuration validated under real conditions before deployment.
Methodology published. Raw data available. Independently verifiable.

4-Vault Test Environment
<1% Result Variance
100% Public Methodology

Why We Built the SPL

Most firewall vendors publish theoretical specifications: marketing numbers that look good on paper but don't reflect real-world performance with actual security features enabled.

We took a different approach. Before any SecureNet configuration ships to a customer, it runs through our Security Performance Lab. Real protocol traffic. Full security stack enabled. Metrics collected independently from the system being tested.

The problem with "trust our dashboard": When you measure performance using the same system you're testing, you're only seeing what that system wants to show you. SPL metrics come from the FreeBSD kernel and are cross-validated between client and server. No marketing filters.

The SPL exists because "trust us" isn't good enough. Every claim we make about SecureNet performance can be verified independently using our published methodology and raw data.

Lab Infrastructure

The Security Performance Lab is a dedicated 4-Vault test environment designed for repeatable, realistic performance validation.

Laboratory Topology

  • Client Vault: generates traffic patterns
  • Vault Under Test: runs the SecureNet configuration being validated
  • Server Vault: hosts the test services
  • Management Vault: controls the lab and collects metrics

All four Vaults are Protectli hardware running identical firmware for consistency.

Traffic Types Tested

We don't use synthetic benchmarks like iperf. The SPL generates real protocol traffic that mirrors actual home network usage (see the sketch after this list):

  • HTTP downloads: Large file transfers, web page loads
  • HTTPS browsing: Encrypted web traffic with TLS handshakes
  • FTP file transfers: Active and passive mode testing
  • UDP streaming: Video and audio streaming patterns
  • DNS queries: Resolution performance under load
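For illustration only, a Client Vault generator along these lines could replay part of that protocol mix. This is a sketch, not the published SPL scripts; the hostname, port, file path, and rates are placeholders.

```python
# Hypothetical sketch of a Client Vault traffic generator (not the published SPL scripts).
# The hostname, port, path, and rates below are placeholders.
import socket
import time
import urllib.request

TEST_SERVER = "server-vault.lab.example"   # placeholder for the Server Vault

def http_download(path: str = "/files/100MB.bin") -> int:
    """Pull a large file over HTTP and return the number of bytes received."""
    received = 0
    with urllib.request.urlopen(f"http://{TEST_SERVER}{path}") as resp:
        while chunk := resp.read(64 * 1024):
            received += len(chunk)
    return received

def dns_queries(names: list[str]) -> float:
    """Resolve a batch of names and return the elapsed wall-clock time."""
    start = time.monotonic()
    for name in names:
        socket.getaddrinfo(name, 443)
    return time.monotonic() - start

def udp_stream(seconds: float = 5.0, rate_pps: int = 1000, payload: int = 1200) -> int:
    """Emit a constant-rate UDP stream that mimics audio/video traffic; returns packets sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent, interval, deadline = 0, 1.0 / rate_pps, time.monotonic() + seconds
    while time.monotonic() < deadline:
        sock.sendto(b"\x00" * payload, (TEST_SERVER, 9000))
        sent += 1
        time.sleep(interval)
    return sent
```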

Testing Modes

  • Deterministic mode: precise measurement. Controlled and repeatable, with <1% variance between runs.
  • Dynamic mode: real-world simulation. Varying traffic patterns, multiple concurrent flows, realistic usage.
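The difference between the two modes comes down to seeding and scheduling. A minimal sketch of that idea follows; the parameter names and one-flow-per-second pacing are illustrative assumptions, not the SPL's actual configuration.

```python
# Illustrative only: how a deterministic run differs from a dynamic one.
import random

def schedule_flows(mode: str, duration_s: int = 300, seed: int = 42):
    """Yield (start_time, flow_type) tuples for one test run."""
    flow_types = ["http", "https", "ftp", "udp", "dns"]
    if mode == "deterministic":
        rng = random.Random(seed)          # fixed seed -> identical schedule every run
        interval = 1.0                     # fixed inter-flow spacing
    else:  # "dynamic"
        rng = random.Random()              # fresh entropy -> realistic variation
        interval = None                    # spacing drawn per flow below
    t = 0.0
    while t < duration_s:
        yield t, rng.choice(flow_types)
        t += interval if interval is not None else rng.expovariate(1.0)  # ~1 flow/sec on average
```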

Testing Methodology

Every test follows the same documented procedure. This ensures results are comparable across different hardware, configurations, and time periods.

Metric Collection

  • Data source: FreeBSD kernel counters (not the OPNsense GUI)
  • Validation: client-side and server-side measurements cross-validated against each other
  • Format: CSV with JSON metadata
  • Data points: thousands of timestamped entries per test
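As a rough sketch of what sampling kernel counters into that CSV-plus-JSON format might look like, the snippet below polls interface byte counters once per second. It is an assumption-laden illustration, not the published collector: the interface name, `netstat -ibn` column positions, and file names are placeholders and should be checked against your FreeBSD release.

```python
# Hypothetical collector sketch (not the published SPL scripts).
# Interface name, column positions, and file names are assumptions.
import csv
import json
import subprocess
import time

INTERFACE = "igc1"            # placeholder NIC name on the Vault Under Test
OUTFILE = "run_0001.csv"

def read_interface_bytes(ifname: str) -> tuple[int, int]:
    """Parse `netstat -ibn` for in/out byte counters (column layout assumed; verify locally)."""
    out = subprocess.run(["netstat", "-ibn"], capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == ifname and "<Link#" in line:
            return int(fields[7]), int(fields[10])    # Ibytes, Obytes (positions assumed)
    raise RuntimeError(f"interface {ifname} not found")

with open(OUTFILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "ibytes", "obytes"])
    for _ in range(60):                               # one sample per second for a minute
        ib, ob = read_interface_bytes(INTERFACE)
        writer.writerow([time.time(), ib, ob])
        time.sleep(1)

# JSON metadata sidecar describing the run
with open(OUTFILE + ".meta.json", "w") as f:
    json.dump({"interface": INTERFACE, "samples": 60, "source": "FreeBSD kernel counters"}, f)
```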

What We Measure

  • Throughput: Actual bits per second through the firewall
  • Packet loss: Percentage of packets dropped under load
  • CPU utilization: Processing overhead with full security stack
  • Temperature: Thermal behavior during sustained load
  • Latency: Added delay from firewall processing

Full security stack enabled during all tests: Suricata IDS/IPS with 165,340 signatures, Unbound DNS filtering with 834,427 blocked domains, and FQ-CoDel traffic shaping. We don't test with features disabled to inflate numbers.
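From raw counter samples, the headline metrics reduce to simple arithmetic. The helpers below are illustrative, not the published analysis code: throughput from two byte-counter readings, and loss from a client/server packet-count comparison.

```python
# Illustrative post-processing of collected samples (not the published analysis scripts).

def throughput_mbps(bytes_start: int, bytes_end: int, seconds: float) -> float:
    """Throughput in megabits per second from two byte-counter readings."""
    return (bytes_end - bytes_start) * 8 / seconds / 1_000_000

def packet_loss_pct(sent: int, received: int) -> float:
    """Loss as the percentage of client-sent packets the server never saw."""
    return 0.0 if sent == 0 else (sent - received) / sent * 100

# Example with made-up numbers: ~1.2 Gbps sustained over a 60-second window
print(throughput_mbps(0, 9_000_000_000, 60))    # -> 1200.0
print(packet_loss_pct(1_000_000, 1_000_000))    # -> 0.0
```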

Performance Results

These are validated throughput numbers with the complete SecureNet security stack running. Not theoretical maximums. Not marketing figures. Real measurements from real hardware.

Protectli V1410

~1.2 Gbps

Full security stack enabled

Intel N5105, 8GB RAM, 4x i226 NICs. Best for gigabit internet with comprehensive security.

Protectli VP2430

~1.7 Gbps

Full security stack enabled

Intel N150, 16GB RAM, 4x i226 NICs. Best for gigabit service with headroom, or for multi-gigabit-ready deployments.

Key Metrics

0% Packet Loss
<5ms Added Latency
~25% CPU at Capacity
69°C Peak Temperature

Real-World Context

What do these numbers mean for actual home usage? Here's how typical activities compare to SecureNet capacity:

  • 4K streaming: 25 Mbps per stream
  • HD video call: 3-5 Mbps per participant
  • Online gaming: 5-10 Mbps
  • Web browsing: 2-10 Mbps burst

Typical peak household usage: 150-200 Mbps (4x 4K streams + 2 video calls + gaming). SecureNet provides 3-5x headroom beyond typical peak usage.
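As a worked example, summing an assumed mix of concurrent activities from the table above lands in that range. The browsing allowance is an illustrative assumption, not a measured figure.

```python
# Back-of-the-envelope demand estimate for a busy household (illustrative numbers
# drawn from the table above plus an assumed browsing allowance).
activities_mbps = {
    "4K streams (4 x 25 Mbps)":              4 * 25,
    "HD video calls (2 x 5 Mbps)":           2 * 5,
    "online gaming":                         10,
    "web browsing bursts (several devices)": 40,
}
peak_demand = sum(activities_mbps.values())
print(f"Estimated concurrent peak: {peak_demand} Mbps")   # -> 160 Mbps
```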

Transparency Commitment

The SPL isn't just about validating our own work. It's about giving you the tools to validate it yourself.

What We Publish

  • Testing methodology: Complete documentation of how tests are conducted
  • Test scripts: The actual code used to generate traffic and collect metrics
  • Network topology: Diagrams showing exactly how the lab is configured
  • Configuration files: The OPNsense configs used during testing
  • Raw CSV data: Thousands of data points for independent analysis

Reproducibility

Results should be the same no matter who runs the test. Our methodology is designed for reproducibility:

  • Testing methodology: fully documented
  • Test scripts: available for review
  • Network topology: diagrammed
  • Configuration files: documented
  • Results variance: <1% between identical runs
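Anyone with the raw CSVs can repeat the variance check themselves. A minimal sketch, assuming the published files expose a throughput_mbps column; the file names are placeholders.

```python
# Sketch of an independent run-to-run variance check on published raw data.
# Assumes a 'throughput_mbps' column; file names are placeholders.
import csv
import statistics

def mean_throughput_mbps(path: str) -> float:
    """Average throughput over one run's CSV."""
    with open(path, newline="") as f:
        values = [float(row["throughput_mbps"]) for row in csv.DictReader(f)]
    return statistics.mean(values)

runs = [mean_throughput_mbps(p) for p in ("run_0001.csv", "run_0002.csv", "run_0003.csv")]
variance_pct = statistics.pstdev(runs) / statistics.mean(runs) * 100
print(f"Run-to-run variance: {variance_pct:.2f}%")   # expected to land under 1%
```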

Open invitation: Security researchers and competitors are welcome to validate our results. If you find discrepancies, we want to know.

Quality Control

The SPL isn't a one-time validation. It's an ongoing quality control process that ensures every configuration meets our standards.

Pre-Deployment Validation

  • New configuration: SPL validation required before shipping
  • Firmware update: performance regression testing
  • New ruleset: impact measurement on throughput
  • Plugin evaluation: performance impact assessment

No configuration ships without SPL validation. If a change degrades performance beyond acceptable thresholds, it doesn't go out the door.
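A gate of this kind can be expressed in a few lines. The baseline figure and 5% tolerance below are illustrative assumptions, not our published thresholds.

```python
# Hypothetical pre-deployment gate: compare a candidate run against the current
# baseline and refuse to ship if throughput regresses beyond a tolerance.
BASELINE_MBPS = 1200.0        # example baseline for the configuration under test
TOLERANCE = 0.05              # assumed policy: ship only if within 5% of baseline

def passes_regression_gate(candidate_mbps: float,
                           baseline_mbps: float = BASELINE_MBPS,
                           tolerance: float = TOLERANCE) -> bool:
    """True when the candidate run keeps at least (1 - tolerance) of baseline throughput."""
    return candidate_mbps >= baseline_mbps * (1 - tolerance)

print(passes_regression_gate(1185.0))   # True: within tolerance
print(passes_regression_gate(1050.0))   # False: would not ship
```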

Available Documentation

Everything you need to understand, verify, or reproduce our testing is available publicly.

Verify Everything Yourself

Download the AI Whitepaper and ask any AI assistant to explain our methodology. Review the raw data on GitHub. Hold us accountable.