FortifAI Benchmarks Single Nol8 FPGA to Replace 60,000 CPUs Under AI Workloads

By John Zadeh


FortifAI (ASX: FTI) has announced benchmark results demonstrating that a single Nol8 FPGA appliance can replace the equivalent compute capacity of up to 60,000 CPUs under AI-grade workload conditions. The testing, which used Google’s RE2 regular-expression engine as the industry-standard matching baseline, validates the infrastructure economics of Nol8’s AI Data Plane technology at P99 load with 6,000+ concurrent rules.
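FortifAI has not published its test harness, so the following is only an illustrative sketch of how a rule-matching benchmark of this shape is typically structured: compile a rule set at each complexity tier, match traffic samples against every rule, and report the 99th-percentile (P99) per-sample latency. Python’s built-in re module stands in for RE2 here (Google’s google-re2 package exposes a near-identical interface), and the rule generator, sample sizes, and iteration counts are arbitrary assumptions.

```python
import random
import re      # stand-in for RE2; Google's google-re2 package exposes a similar API
import string
import time

def make_rules(n):
    """Generate n toy literal-prefix rules (hypothetical stand-ins for detection rules)."""
    return [re.compile("".join(random.choices(string.ascii_lowercase, k=6)) + r"\w*")
            for _ in range(n)]

def p99_latency_us(rules, samples, iterations=200):
    """Match random samples against every rule; return the 99th-percentile latency in µs."""
    timings = []
    for _ in range(iterations):
        sample = random.choice(samples)
        start = time.perf_counter()
        for rule in rules:
            rule.search(sample)
        timings.append((time.perf_counter() - start) * 1e6)
    timings.sort()
    return timings[int(len(timings) * 0.99)]

samples = ["".join(random.choices(string.ascii_lowercase, k=256)) for _ in range(100)]
for tier in (10, 1_000, 6_000):    # the three complexity tiers from the announcement
    print(f"{tier:>5} rules: P99 ≈ {p99_latency_us(make_rules(tier), samples):,.0f} µs/sample")
```

On a CPU, the per-sample cost of a loop like this grows with the rule count, which is exactly the scaling behaviour the FPGA benchmark is designed to contrast against.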

The results build on previous benchmark announcements and represent what the company describes as a structural shift in AI data infrastructure economics. Under high-complexity conditions that mirror real-world enterprise AI workloads, performance thresholds that have historically required massive CPU arrays can now be delivered by a single hardware appliance.

The benchmark testing established three complexity tiers. At low complexity (~10 rules), a single Nol8 FPGA appliance replaces 100 CPUs. At medium complexity (~1,000 rules), that figure rises to 8,200 CPUs. At the highest tier tested (6,000+ rules), the single appliance replaces 60,000+ CPUs at the 99th percentile (P99), the most demanding 1% of real-world operating conditions.
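The announcement does not disclose how the CPU-equivalence figures are calculated, but such claims are conventionally a throughput ratio: appliance throughput divided by sustained per-CPU throughput at the same rule complexity. A sketch with hypothetical per-unit throughputs, chosen only so the arithmetic lands near the published tiers:

```python
def cpus_replaced(appliance_gbps: float, per_cpu_gbps: float) -> float:
    """One appliance replaces (appliance throughput / per-CPU throughput) CPUs."""
    return appliance_gbps / per_cpu_gbps

# Hypothetical per-unit throughputs (NOT from the announcement), chosen only so
# the ratios land near the published tiers. The pattern to note: as rule
# complexity rises, per-CPU throughput collapses while the appliance holds steady.
tiers = {
    "low (~10 rules)":       (100.0, 1.0),      # appliance Gbps, per-CPU Gbps
    "medium (~1,000 rules)": (100.0, 0.0122),
    "high (6,000+ rules)":   (100.0, 0.00167),
}
for name, (fpga, cpu) in tiers.items():
    print(f"{name}: replaces ~{cpus_replaced(fpga, cpu):,.0f} CPUs")
```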

The cost equation — A$4.5 million versus A$37,000

To contextualise the benchmark results commercially, FortifAI provided a direct cost comparison for a mid-scale enterprise deployment of 10,000 CPUs processing high-complexity AI data workloads. The company noted this is a conservative example, as many real-world use cases require significantly greater infrastructure.

Cost Item                   | 10,000 CPU Array   | Nol8 Single FPGA Appliance
Hardware OpEx               | ~A$4.5M/year       | ~A$37,000/year
Infrastructure Management   | A$1.5M–A$3M+/year  | <A$1,000/year
Estimated Total Annual OpEx | A$6–7.5M+/year     | <A$50,000/year

The cost estimates are based on Amazon Web Services’ publicly available pricing as at 27 April 2026, using AWS F2 and C6 instance families as the reference compute infrastructure. The hardware operating expenditure differential alone (A$4.5 million versus A$37,000 annually) quantifies the commercial value proposition for enterprise sales conversations, with infrastructure management costs adding a further A$1.5–3 million+ per year to the CPU array model.
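The table’s own headline figures imply the per-unit rates. A quick back-calculation (all numbers derived from the table above, not independently from AWS price lists):

```python
HOURS_PER_YEAR = 24 * 365   # 8,760

# Headline figures from FortifAI's table; the per-unit rates below are
# back-calculated from them, not quoted from AWS price lists.
cpu_array_size = 10_000
cpu_array_hw_opex = 4_500_000   # A$/year, 10,000-CPU array
fpga_hw_opex = 37_000           # A$/year, single Nol8 appliance

per_cpu_year = cpu_array_hw_opex / cpu_array_size    # = A$450 per CPU per year
per_cpu_hour = per_cpu_year / HOURS_PER_YEAR         # ≈ A$0.051 per CPU-hour

print(f"Implied CPU rate: A${per_cpu_hour:.3f}/CPU-hour (A${per_cpu_year:,.0f}/CPU-year)")
print(f"Hardware OpEx ratio: {cpu_array_hw_opex / fpga_hw_opex:,.0f}x "
      f"(saving A${cpu_array_hw_opex - fpga_hw_opex:,.0f}/year)")
```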

What the benchmark tiers reveal

The three complexity tiers tested by FortifAI reflect distinct enterprise use cases and demonstrate scalability across workload intensity:

  1. Low complexity (~10 rules): Basic web and API applications, where a single Nol8 FPGA replaces 100 CPUs.
  2. Medium complexity (~1,000 rules): Enterprise security workloads, where a single FPGA replaces 8,200 CPUs.
  3. High complexity (6,000+ rules): AI-grade data classification, where a single FPGA replaces 60,000+ CPUs.

Unlike CPU-based software approaches, Nol8’s FPGA architecture processes data in parallel at the hardware level. Performance does not degrade as workload complexity increases—a key differentiator when processing the unstructured, rule-heavy data flows that characterise modern AI systems.
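The claimed scaling difference can be captured in a toy latency model: software evaluates a rule set serially (or walks an automaton whose size tracks it), so cost grows with rule count, while a hardware pipeline evaluating all rules concurrently pays only its fixed pipeline depth. Both constants below are illustrative assumptions, not measured figures:

```python
# Toy latency model contrasting sequential software rule evaluation with a
# fully parallel hardware pipeline. Both constants are illustrative assumptions.
PER_RULE_NS = 150    # assumed software cost to evaluate one rule
PIPELINE_NS = 800    # assumed fixed FPGA pipeline latency, independent of rule count

def software_latency_ns(n_rules: int) -> int:
    # A CPU evaluates the rule set serially, so cost scales with rule count.
    return n_rules * PER_RULE_NS

def fpga_latency_ns(n_rules: int) -> int:
    # All rules are evaluated concurrently in fabric; cost is the pipeline depth.
    return PIPELINE_NS

for n in (10, 1_000, 6_000):
    print(f"{n:>5} rules: software ~{software_latency_ns(n):>9,} ns | FPGA ~{fpga_latency_ns(n)} ns")
```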

Where 60,000-CPU arrays exist in the real world

The 6,000-rule, P99 scenario directly reflects the workloads of modern AI-driven enterprises across multiple sectors. FortifAI outlined specific industry verticals where CPU arrays of equivalent scale are deployed:

  • Cybersecurity and SIEM platforms: Platforms such as IBM QRadar, Microsoft Sentinel, and Palo Alto Cortex continuously process thousands of detection rules across petabytes of log and event data. Large enterprise deployments routinely operate arrays of thousands to tens of thousands of CPUs for this function alone.
  • Financial services fraud and compliance monitoring: Real-time transaction screening across global payment networks requires simultaneous matching against thousands of regulatory, fraud, and sanctions rules. Tier 1 banks operate CPU arrays of equivalent scale for this function.
  • Telecommunications deep packet inspection: Carriers performing network-level inspection at 100 Gbps+ for lawful intercept, quality-of-service enforcement, and security deploy CPU infrastructure at equivalent scale.
  • Enterprise AI data pipelines: As organisations deploy agentic AI and LLM systems, data flowing into and between models must be classified, filtered, governed, and routed in real time. This is where Nol8’s AI Data Plane operates, and where 6,000+ rule complexity is the standard.

The company positioned the benchmark results as validating a large addressable market across multiple high-value enterprise verticals, each characterised by data-intensive workloads that conventional CPU infrastructure struggles to process efficiently.

What is an AI Data Plane?

Nol8’s technology operates in a category the company is defining: the AI Data Plane. This is the infrastructure layer that sits between raw data and AI inference, purpose-built to solve the data processing constraints of unstructured AI workloads.

As organisations deploy agentic AI and LLM systems, the data flowing into and between models must be filtered, enriched, classified, and routed in real time—before it reaches the model. Unlike batch-oriented data processing, which buffers and delays, or CPU-based software approaches, which bottleneck under high-complexity workloads, Nol8’s FPGA-based AI Data Plane processes data-in-flight at millisecond-grade latency without buffering or batching.
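Nol8’s actual stages are proprietary, but the “data-in-flight” shape the article describes (each record filtered, classified, and routed the moment it arrives, never accumulated into batches) can be sketched with Python generators. The stage logic here is entirely hypothetical:

```python
from typing import Iterable, Iterator

def filter_stage(records: Iterable[dict]) -> Iterator[dict]:
    """Drop records that fail a (hypothetical) admission rule, one record at a time."""
    for r in records:
        if r.get("payload"):
            yield r

def classify_stage(records: Iterable[dict]) -> Iterator[dict]:
    """Tag each record in flight; a real data plane applies thousands of rules here."""
    for r in records:
        r["class"] = "sensitive" if "secret" in r["payload"] else "routine"
        yield r

def route_stage(records: Iterable[dict]) -> Iterator[tuple]:
    """Emit (destination, record) pairs immediately, with no batching or buffering."""
    for r in records:
        yield ("quarantine" if r["class"] == "sensitive" else "model-input"), r

stream = iter([{"payload": "routine telemetry"}, {"payload": ""}, {"payload": "secret token"}])
for destination, record in route_stage(classify_stage(filter_stage(stream))):
    print(destination, record)
```

Because each stage yields one record at a time, no stage ever holds more than the record in flight, which is the property that distinguishes this shape from batch-oriented processing.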

The technology foundation combines algorithmically enhanced Longest Prefix Matching, machine-learning neural networks, and FPGA hardware acceleration. Backed by five years of published academic research from the Technion (Israel Institute of Technology), Nol8 has broken through a data-processing scalability ceiling that no software-based architecture has been able to surpass.
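Longest Prefix Matching is the lookup primitive behind IP routing and packet classification: given a key, find the stored prefix that matches the greatest number of its leading bits. Nol8’s enhanced, ML-assisted variant is not public, but the classic baseline it improves on is a binary trie, sketched below:

```python
class LPMTrie:
    """Minimal binary trie for longest-prefix matching (the classic software
    baseline; Nol8's algorithmically enhanced variant is not public)."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits: str, value: str) -> None:
        node = self.root
        for bit in prefix_bits:
            node = node.setdefault(bit, {})
        node["value"] = value

    def lookup(self, addr_bits: str):
        node, best = self.root, None
        for bit in addr_bits:
            if "value" in node:
                best = node["value"]    # remember the longest match seen so far
            if bit not in node:
                break
            node = node[bit]
        else:
            if "value" in node:
                best = node["value"]
        return best

table = LPMTrie()
table.insert("10", "rule-A")      # matches keys starting 10...
table.insert("1010", "rule-B")    # more specific prefix: wins when it applies
print(table.lookup("10101111"))   # -> rule-B (longest matching prefix)
print(table.lookup("10001111"))   # -> rule-A
```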

Unstructured data from AI agents, autonomous systems, and LLMs is forecast to represent 90% of all future data flows. The AI Data Plane is the missing infrastructure layer the industry now requires to process this data category at the speed, scale, and cost efficiency that enterprise AI adoption demands.

Unlocking GPU potential

Dr. Alon Rashelbach, Co-Founder and CTO, Nol8

“Every CPU freed by Nol8 is a CPU that is no longer bottlenecking a GPU. Enterprises have invested heavily in GPU capacity to run their AI workloads, but when the data pipeline feeding that GPU is CPU-bound, the GPU is not being pushed to its potential. Nol8 solves the bottleneck that is preventing the AI investment enterprises have already made from delivering its full return.”

Dr. Rashelbach’s commentary reframes the value proposition of Nol8’s AI Data Plane: by removing the CPU bottleneck in the data processing layer that feeds the GPU, Nol8 enables the AI investments enterprises have already made to deliver their full return, rather than requiring CPU infrastructure to be scaled indefinitely to keep pace with GPU capacity.

The scale of the data opportunity ahead

The global datasphere is forecast to grow from 334 zettabytes in 2025 to 19,267 zettabytes by 2035, driven overwhelmingly by unstructured data generated by AI agents, autonomous systems, and large language models: the data category Nol8’s AI Data Plane is purpose-built to process.
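Those two endpoints imply a compound annual growth rate of roughly 50% per year, which a one-line check confirms:

```python
# CAGR implied by the article's forecast: 334 ZB (2025) -> 19,267 ZB (2035).
start_zb, end_zb, years = 334, 19_267, 10
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")   # ≈ 50.0%
```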

FortifAI noted that the benchmark results published represent only a portion of Nol8’s total performance capability. Future benchmark testing will be conducted and published as the company continues to explore the full limits of its technology.

The company positioned the AI Data Plane as the missing infrastructure layer that sits between raw data and AI inference, processing data-in-flight at millisecond-grade latency without the buffering or batching delays that characterise conventional CPU-based approaches. As enterprises scale their AI deployments, the data processing bottleneck—not GPU capacity or model performance—increasingly determines system throughput and infrastructure economics.
