FortifAi’s Nol8 Beats Google RE2 by 200,000x in AI Data Processing Benchmark

By John Zadeh

FortifAi Limited (ASX: FTI) has released benchmark results showing its Nol8 AI Data Plane technology achieved a throughput advantage of more than 200,000x over Google's RE2 engine under AI-grade workloads. The benchmark shows Nol8's FPGA-accelerated engine sustaining a constant 1,500 MB/s throughput across all complexity tiers and load conditions, while RE2's software throughput collapses to near zero as complexity increases.

FortifAi’s Nol8 delivers 200,000x throughput advantage over Google’s benchmark standard

The results, published on 1 April 2026, position Nol8 as having broken through a data-processing scalability ceiling that no software-based architecture has been able to surpass. Google's RE2 is widely regarded as the benchmark standard for high-performance data pattern matching, making this a direct measure of hardware-accelerated data processing against the best software alternative available.

At the highest complexity tier (6,000+ rules) and worst-case load conditions (P99), RE2 throughput collapsed to 0.007 MB/s, whilst Nol8 maintained 1,500 MB/s. This represents a throughput advantage exceeding 200,000x at precisely the conditions that define real-world AI data classification demands.

The benchmark was conducted ahead of the company’s targeted Enterprise Ready Benchmarking Engine release in June 2026, announced to the ASX on 16 February 2026. FortifAi noted the published results represent only a portion of Nol8’s total performance capability, with further testing planned to explore the full limits of the technology.

Understanding the benchmark methodology

Google RE2 is an open-source regular expression engine widely regarded as one of the fastest and safest software-based data pattern-matching engines available. It is used across Google’s own infrastructure and in widely deployed modern infrastructure software and enterprise data platforms. RE2’s primary advantage over alternative software engines is guaranteed linear processing time, meaning performance degrades predictably rather than catastrophically under complex inputs.
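To illustrate the failure mode RE2's design rules out, the hedged sketch below uses Python's built-in `re` module, which (unlike RE2) is a backtracking engine: the nested quantifier in `(a+)+b` forces it to explore exponentially many ways of splitting the input before the match can fail, so each extra character roughly doubles the running time.

```python
import re
import time

# Pathological pattern for a backtracking engine: the nested quantifier forces
# the matcher to try every way of splitting the run of 'a's into groups before
# the overall match can fail (the input contains no 'b').
PATTERN = re.compile(r"(a+)+b")

def match_time(n: int) -> float:
    """Time a failing match of 'a' * n with Python's backtracking re module."""
    s = "a" * n
    t0 = time.perf_counter()
    assert PATTERN.match(s) is None   # fails only after ~2**n backtracking steps
    return time.perf_counter() - t0

for n in (16, 18, 20, 22):
    print(f"n={n:2d}  {match_time(n):.4f}s")   # time roughly quadruples per +2
```

An automaton-based engine such as RE2 rejects the same input in time proportional to n, which is why its degradation under growing complexity is gradual and predictable rather than catastrophic per-input.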

The benchmark was designed to provide an objective, verifiable measure of Nol8’s technology performance against this industry-standard software. The test reflects real-world enterprise AI workloads across three complexity tiers, from simple web filtering through to the full demands of agentic AI data pipelines.

Performance was measured at three load percentiles: P50 (median/normal load), P90 (top 10% of traffic events), and P99 (worst 1% of events representing maximum real-world stress). P99 represents conditions such as Black Friday-level demand, where the entire system operates under its highest conceivable pressure simultaneously.
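For readers unfamiliar with percentile notation, the sketch below computes P50/P90/P99 with the common nearest-rank method over simulated per-event latencies (illustrative numbers, not the benchmark's data):

```python
import math
import random

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample >= p percent of all samples."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# Simulated per-event processing latencies in milliseconds (illustrative only):
# an exponential distribution gives the heavy tail typical of load spikes.
random.seed(1)
latencies = [random.expovariate(1 / 5) for _ in range(10_000)]

p50, p90, p99 = (percentile(latencies, p) for p in (50, 90, 99))
print(f"P50={p50:.1f}ms  P90={p90:.1f}ms  P99={p99:.1f}ms")
```

The heavy tail is the point: P99 can sit far above the median, which is why a system sized only for P50 behaviour fails exactly when it matters most.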

Alon Rashelbach, Co-Founder and CTO, Nol8

“The purpose of this benchmark was simple: to put our technology against the best available software standard and let the results speak for themselves. What these numbers confirm is that we have crossed a threshold that software alone cannot surpass.”

P99 is the figure that matters most for AI infrastructure. Agentic AI systems operate continuously, 24 hours a day, 7 days a week. Traffic spikes are not anomalies to be planned around but operating conditions to be built for. A system that collapses at P99 is not enterprise-grade.

The three complexity tiers tested

The benchmark tested three distinct complexity tiers, each representing progressively sophisticated data processing tasks. Low complexity (~10 rules) covers basic scenarios such as static websites, simple API gateway filtering, and standard AI security applications like AWS Bedrock Guardrails. Medium complexity (~1,000 rules) addresses enterprise-grade demands including corporate threat detection, SIEM platforms, log analysis, and fraud screening. High complexity (6,000+ rules) represents AI-grade requirements such as real-time agentic AI pipelines, large language model data classification, unstructured data governance, and multi-jurisdiction compliance at scale.

Tier     Rule Count     Real-World Scenario
Low      ~10 rules      Static websites, simple API gateway filtering, AWS Bedrock Guardrails scenarios
Medium   ~1,000 rules   Enterprise security, SIEM platforms, fraud screening
High     6,000+ rules   Real-time agentic AI pipelines, LLM classification, multi-jurisdiction compliance

The 6,000+ rule tier is where enterprises actually operate for AI workloads. At Low complexity, RE2-based software performs adequately because it was designed for such scenarios. At High complexity, enterprises today compensate by deploying large arrays of CPUs working in parallel to spread the load and reduce latency. This approach is expensive, power-intensive, and structurally unable to keep pace with exponentially growing AI data volumes.
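The structural pressure on sequential software can be sketched in a few lines: trying N compiled rules one after another over the same traffic multiplies the scan work by N, so measured throughput falls roughly in proportion to rule count. This toy micro-benchmark uses Python's `re` with hypothetical literal rules; production engines (including RE2's set-matching mode) are far more sophisticated, but rule-set growth imposes the same kind of cost.

```python
import re
import time

def throughput_mb_s(num_rules: int, data: str) -> float:
    """MB/s when num_rules compiled patterns are each tried in sequence."""
    # Hypothetical literal 'rules'; real rule sets mix literals and regexes.
    rules = [re.compile(f"needle{i:04d}") for i in range(num_rules)]
    start = time.perf_counter()
    hits = sum(1 for r in rules if r.search(data))   # scans data once per rule
    elapsed = time.perf_counter() - start
    return len(data) / 1e6 / elapsed

data = "x" * 100_000   # 100 KB of non-matching traffic
for n in (10, 100, 1000):
    print(f"{n:5d} rules: {throughput_mb_s(n, data):10.1f} MB/s")
```

Scaling this back up with more CPUs restores throughput but not economics, which is the compensation strategy the article describes.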

How Nol8’s architecture solves a structural limitation

The scalability ceiling that has constrained AI data infrastructure is architectural, not a software optimisation problem. CPU-based software approaches process data sequentially and degrade under load. In contrast, Nol8’s FPGA (Field-Programmable Gate Array) architecture processes data in parallel at the hardware level. Performance does not degrade as workload complexity increases. It is constant, predictable, and unlike any solution previously available.

Built on algorithmically enhanced Longest Prefix Matching, scaled by machine learning neural networks, and hyper-accelerated by FPGA hardware, Nol8’s engine processes data-in-motion at millisecond-grade latency without buffering or batching. The technology is backed by five years of published academic research by its founders.
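Nol8's enhanced Longest Prefix Matching algorithm is not publicly documented, but the classic technique it builds on, familiar from IP routing tables, can be sketched with a binary trie: a lookup walks the key bit by bit and remembers the deepest stored prefix seen along the way.

```python
class LPMTrie:
    """Minimal binary trie for Longest Prefix Matching, as used in IP routing.
    A sketch of the classic technique only; Nol8's algorithmically enhanced
    variant is not publicly documented."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix: str, value: str) -> None:
        """prefix is a bit-string such as '1011'."""
        node = self.root
        for bit in prefix:
            node = node.setdefault(bit, {})
        node["value"] = value

    def lookup(self, bits: str):
        """Return the value stored for the longest prefix of bits."""
        node, best = self.root, None
        for bit in bits:
            if "value" in node:
                best = node["value"]
            if bit not in node:
                return best
            node = node[bit]
        return node.get("value", best)

table = LPMTrie()
table.insert("10", "rule-A")      # matches anything starting 10...
table.insert("1011", "rule-B")    # more specific prefix wins
print(table.lookup("101101"))     # -> rule-B (longest matching prefix)
print(table.lookup("100000"))     # -> rule-A
```

Each lookup touches at most one trie node per key bit, so cost is bounded by key length rather than rule count, a property that maps naturally onto fixed-depth hardware pipelines.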

As noted above, enterprises currently compensate for these software limitations by deploying arrays of thousands of CPUs working in parallel to spread the load and reduce latency, an approach that is costly, power-intensive, and structurally unable to scale with AI data growth. Nol8 eliminates this requirement entirely.

Alon Rashelbach, Co-Founder and CTO, Nol8

“The scalability ceiling that has constrained AI data infrastructure is not a software problem, it is an architectural one. Nol8 solves it at the hardware level.”

If the limitation is architectural rather than incremental, software competitors cannot catch up through optimisation. Nol8’s advantage is structural, not superficial.

Full benchmark results breakdown

The complete benchmark comparison across all complexity tiers and load percentiles demonstrates the exponential collapse of RE2 performance as complexity increases, contrasted against Nol8’s unwavering 1,500 MB/s throughput.

Complexity Tier         P50 (RE2)   P90 (RE2)   P99 (RE2)    Nol8 (All)    P99 Advantage
Low (~10 rules)         46 MB/s     22 MB/s     10.8 MB/s    1,500 MB/s    ~138x
Medium (~1,000 rules)   4.5 MB/s    1.7 MB/s    0.28 MB/s    1,500 MB/s    ~5,400x
High (6,000+ rules)     1.5 MB/s    0.43 MB/s   0.007 MB/s   1,500 MB/s    ~200,000x

At P99 with 6,000+ rules, the conditions that define real-world AI data classification, RE2 sustains just 0.007 MB/s of throughput against Nol8's 1,500 MB/s, a gap of more than 200,000x. The exponential collapse of RE2 performance at higher complexity demonstrates why software solutions cannot serve the AI infrastructure market at scale.
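The advantage figures follow directly from the published throughputs; dividing Nol8's constant 1,500 MB/s by RE2's P99 numbers reproduces the table's ratios (the published figures are rounded):

```python
# Throughput figures (MB/s) from the benchmark table above.
NOL8 = 1500.0
RE2_P99 = {"Low": 10.8, "Medium": 0.28, "High": 0.007}

for tier, re2 in RE2_P99.items():
    print(f"{tier:>6}: ~{NOL8 / re2:,.0f}x advantage at P99")
```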

The AI Data Plane market opportunity

The AI Data Plane is the high-speed infrastructure layer that sits between raw data and AI inference, purpose-built to solve the data processing constraints of unstructured AI workloads. The global datasphere is forecast to grow from 334 Zettabytes in 2025 to 19,267 Zettabytes by 2035, driven overwhelmingly by unstructured data generated by AI agents, autonomous systems, and large language models operating at continuous scale.
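Those forecast endpoints imply roughly a 58x expansion of the datasphere in a decade, about 50% compound annual growth:

```python
# Forecast endpoints for the global datasphere cited above.
start_zb, end_zb, years = 334, 19_267, 10   # 2025 -> 2035

growth = end_zb / start_zb
cagr = growth ** (1 / years) - 1
print(f"expansion: {growth:.1f}x over {years} years")
print(f"implied compound annual growth: {cagr:.1%}")   # about 50% per year
```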

An estimated 90% of future data flows will be unstructured, the most compute-intensive category of data to process. This data must be filtered, enriched, classified, and routed in real time before it reaches the model. This is the AI Data Plane, the missing infrastructure layer that the industry now requires. Nol8's benchmark results confirm it has built this layer, and that its performance is in a category of its own.

Nol8’s technology delivers the AI Data Plane through five core capabilities:

  1. Processes data-in-motion at millisecond-grade latency
  2. No buffering or batching required
  3. Built on algorithmically enhanced Longest Prefix Matching
  4. Scaled by machine learning neural networks
  5. Hyper-accelerated by FPGA hardware

The market opportunity is expanding exponentially whilst existing solutions cannot scale. Nol8 is positioned to capture infrastructure spend that has nowhere else to go. As the global AI industry accelerates toward agentic and autonomous systems, the volume and complexity of data that must be processed in real time is growing exponentially. Enterprises deploying AI at scale across cybersecurity, financial services, healthcare, and AI infrastructure require data processing systems capable of handling thousands of simultaneous classification rules without performance degradation.

Next steps and upcoming catalysts

The published results represent only a portion of Nol8’s total performance capability. Further benchmark testing will be conducted and published as the company continues to explore the full limits of its technology. To date, the company has released throughput performance results only.

Work is ongoing to quantify these gains as economic reductions in hardware footprint, computational load, and infrastructure dependency whilst maintaining stable low latency at scale. The company is targeting release of its Enterprise Ready Benchmarking Engine by June 2026.

Upcoming workstreams include:

  • Further benchmark testing to explore full technology limits
  • Translation of throughput gains into measurable infrastructure reduction outcomes
  • Enterprise Ready Benchmarking Engine release (June 2026 target)

Multiple near-term catalysts provide opportunities for continued validation and commercial progression as FortifAi moves toward commercialisation of its Nol8 AI Data Plane technology.

