Fortifai unveils Nol8 AI Data Plane roadmap targeting commercial launch by end of 2026
Fortifai Ltd (ASX: FTI) has announced a formal commercialisation roadmap for its Fortifai Nol8 AI Data Plane following completion of the Nol8 acquisition. The three-phase timeline establishes clear milestones from proven demonstration technology through to revenue-generating contracts, with the Benchmarking Engine scheduled for July 2026 and the Commercial Platform targeted for end CY2026. Enterprise partner conversations are already underway to design and implement data benchmark testing in high-performance environments ahead of the mid-2026 launch.
The Demonstration Engine has delivered verified performance metrics showing infrastructure reduction from 5,000 CPUs to a single FPGA whilst improving latency from 500 milliseconds to 3 milliseconds (a 160x improvement). Events per second capability has increased from 5,000 to 2,000,000 (a 400x improvement), demonstrating the scalability potential of the neural-network-driven FPGA architecture. For investors, the roadmap provides visibility on the pathway from proven technology to commercial contracts, with defined milestones against which execution can be tracked through 2026.
The company is positioning the Fortifai Nol8 AI Data Plane as the infrastructure layer enabling real-time autonomous AI agents to operate at scale. The technology is designed to eliminate the “scalability ceiling” constraining enterprise deployment of agentic AI systems by processing data-in-motion at the speed of the data stream itself, rather than through traditional batch processing methods.
What is the AI Data Plane and why does it matter?
The AI Data Plane functions as the high-speed bridge between AI model inference and real-world execution. Traditional large language models operate in prompt-response mode, generating outputs based on user input. Agentic AI systems, by contrast, operate persistently across enterprise environments, drawing on organisational data, interacting with external tools and executing decisions in real time without human intervention.
These autonomous systems require continuous high-throughput data ingestion and deterministic low-latency processing, often measured in milliseconds. The Fortifai Nol8 AI Data Plane is designed to operate on “Data-In-Motion” at the speed of the data stream, processing information as it flows rather than storing and batch processing it later. This architectural approach addresses the structural bottleneck emerging as AI transitions from isolated model inference to real-time autonomous execution.
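The batch-versus-streaming distinction described above can be illustrated with a minimal sketch (hypothetical code, not Nol8's actual API): a batch pipeline stores events and processes them only when a buffer fills, while a data-in-motion pipeline handles each event as it arrives, keeping per-event latency bounded by a single processing step.

```python
from collections import deque

def classify(event):
    # Stand-in for the classification step; a real data plane would run
    # model inference or neural-network classification here.
    return ("structured" if isinstance(event, dict) else "unstructured", event)

def batch_pipeline(events, batch_size=1000):
    """Store-then-process: the first event in a batch waits until the
    buffer fills (or the stream ends) before anything is classified."""
    buffer, results = deque(), []
    for event in events:
        buffer.append(event)
        if len(buffer) >= batch_size:
            results.extend(classify(e) for e in buffer)
            buffer.clear()
    results.extend(classify(e) for e in buffer)  # flush the remainder
    return results

def stream_pipeline(events):
    """Data-in-motion: each event is classified the moment it arrives."""
    for event in events:
        yield classify(event)
```

Both pipelines produce the same classifications; the difference is when the work happens, which is what drives the latency gap between batch analytics and stream processing.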
Over 90% of new AI-generated data is unstructured, comprising formats such as:
- Text and documents
- Images and graphics
- Audio recordings
- Video content
Processing unstructured data is materially more compute-intensive than structured database workloads. Legacy data pipelines, built for batch processing and structured datasets, are increasingly unable to support the scale, latency and cost profile required for enterprise-grade agentic AI deployment. The company is positioning its technology at this structural chokepoint where demand is accelerating but existing infrastructure cannot efficiently meet performance requirements.
Performance benchmarks demonstrate infrastructure leap
The Demonstration Engine has delivered proven performance metrics that establish the technical foundation for commercial deployment. By shifting data processing from traditional software running on CPU clusters to specialised neural networks optimised for FPGA-based architecture, Nol8 has demonstrated the ability to deliver dual benefits: massive throughput gains combined with dramatic reductions in infrastructure cost and physical footprint.
| Metric | Legacy Infrastructure | Nol8 AI Data Plane | Improvement |
|---|---|---|---|
| Events Per Second | 5,000 | 2,000,000 | 400x |
| Latency | 500ms | 3ms | 160x |
| Hardware Required | 5,000 CPUs | Single FPGA | 5,000x consolidation |
The neural-network algorithm combined with FPGA hardware acceleration enables millisecond-grade processing of high-volume data streams. For investors, these verified performance metrics de-risk the commercialisation pathway by providing quantifiable evidence that the technology can deliver the throughput and latency characteristics required for enterprise deployment. Enterprise partners can validate performance against their own real-world workloads during the benchmark testing phase commencing July 2026.
The infrastructure reduction from 5,000 CPUs to a single FPGA addresses both capital expenditure and operational expenditure considerations for enterprises. By drastically reducing the space and power required to run mission-critical AI workflows, the technology is positioned to provide an environmentally responsible and economically viable foundation for real-time agentic AI deployment at scale.
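The improvement factors in the table above can be checked with simple arithmetic against the reported raw figures (note that 500 ms / 3 ms is roughly 167x, which the announcement reports as 160x):

```python
# Raw figures as reported in the announcement
legacy_eps, nol8_eps = 5_000, 2_000_000        # events per second
legacy_latency_ms, nol8_latency_ms = 500, 3    # milliseconds
legacy_cpus, nol8_devices = 5_000, 1           # 5,000 CPUs vs a single FPGA

throughput_gain = nol8_eps / legacy_eps              # 400x
latency_gain = legacy_latency_ms / nol8_latency_ms   # ~166.7x (reported as 160x)
consolidation = legacy_cpus / nol8_devices           # 5,000:1 hardware reduction

print(f"Throughput: {throughput_gain:.0f}x, "
      f"latency: {latency_gain:.0f}x, hardware: {consolidation:.0f}:1")
```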
CTO quote on the execution layer opportunity
Alon Rashelbach, Co-Founder and CTO, Nol8
“There is a widening structural gap between having an intelligent AI model and having an actionable outcome. True ROI will be determined by the execution layer. The AI Data Plane is the high-speed bridge between large language model inference and execution—an architecture we built to operate at the speed of the data stream itself.”
Rashelbach’s commentary frames the investment thesis around a critical distinction: model intelligence alone does not determine return on investment. The efficiency, speed and scalability of the data transmission and classification layer that connects models to systems and actions has become the limiting factor as AI transitions from isolated model inference to real-time autonomous execution.
Agentic AI market forecast underpins growth thesis
The agentic AI market represents a US$4 trillion opportunity by 2030, according to McKinsey research. This growth trajectory will expand global data flows from 334 zettabytes in 2025 to 19,267 zettabytes by 2030, driven predominantly by autonomous AI-generated unstructured data. One zettabyte equals one trillion gigabytes, illustrating the scale of data volume growth expected across the forecast period.
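The scale of that forecast can be made concrete with a quick conversion using the figures cited above:

```python
GB_PER_ZB = 10**12  # one zettabyte = one trillion gigabytes

zb_2025, zb_2030 = 334, 19_267      # global data flows, zettabytes
growth_factor = zb_2030 / zb_2025   # ~58x growth across the forecast period
gb_2030 = zb_2030 * GB_PER_ZB       # ~1.9e16 GB of global data flow by 2030

print(f"Growth: {growth_factor:.0f}x, 2030 volume: {gb_2030:.2e} GB")
```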
Key market dynamics supporting the investment thesis include:
- 90% of new AI data is unstructured (text, documents, images, audio, video)
- Traditional infrastructure cannot efficiently process this data reality
- Critical bottlenecks emerging in latency, compute costs and operational complexity
- Market leader Anthropic experiencing margin compression from 50% to 40% due to unresolved infrastructure constraints
The Anthropic margin compression metric provides concrete evidence of the infrastructure cost pressure facing even well-funded market leaders. Anthropic is forecasting 30x growth in annual recurring revenue to 2030 but is already experiencing margin deterioration due to infrastructure constraints, validating the thesis that scalability will be determined by the execution layer rather than model capability alone.
The Fortifai Nol8 AI Data Plane is designed to decouple data volume from infrastructure cost. By processing and classifying data-in-motion at millisecond-grade speeds, the technology aims to let customers scale their AI platforms without a linear increase in infrastructure spend, removing the “scalability ceiling” created by the ballooning costs of legacy Extract, Transform and Load (ETL) processes and batch analytics.
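The decoupling claim can be sketched as two cost curves (a simplified illustration in hypothetical cost units, not company figures; only the 5,000-CPUs-for-5,000-EPS ratio and the 2,000,000-EPS-per-FPGA figure come from the announcement):

```python
import math

def legacy_cost(events_per_sec, cost_per_cpu=1.0, eps_per_cpu=1.0):
    """Legacy batch/ETL model: hardware count, and therefore cost, grows
    linearly with throughput (5,000 EPS needed 5,000 CPUs per the table)."""
    return math.ceil(events_per_sec / eps_per_cpu) * cost_per_cpu

def data_plane_cost(events_per_sec, fpga_cost=100.0, eps_per_fpga=2_000_000):
    """Data-in-motion on FPGA: cost steps up only when a device saturates,
    so it stays flat across a wide range of data volumes."""
    return math.ceil(events_per_sec / eps_per_fpga) * fpga_cost
```

Under these assumptions, legacy cost at 5,000 EPS equals 5,000 cost units, while a single FPGA covers anything up to 2,000,000 EPS at a constant cost; that flat region is the “decoupling” the company describes.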
Second CTO quote on infrastructure readiness
Alon Rashelbach, Co-Founder and CTO, Nol8
“The introduction of the Benchmarking Engine represents a critical step in dismantling the scalability ceiling. By enabling enterprises to validate high-throughput, millisecond-grade data-in-motion processing against real-world workloads, Nol8 is establishing the execution layer required for scalable Agentic AI. This milestone shifts the focus from model capability to infrastructure readiness—ensuring AI systems are architected not just for intelligence, but for sustained, future-proofed autonomous action.”
The commentary positions the Benchmarking Engine as the validation mechanism through which enterprises can assess performance against their own specific use cases. The shift from model capability to infrastructure readiness reflects the broader market transition as AI deployment moves from experimentation to production-grade implementation requiring deterministic performance characteristics.
Next steps and commercialisation pathway
The commercialisation roadmap is structured across three distinct phases, each with defined deliverables and timelines:
- Demonstration Engine (Proven): Current technology has achieved the leap from 5,000 to 2,000,000 Events Per Second. This phase has delivered verified performance metrics establishing the technical foundation for commercial deployment.
- Customer Benchmarking Engine (July 2026): Currently under development by the engineering team, this engine will enable enterprises to ingest real-world, diverse datasets. The company is in active conversations with industry partners across verticals to plan, design and implement specific AI data pipeline scenarios in controlled, high-performance environments.
- Commercial Platform (end CY2026): Following the benchmarking phase, the revenue-ready commercial engine will be available for first commercial contracts. This production-grade platform will provide a scalable solution for global organisations requiring real-time intelligence.
Enterprise partner discussions are already underway ahead of the July 2026 benchmarking phase, indicating active pipeline development prior to the formal launch of customer testing capabilities. The end CY2026 target for the Commercial Platform provides investors with a clear timeline for the transition from technology validation to revenue generation.
For investors, these milestones establish trackable catalysts through 2026. The progression from demonstration to benchmarking to commercial deployment provides visibility on execution risk, with each phase delivering measurable outputs against which progress can be assessed. The company’s positioning at a structural bottleneck in the AI infrastructure stack, where even well-funded competitors face margin pressure, supports the investment thesis that solving the scalability ceiling represents a multi-billion dollar market opportunity as agentic AI adoption accelerates across enterprise environments.