
Rethinking FPGA: challenges in zero trust network packet analysis

Uncover NANO Corp's insights on FPGA limitations in network security and dive into the future of cybersecurity in a cloud-driven era.

Florian Thebault
November 10, 2022

Analysis of network packets with an FPGA.

Many network probe vendors have turned to FPGAs over the past decade, after their software solutions hit hardware limits. The move was intended to help them cope with the increasing complexity of networks and with software's inability to keep up with bandwidth.

We don’t want to disparage FPGAs: the technology has a proven track record for both performance and quality. However, our experience has shown that in use cases like network packet analysis for cybersecurity purposes, it does not necessarily meet the requirements of a Zero Trust framework, which implies complete processing and analysis of absolutely every packet, on every layer. That requirement only grows more pressing as security must take into account a progressive shift to the cloud.

In the case of network analysis, FPGA limitations should be put into their own context: that of new challenges such as the increasing use of virtualization and tunneling protocols, which require network monitoring solutions to decapsulate all lower protocol layers. On top of this comes complex protocol stacking and the constant evolution of networks, with new protocols appearing regularly on top of the multitude that already exist.

Our arguments for judging FPGAs imperfect for network monitoring and cybersecurity are both technical and commercial.

Technical arguments

As a reminder, an FPGA (Field-Programmable Gate Array) is essentially a chip made up of a multitude of logic blocks that can be reprogrammed after manufacture (which is not the case for a CPU). It is therefore possible to configure these blocks and define their interconnections to serve a given purpose. Where a CPU executes a task as a long sequence of instructions over many cycles, depending on what is required of it, an FPGA wired for that same task can achieve the result in far fewer cycles. To put it simply, an FPGA is like a CPU that you can customize as many times as you want to do one thing. It may do that one thing far better than a CPU, but it will do only that thing. There is no versatility to an FPGA; it is a bit of a one-trick-pony technology. That has its advantages, but also many drawbacks.

Ideal in the automotive industry, in finance or in network environments (LUA chips), FPGAs become more difficult to use when the problems you wish to address can only be solved with complex entanglements of conditional data processing.

In those cases, our experience shows that FPGAs are no longer able to deliver the expected performance, especially when we needed to develop specific processes built on sequences of parallelized conditional nesting (algorithms based on complex, recursive decision trees, for example).

An FPGA is usually very efficient when a set of conditions needs to be tested simultaneously. This makes it an excellent tool for network signal processing (Layer 1 of the OSI model), for example, or for parallelizing certain processes.

On the other hand, in network use cases spanning Layers 2 to 7, when the number and diversity of protocols to classify becomes too great (multiplexed protocols, complex protocol stacking, etc.), FPGA-based solutions will generally fail to classify protocols that do not announce themselves. The result is increased inaccuracy and a loss of visibility.
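
To make the point concrete, here is a minimal, hypothetical sketch (in Python, deliberately simplified; it is not our engine) of what classifying a stacked packet looks like in software. The number of VLAN tags and tunnel levels is only discovered while reading the packet, so the walk is a data-dependent loop plus a recursion, exactly the kind of control flow that does not map onto a fixed hardware pipeline.

```python
import struct

ETH_HDR = 14          # bytes: dst MAC, src MAC, EtherType
VLAN_TAG = 4          # bytes: TCI + inner EtherType
VXLAN_HDR = 8         # bytes: flags, VNI, reserved
VXLAN_PORT = 4789     # IANA default UDP port for VxLAN

def classify(packet: bytes, layers=None):
    """Walk a raw Ethernet frame and record every layer encountered.

    The depth of the walk depends on the packet itself (how many VLAN
    tags, whether a VxLAN tunnel is present, ...), so it cannot be
    unrolled into a fixed number of pipeline stages ahead of time.
    """
    layers = [] if layers is None else layers
    if len(packet) < ETH_HDR:
        return layers + ["truncated"]
    layers.append("ethernet")
    ethertype = struct.unpack_from("!H", packet, 12)[0]
    offset = ETH_HDR

    # Peel off as many 802.1Q / QinQ tags as the packet carries.
    while ethertype in (0x8100, 0x88A8):
        layers.append("vlan")
        ethertype = struct.unpack_from("!H", packet, offset + 2)[0]
        offset += VLAN_TAG

    if ethertype != 0x0800:                   # only IPv4 in this sketch
        return layers + [f"ethertype_0x{ethertype:04x}"]
    layers.append("ipv4")
    ihl = (packet[offset] & 0x0F) * 4         # IPv4 header length in bytes
    proto = packet[offset + 9]
    offset += ihl

    if proto != 17:                           # only UDP in this sketch
        return layers + [f"ip_proto_{proto}"]
    layers.append("udp")
    dport = struct.unpack_from("!H", packet, offset + 2)[0]
    offset += 8

    if dport == VXLAN_PORT:                   # tunnel: recurse into the inner frame
        layers.append("vxlan")
        return classify(packet[offset + VXLAN_HDR:], layers)
    return layers
```

In software, the `while` loop and the recursive call cost nothing to express; a hardware pipeline has to be unrolled at synthesis time for a fixed, worst-case number of tags and tunnel levels, and anything deeper simply falls out of visibility.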

Among the reasons that made us realize FPGAs have too many technical limitations in the field of network observability, here are the ones we found most striking:

  • FPGAs do not perform well on sequential parallelization of dynamic conditional loops, which leads to focusing exclusively on a select few low-level protocol layers, i.e. protocols that correctly announce the protocol that follows them (and the more dynamically and imprecisely the protocol layers stack up, the less relevant the FPGA becomes). Beyond Layer 4, an FPGA cannot correctly classify network protocols. Similarly, network virtualization protocols (like VLAN or VxLAN), if identified at all, are only processed for a fixed number of iterations. This makes deploying such solutions in core networks, such as a data center's, far less relevant.
  • Identification is very limited for higher protocol layers and for unannounced protocols, and it is very difficult to create complex filters with an FPGA. Many solutions settle for handling only a few lower-layer network protocols, and only under certain conditions (MPLS only when IP follows it, or VxLAN identified by port only).
  • Networks have become very dynamic, and it is not uncommon to see aberrations in protocol stacking (virtual or physical), especially in hybrid environments. Because an FPGA is almost stateless, it is difficult to identify protocols correctly when several packets need to be held, especially when throughput is high and the NIC's memory is limited (see the sketch after this list).
  • FPGAs struggle to decapsulate successive stacks of tunneling and multiplexed protocols.
  • FPGAs are not agile. Network protocols are not immutable: they are constantly changing, and it is not uncommon for them to evolve or for new protocols to appear. Because time-to-update for FPGAs is longer than for any other approach, they will always have difficulty handling unknown cases.
  • A solution "compiled" for** FPGA will not work outside the same FPGA NIC of the same manufacture**r.
  • At a time when more and more applications are migrating to the cloud, FPGAs still cannot be virtualized, which makes one of the cloud's major advantages, namely its scalability, incompatible with any FPGA-based solution.
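
To illustrate the statefulness point from the list above, here is a hypothetical, heavily simplified sketch (again, not our product's code) of per-flow state keeping. The flow table and the reassembly buffer are trivial to hold in host memory; the equivalent on an FPGA NIC has to fit into limited on-chip memory while keeping up with line rate.

```python
from collections import defaultdict

# Hypothetical flow table keyed by the classic 5-tuple.
# In software its size is bounded only by host RAM; on an FPGA NIC the
# equivalent table must fit in a few megabytes of on-chip memory,
# which is quickly exhausted at high packet rates.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "fragments": []})

def observe(src_ip, dst_ip, src_port, dst_port, proto, payload, more_fragments=False):
    """Accumulate per-flow state and only emit a verdict once enough
    packets have been seen to identify the protocol reliably."""
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    flow = flows[key]
    flow["packets"] += 1
    flow["bytes"] += len(payload)
    flow["fragments"].append(payload)
    if more_fragments:
        return None                      # verdict deferred: need later packets
    return classify_reassembled(b"".join(flow["fragments"]))

def classify_reassembled(data: bytes) -> str:
    # Placeholder for a real classifier over the reassembled stream;
    # 0x16 is the TLS handshake record type.
    return "tls" if data[:1] == b"\x16" else "unknown"
```

Holding a verdict back until later packets arrive is exactly what an almost stateless pipeline cannot afford to do at 100 Gbps.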

While FPGAs can address some narrowly defined visibility issues, their approach is not Zero Trust, since it accepts sacrificing observability in favor of processing higher volumes of data.

Business arguments

Our solution, Zero Trust Traffic Analysis, addresses all the technical issues outlined above. Mirroring those issues, it also allows us to build a solid business case around the following technical realities:

  • FPGAs are "vendor-locked" as well as "technology-locked" (it becomes impossible to transfer your protocol analysis solution to another remote server if it does not have the same FPGA chipset or exactly equivalent hardware characteristics).
  • FPGAs prohibit any form of scalability except through the purchase of additional hardware (if you want to increase throughput or add interfaces, you have to pay again).
  • Semiconductor shortages hit specialized chips like FPGAs even harder.
  • An FPGA solution generates a higher TCO (Total Cost of Ownership): a powerful, expensive server and several FPGA chipsets are essential in high-density environments (data centers), which runs contrary to environments where overhead must be kept to a minimum.
  • Orchestrating FPGA-based network analysis solutions across CSP/data-center architectures is impossible.

After reading these arguments, we hope we’ve convinced you, or at least opened your mind. Our words may seem inflammatory. But they are not. FPGA is still a great tool for many use cases.

Yet we feel that FPGAs are not well suited to network protocol analysis in the cybersecurity field, because using them is restrictive and deployment is limited to proprietary infrastructures. Moreover, networks are bound to become increasingly complex (new uses) and infrastructures to become hybrid (shift to the cloud). In this context, the only approach that makes sense is to favor very high adaptability and greater integration.

The very same approach chosen by NANO Corp.

