In the new digital economy, businesses that are able to adapt will be the most competitive and successful. This will require adopting new technologies, networking systems, and strategies. But many of the emerging technologies and strategies that are being deployed across our networks come with a set of unknowns that are having a huge impact on security. The reason is that traditional approaches to security were never really designed to protect dynamic, borderless, and hyper-connected environments.
Many Factors Are in Play
For example, software-defined wide area networking (SD-WAN) is beginning to replace traditional MPLS infrastructure because, among other things, it is far less expensive. So much so that it is now being deployed in places where MPLS was never even possible. The security challenge is that SD-WAN traffic travels over encrypted tunnels. While there are certain security advantages to that approach, what if one end or the other has been compromised? What if ransomware has been installed on a particular endpoint device? It turns out that encrypted tunnels make an ideal mechanism for hiding the distribution of malware.
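To make that concrete, here is a minimal Python sketch of why tunnel encryption blinds signature-based inspection. The third-party cryptography package stands in for the tunnel's own cipher, and the "signature" and payload are invented for illustration:

```python
# pip install cryptography  (third-party library standing in for the tunnel cipher)
from cryptography.fernet import Fernet

# A toy signature-based "IDS" that scans whatever bytes it can see on the wire.
MALWARE_SIGNATURE = b"EVIL_PAYLOAD"  # invented signature, for illustration only

def ids_sees_threat(wire_bytes: bytes) -> bool:
    return MALWARE_SIGNATURE in wire_bytes

key = Fernet.generate_key()   # tunnel key, known only to the two endpoints
tunnel = Fernet(key)

payload = b"...dropper...EVIL_PAYLOAD...stage2..."

print(ids_sees_threat(payload))                  # True: visible in cleartext
print(ids_sees_threat(tunnel.encrypt(payload)))  # False: ciphertext hides it
```

Any inspection point sitting between the tunnel endpoints sees only the second case.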
From another angle, some organizations are starting to adopt software-defined perimeters (SDP) because they can stop network-based attacks against their application infrastructure and ensure that applications can only be accessed by preauthorized users and devices. SDPs do this using a combination of Transport Layer Security (TLS), public key infrastructure (PKI), and Security Assertion Markup Language (SAML), married with a control infrastructure. The idea is that certificate-based authentication and TLS tunnels allow secure client/server communications that are immune to various network-based attacks.
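As a rough illustration of the certificate-based piece of that model (leaving the SAML and controller components aside), here is a minimal mutual-TLS server sketch using Python's standard ssl module; the certificate file names are placeholders for your own PKI artifacts:

```python
import socket
import ssl

# The server only completes a handshake with clients that present a certificate
# signed by the trusted CA (mutual TLS). File names are placeholders for your
# own PKI artifacts.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
ctx.load_verify_locations(cafile="trusted_ca.pem")
ctx.verify_mode = ssl.CERT_REQUIRED  # no valid client certificate, no connection

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()  # handshake fails for unauthorized clients
        print("authenticated peer:", conn.getpeercert().get("subject"))
```

The handshake itself is the access decision: a device without a valid certificate never reaches the application.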
This means SDP essentially guarantees that only preauthorized users and devices can access the application infrastructure. But this approach doesn't answer a critical question: is this encrypted connection passing potentially malicious traffic? The client device could still be compromised via an advanced persistent threat (APT) attack, allowing malicious traffic to reach the application infrastructure over a connection the SDP considers trusted.
To really address that challenge, organizations also need to add a second tier of security to their encrypted connections: actually examining the applications and content inside them. The challenge is that when so much of your traffic is encrypted, this degree of inspection puts enormous pressure on the performance of your security devices. Make no mistake; inspecting SSL/TLS traffic requires some heavy lifting when it comes to decryption and re-encryption. And we're just looking at the tip of the iceberg here, because in addition to increased performance requirements, exponential increases in data mean you will have to do this at an unprecedented scale as well.
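The shape of that second tier looks roughly like the sketch below: a middlebox terminates the client's TLS session, inspects the decrypted bytes, and re-encrypts toward the real server. The function names and toy signature list are invented, and a real implementation would also have to handle certificate issuance, bidirectional relaying, and stream reassembly:

```python
import socket
import ssl

BAD_PATTERNS = [b"EVIL_PAYLOAD", b"cmd.exe /c"]  # toy signature set

def inspect(plaintext: bytes) -> bool:
    """Return True if the decrypted application data looks clean."""
    return not any(p in plaintext for p in BAD_PATTERNS)

def relay(client_tls: ssl.SSLSocket, upstream_host: str) -> None:
    # One decrypt and one re-encrypt per direction, per byte: this is the
    # "heavy lifting" described above, multiplied across every open session.
    upstream_ctx = ssl.create_default_context()
    with socket.create_connection((upstream_host, 443)) as raw:
        with upstream_ctx.wrap_socket(raw, server_hostname=upstream_host) as upstream:
            data = client_tls.recv(65536)  # already decrypted by our TLS stack
            if not inspect(data):
                client_tls.close()         # drop the session instead of forwarding
                return
            upstream.sendall(data)         # re-encrypted on the way out
```

Every byte crosses a crypto boundary twice, which is exactly where the performance pressure comes from.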
Consider the following factors: residential gigabit connections are commonplace in many cities, 40G and 100G links are standard in infrastructure, Internet of Things (IoT) adoption is escalating along with the number of smart cities coming online, and the number of endpoint devices running on 5G is spiking. Taken together, we are looking at billions of new devices potentially impacting our networks, all compounded by the phenomenon of hyperconnectivity. So we have to start asking, “What kind of equipment can encrypt and decrypt all of this data on the fly without dropping to its knees?”
Securing the Ocean
Let’s focus on IoT devices for a moment. IoT devices are fairly chatty because of the number of data points they collect, and they share all of that data with some centralized infrastructure. In addition, they aren’t necessarily built on a well-vetted code base, primarily due to the need to get to market quickly. Many IoT devices also open thousands of sessions and were never tuned with network resource consumption in mind. Now imagine a network managing access for tens of thousands of such devices. When you multiply every node by a thousand sessions and then add sustained or concurrent sessions, you are looking at a scenario that will overwhelm virtually any access point on the market. Then try to do all of that over SSL. It’s going to generate an insane amount of traffic.
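Some back-of-envelope math shows how quickly this compounds. The device count, per-device session figure, and concurrency ratio below are illustrative assumptions, not measurements:

```python
# Illustrative assumptions for one aggregation point, not measured data.
devices = 50_000             # IoT endpoints behind one aggregation point
sessions_per_device = 1_000  # "thousands of sessions" per chatty device
concurrency = 0.10           # assume 10% of sessions are active at once

total_sessions = devices * sessions_per_device
concurrent = int(total_sessions * concurrency)

print(f"{total_sessions:,} tracked sessions")  # 50,000,000
print(f"{concurrent:,} concurrent sessions")   # 5,000,000
# Now assume every concurrent session is SSL/TLS: that is millions of
# handshakes and crypto streams the access layer must terminate or inspect.
```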
The “Individual Lakes of Networks” we once knew have now become an Ocean of Networks, interconnected throughout the world. From a security logistics perspective, that’s a very different concept than what we’re used to.
Today’s trend towards complex, interconnected network environments presents a substantial security challenge, because your security posture is only as good as your weakest link. Think about how your current security is deployed and its ability to share, correlate, and respond to threats, and you quickly realize that you are going to need to rethink your approach.
Meanwhile, the sophistication of threats continues to increase, and many are now shared openly or even sold as a service. Simple detection and prevention, the approaches that make up the vast majority of security currently in use, just aren’t enough. Once you consider applying behavioral analytics and deep application inspection to highly encrypted SD-WAN traffic, you are talking about an exponential increase in the raw processing power that is needed, to say nothing of the visibility across the variety of network ecosystems required to anticipate and stop a threat while simultaneously closing that vulnerability anywhere else it may exist.
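For a sense of what “behavioral analytics” means at its simplest, here is a toy sketch that flags a flow whose byte volume deviates sharply from a peer baseline; the data and threshold are invented, and production systems model far more features than one:

```python
import statistics

# Baseline byte rates (bytes/min) observed from "normal" hosts; the numbers
# and the 3-sigma threshold are invented for illustration.
baseline = [1200, 980, 1430, 1100, 1250, 1010, 1390]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_anomalous(bytes_per_min: float, z_threshold: float = 3.0) -> bool:
    return abs(bytes_per_min - mu) / sigma > z_threshold

print(is_anomalous(1_300))   # False: within the normal range
print(is_anomalous(25_000))  # True: possible exfiltration or malware beacon
# Now imagine a far richer version of this running on every decrypted flow,
# across thousands of sites, in real time: that is the processing burden.
```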
Weaving Security Across Networks
So, what do we do differently? Our tendency is to divide networks up and create new, hardened borders. But such an approach defeats the purpose of the evolution towards open systems and interconnectivity. In the past, managing a variety of separate security vendors was tedious but still largely possible. Today, however, network resources need to be deployed elastically across an expanding and changing ecosystem, which makes managing your security deployment one vendor at a time increasingly impractical. Sophisticated threats and distributed networks require cross-referencing data from a variety of tools to detect and respond to threats, and isolated security solutions from multiple vendors make that increasingly difficult.
Instead, organizations need to weave their security solutions into a single, flexible framework that can be spread across the network and can dynamically adapt as the network evolves.
Purpose-built security devices need to be deployed at network connection points across the distributed ecosystem, where they can monitor and inspect local traffic. But they also need to be connected together through an intelligent fabric that provides a single view across the network, enabling the distribution and orchestration of a unified and adaptable security policy. And they need to share and correlate global and local threat intelligence in real time, regardless of where they have been deployed, automatically coordinating an effective and comprehensive response throughout the network, as close to the threat as possible.
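In code terms, such a fabric behaves something like a publish/subscribe bus for threat intelligence: a detection at one edge becomes prevention everywhere else. The sketch below is a conceptual illustration with invented class names, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    kind: str   # e.g. "ip", "domain", "file_hash"
    value: str

@dataclass
class EnforcementPoint:
    name: str
    blocklist: set = field(default_factory=set)

    def apply(self, ioc: Indicator) -> None:
        self.blocklist.add((ioc.kind, ioc.value))
        print(f"{self.name}: now blocking {ioc.kind}={ioc.value}")

class Fabric:
    """Shared bus: detection at one edge becomes prevention everywhere else."""
    def __init__(self) -> None:
        self.points: list[EnforcementPoint] = []

    def join(self, point: EnforcementPoint) -> None:
        self.points.append(point)

    def publish(self, source: str, ioc: Indicator) -> None:
        for p in self.points:
            if p.name != source:
                p.apply(ioc)

fabric = Fabric()
for name in ("branch-fw", "dc-fw", "cloud-gw"):
    fabric.join(EnforcementPoint(name))

# The branch firewall spots a malicious IP; the rest of the fabric blocks it.
fabric.publish("branch-fw", Indicator("ip", "203.0.113.66"))
```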
A Whole New World
We are not just facing another familiar security challenge. What is headed our way is unlike anything we have seen before, and we need to prepare now. Thoughtful engineering and careful planning, including the selection, deployment, and integration of security tools designed to work together across highly elastic and adaptive environments, are necessary if we are to meet the requirements of the new digital economy. This is our reality. Those that don’t make this transition may not survive.
This article was written by Matthew Pley and originally published on the Fortinet blog.