How to ensure optimal network workload placement

Network and cybersecurity requirements are the most important considerations when deciding where to deploy new network workloads across the enterprise.

With every new workload an enterprise deploys, IT teams must decide where to run it -- on premises, in the cloud or in a hybrid of the two. Picking the wrong option can result in unexpectedly high costs, poor application performance or significant cybersecurity and legal risks.

Ideally, IT teams should follow a well-documented network workload placement process -- one that is visible to all stakeholders and transparent about its criteria. Following such a process enables teams to weigh a wide range of factors, including the following (a simple scoring sketch follows the list):

  • Regulatory and compliance concerns.
  • Cost.
  • Scaling requirements.
  • Agility, or acceptable time to production.
  • Workload longevity.
  • Staff expertise.
  • Application architecture.
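
One way to keep the process transparent is to score each candidate environment against the same criteria. The Python sketch below shows a hypothetical weighted-scoring approach; the criterion names, weights and ratings are illustrative assumptions, not a standard model.

    # Hypothetical weighted-scoring sketch for placement options.
    # Weights and ratings are illustrative; each criterion is rated 0-5.
    WEIGHTS = {
        "compliance": 5,
        "cost": 3,
        "scaling": 3,
        "agility": 2,
        "longevity": 2,
        "staff_expertise": 2,
        "architecture_fit": 3,
        "network_performance": 5,
        "security": 5,
    }

    def score_placement(option_scores):
        """Sum weighted ratings for one placement option; missing criteria count as 0."""
        return sum(WEIGHTS[c] * option_scores.get(c, 0) for c in WEIGHTS)

    candidates = {
        "on_premises":  {"compliance": 5, "cost": 3, "scaling": 2, "network_performance": 4, "security": 5},
        "public_cloud": {"compliance": 3, "cost": 4, "scaling": 5, "network_performance": 3, "security": 4},
    }
    best = max(candidates, key=lambda name: score_placement(candidates[name]))
    print(f"Highest-scoring placement: {best}")

In practice, the weights come from the organization's own priorities; the value of the exercise is that every stakeholder can see why one environment won.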

However, the most important requirements focus on networking and cybersecurity.

Network performance is key

Proper workload placement considers several aspects of network performance. One is latency. The workload's front end has to be placed where most users don't experience excessive latency, or latency so variable that the application becomes unusable. Another factor is latency among the components of the application itself. Most modern applications comprise multiple services or microservices, and the longer it takes traffic to move from one component to another, the more likely end users are to experience poor performance, no matter where the front end sits.

As a result, engineers and architects should design plans that minimize latency among components and reduce its variability. One way to do that is to place the components in the same data center, or in an environment where the data center is directly connected to the IaaS provider.
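
To make that judgment concrete, teams can sample round-trip times between components in each candidate environment and compare them against latency and jitter budgets. The following sketch illustrates the idea; the sample values and thresholds are assumptions chosen for illustration.

    # Minimal sketch: flag component pairs whose average latency or
    # variability exceeds illustrative budgets. Sample data is made up.
    from statistics import mean, pstdev

    MAX_AVG_MS = 10.0     # assumed budget for average inter-component latency
    MAX_JITTER_MS = 3.0   # assumed budget for latency variability

    rtt_samples_ms = {
        ("web_frontend", "order_service"): [4.1, 4.3, 5.0, 4.8],
        ("order_service", "inventory_db"): [12.5, 14.2, 28.9, 13.1],
    }

    for (src, dst), samples in rtt_samples_ms.items():
        avg, jitter = mean(samples), pstdev(samples)
        if avg > MAX_AVG_MS or jitter > MAX_JITTER_MS:
            print(f"{src} -> {dst}: avg {avg:.1f} ms, jitter {jitter:.1f} ms "
                  "-- consider co-locating these components")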

Engineers also need to pay attention to application attributes beyond latency, such as how much data the application moves -- both its throughput in megabits per second and the number of megabytes a typical transaction requires. Even the pattern of network usage can be significant: An application that sends streams of many small packets needs different things from the network than one that sends streams of fewer, larger packets.
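
One simple way to capture those attributes is to turn observed traffic counters into a profile. The sketch below uses made-up counter values; the point is that throughput, average packet size and per-transaction volume each describe a different demand on the network.

    # Illustrative sketch: derive a rough traffic profile from observed totals.
    def traffic_profile(total_bytes, total_packets, transactions, duration_s):
        return {
            "throughput_mbps": total_bytes * 8 / duration_s / 1_000_000,
            "avg_packet_bytes": total_bytes / total_packets,
            "bytes_per_transaction": total_bytes / transactions,
        }

    # A chatty workload: many small packets.
    print(traffic_profile(total_bytes=50_000_000, total_packets=400_000,
                          transactions=10_000, duration_s=60))
    # A bulk-transfer workload: fewer, larger packets.
    print(traffic_profile(total_bytes=2_000_000_000, total_packets=1_500_000,
                          transactions=200, duration_s=60))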

Keep things secure

Similarly, a good network workload placement policy incorporates a range of cybersecurity assessments. Consider an environment without zero trust, for example. Teams might set a policy to push a workload with known vulnerabilities toward a network environment with stronger protections, reducing the enterprise's overall threat surface.
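
Expressed in code, such a policy can be as simple as a rule that steers vulnerable workloads toward the better-protected environment. The sketch below is hypothetical; the environment names and attributes are assumptions.

    # Hypothetical policy sketch: vulnerable workloads go to the environment
    # with stronger controls. Environment attributes are illustrative.
    ENVIRONMENTS = {
        "hardened_onprem": {"zero_trust": True, "microsegmentation": True},
        "general_cloud":   {"zero_trust": False, "microsegmentation": False},
    }

    def choose_environment(has_known_vulns):
        if has_known_vulns:
            # Push the workload toward an environment with zero-trust controls.
            return next(name for name, attrs in ENVIRONMENTS.items() if attrs["zero_trust"])
        return "general_cloud"

    print(choose_environment(has_known_vulns=True))   # -> hardened_onprem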

In every situation, a well-designed placement process sends workloads to the environment that meets the organization's compliance needs. Compliance requirements often include the following:

  • Keep data in specific geographies, such as a retail sales application keeping data related to buyers within the country in which they reside.
  • Keep data out of specific geographies, such as those where the enterprise's privacy protections aren't respected.

To that end, placement policies must take the following considerations into account:

  • Where available compute and storage resources are.
  • Where the related failover or backup resources are.
  • What paths the data might traverse getting to and from all these assets.

These considerations enable teams to place the workload in an environment that meets localization requirements and provides the necessary controls to ensure data goes only where it is allowed to go.
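
A minimal sketch of such a localization check, with hypothetical region codes standing in for real residency rules, might look like this:

    # Minimal localization-check sketch. Region codes, the placement record
    # and the rules are assumptions for illustration.
    ALLOWED_REGIONS = {"DE", "FR"}   # data must stay within these geographies
    DENIED_REGIONS = {"XX"}          # placeholder for any geography the policy excludes

    placement = {
        "compute": "DE",
        "backup": "FR",
        "network_path": ["DE", "NL", "FR"],   # geographies the traffic traverses
    }

    def is_compliant(p):
        locations = {p["compute"], p["backup"], *p["network_path"]}
        return locations.issubset(ALLOWED_REGIONS) and locations.isdisjoint(DENIED_REGIONS)

    print(is_compliant(placement))   # False: the path transits NL, which isn't on the allowed list

The check covers not just where the workload runs, but where its backups live and which geographies its traffic crosses in between.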

Network workload placement can be complex, but it is crucial to meeting application delivery goals and mitigating cybersecurity and liability risks. Enterprises should create and maintain a placement policy that addresses their specific performance and security needs.

John Burke is CTO and principal research analyst with Nemertes Research. With nearly two decades of technology experience, he has worked at all levels of IT, including end-user support specialist, programmer, system administrator, database specialist, network administrator, network architect and systems architect. His focus areas include AI, cloud, networking, infrastructure, automation and cybersecurity.
