Modernizing the Data Center Network Is Not About Switches: Part 2
Application availability for all users — anytime, from any location, on any cloud — is critical in meeting business needs.
Part one of this two-part article covered the disruptors to consider when planning for data center network modernization: application development methodology, cloud adoption, software-defined infrastructure, micro data centers, automation, shifting traffic patterns, security and more. All of these must be weighed during the planning phase, and it is essential that the business's needs and the applications that drive the company shape the plan for the data center network's future state.
Where do we start?
Begin with a vision of how the data center network can serve the business five to seven years from now. A modern data center may be one where applications can be provisioned in an automated and orchestrated way within minutes, whether on premises or in a public cloud.
Data center networks can be automated through software-defined networking (SDN). SDN architectures separate the control function from the hardware, making the network dynamic, cost-effective, adaptable and ideal for high-bandwidth data centers spanning multiple locations. SDN solutions are available from various vendors; the most commonly implemented is Cisco ACI, which requires Nexus 9K switches. For those with more time and programming expertise, SDN can also be built from open source projects.
Closely related to SDN is Network Functions Virtualization (NFV). Like SDN, it moves control of the network into software. However, it does not rely on any specific switches; instead, it implements network functions such as load balancing, routing and firewalling on servers. VMware NSX is a common NFV choice. Switches are still required to move data between individual physical servers within the data center. Together, NFV and SDN in the data center provide automation, allowing the network to be provisioned more quickly, with consistent policy regardless of location and increased security through segmentation.
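The key idea behind both SDN and NFV is declarative control: the desired network state is described as data and handed to a controller that makes the network match it. A minimal sketch of that style follows; the payload shape and field names are hypothetical and do not represent any vendor's actual API.

```python
"""Sketch of the declarative style SDN/NFV controllers use: describe the
desired network state as data and let the controller reconcile the network
to it. All names and the payload structure here are hypothetical."""

import json

# the desired state for one application segment, expressed as data
desired_state = {
    "segment": "payments-app",
    "subnet": "10.20.30.0/24",
    "services": ["routing", "load-balancing", "firewall"],
    "firewall_rules": [
        {"allow": "web-tier -> app-tier:8443"},
    ],
}

# a controller client would POST this document to the SDN/NFV control plane,
# which programs the underlying switches and virtual network functions
payload = json.dumps(desired_state, indent=2)
print(payload)
```

Because the configuration is data rather than per-device commands, the same segment definition can be applied consistently across locations, which is what makes policy consistency and automation practical.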
"IDC survey data indicates that 45% of IT staff time is taken up by routine operations such as provisioning, configuration, maintenance, monitoring, troubleshooting, and remediation, whereas only 21% is allocated to innovation and new projects." Orchestration chains multiple manual or automated tasks together and builds in policies that can take the place of human approvals, reducing the time it takes to deliver an application-ready environment from weeks to minutes. Using automation tools and scripts, orchestration eliminates the slow, repetitive, error-prone tasks that touch compute, network and storage. Agility, consistency, efficiency in IT operations, elimination of human error, cost reduction and time savings are all benefits of automation and orchestration.
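A policy gate that replaces a human approval can be sketched very simply: checks run automatically, and only requests that pass proceed through the provisioning steps. The rules, step names and request fields below are illustrative assumptions, not a real tool's interface.

```python
"""Minimal sketch of policy-gated orchestration (all names hypothetical)."""

def policy_allows(request):
    """Automated policy checks that stand in for manual approvals."""
    return (
        request["environment"] in {"dev", "test", "prod"}
        and request["cpu_cores"] <= 16          # capacity guardrail
        and request["owner"] is not None        # accountability required
    )

def provision(request):
    """Run the compute/network/storage steps as one orchestrated flow."""
    if not policy_allows(request):
        return "rejected: policy check failed"
    steps = [
        "allocate compute",
        "configure network segment",
        "attach storage",
        "apply security policy",
    ]
    for step in steps:
        # each step would invoke an automation tool or an SDN controller API
        print(f"running: {step}")
    return "environment ready"

print(provision({"environment": "dev", "cpu_cores": 8, "owner": "app-team"}))
```

Because approval is encoded as policy, a compliant request flows end to end in minutes, while a non-compliant one is rejected immediately instead of waiting in a queue.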
Day-two operations software provides analytics of data center traffic, offering insight into traffic patterns and ensuring policy consistency. It often includes a dashboard for easy viewing as a simple way to manage, monitor and troubleshoot the network. The deep visibility it provides enables lifecycle management and proactive troubleshooting, with the end goal of a self-healing data center network.
Network analytics is critical to understanding traffic patterns on the network. Over 80 percent of network traffic travels east-west within the data center. Once a breach or a threat compromises one server in a traditional VLAN, it can spread to them all. Segmentation via SDN on the network and micro-segmentation with host-based firewalls provide the zero-trust security needed for high-value workloads. Micro-segmentation cannot be achieved without understanding how applications talk to each other.
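The zero-trust model behind micro-segmentation is default-deny: east-west traffic is blocked unless a flow is explicitly allowlisted. A minimal sketch follows; the tier names and allowed flows are hypothetical stand-ins for a policy discovered through dependency mapping.

```python
"""Zero-trust micro-segmentation sketch: default-deny with an explicit
allowlist. The workload tiers and policy entries are hypothetical."""

# only the flows applications genuinely need, as (source, destination, port)
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def flow_permitted(src, dst, port):
    """Default-deny: a flow passes only if it is explicitly allowlisted."""
    return (src, dst, port) in ALLOWED_FLOWS

print(flow_permitted("web-tier", "app-tier", 8443))  # permitted by policy
print(flow_permitted("web-tier", "db-tier", 5432))   # denied: no direct path
```

Under this model, a compromised web server cannot reach the database directly, which is exactly the lateral movement a flat VLAN would have allowed.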
Application Dependency Mapping (ADM) can be used to determine those traffic patterns. ADM reveals outlier traffic that should not exist, allowing legacy protocols and improperly configured networks to be cleaned up. ADM also allows modeling the impact of moving an application to the public cloud, identifying which applications can move safely without creating excessive traffic hairpinning back to the on-premises data center.
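At its core, dependency mapping aggregates observed flows between applications and flags anything outside the known application map. The flow records and dependency set below are invented for illustration, not output from a real ADM product.

```python
"""Application dependency mapping sketch (hypothetical flow records):
aggregate observed flows, then flag outliers that fall outside the
known application dependency map."""

from collections import Counter

# observed flow records: (source app, destination app, protocol)
flows = [
    ("web", "app", "https"),
    ("web", "app", "https"),
    ("app", "db", "postgres"),
    ("legacy-host", "db", "telnet"),   # unexpected legacy traffic
]

# dependencies the application teams have documented
known_dependencies = {("web", "app"), ("app", "db")}

traffic = Counter((src, dst) for src, dst, _ in flows)
outliers = [f for f in flows if (f[0], f[1]) not in known_dependencies]

print("traffic volumes:", dict(traffic))
print("outlier flows to investigate:", outliers)
```

The outlier list is where cleanup starts: here it surfaces a legacy Telnet flow that should either be documented as a real dependency or shut down before segmentation policy is enforced.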
Planning for the scalability of a data center is always a challenge for companies. Cloud bursting allows on-premises data centers to scale during peak demand. Bursting has been a topic of discussion for some time; it is now a reality with the VMware Event Broker Appliance (VEBA), which monitors on-premises capacity and triggers an event when utilization passes a set threshold, at which point workloads burst into VMware Cloud on AWS. Similar solutions for other clouds will follow. Coupled with elasticity, which reclaims unused resources, bursting optimizes utilization.
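The trigger logic behind bursting is a simple threshold decision. VEBA itself reacts to vCenter events; the stand-in below only models the threshold step, and the threshold value and function names are assumptions for illustration.

```python
"""Cloud-bursting trigger sketch. A real deployment would react to capacity
events (e.g., via VEBA); this simplified stand-in models just the threshold
decision. The 80% threshold and names are hypothetical."""

BURST_THRESHOLD = 0.80  # burst once on-premises utilization exceeds 80%

def placement(utilization):
    """Decide where the next workload lands based on current utilization."""
    if utilization > BURST_THRESHOLD:
        # in a real setup, an event here would trigger provisioning
        # of capacity in the public cloud
        return "burst to public cloud"
    return "place on-premises"

print(placement(0.65))
print(placement(0.92))
```

The complementary elasticity step is the same decision in reverse: when utilization falls back below the threshold, burst capacity is released so the business stops paying for it.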
WWT has helped our customers along the journey to a secure, automated data center. Our workshops will help you begin the journey and then continue down the path. Instructor-led or self-directed on-demand technical training will help your team gain the skills necessary to support the technology that delivers for the business. Our professional services team provides mentoring so that your team is ready to run things on day two.
We look forward to working with you.