Every major vendor is promoting server virtualization and cloud computing as the magic technologies that will help you cut data center costs while increasing productivity and service availability. It’s true – data center redesign can save you a lot of money ... assuming you can make sense of the incomprehensible soup of technologies and acronyms and choose wisely.
This webinar will help you reach two important goals: understand the data center acronym soup and build a conceptual framework of the data center technologies and solutions.
Can you describe the benefits of server virtualization in two sentences? How about the reasons to deploy Storage Area Network (SAN)? When building a SAN solution, should you use FC, FCoE or iSCSI? Or is it better to use NFS or CIFS? What is the function of an HBA? Why would you need DCB? Can TRILL benefit your network? Should you wait for switches supporting L2MP? Or is OTV a better alternative? Should you buy MPLS/VPN or VPLS services when building a disaster recovery site?
After attending this webinar, you’ll be able to answer all of the above questions.
A data center redesign is most successful when the application teams, server administrators and networking engineers work hand-in-hand from the beginning of the project; late involvement of the networking team (usually once the performance of the new architecture fails to meet expectations) often results in finger-pointing and blame-shifting.
To become involved in the early stages of new data center projects, you have to understand the challenges faced by server and storage administrators, be fluent in the technologies they commonly use and prove that you can help them build better solutions. This webinar will give you a clear overview of data center challenges and a conceptual framework you need to quickly absorb the details of new technologies and solutions as they become available.
This webinar is ideal for IT managers and networking engineers who have to understand the big picture: how the data center buzzwords and technologies they hear about relate to reduced costs and increased availability of their data center services. It will also help engineers with a networking or programming background understand the architectural options and solutions used in modern data centers.
You want to truly understand a complex problem? Put aside the technology for a moment and follow the money. Data centers are no different. We’re all faced with pressure to reduce capital and operational expenses while increasing application availability. The only way to reach this goal is through aggressive use of modern virtualization technologies, which increase equipment utilization and reduce electricity and cooling costs.
This section covers the most common load balancing mechanisms:
Application-level load balancing: worker processes, event-based web servers, FastCGI offload, caching servers and reverse proxies, database sharding and replicas.
Network-based load balancing: local and global anycasting, local and global DNS load balancing, and load balancers operating in transparent mode, source-NAT mode and direct server return.
Application delivery controller features including session stickiness, TCP parameter adjustment, permanent HTTP sessions, SSL offload, and inter-protocol gateways (SPDY-to-HTTP).
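To make two of the concepts above concrete, here’s a minimal Python sketch of round-robin server selection combined with session stickiness (a client, once assigned, keeps hitting the same server). All names are illustrative; this is a toy model of the mechanism, not any vendor’s load balancer.

```python
from itertools import cycle

class StickyLoadBalancer:
    """Toy model: round-robin assignment of new clients,
    with session stickiness for clients seen before."""

    def __init__(self, servers):
        self.rr = cycle(servers)     # round-robin iterator over servers
        self.sessions = {}           # client address -> pinned server

    def pick(self, client):
        # A known client stays pinned to its server (stickiness);
        # a new client gets the next server in round-robin order.
        if client not in self.sessions:
            self.sessions[client] = next(self.rr)
        return self.sessions[client]

lb = StickyLoadBalancer(["web1", "web2", "web3"])
print(lb.pick("10.0.0.1"))   # web1
print(lb.pick("10.0.0.2"))   # web2
print(lb.pick("10.0.0.1"))   # web1 again -- the session is sticky
```

Real application delivery controllers implement stickiness with cookies or hashing rather than an in-memory table, but the decision logic is the same.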
Server virtualization (the ability to run multiple logical servers on the same physical hardware) is the core technology of modern data center design. It significantly increases equipment utilization, thus reducing power consumption, and drastically shortens the average server deployment time.
This section describes the networking and security challenges introduced by server virtualization.
Logical servers running within the same physical server might have different security requirements; sometimes you even have to isolate them from each other. Virtual Local Area Networks (VLANs) and private VLANs (PVLANs) have traditionally provided that inter-server isolation.
However, be aware that every server virtualization platform contains a virtual switch that extends the bridging domain of your network. The introduction of virtual switches has to be carefully planned ... or you might end up with a fantastic network meltdown due to a bridging loop or a security breach due to extra connectivity invisible to the networking gear.
You can only benefit from advanced server virtualization technologies if the storage used in your data center supports access to the same data from multiple physical servers. A storage area network (SAN) is thus a mandatory component of modern data center designs.
Understanding SAN evolution helps you understand the challenges storage networks face today. This section describes how SCSI transformed into Fibre Channel (FC) and iSCSI, explains the modern alternatives (iSCSI, FCoE or NFS), and covers the means of extending storage networks over long distances.
Some application developers and server administrators would like to see the Data Center designed as a huge bridged network, as this “design” makes their life extremely easy: every host can communicate with every other host even when using weird technologies that should never have been deployed (example: Microsoft’s NLB in Unicast Mode). The networking engineer trying to introduce scalability and security in such an environment is clearly doomed to fail.
This section describes the real need for data center bridging, emerging bridging technologies (DCB, L2MP, TRILL), their potential uses and pitfalls, and the proper role of routing in data center design.
Very high application availability is a must for any business that relies heavily on IT infrastructure for its day-to-day operations. The required availability is usually achieved with the help of redundant data centers operating in active/standby or even active/active (load balancing) configurations.
This section describes the basics of high availability data center design and demonstrates how you can use numerous Service Provider services (including dark fiber, MPLS/VPN and VPLS services) to build your redundant infrastructure.
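The core of an active/standby design can be illustrated with a tiny Python sketch of DNS-style failover: clients are directed to the primary data center while it passes health checks, and to the disaster-recovery site when it doesn’t. The site names and addresses are hypothetical; real deployments use GSLB appliances or DNS providers with health-checked records.

```python
def resolve(name, sites, healthy):
    """Toy global failover: return the address of the first
    healthy site in priority order (active/standby)."""
    for site, addr in sites:
        if healthy(site):
            return addr
    raise RuntimeError("no healthy site for " + name)

# Hypothetical sites: primary data center, then the DR site.
sites = [("dc-primary", "192.0.2.10"), ("dc-standby", "198.51.100.10")]

up = {"dc-primary": True, "dc-standby": True}
print(resolve("app.example.com", sites, lambda s: up[s]))  # 192.0.2.10

up["dc-primary"] = False   # simulate a primary data center outage
print(resolve("app.example.com", sites, lambda s: up[s]))  # 198.51.100.10
```

An active/active design would instead hand out both addresses (or pick one per client), which is exactly where the global load balancing mechanisms described earlier come into play.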
The webinar does not address device configurations or other low-level technical details. We can cover those details in a follow-up discussion during an on-site delivery, or you can attend in-depth, technology-specific webinars as they become available.