
Datacenter Resource Fragmentation

By Mike Bushong · Oct. 03, 14


The concept of resource fragmentation is common in the IT world. In the simplest of contexts, resource fragmentation occurs when blocks of capacity (compute, storage, or otherwise) are allocated, freed, and re-allocated until the remaining free capacity exists only as noncontiguous blocks. While the most familiar setting for fragmentation is memory allocation, the phenomenon plays out in the datacenter as well.
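The classic memory-allocation form of the problem can be shown in a few lines. This is an illustrative sketch only; the unit sizes and the first-fit policy are assumptions, not anything specific to a real allocator:

```python
# Toy simulation of resource fragmentation: total free capacity can be
# ample while no single contiguous run is large enough for a request.

def first_fit(blocks, size):
    """Return the start index of the first free run of `size` units, or None."""
    run_start, run_len = None, 0
    for i, used in enumerate(blocks):
        if not used:
            if run_start is None:
                run_start = i
            run_len += 1
            if run_len == size:
                return run_start
        else:
            run_start, run_len = None, 0
    return None

# 16 units of capacity, fully allocated; then free every other unit.
blocks = [True] * 16
for i in range(0, 16, 2):
    blocks[i] = False

print(f"free units: {blocks.count(False)}")              # 8 units free in total...
print(f"fit for 4 contiguous: {first_fit(blocks, 4)}")   # ...but no run of 4 exists
```

Half the capacity is free, yet a request for four contiguous units fails; that mismatch between total capacity and usable capacity is the essence of fragmentation.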

But what does resource fragmentation look like in the datacenter? And more importantly, what is the remediation?

The impacts of virtualization

Server virtualization does to compute what repeated allocation and freeing do to memory. By creating virtual machines on servers, each with a customizable resource footprint, the once large, contiguous blocks of compute capacity (the servers themselves) can be divided into much smaller subdivisions. And as applications take advantage of this compute model, they become more distributed.

The result is an application environment where individual components are distributed across multiple devices, effectively occupying a noncontiguous set of compute resources that must be unified via the network. It is not a stretch to say that for server virtualization to deliver on its promise of higher utilization, the network must act as the Great Uniter.
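A greedy placement sketch makes the scattering concrete. The VM names, sizes, and first-fit policy below are all hypothetical; the point is only that per-VM placement routinely lands one application's components on different hosts:

```python
# Sketch of how per-VM placement scatters one application's components
# across hosts. Hosts hold free "cores"; VMs are placed first-fit.

def place(vms, hosts):
    """Greedy first-fit placement; returns {vm_name: host_index}."""
    placement = {}
    for name, size in vms:
        for i, free in enumerate(hosts):
            if free >= size:
                hosts[i] -= size
                placement[name] = i
                break
        else:
            raise RuntimeError(f"no host can fit {name}")
    return placement

hosts = [16, 16, 16]                       # free cores per host
vms = [("other-a", 10), ("app-web", 8), ("other-b", 6),
       ("app-db", 8), ("app-cache", 6)]
placement = place(vms, hosts)
app_hosts = {h for vm, h in placement.items() if vm.startswith("app-")}
print(placement)
print(f"one app spread across {len(app_hosts)} hosts")
```

The three "app-" components end up on more than one host, so every transaction between them crosses the network; that is the compute analog of a noncontiguous allocation.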

Not just a virtual phenomenon

While fragmentation is easily explained in a virtualized context, the phenomenon is certainly not only a virtual one. After creation, datacenters grow organically. In the best of times, they grow at very predictable, steady rates. More frequently, they grow in fits and starts as business requirements heap new application demands on top of existing infrastructure.

This growth model is made even more chaotic because of physical constraints. Rows are finite and have an end. If you want to rack up additional compute next to existing compute for a particular application, you might have to move a row over. But what about when that row itself is taken? Then maybe you move a couple rows over. Or a room over. Or maybe into another datacenter entirely.

Physical locations are also constrained by how much space they have. Even if you have the will to expand, there might simply be no additional real estate to consume. So you build up, in which case the resources you need are now separated by a floor. Or maybe you build out and separate resources by a short distance across the campus. Or across the city. Or maybe even across the country.

Sometimes it’s not even the physical space. With very large footprints, trying to pull enough power from the grid might be impossible. And then there are all the business continuity requirements that frequently lead to datacenter resource sprawl across physical locations.

The point is that growth is rarely linear, and this means that physical resources cannot normally be guaranteed to be in close proximity. What started as a nicely groomed cluster of compute and storage turns into a set of noncontiguous resources spread out across whatever physical footprint your datacenter (or datacenters) occupies.

Unifying noncontiguous resources

There are, of course, ways to unify resources that suffer from this type of sprawl. In the best of cases, if all of your servers are equivalent, you can migrate VMs over time to restore contiguity. The orchestration of such a feat is nightmarish enough, even setting aside the impact of all that activity and the risk it incurs.
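The consolidation itself is easy to plan on paper, which is part of what makes the operational reality so deceptive. A minimal repacking sketch, using first-fit-decreasing with made-up VM sizes and a hypothetical per-host capacity, shows how quickly the migration count adds up:

```python
# Sketch of "defragmentation by migration": repack VMs onto as few hosts
# as possible and count the moves required. Real orchestration must also
# weigh downtime, bandwidth for the copies, and the risk of each move.

def consolidation_plan(placement, sizes, capacity):
    """First-fit-decreasing repack; returns (new_placement, migration_count)."""
    new_placement, free = {}, []
    for vm in sorted(sizes, key=sizes.get, reverse=True):
        for i in range(len(free)):
            if free[i] >= sizes[vm]:
                free[i] -= sizes[vm]
                new_placement[vm] = i
                break
        else:                               # open a new host
            free.append(capacity - sizes[vm])
            new_placement[vm] = len(free) - 1
    moves = sum(1 for vm in placement if placement[vm] != new_placement[vm])
    return new_placement, moves

sizes = {"a": 6, "b": 6, "c": 4, "d": 4, "e": 2}
placement = {"a": 0, "b": 1, "c": 2, "d": 3, "e": 4}   # one VM per host
plan, moves = consolidation_plan(placement, sizes, capacity=8)
print(f"hosts needed: {max(plan.values()) + 1}, migrations: {moves}")
```

Even this toy case shrinks five hosts to three only by moving live workloads, and every move is a window of risk; at datacenter scale the plan becomes the nightmare the paragraph above describes.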

So if there is no practical datacenter equivalent of defragmentation, what do you do?

The network ends up playing a unifying role. So long as resources are connected, they can work in concert to deliver some application workload. But not all networks are the same, and depending on the spread of resources, the type of network needed varies.

Not all networks are the same

If resources are contained, now and forever, in a fairly tight geographical space, then providing rack-to-rack or row-to-row connectivity is fairly straightforward. But what if the applications running across those resources are more bandwidth-hungry? You might need to consider cross-connect and offload solutions. What if those applications are particularly latency-sensitive? You might favor completely flat architectures over more traditional two- and three-tier networks.

If resources are not so easily contained, the network choices expand. If application workloads are distributed across different rooms in a datacenter, you have to consider the impact of room-to-room connectivity. Is that done through a WAN connection, in which case you take on yet another networking layer? Or do you use optical equipment to stretch an L2 domain across some physical distance, in which case you have to consider laying or leasing fiber?

And even then, as distances grow from a few hundred meters to a few thousand kilometers, the considerations change again.
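A back-of-the-envelope calculation shows why distance alone changes the conversation. Assuming a signal speed in fiber of roughly 200,000 km/s (about two-thirds of c; an approximation, and it ignores switching and queuing delay entirely), propagation delay grows from negligible to dominant as resources spread out:

```python
# Pure propagation delay in fiber at an assumed ~200 km per millisecond.

SIGNAL_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """One round trip of propagation delay only; no switching or queuing."""
    return 2 * distance_km / SIGNAL_KM_PER_MS

# Row, campus, metro, cross-country (illustrative distances).
for distance in (0.3, 50, 1000, 4000):
    print(f"{distance:>7} km: ~{round_trip_ms(distance):.3f} ms RTT")
```

A few hundred meters costs microseconds; a thousand kilometers costs about 10 ms per round trip before any equipment touches the packet. No amount of network design removes that floor, which is why latency-sensitive workloads constrain how far apart their resources can sit.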

Conditions will change

Finally, the complexity only increases as you consider that all of this is a moving target. When your business is smaller, perhaps you can keep everything in one location. A few years down the road, maybe you outgrow your site or leasing terms change. Your company acquires another company, and you now have resource sprawl with a datacenter consolidation project on the horizon.

Accounting for all of the potential outcomes is challenging. The best you can do is create solid architectural building blocks that preserve the most optionality across plausible outcomes. In that regard, planning for growth means considering how that growth might materialize and making flexibility a primary requirement of the underlying infrastructure.

The bottom line

As datacenters grow, application resources will become fragmented. The question is not whether you will have to deal with this but rather how quickly your infrastructure can adapt. Architecting with this explicitly in mind could mean the difference between natural evolution and the types of transformation initiatives that stop companies dead in their tracks every 3-5 years.

[Today’s fun fact: Chewing gum while peeling onions will prevent you from crying. It doesn’t work as well in romantic comedies.]


Published at DZone with permission of Mike Bushong, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
