Why Modularity and Where to Start

June 7, 2019

Through years of partnering with clients and integrating our solutions into their existing tools, we’ve found that a bit of flexibility can go a long way. That’s why we’ve built the upcoming Enterprise Edition of our streaming platform to have removable pieces. We know that there is never a one-size-fits-all solution when it comes to data needs, and supporting that flexibility forces our own product team into some great practices to ensure we handle all of the edge cases we allow.

More and more, distinct systems are coming online with APIs that allow direct connections, and the growing popularity of open-source tools that enable interconnectivity is less a passing trend than an evolution in how we see these solutions. This allows our world to connect the dots and gain previously unattainable insight.

While our Enterprise Edition is installed on private computing clusters, it’s worth evaluating how you can inject modularity into your own applications. Here are three benefits we see from having modular offerings, and one watch-out as you pursue your own path toward these more complex systems.

Go with the edge-cases

Normally, the best path is to have everyone conform to the same convention, guaranteeing a high level of compatibility without the cost of complexity. In practice, however, working with large, dedicated data teams, you will find patterns specific to the way they work, datasets unique to them, and long-in-the-tooth legacy systems with very deep roots.

This leads to one of two things: either adding those use cases to your common specification and expanding its complexity, or going with the flow and letting each client take a custom, unique approach to how their data is processed. We focused on the latter because it allows us to support clients all the way and become invaluable to them.

Compromise builds stronger relationships

While we do support edge cases, we also set recommendations on how data should be treated. Our goal is to move our Enterprise Edition away from being locked into static data schemas while still influencing our clients’ decisions on how they implement their solutions. This approach allows the client’s data team to become more informed, helps us navigate more complex issues, and prepares us and our product to become more resilient.

This also enables our product to grow with the data team, giving them launching pads to become more effective and freeing their full brain-power to solve specific problems, while giving us a role in, and a way to contribute to, their long-term success. This support role in their data infrastructure, in turn, makes us more valuable to them and earns us the long, specific contracts that allow us to grow as a company.

Complexity keeps our processes in check

Forcing yourself to support a product that allows for any degree of modularity inflicts a lot of pain if you are not stringent about your own standards. Adding features requires diligent planning, your teams have to religiously document what they create, and the way that all of your microservices communicate with each other must be resilient… so all the things you’re supposed to be doing anyway.

Think of modularity as a type of Chaos Engineering that exists within your product’s architecture, where you have to hedge against the possibility of misconfigurations and operator errors during implementation and operation. Every new feature has to be thought through across the entire lifecycle of the product and engineered to support existing systems while remaining ready to accept future modular components.
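As a hypothetical illustration of hedging against misconfigurations, one common practice is to validate each module’s configuration at load time rather than letting a bad config surface deep in production. This sketch is an assumption for illustration only (the key names and rules are invented, not MetaRouter’s actual schema):

```python
# Hypothetical sketch: reject a bad module config before it is ever loaded.
# REQUIRED_KEYS and the https rule are invented examples, not a real product schema.

REQUIRED_KEYS = {"name", "version", "endpoint"}

def validate_module_config(config: dict) -> list:
    """Return a list of problems found; an empty list means the config is usable."""
    problems = []
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    # Operator-error hedge: insist on encrypted transport for any endpoint.
    if "endpoint" in config and not str(config["endpoint"]).startswith("https://"):
        problems.append("endpoint must use https")
    return problems
```

Running every pluggable component’s config through a check like this is a cheap way to turn a silent runtime failure into a loud, actionable error at deploy time.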

Not all just roses

However, with all the benefits comes the ultimate hazard: if you are too modular, it can doom your organization’s success from the get-go. By far the trickiest part of offering modularity is knowing where to draw the lines you’re unwilling to cross. Focus on the product’s elevator pitch to know where those lines lie, as it demonstrates what’s most important to your team and your main value. If you allow that main value to be diluted just to fit certain clients, it creates a systemic problem that helps neither you nor them. Also, by establishing what is and is not allowed to become modular, you enable the individual contributors on your team to come up with their own ideas around components and modularity. You’d be surprised how much innovation can happen with just a few guardrails.

It’s better to do a few things very well than many things poorly. You’ll want to be deliberate when you select which parts of your product to make modular, as maintaining them down the road will always be more work than you believe. If your product would greatly benefit from a large number of interchangeable components, make sure you leverage the community by documenting how to make connections into your system and providing boilerplate examples. Best case, people who want certain integrations become enabled to create their own. Worst case, your onboarding and ability to hire contractors just became exponentially easier.
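To make the “boilerplate examples” idea concrete, a minimal connector interface might look like the sketch below. Everything here — the `Connector` base class, its method names, and the event shape — is a hypothetical illustration of the pattern, not MetaRouter’s actual API:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Hypothetical base class a community integration author would subclass."""

    @abstractmethod
    def transform(self, event: dict) -> dict:
        """Map an incoming event into the destination's expected shape."""

    @abstractmethod
    def deliver(self, event: dict) -> None:
        """Send the transformed event to the destination system."""

class StdoutConnector(Connector):
    """Boilerplate example: 'delivers' events by printing them."""

    def transform(self, event: dict) -> dict:
        # Invented mapping for illustration: rename keys into a flat shape.
        return {"name": event.get("event", "unknown"),
                "props": event.get("properties", {})}

    def deliver(self, event: dict) -> None:
        print(self.transform(event))
```

Publishing a small abstract class like this, plus one working reference implementation, is often enough for motivated users to build the integrations you haven’t had time for.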

MetaRouter + Modularity

This post opened with a mention of our Enterprise Edition’s “removable pieces”. When the team first designed our systems, we concluded that infrastructural mobility would be essential to the product’s success. As tech stacks become more modular and sophisticated, so must the services hoping to connect with them.

For our purposes, this means having an engine that separates cleanly into its main components (sub-engines of sorts), elegantly satisfies nuanced conditions, and securely deploys on any private cloud architecture (thanks, Kubernetes!). As the team looks to the future, we continually look for ways to make both the internal and external mechanisms of our products more modular, sustainable, and headache-free for our customers.


Originally published at https://www.metarouter.io.

MetaRouter is a data engineering company with a mission to realize the robust and sustainable systems of the future. We create data routing solutions for all sizes, from our private cloud enterprise edition to our accessible hosted cloud offering.

Special thanks to Jonathan Kenney for contributing to this post.