By now, the advantages of microservices, such as greater agility, modularity, scalability, and reliability, are probably familiar to you. But like any new technology, microservices present new challenges alongside the new benefits.

For that reason, you may be asking yourself whether the benefits of microservices outweigh the challenges that come with them.

The short answer is a resounding yes. For the long answer, keep reading this article, where I discuss the pros and cons of microservice architectures and then explain how monitoring can help address the challenges so that you can get the most out of containers and microservices with the least hassle.

Pitfalls of Microservices and Docker Containers

Let’s start by examining the challenges (or pitfalls, if you prefer a more pejorative term) that come with microservices.

The first aspect of microservice and container-based architecture that might seem to be a pitfall is the larger number of moving parts involved. However, I think describing it this way is a disservice (pun intended). Watches, clocks, mobile devices, and operating systems, for example, are made up of many different components of various sizes, yet they can still be elegant, reliable, and maintainable.

So, rather than thinking of microservices’ moving parts as a pitfall in themselves, think of them as the source of the specific challenges you face when using microservices. With this line of thinking, you can identify those challenges and develop a plan for solving them. That’s better than just saying “microservices have lots of moving parts, and that’s bad,” because there is no way to reduce the number of moving parts. They’re part and parcel (again, pun intended!) of microservices.

Following are the specific challenges that result from the complex environment that you create when you use microservices.

1. Microservice and Container Communication

With any SOA-based system, and especially with web-based systems, a great deal of network setup and communication takes place. IP addresses need to be configured, network ports need to be assigned and opened, cloud-based services need to be turned on, and message formats between services need to be defined and standardized. Additionally, when network-based resources communicate with one another, the chance of failure increases. As a result, load-balanced and fault-tolerant (i.e., redundant) implementations are often desirable. Done well, this can lead to increased availability and reliability, but it can increase complexity and cost as well.
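To make the coordination concrete, here is a minimal sketch in Python (using the requests library) of one service calling another over HTTP. The service name, port, endpoint path, and message fields are hypothetical; the point is that every one of these details has to be agreed upon and configured, and that network failures have to be handled at each call site.

```python
import requests

# Hypothetical address of a downstream "orders" service. In practice this
# would come from configuration or service discovery, not a hard-coded value.
ORDERS_SERVICE_URL = "http://orders-service:8081/api/v1/orders"

def fetch_order(order_id: str) -> dict:
    """Call the (hypothetical) orders service and return its JSON payload."""
    try:
        # A timeout is essential: a slow or unreachable peer should fail fast
        # rather than hang the calling service.
        response = requests.get(f"{ORDERS_SERVICE_URL}/{order_id}", timeout=2.0)
        response.raise_for_status()
    except requests.RequestException as exc:
        # Network errors become more likely as service-to-service calls
        # multiply, so every call site needs an explicit failure path.
        raise RuntimeError(f"orders service unavailable: {exc}") from exc

    # The message format (field names, types, versioning) must be agreed on
    # by both teams; here we simply assume a JSON body.
    return response.json()
```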

2. Performance Impact of Microservices and Docker Containers

The performance of microservice-based systems can be impacted by the additional network activity and associated latency, the overhead of interprocess communication and remote procedure calls, and cloud-based service calls along with the additional security those services impose. When the system is properly designed and configured, this latency may be negligible, but it can still add up or create uncertainty about meeting service-level agreements.
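One practical response is simply to measure the overhead. The sketch below times repeated calls to a service endpoint and reports basic latency statistics; the URL and sample count are hypothetical, but this is the kind of data you need before deciding whether the per-hop cost threatens a service-level agreement.

```python
import statistics
import time

import requests

# Hypothetical health endpoint of a downstream service.
TARGET_URL = "http://inventory-service:8082/healthz"
SAMPLES = 50

def measure_latency(url: str, samples: int) -> None:
    """Time repeated HTTP round trips and print summary statistics."""
    durations_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=2.0)
        durations_ms.append((time.perf_counter() - start) * 1000)

    durations_ms.sort()
    print(f"median: {statistics.median(durations_ms):.1f} ms")
    print(f"p95:    {durations_ms[int(0.95 * len(durations_ms))]:.1f} ms")
    print(f"max:    {durations_ms[-1]:.1f} ms")

if __name__ == "__main__":
    measure_latency(TARGET_URL, SAMPLES)
```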

3. Additional Human Communication Needed

When trouble brews within an organization or development effort, it can often be traced back to ineffective communication or just a lack of it. This is especially true today, as development groups are increasingly distributed geographically. Not to be confused with communication between the microservices themselves, this pitfall involves the additional communication and coordination required across different development teams when microservice-based systems are built and supported. This can be more pronounced when one or more services are built by external organizations.

4. Microservice and Docker Container Deployment Complexity

Before they can communicate, services and containers need to be deployed. The development of microservices is often distributed across developers or development teams within an organization, or even across organizations. As a result, the different development approaches and platforms in use can lead to different deployment requirements and techniques. Managing the variance in deployment details across many services can be a challenge. Docker containers help to a certain degree, but the containers themselves need to be managed, and their deployment can still vary. Overall, this adds to the complexity of automation in a DevOps practice.
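As a rough illustration of managing that variance, the following sketch drives the standard docker CLI from a single per-service manifest, so every service is deployed through the same code path regardless of the language or framework inside the image. The service names, image tags, and ports are hypothetical.

```python
import subprocess

# Hypothetical deployment manifest: one entry per service, independent of the
# language or platform used inside each image.
SERVICES = [
    {"name": "orders", "image": "registry.example.com/orders:1.4.2", "port": 8081},
    {"name": "inventory", "image": "registry.example.com/inventory:2.0.1", "port": 8082},
]

def deploy(service: dict) -> None:
    """Run one service as a detached container using standard docker CLI flags."""
    cmd = [
        "docker", "run",
        "--detach",
        "--name", service["name"],
        "--publish", f"{service['port']}:{service['port']}",
        service["image"],
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for svc in SERVICES:
        deploy(svc)
```

In practice an orchestrator would own this loop, but the principle is the same: push the per-service differences into data and keep the deployment mechanism uniform.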

5. Differing Microservice and Container Developer Techniques

Related to the pitfall of variance in deployment techniques, varying development philosophies of different development groups can be a pitfall as well. Different languages, platforms, architectural approaches, styles, communication protocols, tools, and so on, can lead to non-uniformity. All of this can be difficult to track and understand. Although these differences can have positive effects, as better designs and approaches are exposed to others to adopt, they can be crippling as well—especially when different skill sets are required.

6. Individual Microservice and Container Configuration

Personally, I despise software configuration. In my experience, it wastes time and effort, and it kills productivity. With multiple containers and services to configure, this pain is amplified. Each microservice needs to be individually configured, and settings can vary by service depending on the language and platform used, the OS environment variables set, the middleware chosen, the databases implemented (e.g., SQL or NoSQL), and so on. Containers can help here, as those details are often enclosed within and abstracted away, but the containers themselves may need to be specifically configured.
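One common way to keep per-service configuration manageable is to read everything from environment variables with sensible defaults, since Docker and most orchestrators can inject those values at container start. A minimal sketch, with hypothetical variable names and defaults:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Configuration for one (hypothetical) service, sourced from the environment."""
    http_port: int
    database_url: str
    log_level: str

def load_settings() -> Settings:
    # These variables can be set via `docker run --env ...`, a compose file,
    # or an orchestrator, so the same image works in every environment.
    return Settings(
        http_port=int(os.environ.get("HTTP_PORT", "8080")),
        database_url=os.environ.get("DATABASE_URL", "postgresql://localhost/app"),
        log_level=os.environ.get("LOG_LEVEL", "INFO"),
    )

if __name__ == "__main__":
    print(load_settings())
```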

7. Lack of Failsafe Measures

Coordinating a large set of microservices to work together to form a single application can lead to fragility. The mean time between failures decreases, while the probability of failure increases. However, losing a service, or even multiple services, should not bring down the entire system or render your application unusable. Netflix, for example, uses tools such as Chaos Monkey and Chaos Gorilla to deliberately inject failures, and that discipline has led to an impressive record of reliability. But Netflix had to specifically engineer this resilience in.
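One widely used failsafe measure (not specific to Netflix's tooling) is a circuit breaker around calls to a dependency, so that a failing service is bypassed quickly instead of dragging down its callers. Here is a minimal, illustrative sketch; real deployments would use a hardened library rather than hand-rolled code like this.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: stop calling a dependency that keeps failing."""

    def __init__(self, max_failures: int = 3, reset_after_s: float = 30.0):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = 0.0

    def call(self, func, *args, **kwargs):
        # While the breaker is open, fail fast instead of waiting on timeouts.
        if self.failures >= self.max_failures:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                raise RuntimeError("circuit open: dependency presumed down")
            self.failures = 0  # half-open: let one trial call through

        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# Hypothetical usage: breaker = CircuitBreaker(); breaker.call(fetch_order, "A-1001")
```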

8. Potential Overlap of Concerns

An anti-pattern I call Overlap of Concerns is the antithesis of Separation of Concerns (where software is broken into distinct components, such that each addresses a separate concern). When overlap occurs with microservices, making even a minor change to a single discrete area of functionality requires you to make many (perhaps small) changes to multiple microservices. The resulting coordination adds complexity to the testing, deployment, and monitoring processes. For example, if a modification results in changes to three microservices, even minor changes, then all three need to be deployed as a unit, and hence rolled back as a unit if problems arise.

9. Service Assemblies and Docker Container Pods

Service assemblies define the characteristics of microservices that depend on and call other microservices. Complexity due to component dependencies has a long history in software development and maintenance, whether it involves C modules, C++ classes, COM objects, JavaBeans, or services and containers. Most of us have lived this complexity with Unix libraries (for example, the complexity of constructing compiler toolchains) and Windows DLLs (the infamous “DLL hell”). In this scenario, different versions of the same component (a DLL or a microservice, it doesn’t matter) are required by different components (or other microservices) that together form a single application.

10. Nanoservices and Containers

Microservices are meant to be smaller, more distinct services in an SOA-based architecture. A nanoservice, however, is an anti-pattern in which a service is too fine-grained. As a result, the service overhead involved (communication, management, maintenance, and so on) may outweigh its usefulness. This applies to containers as well, when a trivial or relatively simple component is given its own container rather than running within an existing container or being bundled with the application directly. In both cases, the overhead of the service machinery or of Docker itself is greater than the benefit it offers, considering the nature of the implementation.

Using Monitoring to Solve the Microservices and Container Challenge

That’s a long list of reasons why microservices and Docker containers can be harder to work with than more traditional architectures. Fortunately, there’s a common solution to all of these challenges: monitoring.

With the proper monitoring plan in place, you can manage the extra complexity that arises within containerized environments, and ensure that your investment in Docker and microservices returns the dividends you seek.

Using monitoring to manage microservices and containers requires developing a strategy and toolset for isolating issues and identifying the specific services at fault when trouble occurs. Following are the practices and methodologies that will help you arrive at this solution.

Monitoring Microservices and Docker Container KPIs

Microservice success begins with the effective monitoring of key performance indicators (KPIs). These give you critical insight into how your microservices are performing, along with the environment they execute in. Here are some qualities to look for when deploying a monitoring approach and leveraging third-party monitoring tools.

Monitoring Scalability

In many ways, microservices monitoring systems need to be even more available and scalable than the services they monitor. And since SOA-based systems can span literally hundreds or thousands of servers (or even unknown quantities and types of servers in a public cloud deployment), monitoring transactions distributed across all of them can be demanding.

Your monitoring systems need to help you ensure that all of those servers (running your services and containers) are up and running, with ample available disk space and resources, from surface-level metrics such as server CPU usage down to deeper issues such as thread deadlocks. This requires handling massive amounts of data across large numbers of servers, quickly, with real-time dynamic reporting. Given that Docker container deployments can occur rapidly in an agile organization, sometimes with very short lifespans, rapid monitoring and reporting is a necessity.
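At the level of an individual service, one common building block is a lightweight health endpoint that a monitoring system can poll cheaply across hundreds of instances. Below is a minimal sketch using only the Python standard library; the endpoint path, port, and reported fields are hypothetical conventions, not part of any particular monitoring product.

```python
import json
import os
import shutil
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Serve a small JSON health report that a monitoring system can poll."""

    def do_GET(self):
        if self.path != "/healthz":
            self.send_response(404)
            self.end_headers()
            return

        disk = shutil.disk_usage("/")
        report = {
            "status": "ok",
            "disk_free_bytes": disk.free,             # ample disk space?
            "thread_count": threading.active_count(), # runaway threads?
            "pid": os.getpid(),
        }
        body = json.dumps(report).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Hypothetical port; every service instance would expose the same endpoint.
    HTTPServer(("0.0.0.0", 9100), HealthHandler).serve_forever()
```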

Root Cause Analysis

Highly granular monitoring and logging are required to pinpoint precisely where problems lie when issues occur. Given the dependencies between microservices and containers in these environments, knowing exactly which service or services are at fault enables you to act both correctly and quickly. An immediate but ineffective response is often worse than no response at all.

Root-cause analysis helps to rapidly and accurately identify the services and associated development groups involved when issues arise, helping to avoid misdiagnosis and the finger-pointing that can otherwise occur. Additionally, this analysis needs to include the geographic regions involved in order to mobilize the correct management teams to resolve the issue. Your microservice monitoring solution should automate as much of this analysis as possible in order to ensure the required accuracy and speed, and offer real-time reporting of the facts.
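A prerequisite for this kind of analysis is that every service emits logs that can be correlated across service boundaries. The sketch below shows structured logging with a propagated correlation ID; the field names and services are hypothetical, and in practice the ID would travel between services in an HTTP header or message attribute.

```python
import json
import logging
import time
import uuid

def log_event(service: str, correlation_id: str, message: str, **fields) -> None:
    """Emit one JSON log line that downstream tooling can index and correlate."""
    record = {
        "ts": time.time(),
        "service": service,
        "correlation_id": correlation_id,
        "message": message,
        **fields,
    }
    logging.getLogger(service).info(json.dumps(record))

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(message)s")

    # The ID is generated at the edge and passed along with every downstream
    # call, so one user request can be traced across every service it touches.
    correlation_id = str(uuid.uuid4())
    log_event("checkout", correlation_id, "order received", order_id="A-1001")
    log_event("payments", correlation_id, "charge attempted", amount_cents=4999)
```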

DevOps Support

Microservice development and deployment, typically following Agile practices, tend to occur quickly and frequently. As a result, a thorough monitoring solution needs to be in place to rapidly provide feedback and compare the behavior of new deployments to previous ones. Automation should be leveraged to help pinpoint potential trouble and let you take action before users are impacted.
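That comparison can be automated with a simple gate: measure a KPI for the previous deployment, measure it again for the new one, and flag regressions before full rollout. The metric, thresholds, and numbers below are purely illustrative.

```python
# Hypothetical gate: compare a new deployment's error rate against the
# previous deployment's baseline before routing full traffic to it.
BASELINE_ERROR_RATE = 0.004   # measured on the previous version
CANDIDATE_ERROR_RATE = 0.011  # measured on a canary of the new version
TOLERANCE = 1.5               # allow up to 1.5x the baseline

def deployment_is_healthy(baseline: float, candidate: float, tolerance: float) -> bool:
    """Return True if the candidate's error rate stays within tolerance of the baseline."""
    return candidate <= baseline * tolerance

if __name__ == "__main__":
    if deployment_is_healthy(BASELINE_ERROR_RATE, CANDIDATE_ERROR_RATE, TOLERANCE):
        print("promote new deployment")
    else:
        print("roll back: error-rate regression detected")
```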

Additionally, service configurations can rapidly change, sometimes migrating to and from the cloud. Therefore, your monitoring solution needs to adapt to these changes rapidly as well, giving you confidence that no component in your microservice-based architecture will be missed.

Support Agile Through Feedback of Best Practices

Just as microservices can vary in their architectures, languages, platforms, and components used, your monitoring solution needs to support a wide range of approaches. Providing accurate feedback, correlated back to the design decisions that went into each microservice and container, enables their future refinement. This feedback will help everyone (including your development teams and vendors) adopt best practices for future success.

This also extends to less commonly covered areas of technology, such as domain-specific languages, advanced storage systems, and support for mobility. With this level of detail from effective monitoring, development teams will see for themselves which decisions lead to the best results, without being forced to adhere to a dictated standard.

Conclusion: Effective Microservice and Container Implementation and Monitoring

As explained above, microservices and containers bring their own sets of complexities and pitfalls to the software development lifecycle. However, effective monitoring and reporting of these systems will help drive the refinements and improvements that allow you to overcome those pitfalls. In the end, with real data to back up future design decisions, microservice-based systems will result in highly scalable and adaptable solutions that give your organization a competitive advantage.

After all, there’s a reason Docker has become so massively popular in a short time, despite its greater complexity as compared to older forms of infrastructure. Organizations that have adopted Docker effectively have learned to handle the challenges that come with it by revamping their monitoring strategies.