5 Key Considerations for Monitoring OpenShift


Software is taking over the world

As a result, every business needs to embrace software as a core competency to ensure survival and prosperity. However, transforming into a software company is a significant task–building and running software today is harder than ever. And if you think it’s hard now, consider that–just like most businesses–you are only just at the beginning of the journey.

Speed and scale: a double-edged sword

You invested in OpenShift to build and run your software at a speed and scale that will transform your business.

And that’s where OpenShift excels. But are you prepared for the complexity of building and running applications at speed and scale? As software development transitions to a cloud native approach, you will be dealing with hundreds, if not thousands, of microservices and containers as well as software-defined cloud infrastructure. Any complexity you already face now will be dwarfed in the immediate future to the point that it will become too immense for humans to handle.

No doubt you have invested in monitoring tools–probably lots of them over the years. But traditional monitoring tools don’t work in this new dynamic world of speed and scale that OpenShift enables. That’s why many analysts and industry leaders predict that more than 50% of enterprises will have to entirely replace their traditional monitoring tools in the next few years.

What killed traditional monitoring?

Manual effort
Slow, manual deployment and configuration, coupled with manual upgrades and rework when environments change, means a maximum of just 5% of apps are monitored.

Monitoring tool proliferation
Multiple monitoring tools for different purposes, with siloed teams looking at myopic data sets.

Agent complexity
A complex mix of agents for diverse technology types, each with different deployment, installation, and configuration processes.

Just a bunch of charts
Data from multiple agents and different sources may look great, but it amounts to just a bunch of charts on a dashboard with no answers.

Which brings us to why we’ve written this guide. We understand how important your software is. And we know that choosing the right monitoring platform is mandatory if you want to live by speed and scale, and not die by speed and scale.

We worked with your peers from across industries to arrive at our insights

As a Red Hat Technology Partner and an OpenShift Certified Operator, Dynatrace supports some of the world’s most recognized brands. We help them automate their operations and release better software faster. We have experience monitoring the largest cloud and OpenShift implementations, which gives us a unique perspective into how enterprises manage the significant complexity challenges of speed and scale. Examples include:

  • A large retailer managing 2,000,000 transactions a second
  • An airline with 9,200 agents on 550 hosts capturing 300,000 measurements per minute and more than 3,000,000 events per minute
  • A large health insurer with 2,200 agents on 350 hosts, with 900,000 events per minute and 200,000 measures per minute

Read on to reveal five critical factors that dictate the right monitoring platform for OpenShift.

At Dynatrace, we experienced our own transformation—embracing cloud, automation, containers, microservices, and NoOps. We saw the shift early on and transitioned from delivering software through a traditional on-premise model to becoming the successful hybrid-SaaS innovator we are today. Read the “Game changing – From zero to DevOps cloud in 80 days” brief to learn more.

  • 26 releases per year
  • 5,000 cloud deployments
  • 93% reduction in production bugs
  • Hundreds of developers, no operations


Chapter 1

Hybrid, multi-cloud is the norm

Enterprises are rapidly adopting cloud infrastructure as a service (IaaS), platform as a service (PaaS), and function as a service (FaaS) to increase agility and accelerate innovation. Cloud adoption is so widespread that hybrid, multi-cloud is now the norm. According to RightScale, 81% of enterprises are executing a multi-cloud strategy.

Hybrid cloud
As enterprises migrate applications to the cloud or build new cloud native applications, they are also maintaining traditional applications and infrastructure. Over time, the balance will shift from the traditional tech stack to the new stack, but both new and old will continue to coexist and interact.

Multi-cloud
Different cloud platforms have different features and benefits, technologies, levels of abstraction, prices, and geographic footprints that make them suitable for specific services. Enterprises started with a single cloud provider but quickly embraced multiple clouds, resulting in highly distributed application and infrastructure architectures.

The result of hybrid multi-cloud is bimodal IT—the practice of building and running two distinctly different application and infrastructure environments. Enterprises need to continue to enhance and maintain existing, relatively static environments. They also need to build and run new applications and scalable, dynamic software-defined infrastructure in the cloud.

Putting traditional IT to one side for a moment to focus solely on multiple cloud platforms, the frequent result is monitoring tool proliferation, because teams operate in silos despite critical interdependencies between services running across clouds.

The challenge of multiple monitoring tools across clouds is further compounded when we bring traditional IT back into focus. And with it, the need to monitor and manage a range of existing technologies that also have service interdependencies with cloud environments.


Key consideration
Simplicity and cost saving were the drivers for early cloud adoption. But today, cloud use has evolved into complex and dynamic landscapes that incorporate multiple clouds as well as traditional on-premise technologies. Being able to seamlessly monitor the full technology stack across multiple clouds as well as traditional on-premise technology stacks is critical to automating operations–no matter how highly distributed the applications and infrastructure.

Chapter 2

Microservices and containers introduce speed

Microservices and containers are revolutionizing the way applications are built and deployed. They provide tremendous benefits in terms of speed, agility, and scale. In fact, 98% of enterprise development teams expect microservices to become their default architecture. IDC predicts that by 2022, 90% of all apps will feature microservices architectures.

Close to three in four (72%) CIOs say that monitoring containerized microservices in real-time is almost impossible. Moving to microservices running in containers makes it harder to get visibility into environments. Each container acts like a tiny server, multiplying the number of points you need to monitor. They live, scale, and die based on health and demand. As you scale your OpenShift environment from on-premise to cloud to multi-cloud, the number of dependencies and data generated increases exponentially. This makes it seem impossible to understand the system as a whole.

The traditional approach to instrumenting applications involves manual deployment of multiple agents. When environments consist of thousands of containers with orchestrated scaling, manual instrumentation becomes unfeasible and severely restricts your ability to innovate.

Key consideration
A manual approach to instrumenting, discovering, and monitoring microservices and containers will not work. For dynamic, scalable platforms like OpenShift, a fully automated approach is a requirement: for agent deployment, for continuous discovery of containers, and for monitoring the applications and services running within them.
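As a rough sketch of what that automated alternative looks like in practice, a Kubernetes/OpenShift DaemonSet schedules one agent pod onto every node, so new containers are discovered without any per-service manual instrumentation. The image, names, and mounts below are hypothetical placeholders; in a real deployment, a certified Operator would typically manage this resource for you.

```yaml
# Hypothetical DaemonSet: one monitoring agent per node, so every
# container scheduled onto the cluster is discovered automatically.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent        # placeholder name
  namespace: monitoring         # placeholder namespace
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/monitoring-agent:latest  # placeholder image
        securityContext:
          privileged: true      # host-level visibility typically requires this
        volumeMounts:
        - name: host-root
          mountPath: /mnt/root
          readOnly: true
      volumes:
      - name: host-root
        hostPath:
          path: /
```

Because the scheduler places an agent on every node—including nodes added later by autoscaling—containers that live, scale, and die are instrumented continuously rather than by hand.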

69% of CIOs say Kubernetes has resulted in too many moving parts and too much complexity for IT to manage manually.
- Dynatrace Global CIO Report 2020

Chapter 3

Not all AI is equal

Gartner predicts 30% of IT organizations that fail to adopt AI will no longer be operationally viable by 2022. As enterprises embrace hybrid, multi-cloud environments, the sheer volume of data and massive complexity created will make it impossible for humans to monitor, comprehend, and take action in a timely manner. This critical need for machines to solve data volume and speed challenges resulted in Gartner developing a new category for the industry, known as “AIOps” (or AI for IT Operations).

There is plenty of hype about AI across industries, and making sense of the market noise is difficult. To help, here are three key AI use cases to keep in mind when considering how to monitor your OpenShift platform and applications:

AI and root cause analysis
The biggest benefit of AI to monitoring is the ability to automate root cause analysis. This enables problems to be identified and resolved at speed. An AI engine that has access to more complete data (including third-party data) will provide faster contextual insights.

AI and alert storms
AI is perfectly suited to real-time monitoring and analysis of large data sets. It can provide the most probable reason for a performance issue. AI can also recognize when related anomalies occur within your environment (i.e. when thresholds are broken) to help prevent alert storms.

AI and auto-remediation
AI can be integrated into your CI/CD pipeline, deployment, and remediation processes. This will mean problems are detected instantly, and bad builds are identified earlier so you can automatically remediate or roll back to a previous state.

Many enterprises are trying to address these use cases by adding an AIOps solution to the 10-25+ monitoring tools they already have. This approach may have limited benefits, such as alert noise reduction. But it will have minimal impact on addressing root cause analysis and auto-remediation requirements as it lacks the contextual understanding of the data to draw any meaningful conclusions.
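To make the value of that contextual understanding concrete, here is a simplified illustration (all service names hypothetical, and not how any particular product implements it) of topology-aware root cause isolation: given a dependency graph and the set of services currently in an anomalous state, an engine can surface only the anomalous services whose own dependencies are healthy, the probable root causes, instead of raising an alert for every affected service.

```python
# Minimal illustration: topology-aware root cause isolation.
# Each service maps to the services it depends on (calls).
deps = {
    "frontend": ["checkout", "catalog"],
    "checkout": ["payments", "inventory"],
    "catalog": [],
    "payments": ["database"],
    "inventory": ["database"],
    "database": [],
}

def root_causes(anomalous, deps):
    """Return anomalous services none of whose dependencies are anomalous.

    Without topology, all alerts look equally important; with it,
    a storm of alerts collapses to the probable root cause(s)."""
    return {
        svc for svc in anomalous
        if not any(d in anomalous for d in deps.get(svc, []))
    }

# A slow database degrades everything upstream of it:
alerts = {"frontend", "checkout", "payments", "inventory", "database"}
print(root_causes(alerts, deps))  # {'database'}
```

Five simultaneous alerts reduce to one answer, which is only possible because the engine knows the dependency structure, not just the individual metrics.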

You will also find there are many different approaches to AI. Here are a few of the more popular ones you are likely to encounter as you move towards an AIOps strategy:

Deterministic AI
★★★★★
This gives you the ability to discover the topology of your environment and the metrics produced by all components. It works immediately and adapts to changes without having to re-learn patterns. It is also excellent at event noise reduction (alert storms), dependency detection, root cause analysis, and business impact analysis.

Machine learning AI
★★
This is a metrics-based approach. It takes time to build a data set against which it can compare previous states. Its strongest feature is event noise reduction. However, it does not offer root cause or business impact analysis.

Anomaly-based AI
★
With this form of AI, both event noise reduction and dependency detection are okay. One of the major drawbacks is that it takes a lot of time to build a metrics model that would show a correlation for root cause analysis.


Key consideration
Not all AI is created equal. Attempting to enhance existing monitoring tools with AI, such as machine learning and anomaly-based AI, will provide limited value. AI needs to be inherent in all aspects of the monitoring platform and see everything in real-time—from the topology of the architecture to dependencies and service flow. AI should also be able to ingest additional data sources for inclusion in its algorithms, rather than relying on people to correlate data via charts and graphs.

30% of IT organizations that fail to adopt AI will no longer be operationally viable by 2022.

Chapter 4

DevOps: Innovation’s soulmate

DevOps is perhaps the most critical consideration when maximizing investment in OpenShift and other cloud technologies. Implemented and executed correctly, DevOps enhances an enterprise’s ability to innovate with speed, scale, and agility. Research shows that high-performing DevOps teams have 46x more frequent code deployments and a 440x faster lead time from commit to deploy.

As enterprises scale DevOps across multiple teams, there will be hundreds or thousands of changes a day, resulting in code pushes every few minutes. CI/CD tooling helps mitigate error-prone manual tasks through automated build, test, and deployment. But bad code still has the propensity to make it into production. The complexity of highly dynamic and distributed cloud environments, along with thousands of deployments a day, will only exacerbate this risk.

As the software stakes get higher, shifting performance checks left—that is, earlier in the pipeline— to enable faster feedback loops becomes critical. Yet this is not easy to achieve with a multi-tool approach to monitoring. To be effective, a monitoring solution needs to have a holistic view of every component and every change. It also needs a contextual understanding of the impact each change has on the system as a whole.

Key consideration
To go fast and not break things, automatic performance checks must happen earlier in the pipeline. This requires a monitoring solution with tight integration into a wide range of DevOps tooling. Combined with the right AI, these integrations will also help support the move to AIOps, enabling automated remediation that will limit the business impact of bad software releases.
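As a sketch of what such an automated check could look like in practice (the metric names and thresholds here are hypothetical; real values would come from your monitoring platform and SLOs), a pipeline quality gate compares a new build's metrics against agreed limits and decides whether to promote or roll back:

```python
# Hypothetical automated quality gate: compare a new build's metrics
# against thresholds and decide whether to promote or roll back.
THRESHOLDS = {
    "error_rate_pct": 1.0,   # max acceptable error rate (hypothetical)
    "p95_latency_ms": 500,   # max acceptable 95th-percentile latency (hypothetical)
}

def evaluate_build(metrics):
    """Return ('promote', []) or ('rollback', [reasons]).

    A missing metric counts as a failure: no data means no evidence
    the build is safe to promote."""
    failures = [
        f"{name}={metrics[name]} exceeds {limit}" if name in metrics
        else f"{name} missing"
        for name, limit in THRESHOLDS.items()
        if metrics.get(name, float("inf")) > limit
    ]
    return ("rollback" if failures else "promote", failures)

decision, reasons = evaluate_build({"error_rate_pct": 3.2, "p95_latency_ms": 410})
print(decision, reasons)  # rollback ['error_rate_pct=3.2 exceeds 1.0']
```

In a real pipeline the "rollback" branch would trigger the actual remediation step, for example rolling the deployment back to its previous revision, which is what closes the loop between detection and automated remediation.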

When selecting your monitoring solution, check which DevOps tooling it integrates with and supports as well as how it will impact your ability to automate things in the future.

Chapter 5

Digital experiences matter

Enterprises are striving to accelerate innovation without putting customer experiences at risk. But it’s not just end-customer web and mobile app experiences that are at risk. Apps built on OpenShift support a much broader range of services and audiences, including:

  • Wearables, smart homes, smart cars and life-critical health devices that have rapidly developed since the consumerization of IT.
  • Corporate employees working remotely who need access to systems that are in the corporate datacenter but also cloud-based.
  • Office-based employees who rely on smart features for lighting, temperature, safety, and security that depend on machine-to-machine (M2M) communications and the Internet of Things.

The rise of the machines
Machines are used in ways we would have considered unimaginable just a few years ago. Across all industries, they are increasingly being connected to the Internet, creating a colossal communication network on a global scale.

What was simply regarded as user experience is now something altogether different. It has evolved into digital experience across end-users, employees and IoT.

Enterprise IT departments face mounting pressure to accelerate the speed of innovation. Meanwhile, people’s demands for speed, usability, and availability of applications and services continue to rise unabated. Then there is the explosion of IoT devices and the increasingly vast array of technologies involved. Managing and optimizing digital experiences alongside high frequency software release cycles and operating complex hybrid cloud environments presents a major headache.

If digital experiences are not measured then how can enterprises prioritize and react when problems occur? Are they even aware there are problems? And if experiences are quantified, is it in context to the supporting applications, services, and infrastructure that will permit rapid root cause analysis and remediation? These questions must be answered before enterprises are able to deliver the extraordinary digital customer experiences that will ensure they stay relevant and prosper.


  • Performance: mobile users abandon a site if it takes longer than 5 seconds to load
  • Impact: unhappy visitors will go to a competing site
  • Root cause: customers expect online help resolution within 5 minutes
  • Revenue: CIOs fear IoT performance problems could derail operations and significantly damage revenues

Key consideration
Enterprises need confidence that they’re delivering—or on the path to delivering—exceptional digital experiences despite increasingly complex environments. To achieve this, they require real-time monitoring and 100% visibility across all types of customer-, employee-, and machine-based experiences. Key things to look for include:

Visualizing and prioritizing impact
Can you see how specific issues or overall performance impacts every single user session or device? Are you then able to prioritize by magnitude?

Visibility from the edge to the core
Do you have a single view across your entire multi-cloud ecosystem—from the performance of users and edge devices to your applications and cloud platforms—and all in context?

A single source of truth for all
Are you able to ensure stakeholders—from IT to marketing—have access to the same data so you can avoid silos, finger pointing, and war rooms?

76% of CIOs say multi-cloud deployments make monitoring user experience difficult.
- Dynatrace CIO Complexity Report 2018
