Dynatrace news
https://www.dynatrace.com/news/

Five best practices to get the most out of customer experience analytics
https://www.dynatrace.com/news/blog/five-best-practices-to-get-the-most-out-of-customer-experience-analytics/
Wed, 27 Sep 2023


What is customer experience analytics: Fostering data-driven decision making

In today’s customer-centric business landscape, understanding customer behavior and preferences is crucial for success. Customer experience analytics is the systematic collection, integration, and analysis of data related to customer interactions and behavior with an organization and/or its products and services. The analysis of this data offers valuable insight into the overall customer experience, enabling businesses to optimize their strategies and deliver exceptional experiences.

Customer experience analytics best practices

As organizations establish or advance their customer experience analytics strategy and tools, the following five best practices can help maximize the benefits of these analytics.

1. Define clear objectives

Establish clear objectives and identify specific insights you want to gain from the data. For example, are you looking to understand customer preferences, improve satisfaction, or identify pain points in the customer journey? Defining clear objectives will guide your analysis efforts and help maintain focus on extracting the most relevant and actionable information. It will also help to gain alignment among the necessary stakeholders across executive leadership, digital, product, development, or analytics teams.

2. Capture and consolidate data from multiple sources

To get meaningful insights, it’s crucial to collect comprehensive and relevant data by capturing data from the various touchpoints and channels customers interact with. This may include digital experience monitoring, such as mobile or web real user monitoring, product analytics, website analytics, customer relationship management data, customer feedback, Net Promoter Score (NPS), and more. The data should cover both quantitative metrics (e.g., purchase history and clickthrough rates) and qualitative feedback (e.g., surveys and reviews). By gathering a range of data, organizations can develop a holistic view of customer journeys and uncover meaningful patterns and trends.

3. Use advanced analytics techniques

Customer experience analytics goes beyond basic reporting. Embrace advanced analytics techniques to unlock deeper insights. Employ segmentation to group customers based on shared characteristics, which allows you to tailor experiences and strategies to specific segments. Implement predictive modeling to forecast customer behavior and identify opportunities for personalized engagements. Embrace sentiment analysis to understand customer emotions and gauge satisfaction levels. With advanced analytics techniques, organizations can extract greater value from data and ultimately make better data-driven decisions.

4. Integrate data sources for a unified view

Customer experience analytics often involves analyzing data from multiple sources. To ensure a unified view of the customer journey, it’s important to integrate these disparate data sources. This integration allows you to connect the dots and gain a comprehensive understanding of customer behavior across touchpoints. Consider how easy it is to integrate different tools and data sources. For example, you may benefit from enriching digital experience monitoring data with insights from web analytics, or from tracking every step in a business process, regardless of the data source, to understand the end-to-end customer experience.

5. Foster a culture of data-driven decision making

To make the most of customer analytics, it’s crucial to foster a culture of data-driven decision making within your organization. Encourage cross-functional collaboration and ensure that decision makers have access to relevant insights. Train employees to interpret and use customer analytics effectively. Regularly share success stories and case studies that demonstrate the impact of data-driven decision making on customer experience. By instilling a data-driven mindset, organizations empower teams to make informed decisions that drive improvements in customer experience.

Driving decisions with data

Customer experience analytics has the potential to transform how organizations understand and optimize customer interactions. By following these five best practices (defining clear objectives, collecting comprehensive data, using advanced analytics techniques, integrating data sources, and fostering a culture of data-driven decision making), you can extract the greatest value from customer experience analytics. Embrace the power of data to gain actionable insights, enhance customer satisfaction, and drive business growth in today’s competitive landscape.

Read how loanDepot leveraged customer experience analytics with Dynatrace to deliver seamless lending journeys.

The platform engineer role: A game-changer or just hype?
https://www.dynatrace.com/news/blog/platform-engineering-pureperformance/
Thu, 21 Sep 2023


As organizations become cloud-native and their environments more complex, DevOps teams are adapting to new challenges. Site reliability engineering first emerged to address cloud computing’s new performance needs. Today, the platform engineer role is gaining speed as the newest byproduct of scaling DevOps in the emerging but complex cloud-native world. What is this new discipline, and is it a game-changer or just hype?

In a recent episode of the PurePerformance podcast, Dynatrace DevOps activist Andreas Grabner and director of sales engineering Brian Wilson sat down to discuss the platform engineer role and its impact. Joining them for the discussion was Saim Safdar, Cloud Native Computing Foundation (CNCF) ambassador and member of the CNCF TAG App Delivery Platform Working Group. They explore platform engineering’s multiple definitions, its pros and cons, and how practitioners can shape this emerging DevOps concept.

Understanding the platform engineer role

DevOps is a constantly evolving discipline. At its core, DevOps is a collaborative framework between development and operations teams whose goal is to streamline software development. Platform engineering supports this goal by providing developers with the environments, or platforms, they need to build and run applications.

Safdar views the discipline primarily as a means of lifting some responsibility off developers’ shoulders. “A platform engineer is responsible for reducing developers’ cognitive load while interacting and delivering software,” he said. “The job of the platform team is to define how the environments are built and where they run, and to make sure they’re always available in an easy way.”

The “cognitive load” refers to the additional requirements of building an application beyond the code itself. When developers begin building applications, they also must spin up infrastructure, GitOps, tooling, service meshes, and more to run those applications. Platform engineers reduce developers’ workload by providing an internal self-service offering that creates those environments automatically for them. As a result, developers have the freedom to focus on building high-quality, resilient applications.

A new way to collaborate with the platform engineer role

Platform engineering offers a new way for teams to collaborate. The software development lifecycle is a complex system with many moving parts. DevOps practices aim to break down organizational silos and improve communication between development and operations teams.

But Safdar sees additional benefits to platform engineering. “I believe the focus of platform engineering is how we simplify cloud-native computing for average developers,” he said. “I believe this is a focus of DevOps already, but the DevOps world is currently focused on collaborating.”

DevOps teams aim to produce high-quality software quickly, frequently, and securely. Traditionally, teams have achieved this by ensuring operations teams are involved in the development process, and vice versa. Platform engineering takes collaboration a step further. It is a more active approach to collaboration that understands developers’ needs and takes deliberate steps to make their jobs easier.

“No longer do developers need to submit a ticket, wait for a response, etc.,” Safdar said. “[Platform engineering] would lend to easily accessible, prepackaged environments.”

Silos can still reappear

Platform engineering’s benefits have game-changing potential for software delivery. But the new discipline has the negative potential to recreate the silos that the DevOps movement sought to break down.

The problem lies in what Safdar calls the “skill concentration trap.” When creating a platform engineering team, an organization will likely recruit its most experienced engineers in the areas the platform will cover. But this comes with risks if an engineer thinks they are more skilled than the developers. As a result, communication suffers. The developers lose the knowledge they need to run their own software, and a silo forms between them and the engineers who could migrate them to the platform.

Communication is key. “The platform needs to be treated as a product,” Grabner said. “You need to understand your users and their needs, challenges, and wants. A product is never finished.”

Another common silo results from the developer side. “Because of platform engineers and the reduced cognitive load, the developers are writing the code they need to,” Wilson said. “But are they still thinking about performance and resiliency, or are they just going to write and push code without thinking about performance because that’s now someone else’s job?”

Education and collaboration are great ways to avoid this pitfall. “Platform engineers need to treat their developers like customers they need to retain,” Grabner said.

The big picture

Digital transformation is constantly accelerating. With the increasing speed of delivery and scale of cloud environments, adaptability is critical to keeping pace with innovation. Platform engineering is the newest byproduct of DevOps evolution, and it holds much promise. With the right platform engineering team, organizations have the potential to see faster, more resilient innovation, and happier IT teams. But collaboration and communication remain more important than ever.

To listen to the full episode, check it out here.

IT carbon footprint: Dynatrace Carbon Impact and Optimization app helps organizations measure cloud computing carbon footprint
https://www.dynatrace.com/news/blog/measure-it-carbon-footprint-cloud-computing-carbon-footprint/
Thu, 21 Sep 2023


As global warming advances, growing IT carbon footprints are pushing energy-efficient computing to the top of many organizations’ priority lists. Energy efficiency is a key reason why organizations are migrating workloads from energy-intensive on-premises environments to more efficient cloud platforms. But while moving workloads to the cloud brings overall carbon emissions down, the cloud computing carbon footprint itself is growing.

“The cloud now has a greater carbon footprint than the airline industry,” wrote anthropologist Steven Gonzalez Monserrate in a 2022 article from MIT. “A single data center can consume the equivalent electricity of 50,000 homes.” The growing adoption of innovations like generative AI, based on large language models (LLMs), will only increase demand for cloud computing and, with it, carbon emissions. Research from 2020 suggests that training a single LLM generates around 300,000 kg of carbon dioxide emissions, roughly equal to 125 round-trip flights between New York and London.

Does that mean the answer is to slow the growth of AI or cloud technologies more broadly? Given the benefits of these innovations, organizations can’t afford to pull back on their efforts to build AI and shift more workloads to the cloud. However, organizations can turn to innovative solutions that improve their energy efficiency and mitigate their cloud computing carbon footprint.

How Dynatrace tracks and mitigates its own IT carbon footprint

The Dynatrace Carbon Impact app helps organizations track their IT carbon footprint so they can optimize and reduce their cloud computing carbon footprint.

Like many tech companies, Dynatrace is experiencing increased demand for its SaaS-based Dynatrace platform, which we host on cloud infrastructure. As we onboard more customers, the platform requires more infrastructure, leading to increased carbon emissions. At the same time, many existing customers are migrating from Dynatrace Managed, our on-premises solution, to our SaaS offering. These migrations add to the Dynatrace cloud computing carbon footprint as we onboard more customers’ observability and security workloads. However, since moving on-premises workloads to the cloud can lower the overall carbon footprint by 80% or more, the result is a net reduction in carbon emissions.

Nonetheless, to help mitigate climate change, it’s critically important for organizations to measure, monitor, and reduce their IT carbon footprints. Certainly, this is true for us. We also recognize that many of our customers have the same need. Many cloud service providers offer tools that measure a subscriber’s cloud computing carbon footprint when using their service. But they don’t measure the carbon footprint of the many apps and infrastructure resources running across that subscriber’s multicloud environments. They also can’t assess the IT carbon footprint of a subscriber’s on-premises apps and infrastructure. That’s why we developed Carbon Impact.

The Carbon Impact app assesses carbon emissions and energy consumption from all monitored hosts. It also provides organizations with actionable guidance for how to reduce their overall IT carbon footprint. Developed using guidance from the Sustainable Digital Infrastructure Alliance (SDIA) and expanding on formulas from the open source project Cloud Carbon Footprint, Carbon Impact measures and reports the IT carbon footprint of all Dynatrace-monitored hosts across an organization’s entire hybrid and multicloud environment in a single interface.
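For context, a simplified sketch of the Cloud Carbon Footprint methodology that Carbon Impact builds on is shown below; the exact coefficients and refinements Carbon Impact applies may differ:

cloud usage (CPU, memory, storage, network) × power coefficients (watts) × data center PUE × regional grid emissions factor ≈ operational CO2e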

The Carbon Impact dashboard shows that the Dynatrace carbon footprint is increasing with its expanding business and customer migrations.

Assessing our baseline cloud computing carbon footprint

Using Carbon Impact, we can assess our baseline carbon footprint with accuracy and granularity that’s nearly impossible to glean from other sources. The app’s advanced algorithms and real-time data analytics translate utilization metrics into their CO2 equivalent (CO2e). These metrics include CPU, memory, disk, and network I/O. This analysis provides us with a holistic view of our multicloud environment’s carbon emissions and identifies major emissions sources. As a result, this baseline measurement has become an important component of our sustainability strategy. It increases our awareness across IT and business stakeholders as we use these insights to build action plans to reduce our emissions and track the results of those efforts.

Tracking cloud computing carbon footprint by host

The ‘Hosts’ view details energy and CO2e consumption per host with filters to help narrow the focus to high-impact areas. For example, Dynatrace has been able to view underutilized instances in a specific AWS data center along with top CO2e emitters within a specific host group.

A host-level breakdown of energy consumption and CO2e impact.

Optimizing host idling and scaling to reduce IT carbon footprint

Carbon Impact automatically reports idle and underutilized instances as targets for optimization. Because Carbon Impact is integrated with Dynatrace Smartscape® topology modeling, it’s easy to drill into host and process details or open a Notebook for ad hoc analysis, giving us the insights to safely scale down or retire underutilized instances. Using these recommendations, we focused our reduction goals on instances with the highest potential impact, shifting workloads and resizing instances where appropriate.

Optimization targets include idling and scaling hosts.

Helping organizations track their IT carbon footprint to forge a greener future

At Dynatrace, we’re committed to measuring and reducing our own greenhouse gas emissions. By extension, we want to enable our customers to do the same. The Dynatrace unified observability and security platform makes this possible. Carbon Impact is an example of our contribution to making IT more energy-efficient and sustainable for everyone, even as AI is fueling the data explosion.

We built Carbon Impact using Dynatrace AppEngine, which customers and partners can also use to create additional custom, compliant, and intelligent data-driven apps. The AppEngine uses an easy, low-code approach to unlock the wealth of insights available in modern cloud ecosystems, including revealing where organizations consume their energy.

“As Dynatrace looks to the future, considering our environmental impact is more important than ever,” says Thomas Reisenbichler, VP of Site Reliability Engineering at Dynatrace. “We’re confident that Dynatrace will be able to optimize our cloud infrastructure carbon emissions by leveraging the Dynatrace platform for proactive management and orchestration, utilizing our Carbon Impact app to unlock greater optimization potential, and using green coding initiatives to improve performance.”

Using Carbon Impact, we can now implement efficiency measures driven by the app’s benchmarks and recommendations. Because it facilitates ongoing monitoring and tracks progress toward our sustainability goals, we can adjust our strategy to reduce our IT carbon footprint. By integrating sustainability into our growth strategy—and our product offering—Dynatrace is deepening its commitment to responsible business practices, transparency, and a greener future.

Carbon Impact is an important part of the Dynatrace environmental, social, and governance (ESG) strategy. To learn more about our commitment to our ESG strategy, download the Dynatrace 2023 Global Impact Report.
Already a Dynatrace customer? Download Carbon Impact and start optimizing your own cloud computing carbon footprint.

Tech Transforms podcast: Energy department CIO talks national cybersecurity strategy
https://www.dynatrace.com/news/blog/tech-transforms-podcast-episode-65/
Wed, 20 Sep 2023

On the Tech Transforms podcast, sponsored by Dynatrace, we talk to some of the most prominent influencers shaping critical government technology decisions.


The White House National Cybersecurity Strategy seeks to build a “defensible, resilient digital ecosystem where it is costlier to attack systems than defend them, where sensitive or private information is secure and protected, and where neither incidents nor errors cascade into catastrophic, systemic consequences.”

On Episode 65 of the Tech Transforms podcast, Willie Hicks and I sit down with Ann Dunkin, chief information officer of the Department of Energy (DOE), to discuss her department’s direct involvement in developing the federal cybersecurity strategy.

The principle of “security by design” plays a major role in these efforts. The DOE has designated a national lab to implement security by design and improve understanding. “They’re really focusing on hardware and software systems together,” Dunkin said. “How do you make hardware and software both secure by design?”

The DOE supports the national cybersecurity strategy’s collective defense initiatives. These initiatives recognize that federal agencies must come together to protect the U.S. government as a whole. Dunkin firmly believes these agencies cannot operate in isolation any longer. “There’s too much work we do together,” she said. “There are too many interconnections between our systems. We absolutely have to develop that collective defense.”

Learn about how Dynatrace is helping government customers deliver on essential directives of the White House cybersecurity executive order 

From national cybersecurity strategy to building secure energy systems internationally

During the episode, Dunkin also mentions the DOE’s Partnership for Transatlantic Energy and Climate Cooperation. This partnership is an international platform through which the United States, 24 European countries, and the European Union collaborate to build secure, resilient, and climate-conscious energy systems.

“Much of what we do with our European partners is modeled on work we’re doing here in the U.S.,” Dunkin said. “There’s a lot of work in DOE labs around grid resilience. They do a lot of modeling, so we can then [promote] those models, whether it’s a list of cybersecurity controls you should put in place or new technology to help you manage grid failures.”

Dunkin also highlights the two-way knowledge transfer with international partners. “This is very much a two-way street of learning from each other,” she said. “How can we learn from that and how can we help you with some of your other problems? There’s a reason it’s a partnership and not a push.”

This episode of Tech Transforms discusses the National Cybersecurity Strategy and securing a large agency like the DOE, as well as how agencies balance cybersecurity compliance and risk management.

Tune in to the full episode for more insights from Ann Dunkin.

Follow the Tech Transforms podcast

Follow Tech Transforms on Twitter, LinkedIn, Instagram, and Facebook to get the latest updates on new episodes! Listen and subscribe on our website, or your favorite podcast platform, and leave us a review!

Dynatrace announces start of assessing its Dynatrace platform through IRAP
https://www.dynatrace.com/news/blog/dynatrace-announces-start-of-irap-assessment/
Mon, 18 Sep 2023


In today’s fast-paced digital landscape, both public and private organizations need to stay focused on making sure their digital systems are secure. We’re excited to announce that Dynatrace has teamed up with Sekuro to begin assessing the Dynatrace platform through IRAP (the InfoSec Registered Assessors Program) to bring Dynatrace closer to government agencies in Australia.

Dynatrace exists to make software work perfectly. The platform unifies full-stack observability, business, and security data with continually updated topological and dependency mapping to retain data context. It then combines this contextual awareness with continuous runtime application security, AIOps, and automation to provide answers and intelligent automation from data. This enables innovators to modernize and automate cloud operations at scale, deliver software faster and more securely, and ensure flawless digital experiences.

The InfoSec Registered Assessors Program (IRAP) is a system managed by the Australian Signals Directorate (ASD) to ensure that qualified experts in cybersecurity can evaluate the security of digital systems and services. It helps ensure that these systems meet high standards of safety and compliance with established security guidelines, like the Australian Information Security Manual (ISM) and the Protective Security Policy Framework (PSPF).

Sekuro is a cyber security and digital resiliency solutions provider that helps clients take a strategic approach to cyber security risk mitigation and digital transformation, offering a range of end-to-end services and solutions across the business lifecycle. Sekuro’s IRAP assessors are endorsed by the ASD, which ensures that suitably qualified cyber security professionals can assist in navigating the Information Security Manual (ISM), the Protective Security Policy Framework (PSPF), and other Australian government guidance.

Stay tuned for more updates. Reach out to us to learn more about how Dynatrace can help your government agency deliver better, more secure, and more compliant software and services faster.

Dynatrace ranked No. 1 for the Security Operations Use Case in the 2023 Gartner Critical Capabilities for Application Performance Monitoring and Observability report
https://www.dynatrace.com/news/blog/dynatrace-ranked-no-1-for-the-security-operations-use-case-in-the-2023-gartner-critical-capabilities-for-application-performance-monitoring-and-observability-report/
Fri, 15 Sep 2023


In the 2023 Magic Quadrant for Application Performance Monitoring (APM) and Observability, Gartner has named Dynatrace a Leader and positioned it highest for Ability to Execute and furthest for Completeness of Vision. Also, Dynatrace ranked #1 across all six Use Cases in the 2023 Gartner® Critical Capabilities for APM and Observability report, including the recently introduced Security Operations Use Case (4.46/5).

Relying on traditional scan-based application security tools alone can leave the front door unlocked for attackers, because those tools lack the runtime context organizations need to prioritize risks. Observability and security solutions powered by rich data context and intelligent automation provide that runtime context and close these gaps. Development, security, and operations teams can use these tools to gain actionable insights and better defend against critical threats to cloud applications.

The growing need for the convergence of observability and security

With the increased adoption of cloud and hybrid infrastructure to support digital transformation, observability is a prerequisite for success and growth. Solutions that bring together security and observability help businesses improve customer experience by detecting anomalous application behavior, shortening incident remediation time, and forecasting critical future issues.

Solutions that bring together observability and security provide unique insights into application runtimes that security teams have traditionally lacked. Organizations that were early adopters of such solutions found them invaluable when Log4Shell was discovered in December 2021, knowing within minutes not only if they were affected but also the criticality of the breach.

However, most organizations still have an opportunity to adopt such solutions to address emerging threats and vulnerabilities. Further, endemic vulnerabilities such as Log4Shell tend to re-emerge; in fact, Log4Shell remains the most exploited application vulnerability to date. The urgency to better manage application vulnerabilities is higher than ever. In 2023, web applications are the most attacked asset, exploited in more than 60% of breaches.

How application security teams benefit from the convergence of observability and security

The convergence of observability and security can enhance an application security team’s ability to not only detect and prioritize vulnerability risks, but also effectively respond to threats. Organizations will be better positioned to improve their security posture by focusing on what matters, protecting against attacks on vulnerabilities while they are being resolved, effectively hunting for threats, and automating response to incidents.

The convergence of observability and security empowers security operations by providing a more comprehensive, real-time view of an organization’s application environment and security posture. Given the increasing velocity of software application releases, configuration changes, and integrations, the adoption of observability and security tools is vital.

With visibility across the full application stack, security operations will benefit from improved threat detection, faster incident response, a holistic view of environments, proactive threat hunting, context-rich investigations, data-driven decision making, automation and orchestration, and reduced alert fatigue. Moreover, this holistic approach enhances threat detection, incident response, and proactive security measures, ultimately strengthening an organization’s overall cybersecurity posture.

Gartner ranked Dynatrace No. 1 for Security Operations Use Case (4.46/5)

According to Gartner, “A Critical Capabilities document is a comparative analysis that scores competing products or services against a set of critical differentiators identified by Gartner. It shows you which products or services are a best fit in various use cases to provide you actionable advice on which products/services you should add to your vendor shortlists for further evaluation.”1 Not only that, Dynatrace scored highest for Use Cases across the board, including the IT Operations (4.15/5), SRE (Site Reliability Engineering)/Platform Operations (4.08/5), DevOps/AppDev (4.08/5), and Application Owner/Line of Business (4.01/5) Use Cases.

According to the Gartner report, “Application vulnerabilities are responsible for many of the high-profile breaches and intrusions that receive news coverage and are damaging to the reputation and health of the affected organizations. The trace telemetry that APM and observability solutions collect to monitor performance includes valuable security signals as well. Although implementations are nascent, the security capabilities of APM and observability tools have proved to be valuable. The Log4Shell incident in late 2021, in which a longstanding, but recently discovered, vulnerability was being widely and actively exploited, was an outstanding proving ground.”

From our perspective, Dynatrace platform differentiators such as Dynatrace Grail, OneAgent, and Smartscape enable customers to extend their observability investment with application security use cases at the flip of a switch. By leveraging Dynatrace capabilities like Runtime Vulnerability Analytics, Runtime Application Protection, AI-assisted prioritization, and AutomationEngine, customers can improve the effectiveness of their DevSecOps processes while boosting productivity. Not only that, with Security Analytics customers can execute lightning-fast queries across large volumes of observability and security data and automate response by creating data-driven workflows.

Want to learn more?

If you are a Dynatrace customer and are interested in our observability and security platform, please talk to your Dynatrace representative.

Complimentary copies of the 2023 Gartner Magic Quadrant for APM and Observability and the 2023 Gartner Critical Capabilities for APM and Observability report are available on the Dynatrace website.

If you’d like to experience the power of Dynatrace in your environment, please sign up for a free trial.

Gartner disclaimer

1Gartner Research Methodologies, “Critical Capabilities”, 11 September 2023. https://www.gartner.com/en/research/methodologies/research-methodologies-gartner-critical-capabilities/

Gartner, Magic Quadrant for Application Performance Monitoring and Observability, Gregg Siegfried, Mrudula Bangera, Matt Crossley, Padraig Byrne, 5 July 2023.

Gartner, Critical Capabilities for Application Performance Monitoring and Observability, Mrudula Bangera, Padraig Byrne, Matt Crossley, Gregg Siegfried, 10 July 2023. Of the six Use Cases identified in the Critical Capabilities report, Dynatrace was among the vendors that scored highest in these Use Cases.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and MAGIC QUADRANT is a registered trademark of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Three priorities for driving operational resilience in financial services in the U.K. using PRA SS1/21
https://www.dynatrace.com/news/blog/driving-operational-resilience-in-uk-with-ss1-21/
Thu, 14 Sep 2023


Operational resilience has become a key goal in the financial services industry, especially in the U.K. with regulations such as PRA SS1/21. That’s because over the past several years, financial services firms have become more innovative in their use of technology. Digital trends such as open banking and embedded payments and their supporting cloud-native technology stacks have driven this resourcefulness.

However, these trends bring a level of complexity that can quickly overwhelm even the savviest of teams: 67% of CIOs in financial services say their environment’s complexity is too great for humans to manage.

For financial services firms in the U.K., the issue is particularly pressing because of regulatory concerns, such as PRA SS1/21. These regulations place demands on providers to meet key requirements to ensure the operational resilience and availability of critical financial services.

The three most pertinent requirements are the need for tracking impact tolerances, business service mapping, and testing critical services.

Operational resilience priority 1: Tracking impact tolerances

Regulations such as PRA SS1/21 demand a standardized approach to logging and reporting service interruptions. One approach to standardization from regulators and the industry has been using “impact tolerances” to track downtime. An impact tolerance sets a maximum threshold for service interruption, including the following:

  • Maximum length of time for service interruption
  • Maximum volume of disrupted transactions
  • Maximum value of disrupted transactions

To ensure that teams don’t exceed their impact tolerances, financial services firms need to find ways to log the performance of their services in real time. Observability solutions such as Dynatrace can help organizations do this by automatically noting and logging service disruptions as they happen. By defining impact tolerances as service-level objectives (SLOs), teams can then track their performance relative to the threshold.

The best way to avoid exceeding impact tolerances is to act early using automated warnings. With an automated observability platform, teams can receive alerts as the system burns through a defined portion of its impact tolerance, so they can act well before reaching the threshold.
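As an illustrative sketch only, an impact tolerance such as the maximum volume of disrupted transactions could be expressed as an SLO backed by a Dynatrace Query Language (DQL) query over transaction logs. The log source, failure marker, and field names below are assumptions chosen for illustration, not a prescribed configuration:

fetch logs
| filter log.source == "logs/payment-transactions" // hypothetical log source for a critical business service
| fieldsAdd disrupted = toLong(matchesPhrase(content, "transaction failed")) // assumed failure marker in the log line
| summarize disruptedTransactions = sum(disrupted) // compare this count against the impact tolerance threshold

The same pattern could cover the duration and value dimensions by parsing transaction timestamps or amounts from the log content.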

Operational resilience priority 2: Business service mapping

Regulations like PRA SS1/21 also require financial services firms to identify team members and resources they need to deliver their services. With this chain of responsibility, they can map out the degree to which team members and resources must be committed to guarantee they don’t exceed their impact tolerances.

Application mapping and visualization technologies such as Dynatrace Smartscape® can help dramatically with this task. Dynatrace automatically generates an interactive map of an organization’s applications and services, visualizing the relationships between components and giving a clear view of all the dependencies among them.

Financial services firms can then marry this application map with the information they have on who “owns” each component. By combining this with their workflow management tools, team members can receive automatic notifications when an item they’re responsible for causes disruption. Critically, this allows financial services firms to plan ahead, allocating people and resources throughout their stack to minimize the risk of operational disruption.

Operational resilience priority 3: Testing critical systems

Once financial services firms have established their impact tolerances and assembled a business service map, regulations like PRA SS1/21 require them to test the performance of their applications. As part of this, teams must routinely test their ability to remain within impact tolerances in severe but plausible disruption scenarios and drill their recovery and response arrangements to ensure they are effective.

The Dynatrace platform supports this effort by enabling teams to conduct synthetic monitoring and testing, which simulates the severe but plausible scenarios that financial services firms should be testing for. Synthetic monitoring lets firms fine-tune the parameters of individual tests, so the permutations account for every eventuality and reveal how their systems will behave in different scenarios.

Additionally, financial services firms can use a unified observability platform such as Dynatrace to obtain code-level detail from their tests. This detail enables DevOps and security teams to gain precise information on aspects of their applications that need to be hardened to boost operational resiliency.

An investment in SS1/21 compliance

Whilst it may initially seem cumbersome, the efforts that financial services providers invest in complying with regulations surrounding operational resilience, such as SS1/21, can pay dividends. Using a unified observability platform to accelerate and automate this compliance will have a significant impact on their ability to deliver seamless digital experiences, carving out a lasting competitive advantage.

Implementing AWS well-architected pillars with automated workflows
https://www.dynatrace.com/news/blog/implementing-aws-well-architected-pillars/
Wed, 13 Sep 2023


If you use AWS cloud services to build and run your applications, you may be familiar with the AWS Well-Architected framework. This is a set of best practices and guidelines that help you design and operate reliable, secure, efficient, cost-effective, and sustainable systems in the cloud. The framework comprises six pillars: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability.

The six AWS Well-Architected pillars

But how can you ensure that your applications meet these pillars and deliver the best outcomes for your business? And how can you verify this performance consistently across a multicloud environment that also uses Microsoft Azure and Google Cloud Platform frameworks? Because Google offers its own Google Cloud Architecture Framework and Microsoft its Azure Well-Architected Framework, organizations that use a combination of these platforms triple the challenge of integrating their performance frameworks into a cohesive strategy.

This is where unified observability and Dynatrace Automations can help by leveraging causal AI and analytics to drive intelligent automation across your multicloud ecosystem. The Dynatrace platform approach to managing your cloud initiatives provides insights and answers to see not just what could go wrong but also what could go right, for example, optimizing resource utilization for greater scale and lower cost, or driving insights that increase adoption of cloud-native serverless services.

In this blog post, we’ll demonstrate how Dynatrace automation and the Dynatrace Site Reliability Guardian app can help you implement your applications according to all six AWS Well-Architected pillars by integrating them into your software development lifecycle (SDLC).

Dynatrace AutomationEngine workflows automate release validation using AWS Well-Architected pillars

With Dynatrace, you can create workflows that automate various tasks based on events, schedules or Davis problem triggers. Workflows are powered by a core platform technology of Dynatrace called the AutomationEngine. Using an interactive no/low code editor, you can create workflows or configure them as code. These workflows also utilize Davis®, the Dynatrace causal AI engine, and all your observability and security data across all platforms, in context, at scale, and in real-time.

One of the powerful workflows to leverage is continuous release validation. This process enables you to continuously evaluate software against predefined quality criteria and service level objectives (SLOs) in pre-production environments. You can also automate progressive delivery techniques such as canary releases, blue/green deployments, feature flags, and trigger rollbacks when necessary.

This workflow uses the Dynatrace Site Reliability Guardian application. The Site Reliability Guardian helps automate release validation based on SLOs and important signals that define the expected behavior of your applications in terms of availability, performance, errors, throughput, latency, and so on. The Site Reliability Guardian also helps keep your production environment safe and secure through automated change impact analysis.

But this workflow can also help you implement your applications according to each of the AWS Well-Architected pillars. Here’s an overview of how the Site Reliability Guardian can help you implement the six pillars of AWS Well-Architected.

A Dynatrace Workflow that uses Dynatrace Site Reliability Guardian to implement the six AWS well-architected pillars

AWS Well-Architected pillar #1: Performance efficiency

The performance efficiency pillar focuses on using computing resources efficiently to meet system requirements, maintaining efficiency as demand changes, and evolving technologies.

A study by Amazon found that increasing page load time by just 100 milliseconds costs 1% in sales. Storing frequently accessed data in faster storage, usually in-memory caching, improves data retrieval speed and overall system performance. Beyond efficiency, validating performance thresholds is also crucial for revenue.

Once configured, the continuous release validation workflow powered by the Site Reliability Guardian can automatically do the following:

  • Validate if service response time, process CPU/memory usage, and so on, are satisfying SLOs
  • Stop promoting the release into production if the error rate in the logs is too high
  • Notify the SRE team using communication channels
  • Create a Jira ticket or an issue on your preferred Git repository if the release is violating the set thresholds for the performance SLOs
The continuous release validation workflow powered by Dynatrace Site Reliability Guardian automatically verifies performance efficiency validation success and threshold violation cases

SLO examples for performance efficiency

The following examples show how to define an SLO for performance efficiency in the Site Reliability Guardian using Dynatrace Query Language (DQL).

Validate if response time is increasing under high load utilizing OpenTelemetry spans

fetch spans 
| filter endpoint.name == "/api/getProducts" 
| filter k8s.namespace.name == "catalog" 
| filter k8s.container.name == "product-service" 
| filter http.status_code == 200 
| summarize avg(duration) // in milliseconds
* Please note that the Traces on Grail feature is currently in private preview, and the DQL syntax is subject to change.

Check if process CPU usage is in a valid range

timeseries val = avg(dt.process.cpu.usage),
filter: in(dt.entity.process_group_instance, "PROCESS_GROUP_INSTANCE-ID")
| fields avg = arrayAvg(val) // in percentage


AWS Well-Architected pillar #2: Security

The security pillar focuses on protecting information system assets while delivering business value through risk assessment and mitigation strategies.

The continuous release validation workflow powered by Site Reliability Guardian can automatically do the following:

  • Check for vulnerabilities across all layers of your application stack in real-time, getting help from Dynatrace Davis Security Score as a validation metric
  • Block releases if they do not meet the security criteria
  • Notify the security team of the vulnerabilities in your application and create an issue/ticket to track the progress
Davis Security Score against third-party vulnerabilities

SLO examples for security

The following examples show how to define an SLO for security in the Site Reliability Guardian using DQL.

Runtime Vulnerability Analysis for a Process Group Instance – Davis Security Assessment Score

fetch events 
| filter event.kind == "SECURITY_EVENT" 
| filter event.type == "VULNERABILITY_STATE_REPORT_EVENT" 
| filter event.level == "ENTITY" 
| filter in("PROCESSGROUP_INSTANCE_ID",affected_entity.affected_processes.ids) 
| sort timestamp, direction:"descending" 
| summarize  
{  
status=takeFirst(vulnerability.resolution.status), 
score=takeFirst(vulnerability.davis_assessment.score), 
affected_processes=takeFirst(affected_entity.affected_processes.ids) 
}, 
by: {vulnerability.id, affected_entity.id} 
| filter status == "OPEN"  
| summarize maxScore=takeMax(score)


AWS Well-Architected pillar #3: Cost optimization

The cost optimization pillar focuses on avoiding unnecessary costs and on understanding and managing the tradeoffs between cost, capacity, and performance.

The continuous release validation workflow powered by the Site Reliability Guardian can automatically do the following:

  • Detect underutilized and/or overprovisioned resources in Kubernetes deployments considering the container limits and requests
  • Determine the non-Kubernetes-based applications that underutilize CPU, memory, and disk
  • Simultaneously validate if performance objectives are still in the acceptable range when you reduce the CPU, memory, and disk allocations
A graph that shows the cost optimization without affecting the application performance

SLO examples for cost optimization

The following examples show how to define an SLO for cost optimization in the Site Reliability Guardian using DQL.

Reduce CPU size and cost by checking CPU usage

To reduce CPU size and cost, check if CPU usage is below the SLO threshold. If so, test against the response time objective under the same Site Reliability Guardian. If both objectives pass, you have achieved your cost reduction on CPU size.


Here are the DQL queries from the image, which you can copy:

timeseries cpu = avg(dt.containers.cpu.usage_percent), 
filter: in(dt.containers.name, "CONTAINER-NAME") 
| fields avg = arrayAvg(cpu) // in percentage

fetch logs
| filter k8s.container.name == "CONTAINER-NAME" 
| filter k8s.namespace.name == "CONTAINER-NAMESPACE" 
| filter matchesPhrase(content, "/api/uri/path") 
| parse content, "DATA '/api/uri/path' DATA 'rt:' SPACE? FLOAT:responsetime "  
| filter isNotNull(responsetime) 
| summarize avg(responsetime) // in milliseconds

Reduce disk size and re-validate

If the SLO specified below is not met, you can try reducing the size of the disk and then validating the same objective under the performance efficiency validation pillar. If the objective under the performance efficiency pillar is achieved, it indicates successful cost reduction for the disk size.

timeseries disk_used = avg(dt.host.disk.used.percent), 
filter: in(dt.entity.host,"HOST_ID") 
| fields avg = arrayAvg(disk_used) // in percentage


AWS Well-Architected pillar #4: Reliability

The reliability pillar focuses on ensuring a system can recover from infrastructure or service disruptions and dynamically acquire computing resources to meet demand and mitigate disruptions such as misconfigurations or transient network issues.

The continuous release validation workflow powered by the Site Reliability Guardian can automatically do the following:

  • Monitor the health of your applications across hybrid multicloud environments using Synthetic Monitoring and evaluate the results depending on your SLOs
  • Proactively identify potential availability failures before they impact users on production
  • Simulate failures in your AWS workloads using Fault Injection Simulator (FIS) and test how your applications handle scenarios such as instance termination, CPU stress, or network latency. SRG validates the status of the resiliency SLOs for the experiment period.
Application availability validation across the world using Dynatrace Synthetic monitoring

SLO examples for reliability

The following examples show how to define an SLO for reliability in the Site Reliability Guardian using DQL.

Success rate – availability validation with Synthetic Monitoring

fetch logs 
| filter log.source == "logs/requests" 
| parse content,"JSON:request" 
| fieldsAdd httpRequest = request[httpRequest] 
| fieldsAdd httpStatus = httpRequest[status] 
| fieldsAdd success = toLong(httpStatus < 400) 
| summarize successRate = sum(success)/count() * 100 // in percentage

logs request result

Number of out-of-memory (OOM) kills of a container in the pod is less than 5

timeseries oom_kills = avg(dt.kubernetes.container.oom_kills), 
filter: in(k8s.cluster.name,"CLUSTER-NAME") and in(k8s.namespace.name,"NAMESPACE-NAME") and in(k8s.workload.kind,"statefulset") and in (k8s.workload.name,"cassandra-workload-1") 
| fields sum = arraySum(oom_kills) // num of oom_kills

Reliability OOM kills result

AWS Well-Architected pillar #5: Operational excellence

The operational excellence pillar focuses on running and monitoring systems to deliver business value and continually improve supporting processes and procedures.

With the continuous release validation workflow powered by the Site Reliability Guardian, you can:

  • Automatically verify service or application changes against key business metrics such as customer satisfaction score, user experience score, and Apdex rating
    Apdex rating in support of AWS well-architected pillar #5, operational excellence
  • Enhance collaboration with targeted notifications of relevant teams using the Ownership feature
  • Create an issue in your preferred Git repository to track and resolve SLOs that failed validation
  • Trigger remediation workflows based on events such as service degradations, performance bottlenecks, or security vulnerabilities
  • Validate your CI/CD performance over time, considering execution times, pipeline performance, failure rate, and other indicators of operational efficiency in your software delivery pipeline.
    Screenshot of pipeline metrics in support of AWS well-architected pillar #5, operational excellence

SLO examples for operational excellence

The following examples show how to define an SLO for operational excellence in the Site Reliability Guardian.

Apdex rating validation of a web application

  1. Navigate to “Service-level objectives” and select the “Add new SLO” button
  2. Select “User experience” as a template. It will auto-generate the metric expression as follows:
    (100)*(builtin:apps.web.actionCount.category:filter(eq("Apdex category",SATISFIED)):splitBy())/(builtin:apps.web.actionCount.category:splitBy())
  3. Replace your application name in the entityName attribute:
    type("APPLICATION"),entityName("APPLICATION-NAME")
  4. Define the success criteria based on your needs
    Success criteria for operational excellence
  5. Reference this SLO in your Site Reliability Guardian objective

AWS Well-Architected pillar #6: Sustainability

The sustainability pillar focuses on minimizing environmental impact and maximizing the social benefits of cloud computing.

The continuous release validation workflow powered by the Site Reliability Guardian can automatically do the following:

  • Measure and evaluate carbon footprint emissions associated with cloud usage
  • Leverage observability metrics to identify underutilized resources for reducing energy consumption and waste emissions
Sustainability dashboard supporting AWS well-architected pillar #6, Sustainability
The Dynatrace Carbon Impact Dashboard evaluates the carbon impact of the resources in the cloud

SLO examples for sustainability

The following examples show how to define an SLO for sustainability in the Site Reliability Guardian using DQL.

Carbon emission total of the host running the application for the last 2 hours

fetch bizevents, from: -2h 
| filter event.type == "carbon.report" 
| filter dt.entity.host == "HOST-ID" 
| summarize toDouble(sum(emissions)), alias:total // total CO2e in grams

carbon emissions results supporting the AWS well-architected pillar #6, sustainability

Under-utilized memory resource validation

timeseries memory=avg(dt.containers.memory.usage_percent), by:dt.entity.host 
| filter dt.entity.host == "HOST-ID" 
| fields avg = arrayAvg(memory) // in percentage

memory resource validation for AWS well-architected pillar #6, sustainability

Validate all six AWS Well-Architected pillars automatically

Workflows with the Site Reliability Guardian can help validate your applications against each pillar of the AWS Well-Architected Framework in your software development lifecycle. Check out the Site Reliability Guardian by installing it from the Dynatrace Hub, then share your feedback and let us know how you use it. Head over to the Dynatrace Community to see our plans for additional features. We’d love to hear your suggestions and ideas.

For more about how Site Reliability Guardian helps organizations automate change impact analysis and validate performance and service-level objectives, join us for the on-demand Observability Clinic on Site Reliability Guardian with DevSecOps activist Andreas Grabner.

The post Implementing AWS well-architected pillars with automated workflows appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/implementing-aws-well-architected-pillars/feed/ 0
Citrix monitoring with Dynatrace: Easily observe your entire Citrix ecosystem https://www.dynatrace.com/news/blog/citrix-monitoring-with-dynatrace-easily-observe-your-entire-citrix-ecosystem/ https://www.dynatrace.com/news/blog/citrix-monitoring-with-dynatrace-easily-observe-your-entire-citrix-ecosystem/#respond Wed, 13 Sep 2023 15:07:28 +0000 https://www.dynatrace.com/news/?p=59670

The Dynatrace Citrix monitoring extension can now ingest observability signals from Citrix PowerShell SDK cmdlets in addition to existing metrics related to users, sessions, and Virtual Delivery Agent (VDA). The Citrix PowerShell SDK provides access to metrics from Citrix Studio and additional metrics that aren't readily available in Citrix Studio but are commonly used by Citrix performance engineers.

The post Citrix monitoring with Dynatrace: Easily observe your entire Citrix ecosystem appeared first on Dynatrace news.

]]>

Citrix is critical infrastructure

For businesses operating in industries with strict regulations, such as healthcare, banking, or government, Citrix virtual apps and virtual desktops are essential for simplified infrastructure management, secure application delivery, and compliance requirements. Many companies rely on Citrix as a critical component of their infrastructure that demands thorough observability and integrated analytics across the entire application landscape. Automated AI-powered analytics are necessary to match the scale of monitoring these enterprises require.

When it comes to tackling this challenge, Dynatrace is the ideal solution. We gained valuable insights and expertise through years of collaboration with numerous Citrix users. Our journey began in 2019 with the introduction of the Dynatrace Citrix monitoring extension. Since then, we’ve maintained ongoing partnerships with customers, ensuring their Citrix observability requirements are met while keeping up with the latest AIOps (AI for IT Operations) developments.

Listen, learn, improve, and repeat

The latest update to the Citrix monitoring extension is now available. This update improves the ability to observe Citrix users and delivery agents within a Citrix environment using the Citrix SDK, which is designed specifically for Citrix admins. Our largest customers have already adopted the new observability signals included in this release to ensure the reliability of Citrix landscapes with thousands of VDAs.

Effortlessly monitor your Citrix environment with Dynatrace

The Citrix monitoring process now employs two methods to collect metrics and provide complete Citrix performance observability. The VDA extension, which focuses on users and sessions, was upgraded to enable the gathering of metrics for landscape health. This is achieved using the Citrix PowerShell SDK, either from a host where Citrix Studio is located or from the Delivery Controller host.

VDA characteristics: Citrix user experience

The VDA metrics collected from the extension offer valuable insights into how your end users interact with Citrix. This includes end-user performance when logging in and establishing a session, as well as response times.

Real user monitoring Citrix in Dynatrace screenshot

This approach utilizes Dynatrace Digital Experience monitoring to observe each Citrix user’s activity in detail throughout each step of the session setup process. This is accomplished by implementing Citrix recommended practices and metrics, which are well-documented.

User action analysis Citrix in Dynatrace screenshot

By adopting an outside-in perspective, Citrix admins gain insight into end-user experience and how it correlates with the Citrix system activities that admins are responsible for. This approach also assists Citrix users in comprehending the impact of Citrix on app delivery without requiring an in-depth understanding of Citrix’s inner workings or specialized monitoring tools.

By collecting landscape metrics, you get a clear picture of how your Citrix landscape is configured and prepared for your Citrix end users. You can monitor delivery groups, VDAs, catalogs, delivery controllers, broker services, and license status through easy-to-use dashboards, which can be used as a starting point. Alternatively, you can create your own dashboards and alerting profiles to ensure that your reporting aligns with your infrastructure and monitoring practices.

Citrix dashboard in Dynatrace screenshot

Citrix admins use an inside-out perspective to begin health assessments, troubleshoot issues, and plan landscape progression. Dynatrace collects various metrics, including the number of VDAs, active sessions, available desktops, and more. It also maintains topological relationships between monitored entities, such as site, group, and controller, along with any relevant tags applied by Citrix admins. This is crucial for maintaining large Citrix landscapes, as we have observed while working with customers who manage tens of thousands of Citrix VDAs across multiple sites.

Citrix properties and tags in Dynatrace screenshot

One observability platform for everyone

With the Dynatrace platform, Citrix administrators can now easily monitor the health of both their infrastructure and applications. This unified observability eliminates any confusion or blame-shifting, as everyone can use the same Dynatrace lens to analyze the entire application stack and delivery chain.

Start monitoring Citrix now

If you already use the extension, just upgrade it to get started. If you want to start monitoring, activate the extension in Dynatrace Hub.

  • Install OneAgent on all Citrix hosts
    • Infrastructure Monitoring mode is enough unless you plan to monitor Java or .NET apps that run on Citrix hosts.
  • Activate the Citrix extension in Dynatrace Hub
  • Enable VDA mode on VDAs
    • The most convenient approach is to instrument your VDA golden image. Install OneAgent on the golden image, boot it and connect to Dynatrace, activate and configure the Citrix extension using Dynatrace Hub, and then enable VDA mode.
  • Enable PowerShell SDK mode on hosts where the PowerShell SDK is installed. Typically, this is where Citrix Studio is installed.

Expand Citrix monitoring to include NetScaler or F5 BigIP

Activate NetScaler or BigIP extensions that fit your environment and benefit from complete visibility into the application delivery chain, including the network tier.

If you run Citrix, it’s most likely front-ended with a NetScaler ADC or, less commonly, with an F5 BigIP load balancer. Dynatrace collects metrics and topology information from load balancers and analyzes these critical network devices for performance and health. Visibility into load balancer performance is essential when you’re responsible for application delivery. Dynatrace brings you this visibility in context with your entire application delivery infrastructure.

Add synthetic availability tests to proactively check infrastructure health

Enable synthetic monitors to check the availability of your Citrix login from client locations.

Use Dynatrace Synthetic to test the HTTP availability of your Citrix login page. You may also use the Ping extension to check the availability of any network resources with TCP, ICMP, or UDP tests.

Does Dynatrace replace Citrix Studio and Director?

No, Citrix Studio and Director are focused on Citrix and provide application lifecycle management capabilities. Dynatrace provides infrastructure observability and user experience monitoring. Those who don’t manage Citrix components may be sufficiently served by Dynatrace. Those who manage Citrix will use Dynatrace as a common observability platform for Citrix and apps that Citrix delivers.

What’s next

Upgrade your Citrix extension to the new version and benefit from complete landscape monitoring. At Perform 2022, we showcased how our largest customers benefit from this capability. Now, it’s available to all customers.

Don’t forget to share your feedback in the Dynatrace Community.

The post Citrix monitoring with Dynatrace: Easily observe your entire Citrix ecosystem appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/citrix-monitoring-with-dynatrace-easily-observe-your-entire-citrix-ecosystem/feed/ 0
Dynatrace SaaS release notes version 1.275 https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-275/ https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-275/#respond Tue, 12 Sep 2023 11:45:18 +0000 https://www.dynatrace.com/news/?p=59667 Dynatrace SaaS Release Notes

We have released Dynatrace version 1.275. To learn what’s new, have a look at the release notes.

The post Dynatrace SaaS release notes version 1.275 appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-275/feed/ 0
Dynatrace Managed release notes version 1.274 https://www.dynatrace.com/news/blog/dynatrace-managed-release-notes-version-1-274/ https://www.dynatrace.com/news/blog/dynatrace-managed-release-notes-version-1-274/#respond Mon, 11 Sep 2023 11:38:24 +0000 https://www.dynatrace.com/news/?p=59665 Managed Release Notes

We have released Dynatrace Managed version 1.274. To learn what’s new, have a look at the release notes.

The post Dynatrace Managed release notes version 1.274 appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/dynatrace-managed-release-notes-version-1-274/feed/ 0
Extend business observability: Extract business events from online databases (Part 2) https://www.dynatrace.com/news/blog/extend-business-observability-extract-business-events-from-online-databases-part-2/ https://www.dynatrace.com/news/blog/extend-business-observability-extract-business-events-from-online-databases-part-2/#respond Fri, 08 Sep 2023 17:53:47 +0000 https://www.dynatrace.com/news/?p=59135 business observability

In part 1 of this blog series, we explored the concept of business observability, its significance, and how real-time visibility aids in making informed decisions. In part 2, we’ll show you how to retrieve business data from a database, analyze that data using dashboards and ad hoc queries, and then use a Davis analyzer to […]

The post Extend business observability: Extract business events from online databases (Part 2) appeared first on Dynatrace news.

]]>
business observability

In part 1 of this blog series, we explored the concept of business observability, its significance, and how real-time visibility aids in making informed decisions. In part 2, we’ll show you how to retrieve business data from a database, analyze that data using dashboards and ad hoc queries, and then use a Davis analyzer to predict metric behavior and detect behavioral anomalies.

Dataflow overview

business events from databases

Dynatrace ActiveGate extensions allow you to extend Dynatrace monitoring to any remote technology that exposes an interface. Dynatrace users typically use extensions to pull technical monitoring data, such as device metrics, into Dynatrace.

However, as we highlighted previously, business data can be significantly more complex than simple metrics. To accommodate this complexity, we created a new Dynatrace extension.

Create an extension to query complex business data

Creating an ActiveGate extension with the Dynatrace extension framework is easy; there’s a tutorial on using the ActiveGate Extension SDK that guides you through making an extension to monitor a demo application bundled with the SDK.

Similar to the tutorial extension, we created an extension that performs queries against databases. Notably, the SQL query is not limited to specific columns or data with specific metric values (int or float). Instead, the data can be of any type, including string, Boolean, timestamp, or duration.

There are three high-level steps to set up the database business-event stream.

  1. Create and upload the extension that connects to the database and extracts business data in any form.
  2. Configure the extension with the appropriate database credentials, query names, Dynatrace endpoint, and tokens necessary to send the business data to Grail.
  3. Once the data is received in Grail, you can explore, manipulate, and analyze the data, utilizing advanced techniques such as filtering, grouping and aggregation, calculations and transformations, time windowing, and much more. Further, you can set alerts based on predefined or auto-adaptive thresholds.
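For instance, once the extension’s events land in Grail, a first exploratory query in a Notebook might look like the following minimal sketch; the event.provider value is an assumption, so filter on whatever provider or event type your extension actually assigns.

fetch bizevents, from: -24h
| filter event.provider == "custom.sql.extension" // assumed provider name set by the extension
| summarize events = count(), by:{event.type}
| sort events desc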

Step-by-step: Set up a custom MySQL database extension

Now we’ll show you step-by-step how to create a custom MySQL database extension for querying and pushing business data to the Dynatrace business events endpoint.


Create and upload the extension

  1. Download the extension ZIP file. Don’t rename the file. This is a sample extension that connects to a MySQL database and pushes business events to Dynatrace.
  2. Unzip the ZIP file to the plugin deployment directory of your ActiveGate host (found at /opt/dynatrace/remotepluginmodule/plugindeployment/).
  3. In the Dynatrace menu, go to Settings > Monitored technologies > Custom extensions and select Upload Extension.
  4. Upload the ZIP file.
  5. Once uploaded, extract the ZIP file at the same location.
  6. Configure the information needed to query business observability data from the target database.
    There are three configuration sections, as shown below in the Dynatrace web UI.

Dynatrace extension settings SQL DB

Configuration details

Database configuration

  • Endpoint name: Any label to identify this connection. This is used for identification purposes.
  • SQL IP/Hostname: The database IP or hostname.
  • SQL Username: Username of the user who has permission to log in to the SQL server remotely and access the database.
  • SQL Password: Password for the username.
  • SQL DB: The database name.

Bizevents API and token configuration

  • Endpoint to Push Bizevents: Bizevents API that will receive the business data.
  • Client ID to generate token: Client ID used to generate OAuth token. To generate client-id, refer to our OAuth documentation.
  • Client secret to generate token: Client secret for token generation.

Define your SQL queries

  • Queryname 01: Unique name to identify the query to ensure data identification and retention within Dynatrace.
  • Query 01: SQL query to retrieve data.
  • Interval 01: Frequency in minutes for executing the configured query.
  • Add multiple queries as needed, repeating the above configuration for each query.

Define the retention period with matcher DQL and bucket assignment

Data stored in Grail can be preserved for extended periods, up to 10 years. To achieve this, we’ll create a Grail bucket specifically designed to retain data for a duration of 10 years (3,657 days).

Here is a JSON response from an API that successfully created a bucket capable of storing data for a period of up to 10 years.

JSON response from an API

After obtaining a bucket with a suitable retention period, it’s time to create a DQL matching rule that effectively filters events and directs them to the appropriate Grail bucket. This ensures that the data is retained for the correct duration while restricting access to users who are authorized for that specific bucket.
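As a rough illustration, the matcher for such a rule can be a simple DQL condition on an attribute the extension sets on each event; the attribute and value below are assumptions for this example rather than fields guaranteed by the extension.

matchesValue(event.provider, "custom.sql.extension") // route matching business events to the 10-year bucket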

DQL matching rule in Dynatrace

Analyze the data in real time using Dashboards or collaborate with colleagues using Notebooks

In the screen recording provided below, we begin by examining the business data ingested into Grail using a notebook. This initial overview provides a broad perspective of the ingested data. However, real insights emerge when we delve deeper and analyze specific events over time. As you follow along in the video, you’ll notice the ability to determine the day of the week for each transaction and visualize the data in a user-friendly bar chart.

The video below showcases a business dashboard that effectively visualizes important events, including pending withdrawals and deposits from the past hour, transaction amounts throughout the week, transaction queue status from the previous hour, and the overall transaction status.

Enhance data insights with real-time ad hoc queries

While predefined dashboards can offer comprehensive overviews, they don’t always anticipate and meet the needs of business analysts. Dynatrace Query Language (DQL) is a powerful tool for exploring your data and discovering patterns, identifying anomalies and outliers, creating statistical modeling, and more based on data stored in Dynatrace Grail. Now we’ll use a Dynatrace Notebook to execute our DQL queries.

In the query below, we’re specifically searching for pending deposit transactions greater than $8,000 that occurred between 10:00:04 AM and 12:00:00 AM on August 21, 2023. The query for pending deposit transactions within a specific time frame is useful for real-time analysis, issue investigation, performance assessment, impact assessment, and compliance/auditing purposes.
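As a minimal sketch, a query of that shape could look like the following, using a relative timeframe instead of the absolute window described above, and with assumed attribute names (transaction_type, status, amount) standing in for whatever fields your ingested events actually carry:

fetch bizevents, from: -2h
| filter transaction_type == "deposit" and status == "pending" // assumed attribute names
| filter amount > 8000
| sort timestamp desc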

Pending transactions query in Dynatrace screenshot

Proactive alerting for accumulating business transactions: Mitigating business impact

To ensure timely action and address potential bottlenecks, we can set up alerts that notify you when pending transactions accumulate within a short period. These alerts serve as early business warnings, allowing you to take necessary measures to prevent disruptions and minimize delays in transaction processing.
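For example, the number of pending transactions over a short window, which such an alert could evaluate against a threshold, might be computed with a query like this minimal sketch (the status attribute is an assumption):

fetch bizevents, from: -15m
| filter status == "pending" // assumed attribute name
| summarize pendingCount = count()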

Pending deposit custom alert in Dynatrace

In the above recording, we demonstrate an alert specifically designed to notify when there is a significant increase in pending transactions. This alert serves as a valuable tool in maintaining operational efficiency, ensuring business continuity, and delivering optimal customer experiences.

Forecast business data using a Davis analyzer

In the context of monitoring business-related data such as sales, orders, payments, withdrawals, deposits, and pending transactions, Dynatrace Davis analyzers offer valuable forecast analysis capabilities. Davis analyzers offer a broad range of general-purpose artificial intelligence and machine learning (AI/ML) functionality, such as learning and predicting time series, detecting anomalies, or identifying metric behavior changes within time series.

By utilizing a Davis analyzer, organizations can predict future trends and patterns in their payment and transaction data. This forecast analysis helps businesses anticipate customer behavior, plan for fluctuations in transaction volumes, and optimize their operations accordingly.

For example, by applying forecast analysis to payment data, businesses can identify potential cash flow issues or predict periods of high transaction activity. This type of insight enables you to proactively manage liquidity, ensure sufficient funds are available, and make informed decisions about resource allocation.

business forecasting

Conclusion

By combining proactive alerts and leveraging AI-powered insights, we can effectively manage pending transactions, optimize processes, and ensure smooth operations.

To address the business need for extracting business data from databases, we demonstrated using a custom database extension to bring the data into Dynatrace. This integration allows seamless connectivity to a variety of databases, enabling the real-time retrieval and storage of business data.

By leveraging the powerful combination of business, security, and observability, organizations gain immediate access to their critical business data without any delays or data staleness. The real-time nature of the data extraction ensures that decision-makers have up-to-date information at their fingertips, empowering them to make timely and informed decisions.

Furthermore, we showcased the flexibility and versatility of the Dynatrace platform in exploring and analyzing the extracted data. By seamlessly integrating the data into Notebooks and Dashboards, organizations can gain comprehensive insights into trends, patterns, and key performance indicators relevant to their business. This empowers data analysts and business users to delve deep into the data, uncover valuable insights, and derive actionable intelligence.

Additionally, we demonstrated the power of custom alerts in Dynatrace. By defining specific thresholds for key business KPIs, the platform can proactively monitor data and generate alerts whenever a breach or potential issue is detected. This proactive alerting capability ensures that stakeholders are promptly notified of any anomalies or deviations, enabling them to take immediate corrective actions and mitigate risks. More advanced use cases integrate with automation workflows to automate recovery actions.

Through seamless database connectivity, real-time data retrieval, exploratory capabilities, proactive alerting, and automation, organizations can enhance their overall operational efficiency, customer satisfaction, and business performance. The integration of the Dynatrace observability platform with the custom database extension provides organizations with a solution to extract, analyze, and act upon their at-rest business data, driving success in a rapidly evolving business landscape.

The post Extend business observability: Extract business events from online databases (Part 2) appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/extend-business-observability-extract-business-events-from-online-databases-part-2/feed/ 0
Extend business observability: Extract business events from online databases (Part 1) https://www.dynatrace.com/news/blog/extend-business-observability-extract-business-events-from-online-databases-1/ https://www.dynatrace.com/news/blog/extend-business-observability-extract-business-events-from-online-databases-1/#respond Fri, 08 Sep 2023 15:22:15 +0000 https://www.dynatrace.com/news/?p=59117 business observability

Business leaders benefit from in-the-moment business insights. They frequently articulate the need for real-time visibility into business data to support agile business decisions. But existing business intelligence (BI) tools often lack the broad context, ease of data access, and real-time insights needed to understand and improve customer experience and complex business processes. The key challenges […]

The post Extend business observability: Extract business events from online databases (Part 1) appeared first on Dynatrace news.

]]>
business observability

Business leaders benefit from in-the-moment business insights. They frequently articulate the need for real-time visibility into business data to support agile business decisions. But existing business intelligence (BI) tools often lack the broad context, ease of data access, and real-time insights needed to understand and improve customer experience and complex business processes.

The key challenges include:

  • Business data is often difficult to access, resulting in fragile data pipelines.
  • Data is not delivered in real time; it’s often delayed by weeks or longer.
  • Business data often lacks IT context, which prevents effective BizOps collaboration.

Dynatrace business events address these systemic problems, delivering real-time business observability to business and IT teams with the precision and context required to support data-driven decisions and improve business outcomes.

Dynatrace business events provide precise, real-time business metrics that support fine-grained business decisions and auditable business reporting. They offer lossless access to hard-to-reach business data embedded in in-flight application payloads, ensuring that valuable information is not missed. Additionally, Dynatrace business events enable organizations to explore and analyze large, long-term data sets without pre-indexing, which allows for flexible and comprehensive data analysis.

Extend business observability to data at rest

In our past blog post about business agility, we looked at a retail sales use case example to investigate potential causes of underperforming store locations. We also looked at a pizza chain example, connecting each customer order to the fulfillment process milestones that followed, including the handoff to the delivery agent.

In both examples, we used Dynatrace OneAgent® deep payload inspection to capture business data in motion. There are also many cases where business data—transactional, inventory, or financial—is at rest or in use, stored in a database. For comprehensive business observability, you need access to this data in real time. This can be accomplished using Dynatrace extensions. Dynatrace extensions can easily query data from various databases and store the results in Grail™, the Dynatrace data lakehouse. Once the data is in Grail, it can be transformed, queried, reported to dashboards, and more.

Business data is more than metrics

Dynatrace Extensions enable the expansion of Dynatrace monitoring to encompass any technology that provides an interface. For instance, the SQL datasource facilitates universal database queries across commonly used databases, subsequently transmitting the results to Dynatrace in the form of metrics or logs.

However, in the real world, business-related data isn’t limited to metrics. Business data should be viewed through a different lens, storing it separately while preserving the unique characteristics that enable business observability:

  • Certain business data, such as product names, customer details, sentiments, order dates, payment methods, and more, are not simple metrics. Instead, they can consist of various data types: strings, integers, float, timestamps, and combinations of values.
  • Such business data can’t reside in traditional databases or data warehouses; it needs to be in a data lakehouse that can unify and contextually analyze observability, security, and business data.
  • Metrics lack the contextual information to automatically trigger actions such as targeted outreach to impacted customers or automations to remediate process anomalies. Business events, however, capture specific occurrences or actions, allowing organizations to understand triggers, respond promptly, and foster collaboration among teams for improved customer experiences and business outcomes.

To get past the basic metric limitations, we created a custom extension to extract business data from existing databases and store it in Grail. Here’s a peek at the approach:

extension diagram

Business observability

Business observability refers to gaining insights into a business’s operation, performance, and behavior in real time. It involves collecting and analyzing data from various sources within an organization, such as IT systems, applications, customer interactions, and business processes, to gain a comprehensive view of how the business is functioning. An effective business observability solution should make it easy to ingest business data from any source, including databases.

Similar to the concept of observability in IT systems and applications, business observability focuses on capturing data at different layers of the business and making it easily accessible and understandable for analysis and decision-making. It goes beyond traditional business intelligence by providing real-time, granular, and contextual data that enables organizations to identify patterns, trends, anomalies, and correlations across different business dimensions.

Illustrating the value of business observability

Business observability helps you understand and evaluate the performance and effectiveness of systems in achieving their intended business goals. While observing individual requests is essential for performance engineering purposes, taking a business lens perspective provides deeper insights into the actual value delivered by the underlying system.

For example, consider an e-commerce website aiming to maximize sales. By implementing business observability, you can analyze conversion rates, sales patterns, and order fulfillment times. This enables you to identify bottlenecks, optimize user experiences, and make data-driven decisions to improve sales performance.
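As a minimal sketch, if page visits and completed orders are both ingested as business events, a conversion-rate query might look like the following; the event types are assumptions for illustration only.

fetch bizevents, from: -24h
| summarize orders = countIf(event.type == "com.example.order.completed"), visits = countIf(event.type == "com.example.page.visit")
| fields conversionRate = 100.0 * orders / visits // in percentage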

Similarly, in the case of a ride-sharing app, business observability allows you to monitor metrics like ride acceptance rates, driver and rider satisfaction, and average wait times. By analyzing these business-oriented indicators, you can optimize an app’s algorithms, allocate resources effectively, and enhance the overall experience for both riders and drivers.

For an insurance provider, business observability provides insights into key metrics such as policy sign-ups, claim processing times, and customer satisfaction levels. By closely monitoring these business-focused metrics, you can identify areas for improvement, streamline processes, and deliver better service to your customers.

Business observability not only ensures that systems perform well technically, it also ensures that systems are aligned with their intended business objectives. By gaining visibility into the business value delivered by these systems, you can make informed decisions, optimize performance, and ultimately achieve your business goals more effectively.

In part two of this blog series, you’ll see how we approached the Database Business Events Stream solution. We’ll cover using Notebooks for analysis, setting up alerts for critical business thresholds, and how to harness a Davis analyzer for predictive analytics.

The post Extend business observability: Extract business events from online databases (Part 1) appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/extend-business-observability-extract-business-events-from-online-databases-1/feed/ 0
What is predictive AI? How this data-driven technique gives foresight to IT teams https://www.dynatrace.com/news/blog/what-is-predictive-ai-how-this-data-driven-technique-gives-foresight-to-it-teams/ https://www.dynatrace.com/news/blog/what-is-predictive-ai-how-this-data-driven-technique-gives-foresight-to-it-teams/#respond Tue, 05 Sep 2023 16:37:38 +0000 https://www.dynatrace.com/news/?p=59522 predictive capacity management

Predictive AI uses statistical algorithms and other advanced machine learning techniques to anticipate what might happen next in a system. By analyzing patterns and trends, predictive analytics enables teams to take proactive actions to prevent problems or capitalize on opportunity.

The post What is predictive AI? How this data-driven technique gives foresight to IT teams appeared first on Dynatrace news.

]]>
predictive capacity management

Technology and operations teams work to ensure that applications and digital systems work seamlessly and securely. They handle complex infrastructure, maintain service availability, and respond swiftly to incidents.

But when these teams work in largely manual ways, they don’t have time for innovation and strategic projects that might deliver greater value. Therefore, the integration of predictive artificial intelligence (AI) in the workflows of these teams has become essential to meet service-level objectives, collaborate effectively, and boost productivity.

What is predictive AI?

Predictive AI uses statistical algorithms and other advanced machine learning techniques to anticipate what might happen next in a system.

Predictive AI uses machine learning, data analysis, statistical models, and AI methods to predict anomalies, identify patterns, and create forecasts. By analyzing patterns and trends, predictive analytics helps identify potential issues or opportunities, enabling proactive actions to prevent problems or capitalize on advantageous situations.

When predictive AI is combined with a data lakehouse like Dynatrace Grail, it can deliver value by automatically providing prescriptive insights that span everything from the digital user experience layer to the infrastructure layer, with full data context from supporting information such as relationships, dependencies, and other context within entities and events. While investigative techniques such as root-cause analysis are essential for teams striving to understand issues that have already occurred, predictive AI techniques such as forecasting and anomaly prediction help teams preempt issues. With the advances in causal AI (that is, AI that can explain cause and effect by identifying root-cause issues), teams want to take it to the next level and combine it with predictive AI to create a seamless foresight-to-hindsight continuum of data-driven answers and prescriptive insights.

The importance of predictive AI for ITOps, DevSecOps, and SRE teams

  1. Early detection of anomalies. Predictive AI empowers site reliability engineers (SREs) and DevOps engineers to detect anomalies and irregular patterns in their systems long before they escalate into critical incidents. By identifying subtle deviations in system behavior, engineers can take preemptive measures to avert potential downtime, performance issues, or security threats.
  2. Proactive resource allocation. Through predictive analytics, SREs and DevOps engineers can accurately forecast resource needs based on historical data. This enables efficient resource allocation, avoiding unnecessary expenses and ensuring optimal performance.
  3. Capacity planning. Understanding future capacity requirements is crucial for maintaining system stability. Predictive AI assists engineers in predicting demand fluctuations and adjusting resource capacities accordingly, ensuring seamless user experiences.
  4. Enhanced incident response. Predictive analytics can anticipate potential failures and security breaches. SREs and DevOps engineers can implement targeted remediation strategies and prioritize incident response efforts to minimize the impact on systems and users.
  5. Continuous improvement. By analyzing past incidents and performance metrics, predictive analytics helps SREs and DevOps engineers identify areas for improvement. This data-driven approach fosters continuous refinement of processes and systems.

Predictive AI-based capacity management and automation

Proactive capacity management is essential for avoiding outages and ensuring that an organization’s applications and services are always available. Operators need to closely observe business-critical resource capacities such as storage, CPU, and memory to avoid outages that are driven by resource shortages. However, traditional capacity management approaches are often reactive and time-consuming. Using Dynatrace Grail and Davis AI, predictive capacity management is straightforward:

  • use Notebooks to explore important capacity indicators (see the query sketch after this list);
  • create workflows to trigger forecast reporting at regular intervals; and
  • use Davis AI for Workflows to automate the prediction and remediation of future capacity demands.
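For example, a disk-capacity indicator explored in a Notebook, and later handed to a forecast workflow, might start from a query like this minimal sketch; the metric key and the per-host split are one reasonable choice, so adapt them to the capacity indicator you care about.

timeseries disk_used = avg(dt.host.disk.used.percent), by:dt.entity.host
| fields avg = arrayAvg(disk_used) // average disk usage per host, in percentage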

Predictive capacity management is a powerful tool that can help improve the availability and performance of applications and services. By using Dynatrace Grail and Davis AI, you can gain the insights you need to make proactive decisions about capacity planning and gain additional benefits:

  • Increased visibility into future capacity demands. Predictive capacity management can help you to anticipate what your future capacity demands will likely be. This provides organizations with the ability to make proactive decisions about capacity planning, such as adding additional resources or scaling back resources that are not being used.
  • Improved decision making for capacity planning. With predictive capacity management, you can make more informed decisions about capacity planning. This is because you have a better understanding of your future capacity demands and the impact of those demands on applications and services.
  • Reduced costs associated with unplanned capacity increases. Unplanned capacity increases are costly. Organizations may need to purchase additional resources or pay for overtime. Predictive capacity management can reduce these costs by enabling organizations to plan for future capacity demands.
  • Increased customer satisfaction. When your applications and services are available and performing well, your customers are happy. Predictive capacity management can help you to improve customer satisfaction by reducing the number of outages and performance problems.

This is just one example of predictive AI in action. In fact, there are numerous use cases in which ITOps, DevSecOps, and SRE teams gain foresight into issues before they escalate into costly problems and can address them preemptively. They see improved efficiency, reduced risks of security breaches, and better compliance with industry regulations.

The post What is predictive AI? How this data-driven technique gives foresight to IT teams appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/what-is-predictive-ai-how-this-data-driven-technique-gives-foresight-to-it-teams/feed/ 0
OneAgent release notes version 1.273 https://www.dynatrace.com/news/blog/oneagent-release-notes-version-1-273/ https://www.dynatrace.com/news/blog/oneagent-release-notes-version-1-273/#respond Tue, 05 Sep 2023 16:33:59 +0000 https://www.dynatrace.com/news/?p=59619 OneAgent Product News

We released Dynatrace OneAgent and ActiveGate version 1.273. To learn what’s new, have a look at:

  • OneAgent release notes
  • ActiveGate release notes

The post OneAgent release notes version 1.273 appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/oneagent-release-notes-version-1-273/feed/ 0
Cloud observability delivers on business value https://www.dynatrace.com/news/blog/cloud-observability-delivers-on-business-value/ https://www.dynatrace.com/news/blog/cloud-observability-delivers-on-business-value/#respond Tue, 05 Sep 2023 15:40:21 +0000 https://www.dynatrace.com/news/?p=59525

Cloud observability enables organizations to deliver business value by reducing costs, minimizing IT incidents, and providing better user experiences, as CEO Rick McConnell outlined at the recent Innovate conference.

The post Cloud observability delivers on business value appeared first on Dynatrace news.

]]>
Cloud observability can bring business value, said Rick McConnell, CEO at Dynatrace.

Organizations have clearly experienced growth, agility, and innovation as they move to cloud computing architecture. But without effective cloud observability, they continue to experience challenges in their cloud environments.

Organizations face cloud complexity, data explosion, and a pronounced lack of ability to manage their cloud environments effectively. They need solutions such as cloud observability—the ability to measure a system’s current state based on the data it generates—to help tame cloud complexity and better manage applications, infrastructure, and data within their IT landscapes.

As a result, many IT teams have turned to cloud observability platforms to reduce blind spots in their cloud architecture, resolve problems rapidly, and deliver better customer experiences. Ultimately, cloud observability helps organizations to develop and run “software that works perfectly,” said Dynatrace CEO Rick McConnell during a keynote at the company’s Innovate conference in São Paulo in late August.

Increasingly discriminating users require software to work perfectly, McConnell noted, or these customers may defect to other brands. Indeed, some research indicates that 54% of customers will defect after just one poor experience with a company.

“As users, we expect this,” McConnell said during the conference keynote. “Whether we’re buying something online or scheduling travel or making a bank transfer…. [These applications] have to work perfectly.”

Ultimately, McConnell noted, effective cloud observability needs to deliver business value to organizations by wrangling cloud complexity and enabling users.

“We [at Dynatrace] like to think we make order out of chaos,” McConnell said, “to provide flawless and secure digital interactions. Then we can achieve this objective of delivering software that works perfectly.”

Data explosion and cloud complexity brings cloud management challenges

McConnell noted that rising interest rates and soaring costs have created a backdrop in which organizations need to do more with less. At the same time, the scale of the cloud environments that need to be managed is exploding.

According to a recent Forbes article, Internet users are creating 2.5 quintillion bytes of data each day. “How do we manage our environments when it creates more workloads and more complexity?” McConnell said.

“Workloads are exploding, more apps, more infrastructure, more to manage,” McConnell said. “We can’t do it the way we have always done it. … That’s why we have to transform our businesses.”

McConnell also noted that while cloud platforms have brought velocity to organizations’ efforts to grow and innovate, cloud-native environments necessarily invite complexity that requires management and monitoring.

“[Cloud] services have enabled us to deliver more faster, but it has [also] resulted in fragmented tools and challenging customer experiences,” he said. “It’s harder to keep cloud workloads up and running than the olden days.”

A modern cloud observability platform can address the growing need for organizations to do more with less as cloud complexity and data volumes increase.

McConnell carved out several cloud observability trends to watch that can help organizations manage complexity, reduce costs, innovate, and secure their environments.

  1. Cloud modernization. Cloud platforms continue to deliver massive value. According to data cited by McConnell, Amazon Web Services, Microsoft Azure, and Google Cloud Platform grew in the last quarter, ending in June [2023], and jointly delivered almost $50 billion. “That’s a growth by 2x over two years,” McConnell noted. “We are all using and deploying and employing more cloud services every day.”
  2. Unified observability. McConnell noted that effective, unified observability delivers precise answers on activity in cloud environments, not just dashboards that display red, green, and yellow alerts with little analysis of what exactly has gone wrong. “You have to know precisely what is going on in your environment to be able to troubleshoot rapidly,” he said. Further, cloud observability should help teams proactively address problems before they affect users. “You want to know how to predict and resolve issues before they occur.”
  3. Realizing business value through cloud observability. McConnell noted that organizations are awakening to the full potential of using cloud observability: realizing business value.

There have been several axes on which organizations can realize business value:

  • Cost savings
  • Improved software uptime
  • Reduced troubleshooting
  • Better predictive capabilities
  • Deeper integration into code

Why Dynatrace cloud observability is different

Dynatrace features several differentiators that set its observability platform apart in realizing business value.

1. Dynatrace Grail. As McConnell noted, Dynatrace Grail is a massively parallel processing data lakehouse that enables teams to ingest and store large volumes of data in context and without up-front manual work.

“You can do a better job of parsing [data], analyzing it, and using it to deliver answers. Grail brings all of this together,” McConnell noted. “We maintain data in context.”

2. Hypermodal AI. Hypermodal AI combines three forms of artificial intelligence: predictive AI, causal AI, and generative AI. Let’s look at these three forms of AI:

  • Causal AI is an artificial intelligence technique that uses fault-tree analysis to determine the exact underlying causes and effects of events or behavior.
  • Predictive AI uses machine learning (ML) and statistical methods to recommend future actions based on data from the past.
  • Generative AI is artificial intelligence capable of generating code, text, and other types of output using trained, generative models.

McConnell noted that Dynatrace AI is qualitatively different from those of other companies because it brings these forms of artificial intelligence together. The combination is synergistic. Further, while generative AI is a productivity tool—and a key focus of the technology community today—its data outputs are only as good as the underlying inputs provided by causal AI.

This is why causal AI is so critical. With causal AI, an observability platform can identify the precise source of problems rather than simply ingest data that indicates correlations. With causal AI, teams can identify the precise root cause of an issue and all the entities that are affected, as well as display the relationships among these entities in Smartscape’s topological map.

Further, predictive AI, combined with causal AI, enables teams to deploy machine learning on top of causal AI to predict what might happen in the future based on statistics, enabling teams to avoid issues before they occur or to auto-remediate these problems through software code.

3. Automation. McConnell noted that, ultimately, cloud observability helps organizations move from manual, time-consuming, costly effort to automated action. “Automation is the ultimate game changer,” McConnell said. “You must enable business process automation to improve that environment I described earlier of that NOC (Network Operations Center) of manual engagement. We continue to invest in capabilities to achieve that.”

Realizing business value with cloud observability

Dynatrace customers have reaped this kind of business value using the cloud observability platform.

Consider a financial planning company that needed to better manage its performance. It had the burden of a legacy environment. After it moved to a cloud-native environment, its teams were able to move faster and develop software more rapidly. But that velocity also resulted in an increase in incidents. McConnell noted that today, using Dynatrace observability, the company anticipates a cost savings of $1.5 million in the first three months of deploying the platform.

Second, a U.K.-based telecommunications company reduced incidents by 50%, reduced mean time to repair by 90%, and expects to save £28 million over three years. “This is what we’re trying to deliver,” McConnell noted about the telecom’s results. “Not just better user experience, but true economic value.”

Finally, organizations have been able to use the Dynatrace observability and application security platform to avoid the costly losses wrought by Log4Shell, a zero-day vulnerability that emerged in late 2021. For one software development and cloud services company, Dynatrace application security automatically identified and prioritized vulnerabilities that required remediation. As a result, this company experienced a significant improvement in its ability to immediately address vulnerabilities for itself and its clients.

Dynatrace cloud observability helps organizations move beyond break-fix. With true cloud observability, they can reduce costs, reduce incidents, and automate formerly manual tasks. That frees organizations to grow, innovate, and become strategic as they operate in dynamically changing and uncertain business environments.

The post Cloud observability delivers on business value appeared first on Dynatrace news.

]]>
https://www.dynatrace.com/news/blog/cloud-observability-delivers-on-business-value/feed/ 0
Versatile observability for relational databases https://www.dynatrace.com/news/blog/versatile-observability-for-relational-databases/ https://www.dynatrace.com/news/blog/versatile-observability-for-relational-databases/#respond Tue, 05 Sep 2023 14:07:25 +0000 https://www.dynatrace.com/news/?p=59552 observability for relational databases

Dynatrace has expanded its database monitoring features by introducing extensions for relational databases such as IBM DB2, SAP HanaDB, MySQL, PostgreSQL, and Snowflake. These extensions come with specialized domain expertise and handpicked metrics, significantly enhancing database observability. They offer query-level visibility, in-depth custom metrics, and log analysis that help you pinpoint issues in server load.

The post Versatile observability for relational databases appeared first on Dynatrace news.

]]>
observability for relational databases

Running databases efficiently is crucial for business success

Monitoring databases is essential in large IT environments to prevent potential issues from becoming major problems that result in data loss or downtime. Additionally, monitoring allows for proactive maintenance and optimization, leading to improved system performance and user experience.

Many environments rely on relational databases due to their structured format, which consists of tables, columns, and rows. This makes them ideal for managing structured data. However, horizontal scaling of these databases can take time and effort.

Database monitoring with topology context

With Dynatrace, you can easily monitor the performance of your database layer, even in complex environments.

See all detected databases in Dynatrace
See all detected databases

All databases running on your server instances are autodetected, so you can easily check your database performance statistics and settings.

Database monitoring details in Dynatrace screenshot
Drill down into further details

You can review the availability and performance of high availability replicas and AlwaysOn groups in SQL Server. An extension-built topology model allows you to easily navigate between all the entities that make up your database servers or cluster architecture and review specific statistics in their proper context.

Track your database performance, regardless of the vendor

We’ve expanded our database monitoring capabilities with new extensions for relational databases, including:

  • IBM DB2
  • SAP HanaDB
  • MySQL
  • PostgreSQL
  • Snowflake

By using these extensions, you can gain access to domain expertise and carefully selected metrics that will improve the observability of your databases. They enable you to monitor the performance of your database layer, even in complex environments.

Customize database monitoring to fit your needs

In short, our solution provides a comprehensive view of your database infrastructure, enabling all stakeholders to work together seamlessly to resolve issues. With our extension framework, you can access the most relevant metrics and logs, gaining deeper insights into your database’s performance. By leveraging query-level visibility and custom metric-event tracking, you can identify areas that require improvement, ensuring your business operates smoothly.

Start monitoring your databases

To begin monitoring your critical databases, follow the link listed above that relates to your particular database type. Then, use the Extension Activation Wizard to activate your Dynatrace database extension.

What’s next?

Currently, we’re working on additional extensions and capabilities to further improve database observability, including JMX-based connection pool monitoring, NoSQL and in-memory database extensions, improved database entity monitoring, and query plan analysis.

The post Versatile observability for relational databases appeared first on Dynatrace news.

Dynatrace SaaS release notes version 1.274 https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-274/ https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-274/#respond Wed, 30 Aug 2023 15:13:41 +0000 https://www.dynatrace.com/news/?p=59476 Dynatrace SaaS Release Notes

We have released Dynatrace version 1.274. To learn what’s new, have a look at the release notes.

The post Dynatrace SaaS release notes version 1.274 appeared first on Dynatrace news.

Dynatrace and Google unleash cloud-native observability for GKE Autopilot https://www.dynatrace.com/news/blog/dynatrace-and-google-unleash-cloud-native-observability-for-gke-autopilot/ https://www.dynatrace.com/news/blog/dynatrace-and-google-unleash-cloud-native-observability-for-gke-autopilot/#respond Wed, 30 Aug 2023 13:00:17 +0000 https://www.dynatrace.com/news/?p=59442

Cloud-native observability for Google’s fully managed GKE Autopilot clusters demands new methods of gathering metrics, traces, and logs for workloads, pods, and containers to enable better accessibility for operations teams.

Managed Kubernetes clusters on GKE Autopilot have gained unprecedented momentum among enterprises. GKE Autopilot empowers organizations to invest in creating elegant digital experiences for their customers in lieu of expensive infrastructure management. This increased agility requires ways of collecting and analyzing observability signals such as metrics, logs, and traces. Dynatrace’s collaboration with Google addresses these needs by providing simple, scalable, and innovative data acquisition for comprehensive analysis and troubleshooting.

Thanks to the collaboration between Dynatrace and Google, customers can now unlock cloud-native Dynatrace deployments on GKE Autopilot and take full advantage of Dynatrace’s AI-powered, context-aware observability platform.

The challenge of data acquisition in GKE Autopilot clusters

Setting up cloud-native observability in managed GKE Autopilot clusters has its challenges, primarily because Kubernetes nodes and infrastructure are abstracted. While fully managed Kubernetes solutions, such as GKE Autopilot, offer considerable benefits, they also shift control from Kubernetes operations teams to cloud vendors, affecting core observability infrastructure.

To leverage the best of GKE Autopilot and cloud-native observability, Dynatrace and Google focused especially on Dynatrace’s innovative use of Container Storage Interface (CSI) pods. These CSI pods provide a unique way of solving a handful of infrastructure problems.

  • Agent log security. The CSI pod is mounted to application pods using an overlay file system. Dynatrace OneAgent logs are isolated per container to reduce each container’s attack surface.
  • Instant instrumentation. The CSI pod offers a prepared file system, mounted automatically, that provides unpacked OneAgent binaries to every application pod. This means application pods instrumented with Dynatrace start instantly.
  • Minimal disk consumption. The CSI pod provides the same set of agent binaries to all application pods without consuming space on their ephemeral or persistent disks.

This solution by Dynatrace and Google means businesses on GKE can rapidly and securely deploy cloud-native observability without risking application pod startup times. Every enterprise using GKE Autopilot can combine the advantages of Google’s managed Kubernetes infrastructure with Dynatrace’s world-class observability platform.

How GKE Autopilot works

Deploy the Dynatrace Operator on GKE Autopilot

Getting started with Dynatrace on GKE Autopilot takes only a few minutes. First, we create a small Kubernetes cluster in the Google Cloud Console.

How to create a cluster in the Google Cloud Console

We use the Dynatrace Operator Helm chart to deploy Dynatrace Kubernetes Application observability as described in the documentation.
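For orientation, here is a rough sketch, rendered from Python purely for illustration, of what a minimal DynaKube custom resource for cloud-native injection might look like. The field names are approximations of the Dynatrace Operator’s CRD and the tenant URL is a placeholder; always generate the actual manifest from the documentation or the in-product deployment wizard.

```python
# A minimal sketch (not the official manifest) of what a DynaKube custom resource
# for cloud-native full-stack injection might look like. Field names here are
# approximations of the Dynatrace Operator CRD -- generate the real manifest
# from the Dynatrace documentation or the in-product deployment wizard.
import yaml  # pip install pyyaml

dynakube = {
    "apiVersion": "dynatrace.com/v1beta1",
    "kind": "DynaKube",
    "metadata": {"name": "gke-autopilot-demo", "namespace": "dynatrace"},
    "spec": {
        # Placeholder environment URL -- replace with your own tenant.
        "apiUrl": "https://abc12345.live.dynatrace.com/api",
        # Cloud-native full-stack mode injects OneAgent into application pods
        # via the CSI-provided file system described above.
        "oneAgent": {"cloudNativeFullStack": {}},
    },
}

print(yaml.safe_dump(dynakube, sort_keys=False))
```

The operator itself is installed first via the Helm chart; a manifest along these lines would then be applied to the cluster with kubectl.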

After deploying Dynatrace to GKE Autopilot, application pods are fully observable with out-of-the-box Kubernetes dashboards, the full power of Davis for anomaly detection and causal correlation, world-class distributed tracing, memory and CPU profiling, and powerful deep code-level insights using method hotspots.

The following sections highlight just a few of these features, including the Kubernetes Dashboard, a Workloads page with Davis resource utilization and failure correlations, and a Method Hotspot view for exploring these failures more deeply.

Kubernetes Workload Dashboard on GKE Autopilot

This dashboard displays all workloads deployed to a GKE Autopilot cluster. It also shows CPU throttling on the Deliveries workload, which indicates a potential problem to explore in the workloads view.

Kubernetes Workload Dashboard on GKE Autopilot in Dynatrace

GKE Autopilot Kubernetes Workload view

This built-in Dynatrace screen shows resource utilization, throughput, related pods, Kubernetes Services, microservices, logs, and events. Here we asked Davis, the Dynatrace AI engine, to correlate CPU usage against other signals. In this case, Davis found that a Java Spring Micrometer metric, Failed Deliveries, is highly correlated with CPU spikes.

GKE Autopilot Kubernetes Workload view in Dynatrace
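Davis performs this correlation automatically and in context, but to make the underlying idea concrete, here is a small, self-contained illustration that computes the correlation between two time-aligned series, say, CPU usage and a failed-deliveries count. The numbers are invented for the example; this is not how Davis is implemented.

```python
from math import sqrt

# Illustrative only: Davis correlates signals automatically inside Dynatrace.
# Here we just show the basic statistical idea on two made-up, time-aligned series.
cpu_usage = [0.21, 0.25, 0.24, 0.62, 0.71, 0.30, 0.28, 0.69]    # CPU cores used per interval
failed_deliveries = [1, 2, 1, 14, 18, 3, 2, 16]                 # failure count per interval

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"correlation: {pearson(cpu_usage, failed_deliveries):.2f}")  # close to 1.0 => strongly correlated
```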

Service Insights via Method Hotspots

The Method Hotspot view is typically accessed from a distributed trace or unified service analysis screen. This example shows the specific method causing performance problems and delivery failures.

Service Insights via Method Hotspots in Dynatrace

Cloud-native observability for Kubernetes with Dynatrace and GKE Autopilot

The Dynatrace-Google partnership to provide unparalleled observability on GKE Autopilot exemplifies how technology can evolve to meet the observability demands of fully managed Kubernetes infrastructure. By providing managed Kubernetes clusters with innovative observability data collection at scale, customers can focus more on their digital transformation journeys.

To learn more about how to start using the new GKE Autopilot integration, view our documentation. For a free 15-day Dynatrace trial, you can sign up here.

The post Dynatrace and Google unleash cloud-native observability for GKE Autopilot appeared first on Dynatrace news.

Tech Transforms podcast: How to ensure both zero trust and positive user experiences at federal agencies https://www.dynatrace.com/news/blog/tech-transforms-zero-trust-and-user-experiences/ https://www.dynatrace.com/news/blog/tech-transforms-zero-trust-and-user-experiences/#respond Wed, 30 Aug 2023 12:00:08 +0000 https://www.dynatrace.com/news/?p=59423 Tech Transforms Podcast

On the Tech Transforms podcast, sponsored by Dynatrace, we talk to some of the most prominent influencers shaping critical government technology decisions.

The post Tech Transforms podcast: How to ensure both zero trust and positive user experiences at federal agencies appeared first on Dynatrace news.


Can zero trust and positive user experiences coexist within the government?

On the one hand, the United States has mandates such as the White House Executive Order (EO) 14028 on “Improving the Nation’s Cybersecurity.” This mandate directs federal agencies to advance toward a zero-trust architecture that “eliminates implicit trust in any one element, node, or service.” On the other hand, Dynatrace research indicates that government leaders are increasingly prioritizing positive user experience, with two-thirds of respondents saying this is “very” important.

On Episode 51 of the Tech Transforms podcast, we tackle zero trust and user experience. United States Patent and Trademark Office Chief Information Officer Jamie Holcombe reveals how agencies can have the best of both worlds as they pursue IT modernizations: zero trust enforcement that does not arrive at the expense of user experience.

Learn how zero trust and a better user experience can co-exist with unified observability.

“Trust nothing and no one” security policies can enhance user experiences

The Cybersecurity and Infrastructure Security Agency (CISA) Zero Trust Maturity Model defines five pillars: identity, devices, networks, applications/workloads, and data. The teams responsible for these pillars use expansive observability to establish optimal digital experience monitoring, and the result is frictionless, secure interactions. Agencies that implement zero trust can therefore enhance user experiences rather than diminish them.

During the conversation, Holcombe focused on the following key areas to accomplish exceptional and secure digital interactions:

  • Pay attention to all five pillars, not just the “identity” pillar.
  • Extend authentication controls for what really matters.
  • Overcome user resistance before it even occurs.
This episode of Tech Transforms explores how to balance a zero-trust architecture with positive user experiences.

Tune in to the full episode for more insights from Holcombe on zero trust, user experience, and other technology topics for agencies.

Follow the Tech Transforms podcast

Follow Tech Transforms on Twitter, LinkedIn, Instagram, and Facebook to get the latest updates on new episodes! Listen and subscribe on our website, or your favorite podcast platform, and leave us a review!

The post Tech Transforms podcast: How to ensure both zero trust and positive user experiences at federal agencies appeared first on Dynatrace news.

Dynatrace achieves Google Cloud Ready – Cloud SQL designation https://www.dynatrace.com/news/blog/dynatrace-achieves-google-cloud-ready-cloud-sql-designation/ https://www.dynatrace.com/news/blog/dynatrace-achieves-google-cloud-ready-cloud-sql-designation/#respond Tue, 29 Aug 2023 13:00:07 +0000 https://www.dynatrace.com/news/?p=59418 Dynatrace | Google Cloud Platform


Dynatrace has announced that it has successfully achieved the Google Cloud Ready – Cloud SQL designation for Cloud SQL, Google Cloud’s fully-managed, relational database service for MySQL, PostgreSQL, and SQL Server.

More about Google Cloud Ready – Cloud SQL

Google Cloud Ready – Cloud SQL is a new designation for Google Cloud’s technology partners’ solutions that integrate with Cloud SQL. Google Cloud Ready – Cloud SQL recognizes the partner solutions that have met a core set of functional requirements and have been validated in collaboration with Google Cloud engineering teams.

Dynatrace has collaborated closely with Google Cloud to add support for Cloud SQL for MySQL, PostgreSQL, and SQL Server to Dynatrace solutions, in addition to tuning existing functionality for optimal outcomes.

By earning this designation, Dynatrace has proven that its platform has met a core set of functional and interoperability requirements when integrating with Cloud SQL and has refined documentation for ease of onboarding by our mutual customers. This designation enables customers to discover and have confidence that the Dynatrace offerings and solutions they use today work well with Cloud SQL. This designation can also save time in evaluating Dynatrace solutions for organizations that are not already using them.

More about Dynatrace

Purpose-built for the cloud, Dynatrace automatically discovers, baselines, and intelligently monitors dynamic hybrid-cloud environments​ while enabling auto-deployment, configuration, and intelligence. With AI continuously baselining performance and serving precise root causation and contextual data for rapid MTTR, Dynatrace delivers business impact so organizations can confidently optimize and deliver exceptional user experiences with a single view across the Google Cloud ecosystem. This includes Google Compute Engine, Google Kubernetes Engine, Anthos, and hybrid and multicloud environments — from users and edge devices to apps and cloud platforms.

Dynatrace and Google Cloud

As part of this program, Dynatrace has more opportunities to collaborate closely with Google Cloud partner engineering and Cloud SQL teams to develop joint roadmaps.

“The Google Cloud Ready – Cloud SQL designation gives customers confidence that the Dynatrace monitoring solution has gone through a formal certification process and will deliver the best possible performance with Cloud SQL,” said Ritika Suri, Director of Technology Partnerships at Google Cloud. “With Dynatrace, customers can easily discover products and save time on evaluating them so they can more easily optimize their business performance.”

“We greatly value the strong partnership and collaboration we have with Google Cloud,” said Kacey Leckenby, Senior Director of Global Cloud Alliances at Dynatrace. “Through programs like Google Cloud Ready, we ensure that Dynatrace is continually focused on co-developed cloud-native solutions that empower our customers to achieve the outcomes of their Google Cloud investment that are essential to their business.”

Learn more about Dynatrace’s expertise with Google Cloud or check out more information on Google Cloud Ready – Cloud SQL.

The post Dynatrace achieves Google Cloud Ready – Cloud SQL designation appeared first on Dynatrace news.

Customer expectations for retail: Beyond digital experience https://www.dynatrace.com/news/blog/customer-expectations-for-retail-beyond-digital-experience/ https://www.dynatrace.com/news/blog/customer-expectations-for-retail-beyond-digital-experience/#respond Mon, 28 Aug 2023 14:27:19 +0000 https://www.dynatrace.com/news/?p=59408 Business observability

Digital experience has long been the focus of e-commerce organizations looking to foster loyalty and improve business outcomes, especially during holiday seasons. Digital experience creates a first impression, and first impressions matter; however, what happens after the conversion also creates a lasting impression, often with a larger impact on loyalty and business outcomes.

The post Customer expectations for retail: Beyond digital experience appeared first on Dynatrace news.


Digital experience is often considered the most important customer-facing aspect of digital commerce. This is typically the first thing that comes to mind for IT professionals working in the retail industry when evaluating holiday readiness. While digital experience has many facets, transaction speed usually ranks among the most important. Almost two decades ago, a Google experiment showed that fast-loading transactions are more important to customers than content quality—even small increases in transaction delay result in substantially more abandoned sessions. That lesson remains important. (Though the three-second rule for page load time is often misinterpreted).

CEOs of hybrid retailers prioritize e-commerce growth over in-store shopping, investing heavily in their online storefronts. IT teams spend months preparing for the peak traffic they anticipate will arrive with holiday shopping. However, this is a dynamic target; shopping behaviors are increasingly unpredictable, customer expectations continue to rise, and fierce competition makes cultivating loyalty more challenging than ever. These challenges can be summarized by this quote, paraphrased here from Adobe’s 2021 Digital Trends report: “Your customers are digital, unpredictable, and easy to lose.”

From first to lasting impressions

But there’s more to digital experience than speed. Digital experience, measured by fast, frictionless user journeys, paints an incomplete picture, tracking just the beginning of the customer relationship. What happens after the conversion creates a lasting impression with a larger impact on loyalty and your business.

Let’s shift our focus to the backend systems and business processes, the behind-the-scenes heroes of end-to-end customer experience. These retail-business processes must work together efficiently to orchestrate customer satisfaction:

  • Inventory management ensures you can anticipate and meet dynamic customer demand.
  • Order processing workflow is triggered by customer orders.
  • Order fulfillment is the packaging and delivery of orders to customers.

From a customer perspective, the nuances of these business processes are uninteresting as long as they work. Increasingly, however, order fulfillment is a differentiating customer-facing aspect of the end-to-end customer journey, often with digital touchpoints woven into the experience. The fulfillment clock starts ticking the moment a customer purchases your product. Yet fulfillment is often an area over which retailers have little visibility or control.

Customers value real-time visibility into order status and delivery tracking. However, these fulfillment processes are often strained under the pressure of increased online shopping, next-day delivery expectations, and environmentally friendly choices. Flexible delivery options, including “buy online, pick up in store” (BOPIS), curbside pickup, self-service lockers, and gig economy delivery, require even greater real-time coordination to commit to competitive and narrowing delivery windows. Decentralized last-mile delivery strategies such as micro-fulfillment centers complicate inventory management and order fulfillment oversight.

Technology to the rescue?

Solutions such as inventory management, order management, and delivery optimization can introduce new challenges:

  • System integration. To effectively leverage multiple systems to manage orders, inventory, and logistics, retailers must invest in often complex integrations. Unsynchronized and siloed data prevents real-time decision-making and business automation.
  • Multi-channel logistics. Most retailers work with multiple carriers to handle deliveries, resulting in disparate tracking systems. Aggregating tracking information and presenting it to customers in a uniform way can be a challenge (a simplified sketch of this normalization follows this list).
  • Real-time updates. Customers expect real-time visibility into fulfillment milestones beyond order confirmation, including packing, shipping, and delivery notifications. Self-service tracking information, preferred by most customers, becomes especially difficult to provide when there are delays or disruptions.
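To illustrate the multi-channel logistics point above, the following sketch normalizes status payloads from two hypothetical carriers into a single, uniform tracking model. Real carrier APIs, field names, and status vocabularies will differ; this only shows the shape of the aggregation problem.

```python
from dataclasses import dataclass

# Hypothetical carrier payloads -- real carrier APIs use their own fields and statuses.
CARRIER_A = {"tracking_no": "A123", "state": "IN_TRANSIT", "eta": "2023-12-22"}
CARRIER_B = {"trackingNumber": "B987", "statusCode": 40, "estimatedDelivery": "2023-12-21"}

@dataclass
class TrackingStatus:
    carrier: str
    tracking_number: str
    status: str          # normalized: ORDERED, PACKED, SHIPPED, OUT_FOR_DELIVERY, DELIVERED
    estimated_delivery: str

def normalize_a(payload: dict) -> TrackingStatus:
    mapping = {"IN_TRANSIT": "SHIPPED", "DELIVERED": "DELIVERED"}
    return TrackingStatus("carrier-a", payload["tracking_no"],
                          mapping.get(payload["state"], "UNKNOWN"), payload["eta"])

def normalize_b(payload: dict) -> TrackingStatus:
    mapping = {10: "PACKED", 40: "OUT_FOR_DELIVERY", 50: "DELIVERED"}
    return TrackingStatus("carrier-b", payload["trackingNumber"],
                          mapping.get(payload["statusCode"], "UNKNOWN"),
                          payload["estimatedDelivery"])

for status in (normalize_a(CARRIER_A), normalize_b(CARRIER_B)):
    print(status)
```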

Embracing business observability

Successful retailers benefit from real-time insights into business processes across all milestones. While each system and service provider might adhere to SLOs, the end-to-end health of the process is greater than the sum of its parts. How can you discover optimization opportunities, patterns behind recurring disruptions, or the root cause of an anomaly? The answer lies in the context—connecting business process KPIs to system performance becomes the starting point for real-time business/IT collaboration and automated remediation. The resulting agility supports targeted responses to process disruptions, anomalies, and bottlenecks as they happen, not when daily or weekly reports are produced, not when your call center is inundated, not when your Net Promoter Score (NPS) plummets. To accomplish this transformation, IT teams need to expand their observability scope to include business KPIs.

How Dynatrace can help

Recent platform innovations have made monitoring end-to-end business processes such as order fulfillment easier. Consider these requirements for effective business observability.

  • Business data must be accurate to instill the confidence to make business decisions.
  • Business data can come from many sources, including OneAgent, RUM, external business systems, and log files.
  • Business data must be easy to access without modifying code to reduce the burden on development and maintenance resources.
  • Business data must remain granular over long retention periods to support long-running business processes and “needle in the haystack” queries.
  • Business data must be unified, regardless of the source or data type.
  • Business data must be easily queried to answer unanticipated questions without upfront indexing.

Business events deliver real-time business observability to business and IT teams with the precision and context to support data-driven decisions and improve business outcomes. Business events extract critical business data from your IT systems with lossless precision and can illuminate dark data quickly and easily, wherever that data exists.

Business events from any data source
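As a concrete, if simplified, example, the snippet below pushes a single purchase-confirmation milestone to the Dynatrace business events ingest endpoint. The environment URL, token, and payload attributes are placeholders; in many setups OneAgent or RUM captures business events automatically, so a manual API call like this is just one of several capture options.

```python
import os
import requests

# Hypothetical example of sending a business event via the Dynatrace API.
# URL, token, and payload fields are placeholders; OneAgent or RUM capture
# often makes an explicit call like this unnecessary.
DT_ENV = os.environ.get("DT_ENV", "https://abc12345.live.dynatrace.com")
DT_TOKEN = os.environ.get("DT_BIZEVENTS_TOKEN", "dt0c01.sample-token")

event = {
    "event.type": "com.example.order.confirmed",   # illustrative event type
    "event.provider": "ecommerce-platform",
    "orderId": "ORD-10042",
    "orderValue": 129.90,
    "currency": "USD",
    "fulfillmentMilestone": "purchase-confirmation",
}

resp = requests.post(
    f"{DT_ENV}/api/v2/bizevents/ingest",
    headers={"Authorization": f"Api-Token {DT_TOKEN}",
             "Content-Type": "application/json"},
    json=event,
    timeout=30,
)
print(resp.status_code)  # a 2xx status (typically 202 Accepted) indicates the event was ingested
```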

Order fulfillment process example

Retail order fulfillment is a good example of business process monitoring, a use case enabled by these innovations. Fulfillment processes vary between retailers, with subprocesses that might introduce branches and loops. It’s a good practice to identify process milestones as a starting point; these should be relatively consistent. For example:

  1. Purchase confirmation
  2. Order picked from the warehouse
  3. Shipping label created
  4. Order accepted by the delivery agent
  5. Delivery confirmation
  6. Survey completed

Once you’ve defined the list of milestones, identify where to capture the data.

  • Purchase confirmation: E-commerce platform (via OneAgent)
  • Order picked: Warehouse management system (via OneAgent)
  • Shipping label created: Warehouse management system (via OneAgent)
  • Order scanned by delivery agent: Agent logistics system (via API)
  • Delivery confirmation: Agent logistics system (via API)
  • Survey: VoC solution (via API or database query)

The Business Flow app, developed using Dynatrace® AppEngine, makes it easy to configure business process milestones for an end-to-end view of process throughput, delays, and anomalies.

Business Flow

Become a business observability champion

Want to see how it’s done? Watch this 30-minute webinar to see how Mitchells & Butlers leverages real-time, context-rich analytics to optimize process efficiencies, discover and respond to dynamic customer behavioral patterns, and drive confident business decisions.

The post Customer expectations for retail: Beyond digital experience appeared first on Dynatrace news.

Tech Transforms podcast: Supply chain meets modernization https://www.dynatrace.com/news/blog/tech-transforms-supply-chain-modernization/ https://www.dynatrace.com/news/blog/tech-transforms-supply-chain-modernization/#respond Mon, 28 Aug 2023 12:00:48 +0000 https://www.dynatrace.com/news/?p=59385 Tech Transforms Podcast

On the Tech Transforms podcast, sponsored by Dynatrace, we talk to some of the most prominent influencers shaping critical government technology decisions.

The post Tech Transforms podcast: Supply chain meets modernization appeared first on Dynatrace news.


The global supply chain took a major hit during the COVID-19 pandemic. While it has since returned to a relatively normal state, the need for supply chain modernization has emerged.

Dr. Aaron Drew, the Technical Director for the Supply Chain Management Product Line at the U.S. Department of Veterans Affairs Office of Information and Technology, is actively advancing supply chain modernization efforts within the Department of Veterans Affairs (VA). His goal: to better prepare the agency for today, tomorrow, and the future.

Episode 64 of the Tech Transforms podcast is all about the challenges of supply chain management. I sat down with Dr. Drew to discuss modernization within an agency as large as the VA, which is now the second-largest federal agency after the Department of Defense. Dr. Drew touches on the pertinence of supply chain modernization and the challenges of navigating technology procurement. He also underscores the importance of visibility when it comes to implementing government-mandated security requirements, such as the National Cybersecurity Strategy.

Software supply chain attacks are on the rise. Here’s why.

Why supply chain modernization is critical

Throughout the conversation, Dr. Drew explains the ongoing VA supply chain modernization efforts, particularly implementing lessons learned from the COVID-19 pandemic and its impact on both healthcare and the broader supply chain. He emphasizes the need for new tools that are both developed and implemented with the end user’s problems and day-to-day use at the forefront.

“What makes it easier is if you know what problem you’re trying to solve,” Dr. Drew said. “The problem came from the people. I visited some VA medical centers; I went to the cemeteries; I was over at fulfillment centers; I was riding in the VA vans. I understand, and I lived with the problem.”

During our chat, Dr. Drew outlined the steps an organization can take to modernize and maximize applications for end users. He also suggested ways to capitalize on data analytics to better prepare our nation for times of need. Dive into the full episode to hear his insights.

Tech Transforms episode 64 cover, featuring guest Dr. Aaron Drew. This episode of Tech Transforms explores challenges and provides advice for supply chain management and modernization.

Follow the Tech Transforms podcast

Follow Tech Transforms on Twitter, LinkedIn, Instagram, and Facebook to get the latest updates on new episodes! Listen and subscribe on our website, or your favorite podcast platform, and leave us a review!

The post Tech Transforms podcast: Supply chain meets modernization appeared first on Dynatrace news.

IT modernization improves public health services at state human services agencies https://www.dynatrace.com/news/blog/it-modernization-improves-public-health-services/ https://www.dynatrace.com/news/blog/it-modernization-improves-public-health-services/#respond Fri, 25 Aug 2023 18:42:52 +0000 https://www.dynatrace.com/news/?p=59379 IT modernization at health and human services agencies for state and local governments requires a comprehensive observability for modernizing IT


IT modernization improves public health services at state human services agencies

For many organizations, the pandemic was a crash course in IT modernization, as agencies scrambled to meet the community’s needs while details unfolded.

The early days of the pandemic highlighted the importance of communicating with the public and getting the message right. As science and instructions from public health officials often changed daily, citizen faith in public health institutions eroded. There was mass confusion over how to prevent the virus, what to do if you were sick, who was eligible for vaccines, and where to get them.

The lack of data blocked agencies from making informed decisions about a quickly mutating virus that posed different risks to different populations. Public health leaders, policymakers, and elected officials struggled to respond to the pandemic due to chronic underinvestment in public health at the federal, state, and local levels.

The pandemic has transformed how government agencies such as Health and Human Services (HHS) operate. Program staff depend on the reliable functioning of critical program systems and infrastructure to provide the best service delivery to the communities and citizens HHS serves, from newborn infants to persons requiring health services to our oldest citizens.

Modernizing IT through digital transformation promises to enable HHS to meet the ever-changing expectations of citizens and employees. Yet upgrading antiquated systems and effectively integrating them into modern workflows is challenging despite the potential to increase efficiency, improve security, reduce the higher operational costs associated with supporting tech debt, and avoid the foreseeable breaks in end-of-life apps.

The costs and challenges of technical debt

Retaining older systems brings both direct and indirect costs. For example, older systems may require additional support personnel or contractors to operate and maintain. If components have exceeded their typical life expectancy, a relatively minor system change may take a prolonged period to complete, and unexpected outages are more likely. Both can result in lost productivity for IT teams and staff in the field.

Further, legacy custom-developed apps were not built to meet the present-day user experience that HHS clients and partners expect. Upgrades and modifications, if available, are complex and expensive, so it isn’t easy to keep them secure and functional. Keeping the app working often requires the services of staff with institutional memory to rewrite and update code bases.

Generally, older technologies were not built to meet current expectations; they are inefficient, difficult to maintain, and costlier to support long term. IT modernization can help.

Enable DevOps teams to modernize legacy apps

Too many HHS IT organizations have an inventory of outdated applications with duplicative functionality, questionable states of health, and security vulnerabilities. It may be challenging to accurately catalog where applications are stored if some are maintained within a current infrastructure model while others are not.

It’s practically impossible for teams to modernize when they can’t visualize all the dependencies within their infrastructure, processes, and services. Yet avoiding IT modernization can negatively affect the quality of support an agency can offer to the community as well as the agency’s reputation.

Modernizing IT requires that teams have the capability to auto-discover dependencies and visualize their environments so they can understand the codebase and its underlying dependencies. These insights and intelligent automation enable teams to modernize outdated apps with confidence. The team can also focus on developing new cloud-native apps that provide the scalability necessary to deliver reliable services, especially during times of crisis when families need HHS the most.

Modernizing IT enables automation to work at scale

For organizations with scarce resources and heavy workloads, automated workflows can enable IT teams to monitor, manage, secure, and troubleshoot applications at scale. Having the right tools to resolve problems is vital to maintaining the continuity of operations and building a happier workforce.

To help teams automate workflows across their full multicloud stack, Dynatrace provides a head start on IT modernization based on a platform approach. Built on the AutomationEngine, Dynatrace automated multicloud workflows allow teams to visualize and automate processes. Once created, these automated operations can be customized for specific environments or scenarios as necessary. In practice, automated workflows enable teams to move beyond application monitoring and event understanding to take targeted action.
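Dynatrace workflows are defined within the platform itself, but to make the idea of event-driven automation tangible, here is a deliberately generic sketch of a small webhook receiver that kicks off a remediation action when a hypothetical problem notification arrives. The endpoint, port, and payload fields are invented for illustration and are not the Dynatrace implementation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Generic illustration of event-driven automation: a tiny webhook receiver that
# triggers a follow-up action when a (hypothetical) problem notification arrives.
# Dynatrace workflows are defined in the platform; this is not that implementation.
def remediate(problem: dict) -> None:
    # Placeholder action -- e.g., restart a service, open a ticket, or scale a workload.
    print(f"Remediating problem {problem.get('problemId')}: {problem.get('title')}")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        remediate(payload)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```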

The Dynatrace unified platform converges observability and security to break down silos and empower end-to-end automation, helping teams deliver faster time to value without compromising security.

Proactively manage app performance and reliability

For HHS, system performance is mission-critical for program areas within the state’s safety net. When services experience unexpected outages, clients may experience delays when applying for online health benefits or receiving critical care.

Modernizing IT with continuous observability alerts IT Ops in real time to application outages or degraded performance. It discovers the root cause of issues so teams can resolve them before they impact constituents. This moves teams away from reactive incident management, where manual escalations contribute to extended outages.

Understanding how dependencies affect application performance positions the team to make better decisions, such as adjusting service architecture or infrastructure to improve application performance or holding third-party vendors accountable for causing performance issues.

IT modernization reduces security risks

Millions of citizens trust HHS with personal, financial, and other sensitive information requiring protection at the highest levels. Cyberattacks are rapidly evolving and can potentially expose sensitive client and agency information, disrupt critical operations, and violate the public’s trust.

HHS tends to have large numbers of systems, networks, and devices, which collectively increase complexity and the potential for failure, as does unsupported legacy tech. Recovering from a breach is costly and time-consuming and may result in penalties, audit findings, and the loss of program funding.

Modernizing IT with Dynatrace Application Security provides agencies with unified observability, security, and intelligent automation. This is crucial for a state agency, such as HHS, that electronically holds Protected Health Information (PHI) and Federal Tax Information (FTI).

IT modernization is essential for HHS

The IT environment directly impacts the level of functionality and resiliency that citizens can expect from HHS. Modernizing IT significantly benefits communities and the citizens HHS serves by improving security and regulation of applications and platforms, advancing service delivery quality and efficiency, as well as offering good stewardship of every taxpayer dollar by increasing productivity and reducing operating costs.

Dynatrace provides analytics and automation for unified observability and security. To learn more or to start a free trial, visit Dynatrace for state and local government.

If your team is feeling the demand for increased digital services but is unsure of how to start an IT modernization journey, review our three-part Journey to the Cloud eBook series for State and Local Government.

Step 1: Understanding your legacy environment

In this ebook, we explain the urgent need for IT modernization and why understanding your existing IT environment is the cornerstone of a move to the cloud. A successful cloud migration starts with knowing which applications and workloads should move to the cloud and in what order.

Step 2: Design and activate your cloud migration strategy

It takes smart planning to migrate applications to the cloud with minimal disruption. In this ebook, we walk you through best practices for developing a cloud migration strategy, including examples from other agencies that have established a cloud-forward roadmap.

Step 3: Optimize your transition to the cloud with observability

With your move to the cloud, an observability strategy is imperative. Traditional tools for monitoring and observing modern software stacks struggle to deal with the dynamic and changing nature of cloud environments. In this eBook, we explain how advanced observability can help you ensure optimal application availability and performance for the best possible end-user experiences.

The post IT modernization improves public health services at state human services agencies appeared first on Dynatrace news.
