
Application performance management

In the fields of information technology and systems management, application performance management (APM) is the monitoring and management of performance and availability of software applications.

APM strives to detect and diagnose application performance problems to maintain an expected level of service. APM is "the translation of IT metrics into business meaning ([i.e.] value)".

Real user monitoring

Real user monitoring (RUM) is a passive monitoring technology that records all user interaction with a website, or a client interacting with a server or cloud-based application.
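As a rough illustration, RUM beacons collected from real user sessions might be aggregated into per-page service-level figures. The sketch below assumes hypothetical beacon fields (`page`, `load_ms`, `error`); real RUM products define their own schemas.

```python
from statistics import median

def summarize_rum(beacons):
    """Aggregate hypothetical RUM beacons (one dict per page view)
    into per-page service-level statistics."""
    pages = {}
    for b in beacons:
        pages.setdefault(b["page"], []).append(b)
    summary = {}
    for page, views in pages.items():
        times = [v["load_ms"] for v in views]
        errors = sum(1 for v in views if v.get("error"))
        summary[page] = {
            "views": len(views),
            "median_load_ms": median(times),
            "error_rate": errors / len(views),
        }
    return summary

beacons = [
    {"page": "/checkout", "load_ms": 420, "error": False},
    {"page": "/checkout", "load_ms": 980, "error": True},
    {"page": "/home", "load_ms": 150, "error": False},
]
print(summarize_rum(beacons))
```

Figures like the per-page error rate and median load time are exactly the "actual service-level quality delivered to end-users" that RUM is used to determine.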

Monitoring actual user interaction with a website or an application is important to operators to determine whether users are being served quickly and without errors and, if not, which part of a business process is failing. Software-as-a-service (SaaS) and application service providers (ASPs) use RUM to monitor and manage the service quality delivered to their clients. Real user monitoring data is used to determine the actual service-level quality delivered to end-users and to detect errors or slowdowns on websites. The data may also be used to determine whether changes promulgated to sites have the intended effect or cause errors. Organizations also use RUM to test website or application changes prior to deployment by monitoring for errors or slowdowns in the pre-deployment phase.

Passive monitoring

Passive monitoring is a technique used to capture traffic from a network by generating a copy of that traffic, often from a span or mirror port, or via a network tap.
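One common analysis over such copied traffic is ranking "top talkers" — the hosts sending the most bytes. A minimal sketch over hypothetical flow records (tuples of source, destination, byte count; real tools work from captured packets or NetFlow-style exports):

```python
from collections import Counter

def top_talkers(flow_records, n=3):
    """Rank hosts by total bytes sent, given flow records of the form
    (src_host, dst_host, bytes) derived from copied traffic."""
    sent = Counter()
    for src, dst, nbytes in flow_records:
        sent[src] += nbytes
    return sent.most_common(n)

flows = [
    ("10.0.0.5", "10.0.0.9", 1_200_000),
    ("10.0.0.7", "10.0.0.9", 300_000),
    ("10.0.0.5", "10.0.0.2", 800_000),
]
print(top_talkers(flows))
```

Because the analysis runs on a copy of the traffic, it adds no load or risk to the production path — the property that makes passive monitoring a low-risk implementation.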

This low-risk implementation provides one of the highest values within application performance management in terms of application visibility for the business. In general, it can be up and running, providing details of application performance, in less than two days, and it helps lay the foundation for performance trending and predictive analysis.[1] Captured traffic can be analyzed in a sniffer such as Wireshark; examined for flows of traffic, providing information on "top talkers" in a network as well as TCP round-trip time; or reassembled according to an application's state machine into end-user activity (for example, database queries, e-mail messages, and so on). This kind of technology is common in real user monitoring and now supports multiple protocol analytics.

Synthetic monitoring

Synthetic monitoring is valuable because it enables a webmaster to identify problems and determine whether a website or web application is slow or experiencing downtime before that problem affects actual end-users or customers.
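A synthetic monitor is essentially a scheduled, scripted request plus a classification rule. The sketch below is illustrative only — the threshold, probe URL, and scheduling are assumptions, and real synthetic-monitoring products probe from many locations and replay whole transactions:

```python
import time
import urllib.request

SLOW_MS = 2000  # hypothetical slowness threshold, in milliseconds

def classify(elapsed_ms):
    """Label a successful probe as 'up' or 'slow'."""
    return "slow" if elapsed_ms > SLOW_MS else "up"

def probe(url, timeout=5):
    """Issue one scripted request and classify the outcome. Run on a
    schedule, this surfaces slowness or downtime before real users
    or customers are affected."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            return {"status": classify(elapsed_ms), "http": resp.status,
                    "elapsed_ms": round(elapsed_ms)}
    except Exception as exc:  # DNS failure, refusal, timeout, HTTP error
        return {"status": "down", "error": str(exc)}

if __name__ == "__main__":
    print(probe("https://example.com/"))
```

Because the request is generated by the monitor rather than by a visitor, the check works around the clock even when the site has no traffic.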

This type of monitoring does not require actual web traffic, so it enables companies to test web applications 24x7, or to test new applications prior to a live customer-facing launch.

Business transaction management

Business transaction management (BTM), also known as business transaction monitoring, application transaction profiling, or user-defined transaction profiling, is the practice of managing information technology (IT) from a business transaction perspective.
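At its core, BTM stitches per-tier events together by a shared transaction ID so that stalled or failed transactions can be isolated. A toy sketch — the tier names, event fields, and the "database is the terminal tier" rule are all assumptions for illustration:

```python
from collections import defaultdict

def stitch_transactions(events):
    """Group per-tier events by a shared transaction ID and flag
    transactions that never reached the terminal tier as stalled."""
    txns = defaultdict(list)
    for ev in events:
        txns[ev["txn_id"]].append(ev)
    report = {}
    for txn_id, evs in txns.items():
        tiers = [e["tier"] for e in sorted(evs, key=lambda e: e["ts"])]
        report[txn_id] = {"path": tiers,
                          "stalled": tiers[-1] != "database"}
    return report

events = [
    {"txn_id": "T1", "tier": "web", "ts": 1},
    {"txn_id": "T1", "tier": "app", "ts": 2},
    {"txn_id": "T1", "tier": "database", "ts": 3},
    {"txn_id": "T2", "tier": "web", "ts": 1},
    {"txn_id": "T2", "tier": "app", "ts": 2},
]
print(stitch_transactions(events))
```

The per-transaction `path` is a crude form of the dynamic application-topology mapping described above, and the `stalled` flag shows how message context can isolate a stalled transaction.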

It provides a tool for tracking the flow of transactions across IT infrastructure, in addition to detection, alerting, and correction of unexpected changes in business or technical conditions. BTM provides visibility into the flow of transactions across infrastructure tiers, including a dynamic mapping of the application topology. Using BTM, application support teams can search for transactions based on message context and content – for instance, time of arrival or message type – providing a way to isolate the causes of common issues such as application exceptions, stalled transactions, and lower-level issues such as incorrect data values.[1] A number of factors have led to the demand for the development of BTM software.

Systems management

Systems management refers to enterprise-wide administration of distributed systems, including (and commonly in practice) computer systems.
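One core systems-management task is event correlation: collapsing a storm of related alerts into a single incident so operators see one problem instead of hundreds of notifications. A minimal sketch, with hypothetical event fields and a simple time-window rule:

```python
def correlate(events, window=60):
    """Collapse repeated events with the same (host, check) key that
    occur within `window` seconds into one incident with a count."""
    incidents = []
    open_incident = {}  # (host, check) -> index into incidents
    for ev in sorted(events, key=lambda e: e["ts"]):
        key = (ev["host"], ev["check"])
        if (key in open_incident
                and ev["ts"] - incidents[open_incident[key]]["last_ts"] <= window):
            inc = incidents[open_incident[key]]
            inc["count"] += 1
            inc["last_ts"] = ev["ts"]
        else:
            open_incident[key] = len(incidents)
            incidents.append({"host": ev["host"], "check": ev["check"],
                              "count": 1, "last_ts": ev["ts"]})
    return incidents

# Three disk alerts from the same host within a minute -> one incident.
storm = [{"host": "db01", "check": "disk", "ts": t} for t in (0, 10, 20)]
print(correlate(storm))
```

Real event-correlation engines add topology awareness and root-cause rules on top of this kind of deduplication.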

Systems management is strongly influenced by network management initiatives in telecommunications. Application performance management (APM) technologies are now a subset of systems management; maximum productivity can be achieved more efficiently through event correlation, system automation, and predictive analysis, all of which are now part of APM.[1] Centralized management has a time-and-effort trade-off related to the size of the company, the expertise of the IT staff, and the amount of technology in use. Systems management may involve one or more of a range of tasks.

Functions

Functional groups are provided according to the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Common Management Information Protocol (X.700) standard.

System monitoring

In systems engineering, a system monitor (SM) is a process within a distributed system for collecting and storing state data.
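A system monitor of this kind can be sketched as a poller that reads the monitored system's state and stores timestamped samples for later analysis. Names and the example state source are hypothetical:

```python
import time

class SystemMonitor:
    """Minimal sketch of a system monitor: periodically poll a state
    source and store timestamped samples (collecting and storing
    state data, per the definition above)."""

    def __init__(self, read_state):
        self.read_state = read_state   # callable returning current state
        self.samples = []              # stored state data: (timestamp, state)

    def poll(self):
        """Take one sample of the monitored system's state."""
        self.samples.append((time.time(), self.read_state()))

    def latest(self):
        """Return the most recently stored state, if any."""
        return self.samples[-1][1] if self.samples else None

# Hypothetical state source: a queue-depth reading.
mon = SystemMonitor(read_state=lambda: {"queue_depth": 4})
mon.poll()
print(mon.latest())
```

In this split, `read_state` stands for the configuration of the system being monitored, while the poller itself would carry the monitor application's own configuration (e.g. the polling interval).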

This is a fundamental principle supporting application performance management.

Overview

The argument that system monitoring is just a nice-to-have, and not really a core requirement for operational readiness, dissipates quickly when a critical application goes down with no warning.[1] The configuration for the system monitor takes two forms: configuration data for the monitor application itself, and configuration data for the system being monitored. See: System configuration.

Application Response Measurement

Application Response Measurement (ARM) is an open standard published by the Open Group for monitoring and diagnosing performance bottlenecks within complex enterprise applications that use loosely coupled designs or service-oriented architectures.

History

Version 1 of ARM was developed jointly by Tivoli Software and Hewlett-Packard in 1996. Version 2 was developed by an industry partnership (the ARM Working Group) and became available in December 1997 as an open standard approved by the Open Group. ARM 4.0 was released in 2003 and revised in 2004. As of 2007, ARM 4.1 version 1 is the latest version of the ARM standard.

Introduction

Current application design tends to be more complex and distributed over networks, and within distributed applications it is not easy to determine whether the application performs well.
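With ARM-style instrumentation, the application itself marks the start and end of each business transaction so its response time and status can be recorded. The sketch below only mimics that pattern — the real ARM standard defines C and Java APIs, and these class and method names are illustrative, not the ARM API:

```python
import time

class ArmLikeTimer:
    """Toy stand-in for ARM-style instrumentation: mark the start and
    stop of a business transaction and record its response time and
    completion status. (Names are illustrative only.)"""

    def __init__(self):
        self.measurements = []

    def start(self, name):
        """Mark the start of a named business transaction."""
        return (name, time.monotonic())

    def stop(self, handle, status="GOOD"):
        """Mark the end of the transaction and record the measurement."""
        name, t0 = handle
        self.measurements.append(
            {"transaction": name,
             "elapsed_ms": (time.monotonic() - t0) * 1000,
             "status": status})

arm = ArmLikeTimer()
h = arm.start("place-order")
# ... business logic would run here ...
arm.stop(h, status="GOOD")
print(arm.measurements[0]["transaction"])
```

Recording the status alongside the elapsed time is what lets a management tool answer whether business transactions are succeeding and, if not, where they fail.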

Are business transactions succeeding and, if not, what is the cause of failure? ARM helps answer these questions.

Information Technology Infrastructure Library

ITIL (formerly known as the Information Technology Infrastructure Library) is a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business.

In its current form (known as ITIL 2011 edition), ITIL is published as a series of five core volumes, each of which covers a different ITSM lifecycle stage. Although ITIL underpins ISO/IEC 20000 (previously BS15000), the International Service Management Standard for IT service management, the two frameworks do have some differences. ITIL describes processes, procedures, tasks, and checklists which are not organization-specific, but can be applied by an organization for establishing integration with the organization's strategy, delivering value, and maintaining a minimum level of competency.

It allows the organization to establish a baseline from which it can plan, implement, and measure. It is used to demonstrate compliance and to measure improvement.