WO2001008037A2 - A system, method and computer program for determining capability levels of processes to evaluate operational maturity of an organization


Info

Publication number
WO2001008037A2
Authority
WO
WIPO (PCT)
Prior art keywords
capability
level
management
assessment
attributes
Prior art date
Application number
PCT/US2000/020353
Other languages
French (fr)
Other versions
WO2001008037A3 (en)
Inventor
Nancy S. Greenberg
Colleen R. Winn
Original Assignee
Accenture Llp
Priority date
Filing date
Publication date
Application filed by Accenture Llp
Priority to AU62384/00A
Publication of WO2001008037A2
Publication of WO2001008037A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management

Definitions

  • the present invention relates to IT operations organizations, and more particularly to evaluating the maturity of an operations organization by determining capability levels of its process areas.
  • frameworks and gap analysis have been used to capture the best practices of IT management and to determine areas of improvement. While frameworks and gap analysis are intended to capture observable weaknesses in processes, they do not provide data with sufficient objectivity and granularity on which a comprehensive improvement plan can be built.
  • a system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a process area as a part of an operational maturity investigation.
  • a plurality of process attributes are first defined, along with a plurality of generic practices for each of the process attributes. Also defined are a plurality of capability levels in terms of groups of the process attributes. Each of the process attributes is then rated based on achievement of the corresponding generic practices. It is then determined which of the capability levels is achieved by a process area, based on the ratings of the process attributes of the capability levels. Thereafter, the capability level is output for gauging the maturity of an operations organization.
  • the capability levels may each be achieved when the ratings of the capability level's process attributes surpass a predetermined amount.
  • each capability level may be defined by the process attributes of a lower capability level and further defined by at least one more process attribute.
  • the process attributes may include process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and/or process change.
  • the capability levels may include performed informally, planned and tracked, well defined, quantitatively controlled, and/or continuously improving.
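  • purely as an illustration, the determination just described can be sketched in code. The following Python sketch is not part of the patent: the names, the use of the four-point rating scale, and the "largely achieved" threshold are assumptions, and the grouping of the nine process attributes into the five capability levels follows the ATT 1A through ATT 5B grouping detailed later in this document.

```python
# Hypothetical sketch: determining the capability level of a process area
# from ratings of its process attributes. Threshold and names are assumed.

RATING_SCALE = ["not achieved", "partially achieved",
                "largely achieved", "fully achieved"]  # four-point scale

# Nine process attributes grouped by capability level (ATT 1A through 5B).
LEVEL_ATTRIBUTES = {
    1: ["process performance"],
    2: ["performance management", "work product management"],
    3: ["process resource", "process definition"],
    4: ["process measurement", "process control"],
    5: ["continuous improvement", "process change"],
}

LEVEL_NAMES = {0: "no level achieved", 1: "performed informally",
               2: "planned and tracked", 3: "well defined",
               4: "quantitatively controlled", 5: "continuously improving"}

def capability_level(ratings, threshold="largely achieved"):
    """Highest capability level whose attributes, and those of all lower
    levels, are rated at or above the threshold; levels cannot be skipped."""
    floor = RATING_SCALE.index(threshold)
    achieved = 0
    for level in sorted(LEVEL_ATTRIBUTES):
        attributes = LEVEL_ATTRIBUTES[level]
        if all(RATING_SCALE.index(ratings.get(a, "not achieved")) >= floor
               for a in attributes):
            achieved = level
        else:
            break  # a shortfall at this level blocks all higher levels
    return achieved, LEVEL_NAMES[achieved]

# A process area with strong Level 1 and 2 attributes but a weak Level 3
# attribute rates at Level 2 ("planned and tracked").
print(capability_level({"process performance": "fully achieved",
                        "performance management": "largely achieved",
                        "work product management": "fully achieved",
                        "process resource": "partially achieved"}))
```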
  • the present invention provides a basis for organizations to gauge performance, and assists in planning and tracking improvements to the operations environment.
  • the present invention further affords a basis for defining an objective improvement strategy in line with an organization's needs, priorities, and resource availability.
  • the present invention also provides a method for determining the overall operational maturity of an organization based on the capability levels of its processes.
  • the present invention can thus be used by organizations in a variety of contexts.
  • An organization can use the present invention to assess and improve its processes.
  • An organization can further use the present invention to assess the capability of suppliers in meeting their commitments, and hence better manage the risk associated with outsourcing and sub-contract management.
  • the present invention may be used to focus on an entire IT organization, on a single functional area such as service management, or on a single process area such as a service desk.
  • Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention.
  • Figure 2 is a flowchart illustrating generally the steps associated with the present invention.
  • Figure 3 is an illustration showing the relationships of the process category, process area, and base practices of the operations environment dimension in accordance with one embodiment of the present invention.
  • Figure 4 is an illustration showing a measure of each process area to the capability levels according to one embodiment of the present invention.
  • Figure 5 is an illustration showing various determinants of operational maturity in accordance with one embodiment of the present invention.
  • Figure 6 is an illustration showing an overview of the operational maturity model.
  • Figure 7 is an illustration showing a relationship of capability levels, process attributes, and generic practices in accordance with one embodiment of the present invention.
  • Figure 8 is an illustration showing a capability rating of various attributes in accordance with one embodiment of the present invention.
  • Figure 9 is an illustration showing a mapping of attribute ratings to the process capability levels determination in accordance with one embodiment of the present invention.
  • Figure 10 is an illustration showing assessment roles and responsibilities in accordance with one embodiment of the present invention.
  • Figure 11 is an illustration showing the process area rating in accordance with one embodiment of the present invention.
  • the present invention comprises a collection of best practices, both from a technical and management perspective.
  • the collection of best practices is a set of processes that are fundamental to a good operations environment.
  • the present invention provides a definition of an "ideal" operations environment, and also acts as a road map towards achieving the "ideal" state.
  • Figure 1 is a schematic diagram of one possible hardware implementation by which the present invention may be carried out. As shown, the present invention may be practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.
  • A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with one embodiment, having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112.
  • the workstation shown in Figure 1 includes a Random Access Memory (RAM) 114, a Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, a communication adapter 134 for connecting the workstation to a communication network 135 (e.g., a data processing network), and a display adapter 136 for connecting the bus 112 to a display device 138.
  • the workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system.
  • a preferred embodiment of the present invention is written using the Java, C, and C++ languages and utilizes object-oriented programming (OOP) methodology.
  • Object oriented programming has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP.
  • OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program.
  • An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task.
  • OOP therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
  • OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture.
  • a component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point.
  • An object is a single instance of a class of objects (which is often just called a class).
  • a class of objects can be viewed as a blueprint, from which many objects can be formed.
  • OOP allows the programmer to create an object that is a part of another object.
  • the object representing a piston engine is said to have a composition-relationship with the object representing a piston.
  • a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
  • OOP also allows creation of an object that "depends from” another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition.
  • a ceramic piston engine does not make up a piston engine; rather, it is merely one kind of piston engine that has one more limitation than the piston engine: its piston is made of ceramic.
  • the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it.
  • the object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.
  • because the object or class representing the ceramic piston engine inherits all of the aspects of the object representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class.
  • the ceramic piston engine object overrides these thermal characteristics with ceramic-specific ones, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons.
  • Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.).
  • a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
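  • by way of illustration only (the class names and thermal values below are invented, not from the patent), the piston-engine relationships translate directly into code: CeramicPistonEngine inherits from PistonEngine, overrides only the piston's thermal characteristics, and both objects answer the same method name, demonstrating the polymorphism just described.

```python
# Illustrative sketch: composition, inheritance, and polymorphism using the
# piston-engine example from the text.

class Piston:
    thermal_limit_c = 350            # standard metal piston characteristics

class CeramicPiston(Piston):
    thermal_limit_c = 900            # ceramic-specific thermal characteristics

class PistonEngine:
    piston_type = Piston             # composition: an engine has pistons

    def __init__(self, cylinders=4):
        self.pistons = [self.piston_type() for _ in range(cylinders)]

    def thermal_limit(self):
        return self.pistons[0].thermal_limit_c

class CeramicPistonEngine(PistonEngine):
    # Derived object: inherits all aspects of PistonEngine and adds one
    # further limitation -- its pistons are made of ceramic.
    piston_type = CeramicPiston

# Polymorphism: the same function name hides different implementations.
for engine in (PistonEngine(), CeramicPistonEngine()):
    print(type(engine).__name__, engine.thermal_limit())
```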
  • With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
  • Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
  • Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
  • An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
  • An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.
  • OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
  • OOP enables software developers to build objects out of other, previously built objects.
  • C++ is an OOP language that offers fast, machine-executable code.
  • C++ is suitable for both commercial-application and systems-programming projects.
  • C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
  • Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
  • Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real-world objects and the relationships among them.
  • Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again.
  • a relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers. Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others.
  • event loop programs require programmers to write a lot of code that should not need to be written separately for every application.
  • the concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
  • Application frameworks reduce the total amount of code that a programmer has to write from scratch.
  • Since the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit.
  • the framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
  • a programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
  • a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
  • Behavior versus protocol. Class libraries are essentially collections of behaviors that one can call when one wants those individual behaviors in a program.
  • a framework provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
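  • a minimal sketch of this inversion of control, with hypothetical names not taken from the patent, is shown below: the framework owns the event loop and the flow of control, supplies default behavior, and calls application code only when it needs it, while the application inherits the defaults and overrides only what differs.

```python
# Hypothetical sketch of a framework's inversion of control: the framework
# runs the event loop and calls back into application code when needed.

class Framework:
    def run(self, events):
        for event in events:       # the framework owns the flow of control
            self.handle(event)     # application code is called back here

    def handle(self, event):
        print("default handling of", event)    # framework default behavior

class MyApplication(Framework):
    def handle(self, event):
        if event == "open-window":
            print("application-specific window setup")  # overridden behavior
        else:
            super().handle(event)  # inherit the framework's default behavior

MyApplication().run(["open-window", "paste", "close-window"])
```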
  • a preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connolly, "RFC 1866: Hypertext Markup Language - 2.0"; and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol - HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996).
  • HTML is a simple data format used to create hypertext documents that are portable from one platform to another.
  • HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.
  • HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing Text and Office Systems; Standard Generalized Markup Language (SGML).
  • HTML has been the dominant technology used in development of Web-based solutions.
  • HTML has proven to be inadequate in the following areas: poor performance and restricted user interface (UI) capabilities.
  • Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved.
  • Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance.
  • Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.
  • Sun's Java language has emerged as an industry-recognized language for "programming the Internet.”
  • Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language.
  • Java supports programming for the Internet in the form of platform-independent Java applets.”
  • Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API), allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser.
  • Microsoft's ActiveX Technologies give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers.
  • ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content.
  • the tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies.
  • the group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages.
  • ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta.”
  • ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications.
  • One embodiment of the present invention includes three different, but complementary dimensions that together provide a framework which can be used in assessing and rating the IT operations of an organization.
  • the following three dimensions constitute the framework of the present invention: 1) Operations Environment Dimension, 2) Capability Dimension, and 3) Maturity Dimension.
  • the first dimension describes and organizes the standard operational activities that any IT organization should perform.
  • the second dimension provides a context for evaluating the performance quality of these operational activities. This dimension specifies the qualitative characteristics of an operations environment and orders these characteristics on a scale denoting rising capability.
  • the final dimension uses this capability scale and outlines a method for deriving a capability rating for specific IT process groups and the entire organization.
  • the Operations Environment and Capability dimensions provide the foundation for determining the quality or capability level of the organization's IT operations.
  • the Operations Environment dimension can be viewed as a descriptive mapping of a model operations environment.
  • the Capability dimension can be construed as a qualitative mapping of a model operations environment.
  • the Maturity dimension builds on the foundation set by these two dimensions to provide a method for rating the maturity level of the entire IT organization.
  • Figure 2 is a flow chart illustrating the various steps associated with the different dimensions of the present invention. As shown, a plurality of process areas of an operations organization are first defined in terms of either a goal or a purpose in operation 200. The process areas are then grouped into categories, as indicated in operation 202. It should be noted that the categories are grouped in terms of process areas having common characteristics.
  • process capabilities are received for the process areas of the operations organization.
  • Such data may be generated via a maturity questionnaire which includes a set of questions about the operations environment that sample the base practices in each process area of the present invention.
  • the questionnaire may be used to obtain information on the capability of the IT organization, or a specific IT area or project.
  • category capabilities are calculated for the categories of the process areas in operation 206.
  • a maturity of the operations organization is subsequently determined based on the category capabilities of the categories in operation 208.
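  • purely as an illustration (the function and data names below are hypothetical), the Figure 2 flow can be sketched in code: process areas are defined and grouped into categories (operations 200 and 202), questionnaire-derived capability levels are received (operation 204), category capabilities are calculated (operation 206), and the organizational maturity is determined (operation 208). The lowest-rating roll-up used here follows the staging rule described later in this document.

```python
# Illustrative sketch of the Figure 2 flow; names are assumptions.

# Operations 200/202: process areas defined and grouped into categories.
PROCESS_AREAS = {
    "service level management": "Service Management",
    "service desk": "Service Management",
    "performance management": "Systems Management",
}

def assess(capabilities):
    """capabilities: process area -> capability level (1-5), e.g. as
    received from maturity questionnaires (operation 204)."""
    categories = {}
    for area, category in PROCESS_AREAS.items():
        categories.setdefault(category, []).append(area)

    # Operation 206: a category's capability is bounded by its weakest area.
    category_capability = {category: min(capabilities[a] for a in areas)
                           for category, areas in categories.items()}

    # Operation 208: organizational maturity from the category capabilities.
    maturity = min(category_capability.values())
    return category_capability, maturity

print(assess({"service level management": 3,
              "service desk": 2,
              "performance management": 3}))
# -> ({'Service Management': 2, 'Systems Management': 3}, 2)
```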
  • the user-specified or measured parameters, i.e., the capability of each of the process areas, may be inputted by any input device, such as the keyboard 124, the mouse 126, the microphone 132, a touch screen (not shown), or anything else such as an input port that is capable of relaying such information.
  • the definitions, grouping, calculations and determinations may be carried out manually or via the CPU 110, which in turn may be governed by a computer program stored on a computer readable medium, i.e., the RAM 114, ROM 116, the disk storage units 120, and/or anything else capable of storing the computer program.
  • dedicated hardware such as an application specific integrated circuit (ASIC) may be employed to accomplish the same.
  • any one or more of the definitions, grouping and determinations may be carried out manually or in combination with the computer.
  • the outputting of the determination of the maturity of the operations organization may be effected by way of the display 138, the speaker 128, a printer (not shown) or any other output mechanism capable of delivering the output to the user. It should be understood that the foregoing components need not be resident on a single computer, but also may be a component of either a networked client and/or a server.
  • the Operations Environment Dimension is characterized by a set of process areas that are fundamental to the effective technical execution of an operations environment. More particularly, each process is characterized by its goals and purpose, which are the essential measurable objectives of a process. Each process area has a measurable purpose statement, which describes what has to be achieved in order to attain the defined purpose of the process area.
  • goals refer to a summary of the base practices of a process area that can be used to determine whether an organization or project has effectively implemented the process area.
  • the goals signify the scope, boundaries, and intent of each process area.
  • the process goals and purpose may be achieved in an IT organization through various lower-level activities, such as tasks and practices that are carried out to produce work products. These performed tasks, activities and practices, and the characteristics of the work products produced, are the indicators that demonstrate whether the specific process goals or purpose is being achieved.
  • work product describes evidence of base practice implementation. For example, a completed change control request, a resolved trouble ticket, and/or a service level agreement (SLA) report.
  • the operations environment is partitioned into three levels: Process Categories, Process Areas, and Base Practices, which reflect processes within any IT organization.
  • Figure 3 depicts and summarizes the relationship of the Process Categories 300, Process Areas 302, and Base Practices 304.
  • a Process Category has a defined purpose and measurable goals, and consists of a logically related set of Process Areas that collectively address the purpose and goals in the same general area of activity.
  • the purpose of Process Categories is to organize Process Areas according to common IT functional characteristics. There are four process categories defined in the present invention: Service Management, Systems Management, Managing Change, and IT Operations Planning.
  • Process Areas are the second level in the operations hierarchy. The elements of this level are a collection of Base Practices that are performed to achieve the defined purpose of the Process Area.
  • Process Areas refer to a collection of Base Practices that are performed sequentially, concurrently and/or iteratively to achieve the defined purpose of the process area.
  • the purpose describes the unique functional objectives of the process area when instantiated in a particular environment. Satisfying the purpose statement of a process area represents the first step in building process area capability.
  • Process Areas for the Service Management Category include service level management, operations level management, service desk, user administration, and service pricing.
  • the purpose of service level management may be to document the information technology services to be delivered to users. Note that this purpose states a unique functional objective (to document the services to be delivered), and provides a context (service level).
  • Base Practices are the lowest level in the operations hierarchy. Base Practices are essential activities that an IT organization performs to achieve the purpose of a Process Area. A base practice is what an IT organization does.
  • Base Practices of service level management may be to assess business strategy, audit current service levels, determine service requirements and IT's ability to deliver services, prepare a draft SLA, identify the charge-back structure, and agree to SLAs with customers.
  • the Process Areas are expressed in terms of their goals, whereas Base Practices are tasks that need to be carried out to achieve those goals.
  • Base Practices may have work products associated with them.
  • a work product is evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or a SLA report.
  • a service desk example of a process area and associated base practices is as follows:
  • Example The Service Desk will receive requests of all types, including requests for new users, moves, and updates to software or hardware. All requests are logged and tracked in the same manner as incidents and problems.
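  • the three-level hierarchy lends itself to a simple data model. The sketch below is illustrative only (the dataclass names are assumptions); it encodes the service desk example as a Process Area with Base Practices inside the Service Management Process Category.

```python
# Illustrative data model of the operations hierarchy: Process Categories
# contain Process Areas, which consist of Base Practices.
from dataclasses import dataclass, field

@dataclass
class ProcessArea:
    name: str
    purpose: str
    base_practices: list = field(default_factory=list)

@dataclass
class ProcessCategory:
    name: str
    process_areas: list = field(default_factory=list)

service_desk = ProcessArea(
    name="service desk",
    purpose=("receive requests of all types, including requests for new "
             "users, moves, and updates to software or hardware"),
    base_practices=[
        "log all requests",
        "track requests in the same manner as incidents and problems",
    ],
)

service_management = ProcessCategory(
    name="Service Management",
    process_areas=[service_desk],  # plus service level management, etc.
)

for area in service_management.process_areas:
    print(area.name, "->", len(area.base_practices), "base practices")
```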
  • Capability Dimension refers to formalizing the process performance into a quantifiable range of expected results based on the process capability level that can be achieved by following the process.
  • Process capability dimension characterizes the level of capability of each process area within an organization. In other words, the process capability dimension describes how well the processes in the process dimension are performed.
  • the Capability Dimension measures how well an IT organization performs its operational processes. In determining capabilities, the Base Practices are viewed as a guide to what should be done. The related Generic Practices deal with the effectiveness in which the Base Practices are carried out. Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. The present invention has five levels of Process Capability that can be applied to any Process Area. The Capability Dimension provides a means to formalize and quantify the process performance. The Capability Dimension describes how well the processes are performed as contrasted with Base Practices that describe what an IT organization does.
  • Capability Dimension consists of three components: Capability Levels, Process Attributes, and Generic Practices. These are described below.
  • Capability Levels indicate increasing levels of process maturity and are comprised of one or more generic practices that work together to provide a major enhancement in the capability to perform the process.
  • the Capability Level is the highest level of the Capability dimension.
  • the Capability Level of a process determines its performance and effectiveness.
  • Each Capability Level has certain Process Attributes associated with it.
  • a Process Attribute is comprised of a set of Generic Practices that provide criteria for improving performance.
  • a particular Capability Level is achieved when all the Process Attributes associated with it and with preceding levels are present. Therefore, once the Capability Level is determined, those Process Attributes - and associated Generic Practices - that are required to enhance capability can be identified. In other words, Capability Levels offer a staged guideline for improving the capability to perform the defined processes.
  • Capability Levels provide two benefits: they acknowledge dependencies and relationships among the Base Practices of a Process Area, and they help an IT organization identify which improvements should be performed first, based on a plausible sequence of process implementation.
  • Each level provides a major enhancement in capability over that provided by its predecessors in the fulfillment of the process purpose. For example, at Capability Level 1, Base Practices are performed. The performance is ad hoc, informal, and unpredictable. At Capability Level 2, the performance of Base Practices is planned and tracked rather than just performed, thereby offering a significant improvement over Level 1 practice.
  • Capability Levels are applied to each Process Area independently of other Process Areas. An assessment is performed to determine Process Capability for each Process Area, as illustrated in Figure 4.
  • an assessment refers to a diagnostic performed by a trained team to evaluate aspects of an organization's IT operations environment processes.
  • the trained team determines the state of the operational processes, identifies pressing operational process related issues, and obtains organizational support for a process improvement program.
  • Process Areas can, and may, exist at different levels of capability.
  • the ability to rate Process Areas independently enables an IT organization to focus on process improvement priorities driven from business goals and strategic directions. An example of this is illustrated in Figure 4.
  • process attributes refer to features of a process that can be evaluated on a scale of achievement (performed, partially performed, not performed, etc.) which provide a measure of the capability of the process.
  • measures of capability are based on a set of nine Process Attributes. Process Attributes are used to determine whether a process has reached a given capability. The nine Process Attributes are: process performance, performance management, work product management, process resource, process definition, process measurement, process control, continuous improvement, and process change.
  • the attributes are evaluated on a four-point scale of achievement. Achieving a given Capability Level depends on the rating assigned to one or more of these attributes.
  • Generic Practices refer to activities that contribute to the capability of managing and improving the effectiveness of the operations environment Process Areas.
  • a generic practice is applicable to any and all Process Areas.
  • It contributes to overall process management, measurement, and the institutionalization capability of the Process Areas.
  • Operational Maturity Dimension characterizes the maturity of an entire operations IT organization.
  • maturity refers to the degree of order (structure or systemization) and effectiveness of a process.
  • a process's degree of order determines its state of maturity. Less mature processes are less ordered and less effective; more mature processes are more ordered and more effective.
  • the Capability Dimension focuses on the determination of the capability of individual processes, within an operations organization, in achieving their stated goals and purpose.
  • the Operational Maturity Dimension determines the IT organizational maturity by focusing on a collection of processes at a certain level of capability in order to characterize the evolution of the operations organization.
  • Maturity, in the overall context of the present invention, is applied to an IT organization as a whole.
  • the Maturity Level is determined by the Capability Level of the four Process Categories: Service Management, Systems Management, Managing Change, and IT Operations Planning.
  • Operational maturity is defined by a staged model, wherein an operational maturity level 500 cannot be reached until all Process Categories driving it have themselves reached a certain maturity level. Similarly, a category Capability Level 502 cannot be reached until all Process Areas 302 contained in it have reached a certain Process Capability Level 504. This staging is illustrated in Figure 5.
  • Maturity Level refers to a sequence of key intermediate states leading to the goal state. Each state builds incrementally on the preceding state.
  • the assessment tool of the present invention is flexible to accommodate an assessment of a Process Category or just a Process Area. As shown in Figure 5, an assessment could end at the Process Area Level with the Process Capability Level or Process Area Maturity determined. An assessment could also be performed to assess all the Process Areas within a Process Category to determine the Process Category Maturity Level.
  • the framework of the present invention, which consists of the three dimensions described previously, is illustrated in Figure 6.
  • the Operations Environment Dimension 600, the box in the center of Figure 6, divides all IT processes into Process Categories 300.
  • Process Categories 300 divide into a finite number of Process Areas 302.
  • Process Areas 302 consist of a finite number of Base Practices 304.
  • Each Process Area within a category is assigned a Capability Level 504 based on the performance of Process Attributes 601 comprised of a finite number of Generic Practices 602 applicable to that process (shown in the box on the right).
  • the IT organization's operational maturity 603 is based on a clustering of process capabilities, as illustrated in the third box to the left.
  • the framework of the present invention is designed to support an IT organization's need to assess and improve its operational capability.
  • the structure of the model enables a consistent appraisal methodology to be used across diverse Process Areas.
  • the distinction between essential operations and process management-focused elements therefore allows a systematic approach to process improvement.
  • the Capability Dimension of the present invention measures how capable an IT organization is in achieving the purpose of its various Process Areas.
  • Capability Levels, Process Attributes, and Generic Practices describe the Process Capability.
  • the Capability Levels, their characteristics, the Process Attributes, and the Generic Practices that comprise them are discussed in more detail below.
  • the present invention has five levels of Process Capability that can be applied to any Process Area. As mentioned before, Generic Practices are grouped by Process Attributes, and Process Attributes determine the Capability Level. Capability Levels build upon one another; levels cannot, therefore, be skipped.
  • Figure 7 tabulates the relationship of Generic Practices and Process Attributes to Capability Levels.
  • Level 1: Performed Informally. At this Level, all Base Practices are generally performed, but operations may be ad hoc and occasionally chaotic. Consistent planning and tracking of performance is not performed. Good performance depends on individual knowledge and effort. Operational support and services are generally adequate, but quality and efficiency depend on how well individuals within the IT organization perceive that tasks should be performed. The capability to perform an activity is not generally repeatable or transferable.
  • ATT 1A Process Performance - the extent to which the execution of the process employs a set of practices which uses identifiable input work products to produce identifiable output work products that are adequate to satisfy the purpose of the process.
  • a process may exist but it may be informal and undocumented.
  • Process Area performance is dependent on how efficiently the Base Practices are implemented.
  • Work products such as completed change control requests, resolved trouble tickets, etc., which are related to base practice implementation are periodically reviewed and placed under version control. Corrective action is taken when variances in services and work products occur.
  • ATT 2A Performance Management - the extent to which the execution of the process is managed in order to produce work products within a stated time and resource requirement.
  • the related Generic Practices are: GP2.1 Establish and maintain a policy for performing operational tasks.
  • Policy is a visible way for the operations environment personnel and the management team to set expectations.
  • the form of policies varies widely depending on the local culture. Policy typically specifies that plans are documented, managed and controlled, and that reviews are conducted. Policy provides guidance for performing the operational tasks and processes.
  • Resources include adequate funding, appropriate physical facilities, skilled people, and appropriate tools. This practice ensures that the level of effort, appropriate skills mix, tools, workspace, and other direct resources are available to perform the operational task and processes.
  • Training provides a common basis for repeatable performance. Even if the operations personnel or management have satisfactory technical skills and knowledge, there is almost always a need to establish a common understanding of the operational process activities and how skills are applied in them. Training, and how it is delivered, may change with process capability due to changes in how the process is performed and managed.
  • measurement implies that the metrics have been defined and selected, and data has been collected. Building a history of measures, such as cost and schedule variances, is a foundation for managing by data. Quality measures may be collected and used, but result in maximum impact at Level 4 when they are subjected to quantitative process control.
  • Open communication ensures that there is common understanding, that decisions are consensual, and that team members are kept aware of decisions made. Communication is needed when changes are made to plans, products, processes, activities, requirements, and responsibilities.
  • the commitments, expectations, and responsibilities are documented and agreed upon within the project group. Commitment may be obtained by negotiation, by using input and feedback, or through joint development of solutions to issues. Issues are tracked and resolved within the group. Communication occurs periodically and whenever the status changes. The participants have access to data, status information, and recommended actions.
  • Process Attribute ATT 2B Work Product Management - the extent to which the process is managed to produce work products that are documented and controlled, and that meet their functional and nonfunctional requirements, in line with the work product quality goals of the process.
  • Requirements may come from the business customer, policies, standards, laws, regulations, etc. The applicable requirements are documented and available for verification activities.
  • Base Practices are performed with the assistance of an available, well-defined, and operations-wide process infrastructure. The processes are tailored to meet the specific needs of a certain practice.
  • Data from using the process are gathered to determine if modifications or improvements should be made. This information is used in planning and managing the day-to-day execution of multiple projects within the IT organization, and for short and long-term process improvement.
  • ATT 3A Process Resource - the extent to which the execution of the process uses suitably skilled human resources and process infrastructure effectively to contribute to the defined business goals of the operations environment.
  • GP3.1 Define policies and procedures at an IT level. Policies, standards, and procedures are established at an IT level for common use throughout the operations environment.
  • GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly.
  • ATT 3B Process Definition - the extent to which the execution of the process uses a definition, based upon a standard process, that enables it to contribute to the defined business goals of the IT organization.
  • this practice embodies the pro-active planning of personnel. This includes the selection of proper work forces, training, and dissemination.
  • GP3.4 Provide feedback in order to maintain knowledge and experience.
  • the standard process repository is to be kept up-to-date, through a continuous feedback system based on experiences gained from using the defined process.
  • ATT 4A Process Measurement - the extent to which measures are used to ensure that the implementation of the process supports its execution, and contributes to the achievement of IT organizational goals.
  • GP4.1 Establish measurable quality objectives for the operations environment. These quality objectives can be tied to the strategic quality goals of the IT organization, the particular needs and priorities of the customer, or the tactical needs of a specific group or project.
  • Process definitions are modified to reflect the quantitative nature of process performance.
  • ATT 4B Process Control - the extent to which the execution of the process is controlled through the collection and analysis of measures that correct the performance of the process in order to reliably achieve the defined process goals.
  • the related Generic Practices are:
  • GP4.3 Provide adequate resources and infrastructure for data collection.
  • GP4.4 Use data analysis methods and tools to manage and improve the process. This includes the identification of analysis and control techniques appropriate to the process; the provision of adequate resources and infrastructure for analysis and process control; analysis of available measures to identify process control parameters; and, identification of deviations and employment of corrective actions.
  • Level 5 is the highest achievement level from the viewpoint of Process Capability. Continuous process improvement is enabled by quantitative feedback from the process and from pilot studies of innovative ideas and new technology. A focus on widespread, continuous improvement should permeate the IT organization. The IT organization should establish quantitative performance goals for process effectiveness and efficiency, based on its business goals and strategic objectives.
  • Process Attribute ATT 5A Continuous Improvement - the extent to which changes to the process are identified and implemented to ensure continuous improvement in the fulfillment of the defined business goals of the IT organization.
  • Improvements may be based on incremental operational refinements or on innovations, such as new technologies. Improvements may typically be driven by the following activities:
  • ATT 5B Process Change - the extent to which changes to the definition, management, and performance of the process are controlled to better achieve the business goals of the IT organization.
  • the deployment activities include:
  • Identifying improvement opportunities in a systematic and proactive manner to continuously improve the process; establishing an implementation strategy, based on the identified opportunities, to improve process performance according to business goals; and implementing changes to selected areas of the tailored process according to the implementation strategy.
  • Rating Framework. The rating framework requires identification of objective attributes or characteristics of a practice or work product of an implemented process to validate that Base Practices are performed and Generic Practices are followed. Assessment Indicators determine Process Attribute ratings, which are then used to determine the Capability Level.
  • Assessment Indicators refer to objective attributes or characteristics of a practice or work product that supports an assessor's judgment of performance of an implemented process.
  • the present invention provides Assessment Indicators to help rate the Process Attributes.
  • Assessment Indicators are evidence that Base Practices are performed, and Generic Practices are followed.
  • the indicators are not intended to be regarded as a mandatory checklist to be followed, but rather are a guide to enhance an assessment team's objectivity in making their judgments of a process's performance and capability.
  • the rating framework adds definition and reliability to the present invention, and thereby improves repeatability.
  • Assessment Indicators are determinants of Process Attribute ratings for each Process Capability attribute.
  • Each assessed process profile consists of a set of Process Attribute ratings.
  • Each attribute rating represents a judgment by the assessment team of the extent to which the attribute is achieved.
  • Figure 8 illustrates the Process Attribute rating represented on a four-point scale of achievement.
  • the indicators determine attribute ratings, which are then used to determine the Capability Level.
  • the rating scale defined below is used to describe the degree of achievement of the defined capability characterized by the Process Attributes. Once the appropriate rating for each Process Attribute is determined, ratings can be combined to assign the Capability Level achieved by the assessed process.
  • Figure 9 represents the mapping of attribute ratings to the process Capability Levels determination.
  • the first step is to identify if the appropriate Base Practices are performed at all.
  • the necessary foundation for improving the capability of any process is to at least demonstrate that the Base Practices are being performed.
  • the assessment team may then formulate an objective judgment of the process performance attribute through different means, such as analysis of the work products (e.g., reviewing completed trouble tickets), demonstration of evidence of process implementations (e.g., are escalation procedures documented and understood?), interviews with process performers (e.g., discussing daily activities with Service Desk personnel), and other means as appropriate (e.g., does the Service Desk have a dedicated phone number that users should call to report incidents/problems/requests, or a dedicated email address, etc.).
  • Achievement of Base Practices is an indication that Process Area goals are being met.
  • the increasing capability of a process to effectively achieve its goals and objectives is based upon attribute rating.
  • the attribute rating is determined by the performance of the associated Generic Practices.
  • Evidence of effective performance of the Generic Practices associated with a Process Attribute supports the assessment team's judgment of the degree of achievement of the attributes.
  • Process Category capabilities are determined from the capability ratings of the category's Process Areas. Once all Process Areas of a category are rated, the lowest rating assigned to a Process Area becomes the category rating as well. Similarly, the operational maturity rating is determined from the Process Category ratings within the IT organization. Once all Process Categories are rated, the lowest rating assigned to a Process Category becomes the IT organizational maturity.
  • an assessment team collects the evidence on the implementation of the processes being assessed and determines their compatibility as defined in the framework of the present invention.
  • the objective of the assessment is to identify the differences and the gaps between the actual implementations of the processes in the assessed operational IT organization with respect to the present invention.
  • Using the framework of the present invention ensures that results of assessments can be reported in a common context and provides the basis on which comparisons can be based.
  • the assessment process is used to appraise an organization's IT operations environment process capability.
  • An IT organization can perform an assessment for a variety of reasons. An assessment can be performed in order to assess the processes in the IT operations environment with the purpose of improving its own work and service processes. An IT organization can also perform an assessment to determine and better manage the risks associated with outsourcing. In addition, an assessment can be performed to better understand a single functional area such as systems management, a single process area such as performance management, or the entire IT operations environment.
  • Team members include the client sponsor, the assessment team lead, assessment team members, and client participants.
  • assessment scope refers to organizational entities and components selected for inspection.
  • a clear understanding of the purpose of the framework, constraints, roles, responsibilities, and outputs is needed prior to the start of the assessment. Therefore, in preparation for the assessment, the assessment team lead and the client sponsor work together to reach agreement on the scope and goals of the assessment. Once agreement is reached, the assessment team lead ensures that the IT operational processes selected for the assessment are sufficient to meet the assessment purpose and may provide output that is representative of the assessment scope.
  • An assessment plan is developed based on the goals identified by the client sponsor.
  • the plan consists of detailed schedules for the assessment and potential risks identified with performing the assessment.
  • Assessment team members, assessment participants, and areas to be assessed are selected.
  • Work products are identified for initial review, and the logistics for the on-site visit are identified and planned.
  • the assessment team members must receive adequate training on the framework of the present invention and the assessment process. It is essential that the assessment team be well-trained on the present invention to ensure that they have the ability to interpret the data obtained during the assessment.
  • the team must have comprehensive understanding of the assessment process, its underlying principles, the tasks necessary to execute it, and their role in performing the tasks.
  • Gather Assessment Input. Maturity questionnaires are distributed to participants prior to the client site visit. Maturity questionnaires exist for each process area of the present invention, and tie back to base practices, process attributes and generic practices. Completed questionnaires provide the assessment team with an overview of the IT operational process capability of the IT organization. The responses assist the team in focusing their investigations, and provide direction for later activities such as interviews and document reviews. Assessment team members prepare exploratory questions based on Interview Aids and responses to the maturity questionnaires.
  • Interview Aids refer to a set of exploratory questions about the operations environment which are used during the interview process to obtain more detailed information on the capability of the IT organization.
  • the interview aids are used by the assessment team to guide them through interview sessions with assessment participants.
  • a kick-off meeting is scheduled at the start of the on-site activities.
  • the purpose of the meeting is to provide the participants with an overview of the present invention and the assessment process, to set expectations, and to answer any questions about the process.
  • a client sponsor of the assessment may participate in the presentation to show visible support and stress the importance of the assessment process to everyone involved.
  • Data for the assessment are obtained from several sources: responses to the maturity questionnaires, interview sessions, work products, and document reviews. Documents are reviewed in order to verify compliance. Interviewing provides an opportunity to gain a deeper understanding of the activities performed, how the work is performed, and processes currently in use. Interviewing provides the assessment team members with identifiable assessment indicators for each Process Area appraised. Interviewing also provides the opportunity to address all areas of the present invention within the scope of the assessment. Interviews are scheduled with IT operations managers, supervisors, and operations personnel. IT operations managers and supervisors are interviewed as a group in order to understand their view of how the work is performed in the IT organization, any problem areas of which they are aware, and improvements that they feel need to be made. IT operations personnel are interviewed to collect data within the scope of the assessment and to identify areas that they can and should improve in the IT organization.
• The purpose of solidifying this information is to summarize and consolidate information into a manageable set of findings.
• The data is then categorized into Process Areas of the present invention.
• The assessment team must reach consensus on the validity of the data and on whether sufficient information has been collected in the areas evaluated. It is the team's responsibility to obtain sufficient information on the components of the present invention within the scope of the assessment for the required areas of the IT organization before any rating can be done.
• Follow-up interviews may occur for clarification.
  • Initial findings are generated from the information collected thus far, and presented to the assessment participants.
• The purpose of presenting initial findings is to obtain feedback from the individuals who provided information during the various interviews. Ratings are not considered until after the initial findings presentations, as the assessment team is still collecting data.
• Initial findings are presented in multiple sessions in order to protect the confidentiality of the assessment participants. Feedback is recorded for the team to consider at the conclusion of all of the initial findings presentations.
• Assessments associated with the foregoing service desk example are as follows:
• Process Measurement, GP4.1 Establish measurable quality objectives for the services of the operations environment's standard and defined processes. Indicator: Service levels are based on strategic business needs vs. industry standards.
• GP4.2 Determine the quantitative process capability of the defined process. Indicator: Metrics are automatically collected from the problem management tool (vs. collected manually).
• GP4.3 Provide adequate resources and infrastructure for data collection. Indicator: Ties to systems management are in place; tickets are automatically created when systems management tools detect faults. Adequate resources are in place to analyze and report on Service Desk data.
• The rating process may then begin.
• The first step in the rating process is to determine if Process Area goals are being met. Process Area goals are considered met when all base practices are performed. Each process attribute for each Process Area within the assessment scope is then rated. Process attributes are rated based on the existence of, and compliance with, generic practices.
• Using the Assessment Indicator Rating template, the assessment team identifies assessment indicators for each process area to determine whether or not process attributes are achieved. Ratings are always established based on consensus of the entire assessment team. Questionnaire responses, interview notes, and documentation are used to support ratings; confirmation from two sources in different contexts (e.g., two people in different meetings) ensures compliance of an activity.
• The team reviews all weaknesses that relate to the associated generic practices. If the team determines that a weakness is significant enough to impact the process attribute, the process attribute is rated "not achieved." If it is decided that there are no significant weaknesses that impact a process attribute, it is rated "fully achieved." For a Process Area to be rated "fully achieved," all process attributes for the Process Area must be rated "fully achieved." A Process Area may be rated fully achieved, largely achieved, partially achieved, or not achieved.
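By way of illustration only, the rating rules above may be sketched in Java, one of the languages identified for the preferred embodiment. The class, method, and enumeration names below are hypothetical and form no part of the present invention; the sketch merely captures the rule that a significant weakness forces a rating of "not achieved," and that a Process Area is "fully achieved" only when all of its process attributes are.

    import java.util.List;

    enum AttributeRating { FULLY_ACHIEVED, LARGELY_ACHIEVED, PARTIALLY_ACHIEVED, NOT_ACHIEVED }

    class ProcessAttributeRater {

        // A significant weakness in the associated generic practices forces a
        // rating of "not achieved"; otherwise the attribute is "fully achieved".
        static AttributeRating rateAttribute(boolean hasSignificantWeakness) {
            return hasSignificantWeakness
                    ? AttributeRating.NOT_ACHIEVED
                    : AttributeRating.FULLY_ACHIEVED;
        }

        // A Process Area is rated "fully achieved" only when every one of its
        // process attributes is rated "fully achieved".
        static boolean processAreaFullyAchieved(List<AttributeRating> ratings) {
            return ratings.stream().allMatch(r -> r == AttributeRating.FULLY_ACHIEVED);
        }

        public static void main(String[] args) {
            List<AttributeRating> ratings =
                    List.of(rateAttribute(false), rateAttribute(false));
            System.out.println(processAreaFullyAchieved(ratings)); // prints true
        }
    }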
• Assignment of a maturity level rating is optional at the discretion of the sponsor. For a particular maturity level rating to be achieved, all Process Areas within and below that maturity level must be satisfied. For example, for an IT organization to be rated at maturity level 4, all Process Areas at level 4, level 3, and level 2 must have been investigated during the assessment, and all of those Process Areas must have been rated achieved by the assessment team. The final findings presentation is developed by the team to present to the sponsor and the IT organization the strengths and weaknesses observed for each Process Area within the assessment scope, the ratings of each Process Area, and the maturity level rating if desired by the sponsor.
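The maturity level rule may be sketched in the same illustrative fashion. The assumption below that maturity levels run from 2 through 5 above a default level 1 is made only for the example; the names are hypothetical.

    import java.util.List;
    import java.util.Map;

    class MaturityRater {

        // processAreasByLevel maps each maturity level to the achievement
        // results of the Process Areas assessed at that level. A level is
        // achieved only if every Process Area at that level and at all lower
        // levels has been investigated and rated achieved.
        static int maturityLevel(Map<Integer, List<Boolean>> processAreasByLevel) {
            int achieved = 1; // assumed default when no higher level is satisfied
            for (int level = 2; level <= 5; level++) {
                List<Boolean> areas = processAreasByLevel.get(level);
                if (areas == null || areas.isEmpty() || areas.contains(false)) {
                    break; // a gap at any level caps the rating below it
                }
                achieved = level;
            }
            return achieved;
        }

        public static void main(String[] args) {
            Map<Integer, List<Boolean>> results = Map.of(
                    2, List.of(true, true),
                    3, List.of(true),
                    4, List.of(true, false)); // one level-4 Process Area not achieved
            System.out.println(maturityLevel(results)); // prints 3
        }
    }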
• The final assessment results are presented to the client sponsor. During the final presentation, the assessment team must ensure that the IT organization understands the issues that were discovered during the assessment and the key issues that it faces. Operational strengths are presented to validate what the IT organization is doing well. Strengths and weaknesses are presented for each process area within the assessment scope, as well as any issues that affect process and are unrelated to the present invention. A Process Area profile is presented showing the individual Process Area ratings in detail.
  • An executive overview session is held in order to allow the senior IT Operations manager to clarify any issues with the assessment team, to confirm his or her understanding of the operational process issues, and to gain full understanding of the recommendations report.
• The assessment team collects feedback from the assessment participants and the assessment team on the process, and packages information that needs to be saved for historical purposes.
  • Figure 10 describes the roles and responsibilities of those involved with the assessment process.
• Figure 11 represents the indicator types and their relationship to the determination of the Process Area rating.
• Assessment indicators consist of base practices and generic practices. At the next level, the base practices and generic practices are assessed by process implements, work products, practice performance, and resources and infrastructure.
• Base Practice 1.1.3 Determine Service Requirements: What is the process by which service requirements are defined? Who is involved in this process?
• KPIs: Key Performance Indicators
• SLA Management involves the creation, management, reporting, and discussion of Service Level Agreements (SLAs) with users and the providers within Information Technology (IT).
• SLAs: Service Level Agreements
• IT: Information Technology
• SLA Definition: The SLA document defines, in specific and quantifiable terms, the level of service that is to be delivered to users. In the enterprise environment, many design and configuration alternatives are available that affect a given system's response time, availability, development cost, and ongoing operational costs.
• An SLA clarifies the business objectives and constraints for an application system, and forms the basis for both application design and system configuration choices.
• SLA Control: It is important that the services described in SLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
• PA Purpose: OLA Management involves the creation, management, reporting, and discussion of Operations Level Agreements with providers within the organization, as well as external suppliers and vendors.
• An OLA is an agreement between the IT organization and those delivering the constituent services of the system. OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement, as supporting services are guaranteed in the OLA. OLA Management involves the following:
• OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define these agreements.
• OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
• OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
• OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs are carefully aligned with current business needs and, if necessary, updated to be in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
• PA's Base Practices: 1.2.1 Determine operational items; 1.2.2 Group related operational items.
• PA Goals: To define a quantifiable service level that represents a minimum level of service for each service delivered.
• IT will create reports based on data gathered internally when possible. These reports will be cross-referenced with those from the external vendor to ensure accuracy.
• OLAs contain, e.g., workloads, cost of service, targets, type of support, etc.
• OLAs outline each key business application, e.g., penalties, tools used to maintain the OLA.
• Process Area Description: OLA Management involves the creation, management, reporting, and discussion of Operations Level Agreements with suppliers and vendors.
• OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement, as supporting services are guaranteed in the OLA.
• An OLA is an agreement between the IT organization and those delivering the constituent services of the system. Operational Level Management involves the following:
• OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define these agreements.
• OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization. OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
• OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
• OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs are carefully aligned with current business needs and, if necessary, updated to be in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
• The Service Desk consists of the following functions:
• Incident Management: An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible before passing them on to other areas.
• Problem Management: A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
• Request Management: Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
• Process Area Description: The Service Desk provides a single point of contact for users with problems or specific service requests.
• The Service Desk forms part of an organization's strategy to enable users and business communities to achieve business objectives through the use of technology.
• The Service Desk's main objectives are:
• Performance reports (resolution, response, trending, etc.)
• Base Practice 1.4.1 Determine projected service/equipment costs and depreciation schedule for the distributed technical environment.
• Base Practice 1.4.6 Define products/services in terms useful to customers.
• Base Practice 1.4.7 Determine service price costs and model/evaluate costs.
• Base Practice 1.4.8 Determine cost allocation plans for services and equipment.
• Service Pricing & Costing: Service Costing & Pricing projects and monitors costs for the management of operations, provision of service, equipment installation, etc. Based upon the projected cost and business needs, a service pricing strategy may be developed to re-allocate costs within the organization. If developed, the service pricing strategy will be documented, communicated to the users, and monitored and adjusted to ensure that it is comprehensive.
• Billing & Accounting: The purpose of Billing & Accounting is to gather information for calculating actual costs, determine chargeback costs, and bill users for services rendered.
• Base Practice 2.1.8 Link multi-step batch processes based on success/failure of previous jobs: 1. What procedures are in place for initiating, monitoring, or stopping jobs?
• Process Area Description: Production Scheduling determines the requirements for the execution of scheduled jobs across a distributed environment. A production schedule is then put in place to meet these requirements, taking into consideration other processes occurring throughout the distributed environment (e.g., software and data distribution, and remote backup/restoration of data).
• BP Description: Re-initializing printers can range from powering a printer on or off to starting/stopping a print queue in a distributed environment.
• Process Area Description: Output and Print Management monitors all of the printing done across a distributed environment and is responsible for managing the printers and the printing for both central and remote locations.
• Base Practice 2.3.1 Transfer files on a scheduled basis.
• Can file types, e.g., VSAM, PDS, etc.
• Process Area Description: File Transfer and Control initiates and monitors the files being transferred throughout the system as part of the business processing (e.g., nightly batch runs). File transfers can take place in a bi-directional fashion between hosts, servers, and workstations.
• Directories, e.g., authentication information, access control profiles, etc.
• Process Area Description: The Network Services Process Area is comprised of the following two areas:
• Directory Services is the function of publishing and maintaining organized inventories of information resources to make them available to networked customers.
• Directory Management can apply to internal directories as well as the publishing of directory information for global directory services.
• DNS ensures that IP services are provided to devices within an enterprise. Whether dealing with a new or existing capability, the communications address management function demands that high-level business requirements be taken into consideration.
• Example: Backups can be scheduled and performed during low network traffic times, e.g., beginning an incremental backup at 1:00 a.m. and confirming that the backup is completed before network traffic picks up in the morning, or doing complete backups on weekends when the network traffic is low.
• Base Practice 2.5.1 Test central/remote backup/restore/archival procedures periodically.

Abstract

A system, method, and article of manufacture are provided for determining capability levels of a process area in an operational maturity investigation. A plurality of process attributes are first defined along with a plurality of generic practices for each of the process attributes. Also defined are a plurality of capability levels in terms of groups of the process attributes. Each of the process attributes are then rated based on achievement of the corresponding generic practices. It is then determined which of the capability levels is achieved by a process area. Such determination is based on the rating of the process attributes of the capability levels. Thereafter, the capability level is outputted for gauging a maturity of an operations organization.

Description

A SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR DETERMINING
CAPABILITY LEVELS OF PROCESSES FOR PROCESS ASSESSMENT PURPOSES
IN AN OPERATIONAL MATURITY INVESTIGATION
FIELD OF INVENTION
The present invention relates to IT operations organizations and more particularly to evaluating a maturity of an operations organization by determining capability levels of process areas.
BACKGROUND OF INVENTION
Triggered by a recent technology avalanche and a highly competitive global market, the management of information systems is undergoing a revolutionary change. Both information technology and business directions are driving information systems management to a fundamentally new paradigm. While business bottom lines are more tightly coupled with information technology than ever before, studies indicate that many CEOs and CFOs feel that they are not getting their money's worth from their IT investments. The complexity of this environment demands that a company have a formal way of assessing its IT capabilities, as well as a specific and measurable path for improving them.
In initiatives to address these issues, various frameworks and gap analysis have been used to capture the best practices of IT management and to determine areas of improvement. While the frameworks and gap analysis are intended to capture weaknesses in processes that are observable, they do not provide data with sufficient objectivity and granularity upon which a comprehensive improvement plan can be built.
There is thus a need to add further objectivity and consistency to conventional framework and gap analysis.
SUMMARY OF INVENTION
A system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a process area as a part of an operational maturity investigation. A plurality of process attributes are first defined along with a plurality of generic practices for each of the process attributes. Also defined are a plurality of capability levels in terms of groups of the process attributes. Each of the process attributes are then rated based on achievement of the corresponding generic practices. It is then determined which of the capability levels is achieved by a process area. Such determination is based on the rating of the process attributes of the capability levels. Thereafter, the capability level is outputted for gauging a maturity of an operations organization.
In one aspect of the present invention, the capability levels may each be achieved upon the ratings of the process attributes of the capability level surpassing a predetermined amount.
Further, each capability level may be defined by the process attributes of a lower capability level and further defined by at least one more process attribute.
In yet another aspect of the present invention, the process attributes may include process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and/or process change. Further, the capability levels may include performed informally, planned and tracked, well defined, quantitatively controlled, and/or continuously improving.
The present invention provides a basis for organizations to gauge performance, and assists in planning and tracking improvements to the operations environment. The present invention further affords a basis for defining an objective improvement strategy in line with an organization's needs, priorities, and resource availability. The present invention also provides a method for determining the overall operational maturity of an organization based on the capability levels of its processes.
The present invention can thus be used by organizations in a variety of contexts. An organization can use the present invention to assess and improve its processes. An organization can further use the present invention to assess the capability of suppliers in meeting their commitments, and hence better manage the risk associated with outsourcing and sub-contract management. In addition, the present invention may be used to focus on an entire IT organization, on a single functional area such as service management, or on a single process area such as a service desk.
BRIEF DESCRIPTION OF DRAWINGS
The invention may be better understood when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention;
Figure 2 is a flowchart illustrating generally the steps associated with the present invention;
Figure 3 is an illustration showing the relationships of the process category, process area, and base practices of the operations environment dimension in accordance with one embodiment of the present invention;
Figure 4 is an illustration showing a measure of each process area to the capability levels according to one embodiment of the present invention;
Figure 5 is an illustration showing various determinants of operational maturity in accordance with one embodiment of the present invention;
Figure 6 is an illustration showing an overview of the operational maturity model;
Figure 7 is an illustration showing a relationship of capability levels, process attributes, and generic practices in accordance with one embodiment of the present invention;
Figure 8 is an illustration showing a capability rating of various attributes in accordance with one embodiment of the present invention;
Figure 9 is an illustration showing a mapping of attribute ratings to the process capability levels determination in accordance with one embodiment of the present invention;
Figure 10 is an illustration showing assessment roles and responsibilities in accordance with one embodiment of the present invention; and
Figure 11 is an illustration showing the process area rating in accordance with one embodiment of the present invention.
DISCLOSURE OF INVENTION
The present invention comprises a collection of best practices, both from a technical and management perspective. The collection of best practices is a set of processes that are fundamental to a good operations environment. In other words, the present invention provides a definition of an "ideal" operations environment, and also acts as a road map towards achieving the "ideal" state.
Figure 1 is a schematic diagram of one possible hardware implementation by which the present invention may be carried out. As shown, the present invention may be practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.
A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with one embodiment having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112. The workstation shown in Figure 1 includes a Random Access Memory (RAM) 114, Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, a communication adapter 134 for connecting the workstation to a communication network 135 (e.g., a data processing network) and a display adapter 136 for connecting the bus 112 to a display device 138.
The workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system. Those skilled in the art may appreciate that the present invention may also be implemented on other platforms and operating systems.
A preferred embodiment of the present invention is written using JAVA, C, and the C++ language and utilizes object oriented programming methodology. Object oriented programming (OOP) has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP.
OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program. An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
In general, OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture. A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point. An object is a single instance of the class of objects, which is often just called a class. A class of objects can be viewed as a blueprint, from which many objects can be formed.
OOP allows the programmer to create an object that is a part of another object. For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston. In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
OOP also allows creation of an object that "depends from" another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition. A ceramic piston engine does not make up a piston engine. Rather it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic. In this case, the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it. The object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.
When the object or class representing the ceramic piston engine inherits all of the aspects of the objects representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class. However, the ceramic piston engine object overrides these with ceramic-specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons. Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.). To access each of these functions in any piston engine object, a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
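For illustration, the piston engine example may be rendered in Java as follows. The numeric values are invented placeholders; the sketch shows only inheritance, overriding, and polymorphic dispatch.

    class PistonEngine {
        protected int pistonCount = 4; // composition: an engine has pistons

        double pistonThermalConductivity() {
            return 45.0; // illustrative value for a metal piston
        }
    }

    class CeramicPistonEngine extends PistonEngine {
        @Override
        double pistonThermalConductivity() {
            return 3.0; // ceramic-specific characteristic overrides the default
        }
    }

    class EngineDemo {
        public static void main(String[] args) {
            // Polymorphism: the same call resolves to different implementations.
            PistonEngine[] engines = { new PistonEngine(), new CeramicPistonEngine() };
            for (PistonEngine e : engines) {
                System.out.println(e.pistonThermalConductivity());
            }
        }
    }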
With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of the reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
• Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
• Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
• An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
• An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.
With this enormous capability of an object to represent just about any logically separable matter, OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
If 90% of a new OOP software program consists of proven, existing components made from preexisting reusable objects, then only the remaining 10% of the new software project has to be written and tested from scratch. Since 90% already came from an inventory of extensively tested reusable objects, the potential domain from which an error could originate is 10% of the program. As a result, OOP enables software developers to build objects out of other, previously built objects.
This process closely resembles complex machinery being built out of assemblies and sub-assemblies. OOP technology, therefore, makes software engineering more like hardware engineering in that software is built from existing components, which are available to the developer as objects. All this adds up to an improved quality of the software as well as an increased speed of its development.
Programming languages are beginning to fully support the OOP principles, such as encapsulation, inheritance, polymorphism, and composition-relationship. With the advent of the C++ language, many commercial software developers have embraced OOP. C++ is an OOP language that offers a fast, machine-executable code. Furthermore, C++ is suitable for both commercial-application and systems-programming projects. For now, C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
The benefits of object classes can be summarized, as follows:
• Objects and their corresponding classes break down complex programming problems into many smaller, simpler problems.
• Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
• Subclassing and inheritance make it possible to extend and modify objects through deriving new kinds of objects from the standard classes available in the system. Thus, new capabilities are created without having to start from scratch.
• Polymorphism and multiple inheritance make it possible for different programmers to mix and match characteristics of many different classes and create specialized objects that can still work with related objects in predictable ways.
• Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real-world objects and the relationships among them.
• Libraries of reusable classes are useful in many situations, but they also have some limitations. For example:
• Complexity. In a complex system, the class hierarchies for related classes can become extremely confusing, with many dozens or even hundreds of classes.
• Flow of control. A program written with the aid of class libraries is still responsible for the flow of control (i.e., it must control the interactions among all the objects created from a particular library). The programmer has to decide which functions to call at what times for which kinds of objects.
• Duplication of effort. Although class libraries allow programmers to use and reuse many small pieces of code, each programmer puts those pieces together in a different way.
Two different programmers can use the same set of class libraries to write two programs that do exactly the same thing but whose internal structure (i.e., design) may be quite different, depending on hundreds of small decisions each programmer makes along the way. Inevitably, similar pieces of code end up doing similar things in slightly different ways and do not work as well together as they should.
Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again. A relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers. Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others. In the early days of procedural programming, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.
The development of graphical user interfaces began to turn this procedural programming arrangement inside out. These interfaces allow the user, rather than program logic, to drive the program and decide when certain actions should be performed. Today, most personal computer software accomplishes this by means of an event loop which monitors the mouse, keyboard, and other sources of external events and calls the appropriate parts of the programmer's code according to actions that the user performs. The programmer no longer determines the order in which events occur. Instead, a program is divided into separate pieces that are called at unpredictable times and in an unpredictable order. By relinquishing control in this way to users, the developer creates a program that is much easier to use. Nevertheless, individual pieces of the program written by the developer still call libraries provided by the operating system to accomplish certain tasks, and the programmer must still determine the flow of control within each piece after it's called by the event loop. Application code still "sits on top of" the system.
Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application. The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
Application frameworks reduce the total amount of code that a programmer has to write from scratch. However, because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit. The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
Thus, as is explained above, a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
There are three main differences between frameworks and class libraries:
• Behavior versus protocol. Class libraries are essentially collections of behaviors that one can call when one wants those individual behaviors in a program. A framework, on the other hand, provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
• Call versus override. With a class library, the code the programmer writes instantiates objects and calls their member functions. It is possible to instantiate and call objects in the same way with a framework (i.e., to treat the framework as a class library), but to take full advantage of a framework's reusable design, a programmer typically writes code that overrides and is called by the framework. The framework manages the flow of control among its objects. Writing a program involves dividing responsibilities among the various pieces of software that are called by the framework rather than specifying how the different pieces should work together.
• Implementation versus design. With class libraries, programmers reuse only implementations, whereas with frameworks, they reuse design. A framework embodies the way a family of related programs or pieces of software work. It represents a generic design solution that can be adapted to a variety of specific problems in a given domain. For example, a single framework can embody the way a user interface works, even though two different user interfaces created with the same framework might solve quite different interface problems.
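The "call versus override" distinction may be illustrated with a minimal Java sketch of a hypothetical framework class: the framework fixes the flow of control and calls back into code that the application programmer overrides.

    abstract class ApplicationFramework {
        // The framework owns the overall flow of control...
        public final void run() {
            initialize();
            handleEvents();
            shutdown();
        }

        void initialize() { System.out.println("default window and menu setup"); }
        void shutdown()   { System.out.println("default cleanup"); }

        // ...and the application supplies behavior by overriding this hook.
        abstract void handleEvents();
    }

    class MyApplication extends ApplicationFramework {
        @Override
        void handleEvents() {
            System.out.println("application-specific event handling");
        }

        public static void main(String[] args) {
            new MyApplication().run(); // the framework calls the application's code
        }
    }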
Thus, through the development of frameworks for solutions to various problems and programming tasks, significant reductions in the design and development effort for software can be achieved. A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connolly, "RFC 1866: Hypertext Markup Language - 2.0" (Nov. 1995); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol - HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996). HTML is a simple data format used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.
HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing Text and Office Systems; Standard Generalized Markup Language (SGML).
To date, Web development tools have been limited in their ability to create dynamic Web applications which span from client to server and interoperate with existing computing resources. Until recently, HTML has been the dominant technology used in development of Web-based solutions. However, HTML has proven to be inadequate in the following areas:
• Poor performance;
• Restricted user interface capabilities;
• Can only produce static Web pages;
• Lack of interoperability with existing applications and data; and
• Inability to scale.
Sun Microsystem's Java language solves many of the client-side problems by:
• Improving performance on the client side;
• Enabling the creation of dynamic, real-time Web applications; and
• Providing the ability to create a wide variety of user interface components.
With Java, developers can create robust User Interface (UI) components. Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved. Unlike HTML, Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance. Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.
Sun's Java language has emerged as an industry-recognized language for "programming the Internet." Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language. Java supports programming for the Internet in the form of platform-independent Java applets." Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API) allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g., Netscape Navigator) by copying code from the server to client. From a language standpoint, Java's core feature set is based on C++. Sun's Java literature states that Java is basically, "C++ with extensions from Objective C for more dynamic method resolution."
Another technology that provides similar functionality to JAVA is Microsoft's ActiveX Technologies, which give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers. ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content. The tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies. The group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages. ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta." ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications. One of ordinary skill in the art readily recognizes that ActiveX could be substituted for JAVA without undue experimentation to practice the invention.
One embodiment of the present invention includes three different, but complementary dimensions that together provide a framework which can be used in assessing and rating the IT operations of an organization. The following three dimensions constitute the framework of the present invention: 1) Operations Environment Dimension, 2) Capability Dimension, and 3) Maturity Dimension.
The first dimension describes and organizes the standard operational activities that any IT organization should perform. The second dimension provides a context for evaluating the performance quality of these operational activities. This dimension specifies the qualitative characteristics of an operations environment and orders these characteristics on a scale denoting rising capability. The final dimension uses this capability scale and outlines a method for deriving a capability rating for specific IT process groups and the entire organization.
The Operations Environment and Capability dimensions provide the foundation for determining the quality or capability level of the organization's IT operations. The Operations Environment dimension can be viewed as a descriptive mapping of a model operations environment. In a similar manner, the Capability dimension can be construed as a qualitative mapping of a model operations environment. The Maturity dimension builds on the foundation set by these two dimensions to provide a method for rating the maturity level of the entire IT organization.
Figure 2 is a flow chart illustrating the various steps associated with the different dimensions of the present invention. As shown, a plurality of process areas of an operations organization are first defined in terms of either a goal or a purpose in operation 200. The process areas are then grouped into categories, as indicated in operation 202. It should be noted that the categories are grouped in terms of process areas having common characteristics.
Next, in operation 204, process capabilities are received for the process areas of the operations organization. Such data may be generated via a maturity questionnaire which includes a set of questions about the operations environment that sample the base practices in each process area of the present invention. The questionnaire may be used to obtain information on the capability of the IT organization, or a specific IT area or project.
Thereafter, category capabilities are calculated for the categories of the process areas in operation 206. A maturity of the operations organization is subsequently determined based on the category capabilities of the categories in operation 208. The user-specified or measured parameters, i.e., capability of each of the process areas, may be inputted by any input device, such as the keyboard 124, the mouse 126, the microphone 132, a touch screen (not shown), or anything else such as an input port that is capable of relaying such information. Further, the definitions, grouping, calculations and determinations may be carried out manually or via the CPU 110, which in turn may be governed by a computer program stored on a computer readable medium, i.e., the RAM 114, ROM 116, the disk storage units 120, and/or anything else capable of storing the computer program. In the alternative, dedicated hardware such as an application specific integrated circuit (ASIC) may be employed to accomplish the same. As an option, any one or more of the definitions, grouping and determinations may be carried out manually or in combination with the computer.
Further, the outputting of the determination of the maturity of the operations organization may be effected by way of the display 138, the speaker 128, a printer (not shown) or any other output mechanism capable of delivering the output to the user. It should be understood that the foregoing components need not be resident on a single computer, but also may be a component of either a networked client and/or a server.
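For illustration, the flow of Figure 2 might be sketched in Java as follows. The aggregation rule shown, in which a category's capability is taken as that of its weakest Process Area and the overall maturity is bounded by the weakest category, is one plausible choice rather than a rule mandated by the present description; all names and values are hypothetical.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    record ProcessArea(String name, String category, int capabilityLevel) {}

    class MaturityFlow {
        public static void main(String[] args) {
            // Operations 200-204: process areas defined, grouped by category,
            // with capability levels received for each area.
            List<ProcessArea> areas = List.of(
                    new ProcessArea("Service Desk", "Service Management", 3),
                    new ProcessArea("SLA Management", "Service Management", 2),
                    new ProcessArea("Production Scheduling", "Systems Management", 4));

            // Operation 206: calculate a capability for each category, here the
            // minimum capability level of the Process Areas in that category.
            Map<String, Integer> categoryCapability = areas.stream()
                    .collect(Collectors.groupingBy(ProcessArea::category,
                            Collectors.reducing(Integer.MAX_VALUE,
                                    ProcessArea::capabilityLevel, Math::min)));

            // Operation 208: determine an overall maturity from the categories.
            int maturity = categoryCapability.values().stream()
                    .min(Integer::compareTo).orElse(0);

            System.out.println(categoryCapability + " -> maturity " + maturity);
        }
    }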
Operations Environment Dimension
The Operations Environment Dimension is characterized by a set of process areas that are fundamental to the effective technical execution of an operations environment. More particularly, each process is characterized by its goals and purpose, which are the essential measurable objectives of a process. Each process area has a measurable purpose statement, which describes what has to be achieved in order to attain the defined purpose of the process area.
In the present description, goals refer to a summary of the base practices of a process area that can be used to determine whether an organization or project has effectively implemented the process area. The goals signify the scope, boundaries, and intent of each process area.
The process goals and purpose may be achieved in an IT organization through various lower-level activities, such as tasks and practices, that are carried out to produce work products. These performed tasks, activities and practices, and the characteristics of the work products produced, are the indicators that demonstrate whether the specific process goals or purpose is being achieved.
In the present description, work product describes evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or a service level agreement (SLA) report.
The operations environment is partitioned into a three-level hierarchy: Process Categories, Process Areas, and Base Practices, which reflect processes within any IT organization. Figure 3 depicts and summarizes the relationship of the Process Categories 300, Process Areas 302, and Base Practices 304 of the Operations Environment Dimension. This breakdown provides a grouping by type of activity. The activities characterize the performance of a process. The three-level hierarchy is described as follows.
Process Categories (300)
In the present description, a Process Category has a defined purpose and measurable goals, and consists of a logically related set of Process Areas that collectively address that purpose and those goals in the same general area of activity.
The purpose of Process Categories is to organize Process Areas according to common IT functional characteristics. There are four process categories defined in the present invention: Service Management, Systems Management, Managing Change, and IT Operations Planning. The Process Categories are described as follows:
[Process Category description tables rendered as images in the original]
Process Areas (302)
Process Areas are the second level in the operations hierarchy. The elements of this level are a collection of Base Practices that are performed to achieve the defined purpose of the Process Area.
In the present description, Process Areas refer to a collection of Base Practices that are performed sequentially, concurrently, and/or iteratively to achieve the defined purpose of the process area. The purpose describes the unique functional objectives of the process area when instantiated in a particular environment. Satisfying the purpose statement of a process area represents the first step in building process area capability.
Examples of Process Areas for the Service Management Category include service level management, operations level management, service desk, user administration, and service pricing. To illustrate further, the purpose of service level management may be to document the information technology services to be delivered to users. Note that this purpose states a unique functional objective (to establish requirements), and provides a context (service level).
Base Practices (304)
Base Practices are the lowest level in the operations hierarchy. Base Practices are essential activities that an IT organization performs to achieve the purpose of a Process Area. A base practice is what an IT organization does.
For example, Base Practices of service level management may be to assess business strategy, audit current service levels, determine service requirements and IT's ability to deliver services, prepare a draft SLA, identify the charge-back structure, and agree to SLAs with customers. The Process Areas are expressed in terms of their goals, whereas Base Practices are tasks that need to be carried out to achieve those goals. Base Practices may have work products associated with them. A work product is evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or an SLA report.
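By way of illustration, the three-level hierarchy and its associated work products might be modeled in Java as follows; the types and sample values are hypothetical.

    import java.util.List;

    record WorkProduct(String description) {}   // e.g., a resolved trouble ticket
    record BasePractice(String id, String activity, List<WorkProduct> evidence) {}
    record ProcessAreaNode(String purpose, List<BasePractice> basePractices) {}
    record ProcessCategory(String name, List<ProcessAreaNode> processAreas) {}

    class HierarchyDemo {
        public static void main(String[] args) {
            ProcessCategory serviceManagement = new ProcessCategory(
                    "Service Management",
                    List.of(new ProcessAreaNode(
                            "Document the IT services to be delivered to users",
                            List.of(new BasePractice(
                                    "1.1.3", "Determine Service Requirements",
                                    List.of(new WorkProduct("draft SLA")))))));
            System.out.println(serviceManagement);
        }
    }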
A service desk example of a process area and associated base practices is as follows:
[Service Desk Process Area and Base Practices tables rendered as images in the original]
Example: The Service Desk will receive requests of all types, including requests for new users, moves, and updates to software or hardware. All requests are logged and tracked in the same manner as incidents and problems.
Capability Dimension
In the present description, the Capability Dimension refers to formalizing process performance into a quantifiable range of expected results based on the process capability level that can be achieved by following the process. The process capability dimension characterizes the level of capability of each process area within an organization. In other words, the process capability dimension describes how well the processes in the process dimension are performed.
The Capability Dimension measures how well an IT organization performs its operational processes. In determining capabilities, the Base Practices are viewed as a guide to what should be done. The related Generic Practices deal with the effectiveness in which the Base Practices are carried out. Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. The present invention has five levels of Process Capability that can be applied to any Process Area. The Capability Dimension provides a means to formalize and quantify the process performance. The Capability Dimension describes how well the processes are performed as contrasted with Base Practices that describe what an IT organization does.
The Capability Dimension consists of three components: Capability Levels, Process Attributes, and Generic Practices. These are described below.
Capability Levels
In the present description, Capability Levels indicate increasing levels of process maturity and are comprised of one or more generic practices that work together to provide a major enhancement in the capability to perform the process.
The Capability Level is the highest level of the Capability dimension. The Capability Level of a process determines its performance and effectiveness. Each Capability Level has certain Process Attributes associated with it. A Process Attribute is comprised of a set of Generic Practices that provide criteria for improving performance. A particular Capability Level is achieved when all the Process Attributes associated with it and with preceding levels are present. Therefore, once the Capability Level is determined, those Process Attributes - and associated Generic Practices - that are required to enhance capability can be identified. In other words, Capability Levels offer a staged guideline for improving the capability to perform the defined processes.
Capability Levels provide two benefits: they acknowledge dependencies and relationships among the Base Practices of a Process Area, and they help an IT organization identify which improvements should be performed first, based on a plausible sequence of process implementation.
Each level provides a major enhancement in capability over that provided by its predecessors in the fulfillment of the process purpose. For example, at Capability Level 1, Base Practices are performed, but performance is ad hoc, informal, and unpredictable. At Capability Level 2, Base Practices are planned and tracked rather than merely performed, offering a significant improvement over Level 1 practice.
In this architecture, the Capability Levels are applied to each Process Area independently of other Process Areas. An assessment is performed to determine the Process Capability of each Process Area, as illustrated in Figure 4.
In the present description, an assessment refers to a diagnostic performed by a trained team to evaluate aspects of an organization's IT operations environment processes. The trained team determines the state of the operational processes, identifies pressing operational process related issues, and obtains organizational support for a process improvement program.
Therefore, different Process Areas can, and often do, exist at different levels of capability. The ability to rate Process Areas independently enables an IT organization to focus on process improvement priorities driven by business goals and strategic directions. An example of this is illustrated in Figure 4.
Process Attributes
In the present description, process attributes refer to features of a process that can be evaluated on a scale of achievement (performed, partially performed, not performed, etc.), providing a measure of the capability of the process. Within the framework of the present invention, measures of capability are based on a set of nine Process Attributes. Process Attributes are used to determine whether a process has reached a given capability. The nine Process Attributes are:
• Process Performance
• Performance Management
• Work Product Management
• Process Definition
• Process Resource
• Process Measurement
• Process Control
• Process Change
• Continuous Improvement
The attributes are evaluated on a four-point scale of achievement. Achieving a given Capability Level depends on the rating assigned to one or more of these attributes.
Generic Practices
In the present description, Generic Practices refer to activities that contribute to the capability of managing and improving the effectiveness of the operations environment Process Areas. A generic practice is applicable to any and all Process Areas. It contributes to the overall process management, measurement, and institutionalization capability of the Process Areas.
For example, the allocation of adequate resources to a process is a Generic Practice and is applicable to all processes. Service Level Management and Migration Control are two different Process Areas with different Base Practices, goals, and purposes. However, they share the same Generic Practice of allocating adequate resources.
Maturity Dimension
The Operational Maturity Dimension characterizes the maturity of an entire operations IT organization. In the present description, maturity refers to the degree of order (structure or systemization) and effectiveness of a process; the degree of order determines its state of maturity. Less mature processes are less ordered and less effective; more mature processes are more ordered and more effective. The Capability Dimension focuses on determining the capability of individual processes within an operations organization in achieving their stated goals and purpose. The Operational Maturity Dimension determines IT organizational maturity by focusing on a collection of processes at a certain level of capability in order to characterize the evolution of the operations IT organization as it improves.
The term Maturity, in the overall context of the present invention, is applied to an IT organization as a whole. The Maturity Level is determined by the Capability Level of the four Process Categories: Service Management, Systems Management, Managing Change, and IT Operations Planning. Operational maturity is defined by a staged model, wherein an operational maturity level 500 cannot be reached until all Process Categories driving it have themselves reached a certain maturity level. Similarly, a category Capability Level 502 cannot be reached until all Process Areas 302 contained in it have reached a certain Process Capability Level 504. This staging is illustrated in Figure 5.
In the present description, Maturity Level refers to a sequence of key intermediate states leading to the goal state. Each state builds incrementally on the preceding state.
Even though it is recommended that an entire operational assessment be conducted, the assessment tool of the present invention is flexible enough to accommodate an assessment of a single Process Category or just a Process Area. As shown in Figure 5, an assessment could end at the Process Area level, with the Process Capability Level or Process Area Maturity determined. An assessment could also be performed on all the Process Areas within a Process Category to determine the Process Category Maturity Level.
The framework of the present invention, which consists of the three dimensions described previously, is illustrated in Figure 6. The Operations Environment Dimension 600, the box in the center of Figure 6, divides all IT processes into Process Categories 300. Process Categories 300 divide into a finite number of Process Areas 302. Process Areas 302 consist of a finite number of Base Practices 304. Each Process Area within a category is assigned a Capability Level 504 based on the performance of Process Attributes 601, each comprising a finite number of Generic Practices 602 applicable to that process (shown in the box on the right).
In turn, the IT organization's operational maturity 603 is based on a clustering of process capabilities, as illustrated in the third box, on the left.
The framework of the present invention is designed to support an IT organization's need to assess and improve its operational capability. The structure of the model enables a consistent appraisal methodology to be used across diverse Process Areas. The distinction between essential operations and process management-focused elements therefore allows a systematic approach to process improvement.
Capability Determination
As described in the previous section, the Capability Dimension of the present invention measures how capable an IT organization is in achieving the purpose of its various Process Areas. Within the context of the present invention, Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. In this section, the Capability Levels, their characteristics, and the Process Attributes and Generic Practices that comprise them are discussed in more detail.
The present invention has five levels of Process Capability that can be applied to any Process Area. As mentioned before, Generic Practices are grouped by Process Attributes, and Process Attributes determine the Capability Level. Capability Levels build upon one another; levels cannot, therefore, be skipped.
Figure 7 tabulates the relationship of Generic Practices and Process Attributes to Capability Levels.
The following section explains in greater detail what is meant by Level 1, Level 2, and so forth. Each Level is described in terms of its characteristics and the Generic Practices (GP) assigned to it.
Level 1: Performed Informally
At this Level, all Base Practices are generally performed, but operations may be ad hoc and occasionally chaotic. Consistent planning and tracking of performance is not performed. Good performance depends on individual knowledge and effort. Operational support and services are generally adequate, but quality and efficiency depend on how well individuals within the IT organization perceive that tasks should be performed. The capability to perform an activity is not generally repeatable or transferable.
Process Attribute
ATT 1A: Process Performance - the extent to which the execution of the process employs a set of practices that use identifiable input work products to produce identifiable output work products adequate to satisfy the purpose of the process.
In order to achieve this capability, the Base Practices of the process must be implemented and work products must be produced that satisfy the process purpose. The related Generic Practice is:
GP1.1 Ensure that Base Practices are performed.
When all base practices are performed, the purpose of the process area is satisfied. A process may exist, but it may be informal and undocumented.
Level 2: Planned and Tracked
At this Level, performance of the Base Practices in the Process Area is planned and tracked. The necessary discipline is in place to repeat earlier successes with similar characteristics.
There is general recognition that the Process Area performance is dependent on how efficiently the Base Practices are implemented. Work products, such as completed change control requests, resolved trouble tickets, etc., which are related to base practice implementation are periodically reviewed and placed under version control. Corrective action is taken when variances in services and work products occur.
Process Attribute
ATT 2A: Performance Management - the extent to which the execution of the process is managed in order to produce work products within stated time and resource requirements. The related Generic Practices are:
GP2.1 Establish and maintain a policy for performing operational tasks.
Policy is a visible way for the operations environment personnel and the management team to set expectations. The form of policies varies widely depending on the local culture. Policy typically specifies that plans are documented, managed and controlled, and that reviews are conducted. Policy provides guidance for performing the operational tasks and processes.
GP2.2 Allocate sufficient resources to meet expectations.
Resources include adequate funding, appropriate physical facilities, skilled people, and appropriate tools. This practice ensures that the level of effort, appropriate skills mix, tools, workspace, and other direct resources are available to perform the operational task and processes.
GP2.3 Ensure personnel receive the appropriate type and amount of training.
Ensure that the individuals are appropriately trained on how to perform the operational tasks and processes. Training provides a common basis for repeatable performance. Even if the operations personnel or management have satisfactory technical skills and knowledge, there is almost always a need to establish a common understanding of the operational process activities and how skills are applied in them. Training, and how it is delivered, may change with process capability due to changes in how the process is performed and managed.
GP2.4 Collect data to measure performance.
The use of measurement implies that the metrics have been defined and selected, and data has been collected. Building a history of measures, such as cost and schedule variances, is a foundation for managing by data. Quality measures may be collected and used, but result in maximum impact at Level 4 when they are subjected to quantitative process control.
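As a minimal illustration of the measurement history GP2.4 calls for, the following sketch records cost and schedule variances per reporting period; the record layout and the simple variance arithmetic are assumptions for illustration, not part of the specification.

```python
# Hypothetical planned-vs-actual figures for one reporting period.
planned = {"cost": 120_000, "schedule_days": 90}
actual = {"cost": 134_500, "schedule_days": 104}

# Variance = actual minus planned (positive means over budget / behind schedule).
cost_variance = actual["cost"] - planned["cost"]                        # 14_500
schedule_variance = actual["schedule_days"] - planned["schedule_days"]  # 14

# Appending one record per period builds the history used to manage by data.
history = []
history.append({
    "period": "2000-Q1",
    "cost_variance": cost_variance,
    "schedule_variance": schedule_variance,
})
```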
GP2.5 Maintain communication among team members.
Open communication ensures that there is common understanding, that decisions are consensual, and that team members are kept aware of decisions made. Communication is needed when changes are made to plans, products, processes, activities, requirements, and responsibilities.
The commitments, expectations, and responsibilities are documented and agreed upon within the project group. Commitment may be obtained by negotiation, by using input and feedback, or through joint development of solutions to issues. Issues are tracked and resolved within the group. Communication occurs periodically and whenever the status changes. The participants have access to data, status information, and recommended actions.
Process Attribute
ATT 2B: Work Product Management - the extent to which the process is managed to produce work products that are documented and controlled, and that meet their functional and non-functional requirements, in line with the work product quality goals of the process.
In order to achieve this capability, a process needs to have stated functional and non-functional requirements for work products, including integrity, and to produce work products that fulfill the stated requirements. The related Generic Practices are:
GP2.6 Ensure work products satisfy documented requirements.
Requirements may come from the business customer, policies, standards, laws, regulations, etc. The applicable requirements are documented and available for verification activities.
GP2.7 Employ version control to manage changes to work products.
Place identified work products under version control or configuration management to provide a means of controlling work products and services.
Level 3: Well-Defined
At Level 3, Base Practices are performed with the assistance of an available, well-defined, operations-wide process infrastructure. The processes are tailored to meet the specific needs of a particular practice.
Data from using the process are gathered to determine if modifications or improvements should be made. This information is used in planning and managing the day-to-day execution of multiple projects within the IT organization, and for short and long-term process improvement.
Once the environment is stable, common practices for performing the processes are collected, defined in a consistent manner, and used as the basis for long-term improvement across the operations environment. At this level, the proper mechanism is in place to distribute knowledge and experience throughout the operations environment.
Process Attribute
ATT 3A: Process Resource - the extent to which the execution of the process uses suitably skilled human resources and process infrastructure effectively to contribute to the defined business goals of the operations environment.
In order to achieve this capability, a process needs to have an infrastructure available that fulfills stated needs, and adequate human resources. The related Generic Practices are:
GP3.1 Define policies and procedures at an IT level.
Policies, standards, and procedures are established at an IT level for common use throughout the operations environment.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly. This includes:
• Identifying the standard process, from those available in the IT organization, that is appropriate to the process purpose and the business goals of the IT organization.
• Tailoring the standard process to obtain a defined process appropriate for the task at hand, and implementing the defined process to achieve the process purpose consistently and repeatedly and to support the business goals of the organization.
Process Attribute
ATT 3B: Process Definition - the extent to which the execution of the process uses a definition, based upon a standard process, that enables it to contribute to the defined business goals of the IT organization.
In order to achieve this capability, a process needs to be executed according to a standard definition that has been suitably tailored to the needs of the process instance. The standard process needs to be capable of supporting the stated business goals of the IT organization. The related Generic Practices are:
GP3.3 Plan for human resources proactively.
Unlike training at Capability Level 2, this practice embodies the proactive planning of personnel. This includes the selection of the proper workforce, training, and dissemination.
GP3.4 Provide feedback in order to maintain knowledge and experience.
The standard process repository is to be kept up-to-date, through a continuous feedback system based on experiences gained from using the defined process.
Level 4: Quantitatively Controlled
At this Level, processes and services are quantitatively measured, understood, and controlled.
Detailed measures of performance are collected and analyzed.
Establishing common processes within an operations environment enables more sophisticated methods of performing activities. These activities include controlling processes and results quantitatively, integrating processes across groups, and fine-tuning processes for different services.
At this Level, measurable process goals are established for each defined process and associated services. Detailed measures of performance are collected and analyzed. This data enables quantitative understanding of the processes and an improved ability to predict performance. Performance is objectively managed, the quality of services is quantitatively known, and defects are selectively identified and corrected.
Process Attribute
ATT 4A: Process Measurement - the extent to which measures are used to ensure that the implementation of the process supports its execution, and contributes to the achievement of IT organizational goals.
In order to achieve this capability, a process needs to have defined measures that enable an execution to be controlled. The related Generic Practices are:
GP4.1 Establish measurable quality objectives for the operations environment. These quality objectives can be tied to the strategic quality goals of the IT organization, the particular needs and priorities of the customer, or the tactical needs of a specific group or project.
The measurements referred to here go beyond traditional service level and end-product measurements. They are intended to imply sufficient understanding of the processes being used to enable the IT organization to set and use intermediate goals for work-product quality.
GP4.2 Automate data collection.
Process definitions are modified to reflect the quantitative nature of process performance.
Measurements become inherent in the process definition and are collected as the process is being performed.
Process Attribute
ATT 4B: Process Control - the extent to which the execution of the process is controlled through the collection and analysis of measures used to correct the performance of the process in order to reliably achieve the defined process goals. The related Generic Practices are:
GP4.3 Provide adequate resources and infrastructure for data collection.
Since the success of Level 4 rests fundamentally on the collection of proper data, automated methods should be in place to collect it. This includes software tools and the meaningful placement of appropriate metrics for collection of the relevant data.
GP4.4 Use data analysis methods and tools to manage and improve the process. This includes the identification of analysis and control techniques appropriate to the process; the provision of adequate resources and infrastructure for analysis and process control; analysis of available measures to identify process control parameters; and, identification of deviations and employment of corrective actions.
Level 5: Continuously Improving
Level 5 is the highest achievement level from the viewpoint of Process Capability. Continuous process improvement is enabled by quantitative feedback from the process and from pilot studies of innovative ideas and new technology. A focus on widespread, continuous improvement should permeate the IT organization. The IT organization should establish quantitative performance goals for process effectiveness and efficiency, based on its business goals and strategic objectives.
Once critical business objectives are consistently evaluated and compared against process capability, continuous improvement can be institutionalized within the operations environment. This results in a cycle of continuous learning.
Process Attribute
ATT 5A: Continuous Improvement - the extent to which changes to the process are identified and implemented to ensure continuous improvement in the fulfillment of the defined business goals of the IT organization.
In order to achieve this capability, it is necessary to continuously identify and implement improvements to the tailored process, and provide input to make changes to the standard process definition. The related Generic Practices are:
GP5.1 Continually improve tasks and processes.
Improvements may be based on incremental operational refinements or on innovations, such as new technologies. Improvements may typically be driven by the following activities:
• Identifying and approving changes to the standard process definition on the basis of quantitative understanding of the process.
• Providing adequate resources to effectively implement the approved changes in affected tailored processes.
• Implementing the approved changes to the affected tailored processes.
• Validating the effectiveness of process change on the basis of measurement of actual performance against the process and business goals.
Process Attribute
ATT 5B: Process Change - the extent to which changes to the definition, management, and performance of the process are controlled to better achieve the business goals of the IT organization.
In order to achieve this capability, a process may use quantitative methods to identify and implement changes to the standard process definition. The related Generic Practices are:
GP5.2 Deploy "best practices" across the IT organization.
Improved practices must be deployed across the operations environment to allow their benefit to be felt across the IT organization. The deployment activities include:
• Identifying improvement opportunities in a systematic and proactive manner to continuously improve the process.
• Establishing an implementation strategy, based on the identified opportunities, to improve process performance according to business goals.
• Implementing changes to selected areas of the tailored process according to the implementation strategy.
• Validating the effectiveness of process change on the basis of measurements of actual performance against process and business goals, and feeding the results back into the standard process definition.
Rating Framework
The rating framework requires identification of objective attributes or characteristics of a practice or work product of an implemented process to validate that Base Practices are performed and Generic Practices are followed. Assessment Indicators determine Process Attribute ratings, which are then used to determine the Capability Level.
In the present description, Assessment Indicators refer to objective attributes or characteristics of a practice or work product that support an assessor's judgment of the performance of an implemented process.
Process Capability Rating
The cornerstone of the rating framework is the identification and description of Assessment Indicators to help rate the Process Attributes. Assessment Indicators are objective attributes or characteristics of a practice or work product that support an assessor's judgment of the performance of an implemented process. Assessment Indicators are evidence that Base Practices are performed and Generic Practices are followed. The indicators are not intended to be regarded as a mandatory checklist, but rather as a guide to enhance an assessment team's objectivity in making its judgments of a process's performance and capability. The rating framework adds definition and reliability to the present invention, and thereby improves repeatability.
Assessment Indicators are the determinants of the rating for each Process Attribute. Each assessed process profile consists of a set of Process Attribute ratings. Each attribute rating represents a judgment by the assessment team of the extent to which the attribute is achieved. Figure 8 illustrates the Process Attribute rating represented on a four-point scale of achievement.
The indicators determine attribute ratings, which are then used to determine the Capability Level. The rating scale defined below is used to describe the degree of achievement of the defined capability characterized by the Process Attributes. Once the appropriate rating for each Process Attribute is determined, the ratings can be combined to assign the Capability Level achieved by the assessed process. Figure 9 represents the mapping of attribute ratings to the determination of process Capability Levels.
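Because Figures 8 and 9 are not reproduced here, the sketch below illustrates one plausible reading of this mapping in Python: each attribute receives a rating on the four-point scale, and a Capability Level is achieved only when the attributes at that level, and at every preceding level, reach a qualifying rating. The specific thresholds used (lower-level attributes "fully achieved", own-level attributes at least "largely achieved") are an assumption for illustration, not the mapping actually defined in Figure 9.

```python
SCALE = {"not achieved": 0, "partially achieved": 1,
         "largely achieved": 2, "fully achieved": 3}

# Process Attributes grouped by the Capability Level they define
# (per the level descriptions earlier in this section).
ATTRIBUTES_BY_LEVEL = {
    1: ["Process Performance"],
    2: ["Performance Management", "Work Product Management"],
    3: ["Process Resource", "Process Definition"],
    4: ["Process Measurement", "Process Control"],
    5: ["Continuous Improvement", "Process Change"],
}

def capability_level(ratings: dict) -> int:
    """Return the highest Capability Level whose attributes - and those
    of every preceding level - meet the assumed qualifying threshold."""
    achieved = 0
    for level in range(1, 6):
        own_ok = all(
            SCALE[ratings.get(a, "not achieved")] >= SCALE["largely achieved"]
            for a in ATTRIBUTES_BY_LEVEL[level])
        lower_ok = all(
            SCALE[ratings.get(a, "not achieved")] == SCALE["fully achieved"]
            for lv in range(1, level)
            for a in ATTRIBUTES_BY_LEVEL[lv])
        if own_ok and lower_ok:
            achieved = level
        else:
            break
    return achieved

# Example: a process whose Level 1 attribute is fully achieved and whose
# Level 2 attributes are largely achieved rates at Capability Level 2.
print(capability_level({"Process Performance": "fully achieved",
                        "Performance Management": "largely achieved",
                        "Work Product Management": "largely achieved"}))  # -> 2
```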
As an example, to assess the capability of a particular instance of a Service Desk process, the first step is to identify whether the appropriate Base Practices are performed at all. The necessary foundation for improving the capability of any process is to at least demonstrate that the Base Practices are being performed. The assessment team may then formulate an objective judgment of the process performance attribute through different means, such as analysis of work products (e.g., reviewing completed trouble tickets), demonstration of evidence of process implementation (e.g., are escalation procedures documented and understood?), interviews with process performers (e.g., discussing daily activities with Service Desk personnel), and other means as appropriate (e.g., does the Service Desk have a dedicated phone number that users should call to report incidents/problems/requests, or a dedicated email address?).
Achievement of Base Practices is an indication that Process Area goals are being met. The increasing capability of a process to effectively achieve its goals and objectives is based upon attribute ratings. An attribute rating is determined by the performance of the associated Generic Practices. Evidence of effective performance of the Generic Practices associated with a Process Attribute supports the assessment team's judgment of the degree of achievement of the attribute.
Operational Maturity Rating
Up to now, the discussion has focused on the capability rating of Process Areas. To determine the maturity level of an organization, the third dimension of the architecture of the present invention, the capability ratings are used.
Process Category capabilities are determined from the capability ratings of their Process Areas. Once all Process Areas of a category are rated, the lowest rating assigned to a Process Area becomes the category rating as well. Similarly, the operational maturity rating is determined from the Process Category ratings within the IT organization. Once all Process Categories are rated, the lowest rating assigned to a Process Category becomes the IT organizational maturity.
For example, if the Process Categories of an IT organization are rated as follows, then this particular IT organization would receive a maturity level rating of "1".
Process Category Capability Rating
Service Management 2
Systems Management 1
IT Operations Planning 3
Managing Change 2
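This roll-up rule is a simple minimum: a category's rating is the lowest of its Process Area ratings, and the organizational maturity is the lowest of the category ratings. A minimal sketch, with illustrative function names:

```python
def category_rating(process_area_ratings):
    # The lowest Process Area rating becomes the Process Category rating.
    return min(process_area_ratings)

def organizational_maturity(category_ratings):
    # The lowest Process Category rating becomes the IT organizational maturity.
    return min(category_ratings.values())

ratings = {
    "Service Management": 2,
    "Systems Management": 1,
    "IT Operations Planning": 3,
    "Managing Change": 2,
}
print(organizational_maturity(ratings))  # -> 1, matching the example above
```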
In the present invention, the concept of capability is applied to processes, and the concept of maturity is applied to IT organizations.
Assessment Process
In performing an assessment, an assessment team collects evidence on the implementation of the processes being assessed and determines their compatibility with the framework of the present invention. The objective of the assessment is to identify the differences and gaps between the actual implementations of the processes in the assessed operational IT organization and the present invention. Using the framework of the present invention ensures that the results of assessments can be reported in a common context and provides the basis on which comparisons can be made.
The assessment process is used to appraise an organization's IT operations environment process capability. Defining a reference model ensures that the results of assessments can be reported in a common context and provides the basis on which comparisons can be made.
An IT organization can perform an assessment for a variety of reasons. An assessment can be performed in order to assess the processes in the IT operations environment with the purpose of improving the organization's own work and service processes. An IT organization can also perform an assessment to determine and better manage the risks associated with outsourcing. In addition, an assessment can be performed to better understand a single functional area such as systems management, a single process area such as performance management, or the entire IT operations environment.
Three phases are defined in the assessment model: Planning and Preparing, Performing, and Distributing Results. All phases of the assessment are performed using a team-based approach.
Team members include the client sponsor, the assessment team lead, assessment team members, and client participants.
Plan and Prepare for the Assessment
Determine Assessment Scope
In the present description, assessment scope refers to the organizational entities and components selected for inspection. A clear understanding of the purpose of the framework, constraints, roles, responsibilities, and outputs is needed prior to the start of the assessment. Therefore, in preparation for the assessment, the assessment team lead and the client sponsor work together to reach agreement on the scope and goals of the assessment. Once agreement is reached, the assessment team lead ensures that the IT operational processes selected for the assessment are sufficient to meet the assessment purpose and may provide output that is representative of the assessment scope.
An assessment plan is developed based on the goals identified by the client sponsor. The plan consists of detailed schedules for the assessment and potential risks identified with performing the assessment. Assessment team members, assessment participants, and areas to be assessed are selected. Work products are identified for initial review, and the logistics for the on-site visit are identified and planned.
Train the Assessment Team
The assessment team members must receive adequate training on the framework of the present invention and the assessment process. It is essential that the assessment team be well-trained on the present invention to ensure that they have the ability to interpret the data obtained during the assessment. The team must have a comprehensive understanding of the assessment process, its underlying principles, the tasks necessary to execute it, and their role in performing those tasks.
Gather Assessment Input
Maturity questionnaires are distributed to participants prior to the client site visit. Maturity questionnaires exist for each process area of the present invention and tie back to base practices, process attributes, and generic practices. Completed questionnaires provide the assessment team with an overview of the IT operational process capability of the IT organization. The responses assist the team in focusing their investigations and provide direction for later activities such as interviews and document reviews. Assessment team members prepare exploratory questions based on Interview Aids and responses to the maturity questionnaires.
In the present description, Interview Aids refer to a set of exploratory questions about the operations environment which are used during the interview process to obtain more detailed information on the capability of the IT organization. The interview aids are used by the assessment team to guide them through interview sessions with assessment participants.
Assessment participants prepare documentation for the assessment team members to review. Documentation about the IT operational processes allows the assessment team to tie IT organization data to the present invention.
Conduct Assessment
A kickoff meeting is scheduled at the start of the on-site activities. The purpose of the meeting is to provide the participants with an overview of the present invention and the assessment process, to set expectations, and to answer any questions about the process. The client sponsor of the assessment may participate in the presentation to show visible support and stress the importance of the assessment process to everyone involved.
Gather Data
Data for the assessment are obtained from several sources: responses to the maturity questionnaires, interview sessions, work products, and document reviews. Documents are reviewed in order to verify compliance. Interviewing provides an opportunity to gain a deeper understanding of the activities performed, how the work is performed, and processes currently in use. Interviewing provides the assessment team members with identifiable assessment indicators for each Process Area appraised. Interviewing also provides the opportunity to address all areas of the present invention within the scope of the assessment. Interviews are scheduled with IT operations managers, supervisors, and operations personnel. IT operations managers and supervisors are interviewed as a group in order to understand their view of how the work is performed in the IT organization, any problem areas of which they are aware, and improvements that they feel need to be made. IT operations personnel are interviewed to collect data within the scope of the assessment and to identify areas that they can and should improve in the IT organization.
Examples of maturity questionnaires associated with the foregoing service desk example are as follows:
Questions
Base Practice: 1.3.1 Call Attention
What methods are available to users for communication with the Service Desk, and do users have access to the resources needed for such communication?
Are all users informed how and when to contact the Service Desk? If so, how?
Do all users receive the same level of support? If not, how does support differ?
Do you gather call statistics such as total volume of calls and number of abandoned calls? If so, can we access this information?
Is there a need for after-hours support? If so, what type of after-hours support does the Service Desk provide?
Base Practice: 1.3.2 Incident/Request Logging
What is the procedure for logging incidents/requests, and is it followed in all cases?
Is a priority level assigned to the incident/request at the time of receipt, and how is it determined?
Base Practice: 1.3.3 Incident/Request Qualification
Do Service Desk personnel have access to a catalogue/database of frequently occurring incidents and their solutions, and does its format allow for rapid access and search?
How often is this catalogue/database accessed to provide an immediate solution or work-around to the user? (e.g., all calls, some calls, very few calls)
How frequently is this catalogue/database updated?
What other resources exist to aid Service Desk personnel with immediate incident resolution?
Base Practice: 1.3.4 Incident/Request Assignment
Is there a defined time frame within which the incident/request should be assigned, and is it usually followed?
Are users notified of the receipt, status, and approximate time to resolution (if possible) of an incident/request, and provided with the incident/request ID?
By what process are the appropriate personnel determined for handling an incident/request?
Is a defined system used for assigning responsibility for an incident/request to the appropriate personnel? (e.g., trouble tickets are generated and sent to appropriate personnel)
Is a record made of the person to whom the incident/request is assigned?
Base Practice: 1.3.5 Incident & Problem Resolution
Are non-resolved incidents/problems escalated according to procedures defined in SLAs?
How are the appropriate resources notified that the incident/problem has been escalated?
While problem resolution is in process, is a work-around solution determined and conveyed to the user?
When a problem is escalated or a resolution has been determined, is the log updated?
Does the Service Desk or the party to whom the problem was escalated "own" the problem?
Base Practice: 1.3.6 SLA & OLA Tracking and Monitoring
What is the system for tracking and monitoring the problem resolution process for an incident/request? What types of issues (e.g., excessive reassignments, deviations from estimated task times) are flagged, and what action is taken to address them?
Base Practice: 1.3.7 Resolution Confirmation
Are users notified of incident/request resolution? Is confirmation sought from the user to verify that the incident/request has been resolved satisfactorily? If such confirmation is not obtained, what is done?
Base Practice: 1.3.8 Incident / Request Closure
How is an incident/request closed? What records are made?
If it exists, is a solution database updated with the incident/problem and solution for future reference? What parties are informed of a closure?
Base Practice: 1.3.9 Trends and Repetitive Incident Analysis
Are incidents analyzed to detect trends and identify underlying problems? If so, by what process?
Are users notified of known incidents proactively, before they report the incident?
Base Practice: 1.3.10 Service Level Control
Does the Service Desk generate reports comparing actual service levels (e.g., number of incidents resolved at initial call, resolution time by severity) with target service levels?
Who receives these reports, and for what purposes?
How are service level targets set, and what is the process for reviewing/updating them?
Do the users communicate their views of support to the Service Desk and agree with the Service Desk's assessment of incident and problem management?
Base Practice: 1.3.11 Receive Requests
Are requests handled immediately, or do they require provisioning/approval?
Does the Service Desk coordinate the approval of requests with the appropriate functions and notify the requester of approval/rejection?
If a request requires functions outside the Service Desk, how does the Service Desk pass responsibility to the appropriate personnel?
Do SLAs exist between the Service Desk and the end user community?
Do agreements exist between the Service Desk and the next level of support (internal or external)?
Generic Questions for Process Area
Are the policies for Service Desk operation outlined in a document? How are employees made aware of these policies?
What mechanisms are in place to ensure policies are followed?
How frequently are Service Desk policies reviewed and/or modified? What is the process for such policy updates?
Are the current staff and resources of the Service Desk adequate for satisfactorily meeting user needs?
What type of qualifications and/or training do Service Desk personnel have?
Are Service Desk operations periodically reviewed in order to identify and implement potential improvements? Who manages this process?
Solidify Information
The purpose of solidifying this information is to summarize and consolidate information into a manageable set of findings. The data is then categorized into the Process Areas of the present invention. The assessment team must reach consensus on the validity of the data and on whether sufficient information has been collected in the areas evaluated. It is the team's responsibility to obtain sufficient information on the components of the present invention within the scope of the assessment for the required areas of the IT organization before any rating can be done. Follow-up interviews may occur for clarification.
Initial findings are generated from the information collected thus far and presented to the assessment participants. The purpose of presenting initial findings is to obtain feedback from the individuals who provided information during the various interviews. Ratings are not considered until after the initial findings presentations, as the assessment team is still collecting data. Initial findings are presented in multiple sessions in order to protect the confidentiality of the assessment participants. Feedback is recorded for the team to consider at the conclusion of all of the initial findings presentations.
Examples of assessments associated with the foregoing service desk example are as follows:
Level 1
[table image not transcribed]
Level 2
[table image not transcribed]
Level 3
[table image not transcribed]
Level 4
Process Attribute: Process Measurement
Generic Practice: GP4.1 Establish measurable quality objectives for the services of the operations environment's standard and defined processes.
Example of Assessment Indicator: Service levels are based on strategic business needs vs. industry standards.
Generic Practice: GP4.2 Determine the quantitative process capability of the defined process.
Example of Assessment Indicator: Metrics are automatically collected from the problem management tool (vs. collected manually).
Generic Practice: GP4.3 Provide adequate resources and infrastructure for data collection.
Example of Assessment Indicator: Ties to systems management are in place. Tickets are automatically created when systems management tools detect faults. Adequate resources are in place to analyze and report on Service Desk data.
Process Attribute: Process Control
Generic Practice: GP4.4 Use the quantitative process capability to manage the process.
Example of Assessment Indicator: Service levels are revised after reviewing actual data on Service [remainder of table image not transcribed]
Level 5
[table image not transcribed]
Rating
After the assessment team consolidates all of the data, the rating process may begin. The experience and training of the assessment team provide the knowledge needed to interpret the data obtained during the assessment. The first step in the rating process is to determine whether Process Area goals are being met. Process Area goals are considered met when all base practices are performed. Each process attribute for each Process Area within the assessment scope is then rated. Process attributes are rated based on the existence of, and compliance with, generic practices. Using the Assessment Indicator Rating template, the assessment team identifies assessment indicators for each process area to determine whether or not process attributes are achieved. Ratings are always established based on consensus of the entire assessment team. Questionnaire responses, interview notes, and documentation are used to support ratings; confirmation from two sources in different contexts (e.g., two people in different meetings) ensures compliance of an activity.
For each process attribute, the team reviews all weaknesses that relate to the associated generic practices. If the team determines that a weakness is strong enough to impact the process attribute, the process attribute is rated "not achieved." If it is decided that there are no significant weaknesses that have an impact on a process attribute, it is rated "fully achieved." For a Process Area to be rated "fully achieved," all process attributes for the Process Area must be rated "fully achieved." A Process Area may be rated fully achieved, largely achieved, partially achieved, or not achieved.
Assignment of a maturity level rating is optional at the discretion of the sponsor. For a particular maturity level rating to be achieved, all Process Areas within and below the maturity level must be satisfied. For example, for an IT organization to be rated at maturity level 4, all Process Areas at level 4, level 3 and at level 2 must have been investigated during the assessment, and all Process Areas must have been rated achieved by the assessment team. The final findings presentation is developed by the team to present to the sponsor and the IT organization the strengths and weaknesses observed for each Process Area within the assessment scope, the ratings of each Process Area, and the maturity level rating if desired by sponsor.
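The decision rules described in the two paragraphs above can be summarized in a short sketch; the data shapes and function names are assumptions for illustration, and the intermediate ratings ("largely" and "partially achieved") remain a matter of team judgment.

```python
def rate_attribute(weaknesses):
    """Rate one process attribute from the team's consolidated weaknesses:
    any weakness judged strong enough to impact the attribute yields
    'not achieved'; no significant weakness yields 'fully achieved'."""
    if any(w["significant"] for w in weaknesses):
        return "not achieved"
    return "fully achieved"

def process_area_fully_achieved(attribute_ratings):
    """A Process Area is 'fully achieved' only when every one of its
    process attributes is rated 'fully achieved'."""
    return all(r == "fully achieved" for r in attribute_ratings)

def maturity_level_achieved(target_level, areas_by_level):
    """Maturity level N requires every Process Area at levels 2 through N
    to have been investigated and rated achieved."""
    return all(
        process_area_fully_achieved(attrs)
        for level in range(2, target_level + 1)
        for attrs in areas_by_level.get(level, [])
    )

# Example: a single significant weakness sinks an attribute rating.
print(rate_attribute([{"significant": True}]))  # -> 'not achieved'
```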
Wrap up and Distribution of Results
The final assessment results are presented to the client sponsor. During the final presentation, the assessment team must ensure that the IT organization understands the issues that were discovered during the assessment and the key issues that it faces. Operational strengths are presented to validate what the IT organization is doing well. Strengths and weaknesses are presented for each process area within the assessment scope, as well as any issues that affect the process but are unrelated to the present invention. A Process Area profile is presented showing the individual Process Area ratings in detail.
An executive overview session is held in order to allow the senior IT Operations manager to clarify any issues with the assessment team, to confirm his or her understanding of the operational process issues, and to gain full understanding of the recommendations report.
When the assessment has been completed and the findings have been presented, the assessment team collects feedback from the assessment participants and the assessment team on the process, and packages the information that needs to be saved for historical purposes.
Figure 10 describes the roles and responsibilities of those involved with the assessment process.
As shown, various roles that may be involved with the execution of the present invention include a client sponsor, assessment participants, an assessment team leader, and assessment team members. It should be noted that any of such roles and responsibilities may be automated per the desires of the user. Figure 11 represents the indicator types and their relationship to the determination of the Process Area rating. As shown, evidence of process performance and process capability is provided by assessment indicators. Such assessment indicators, in turn, consist of base practices and generic practices. At the next level, the base practices and generic practices are assessed by process implementation, work products, practice performance, and resources and infrastructure.
A plurality of examples of additional process areas and associated generic/base practices will now be set forth. In addition, maturity questionnaires are also provided for each example. Given this information, the foregoing principles of the present invention may be employed for determining capability levels of various process areas for process assessment purposes in an operational maturity investigation.
[table image not transcribed]
PA Goals:
To define services to be delivered (by application and/or business unit)
To define a quantifiable service level that represents a minimum level of service for each service delivered
To gather and compare actual service statistics, and to identify and resolve service deviations
To regularly review services being delivered and determine if they are appropriately fulfilling SLA requirements.
To ensure IT can deliver services required by the business
To regularly report on SLA compliance
PA Metrics:
Percentage of SLAs signed off on time
Number of iterations of the SLA before sign off
Percentage of SLAs not signed off at the same time as the corresponding OLAs
Percentage of SLA Reports delivered on time
Base Practices
[table image not transcribed]
References
[table image not transcribed]
Process Area: SLA Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base Practices are performed
[table image not transcribed]
Level 2
[table image not transcribed]
Level 3 Assessment Indicators
[table image not transcribed]
Level 4 Assessment Indicators
Level 5 Assessment Indicators
[table image not transcribed]
Process Capability Assessment Instrument: Interview Guide
Process Area 1.1 SLA Management
Questions
Base Practice: 1.1.1 Assess Business Strategy
What actions are taken to incorporate the business strategy into the process of defining service goals and strategy?
What relevant components are drawn from the business strategy (e.g. service measures, volume projections, workloads etc.)?
What parties are involved in this process?
Is there any tie with capacity management and planning? If so, please describe the tie.
How often is the strategy assessed?
Base Practice: 1.1.2 Audit Current Service Levels
As part of the SLA preparation process, what is the procedure for auditing existing service levels? What information is gathered? Is this process carried out in accordance with predefined guidelines? Which service areas are audited?
Who carries out the audit and who receives the audit results?
What type of report or document is the output of the audit process?
Base Practice: 1.1.3 Determine Service Requirements
What is the process by which service requirements are defined? Who is involved in this process?
Do the service requirements specify all service items and their associated service levels?
Are Key Performance Indicators (KPIs) and metrics for evaluating service levels determined? How often are service requirements revisited?
Base Practice: 1.1.4 Determine Ability to Deliver Services
Prior to preparing the SLAs, how was IT's ability to deliver services gauged?
Was capability evaluated in all service areas? What types of information were considered?
Did this process involve the Capacity Planning & Modeling function?
In what form were the capability evaluation results reported, to whom and for what purposes?
Base Practice: 1.1.5 Prepare Draft SLA
What is the procedure for drafting SLAs? What parties are involved?
What does the SLA contain (e.g. specific applications, workload, cost of service, measure of service, type of support etc.)?
Does the SLA outline each key business application (e.g. penalties for SLA violation, tools to maintain SLAs, manager/owner of SLA etc.)?
Are separate user groups determined based on different service requirements and unique SLAs created for each group? If so, do standard guidelines exist?
Does the process of preparing SLAs include identifying potential suppliers to support the service requirements?
Are provisions for normal/contingency/disaster conditions specified in the SLA?
Are monitoring and reporting procedures defined?
Are escalation procedures defined for instances when SLAs are not met?
Has what constitutes a failed SLA and the penalties for failure been determined?
Are provisions for rewards made for cases when service exceeds requirements?
Base Practice: 1.1.6 Identify Charge Back, Budget or Cost Structure Components
Was a chargeback structure determined as part of the SLA preparation process? If so, for what components is the chargeback determined?
How is the chargeback structure utilized in relation to service level management?
Do you have or do any budgeting or costing that is used in SLA management?
Base Practice: 1.1.7 Agree to SLAs with Users
To what parties are SLAs submitted for approval?
How is approval of the SLA documented?
Where is information about the finalized SLA stored? Are SLA summaries available to users?
Is there a system for users to communicate desired changes to services provided?
Base Practice: 1.1.8 Report on SLA Performance
Are the actual statistics required to measure service delivery gathered, and in what format are they stored?
Is information on service delivery collected according to prescribed schedules?
Are actual service statistics compared to targets defined in the SLAs?
Is user input on SLA performance obtained (e.g., surveys)?
What types of reports are produced based on the statistics gathered?
Who reviews these reports and what is the process for ascertaining SLA compliance?
What procedures are in place to monitor and address SLA breaches?
Does the need for short-term deviations to SLAs due to business requirements arise, and how is it managed?
Generic Questions for Process Area
How often are SLAs re-examined and updated? Approximately how many hours are allocated to review and discuss SLAs?
Are there personnel who control and manage new and existing SLAs? What relevant qualifications and/or training do they have?
Do you think the resources allocated to managing SLAs are adequate? Please explain.
Is the SLA management process periodically evaluated with the intent of identifying possible improvements? How frequently does this occur and what is the process?
Process Capability Assessment Instrument
Process Area 1.1 SLA Management
Process Area Description: SLA Management involves the creation, management, reporting, and discussion of Service Level Agreements (SLAs) with users and the providers within Information Technology (IT). An SLA is a formal agreement between a user who requires information services and the IT organization responsible for providing those services. SLA Management involves the following areas:
SLA Definition: The SLA document defines, in specific and quantifiable terms, the level of service that is to be delivered to users. In the enterprise environment, many design and configuration alternatives are available that affect a given system's response time, availability, development cost, and ongoing operational costs. An SLA clarifies the business objectives and constraints for an application system, and forms the basis for both application design and system configuration choices.
SLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an SLA.
SLA Control: It is important that the services described in SLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
SLA Review: The reports generated from tracking SLAs are reviewed to ensure that the SLAs are carefully aligned with current business needs and, if necessary, updated to be in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
Questionnaire
Process Area 1.1 SLA Management (Business Relationship Management)
[table image not transcribed]
Work Product list
Process Area 1.1 SLA Management (Business Relationship Management)
SLA process flow
Sample SLA document
IT capability report
SLA performance reports
User survey results
Charge-back structure document
Responsibility matrix
SLA Communication flow
Job description of SLA manager and staff
OLA Management (1.2)
PA Number: 1.2
PA Name: OLA Management
PA Purpose: OLA Management involves the creation, management, reporting, and discussion of Operations Level Agreements with providers within the organization, as well as external suppliers and vendors. An OLA is an agreement between the IT organization and those delivering the constituent services of the system. OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement, as supporting services are guaranteed in the OLA. OLA Management involves the following:
OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
Whether a particular service level can be met, and how it will be met through operational levels
Which provider(s) can supply a service, or part of a service
Roles and responsibilities
What constitutes a failure to meet the OLA, and corresponding penalties (if appropriate)
Procedures for monitoring operational levels
Cost structures
How the service will be measured
Contractual arrangements with the providers
Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs are carefully aligned with current business needs and, if necessary, updated to be in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
PA Base Practices:
1.2.1 Determine operational items
1.2.2 Group related operational items
1.2.3 Identify suppliers of operational items
1.2.4 Finalize service suppliers
1.2.5 Prepare OLAs
1.2.6 Agree to OLAs with suppliers
1.2.7 Report on OLA performance
PA Goals:
To define a quantifiable service level that represents a minimum level of service for each service delivered
To gather and compare provider service statistics, and to identify and resolve service deviations
To regularly review services being delivered, as specified in the OLA, to determine if they are appropriately fulfilling the requirements.
To regularly report on OLA compliance
PA Metrics:
Percentage of OLAs signed off on time
Number of iterations of the OLA before sign off
Percentage of OLA Reports delivered on time
Base Practices
[table image not transcribed]
... of an external vendor, IT will create reports based on data gathered internally when possible. These reports will be cross-referenced with those from the external vendor to ensure accuracy.
Process Area: OLA Management
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Base Practice: Example of Assessment Indicator
1.2.1 Determine operational items: Operational items corresponding to service level requirements (as defined in SLAs) are mapped.
1.2.2 Group related operational items: Categories of related operational items have been created (e.g., by hardware, by type of equipment: laptops, CD-ROMs, etc.).
1.2.3 Identify suppliers of operational items: A list of potential suppliers for each set of operational items exists. Information exists that shows other suppliers were evaluated.
1.2.4 Finalize service suppliers: Team can explain how the service supplier was selected, what criteria were used to finalize the decision, etc.
1.2.5 Prepare OLAs: An OLA between the IT organization and each supplier exists.
1.2.6 Agree to OLAs with suppliers: Final OLA exists and is approved.
1.2.7 Report on OLA performance: Regular reports as specified in the OLA are generated and distributed to appropriate parties.
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.2 OLA Management
Questions
Base Practice: 1.2.1 Determine Operational Items
What is the process by which the key operational items required to support the SLAs are determined? What personnel are assigned responsibility for identifying these key operational items?
Base Practice: 1.2.2 Group Related Operational Items
What criteria are used to group operational items together?
Please describe or list the various groupings of operational items.
Does each defined group of operational items typically fall under one OLA?
Base Practice: 1.2.3 Identify Suppliers of Operational Items
What procedure is used to identify potential service providers?
Do service providers include both internal and external organizations?
What information about the service providers is collected?
Are any preliminary negotiations conducted with the suppliers to determine what type of contractual terms they would consider?
Base Practice: 1.2.4 Finalize Service Suppliers
What selection criteria (e.g. cost, training requirements, tools required) are considered when choosing the service providers?
Does a formal system for evaluating potential suppliers exist to aid in the selection process?
Is a list of alternative or back-up suppliers determined?
Base Practice: 1.2.5 Prepare OLAs
How are OLAs prepared and negotiated with suppliers? Is a standardized procedure followed for each OLA?
What do OLAs contain (e.g. workloads, cost of service, targets, type of support etc.)? Does the OLA outline each key business application (e.g. penalties, tools used to maintain the OLA)?
Has a document specifying standard contents of an OLA been created? Are OLAs prepared according to the specifications in this document?
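The questions above probe the standard contents of an OLA document; as a minimal sketch only (the field names are assumptions, not part of the original disclosure), those contents might be modeled as:

```python
# Hypothetical sketch of the standard contents an OLA document might carry.
from dataclasses import dataclass

@dataclass
class Ola:
    supplier: str
    operational_items: list      # the grouped operational items the supplier delivers
    service_levels: dict         # e.g. {"restore time (hours)": 4}
    monitoring_procedure: str    # how operational levels are monitored
    cost_structure: str
    failure_penalties: str = ""  # what constitutes failure, and penalties (if appropriate)
    approved: bool = False       # set once IT and the supplier sign off
```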
Process Capability Assessment Instrument
Process Area 1.2 OLA Management
Process Area Description: OLA Management involves the creation, management, reporting, and discussion of Operations Level Agreements with suppliers and vendors. OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement, as supporting services are guaranteed in the OLA. An OLA is an agreement between the IT organization and those delivering the constituent services of the system. Operational Level Management involves the following:
OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
Whether a particular service level can be met, and how it will be met through operational levels
Which provider(s) can supply a service, or part of a service
Roles and responsibilities
What constitutes a failure to meet the OLA, and corresponding penalties (if appropriate)
Procedures for monitoring operational levels
Cost structures
How the service will be measured
Contractual arrangements with the providers
Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further, formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
OLA Reporting: The production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs remain aligned with current business needs and, if necessary, are updated accordingly. In enterprise environments, this process becomes more complex as more components are required to perform these services.
Questionnaire
Process Area 1.2 OLA Management (Service Partner Management)
Work Product list
Process Area 1.2 OLA Management (Service Partner Management)
Sample OLA document
Service level performance reports
OLA compliance reports
Vendor/supplier selection information
Responsibility matrix
OLA Communication flow
Job Description of OLA manager and staff
To help users when required.
To manage problem resolution.
To log and document problem types, their frequency, and associated workarounds.
To produce management reports on levels of service and user satisfaction.
The Service Desk consists of the following functions:
Incident Management - An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible prior to passing them on to other areas.
Problem Management - A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
Request Management - Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
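As an illustration of incident/request logging with a priority assigned at receipt (a hedged sketch, not the patent's implementation; the identifier scheme and priority levels are assumptions):

```python
# Hypothetical sketch: log an incident/request and assign a priority at receipt.
import itertools
from datetime import datetime, timezone

_ids = itertools.count(1)  # incident/request IDs quoted back to the user

def log_incident(description, severity):
    """Create a log record for a newly reported incident or request."""
    return {
        "id": next(_ids),
        "received": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "priority": {"high": 1, "medium": 2, "low": 3}[severity],
        "status": "open",
        "assigned_to": None,   # set later when responsibility is assigned
    }
```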
Base Practices
Process Area Service Desk
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.3 Service Desk
Questions
Base Practice: 1.3.1 Call Attention
What methods are available to users for communication with the Service Desk, and do users have access to resources needed for such communication?
Are all users informed how and when to contact the Service Desk? If so, how?
Do all users receive the same level of support? If no, how does support differ?
Do you gather call statistics like total volume of calls and number of abandoned calls? If so, can we access this information?
Is there a need for after-hours support? If so, what type of after-hours support does the Service Desk provide?
Base Practice: 1.3.2 Incident/Request Logging
What is the procedure for logging incidents/requests, and is this followed in all cases?
Is a priority level assigned to the incident/request at time of receipt, and how is it determined?
Base Practice: 1.3.3 Incident/Request Qualification
Do Service Desk personnel have access to a catalogue/database of frequently occurring incidents and their solutions, and does its format allow for rapid access and search?
How often is this catalogue/database accessed to provide an immediate solution or work-around to the user? (e.g., all calls, some calls, very few calls)
How frequently is this catalogue/database updated?
What other resources exist to aid Service Desk personnel with immediate incident resolution?
Base Practice: 1.3.4 Incident/Request Assignment
Is there a defined time frame within which the incident/request should be assigned and is it usually followed?
Are users notified of receipt, status and approximate time to resolution (if possible) of incident/request and provided with the incident/request ID?
By what process is the appropriate personnel determined for handling an incident/request?
Is a defined system used for assigning responsibility for an incident/request to the appropriate personnel? (e.g. trouble tickets are generated and sent to appropriate personnel)
Is a record made of the person to whom the incident/request is assigned?
Base Practice: 1.3.5 Incident & Problem Resolution
Are non-resolved incidents/problems escalated according to procedures defined in SLAs?
How are appropriate resources notified that the incident/problem has been escalated?
While problem resolution is in process, is a work-around solution determined and conveyed to the user?
When a problem is escalated or a resolution has been determined, is the log updated?
Does the Service Desk or the party to whom the problem was escalated "own" the problem?
Base Practice: 1.3.6 SLA & OLA Tracking and Monitoring
What is the system for tracking and monitoring the problem resolution process for an incident/request?
What types of issues (e.g. excessive reassignments, deviations from estimated task times) are flagged and what action is taken to address them?
Base Practice: 1.3.7 Resolution Confirmation
Are users notified of incident/request resolution?
Is confirmation sought from the user to verify that the incident/request has been resolved satisfactorily? If such confirmation is not obtained, what is done?
Base Practice: 1.3.8 Incident/Request Closure
How is an incident/request closed? What records are made?
If it exists, is a solution database updated with the incident/problem and solution for future reference?
What parties are informed of a closure?
Base Practice: 1.3.9 Trends and Repetitive Incident Analysis
Are incidents analyzed to detect trends and identify underlying problems? If so, by what process? Are users notified of known incidents proactively before they report the incident?
Base Practice: 1.3.10 Service Level Control
Does the Service Desk generate reports comparing actual service levels (e.g., number of incidents resolved at initial call, resolution time by severity) with target service levels?
Who receives these reports and for what purposes?
How are service level targets set, and what is the process for reviewing/updating them?
Do the users communicate their views of support to the Service Desk and agree with the Service Desk's assessment of incident and problem management?
Base Practice: 1.3.11 Receive Requests
Are requests handled immediately or do they require provisioning/approval?
Does the Service Desk coordinate the approval of requests with the appropriate functions and notify the requester of approval/rejection?
If a request requires functions outside the Service Desk, how does the Service Desk pass responsibility to the appropriate personnel?
Do SLAs exist between the Service Desk and the end user community?
Do agreements exist between the Service Desk and the next level of support (internal or external)?
Generic Questions for Process Area
Are the policies for Service Desk operation outlined in a document? How are employees made aware of these policies? What mechanisms are in place to ensure policies are followed?
How frequently are Service Desk policies reviewed and/or modified? What is the process for such policy updates?
Are the current staff and resources of the Service Desk adequate for satisfactorily meeting user needs?
What type of qualification and/or training do Service Desk personnel have?
Are Service Desk operations periodically reviewed in order to identify and implement potential improvements? Who manages this process?
Are any metrics computed to assess Service Desk performance? If so, please describe them. Are targets for these metrics established and performance assessed against them?
Process Capability Assessment Instrument
Process Area 1.3 Service Desk
Process Area Description: The Service Desk provides a single point of contact for users with problems or specific service requests. The Service Desk forms part of an organization's strategy to enable users and business communities to achieve business objectives through the use of technology.
The Service Desk's main objectives are:
To help users when required.
To manage problem resolution.
To log and document problem types, their frequency, and associated workarounds.
To produce management reports on levels of service and user satisfaction.
The Service Desk consists of the following functions:
Incident Management - An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible prior to passing them on to other areas.
Problem Management - A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
Request Management - Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
Questionnaire
Work Product list
Process Area 1.3 Service Desk
Trouble ticket
Employee training handbook
User surveys
Performance reports (resolution, response, trending, etc.)
SLA
Sample log record for an incident/request
Staffing plan document
Base Practices
Process Area Service Pricing
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.4 Service Pricing
Questions
Base Practice: 1.4.1 Determine projected service/equipment costs and depreciation schedule for distributed technical environment
What is the process for projecting costs of service and equipment capacity enhancements? How frequently does this occur?
Can costs be projected on a customer-group basis?
Can service costs be broken down by implementation, operation and overhead for each service? How are depreciation schedules determined?
Are projected costs and depreciation figures used to decide between leasing and purchasing?
Currently, what is the approximate percentage of leased and purchased equipment?
Base Practice: 1.4.2 Determine if chargeback is appropriate
What criteria are used to determine which items will be charged back?
Are departments or other appropriate parties informed of the items with associated charges? Are there any known "hidden costs" (e.g. users spending business time helping other users)?
What types of costs are not charged to department/project/individuals?
Base Practice: 1.4.3 Determine usage trends
What information is collected on service/equipment usage? Where is this information stored?
What type of trending analysis is performed using this data (e.g., frequency of calls to the Service Desk per department)?
For what purposes are trend data used?
Base Practice: 1.4.4 Prepare budgets and ensure that data is valid and correct
What is the process for creating budgets? Does each department follow a standard procedure?
What information is analyzed while preparing budgets? Are projected service/product costs, expected growth and past budgetary needs considered?
Are periodic audits of the budget performed to ensure the use of accurate and valid data?
Do budgets include contingencies for unanticipated growth or product/service needs?
Base Practice: 1.4.5 Identify product/service options associated with service level objectives
Are SLAs or service level objectives reviewed to verify that all needed products/services are being offered?
At present, are all products/services covered by SLAs?
If a cost cannot be tied back to an SLA, does an evaluation of the need or justification for that service/product occur?
Who is responsible for the process of checking product/service options against SLAs?
Base Practice: 1.4.6 Define products/services in terms useful to customers
How are appropriate parties informed of services/products offered?
Is information about additional costs for non-standard products/services sent out?
Base Practice: 1.4.7 Determine service price costs and model/evaluate costs
How are service costs finalized? Who is in charge of this process?
What type of cost modeling is done? Why was this strategy settled on?
Has a pricing strategy been defined? If yes, please describe.
Does the pricing strategy map back to the services being provided?
Base Practice: 1.4.8 Determine cost allocation plans for services and equipment
What is the procedure for creating cost allocation plans for services and equipment?
Process Capability Assessment Instrument
Process Area 1.4 Service Pricing
Process Area Description: Service Pricing is comprised of the following areas:
Service Costing & Pricing projects and monitors costs for the management of operations, provision of service, equipment installation, etc. Based upon the projected cost and business needs, a service pricing strategy may be developed to re-allocate costs within the organization. If developed, the service pricing strategy will be documented, communicated to the users, monitored, and adjusted to ensure that it is both comprehensive [remainder lost to a non-reproducible table].
Billing & Accounting: The purpose of Billing & Accounting is to gather information for calculating actual cost, determine chargeback costs, and bill users for services rendered.
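A usage-based chargeback calculation of the kind described might, as a rough sketch (the rates and units are assumptions, not from the original), look like:

```python
# Hypothetical sketch: apply a unit rate to gathered usage to determine
# the chargeback amount billed to each department.
def chargeback(usage_by_dept, rate_per_unit):
    """usage_by_dept maps department -> units consumed; returns amount billed."""
    return {dept: units * rate_per_unit for dept, units in usage_by_dept.items()}

bills = chargeback({"Finance": 120, "HR": 45}, rate_per_unit=2.50)
# {'Finance': 300.0, 'HR': 112.5}
```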
Questionnaire
Work Product list
Process Area 1.4 Service Pricing
Depreciation schedules
Sample budget
Service price listing or catalogue
Chargeback algorithm or strategy
Chargeback reports
User Administration (1.5)
Base Practices
Process Area User Administration
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.5 User Administration
[The first interview questions for this process area were lost to a non-reproducible table.]
3. Are there regularly scheduled training programs that address User Administration procedures? If so, what type of training is provided?
4. Do you find that adequate resources are allocated for User Administration? Please elaborate.
Process Capability Assessment Instrument
Questionnaire
Work Product list
User Administration Maintenance Status Report
Termination List
Change of Name Request Form
Access Control Profile Document
Network Group Access Property Document
Base Practices
Process Area Production Scheduling
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.1 Production Scheduling
[Interview questions partially lost to a non-reproducible table; a surviving fragment reads: "...utilizing job dependencies, data set, events or physical calendar)?"]
Is there one master production schedule for prioritizing purposes? If not, how many are there and what are their functions?
Base Practice: 2.1.8 Link multi-step batch processes based on success/failure of previous jobs
1. What procedures are in place for initiating, monitoring or stopping jobs?
2. What type of notification is sent/alert system is provided due to a job failure?
3. How is verification sent that a job has been successfully completed?
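A minimal sketch of linking multi-step batch jobs on the success/failure of the previous step, with notification (illustrative only; the job and notification interfaces are assumptions):

```python
# Hypothetical sketch: run a chain of dependent batch jobs, halting and
# alerting when a step fails so dependent jobs are not started.
def run_chain(jobs, notify):
    """jobs: ordered list of (name, callable returning True on success)."""
    for name, job in jobs:
        if job():
            notify(f"{name}: completed successfully")
        else:
            notify(f"ALERT {name}: failed; halting dependent jobs")
            return False
    return True

# Example usage: run_chain([("extract", extract_job), ("load", load_job)], notify=print)
```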
Base Practice: 2.1.9 Initiate batch jobs according to application schedule
1. Can tasks be initiated and managed on the key server platform? On other server platforms?
2. Which applications require significant batch processing? Is this done daily, hourly?
3. Is a separate scheduling component available for managing batch processes?
Base Practice: 2.1.10 Performance and recovery planning
1. Are jobs monitored for completion/failure?
2. How is the production schedule changed to account for failures and delays?
3. How does one recover/rollback from failed jobs? Is it automatic?
4. Can batch streams be modified?
5. What is the procedure for terminating or canceling jobs?
6. When is production performance surveyed and results verified?
Base Practice: 2.1.11 Maintain schedule information
1. What workload balancing capabilities are provided?
2. Are forecasting mechanisms available? When/how are they used?
3. What reports are produced that provide network traffic data?
4. What tools are used to quantify that the production schedule is meeting goals?
5. What other historical data is used to maintain performance?
Generic Questions for Process Area
1. What are the procedures/policies for the current version of production scheduling? (e.g. Process of submitting a job.)
2. What reports are produced for management, operations and customers that show production performance measurements and verifications? How are these used to manage the production scheduling process?
3. Explain the training provided to the production scheduling staff regarding procedures, systems and interaction with other functions and their importance (e.g. event management, backup and restore, fault recovery, etc.).
4. Is the process/procedure for production scheduling reviewed for continuous improvement? If yes, how?
5. Has there been a shortage of resources while performing the production scheduling process?
6. When continuous improvements are executed, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
7. What objectives are established to measure the quality of operation standards and processes?
8. What reports are distributed to customers, management and staff that provide feedback and verify adherence regarding the production scheduling process/procedure?
Process Capability Assessment Instrument
Process Area 2.1 Production Scheduling
Process Area Description: Production Scheduling determines the requirements for the execution of scheduled jobs across a distributed environment. A production schedule is then put in place to meet these requirements, taking into consideration other processes occurring throughout the distributed environment (e.g., software and data distribution, and remote backup/restoration of data).
Questionnaire
Process Area 2.1 Production Scheduling
Work Product list
Process Area 2.1 Production Scheduling
Example of an existing production schedule and work flow diagrams
Existing operating procedure manuals
Scheduling software documentation, detailed and quick reference
Examples of custom (or packaged) screens prompting for scheduling information needed to execute jobs or job streams
Phone list of who to call for different types of problems
Existing reports that analyze business customers' performance
Existing reports that review network traffic and hardware during the monitoring process
Existing reports that review network traffic trend data to validate job performance
Results of any network performance testing across the network (e.g., RMON, SNMP, etc.)
Base Practices
BP Number 2.2.1
BP Name Re-initialize printers
BP Description Re-initializing printers can range from powering a printer on/off to starting/stopping a print queue in a distributed environment.
Example: By powering off a printer, all data that has been sent to the printer queue will be deleted. This can be particularly helpful in a situation such as the following: a PostScript print job gets sent to a printer that cannot handle PostScript.
BP Number 2.2.2
Process Area Print Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.2 Print Management
Process Capability Assessment Instrument
Process Area 2.2 Print Management
Process Area Description: Output and Print Management monitors all of the printing done across a distributed environment and is responsible for managing the printers and the printing for both central and remote locations.
Questionnaire
Process Area 2.2 Print Management
Work Product list
Process Area 2.2 Print Management
Operator's manual for output/print management personnel
Customer's manual for available output/print resources
Examples of any forms/paper stock used for non-typical print jobs
List of equipment/supplies used for non-typical print jobs (e.g., feeders, inks, etc.)
Base Practices
Process Area File Transfer and Control
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.3 File Transfer and Control
Questions
Base Practice: 2.3.1 Transfer files on a scheduled basis
Has the schedule of file transfers to and from devices been determined? If yes, what is the schedule? Who is responsible for this task? Is it under version control? Does the schedule encompass all aspects of the service provider at the organizational level?
Can file transfers be initiated by the sender and/or the receiver? What is their customer level (e.g., administrator, all customers, some customers, etc.), and do they write scripts or assign priority levels via an interface?
Can concurrent file transfers be performed? If yes, please explain how?
Can automated conditional file transfers be performed? If yes, please explain how?
Base Practice: 2.3.2 Determine backup and recovery scheme
Are file transfer events logged? If yes, how, and is this information kept for historical purposes?
Are failed file transfers retried? If yes, by whom or is it automatic?
Has the backup/recovery scheme for a file transfer been invoked? If no, why? If yes, what was the end result (e.g., lost data, transfer complete, etc.)? Who is responsible for creating the scheme, and is it under version control?
Is there notification of a successful/failed file transfer? If yes, how is this performed (e.g., e-mail, banner message, report, etc.), and to whom (e.g., administrator, initiator, etc.)? Is fault management made aware of failures? If yes, how?
Is there a check for successful file transfers? If yes, how are these checks performed and logged?
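For illustration (not from the original), a scheduled transfer with retries, logging and a recovery hook might be sketched as follows; transfer() stands in for whatever mechanism is actually in use.

```python
# Hypothetical sketch: attempt a file transfer, retry failures, log each
# outcome, and fall back to the recovery scheme when retries are exhausted.
import logging
import time

log = logging.getLogger("file_transfer")

def transfer_with_retry(transfer, source, target, retries=3, delay_s=60):
    for attempt in range(1, retries + 1):
        try:
            transfer(source, target)          # stand-in for the real mechanism
            log.info("%s -> %s succeeded on attempt %d", source, target, attempt)
            return True
        except OSError as exc:
            log.warning("%s -> %s failed on attempt %d: %s", source, target, attempt, exc)
            if attempt < retries:
                time.sleep(delay_s)           # wait before retrying
    log.error("%s -> %s abandoned; invoking recovery scheme", source, target)
    return False
```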
Base Practice: 2.3.3 Transfer files on an ad hoc basis
Are files transferred on an ad hoc basis? If yes, what are the most common reasons and by whom? Do these transfers interfere with other process areas (e.g. production scheduling, output/print management, etc.)?
Who can perform or initiate an ad hoc file transfer (e.g. administrators, all customers, customers with permission, etc.)? Is it performed by senders, receivers or both?
Can ad hoc files be transferred concurrently? If yes, please explain how this is done.
Base Practice: 2.3.4 Location, format, and file verification
Can space for a transferred file be dynamically allocated? If no, what is the customer's recourse if there is a problem?
Can file types (e.g. VSAM, PDS, etc.) be converted? If yes, what is the most common? How are they converted? What tools do you use to convert them?
Have file formats (e.g. ASCII to EBCDIC) been converted? If yes, what is the most common? What tools do you use and how are they converted?
4. Are files being compressed/decompressed at source and at target? If yes, how?
5. Can files be renamed at source and/or target? Can files be created, written over or deleted? If yes to either, please explain the process of how this is done.
6. Can transferred files be merged or appended to? If yes, is this method used often?
7. What are the most common platforms encountered during file transfer? Has there been a problem with any particular platform? If yes, explain.
Are files transferred being encrypted/decrypted? If no, why? If yes, please explain how. What tools are being used?
Generic Questions for Process Area
Are file transfer times defined and/or evaluated for number of destinations, machines and platforms? If yes, explain.
Is there a policy established and maintained for file transfer and control? Is this process followed?
3. Are adequate resources available for file transfer and control? If no, explain.
4. Is training provided for all new employees within file transfer and control? If no, explain. Are subsequent training times available for file transfer and control personnel to learn new processes, technologies, etc.? If yes, explain. Are proactive plans made for future personnel needs? If yes, explain.
5. Are reports to customers, administration and other groups provided as a means for process update and feedback? If yes, who gets these reports? If no, explain how feedback is provided.
Is the file transfer and control process and procedure reviewed for continuous improvement purposes? Are these improvements deployed and measured against process and business goals?
Are strategic goals in place for file transfer and control? If yes, what are they, and can they be measured? Are metrics collected on the file transfer and control process? Is this process automated with the use of software, tools, etc.? Are the metrics analyzed for process parameters and deviation identification?
Process Capability Assessment Instrument
Process Area 2.3 File Transfer and Control
Process Area Description: File Transfer and Control initiates and monitors the files being transferred throughout the system as part of the business processing (e.g., nightly batch runs). File transfers can take place in a bi-directional fashion between hosts, servers and workstations.
Questionnaire
Process Area 2.3 File Transfer and Control
Work Product list
Process Area 2.3 File Transfer and Control
Sample of a file transfer and control schedule
Sample of a backup and recovery scheme
List of file types and formats used during file conversions
Reports, metrics, concerns and/or issues regarding file transfer and control
DNS
To ensure that uninterrupted addressing services are provided to devices within an enterprise
PA's Metrics: Directory Services
% of lost or misplaced files per month
% of employees with restricted directory access
Number of times per month communication between directories is disrupted
DNS
Average number of IP addresses available
% of IP or DNS problems reported per month
Average response time to IP or DNS problems
% of customers experiencing down time due to IP and DNS issues
Base Practices
Process Area Network Services
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.4 Network Services
Questions
Base Practice: 2.4.1 Populate Directories
What is the process for adding first-time directory information to new directories? Is there a different process for populating old directories? If so, please describe.
How often does populating new directories occur and who approves this?
How are directory permission properties defined and gathered?
How often are directory permission properties surveyed and altered?
Does the process of populating existing directories take various system needs into consideration? (E.g. Does directory population follow a convenient and logical schedule?)
Base Practice: 2.4.2 Manage Directories
Who is responsible for managing the network directories? What is the overall process for managing the directories?
How is the directory content volume monitored and managed?
How are the relationships between directories managed?
How often is the interface between different directories updated?
How is the content of different directories maintained?
Do you have directories that require synchronization? What is the process for synchronizing the directories?
Base Practice: 2.4.3 Determine Organizational Impacts
Are organizational and business impacts taken into consideration when determining and designing various network services? (e.g. directory structure, permissions, etc.) If yes, how?
What processes are in place to determine organizational impacts?
Base Practice: 2.4.4 Extract Information from Directories
What type of information do you gather or extract from directories (e.g. authentication information, access control profiles, etc.)?
How do you store the information collected from directories?
Are you creating reports from this data? If yes, what types of reports are you creating?
Is anyone managing inconsistencies or flagging abnormalities? If yes, who, and how are they flagging or correcting the abnormalities? Is there communication between the Network Services and Fault Management or Monitoring teams when severe abnormalities occur?
Base Practice: 2.4.5 Identify Component Options
What physical and logical components have you identified in your environment? How did you determine what components were needed for your environment?
Is there a process for categorizing different network components? Are different people responsible for the different types of components? If yes, who are they and do they just receive training on the specific component types they are responsible for?
Base Practice: 2.4.6 Document Strategic Drivers (e.g. geography, security, etc.)
What are some of the strategic drivers identified for providing the optimum network services? Is there an order of importance for the strategic drivers you have identified? If yes, please elaborate.
Are your strategic drivers documented? Are they revisited when a business or organizational change happens? How are they kept in line with the business or organizational needs?
Base Practice: 2.4.7 Outline Guiding Principles for Communication Address Planning
Do you have any guiding principles in place that allow the address team to develop and share a common vision for all addressing functions? If yes, what are some of these guiding principles?
Are there common processes and practices across several of your networking functions? If yes, which ones?
Is there a lot of cross functionality between your network groups? If yes, please explain the cross functionality?
Base Practice: 2.4.3 Address and Domain Maintenance
How often is address maintenance performed? What processes are used for the addition, deletion, maintenance, and modification of addresses? How often is domain maintenance performed? What processes are used for the addition, deletion, maintenance, and modification of domains?
How are the address tables maintained?
What is the process for maintaining DNS?
Base Practice: 2.4.5 Address Design Process
1. Are address design and technical network diagrams created? Are they updated? If so, how often?
2. Are conflicts or network issues taken into consideration when the address system is being designed? If yes, what conflicts or issues are considered and how are the network solutions modified? Is there a process to follow for making changes?
Base Practice: 2.4.6 IP Technology Research Process
How often is emerging technology considered and evaluated for the current network?
Are there defined processes that determine whether a new technology would enhance or improve the current network system? If so, what are they?
If a new technology is being considered what type of testing or research is done to ensure that the technology meets the business needs?
Generic Questions for Process Area
1. Are training classes provided and do all new Network Services personnel attend training on the defined Directory Maintenance and Communication Address Planning processes? If so what type of training ensures adequate execution of these established directory management and address servicing procedures?
2. Are current resources and procedures periodically assessed with the intent to promote continuous improvement? What is the approval process for proposed solutions? Are all potential stakeholders involved in the decision process? How often are these solutions implemented and by whom?
3. How are routine network services and continuous improvement solutions evaluated for impact?
4. Do you find that the resources allocated to network services are adequate? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.4 Network Services
Process Area Description: The Network Services Process Area is comprised of the following two areas:
Directory Services is the function of publishing and maintaining organized inventories of information resources to make them available to networked customers. Directory Management can apply to internal directories as well as the publishing of directory information for global directory services.
DNS ensures that IP services are provided to devices within an enterprise. Whether dealing with a new or existing capability, the communications address management function demands that high-level business requirements be taken into consideration.
Questionnaire
Work Product list
Process Area 2.4 Network Services
Access Control Profiles
Network Traffic Flow Diagrams
IP Address Availability Report
DHCP Address Lease Contracts
IP Address Tables
Copy of current documented Address Plan
Base Practices
Example: Backups can be scheduled and performed during low network traffic times, e.g., beginning an incremental backup at 1:00 am and confirming that the backup is completed before network traffic picks up in the morning, or doing complete backups on weekends when network traffic is low.
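The scheduling policy in the example above could be expressed, purely as a sketch (the start time and weekday convention are assumptions), as:

```python
# Hypothetical sketch: incremental backups at 1:00 am on weekdays,
# complete backups on weekends when network traffic is low.
from datetime import datetime

def backup_type_for(now: datetime):
    """Return which backup to run at this time, or None outside the window."""
    if now.hour != 1:
        return None                                          # backups start at 1:00 am
    return "full" if now.weekday() >= 5 else "incremental"   # Saturday/Sunday = 5/6
```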
Process Area: Backup/Restore/Archiving
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
[Assessment indicator table not reproducible; a surviving fragment reads: "...across the organization reflect any new/modified procedures and all appropriate parties within the organization receive notification."]
Process Capability Assessment Instrument: Interview Guide
Process Area 2.5 Backup/Restore/Archiving
Questions
Base Practice: 2.5.1 Test Central/Remote Backup/Restore/Archival Procedure Periodically
What type of periodic testing of the backup/restore/archival procedures is performed?
Are both central and remote backup/restore/archiving tested?
How (in what format) and to whom are the testing results reported?
Have your tests typically been successful? What constitutes a successful test?
Base Practice: 2.5.2 File Backup Steps and Considerations
Have the backup requirements been defined and documented for the following items:
Customer, operations, applications responsibilities
Remote vs. central backups
Frequency of backups
Components to be backed up
What type of application or automated process is used for backup?
Are backup and restore processes managed centrally or remotely?
What type of backup (full, incremental, export) is performed and how often?
What media (tape, magnetic disc, cartridge etc.) is used for backup? Why was this medium chosen?
If the system is unavailable to customers during backups, how is system unavailability managed?
If parts of the system are down during a scheduled backup, is a manual backup performed when the system gets back online?
Where is backed-up/archived data stored? For what length of time is data stored?
Does the backup and restore process require manual intervention?
What type of monitoring of the backup process is performed?
Are backup records made? If yes, what information is documented?
Base Practice: 2.5.3 File Restoration Steps and Considerations
What events warrant a restoration, and how is the process initiated? Are these policies documented?
Can customers submit requests for particular files to be restored? How are customer requests logged and tracked?
Can single/multiple objects be restored from the backup media?
Can a full/incremental backup be restored centrally and remotely?
What type of monitoring is done of the restoration process?
Are notification procedures in place to inform customers and service providers of success/failure of restoration?
Base Practice: 2.5.4 Compress and Index Information Being Archived
Is archiving triggered automatically or must it be manually initiated?
How is data compressed and indexed prior to being archived?
Base Practice: 2.5.5 Notify that Backup/Restoration/Archival Process has been Completed Successfully/Failed
Who receives notification of the outcome of the backup/restore/archival process?
How is this notification sent?
What action, if any, is taken on receipt of the notification?
Base Practice: 2.5.6 Perform Housekeeping on the Backup/Archival Library
What maintenance tasks are performed on the backup/archival library? Who is responsible for maintaining the library?
Is storage media labeled? What information is recorded on the label? Does labeling follow documented specifications?
How many copies of backup data are made, and how many generations are maintained? Are copies stored in different locations?
How is integrity of stored and retrieved files ensured (e.g. resurrecting relationships)?
Base Practice: 2.5.7 Synchronize Backups and Restores
Does a predefined schedule for regular backups and restores exist? If so, when do backups and restores occur?
What is the process for scheduling a backup/restore not regularly planned? Who manages this process?
Are there any indicators in the application that can help signal when a backup is needed if it does not fall on one of the scheduled backup times?
Generic Questions for Process Area
Are any quantitative targets set with regard to the backup/restore/archive process (e.g. % of successful backups per month)? If so, what are they? Are these targets achieved? How frequently are they evaluated?
Is the backup/restore/archive process periodically reviewed and new technologies evaluated with the purpose of identifying potential improvements? How frequently does this occur?
Do you find that adequate resources are allocated to managing the backup/restore/archive process? What type of training do backup/restore/archive personnel receive?
Process Capability Assessment Instrument
Process Area 2.5 Backup/Restore/Archiving
Process Area Description: Backup/Restore/Archive Management considers all of the back-ups and restorations that need to take place across a distributed system for master copies of data. Archiving saves and stores information across the distributed environment. These processes may occur centrally or in distributed locations.
Questionnaire
Process Area 2.5 Backup/Restore/Archiving
Work Product list
Process Area 2.5 Backup/Restore/Archiving
Backup requirements document
Sample backup log
Document outlining schedule of backups (e.g., full, incremental, differential)
SLA outlining backup and restore agreements
Monitoring (2.6)
PA Number 2.6
PA Name Monitoring
Base Practices
BP Number 2.6.1
BP Name Poll for current status
BP Description If necessary, gather information on the current status of the distributed environment. This may negatively impact performance, based upon the polling cycle.
Example: This information could be gathered through SNMP gets, pings, or management agents.
BP Number 2.6.2
BP Name Gather and document monitoring information
BP Description Receive information from element management systems. Receive information from components in the distributed system. Reformat information to a standard message type.
Example: Ping, service and process data should be recorded in a standard log or database for tracking purposes.
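A toy sketch of base practices 2.6.1/2.6.2 (poll components and record status in a standard log); a real system would use SNMP gets or management agents rather than this placeholder ping check, and the command flags shown assume a Unix-style ping.

```python
# Hypothetical sketch: ping each host and record up/down in a standard log.
import logging
import subprocess

log = logging.getLogger("monitoring")

def poll_once(hosts):
    for host in hosts:
        up = subprocess.call(["ping", "-c", "1", host],       # Unix ping flags assumed
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL) == 0
        log.info("host=%s status=%s", host, "up" if up else "down")
```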
BP Number 2.6.3
BP Name Classify events/Assign severity levels/ Assess impact
BP Description Once the data or event is pulled in, the event is defined or classified, and a severity level and impact are assigned.
Process Area Monitoring
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.6 Monitoring
Questions
Base Practice: 2.6.1 Poll for Current Status, if Necessary
How is polling of the current status of the network done?
Does polling impact the performance of the network? If so, how?
Base Practice: 2.6.2 Gather and Document Monitoring Information
From what sources is monitoring information gathered (e.g. element management systems, network components)?
Has a document been created that specifies the type of information that should be collected for monitoring purposes? Are these specifications followed?
In what format is monitoring information stored?
Base Practice: 2.6.3 Classify Events/Assign Severity Levels/Assess Impact
How do you classify or define your events? What system or applications do you use for gathering, defining, and classifying events?
How are severity levels and system impact determined?
Base Practice: 2.6.4 Analyze Faults
What type of preliminary analysis of a fault event occurs? Is the extent of the fault investigated? If so is this process automated?
Does your monitoring tool have the capability to correlate multiple events?
Can your tool provide a high level view and then enable "drilling down" to analyze a fault?
Base Practice: 2.6.5 Route Faults to be Corrected
How is routing of faults to the appropriate resource managed?
Are fault notifications anticipated due to other errors being received?
Is a determination of the customers/devices affected by the fault made and are those customers notified?
If a fault puts the system at risk, are appropriate resources (e.g. help desk) notified?
Once the fault is identified, are associated alarms suppressed?
Is fault handling tracked to ensure successful resolution? (e.g. trouble ticket logged)
Does a fault log exist, and is the appropriate level of documentation made? If so, please describe the information recorded.
Are fault statistics reported and managed? Are targets set for statistics relating to fault management and how well are these met?
Base Practice: 2.6.6 Map Event Types to Pre-defined Procedures
What types of events activate pre-defined resolution procedures?
How are these pre-defined procedures managed? How was the decision made of which events to set up with pre-defined solutions? How frequently is the collection of such events updated?
What mechanism is in place to check for successful execution of pre-defined procedures when necessary?
Base Practice: 2.6.7 Log Events Locally and/or Remotely
Where are event records stored?
For what time length is event data stored?
Who accesses the event log and for what purposes?
Base Practice: 2.6.8 Suppress Duplicated/Informational Messages Until Thresholds are Reached
What mechanism checks for duplicated/informational messages and clears them from the event log unless a threshold is reached?
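One possible mechanism (an assumption-laden sketch, not the patent's method) counts repeats and suppresses an informational message until its threshold is reached:

```python
# Hypothetical sketch: log the first occurrence of a message, suppress
# repeats, and resume logging once the repeat count reaches the threshold.
from collections import Counter

class Suppressor:
    def __init__(self, threshold=5):   # threshold value is an assumption
        self.counts = Counter()
        self.threshold = threshold

    def should_log(self, message):
        self.counts[message] += 1
        count = self.counts[message]
        return count == 1 or count >= self.threshold
```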
Base Practice: 2.6.9 Display Status Information on Console(s) in Multiple Formats
1. What types of current status information can be obtained?
In what formats can such status information be viewed (e.g. graph, map, log)?
Base Practice: 2.6.10 Display Status Information in Multiple Locations
In what locations is status information displayed?
Do personnel other than operations staff access this status information? If so who does and for what purposes?
Base Practice: 2.6.11 Issue Commands on Remote Processors/Hosts
What types of commands can be run on remote processors/hosts?
Can commands to remote processors/hosts be initiated either manually or by an application?
Base Practice: 2.6.12 Set up and Change Local and/or Remote Filters
For what types of purposes are router filters set up?
How frequently does the need arise for these filters to be changed?
What is the procedure for changing filters? Who manages this process?
Base Practice: 2.6.13 Set up and Change Local and/or Remote Threshold Schemes
How are thresholds determined for critical nodes?
Do these thresholds meet SLAs?
Under what circumstances are these thresholds changed?
What is the procedure for changing threshold schemes? Who controls this process?
Base Practice: 2.6.14 Analyze Traffic Patterns
What information about network traffic is collected?
What types of conclusions are sought in analyzing the traffic data? Are there predefined guidelines for the analysis that needs to be done?
Who performs this analysis and how frequently?
Base Practice: 2.6.15 Send Broadcast Messages
Are there provisions for sending broadcast messages? What circumstances necessitate broadcast messages?
Who has the ability/responsibility for sending broadcast messages?
How frequently are broadcast messages sent?
Generic Questions for Process Area
What personnel are involved in the monitoring process? What roles do they play? What type of relevant qualification/training do they have?
Are personnel trained to decipher monitoring data, understand the processes involved in monitoring a distributed environment, and make changes to the monitoring system?
Are the monitoring software and process periodically evaluated with the intent of identifying potential improvements? Who facilitates this evaluation process?
Do you feel that adequate resources are allocated for monitoring purposes? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.6 Monitoring
Process Area Description: Monitoring verifies that the system is continually functioning in accordance with defined SLAs. Monitoring consists of the following functions:
Event Management: receives, logs, classifies and presents event messages on a console(s) based on pre-established filters or thresholds. Event information is sent from such components as hardware, applications/system software, communications resources, etc. If an event is classified as "negative" (i.e., a fault), event management forwards the event on to fault management for diagnosis and correction.
Fault Management: once a negative event has been brought to the attention of the system, actions are undertaken within Fault Management to define, diagnose and correct the fault. Although it may be possible to automate this process, human intervention may be required to perform at least some of these management tasks.
Questionnaire
Work Product list
Process Area 2.6 Monitoring
Sample of event log
Network status map
Reports on traffic patterns
Reports on faults
Base Practices
Process Area Performance Management
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.7 Performance Management
Questions
Base Practice: 2.7.1 Monitor Resources Utilization/Performance to Ensure Adequacy of Resources
How are systems/applications/network workloads monitored to check for adequacy?
What condition qualifies a resource as inadequate, and what action occurs if an inadequacy is noted? Are these procedural policies documented?
Who is responsible for monitoring adequacy of resources?
How is trending data reported to the service provider for planning?
Base Practice: 2.7.2 Establish Thresholds for Each Critical Node
How are thresholds measured and determined for managed resources?
Do these thresholds meet SLAs?
Base Practice: 2.7.3 Prioritize Information and Flag Abnormalities
How is utilization monitored vis-a-vis thresholds?
As utilization is monitored, what types of abnormalities are flagged?
What is the procedure for handling abnormalities and who is responsible for ensuring that the necessary action occurs?
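Illustratively (a sketch under assumed data shapes, not the original implementation), utilization can be compared against per-node thresholds and abnormalities flagged for the responsible party:

```python
# Hypothetical sketch: flag critical nodes whose utilization exceeds threshold.
def flag_abnormalities(utilization, thresholds):
    """Both arguments map node name -> percentage; returns nodes over threshold."""
    return {node: pct for node, pct in utilization.items()
            if pct > thresholds.get(node, 100.0)}

flags = flag_abnormalities({"db01": 92.0, "web01": 40.0},
                           {"db01": 85.0, "web01": 80.0})
# {'db01': 92.0} -> handled per the documented abnormality procedure
```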
Base Practice: 2.7.4 Capture, Save, Summarize and Collate Necessary Capacity Statistics
Are capacity statistics collected on an on-going basis?
For how long is this capacity data saved?
What types of summary or trend reports on capacity are generated? How often?
Who reviews these reports and for what purposes?
Base Practice: 2.7.5 Create Reports on Utilization/Capacity/Performance
What types of reports on utilization/capacity/performance are generated?
Are guidelines for the format and contents of regular reports documented?
Base Practice: 2.7.6 Disseminate Reports to Appropriate Parties
1. Who receives the utilization/capacity/performance reports and for what purposes?
2. How frequently are these reports distributed?
Base Practice: 2.7.7 Determine Where Performance Requires Short-term Adjustments
Are adjustments to performance data made to account for down time related to repairs, upgrades, etc. (to ensure trending information is not skewed)? If so, in what situations are adjustments made?
Who decides on the appropriate adjustments, and on what basis?
Base Practice: 2.7.8 Isolate the Cause of the Performance Problem
Is system-wide data gathered and analyzed to identify the source of a performance problem? How is this data reported? Does any trending occur?
What is the mechanism or procedure by which the cause ol a performance problem is isolated using system-wide data?
Generic Questions for Process Area
What personnel are involved in the Performance Management process? What roles do they play? What type of relevant qualification/training do they have?
Is a documented set of procedural policies followed in activities related to managing performance? Are any data collected for use in assessing performance management? If so, please describe the information collected and any metrics that are computed. Are targets for the metrics set and performance evaluated against those targets?
Do you feel that adequate resources are allocated to performance management? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.7 Performance Management
Process Area Description: Performance Management ensures that the required resources are available at all times throughout the distributed system to meet the agreed-upon SLAs. This includes the monitoring and management of end-to-end performance based on utilization, capacity and overall performance statistics. If necessary, Performance Management can make adjustments to the production environment to either enhance performance or to rectify degraded performance.
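For illustration only, a minimal sketch of the threshold-based monitoring implied by this description (and by Base Practices 2.7.2 and 2.7.3); the threshold table and function names are hypothetical.

    # Hypothetical sketch: compare utilization against per-node thresholds.
    thresholds = {("server01", "cpu"): 0.85, ("server01", "disk"): 0.90}

    def check_utilization(node: str, metric: str, utilization: float) -> bool:
        """Return True and report an abnormality when a threshold is exceeded."""
        limit = thresholds.get((node, metric))
        if limit is not None and utilization > limit:
            print(f"ABNORMALITY: {node} {metric} at {utilization:.0%} (limit {limit:.0%})")
            return True
        return False

    check_utilization("server01", "cpu", 0.92)  # flagged for follow-up action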
Questionnaire
Work Product list
Process Area 2.7 Performance Management
Capacity reports
Utilization reports
Performance reports
Document listing thresholds for managed resources
Base Practices
References
Process Area Security Planning & Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument Interview Guide
Process Area 2.8 Security Planning & Management
Questions
Base Practice 2.8.1 Define Security Objectives
What types of issues are covered by the formal security policy?
Was the security policy submitted to management for approval?
Is the security policy documented and available to customers and management?
Base Practice 2.8.2 Develop Security Plan and Policies
Please describe the contents of the security plan.
What was the process for creating the security plan and policies?
Who is involved in the creation of the security plan/policies and who views the completed document?
Base Practice 2.8.3 Obtain Feedback & Update Security Plan
What is the procedure by which new factors that affect the system's security are determined and incorporated into security planning?
Who is responsible for identifying and monitoring factors that might necessitate changes to the current security plan?
How does the security planning function receive information on planned changes to the distributed environment? Who is responsible for communicating such information? How are developments of new technology (that threatens or enhances security) tracked and taken into consideration for security planning?
Base Practice: 2.8.4 Establish Security
List all security software (encryption, authentication, virus protection, remote access, proactive evaluation, etc.) that currently protects your system.
What other types of security measures have been implemented?
How are customers informed of the importance of network security and their responsibilities in supporting security?
Base Practice: 2.8.5 Receive Information from Human Resources Regarding Employee Comings and Goings
How is information on employee comings and goings communicated by Human Resources?
How long after an employee's departure is the account disabled?
Who is responsible for creating and deleting accounts?
Base Practice: 2.8.6 Maintain Accounts and IDs
Who is responsible for maintaining accounts, passwords and IDs?
Are customer, supervisor and resource profiles maintained?
Do any shared login ids exist on the system? If so, for what purposes?
Does a default "guest" login ID exist on the system? If so, for what purpose and how are access rights controlled?
Are there any specifications for valid customer passwords, such as minimum length, character combinations etc.?
How frequently are customers required to change their passwords? Are customers required to change their password after an administrative reset (e.g. customer forgets password)?
Are customer accounts locked out when consecutive failed logins occur? If yes, how many failed login attempts cause a lock-out? How long is the account locked before it is reset automatically?
Are customer accounts disabled when they are inactive for a set period of time? If so, what is this time period?
Base Practice: 2.8.7 Log Security Events
What types of event information are logged for security monitoring purposes?
Where are these logs stored and for what time period?
Who has access to the security event logs and for what purposes?
How are log records protected from alteration by unauthorized personnel?
Base Practice: 2.8.8 Check for Viruses and Clean up any Found
What forms of virus protection does your system have?
Are viruses checked for only when a virus scan is explicitly ordered by the customer, or does the virus checker implicitly monitor all file accesses? If the former is the case, is there a mechanism to ensure customers routinely run virus scans?
How frequently are updates to the anti-virus product received?
Base Practice: 2.8.9 Audit Logs
Is the security log monitoring process automated? If so, what types of events generate alerts?
Are the logs reviewed regularly for abnormalities that might not be automatically flagged?
What types of summary reports are created from the log information? Who receives these reports and for what purposes?
Base Practice: 2.8.10 Take Corrective Actions for Security Violations
What is the procedure for dealing with security violations? Are these procedural guidelines documented and viewed by security personnel?
Are security violations handled off-line?
When are security violations escalated and what is the process for doing so? Are escalation policies documented?
What types of reports are generated on security violations? Who reviews these reports and for what purposes?
Base Practice: 2.8.11 Monitor Security Plan for Its Effectiveness
At time of security plan creation, were any means for judging plan effectiveness specified? If so, what are these methods, and are they routinely employed?
How frequently are security data reviewed to assess effectiveness of security plan? Who is responsible for performing these reviews?
Are any quantitative targets related to security set? Are these typically met? If they are not met, what is done?
What types of explicit testing (e.g. running hacker tools) of the system's security are performed? How frequently?
Generic Questions for Process Area
Do you find that adequate resources are devoted to planning, implementing and monitoring system security?
Are security policies and procedures documented and communicated to appropriate personnel? What type of training do security personnel receive?
Process Capability Assessment Instrument
Process Area 2 8 Security Planning & Management
Process Area Description: Security Planning initially involves defining the organization's security policy and developing a security "plan of action". An ongoing function of Security Planning is to evaluate the effectiveness of the existing security plan, particularly in the context of changing technologies, and to plan for future security needs.
Security Management controls both physical and logical security for the distributed system. Due to the nature of a distributed environment, security may need to be managed centrally, remotely or through a combination of the two methods. Security Management also handles the logging of proper and illegal access, provides a way to audit security information, rectifies security breaches and addresses unauthorized use of the system.
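For illustration only, a minimal sketch of the account lock-out and security-event logging policies probed by the interview questions above; the parameter values and names are hypothetical.

    # Hypothetical sketch of failed-login tracking with automatic lock-out reset.
    import time

    MAX_FAILED_LOGINS = 3      # consecutive failures before lock-out
    LOCKOUT_SECONDS = 15 * 60  # period after which the account resets automatically

    failed = {}  # account -> (consecutive failure count, time of last failure)

    def record_failed_login(account: str, security_log: list) -> None:
        count, _ = failed.get(account, (0, 0.0))
        failed[account] = (count + 1, time.time())
        security_log.append((time.time(), account, "failed login"))  # retained for audit

    def is_locked_out(account: str) -> bool:
        count, last = failed.get(account, (0, 0.0))
        if count < MAX_FAILED_LOGINS:
            return False
        return (time.time() - last) < LOCKOUT_SECONDS  # unlocks after the reset period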
Process Capability Assessment Instrument Questionnaire
Process Capability Assessment Instrument Work Product list
Process Area 2.8 Security Planning & Management
Security policy document
Security plans and procedures document
Sample of security log
Security violations reports
Report on any tests of the security system
Physical Site Planning & Management (2.9)
Base Practices
References
Process Area Physical Site Planning & Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Interview Guide
Process Area 2.9 Physical Site Planning & Management
Questions
Base Practice 2.9.0 Determine physical site needs
Is there a procedure in place that plans for the control and management of construction, development or changes to the physical site? If yes, what is it? Is it followed? Who is responsible for this plan?
Is physical site planning handled via one plan or several? If more than one, why? Is feedback collected for one or all plans? If yes, how often and by whom?
Are plans determined by balancing implementation costs with estimated business benefits? If yes, by whom (e.g. team, individual, management, etc.)?
Does planning consider the following requirements and functions: hardware capacity and layout, HVAC and fire suppression, power, structural planning (i.e. mitigating manmade or natural disasters), and integration with security planning & management? If yes, explain.
Are business goals established for physical site planning and incorporated? If yes, by whom? How often is the plan reviewed?
Base Practice 2.9.1 Test environmental/regulatory control plans periodically on a per-site basis
Is testing performed regarding environmental and regulatory controls on a periodic basis? If yes, how often for each site and by whom? If no, explain.
What are the main environmental/regulatory concerns for each site? Please prioritize and explain. Are the plans for testing updated to include new equipment, regulations, etc.? If yes, how often are they reviewed and by whom?
Base Practice 2.9.2 Notify appropriate party of environmental failure on a per-site basis
When a failure is encountered are there identified contacts who you notify for each site? If yes, how is notification done (e.g. pager, e-mail, phone, etc.)?
What are the most common failures within each site and how often do they occur? How is feedback from the various sites collected (e.g. reports, conference calls, e-mail, etc.)?
Are data collected regarding the types of failures, response time, locations, reasons, etc.? If yes, what data are collected and who receives this data? Are data collected on a manual basis or automatically?
Base Practice 2.9.3 Monitor progress of corrective actions to failure on a per-site basis
Are corrective actions, in response to previous failures, monitored per site? If yes, how are they monitored and who is responsible for this? Are other related groups notified of changes or issues concerning any corrective action? If yes, how and when? If no, explain.
Are metrics collected on the progress or status of physical site management procedures for each site? If yes, how often and are these collections done manually or are there software/automation tools in use? Are these metrics analyzed against goals and quantified objectives? If yes, by whom?
Base Practice 2.9.4 Monitor physical site management plan for its effectiveness on a per-site basis
Are business goals and strategies for each site used to measure the success or failure of corrections and/or the general operation procedures for physical site management?
Are the physical site management tasks continuously improved? If yes, are these improvements deployed and measured for effectiveness?
Are enough resources available, in terms of equipment, space, procedures, software and/or personnel, at each site? If no, explain how the addition of resources would improve the effectiveness of a site (e.g. better monitoring, quicker response time, accurate data, etc.).
Base Practice 2.9.5 Provide feedback on physical site management to physical site planning function
Is feedback from physical site management forwarded to physical site planning? If yes, how (e.g. conference calls, reports, e-mail, etc.)?
Are the plans, procedure reviews, issues and problems for each site collected and addressed via one centralized group or is each site a completely separate entity? If separate, does each communicate with physical site planning?
Generic Questions for Process Area
Is there a written policy regarding physical site management's procedures? If yes, is it followed? Is version control enacted on this plan? Are change control documents regarding the plan cut and forwarded to appropriate departments?
Is training made available to new hires within physical site management? Is follow-up training covering new technologies, procedures, etc. provided? Are plans made for future employment needs within physical site management?
Is the entire physical site management process reviewed for continuous improvement? If yes, by whom and how often? Are the improvements deployed and measured against business goals and metrics? If yes, by whom?
Process Capability Assessment Instrument
Questionnaire
Work Product list
Process Area 2.9 Physical Site Planning & Management
Procedures noting physical site planning (e.g. expansion, new layout, etc.)
Procedures regarding environmental/regulatory control plans for each site
Failure monitoring/reporting procedures for each site
Reports noting status of physical site management for each site
List of risk issues for physical site management (e.g. earthquakes, wild fires, temperature extremes, brown/black outs, frequency of lightning strikes, tornadoes, etc.) for each site
Base Practices
References
Process Area Mass Storage Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument Interview Guide
Process Area 2.10 Mass Storage Management
Questions
Base Practice: 2.10.1 Monitor and Control Storage Usage
What type of system or tool do you have in place for monitoring and controlling storage usage? What utilities does it have?
Can the tool support all the operating systems within the distributed environment?
Does the tool have the ability to assess the physical file placement and determine space availability?
Does the tool allow for reordering of files to eliminate fragmentation?
What media types are used for storage? Can the tool monitor all these media types?
Who oversees or manages the monitoring and control process? What are their responsibilities?
Base Practice: 2.10.2 Define Usage Standards for Storage Media
What information is specified as part of the storage media's usage standards? Are system descriptions, operational procedures, help-desk/problem resolution contacts, Mass Storage Management configuration files etc. included? Where is the usage standards documentation stored and who accesses these documents? Who is responsible for maintaining usage standards documentation?
How frequently are usage standards reviewed and updated? What is the process for doing so?
Base Practice: 2.10.3 Disk Space Management for Mass Storage
What is the procedure for determining shared disk space requirements?
On what basis is disk-space partitioning done?
How is disk space allocation kept track of?
How frequently are disk space requirements reevaluated and space reallocated?
Base Practice: 2.10.4 Rectify Problems with Stateless File Systems
What mechanisms are employed to rectify backup problems resulting from stateless file systems?
Has an assessment been made of how well these mechanisms deal with the problem? If so, what was the outcome of the assessment?
Base Practice: 2.10.5 Locate Datasets According to Access Priority
Does a storage media hierarchy (based on ease of access) exist and is data stored at particular levels based on defined strategies or priorities? If so, what are the levels of the hierarchy (e.g. online, nearline, offline) and how is data assigned to a particular level?
Are data moved around within the hierarchy? What circumstances initiate such location changes? Is there an automated process for discerning which datasets should be moved (e.g. the storage management software keeps track of the number of times particular files are accessed and determines which files should be moved to make retrieval more efficient)? If manual intervention is required, what needs to be done and who does it?
Do you have any means of gauging the efficiency of your data organization at a particular time? If so how frequently is the efficiency assessed? Are any efficiency-related targets set?
Base Practice: 2.10.6 Tape Management
What is your procedure for requesting, locating and loading tapes?
Where are tapes stored? How is the location of each tape in storage tracked?
How do you ensure that all tapes are labeled? What information is recorded on the label?
Generic Questions for Process Area
Are problems ever experienced in running backups due to large data volumes, inadequate bandwidth or sub-optimal hardware/software support?
What type of training do storage management personnel receive on standards, policies and actual operation of the mass storage management system?
Are procedures audited to verify that standards and policies are being followed?
Are storage management operations periodically reviewed with the purpose of identifying potential improvements?
Do you find that the resources devoted to mass storage management satisfactorily meet the storage needs of the organization?
Process Capability Assessment Instrument
Process Area 2.10 Mass Storage Management
Process Area Description: Mass storage involves those activities related to the handling of various types of centralized and distributed storage media (e.g., tapes, disks, etc.), including the monitoring and controlling of storage resources and their usage. Mass Storage Management can be viewed as providing the top level of storage management, with support from Archiving and Backup/Restore Management.
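For illustration only, a minimal sketch of the kind of automated access-priority placement discussed under Base Practice 2.10.5; the access-count policy is a hypothetical example, not the disclosed method.

    # Hypothetical sketch: assign datasets to storage tiers by access frequency.
    def choose_tier(accesses_last_90_days: int) -> str:
        if accesses_last_90_days >= 10:
            return "online"    # frequently used: keep on fast media
        if accesses_last_90_days >= 1:
            return "nearline"  # occasional use: cheaper, slower media
        return "offline"       # dormant: archive media such as tape

    def rebalance(access_counts: dict, current_tiers: dict) -> dict:
        """Return the datasets whose tier should change, with their new tier."""
        return {name: choose_tier(n) for name, n in access_counts.items()
                if choose_tier(n) != current_tiers.get(name)}

    rebalance({"payroll_1998": 0}, {"payroll_1998": "online"})  # -> {'payroll_1998': 'offline'}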
Questionnaire
Process Area 2.10 Mass Storage Management
Work Product list
Process Area 2.10 Mass Storage Management
Storage policies document
Naming standards document
Tape management procedures
Usage level reports
Base Practices
References
Process Area Release Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly: New release management personnel receive training on the process. Subsequent training is provided for new technologies, software or procedures. Future employee requirements are addressed.
Process Resource GP3.3 Plan for human resources proactively: Release management handles releases according to the stated policy vs. ad hoc.
GP3.4 Provide feedback in order to maintain knowledge and experience: Release management receives feedback from SLA, Service Desk, etc. via e-mail, reports or meetings regarding changes, concerns or issues.
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument Interview Guide
Process Area 3.1 Release Management
Questions
Base Practice 3.1.1 Analyze change request priorities
Have change request priorities been analyzed (emergency, non-emergency)? How?
Are rollout plans put into place? Who is involved in this?
How are emergencies documented?
Base Practice 3.1.2 Confirm technical feasibility of the release package
Are SLAs considered for technical/compliance issues? If no, why not?
How is the technical feasibility of the release package confirmed (e.g. meetings,
Process Capability Assessment Instrument
Process Area 3.1 Release Management
Process Area Description: Release Management is the overall process of delivering an on-time release into production. Release Management is broken down into several areas, which are described below.
Release Planning
Release Planning coordinates the release of updates to the distributed and central sites. Because any change in the distributed environment may impact other components, releases must be planned carefully to ensure that a change will not negatively impact the distributed system.
Release Planning defines the content of a release; groups new or changed software, data, procedures, training material and upgrade packages for distribution and implementation; applies versions to the release components; and creates a release schedule.
Release Tracking
Release Tracking is the process of monitoring the progress of release contents and all releases.
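For illustration only, a minimal sketch of the grouping-and-versioning step that Release Planning performs; the ReleasePackage structure and plan_release function are hypothetical names.

    # Hypothetical sketch: group changed items into a versioned release package.
    from dataclasses import dataclass, field

    @dataclass
    class ReleasePackage:
        version: str  # applied per the Release Planning version strategy
        software: list = field(default_factory=list)
        data: list = field(default_factory=list)
        procedures: list = field(default_factory=list)
        training_material: list = field(default_factory=list)

    def plan_release(version: str, changes: list) -> ReleasePackage:
        """Group new or changed items, by kind, into one release package."""
        pkg = ReleasePackage(version)
        for kind, item in changes:  # e.g. ("software", "billing-app 2.1")
            getattr(pkg, kind).append(item)
        return pkg

    plan_release("R2.0", [("software", "billing-app 2.1"),
                          ("training_material", "billing quick reference")])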
Questionnaire
Process Area 3.1 Release Management
Work Product list
Process Area 3.1 Release Management
Documented release procedures
Example of a past release schedule
Example of configuration parameters
Example of build procedures and scripts
Example of operations procedures
Example of customer procedures
Example of customer training materials
Example of legacy data interfaces
Example of early release rollout process successes and failures
Base Practices
References
Process Area Change Control
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument Interview Guide
Process Area 3.2 Change Control
Questions
Base Practice 3.2.1 Change Initiation
How is a change initiated? Is a change-request form completed and submitted? What information is required on a change-request form?
Is confirmation of request receipt sent?
Where is a change-request logged? What information is recorded when a change-request is logged? Does each change-request receive a priority level? If so, what are the various priority levels and what action or service level does a particular priority level warrant? Does a documented policy specify these actions/levels?
Does the requestor specify the criticality of the change or do change control personnel determine the request's priority level? If the latter is the case, on what basis is a criticality level assigned to the request?
Base Practice 3.2.2 Change Impact Analysis/Assessment
What type of analysis of the change's impact is performed? What issues are considered? Are both technical and business implications taken into consideration?
Is the effort required to complete the change determined?
Who performs the analysis and who reviews it?
What are the consequences of the change impact analysis (i.e. is the change request rejected if the change analysis yields particular results)?
Base Practice 3.2.3 Change Approval
Whose approval is needed before a change request can be implemented? Does the person(s) whose approval is necessary depend on the scope or priority level of the change?
How is approval obtained and documented?
Is the change requestor notified of change approval or rejection?
Base Practice 3.2.4 Change Communication and Scheduling
Once approval is obtained, what is the process for estimating the time and scheduling the change? Are other completion times and dates factored into the estimated time of a change to be implemented?
Does a master schedule exist on which the change is noted, or how is the scheduled change communicated to appropriate parties?
Base Practice 3.2.5 Change Implementation Planning and Preparation
Who is notified of an impending change?
How does change notification take place?
How much time before the implementation of the change does notification occur?
If the system or parts of the system will be unavailable during the change implementation, how is this unavailability managed?
Base Practice 3.2.6 Change Request Tracking
What is the process for tracking the implementation of a change request?
What events or conditions related to the change request are logged, i.e. when is the change request status updated?
Is the log reviewed to identify changes that might be overdue or that require additional action?
Base Practice 3.2.7 Change Implementation
If necessary, are change requests escalated/re-routed? What is the process for doing so? Is this process documented and followed?
How is successful completion of the requested change certified or verified?
Who is responsible for verifying the successful completion of the change?
Base Practice 3.2.8 Change Backout and Contingency Planning
For what types of changes are back-out or contingency plans devised? Does a policy exist specifying changes that require such plans?
Where are back-out/contingency plans documented?
How frequently (often, rarely, never) are these back-out or contingency plans utilized?
Base Practice 3.2.9 Change Reporting
What reports are generated pertaining to changes? What are the contents of these reports?
Do the reports follow documented guidelines on format and content?
How frequently are these reports created and disseminated?
Who views these reports and for what purposes?
Base Practice 3.2.10 Change Post-Implementation Reviews
Is requestor notified of change completion and confirmation received?
What is the process for closing a change request?
Is an audit trail of each change request stored? If so, what documentation is saved?
Can the audit trail for a particular change request be obtained? If so, how?
Generic Questions for Process Area
Are any metrics (e.g. percent of change requests completed on time, percent of requests put on hold) collected to measure performance of the change control process? If so, what are they?
Are any quantitative performance targets set for change control? If so, please describe them. Is performance evaluated against these targets?
What type of training do change control personnel receive? Are employees aware of all document policies and procedures?
Is the change control process periodically reviewed/evaluated with the intent of identifying potential improvements?
Process Capability Assessment Instrument
Process Area 3.2 Change Control
Process Area Description: Change Control is responsible for coordinating and controlling all change administration activities within the enterprise environment (i.e. document, impact, authorize, schedule, implementation control). Change Control determines if and when a change will be carried through in the enterprise environment. Change potentially covers all events that impact application software, systems software, or hardware.
Changes may often be divided into categories, for example:
New capability, such as new applications or hardware components
Modifications, which can change functionality, improve performance, etc.
Maintenance, typically to correct errors
Emergency, which requires immediate attention and correction/implementation
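For illustration only, a minimal sketch of a change-request record carrying the categories listed above; the priority ordering is a hypothetical example of the prioritization probed in the interview questions, not the disclosed method.

    # Hypothetical sketch of a categorized, prioritized change request.
    from dataclasses import dataclass

    CATEGORIES = ("new capability", "modification", "maintenance", "emergency")

    @dataclass
    class ChangeRequest:
        identifier: str
        category: str      # one of CATEGORIES
        description: str
        status: str = "logged"

    def priority_for(category: str) -> int:
        """Emergencies require immediate attention; the rest queue behind them."""
        return {"emergency": 1, "maintenance": 2,
                "modification": 3, "new capability": 4}[category]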
Questionnaire
Process Area 3.2 Change Control
Work Product list
Process Area 3.2 Change Control
Change request form
Sample change control log record
Change control reports
Complete audit trail of a change request
Impact analysis results
Master change control schedule
Example of back-out/contingency plan
Validation 3.3
Base Practices
References
Process Area Validation
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.3 Validation
Questions
Base Practice: 3.3.1 Determine what needs to be tested for the product
What is the process for identifying all that needs to be tested for a new product? Are business requirements reviewed and taken into consideration?
Has a general set of technical standards been defined for components of the distributed environment? If so, are the testing requirements defined to ensure that compliance with these standards will be tested?
For any product are there certain standard tests performed (e.g. capacity, operability, compatibility etc.)? If so, what are these tests?
Base Practice: 3.3.2 Prepare test plans
What tasks are completed while preparing test plans?
Is a test environment specified, and the necessary preparations detailed?
How is the appropriate testing approach and test model developed?
What test plan documents are produced? Are these a standard set of documents produced for every testing project? If not, how might they vary?
Are all resources required for the testing process identified? Who is in charge of identifying them,
(i.e. are others consulted for this decision or is this just done by the validation team)?
Who is involved in creating the final test plans? Who reviews the final test plan documents?
Base Practice: 3.3.3 Document test inputs and expected results
What document(s) are prepared detailing all test inputs to be used and the expected results? What other information do these documents contain? Are these documents prepared according to predefined specifications?
Are the test inputs/expected results directly linked back to individual testing requirements identified earlier?
Base Practice: 3.3.4 Install new product in test environment
Please describe the test environment used for testing. Does a single environment exist for all testing purposes?
Does the test environment cover all operating systems, configurations, applications, etc. that are in the production environment?
What tasks or activities are involved in preparing the test environment for the installation of a new product (e.g. verifying proper setup of hardware, software, network, clear data from previous tests, load test data in appropriate regions)? Are these procedures documented?
Can information be copied from the production environment to the test environment? If so, typically what information is transferred? How is this information transferred?
Is the product's installation method documented and installation issues noted? Does the installation follow a standard process or policy for all new installations? If yes, please describe this policy or process.
Base Practice: 3.3.5 Test product and evaluate results
Are all predefined testing requirements tested? Are any mechanisms in place to ensure that all specified test cases are run? If yes, what are these mechanisms?
Are any tools used for automated testing? If so, please describe them. Approximately what proportion of testing is automated and what proportion is performed manually?
Who manages/controls the testing process? What are his/her responsibilities?
In addition to testing the product functionality, is the product's business functionality verified (i.e. does the product meet the business requirements for which it is intended)? If so, what is the process for doing so?
If appropriate, is the product tested on customers to check system navigation/ease of use and adequacy of training/job aids that accompany the product?
What reports or documents are produced as the output of the testing process? What information is presented and who receives this information? Have reporting guidelines been defined?
Base Practice 3.3.6 Perform regression testing on environment and system's functionality
What is the process for identifying the requirements for regression testing?
Is any tool employed for automated regression testing? If so, please elaborate. Does this tool meet all regression testing requirements? If not, where does it fall short? How are these shortcomings addressed?
Are any manual or automated test scripts created and retained for reuse during future regression testing activities? If yes, are these test scripts periodically updated or changed to accommodate new processes or requirements? Who updates these scripts?
If regression testing results show that the product has unintended impacts on other areas, what is done? Is the change rolled back? Who decides that a roll back should occur and at what point during the process does this happen?
Generic Questions for Process Area
Does a designated "validation team" exist? If so, please describe the roles and responsibilities of members of the team. How does the team coordinate its activities?
What other groups does validation interface with? Where do requests for testing of a particular product originate?
Are the testing process and new technologies periodically evaluated to identify potential improvements? Are associated future human resource requirements considered? How frequently does such a review occur? Who is involved in the process?
What type of training do testing personnel receive? Does formal training occur or does training primarily occur on-the-job?
Are any statistics collected for purposes of evaluating the testing process (e.g. percent of successful migrations of tested products)? If so please describe them and the method by which they are collected. Are targets for these metrics set? What is the process for assessing performance against these targets? How has performance been vis-a-vis the targets defined?
Do you find that adequate resources are allocated for validation activities? Please elaborate.
Process Capability Assessment Instrument
Process Area 3.3 Validation
Process Area Description: Validation involves testing potential hardware and software for the distributed environment prior to procurement to determine how well a product will fulfill the requirements identified. Validation also ensures that the implementation of a new product will not adversely affect the existing environment.
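For illustration only, a minimal sketch of the requirement-coverage check implied by the interview questions above (ensuring every identified testing requirement has a passing test); the requirement identifiers and function are hypothetical.

    # Hypothetical sketch: find testing requirements not yet covered by a passing test.
    def coverage_gaps(requirements: set, executed_tests: dict) -> set:
        """executed_tests maps a test name to a (requirement ID, passed?) tuple."""
        covered = {req for req, passed in executed_tests.values() if passed}
        return requirements - covered

    gaps = coverage_gaps({"REQ-1", "REQ-7"}, {"test_capacity": ("REQ-1", True)})
    assert gaps == {"REQ-7"}  # REQ-7 still needs a passing test before rollout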
Questionnaire
Work Product list
Process Area 3.3 Validation
Sample test plans (e g test requirements, test execution schedule)
Sample testing documents (e g test scripts)
Sample test report
Technical standards required of all products
Deployment 3.4
Base Practices
References
Process Area Deployment
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Process Measurement GP4.1 Establish measurable quality objectives for the operations environment: Deployment plan is based on strategic business needs vs. industry standards.
GP4.2 Automate data collection: Metrics are automatically collected from the deployment schedule vs. collected manually.
GP4.3 Provide adequate resources and infrastructure for data collection: Metrics automatically collected by deployment personnel are analyzed and reported. The deployment software tool may be linked to the physical site management schedule and might reflect a scheduling conflict via e-mail message.
Process Control GP4.4 Use data analysis methods and tools to manage and improve the process: Deployment is evaluated against performance goals and metrics for suggested improvements and revisions to the process.
Level 5 Assessment Indicators
Process Capability Assessment Instrument Interview Guide
Process Area 3.4 Deployment
stakeholders? If so, please describe how?
In the past, what have been the recurring problems and issues considered by the deployment schedule?
Do the deployment schedules allow time for "catch-up" or recovery time for deployment errors?
Base Practice 3.4.4 Report on progress of deployment plan
How are audits performed regarding rollout activities and reported? Are adequate resources provided for this task?
What mode of communication is used to distribute reports? How often?
Are qualified/quantifiable deployment milestones determined and reported to all internal/external groups?
Are customers provided with a contact person to communicate progress/issues/problems (e.g. service desk personnel, deployment contact)?
What data is collected and reported upon with deployment?
Base Practice 3.4.5 Disseminate reports to appropriate parties
1. Who receives reports noting progress/success/failures/concerns about deployment?
2. How often are these reports disseminated to internal/external stakeholders?
Base Practice 3.4.6 Provide feedback on the deployment to deployment planning
Do other departments monitor and respond to deployment feedback? If so, whom and what type of feedback do you receive?
How does the deployment team/personnel receive feedback from stakeholders (e.g. through service desk request tickets, deployment public mailbox, etc.)?
Is this information tracked and used for current and future deployment ease and troubleshooting?
Generic Questions for Process Area
Is training provided that reviews the deployment process/procedure? If yes, describe the training.
Is training provided for all customers affected by the deployment? If yes, describe the training.
Are the deployment activities and processes monitored for continuous improvement? If yes, how?
Have any changes been enacted and validated after they have been identified as a continuous improvement area?
Process Capability Assessment Instrument
Process Area 3.4 Deployment
Process Area Description: Deployment monitors the rollout schedule against the activities taking place to ensure that rollout happens smoothly according to the planned schedule. As there are many dependencies within a distributed system, deployment can become highly complex and must be synchronized.
In addition, numerous groups within and external to the organization will be involved in the rollout. Deployment is responsible for managing these groups, coordinating the information received from these groups, and determining whether or not the schedule will be negatively impacted by any activity taking place. If changes to the schedule are required, Deployment is responsible for coordinating the changes across all of the groups involved and for seeking management approval for the changes.
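For illustration only, a minimal sketch of the dependency check a synchronized rollout schedule of this kind requires; the step names and day numbers are hypothetical.

    # Hypothetical sketch: detect rollout steps scheduled before their prerequisites.
    def schedule_conflicts(planned_day: dict, depends_on: dict) -> list:
        """planned_day maps a step to its day; depends_on maps a step to its prerequisites."""
        return [(step, dep) for step, deps in depends_on.items()
                for dep in deps if planned_day[dep] >= planned_day[step]]

    schedule_conflicts({"server-upgrade": 1, "client-rollout": 1},
                       {"client-rollout": ["server-upgrade"]})
    # -> [('client-rollout', 'server-upgrade')]: the client rollout must be rescheduled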
Questionnaire
Process Area 3.4 Deployment
Work Product list
Process Area 3.4 Deployment
Example of a previous deployment plan
Example of training schedule/materials provided to employees who recently received a deployed application
Example of previous deployment reports
A copy of the standard procedures regarding deployment
Example of a backout strategy if deployment is not successful
Software & Data Distribution 3.5
Base Practices
BP Number 3.5.1
BP Name Identify Architecture appropriate for environment
References
Process Area Software & Data Distribution
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument
Process Area 3.5 Software & Data Distribution
Process Area Description: The Software and Data Distribution process allows software and data to be installed or updated on hosts, servers and workstations, providing customers with new and improved system functionality. Distributed architectures require compatibility between software and data on the various machines within the system and, at times, across different platforms (e.g., MVS host and Windows clients). Updates therefore must be carefully planned, synchronized, executed and, if necessary, regressed.
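For illustration only, a minimal sketch of the cross-platform compatibility check this description calls for before an update is distributed; the compatibility table and names are hypothetical.

    # Hypothetical sketch: hold back a distribution until required counterparts exist.
    COMPATIBLE = {("billing-app", "2.1"): {("billing-db-schema", "7")}}

    def safe_to_distribute(package: tuple, installed: set) -> bool:
        """Distribute only if every required counterpart version is already present."""
        return COMPATIBLE.get(package, set()) <= installed

    safe_to_distribute(("billing-app", "2.1"), {("billing-db-schema", "6")})  # False:
    # the workstation update waits until the host-side schema is upgraded to version 7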
Questionnaire
Process Area 3.5 Software & Data Distribution
Work Product list
Process Area 3.5 Software & Data Distribution
Example of Software Performance Evaluation
Example of "Manual" Distribution Package sent to Users
Example output of Software/Data Distribution Reports (successes/failures/etc.)
Example of Asset Inventory Report for Software/Data Distribution
Current copy of Detailed Design Plan
Example of Change Control Document
Base Practices
References
Process Area Migration Control
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Interview Guide
Process Area 3.6 Migration Control
Questions
Base Practice 3.6.1 Assemble the release package
Are tools, software, space and version controls always in place to secure a complete and bundled release? If yes, who does this and how? If no, explain.
Who does migration control coordinate this process with (e.g. Change Control, Validation, Deployment, Software and Data Distribution, etc.)? Explain the interactions.
Base Practice 3.6.2 Maintain integrity of all master release packages
Are all master release packages maintained in their own file and directory structure? If no, explain.
Are all documents for the master release package archived/maintained? If yes, by whom (e.g. owners, developers, programmers, etc.) and are they accessible?
Base Practice 3.6.3 Implement version control on release received from development
Is version control maintained on release software from development? If yes, how and who is responsible? How is feedback provided (e.g. reports, form provided, etc.)?
Is change control made aware of releases received from development? If yes, how? If no, explain.
Base Practice 3.6.4 Migrate proper versions of release from development to test environment
Are versions validated to ensure that the correct versions of releases are migrated into the test environment? If yes, how and by whom?
Is validation made aware of release migration into the environment? If yes, how? If no, explain.
Base Practice 3.6.5 Receive confirmation that release package has been tested successfully
How is confirmation received regarding successful testing? By whom and to whom is this information sent?
Are all schedules updated with this information? If yes, which ones? If no, why?
Base Practice 3.6.6 Notify appropriate parties of status of release package's migration
How are other parties notified of a release package's migration? Who would be the typical receivers of such information?
Do other parties supply feedback to migration control regarding concerns, problems or collaborative efforts? If yes, how is typical communication handled (e.g. e-mail, reports, meetings, etc.)?
Base Practice 3.6.7 Maintain migration libraries
Are migration libraries maintained? If yes, by whom and how? If no, explain how historical software or versions are kept.
How long are migration libraries maintained?
Generic Questions for Process Area
Is there a formal policy in place that covers the entire migration control process? If yes, is it followed and who is responsible for its maintenance? If no, explain.
Is there training in place for new employees? If yes, explain the training provided (e.g. ad hoc, on the job, formal, lecture). Is follow-up training provided on new technologies and procedures for all migration control employees? Explain.
Are data collected on the migration process? If yes, is this automated? Are metrics gathered noting more statistical information? If yes, explain what metrics are collected and what tools are used (e.g. software, programs, etc.).
Are strategic goals in place for migration control? If yes, what are they and are they measured against metrics? Are these metrics analyzed against business goals and reported on? If yes, how and by whom? If no, explain.
Is the migration control process reviewed for continuous improvement? If yes, are these improvements ever deployed and measured against metrics and business goals?
Are there enough resources provided for the migration control process (e.g. software, tools, personnel, etc.)? If no, explain.
Process Capability Assessment Instrument
Process Area 3.6 Migration Control
Process Area Description: Migration Control is the process of testing updates to the distributed system prior to their being released into the distributed environment. To control the updates as they move from the development into the production environment, Migration Control ensures that the proper updates are: received from development; versioned according to the version strategy of Release Planning; moved into the test environment; and moved from the test environment into the production environment after the pre-release tests have been successfully completed.
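For illustration only, a minimal sketch of the controlled stage-by-stage movement described above; the stage names follow the description, but the function itself is hypothetical.

    # Hypothetical sketch: a release package advances one controlled stage at a time.
    def next_stage(current_stage: str, tests_passed: bool) -> str:
        if current_stage == "development":
            return "test"        # after version checks against the release strategy
        if current_stage == "test" and tests_passed:
            return "production"  # only after successful pre-release tests
        raise ValueError("migration blocked: successful testing not yet confirmed")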
Questionnaire
packages are complete and are also referenced on the Release Management schedule?
Is version control maintained on software entities?
Are checks in place to validate that correct versions of fixes and releases have been processed from development into the testing environment?
Do you receive confirmation that a release package has been tested successfully?
Are other parties notified of the migration status regarding a release package?
Are migration libraries maintained?
Work Product list
Process Area 3.6 Migration Control
A copy of the policy or procedure guide regarding migration control
Samples of change requests noting migration control information
Samples of reports noting migration control status and future schedules
A copy of a migration control schedule/calendar for a typical software migration process
Base Practices
References
Base Practices
Process Area Content Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument Interview Guide
Process Area 3.8 Content Management
Questions
Base Practice 3.8.1 Content Development
Are meetings held to discuss verbal and graphical content of each application? If yes, who attends? How often are these meetings held?
Is a web template used to standardize information and aesthetics for every application? Who developed the template? Is there a purpose for its specific design?
Has a standardized list of approved text, image and multi-media formats been agreed upon? If yes, what are they? What was the process of composing this list?
Is there a procedure/policy regarding the content development? If yes, what is the procedure? Is the procedure followed?
Base Practice 3.8.2 Content Approval
1. Is there a procedure for content approval? If yes, what is it?
Who reviews content for approval purposes? If yes, whose concerns do they represent (e.g. legal, marketing, engineering, etc.)?
Are meetings held on a scheduled basis for content approval matters? If yes, who attends?
4. Is version control established for all web related documents?
Base Practice 3.8.3 Content Integration
1. Who is responsible for migrating documents into the production environment? Is migration performed on an ad-hoc basis or on a scheduled basis? What is the process for migrating documents?
How is old or outdated material archived/stored when new data is migrated onto the system to replace it?
Base Practice 3.8.4 Technical Review
Are technical standards and procedures established for content review? If yes, what are they? Who conducts these reviews?
How are technical problems/concerns reported to the author, customers, content management or the web master (e.g. meetings, reports, e-mails)? Does Content Management coordinate an action plan/corrections with the author (e.g. scheduled, prioritized, ad hoc, etc.)?
What are the most common technical problems encountered? What are the future technical threats or issues to be considered? How are these problems fixed or resolved?
Base Practice 3.8.5 Content Testing
1. Is the content tested before or after it is integrated into the production environment? When testing content, which environments/platforms are checked for problems/issues (e.g. UNIX, standalone, network)?
Who is responsible for testing? How is feedback provided from and to content management, customers, authors, web masters, etc.?
Base Practice 3.8.6 Content Restoration
Has any part or all of an archived web site ever been migrated into a production environment? If yes, explain the reason?
Who handles content restoration? What are the most common problems encountered when replacing current pages with older versions?
Is there an approval procedure as to what is restored and when? If yes, what is the process?
Base Practice 3.8.7 Content Aging
Does the web site contain date sensitive/volatile content that must be updated often? If yes, how often and by whom?
Is the site checked for relevant and current information on a scheduled basis? If yes, by whom? How frequently does such a check occur?
Are files removed from a site (e.g. erased, archived), updated to include historical information/content or both? Is content volume an issue?
Are metrics gathered regarding content management? If yes, explain what data is gathered, why, and who is it distributed to?
Generic Questions for Process Area
Is a policy established, maintained and followed for the entire content management process? If yes, please describe it.
Are there enough personnel available in content management to perform all necessary tasks and manage the different types of contents (video, voice, etc.)? If no, why?
Is training provided for new content management personnel? If yes, how is it performed (e.g. on the job, scheduled, ad-hoc)?
Is formal training provided on a continuous basis for all content management personnel? If yes, describe training.
Are metrics collected? Is software used to perform metric collection on an automated basis? If yes, what programs are used? What data is being collected?
Is the content management process reviewed for continuous improvement? If yes, is this process measured? How?
Are all documents processed through the content management personnel prior to migration in a production environment? If no, why?
Are strategic goals established for content management? Are these measured? If yes, how?
Is the content management process compared against goals and metrics? Do these comparisons lead to suggested improvements for the process? Are deployed improvements then validated via metrics?
Does content management lack any resources that are needed to perform tasks and follow procedure? If yes, what?
Process Capability Assessment Instrument
Process Area 3.8 Content Management
Process Area Description: Content Management represents the people, processes, and technologies that allow a net-centric site to maintain up-to-date, secure, and valid contents for its customers.
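For illustration only, a minimal sketch of the content-aging sweep probed under Base Practice 3.8.7; review-by dates of this kind are a hypothetical mechanism, not the disclosed method.

    # Hypothetical sketch: find date-sensitive pages overdue for review or removal.
    import datetime

    def stale_pages(pages: dict, today: datetime.date) -> list:
        """pages maps a URL to its review-by date."""
        return [url for url, review_by in pages.items() if review_by < today]

    stale_pages({"/pricing": datetime.date(2000, 6, 30)},
                datetime.date(2000, 7, 27))  # -> ['/pricing']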
Questionnaire
Work Product List
Process Area 3.8 Content Management
Content Management Manual
Example of any Content Management Reports
Example of a web page that progressed through Content Management cycle
Metrics collected for the Content Management process
Examples of tracking documents/reports noting the status of web pages throughout the Content Management process
Base Practices
BP Number 3.9.1
BP Name Acquire new/increased number of licenses
BP Description The purpose of this activity is to ensure that licenses are purchased, authorized, and tracked for software being used.
Example: Forecasting new or additional software licenses for new employees can be based on yearly reports and future company growth forecasts.
BP Number 3.9.2
BP Name Delete expired software and corresponding licenses
BP Description The purpose of this activity is to identify expired licenses of software that is no longer needed, ensure that the software is removed, and ensure that there is no violation of license agreements.
Example: A company is converting from WordPerfect 6.1 to Word 97. The licensing agreement with WordPerfect is about to expire. Once Word 97 is rolled out to the company, a cleanup needs to be performed to remove WordPerfect 6.1 before the expiration date of the licensing agreement.
BP Number 3.9.3
BP Name Support various license types
BP Description The purpose of this activity is to know if/when license expiration dates are getting close to
Process Area License Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument Interview Guide
Process Area 3.9 License Management
Questions
Base Practice 3.9.1 Acquire New/Increased Number of Licenses
How are new/increased number of licenses acquired? By whom?
Are the software programs used authorized by the original manufacturer? If no, explain.
Are housekeeping duties performed on license information? If yes, when? How?
Is the ability available to track, run detailed reports with version information, and measure the license management process regarding software licenses? If yes, how?
Does license management authorize license use? If yes, how?
Base Practice 3.9.2 Delete expired software and corresponding licenses
Is there a process in place for removing software with expired licenses? If yes, what is the process? How often does this occur?
Are there any reports or data collected on software where the license has expired? If so, what detailed information is collected on the expired software? What is done with the data?
Base Practice 3.9.3 Support Various License Types
1. Are various license types supported? If yes, identify.
How are license renewals handled? By whom?
Are notices sent when license expiration dates are near? If yes, how is notification sent?
Is unlicensed software searched for? If yes, how (physical, system)?
What is done when unlicensed software is discovered?
Generic Questions for Process Area
1. What is the license management process?
2. Are reviews for the license management process conducted for continual improvement?
3. If improvements are implemented, how are the outcomes measured?
4. What training is provided to new and existing personnel regarding the license management process?
5. What license management reports are generated to management for review/feedback?
6. What policy, standards or procedures have been established for license management?
7. What are the needs, priorities and quantitative goals for license management?
8. Are any resources lacking that would facilitate data collection regarding license management?
Process Capability Assessment Instrument
Process Area 3.9 License Management
Process Area Description: License Management ensures that software licenses are properly maintained. This is especially important since organizations are legally bound to maintain license arrangements. These arrangements are complex and can be based on the number of copies, on the number of shared servers, on dates, etc.
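For illustration only, a minimal sketch of the expiration tracking this description implies; the renewal window is a hypothetical parameter, not part of the disclosed method.

    # Hypothetical sketch: flag licenses that expire within the renewal window.
    import datetime

    RENEWAL_WINDOW_DAYS = 60

    def licenses_needing_attention(licenses: dict, today: datetime.date) -> list:
        """licenses maps a product name to its expiration date."""
        limit = today + datetime.timedelta(days=RENEWAL_WINDOW_DAYS)
        return [name for name, expires in licenses.items() if expires <= limit]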
Questionnaire
Process Area 3.9 License Management
Work Product list
Process Area 3.9 License Management
Sample Software License Agreement
Sample of Software License Purchases
List of available software with details (expiration date, number of customers, etc.)
Customer's Guide for Software Tracking Program
Base Practices
References
Process Area Asset Management
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.10 Asset Management
Questions
Base Practice 3.10.1 Manage and Maintain Asset Information
What tool or system is used to maintain asset information?
What attribute information is initially recorded about the assets? What types of updates are made, and how frequently?
For what purposes is asset information used (e.g. financial reporting, managing service levels etc.)? How does the asset management system interface with the other functions (such as accounting) that need access to asset information?
Does the tool enable detection and tracking of all hardware and software components installed on the network?
Can asset information be updated/deleted/browsed remotely and/or locally?
Base Practice 3.10.2 Audit Information in System
How is information in the system audited for correctness, completeness and accuracy?
How frequently do audits occur?
Can asset information be searched based on customer-defined parameters?
Who is responsible for overseeing the audit process?
Base Practice 3.10.3 Report on Discrepancies
What reports are generated based on discrepancies identified during the audit process? What information do these reports contain?
Are the content and format of these reports based on documented standards?
Who receives these reports and for what purposes?
What action is taken if discrepancies are identified? Does the action depend on the severity of the discrepancy? Are these procedures documented?
How frequently does this reporting process occur?
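As a minimal sketch of the audit-and-report cycle covered by base practices 3.10.2 and 3.10.3, assets discovered on the network can be compared with the system of record and the differences written to a discrepancy report. The asset identifiers, locations, and severity labels are hypothetical:

    # System of record vs. assets discovered by a network scan (hypothetical data).
    recorded = {"PC-0012": "Floor 2", "PC-0047": "Floor 3", "SRV-003": "Data center"}
    discovered = {"PC-0012": "Floor 2", "PC-0047": "Floor 1", "PC-0101": "Floor 4"}

    def audit_discrepancies(recorded, discovered):
        """Compare discovered assets against recorded assets and list discrepancies."""
        report = []
        for asset, location in discovered.items():
            if asset not in recorded:
                report.append((asset, "unregistered asset", "high"))
            elif recorded[asset] != location:
                report.append((asset, f"location mismatch: recorded {recorded[asset]}, found {location}", "medium"))
        for asset in recorded:
            if asset not in discovered:
                report.append((asset, "registered asset not found", "high"))
        return report

    for asset, issue, severity in audit_discrepancies(recorded, discovered):
        print(f"[{severity}] {asset}: {issue}")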
Base Practice 3.10.4 Archive Asset Information
How long is asset information stored? In what format and where is old asset information archived?
For what purposes and how frequently is archived asset information accessed?
Base Practice 3.10.5 Log all Assets in Inventory
How is it ensured that, in addition to assets in use, all assets in inventory are logged on the asset management system?
What is the updating process when an asset in inventory is moved for use?
Does the process for auditing informational accuracy cover assets in inventory?
Generic Questions for Process Area
Is the asset management tool/process periodically reviewed to identify potential improvements? If so, how frequently does this occur and who controls this process?
How is performance of asset management functions measured?
Are any performance targets (e.g. percent of incorrect asset data in system) for the asset management process defined? If so, what are they and how is performance assessed against these targets?
Do you find that the existing asset management system adequately meets the organization's asset information needs?
What type of relevant qualifications and training do asset management personnel have?
Process Capability Assessment Instrument
Process Area 3.10 Asset Management
Process Area Description: Asset Management ensures that all assets are registered within the inventory system and that detailed information for registered assets is updated and validated throughout the asset's lifetime. This information will be required for such activities as managing service levels, managing change, assisting in incident and problem resolution and providing necessary financial information to the organization.
Questionnaire
Work Product list
Process Area 3.10 Asset Management
Example list of assets and details related to each asset
Sample asset log
Audit reports
Discrepancy reports (if different from above)
Procurement (3.11)
PA Number 3.11
PA Name Procurement
PA Purpose Procurement is responsible for ensuring that the necessary quantities of equipment (both hardware and software) are purchased and delivered on time to the appropriate locations. Procurement is also responsible for logging all assets into the inventory as they are received.
PA's Base Practices:
Maintain vendor information
Receive and log request
Identify vendor and place order
Track orders
Ensure timely/accurate delivery & log assets received
Manage returns and replacements
Report on procurement activities and assess procurement strategy
PA Goals To procure and deliver assets on time and at the lowest possible cost
To maintain accurate vendor information
To ensure all assets purchased are entered into asset management system
PA's Metrics: Differential between actual and budgeted equipment costs; Percentage of requested items delivered on time; Costs incurred from returns due to incorrect purchases
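For illustration only, the three metrics listed above reduce to simple arithmetic over order records; the records and field names below are hypothetical assumptions:

    # Hypothetical order records for computing the procurement PA metrics.
    orders = [
        {"budgeted": 1200.0, "actual": 1250.0, "on_time": True, "return_cost": 0.0},
        {"budgeted": 800.0, "actual": 780.0, "on_time": False, "return_cost": 45.0},
        {"budgeted": 500.0, "actual": 520.0, "on_time": True, "return_cost": 0.0},
    ]

    # Differential between actual and budgeted equipment costs.
    cost_differential = sum(o["actual"] - o["budgeted"] for o in orders)
    # Percentage of requested items delivered on time.
    pct_on_time = 100.0 * sum(o["on_time"] for o in orders) / len(orders)
    # Costs incurred from returns due to incorrect purchases.
    return_costs = sum(o["return_cost"] for o in orders)

    print(f"Actual vs. budgeted differential: {cost_differential:+.2f}")
    print(f"Delivered on time: {pct_on_time:.1f}%")
    print(f"Return costs: {return_costs:.2f}")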
Base Practices
References
MODE v2
MODE v1 Toolkit

Process Area Procurement
Level 1
Assessment Indicators Process Performance
Generic Practice Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.11 Procurement
Questions
Base Practice 3.11.1 Maintain vendor information
What was the process for creating a list of approved vendors? Have vendors been identified for each type of standard equipment? Does the list include more than one potential vendor for each type of standard equipment?
What information about potential vendors and those used in the past is stored? For example, is the history of transactions and quality of service received noted? Are special terms or conditions that apply to a vendor recorded?
Is information maintained on any regulatory requirements or existing contracts that could affect vendor selection?
When does vendor information get entered and who is responsible for maintaining it?
Who accesses the vendor information and for what purposes?
Base Practice 3.11.2 Receive and log request
In what format does procurement receive a purchase request (e.g. a request form, on-line etc.)?
What information does the purchase request contain?
Does procurement verify that the request carries the necessary approval or authorization? How is this done? Whose approval is required for purchases? Does a documented policy describe the necessary authorizations?
For non-standard orders, does procurement verify the technical compatibility of the equipment/software requested? What is the process for verifying compatibility?
Is every request logged when received? If so, how? Are these procedures documented?
Base Practice 3.11.3 Identify vendor and place order
What is the process for selecting a vendor for a particular order? Is the vendor listing and information used?
Does negotiation of specific terms occur with the vendor after selection, or does any preliminary negotiation occur with several potential vendors and then are the outcomes considered during selection?
Who is responsible for placing an order? Is a purchase order or other document used? If so, please describe. Is the log updated when the order is placed?
Is the requester notified of the order placement and estimated delivery date?
Base Practice 3.11.4 Track orders
How are open orders tracked? Do specified checkpoints exist when all open orders are reviewed to identify any over-due deliveries?
Is backlog and backorder information maintained? If yes, by whom?
In what instances does procurement need to communicate with rollout/release management? What information is exchanged?
What action is taken if an order is overdue?
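A checkpoint review of open orders, as contemplated by this base practice, might be sketched as follows; the purchase order records and dates are hypothetical:

    from datetime import date

    # Hypothetical open-order log reviewed at a checkpoint.
    open_orders = [
        {"po": "PO-1001", "item": "Laptops (x10)", "due": date(2000, 7, 20)},
        {"po": "PO-1002", "item": "Router", "due": date(2000, 8, 5)},
    ]

    def overdue_orders(orders, today):
        """Return open orders whose estimated delivery date has passed."""
        return [o for o in orders if o["due"] < today]

    for o in overdue_orders(open_orders, today=date(2000, 7, 26)):
        print(f"Overdue: {o['po']} ({o['item']}), due {o['due']} - contact vendor")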
Base Practice 3.11.5 Ensure timely/accurate delivery & log assets received
What is the procedure for handling receipt of equipment delivered? How is procurement involved? Are any proactive steps taken to ensure timely delivery (e.g. the supplier is contacted shortly before the delivery date to verify the delivery)?
Does procurement verify that the correct equipment was received? How?
Is the receipt logged and the request record closed? What is the procedure for this?
Who is responsible for logging all assets received in the asset management system?
What information about the new asset is logged?
Base Practice 3.11.6 Manage returns and replacements
Do any policies exist on how returns and replacements are to be handled? If so, please describe them. Who is responsible for this task?
Under what circumstances (i.e. inaccurate order placement, inaccurate order delivery etc.) do returns typically occur?
What type of documentation is made for any returns or replacements handled?
Base Practice 3.11.7 Report on procurement activities & assess procurement strategy
Does procurement produce any reports on a regular basis? If so, what reports are produced and what do they present?
Who reviews the reports and for what purposes?
Do periodic reviews occur to evaluate and modify the procurement strategy, if needed? If so, who is involved in this assessment process and how frequently does it occur? What types of issues are considered (e.g. are current suppliers performing adequately)?
Process Capability Assessment Instrument
Process Area 3.11 Procurement
Process Area Description: Procurement is responsible for ensuring that the necessary quantities of equipment (both hardware and software) are purchased and delivered on time to the appropriate locations. Procurement is also responsible for logging all assets into the inventory as they are received.
Questionnaire
Work Product list
Process Area 3.11 Procurement
Purchase request form
Purchase order
Sample vendor profile
Procurement reports
Current procurement catalogue of vendors/suppliers
Base Practices
References
MODE v1 Toolkit
Process Area Quality Management
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.3 Quality Management
Questions
Base Practice 4.3.1 Determine quality management actions
By what process were/are quality management actions determined?
Please describe the various quality management tools used to verify and measure quality (e.g. customer surveys, audits, collection/evaluation of performance metrics).
Is a set of assessment indicators specified by which quality of operations is measured? If so, what are they?
Process Capability Assessment Instrument
Process Area 4.3 Quality Management
Process Area Description: Quality Management is an on-going process which monitors how well the distributed environment is being managed and looks toward continually improving its management capabilities and service. Within this process, quality improvement actions are determined, agreed upon, planned and monitored.
Questionnaire
Work Product list
Process Area 4.3 Quality Management
Quality improvement action plan
Quality improvement action schedule
Quality assessment reports
Organizational chart or hiring matrix of quality assessment team
Base Practices
References
MODE v2
Web sites: www.epic.org and legal.web.aol.com
MODE v1 Toolkit
Process Area Legal Issues Management
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.5 Legal Issues Management
Questions
Base Practice 4.5.1 Identify legal risk areas
Is the web site reviewed for legal risk issues prior to publishing? If yes, by whom and how often? If no, why?
What issues have provided the most concern? Have these concerns been made known to and been addressed by the web master, content management or other related operational areas? If yes, how are they made known (e.g. symposiums, reports, conferences, phone mail, etc.) and addressed (e.g. policy, procedures, reviews, etc.)?
Are legal issues personnel consistently made aware of new issues, litigation and laws that might affect future web publishing? If yes, what is of concern? Does the web site contain any disclaimers that would remove you from liability issues? If yes, what are they and what prompted their use?
Are legal issues reviewed on a state, domestic or worldwide scope? How has this view helped or hindered the process? Is jurisdiction a justification for the chosen scope?
Base Practice 4.5.2 Identify types of content where one may be legally at risk
Do the legal issues personnel review the different types of content (e.g. graphics, video, audio, Java applets, etc.) for risk? If yes, what types provide the most and least concern? Who is responsible for this review? How often is it done?
Is there a process in place to gain permission to use/publish copyrighted material? If yes, what is it? Is it consistently followed? Who is responsible for this? What types of content are the most protected/least protected?
Does the site allow any customers to download/FTP software? If yes, what software and what legal notifications are provided to the customers?
Are the graphics/text for any sales products provided with a disclaimer (e.g. color may be different than actual, size may be different, quantities are limited, etc.)? If yes, what are they?
Base Practice 4.5.3 Identify customers
Are pages evaluated with customers, laws, business goals, and employees in mind? If yes, what are the areas of concentration/review for each of these audiences?
Do customers communicate with the firm regarding legal concerns or complaints? If yes, how do they do this? To whom is this communication directed? Which group of customers seem to be the most vocal about the content (e.g. system, public, corporations, government, etc.)?
Do all customers who are not employed with the firm have the ability to gain access to all parts of the web site (e.g. chat rooms, join e-mail lists, place orders, view inventory, etc.)? If yes, what are the most popular destinations and peak times? Are surveys offered to these customers? If no, what type of access control do you provide (log-on and password, return e-mail address, etc.) and are legal disclaimers provided for any legally sensitive areas? Please explain.
When responding to complaints or legal instruments initiated by a customer, do the legal issues management personnel meet with other counsel to respond, or is the issue handed off to another department? In your experience, has this happened before and what were the circumstances?
Base Practice 4.5.4 Legal process setup and refinement
Is the legal issues management procedure/policy maintained to address new net centric issues? If yes, by whom and how often? Is it consistently followed?
Do the legal issues management personnel forward documents in question to corporate counsel for review, approval/change and/or resolution? If yes, explain the procedure. Who is responsible for tracking the document once it is transferred to corporate counsel? Explain this tracking.
What legal requirements and issues (e.g. privacy, censorship, freedom of information, intellectual property, etc.) are gathered on an on-going basis to ensure legal credibility for the site?
Are new business offerings by the firm reviewed for operational legal requirements? If yes, by whom and how often (e.g. scheduled vs. ad hoc)?
Does the legal issues group maintain contracts and ensure their deployment for compliance? If yes, who is responsible for this and how often are reviews performed?
Generic Questions for Process Area
What is the standard procedure/policy with regard to legal issues management tasks and procedures? Is it followed? At any time are some procedures done in an ad hoc manner? If yes, please explain.
Are adequate tools and personnel available for legal issues management tasks and procedures? What are the tools and who are the personnel?
Is training held for new employees within the legal issues management group? If yes, is this done on the job or during formal training sessions? Are classes / training provided to all legal issues personnel which cover new issues/procedures/tasks etc.?
If yes, how often is this planned?
Have measures been defined, selected and subsequent data collected for legal issues management? If yes, what type and how often?
What reports are provided to various departments within the firm from legal issues management regarding pertinent issues (e.g. changes to plans, decisions, process, requirements, etc.)? To whom do they go and how often? Do recipients of these reports provide feedback to legal issues management? If yes, what method is used (e.g. e-mail, meetings, hardcopy, etc.)?
Does the legal issues management group provide web pages with version control numbers and change order requests for updated page content?
Are all change order requests for web pages signed off by legal issues management? If no, why? If yes, by whom? How often is this done?
Are metrics automatically collected from the web site for use by legal issues personnel? If yes, what are they? How are they collected (e.g. automated, manually, both)?
Are the legal issues management processes continually improved? If yes, how? Are the improvements validated and quantified against business goals and objectives?
Is your legal issues team made up of qualified lawyers? What type of continuing education do they pursue?
Process Capability Assessment Instrument
Process Area 4.5 Legal Issues Management
Process Area Description: Legal Issues Management addresses the legal liability considerations associated with doing business on a public network. To ensure that legal risk is limited, there is a need for a close tie between the Service Provider's Operations department and Legal department.
Questionnaire
Work Product list
Process Area 4.5 Legal Issues Management
Legal Issues Management procedure manual/policy
Examples of bulletins/notifications regarding new legislation that would affect content
Sample reports from legal issues group noting complaints, issues or concerns for existing and future web development
Example of a legal issues tracking document for web pages/sites showing the progression of the page(s) through the review/approval cycle
Base Practices
References
MODE v2
Process Area Capacity Modeling & Planning
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.6 Capacity Modeling & Planning
Questions
Base Practice 4.6.1 Define Overall Capacity Modeling & Planning Requirements
Has a base level model of the system's capacity been created and verified based on information from vendors, independent tests, etc.? Are service measures used as comparisons? If yes, what are they? If no, explain.
Explain your standard capacity planning process/policy, including CPU, memory, I/O and router usage and needs. All existing or future mainframe and server processors, storage, network configurations, and peripheral requirements should be addressed.
Are the capacity requirements coordinated across the distributed system based on SLAs/OLAs? If yes, explain. Are there outstanding SLA/OLA issues to be resolved? If yes, explain.
Are alarms activated when an SLA/OLA is not met? If yes, how and to whose attention? If no, explain.
Are workload balancing forecasts/plans in place? If yes, do they consider key transactions that have been collected and verified? Explain.
What are the existing and future applications/data requirements that drive the capacity plan?
What are the functional requirements/data that drive the capacity plan?
Is there a policy in place to ensure the capacity plan is updated regularly (semi-annually/annually/bi-annually) or only when changes/deviations are encountered? Please describe the policy.
Are possible future threats/changes to service levels noted in the capacity plan?
What is the plan of action for identified threats?
Base Practice 4.6.2 Collect All Capacity Information (Based on Business Requirements)
What are the business drivers that affect the capacity model?
What are the verified capacity plan requirements for the networks/distributed system? (e.g. financial, physical, operational, software, vendor, applications, constraints/limits.)
Is the current system/version reviewed on a scheduled, documented basis to see how well it is being utilized? How often?
Is performance/cost benefit analysis performed and tracked for each configuration? If yes, who does this and how often?
What tools have been used to measure the system's capacity?
What reports are produced regarding capacity planning? Who receives these reports?
Is the accuracy of assumptions, forecasts and results tracked?
Base Practice 4.6.3 Determine Ongoing Support Requirements
1. What projections have been created and reviewed that address ongoing support requirements for operations, personnel and functions?
2. Has the impact of planned business growth been evaluated with regards to support? If so, how?
3. Has the impact of planned future locations been evaluated with regards to support? If so, how?
Base Practice 4.6.4 Build and Test Model
1. How is the base model calibrated prior to adding forecast parameters? (e.g. verify model parameters, account for discrepancies, verify accuracy of base model, etc.)
2. What forecast parameters/assumptions were added to the base model?
3. How are capacity shortfalls identified?
4. What model solutions address capacity shortfalls?
5. Have assumptions and strategies been documented?
Base Practice 4.6.5 Deploy Model, and Adjust as Appropriate
1. How often are reports disseminated to appropriate parties (e.g. weekly, monthly, etc.)?
2. Is feedback received on utilization, capacity and performance?
3. Do management, development, and customers receive status reports that compare actual to planned utilization for review/discussion?
4. Does management review, revise and approve capacity plans? If no, explain.
5. What is the course of action/process regarding the capacity plan if major changes to the system or business occur? Are other groups/processes informed (e.g. release management, SLA, procurement, security, etc.)? Explain.
Generic Questions for Process Area
1. Are training sessions held for personnel on a scheduled basis regarding the capacity planning process and its defined tasks? If so what type of training is provided to personnel to ensure adequate/competent execution of capacity plan?
2. Is there written documentation that covers the established capacity plan procedures for personnel?
3. How often is the capacity process reviewed for continuous improvement purposes? How often are improvements implemented and by whom?
4. When continuous improvement strategies are executed, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
Process Capability Assessment Instrument
Process Area 4.6 Capacity Modeling & Planning
Process Area Description: Capacity Planning attempts to ensure that adequate resources will be in place to meet SLA requirements. Resources include physical facilities, computers, memory, disk space, communications equipment, and personnel. Capacity Planning must be done for the system as a whole so that the planners can understand how the capacity of one portion of the system affects the capacity of another. Due to the large number of components typically found within a system, the interdependencies between business functions and resource components must be clearly defined.
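As one illustration of the shortfall analysis contemplated by base practice 4.6.4, forecast demand per resource can be compared against installed capacity reduced by an SLA-driven headroom allowance. The resource names, figures, and headroom fractions below are assumptions, not part of the instrument:

    # resource: (installed capacity, forecast peak demand, required SLA headroom)
    resources = {
        "cpu": (100.0, 85.0, 0.20),
        "disk_gb": (500.0, 470.0, 0.10),
        "net_mbps": (45.0, 30.0, 0.25),
    }

    def capacity_shortfalls(resources):
        """Flag resources whose forecast demand eats into the required headroom."""
        shortfalls = []
        for name, (capacity, forecast, headroom) in resources.items():
            usable = capacity * (1.0 - headroom)  # keep headroom free for spikes
            if forecast > usable:
                shortfalls.append((name, forecast - usable))
        return shortfalls

    for name, deficit in capacity_shortfalls(resources):
        print(f"Shortfall on {name}: forecast exceeds usable capacity by {deficit:.1f}")

A real model would of course cover the interdependencies noted above rather than treating each resource independently.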
Questionnaire
Process Area 4.6 Capacity Modeling and Planning
Work Product list
Process Area 4.6 Capacity Modeling and Planning
Example of an existing Capacity Plan/Reports
List of SLA/OLA requirements
List of resources referenced in Capacity Plan (e.g. physical facilities, computers, memory, disk space, communication equipment and personnel)
Base Practices
References
Process Area: Business/Disaster Recovery Planning & Management
Level 1
Assessment Indicators Process Performance
Generic Practice: Ensure that Base practices are performed
Base Practice | Example of Assessment Indicators at Client

4.7.1 Determine what disaster recovery requirements are based on SLAs | Business/Disaster personnel, when asked, can list various SLA requirements, timetables and functions needed for compliance.
4.7.2 Perform business and system risk assessment | Cost analysis is performed on the benefit of each plan. Risks are addressed and incorporated into each plan. Revenue loss estimates are known by business/disaster personnel, whether interruptions or total losses.
4.7.3 Determine recovery implementation plan | A final plan is chosen for the respective sites. Plan is maintained for procedure improvements and/or changes to equipment/site. Personnel can access plan readily.
4.7.4 Review recovery plan with management | Plan is reviewed by management.
4.7.5 Plan disaster recovery testing procedures | Testing is performed on business/disaster recovery procedures on a scheduled basis. Results of testing are tracked.
4.7.6 Produce and disseminate report on disaster recovery | Reports are forwarded to other process areas (e.g. Fault Management, Back-up/Restore/Archive, Physical Management, etc.). Reports may be hardcopy or electronic.
4.7.7 Receive feedback on disaster recovery strategy | Feedback is solicited via surveys, reports, meetings, etc. and used to improve the business/disaster process and associated procedures.
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.7 Business/Disaster Recovery Planning & Management
Questions
Base Practice 4.7.1 Determine what disaster recovery requirements are based on SLAs
Are business/disaster recovery plans based on SLAs or documented business requirements? If yes, how are these communicated to the group and how often?
What SLA requirements are difficult to address or have not been addressed thus far? Are these issues being examined for possible solutions? If yes, by whom?
Do SLA requirements note speed of recovery and capacity? Are they prioritized? If no, explain.
Base Practice 4.7.2 Perform business and system risk assessment
Are business and system risk assessments done? If yes, by whom and how often? Is potential revenue loss considered during system failure or loss?
Is cost-benefit analysis performed when additions or changes are made to the recovery plan? Is this based on servers, applications, SLAs? Explain.
Are business goals developed during the risk assessment? If yes, what are they?
Has it been determined what critical data should be moved off site when performing the risk assessment? If yes, how is this determined?
Are business risk assessments performed considering security management, political instability and malicious intent? If yes, by whom and how?
Base Practice 4.7.3 Determine recovery implementation plan
Is there a formal policy regarding the recovery plan at all sites? If yes, is it followed? Is it accessible to all recovery personnel? If no, explain. If yes, is it in multiple locations? Which sites? Is revision control maintained?
Are teams established within the plan for notification and at a predetermined location in case of a disaster declaration? If yes, explain.
Are metrics collected regarding the recovery plan? If yes, how often and what are they? Are they collected automatically or manually?
Are lists maintained showing hardware and supplies needed during a disaster? If yes, where is this list? Are copies maintained for each site and at a remote location for safeguard? Who is aware of these lists?
Does the plan examine the recovery of dependent or independent applications? If yes, which ones? Has a cost analysis been performed on the loss of each application?
Are any recovery procedures performed by hot/cold sites? If yes, do they have back-ups, procedures and schedules? If yes, how are these maintained/updated?
How often is the plan reviewed? Do other process area personnel (e.g. Backup/Restore/Archive, Fault Management, Monitoring) review the plan? If yes, explain the process and describe who participates in the review.
Base Practice 4.7.4 Review recovery plan with management
Does the management team review business/disaster recovery plans? If yes, how often? Is the management team static or dynamic?
Does the plan call for the management team to resolve resource conflicts? If yes, is a procedure noted for each site?
Base Practice 4.7.5 Plan disaster recovery testing procedures
Are tests performed on the business/disaster recovery procedures/tasks at each site? If yes, how often?
What procedures pose the most concern (e.g. business or disaster) during the testing phase? Have modifications been implemented to improve the process? If yes, what has been the outcome?
Are other departments brought into the testing environment for an end-to-end run through (e.g. Fault Management, Back-up/Restore/Archive, Monitoring, Physical Site Management, etc.)? If yes, which ones and how? Are other process areas tied with business/disaster recovery systems for automatic notification or metrics collection? If yes, explain.
Base Practice 4.7.6 Produce and disseminate report on disaster recovery
Are reports produced and disseminated regarding the business/disaster recovery plan? If yes, to whom and how often? If no, explain.
What are the contents of the reports that are disseminated?
Do reports include the latest testing results? Metrics? If yes, which ones?
Base Practice 4.7.7 Receive feedback on disaster recovery strategy
Is feedback sought and collected regarding the business/disaster recovery plan? If yes, by whom and how?
Is the feedback used for continuous improvement reasons? If yes, has this proven to be beneficial? If no, how could the feedback process be changed to provide benefit?
Generic Questions for Process Area
Is training provided to new business/disaster recovery personnel? If yes, in what format (e.g. on the job, formal training, computer based training, etc.)?
Are adequate resources (e.g. personnel, equipment, software, etc.) provided to perform the necessary recovery procedures?
Process Capability Assessment Instrument
Process Area 4.7 Business/Disaster Recovery Planning & Management
Process Area Description: Determines what the requirements are for disaster recovery based upon agreed-upon SLAs, and develops strategies and plans to restore a business or service after it has been interrupted or has failed. This planning process develops the strategy for recovering a system or a portion of the system. The contingency plans must consider failure of both centralized and remote components and strategies for the recovery of these systems.
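By way of illustration, the revenue-loss estimates referenced in base practice 4.7.2 can drive recovery prioritization with simple arithmetic; the systems, rates, and recovery times below are hypothetical:

    # Hypothetical exposure estimates used to prioritize recovery planning.
    systems = [
        {"name": "order entry", "revenue_per_hour": 12000.0, "est_recovery_hours": 4},
        {"name": "reporting", "revenue_per_hour": 500.0, "est_recovery_hours": 24},
    ]

    def prioritized_by_exposure(systems):
        """Rank systems by estimated revenue lost while they are being recovered."""
        return sorted(systems,
                      key=lambda s: s["revenue_per_hour"] * s["est_recovery_hours"],
                      reverse=True)

    for s in prioritized_by_exposure(systems):
        exposure = s["revenue_per_hour"] * s["est_recovery_hours"]
        print(f"{s['name']}: estimated exposure ${exposure:,.0f} over {s['est_recovery_hours']}h")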
Questionnaire
Are reports disseminated regarding the status/readiness of the business/disaster recovery plan?
Is feedback solicited and collected on the business/disaster recovery plan?
Work Product list
Process Area 4.7 Business/Disaster Recovery Planning & Management
1. Example of an existing business/disaster recovery procedure for each of the sites (on site copy and off site copy should be the same).
2. Example of a business/disaster recovery plan report.
3. List of SLAs prioritized by business/disaster recovery management.
4. Schedule of Back-up/Restore/ Archive tasks for each site.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
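By way of further example, the level-determination logic recited in the claims below may be sketched as follows. The attribute-to-level groupings shown, the sample ratings, and the 0.8 threshold are illustrative assumptions only; the flow simply rates each process attribute from achievement of its generic practices and returns the highest capability level whose attribute ratings surpass a predetermined amount:

    # Capability levels defined as cumulative groups of process attributes
    # (illustrative grouping; each level adds at least one attribute).
    LEVEL_ATTRIBUTES = {
        1: ["process performance"],
        2: ["process performance", "performance management", "work product management"],
        3: ["process performance", "performance management", "work product management",
            "process definition", "process resource"],
    }

    def rate_attribute(practice_achievements):
        """Rate an attribute as the fraction of its generic practices achieved."""
        return sum(practice_achievements) / len(practice_achievements)

    def achieved_level(attribute_ratings, threshold=0.8):
        """Return the highest level whose attributes all surpass the threshold."""
        achieved = 0
        for level in sorted(LEVEL_ATTRIBUTES):
            if all(attribute_ratings.get(a, 0.0) >= threshold for a in LEVEL_ATTRIBUTES[level]):
                achieved = level
            else:
                break
        return achieved

    ratings = {
        "process performance": rate_attribute([True, True, True, True]),      # 1.00
        "performance management": rate_attribute([True, True, False, True]),  # 0.75
        "work product management": rate_attribute([True, True, True, False]), # 0.75
    }
    print(f"Capability level achieved: {achieved_level(ratings)}")  # prints 1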

Claims

What is claimed is:
1. A method for determining capability levels of a process area in an operational maturity investigation comprising the steps of:
(a) defining a plurality of process attributes;
(b) defining a plurality of generic practices for each of the process attributes;
(c) defining a plurality of capability levels in terms of groups of the process attributes;
(d) rating each of the process attributes based on achievement of the corresponding generic practices;
(e) determining which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels; and
(f) outputting the capability level.
2. The method as set forth in claim 1, wherein the capability levels are each achieved upon the ratings of the process attributes of the capability level surpassing a predetermined amount.
3. The method as set forth in claim 1, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
4. The method as set forth in claim 1, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
5. The method as set forth in claim 1, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
6. The method as set forth in claim 1, and further comprising the step of gauging a maturity of an operations organization based on the outputted capability level.
7. A computer program embodied on a computer readable medium for determining capability levels of a process area in an operational maturity investigation comprising:
(a) a code segment that defines a plurality of process attributes;
(b) a code segment that defines a plurality of generic practices for each of the process attributes;
(c) a code segment that defines a plurality of capability levels in terms of groups of the process attributes;
(d) a code segment that rates each of the process attributes based on achievement of the corresponding generic practices;
(e) a code segment that determines which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels; and
(f) a code segment that outputs the capability level.
8. The computer program as set forth in claim 7, wherein the capability levels are each achieved upon the ratings of the process attributes of the capability level surpassing a predetermined amount.
9. The computer program as set forth in claim 7, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
10. The computer program as set forth in claim 7, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
11. The computer program as set forth in claim 7, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
12. The computer program as set forth in claim 7, and further comprising a code segment that gauges a maturity of an operations organization based on the outputted capability level.
13. A system for determining capability levels of a process area in an operational maturity investigation comprising:
(a) logic that defines a plurality of process attributes;
(b) logic that defines a plurality of generic practices for each of the process attributes;
(c) logic that defines a plurality of capability levels in terms of groups of the process attributes;
(d) logic that rates each of the process attributes based on achievement of the corresponding generic practices;
(e) logic that determines which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels; and
(f) logic that outputs the capability level.
14. The system as set forth in claim 13, wherein the capability levels are each achieved upon the ratings of the process attributes of the capability level surpassing a predetermined amount.
15. The system as set forth in claim 13, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
16. The system as set forth in claim 13, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
17. The system as set forth in claim 13, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
18. The system as set forth in claim 13, and further comprising logic that gauges a maturity of an operations organization based on the outputted capability level.
PCT/US2000/020353 1999-07-26 2000-07-26 A system, method and computer program for determining capability levels of processes to evaluate operational maturity of an organization WO2001008037A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU62384/00A AU6238400A (en) 1999-07-26 2000-07-26 A system, method and article of manufacture for determining capability levels of processes for process assessment purposes in an operational maturity investigation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US36133899A 1999-07-26 1999-07-26
US09/361,338 1999-07-26

Publications (2)

Publication Number Publication Date
WO2001008037A2 true WO2001008037A2 (en) 2001-02-01
WO2001008037A3 WO2001008037A3 (en) 2002-07-11

Family

ID=23421637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/020353 WO2001008037A2 (en) 1999-07-26 2000-07-26 A system, method and computer program for determining capability levels of processes to evaluate operational maturity of an organization

Country Status (2)

Country Link
AU (1) AU6238400A (en)
WO (1) WO2001008037A2 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819270A (en) * 1993-02-25 1998-10-06 Massachusetts Institute Of Technology Computer system for displaying representations of processes
WO1998042102A1 (en) * 1997-03-14 1998-09-24 Crosskeys Systems Corporation Service level agreement management in data networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
AIDAROUS S E ET AL: "SERVICE MANAGEMENT IN INTELLIGENT NETWORKS" IEEE NETWORK, IEEE INC. NEW YORK, US, vol. 4, no. 1, 1990, pages 18-24, XP000113852 ISSN: 0890-8044 *
MCGARRY F ET AL: "Measuring the impacts individual process maturity attributes have on software products" PROCEEDINGS FIFTH INTERNATIONAL SOFTWARE METRICS SYMPOSIUM. METRICS (CAT. NO.98TB100262), PROCEEDINGS FIFTH INTERNATIONAL SOFTWARE METRICS SYMPOSIUM. METRICS 1998, BETHESDA, MD, USA, 20-21 NOV. 1998, pages 52-60, XP002185627 1998, Los Alamitos, CA, USA, IEEE Comput. Soc, USA ISBN: 0-8186-9201-4 *
NIESSINK F ET AL: "Towards mature measurement programs" PROCEEDINGS OF THE SECOND EUROMICRO CONFERENCE ON SOFTWARE MAINTENANCE AND REENGINEERING (CAT. NO.98EX143), PROCEEDINGS OF THE SECOND EUROMICRO CONFERENCE ON SOFTWARE MAINTENANCE AND REENGINEERING, FLORENCE, ITALY, 8-11 MARCH 1998, pages 82-88, XP002185625 1998, Los Alamitos, CA, USA, IEEE Comput. Soc, USA ISBN: 0-8186-8421-6 *
ROJAS T ET AL: "The capabilities and maturity model (CMM): a case study" 1997 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS. COMPUTATIONAL CYBERNETICS AND SIMULATION (CAT. NO.97CH36088-5), 1997 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS. COMPUTATIONAL CYBERNETICS AND SIMULATION, ORLAND, pages 1285-1290 vol.2, XP002185624 1997, New York, NY, USA, IEEE, USA ISBN: 0-7803-4053-1 *
VARKOI T K ET AL: "Case study of CMM and SPICE comparison in software process assessment" IEMC '98 PROCEEDINGS. INTERNATIONAL CONFERENCE ON ENGINEERING AND TECHNOLOGY MANAGEMENT. PIONEERING NEW TECHNOLOGIES: MANAGEMENT ISSUES AND CHALLENGES IN THE THIRD MILLENNIUM (CAT. NO.98CH36266), IEMC '98 PROCEEDINGS. INTERNATIONAL CONFERENCE ON ENGI, pages 477-482, XP002185626 1998, New York, NY, USA, IEEE, USA ISBN: 0-7803-5082-0 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11568336B2 (en) * 2019-02-06 2023-01-31 Mitsubishi Electric Corporation Information-technology-utilization evaluation device, information-technology-utilization evaluation system, and information-technology-utilization evaluation method

Also Published As

Publication number Publication date
WO2001008037A3 (en) 2002-07-11
AU6238400A (en) 2001-02-13

Similar Documents

Publication Publication Date Title
US20060161444A1 (en) Methods for standards management
US20060161879A1 (en) Methods for managing standards
US7810067B2 (en) Development processes representation and management
US6738736B1 (en) Method and estimator for providing capacacity modeling and planning
US8140367B2 (en) Open marketplace for distributed service arbitrage with integrated risk management
US8332807B2 (en) Waste determinants identification and elimination process model within a software factory operating environment
US8448129B2 (en) Work packet delegation in a software factory
US20040243428A1 (en) Automated compliance for human resource management
US20150356477A1 (en) Method and system for technology risk and control
Niessink et al. The IT service capability maturity model
WO2001025877A2 (en) Organization of information technology functions
WO2008076984A1 (en) Methods and systems for risk management
US20030055697A1 (en) Systems and methods to facilitate migration of a process via a process migration template
US20080091676A1 (en) System and method of automatic data search to determine compliance with an international standard
US10460265B2 (en) Global IT transformation
WO2007030633A2 (en) Method and system for remotely monitoring and managing computer networks
Al-Dabbous et al. Assessment of the trustworthiness of e-service providers
WO2001008035A2 (en) A system, method and computer program for determining capability level of processes to evaluate operational maturity in an administration process area
WO2001008037A2 (en) A system, method and computer program for determining capability levels of processes to evaluate operational maturity of an organization
WO2001008004A2 (en) A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation
Spencer et al. Technology best practices
WO2001008038A2 (en) A system, method and computer program for determining operationalmaturity of an organization
WO2001008074A2 (en) A system, method and article of manufacture for determining capability levels of a release management process area for process assessment purposes in an operational maturity investigation
Herold The shortcut guide to improving IT service support through ITIL
Rae A guide to SLAs

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP