WO2001008035A2 - A system, method and computer program for determining capability level of processes to evaluate operational maturity in an administration process area - Google Patents

A system, method and computer program for determining capability level of processes to evaluate operational maturity in an administration process area

Info

Publication number
WO2001008035A2
Authority
WO
WIPO (PCT)
Prior art keywords
management
capability
level
practices
Prior art date
Application number
PCT/US2000/020238
Other languages
French (fr)
Other versions
WO2001008035A3 (en)
Inventor
Nancy S. Greenberg
Colleen R. Winn
Original Assignee
Accenture Llp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Accenture Llp
Priority to AU62372/00A
Publication of WO2001008035A2
Publication of WO2001008035A3


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Definitions

  • the present invention relates to IT operations organizations and more particularly to evaluating the maturity of an operations organization by determining capability levels of a user administration process area.
  • frameworks and gap analysis have been used to capture the best practices of IT management and to determine areas of improvement. While the frameworks and gap analysis are intended to capture weaknesses in processes that are observable, they do not provide data with sufficient objectivity and granularity upon which a comprehensive improvement plan can be built.
  • a system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a user administration process area when gauging the maturity of an operations organization.
  • a plurality of process attributes are defined.
  • a plurality of generic practices are determined for each of the process attributes.
  • the generic practices include base practices such as receiving information from a human resources department regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and/or notifying parties periodically of a user administration status.
  • the maturity of an operations organization is determined based at least in part on the achievement of the generic practices.
  • the present invention provides a basis for organizations to gauge performance, and assists in planning and tracking improvements to the operations environment.
  • the present invention further affords a basis for defining an objective improvement strategy in line with an organization's needs, priorities, and resource availability.
  • the present invention also provides a method for determining the overall operational maturity of an organization based on the capability levels of its processes.
  • the present invention can thus be used by organizations in a variety of contexts.
  • An organization can use the present invention to assess and improve its processes.
  • An organization can further use the present invention to assess the capability of suppliers in meeting their commitments, and hence better manage the risk associated with outsourcing and sub-contract management.
  • the present invention may be used to focus on an entire IT organization, on a single functional area such as service management, or on a single process area such as a service desk.
  • Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention.
  • Figure 2 is a flowchart illustrating generally the steps associated with the present invention.
  • Figure 3 is an illustration showing the relationships of the process category, process area, and base practices of the operations environment dimension in accordance with one embodiment of the present invention.
  • Figure 4 is an illustration showing a measure of each process area to the capability levels according to one embodiment of the present invention.
  • Figure 5 is an illustration showing various determinants of operational maturity in accordance with one embodiment of the present invention.
  • Figure 6 is an illustration showing an overview of the operational maturity model.
  • Figure 7 is an illustration showing a relationship of capability levels, process attributes, and generic practices in accordance with one embodiment of the present invention.
  • Figure 8 is an illustration showing a capability rating of various attributes in accordance with one embodiment of the present invention.
  • Figure 9 is an illustration showing a mapping of attribute ratings to the process capability levels determination in accordance with one embodiment of the present invention.
  • Figure 10 is an illustration showing assessment roles and responsibilities in accordance with one embodiment of the present invention.
  • Figure 11 is an illustration showing the process area rating in accordance with one embodiment of the present invention.
  • the present invention comprises a collection of best practices, both from a technical and management perspective.
  • the collection of best practices is a set of processes that are fundamental to a good operations environment.
  • the present invention provides a definition of an "ideal" operations environment, and also acts as a road map towards achieving the "ideal" state.
  • Figure 1 is a schematic diagram of one possible hardware implementation by which the present invention may be carried out. As shown, the present invention may be practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.
  • A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with one embodiment having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112.
  • the workstation shown in Figure 1 includes a Random Access Memory (RAM) 114, Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, communication adapter 134 for connecting the workstation to a communication network 135 (e.g., a data processing network) and a display adapter 136 for connecting the bus 112 to a display device 138.
  • the workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system.
  • a preferred embodiment of the present invention is written using JAVA, C, and the C++ language and utilizes object oriented programming methodology.
  • Object oriented programming has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP.
  • OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program.
  • An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task.
  • OOP therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
  • OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture.
  • a component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point.
  • An object is a single instance of the class of objects, which is often just called a class.
  • a class of objects can be viewed as a blueprint, from which many objects can be formed.
  • OOP allows the programmer to create an object that is a part of another object.
  • the object representing a piston engine is said to have a composition-relationship with the object representing a piston.
  • a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
  • OOP also allows creation of an object that "depends from" another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition.
  • a ceramic piston engine does not make up a piston engine. Rather, it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic.
  • the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it.
  • the object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.
  • when the object or class representing the ceramic piston engine inherits all of the aspects of the objects representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class.
  • however, the ceramic piston engine object overrides these thermal characteristics with ceramic-specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons.
  • Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.).
  • a programmer would call the same functions with the same names, but each type of piston engine may have different, overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
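  • By way of illustration only, the piston engine relationships described above can be sketched in Java (one of the languages named for the preferred embodiment); the class names, the method name, and the temperature values below are hypothetical and do not appear in the patent text:

      // Illustrative sketch: inheritance and polymorphism using the piston engine example.
      class PistonEngine {
          // Thermal characteristic of a standard (metal) piston; value is assumed.
          double pistonOperatingTemperature() {
              return 250.0;
          }
      }

      // The derived class "depends from" PistonEngine: it inherits its aspects and
      // overrides the thermal characteristic with a ceramic-specific value (assumed).
      class CeramicPistonEngine extends PistonEngine {
          @Override
          double pistonOperatingTemperature() {
              return 900.0;
          }
      }

      public class EngineDemo {
          public static void main(String[] args) {
              // Polymorphism: the same method name resolves to the overriding
              // implementation of whichever engine object is actually referenced.
              PistonEngine[] engines = { new PistonEngine(), new CeramicPistonEngine() };
              for (PistonEngine e : engines) {
                  System.out.println(e.getClass().getSimpleName() + ": "
                      + e.pistonOperatingTemperature() + " C");
              }
          }
      }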
  • With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
  • Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
  • Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
  • An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
  • An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.
  • OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
  • OOP enables software developers to build objects out of other, previously built objects.
  • C++ is an OOP language that offers a fast, machine-executable code.
  • C++ is suitable for both commercial-application and systems-programming projects.
  • C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
  • Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
  • Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real-world objects and the relationships among them.
  • Class libraries are very flexible. As programs grow more complex, more programmers are forced to adopt basic solutions to basic problems over and over again.
  • a relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers.
  • event loop programs require programmers to write a lot of code that should not need to be written separately for every application.
  • the concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
  • a programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
  • a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
  • Behavior versus protocol: Class libraries are essentially collections of behaviors that one can call when one wants those individual behaviors in a program.
  • a framework provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
  • a preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connoly, "RFC 1866: Hypertext Markup Language - 2.0"; and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol - HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996).
  • HTML is a simple data format used to create hypertext documents that are portable from one platform to another.
  • HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.
  • HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing Text and Office Systems; Standard Generalized Markup Language (SGML).
  • HTML has been the dominant technology used in development of Web-based solutions.
  • HTML has proven to be inadequate in the following areas:
  • Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.)
  • client-side performance is improved.
  • Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance.
  • Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.
  • Sun's Java language has emerged as an industry-recognized language for "programming the Internet.”
  • Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language.
  • Java supports programming for the Internet in the form of platform-independent Java applets.”
  • Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API) allowing developers to add "interactive content” to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g.,
  • ActiveX Technologies to give developers and Web designers wherewithal to build dynamic content for the Internet and personal computers.
  • ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content.
  • the tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies.
  • the group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages.
  • ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta.”
  • ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications.
  • ActiveX could be substituted for JAVA without undue experimentation to practice the invention.
  • One embodiment of the present invention includes three different, but complementary dimensions that together provide a framework which can be used in assessing and rating the IT operations of an organization.
  • the following three dimensions constitute the framework of the present invention: 1) Operations Environment Dimension, 2) Capability Dimension, and 3) Maturity Dimension.
  • the first dimension describes and organizes the standard operational activities that any IT organization should perform.
  • the second dimension provides a context for evaluating the performance quality of these operational activities. This dimension specifies the qualitative characteristics of an operations environment and orders these characteristics on a scale denoting rising capability.
  • the final dimension uses this capability scale and outlines a method for deriving a capability rating for specific IT process groups and the entire organization.
  • the Operations Environment and Capability dimensions provide the foundation for determining the quality or capability level of the organization's IT operations.
  • the Operations Environment dimension can be viewed as a descriptive mapping of a model operations environment.
  • the Capability dimension can be construed as a qualitative mapping of a model operations environment.
  • the Maturity dimension builds on the foundation set by these two dimensions to provide a method for rating the maturity level of the entire IT organization.
  • FIG. 2 is a flow chart illustrating the various steps associated with the different dimensions of the present invention. As shown, a plurality of process areas of an operations organization are first defined in terms of either a goal or a purpose in operation 200. The process areas are then grouped into categories, as indicated in operation 202. It should be noted that the categories are grouped in terms of process areas having common characteristics.
  • process capabilities are received for the process areas of the operations organization.
  • Such data may be generated via a maturity questionnaire which includes a set of questions about the operations environment that sample the base practices in each process area of the present invention.
  • the questionnaire may be used to obtain information on the capability of the IT organization, or a specific IT area or project.
  • category capabilities are calculated for the categories of the process areas in operation 206.
  • a maturity of the operations organization is subsequently determined based on the category capabilities of the categories in operation 208.
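  • A minimal sketch of operations 200 through 208 in Java is shown below for illustration only; the class and method names, the integer capability scale, and the sample values are assumptions, and the roll-up rule (the lowest rating of the constituent elements becomes the parent rating) follows the rating description given later in this disclosure:

      // Illustrative sketch only: the flow of Figure 2 (operations 200-208).
      import java.util.*;

      class OperationalMaturitySketch {

          // Operation 206: a category capability is taken as the lowest
          // capability of the process areas grouped under it.
          static int categoryCapability(List<Integer> processAreaCapabilities) {
              return Collections.min(processAreaCapabilities);
          }

          // Operation 208: operational maturity is taken as the lowest
          // of the category capabilities.
          static int organizationalMaturity(Collection<Integer> categoryCapabilities) {
              return Collections.min(categoryCapabilities);
          }

          public static void main(String[] args) {
              // Operations 200-202: process areas defined and grouped into the four
              // process categories named in the description; operation 204: process
              // capabilities received, e.g., from a maturity questionnaire (sample data).
              Map<String, List<Integer>> capabilitiesByCategory = new LinkedHashMap<>();
              capabilitiesByCategory.put("Service Management", Arrays.asList(3, 2, 2, 1, 2));
              capabilitiesByCategory.put("Systems Management", Arrays.asList(2, 2, 3));
              capabilitiesByCategory.put("Managing Change", Arrays.asList(2, 1));
              capabilitiesByCategory.put("IT Operations Planning", Arrays.asList(3, 2));

              List<Integer> categoryRatings = new ArrayList<>();
              for (Map.Entry<String, List<Integer>> e : capabilitiesByCategory.entrySet()) {
                  int rating = categoryCapability(e.getValue());
                  categoryRatings.add(rating);
                  System.out.println(e.getKey() + " capability: Level " + rating);
              }
              System.out.println("Operational maturity: Level "
                  + organizationalMaturity(categoryRatings));
          }
      }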
  • the user-specified or measured parameters may be inputted by any input device, such as the keyboard 124, the mouse 126, the microphone 132, a touch screen (not shown), or anything else such as an input port that is capable of relaying such information.
  • the definitions, grouping, calculations and determinations may be carried out manually or via the CPU 110, which in turn may be governed by a computer program stored on a computer readable medium, e.g., the RAM 114, ROM 116, the disk storage units 120, and/or anything else capable of storing the computer program.
  • dedicated hardware such as an application specific integrated circuit (ASIC) may be employed to accomplish the same.
  • any one or more of the definitions, grouping and determinations may be carried out manually or in combination with the computer.
  • the outputting of the determination of the maturity of the operations organization may be effected by way of the display 138, the speaker 128, a printer (not shown) or any other output mechanism capable of delivering the output to the user. It should be understood that the foregoing components need not be resident on a single computer, but also may be a component of either a networked client and/or a server.
  • the Operations Environment Dimension is characterized by a set of process areas that are fundamental to the effective technical execution of an operations environment. More particularly, each process is characterized by its goals and purpose, which are the essential measurable objectives of a process. Each process area has a measurable purpose statement, which describes what has to be achieved in order to attain the defined purpose of the process area.
  • goals refer to a summary of the base practices of a process area that can be used to determine whether an organization or project has effectively implemented the process area.
  • the goals signify the scope, boundaries, and intent of each process area.
  • the process goals and purpose may be achieved in an IT organization through the various lower level activities, such as tasks and practices that are carried out to produce work products. These performed tasks, activities and practices, and the characteristics of the work products produced are the indicators that demonstrate whether the specific process goals or purpose is being achieved.
  • work product describes evidence of base practice implementation. For example, a completed change control request, a resolved trouble ticket, and/or a service level agreement (SLA) report.
  • The operations environment is partitioned into three levels: Process Categories, Process Areas and Base Practices, which reflect processes within any IT organization.
  • Figure 3 depicts and summarizes the relationship of the Process Categories 300, Process Areas 302, and Base Practices 304.
  • a Process Category has a defined purpose and measurable goals and consists of a logically related set of Process Areas that collectively address the purpose and goals in the same general area of activity.
  • The purpose of Process Categories is to organize Process Areas according to common IT functional characteristics. There are four process categories defined in the present invention: Service Management, Systems Management, Managing Change, and IT Operations Planning. Process Categories are described as follows:
  • Process Areas are the second level in the operations hierarchy.
  • the elements of this level are a collection of Base Practices that are performed to achieve the defined purpose of the Process Area.
  • Process Areas refer to a collection of Base Practices that are performed sequentially, concurrently and/or iteratively to achieve the defined purpose of the process area.
  • the purpose describes the unique functional objectives of the process area when instantiated in a particular environment. Satisfying the purpose statement of a process area represents the first step in building process area capability.
  • Process Areas for the Service Management Category include service level management, operations level management, service desk, user administration, and service pricing.
  • purpose of service level management may be to document the information technology services to be delivered to users. Note that this purpose states a unique functional objective (to establish requirements), and provides a context (service level).
  • Base Practices are the lowest level in the operation hierarchy. Base Practices are essential activities that an IT organization performs to achieve the purpose of a Process Area. A base practice is what an IT organization does.
  • Base Practices of service level management may be to assess business strategy, audit current service levels, determine service requirements and IT's ability to deliver services, prepare a draft SLA, identify the charge-back structure, and agree to SLAs with customers.
  • the Process Areas are expressed in terms of their goals, whereas Base Practices are tasks that need to be carried out to achieve those goals.
  • Base Practices may have work products associated with them.
  • a work product is evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or an SLA report.
  • a service desk example of a process area and associated base practices is as follows:
  • the Capability Dimension refers to formalizing the process performance into a quantifiable range of expected results based on the process capability level that can be achieved by following the process.
  • Process capability dimension characterizes the level of capability of each process area within an organization. In other words, the process capability dimension describes how well the processes in the process dimension are performed.
  • the Capability Dimension measures how well an IT organization performs its operational processes. In determining capabilities, the Base Practices are viewed as a guide to what should be done. The related Generic Practices deal with the effectiveness with which the Base Practices are carried out. Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. The present invention has five levels of Process Capability that can be applied to any Process Area. The Capability Dimension provides a means to formalize and quantify the process performance. The Capability Dimension describes how well the processes are performed as contrasted with Base Practices that describe what an IT organization does.
  • Capability Dimension consists of three components: Capability Levels, Process Attributes, and Generic Practices. These are described below.
  • Capability Levels indicate increasing levels of process maturity and are comprised of one or more generic practices that work together to provide a major enhancement in the capability to perform the process.
  • the Capability Level is the highest level of the Capability dimension.
  • the Capability Level of a process determines its performance and effectiveness.
  • Each Capability Level has certain Process Attributes associated with it.
  • a Process Attribute is comprised of a set of Generic Practices that provide criteria for improving performance.
  • a particular Capability Level is achieved when all the Process Attributes associated with it and with preceding levels are present. Therefore, once the Capability Level is determined, those Process Attributes - and associated Generic Practices - that are required to enhance capability can be identified. In other words, Capability Levels offer a staged guideline for improving the capability to perform the defined processes.
  • Capability Levels provide two benefits: they acknowledge dependencies and relationships among the Base Practices of a Process Area, and they help an IT organization identify which improvements should be performed first, based on a plausible sequence of process implementation.
  • Each level provides a major enhancement in capability to that provided by its predecessors in the fulfillment of the process purpose. For example, at Capability Level 1, Base Practices are performed. The performance is ad hoc, informal, and unpredictable. At Capability Level 2, the performance of Base Practices is planned and tracked rather than just performed - thereby offering a significant improvement over Level 1 practice.
  • Capability Levels are applied to each Process Area independent of other Process Areas. An assessment is performed to determine Process Capability for each Process Area, as illustrated in Figure 4.
  • an assessment refers to a diagnostic performed by a trained team to evaluate aspects of an organization's IT operations environment processes.
  • the trained team determines the state of the operational processes, identifies pressing operational process related issues, and obtains organizational support for a process improvement program.
  • Process Areas can, and may, exist at different levels of capability.
  • the ability to rate Process Areas independently enables an IT organization to focus on process improvement priorities driven from business goals and strategic directions. An example of this is illustrated in Figure 4.
  • process attributes refer to features of a process that can be evaluated on a scale of achievement (performed, partially performed, not performed, etc.) which provide a measure of the capability of the process.
  • measures of capability are based on a set of nine Process Attributes.
  • Process Attributes are used to determine whether a process has reached a given capability.
  • the nine Process Attributes are: Process Performance, Performance Management, Work Product Management, Process Definition, Process Resource, Process Measurement, Process Control, Process Change, and Continuous Improvement.
  • the attributes are evaluated on a four-point scale of achievement. Achieving a given Capability Level depends on the rating assigned to one or more of these attributes.
  • Generic Practices refer to activities that contribute to the capability of managing and improving the effectiveness of the operations environment Process Areas.
  • a generic practice is applicable to any and all Process Areas. It contributes to overall process management, measurement, and the institutionalization capability of the Process Areas.
  • the allocation of adequate resources to a process is a Generic Practice and is applicable to all processes.
  • Service Level Management and Migration Control are two different Process Areas with different Base Practices, goals, and purposes. However, they share the same Generic Practice of allocation of adequate resources.
  • Operational Maturity Dimension characterizes the maturity of an entire operations IT organization.
  • maturity refers to the degree of order (structure or systemization) and effectiveness of a process.
  • the degree of order determines its state of maturity. Less mature processes are less ordered and less effective; more mature processes are more ordered and more effective.
  • the Capability Dimension focuses on the determination of the capability of individual processes, within an operations organization, in achieving their stated goals and purpose.
  • the Maturity Dimension determines the IT organizational maturity by focusing on a collection of processes at a certain level of capability in order to characterize the evolution of the operations IT organization as it improves.
  • Maturity, in the overall context of the present invention, is applied to an IT organization as a whole.
  • the Maturity Level is determined by the Capability Level of the four Process Categories.
  • Maturity Level refers to a sequence of key intermediate states leading to the goal state. Each state builds incrementally on the preceding state.
  • the assessment tool of the present invention is flexible to accommodate an assessment of a Process Category or just a Process Area. As shown in Figure 5, an assessment could end at the Process Area Level with the Process Capability Level or Process Area Maturity determined. An assessment could also be performed to assess all the Process Areas within a Process Category to determine the Process Category Maturity Level.
  • the framework of the present invention, which consists of the three dimensions described previously, is illustrated in Figure 6.
  • the Operations Environment Dimension 600, the box in the center of Figure 6, divides all IT processes into Process Categories 300. Process Categories 300 divide into a finite number of Process Areas 302.
  • Process Areas 302 consist of a finite number of Base Practices 304.
  • Each Process Area within a category is assigned a Capability Level 504 based on the performance of Process Attributes 601 comprised of a finite number of Generic Practices 602 applicable to that process (shown in the box on the right).
  • the IT organization's operational maturity 603 in the present invention is based on a clustering of process capabilities, as illustrated in the third box to the left.
  • the framework of the present invention is designed to support an IT organization's need to assess and improve its operational capability.
  • the structure ofthe model enables a consistent appraisal methodology to be used across diverse Process Areas. The distinction between essential operations and process management-focused elements therefore allows a systematic approach to process improvement.
  • the Capability Dimension of the present invention measures how capable an IT organization is in achieving the purpose of its various Process Areas.
  • Capability Levels, Process Attributes, and Generic Practices describe the Process Capability.
  • the Capability Levels, their characteristics, the Process Attributes, and the Generic Practices that comprise them are discussed in more detail.
  • the present invention has five levels of Process Capability that can be applied to any Process Area.
  • As mentioned before, Generic Practices are grouped by Process Attributes, and Process Attributes determine the Capability Level. Capability Levels build upon one another; levels cannot, therefore, be skipped.
  • ATT 1A Process Performance - the extent to which the execution of the process employs a set of practices which uses identifiable input work products to produce identifiable output work products that are adequate to satisfy the purpose of the process.
  • GP1.1 Ensure that Base Practices are performed. When all base practices are performed, the purpose of the process area is satisfied. A process may exist but it may be informal and undocumented.
  • Process Area performance is dependent on how efficiently the Base Practices are implemented.
  • Work products such as completed change control requests, resolved trouble tickets, etc., which are related to base practice implementation are periodically reviewed and placed under version control. Corrective action is taken when variances in services and work products occur.
  • ATT 2A Performance Management - the extent to which the execution of the process is managed in order to produce work products within a stated time and resource requirement.
  • the related Generic Practices are:
  • GP2.1 Establish and maintain a policy for performing operational tasks.
  • Policy is a visible way for the operations environment personnel and the management team to set expectations.
  • the form of policies varies widely depending on the local culture. Policy typically specifies that plans are documented, managed and controlled, and that reviews are conducted. Policy provides guidance for performing the operational tasks and processes.
  • Resources include adequate funding, appropriate physical facilities, skilled people, and appropriate tools. This practice ensures that the level of effort, appropriate skills mix, tools, workspace, and other direct resources are available to perform the operational task and processes.
  • GP2.3 Ensure personnel receive the appropriate type and amount of training. Ensure that the individuals are appropriately trained on how to perform the operational tasks and processes. Training provides a common basis for repeatable performance. Even if the operations personnel or management have satisfactory technical skills and knowledge, there is almost always a need to establish a common understanding ofthe operational process activities and how skills are applied in them. Training, and how it is delivered, may change with process capability due to changes in how the process is performed and managed.
  • GP2.4 Collect data to measure performance.
  • the use of measurement implies that the metrics have been defined and selected, and data has been collected. Building a history of measures, such as cost and schedule variances, is a foundation for managing by data. Quality measures may be collected and used, but result in maximum impact at Level 4 when they are subjected to quantitative process control.
  • Open communication ensures that there is common understanding, that decisions are consensual, and that team members are kept aware of decisions made. Communication is needed when changes are made to plans, products, processes, activities, requirements, and responsibilities. The commitments, expectations, and responsibilities are documented and agreed upon within the project group. Commitment may be obtained by negotiation, by using input and feedback, or through joint development of solutions to issues. Issues are tracked and resolved within the group. Communication occurs periodically and whenever the status changes. The participants have access to data, status information, and recommended actions.
  • ATT 2B Work Product Management - the extent to which the process is managed to produce work products that are documented and controlled, and that meet their functional and nonfunctional requirements, in line with the work product quality goals of the process. In order to achieve this capability, a process needs to have stated functional and non-functional requirements for work products, including integrity, and to produce work products that fulfill the stated requirements.
  • the related Generic Practices are:
  • Requirements may come from the business customer, policies, standards, laws, regulations, etc. The applicable requirements are documented and available for verification activities.
  • GP2.7 Employ version control to manage changes to work products. Place identified work products under version control, or configuration management to provide a means of controlling work products and services.
  • Base Practices are performed with the assistance of an available, well-defined, and operations-wide process infrastructure. The processes are tailored to meet the specific needs of a certain practice.
  • Data from using the process are gathered to determine if modifications or improvements should be made. This information is used in planning and managing the day-to-day execution of multiple projects within the IT organization, and for short and long-term process improvement.
  • ATT 3A Process Resource - the extent to which the execution of the process uses suitable skilled human resources and process infrastructure effectively to contribute to the defined business goals of the operations environment.
  • GP3.1 Define policies and procedures at an IT level.
  • GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly. This includes:
  • ATT 3B Process Definition - the extent to which the execution of the process uses a definition, based upon a standard process, that enables it to contribute to the defined business goals of the IT organization.
  • this practice embodies the pro-active planning of personnel. This includes the selection of proper work forces, training, and dissemination.
  • GP 3.4 Provide feedback in order to maintain knowledge and experience.
  • the standard process repository is to be kept up-to-date, through a continuous feedback system based on experiences gained from using the defined process.
  • ATT 4A Process Measurement - the extent to which measures are used to ensure that the implementation of the process supports its execution, and contributes to the achievement of IT organizational goals.
  • GP4.1 Establish measurable quality objectives for the operations environment.
  • Process definitions are modified to reflect the quantitative nature of process performance. Measurements become inherent in the process definition and are collected as the process is being performed.
  • ATT 4B Process Control - the extent to which the execution of the process is controlled through the collection and analysis of measures that correct the performance of the process in order to reliably achieve the defined process goals.
  • the related Generic Practices are:
  • GP4.3 Provide adequate resources and infrastructure for data collection.
  • Level 5 is the highest achievement level from the viewpoint of Process Capability.
  • Continuous process improvement is enabled by quantitative feedback from the process and from pilot studies of innovative ideas and new technology. A focus on widespread, continuous improvement should permeate the IT organization.
  • the IT organization should establish quantitative performance goals for process effectiveness and efficiency, based on its business goals and strategic objectives.
  • ATT 5A Continuous Improvement - the extent to which changes to the process are identified and implemented to ensure continuous improvement in the fulfillment of the defined business goals of the IT organization.
  • Improvements may be based on incremental operational refinements or through innovations, such as new technologies. Improvements may typically be driven by the following activities:
  • ATT 5B Process Change - the extent to which changes to the definition, management, and performance of the process are controlled to better achieve the business goals of the IT organization.
  • GP5.2 Deploy "best practices" across the IT organization. Improved practices must be deployed across the operations environment to allow their benefit to be felt across the IT organization.
  • the deployment activities include: Identifying improvement opportunities in a systematic and proactive manner to continuously improve the process.
  • the rating framework requires identification of objective attributes or characteristics of a practice or work product of an implemented process to validate that Base Practices are performed, and Generic Practices are followed. Assessment Indicators determine Process Attribute ratings which then are used to determine Capability Level.
  • Assessment Indicators refer to objective attributes or characteristics of a practice or work product that supports an assessor's judgment of performance of an implemented process.
  • the cornerstone of a rating framework is the identification and description of Assessment Indicators to help rate the Process Attributes.
  • Assessment Indicators are objective attributes or characteristics of a practice or work product that supports an assessor's judgment of performance of an implemented process.
  • Assessment Indicators are evidence that Base Practices are performed, and Generic Practices are followed.
  • the indicators are not intended to be regarded as a mandatory checklist to be followed, but rather are a guide to enhance an assessment team's objectivity in making their judgments of a process's performance and capability.
  • the rating framework adds definition and reliability to the present invention, and thereby improves repeatability.
  • Assessment Indicators are determinants of Process Attribute ratings for each Process Capability attribute.
  • Each assessed process profile consists of a set of Process Attribute ratings.
  • Each attribute rating represents a judgment by the assessment team of the extent to which the attribute is achieved.
  • Figure 8 illustrates the Process Attribute rating represented on a four-point scale of achievement.
  • the indicators determine attribute ratings, which are then used to determine the Capability Level.
  • the rating scale defined below is used to describe the degree of achievement of the defined capability characterized by Process Attributes. Once the appropriate rating for each Process Attribute is determined, ratings can be combined to assign the Capability Level achieved by the assessed process.
  • Figure 9 represents the mapping of attribute ratings to the determination of process Capability Levels.
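  • One plausible reading of this mapping can be sketched in Java for illustration; the class and method names, the threshold that treats an attribute as achieved when rated largely or fully achieved, and the sample data are assumptions and are not taken from Figure 9:

      // Illustrative sketch only: deriving a Capability Level from Process Attribute
      // ratings, per the rule that a level is achieved when all attributes of that
      // level and of the preceding levels are achieved.
      import java.util.*;

      class CapabilityLevelSketch {

          // Four-point scale of achievement described in the text.
          enum Achievement { NOT_ACHIEVED, PARTIALLY_ACHIEVED, LARGELY_ACHIEVED, FULLY_ACHIEVED }

          // attributesByLevel.get(n) holds the ratings of the Process Attributes
          // associated with capability level n (levels 1 through 5).
          static int capabilityLevel(Map<Integer, List<Achievement>> attributesByLevel) {
              int achieved = 0;
              for (int level = 1; level <= 5; level++) {
                  List<Achievement> ratings =
                      attributesByLevel.getOrDefault(level, Collections.<Achievement>emptyList());
                  boolean allAchieved = !ratings.isEmpty();
                  for (Achievement a : ratings) {
                      // Assumed threshold: an attribute counts as achieved only when
                      // rated largely or fully achieved.
                      if (a.ordinal() < Achievement.LARGELY_ACHIEVED.ordinal()) {
                          allAchieved = false;
                      }
                  }
                  if (!allAchieved) {
                      break; // levels build on one another and cannot be skipped
                  }
                  achieved = level;
              }
              return achieved;
          }

          public static void main(String[] args) {
              Map<Integer, List<Achievement>> ratings = new HashMap<>();
              ratings.put(1, Arrays.asList(Achievement.FULLY_ACHIEVED));
              ratings.put(2, Arrays.asList(Achievement.LARGELY_ACHIEVED, Achievement.FULLY_ACHIEVED));
              ratings.put(3, Arrays.asList(Achievement.PARTIALLY_ACHIEVED, Achievement.FULLY_ACHIEVED));
              System.out.println("Capability Level: " + capabilityLevel(ratings)); // prints 2 for this data
          }
      }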
  • the first step is to identify if the appropriate Base Practices are performed at all.
  • the necessary foundation for improving the capability of any process is to at least demonstrate that the Base Practices are being performed.
  • the assessment team may then formulate an objective judgment of the process performance attribute through different means, such as analysis of the work products.
  • Achievement of Base Practices is an indication that Process Area goals are being met.
  • the increasing capability of a process to effectively achieve its goals and objectives is based upon attribute rating.
  • the attribute rating is determined by the performance of the associated Generic Practices.
  • the performance of the Generic Practices associated with each Process Attribute supports the assessment team's judgement of the degree of achievement of the attributes.
  • a Process Category's capability is determined from the capability ratings of its Process Areas. Once all Process Areas of a category are rated, the lowest rating assigned to a Process Area becomes the category rating as well. Similarly, the operational maturity rating is determined from the Process Category ratings within the IT organization. Once all Process Categories are rated, the lowest rating assigned to a Process Category becomes the IT organizational maturity.
  • an assessment team collects the evidence on the implementation of the processes being assessed and determines their compatibility as defined in the framework of the present invention.
  • the objective of the assessment is to identify the differences and the gaps between the actual implementations of the processes in the assessed operational IT organization with respect to the present invention.
  • Using the framework of the present invention ensures that results of assessments can be reported in a common context and provides the basis on which comparisons can be based.
  • the assessment process is used to appraise an organization's IT operations environment process capability. Defining a reference model ensures that results of assessments can be reported in a common context and provides the basis on which comparisons can be based.
  • An IT organization can perform an assessment for a variety of reasons.
  • An assessment can be performed in order to assess the processes in the IT operations environment with the purpose of improving its own work and service processes.
  • An IT organization can also perform an assessment to determine and better manage the risks associated with outsourcing.
  • an assessment can be performed to better understand a single functional area such as systems management, a single process area such as performance management, or the entire IT operations environment.
  • Three phases are defined in the assessment model: Planning and Preparing, Performing, and Distributing Results. All phases of the assessment are performed using a team-based approach. Team members include the client sponsor, the assessment team lead, assessment team members, and client participants.
  • assessment scope refers to organizational entities and components selected for inspection.
  • a clear understanding of the purpose of the framework, constraints, roles, responsibilities, and outputs is needed prior to the start of the assessment. Therefore, in preparation for the assessment, the assessment team lead and the client sponsor work together to reach agreement on the scope and goals of the assessment. Once agreement is reached, the assessment team lead ensures that the IT operational processes selected for the assessment are sufficient to meet the assessment purpose and may provide output that is representative of the assessment scope.
  • An assessment plan is developed based on the goals identified by the client sponsor.
  • the plan consists of detailed schedules for the assessment and potential risks identified with performing the assessment.
  • Assessment team members, assessment participants, and areas to be assessed are selected.
  • Work products are identified for initial review, and the logistics for the on-site visit are identified and planned.
  • the assessment team members must receive adequate training on the framework of the present invention and the assessment process. It is essential that the assessment team be well-trained on the present invention to ensure that they have the ability to interpret the data obtained during the assessment.
  • the team must have a comprehensive understanding of the assessment process, its underlying principles, the tasks necessary to execute it, and their role in performing the tasks.
  • Maturity questionnaires are distributed to participants prior to the client site visit. Maturity questionnaires exist for each process area of the present invention, and tie back to base practices, process attributes and generic practices. Completed questionnaires provide the assessment team with an overview of the IT operational process capability of the IT organization. The responses assist the team in focusing their investigations, and provide direction for later activities such as interviews and document reviews. Assessment team members prepare exploratory questions based on Interview Aids and responses to the maturity questionnaires.
  • Interview Aids refer to a set of exploratory questions about the operations environment which are used during the interview process to obtain more detailed information on the capability of the IT organization.
  • the interview aids are used by the assessment team to guide them through interview sessions with assessment participants.
  • a kick-off meeting is scheduled at the start of the on-site activities.
  • the purpose of the meeting is to provide the participants with an overview of the present invention and the assessment process, to set expectations, and to answer any questions about the process.
  • a client sponsor of the assessment may participate in the presentation to show visible support and stress the importance of the assessment process to everyone involved.
  • Data for the assessment are obtained from several sources: responses to the maturity questionnaires, interview sessions, work products, and document reviews. Documents are reviewed in order to verify compliance. Interviewing provides an opportunity to gain a deeper understanding of the activities performed, how the work is performed, and processes currently in use. Interviewing provides the assessment team members with identifiable assessment indicators for each Process Area appraised. Interviewing also provides the opportunity to address all areas of the present invention within the scope of the assessment.
  • data is collected within the scope of the assessment to identify areas that the IT organization can and should improve.
  • the purpose of solidifying this information is to summarize and consolidate information into a manageable set of findings.
  • the data is then categorized into the Process Areas of the present invention.
  • the assessment team must reach consensus on the validity of the data and whether sufficient information in the areas evaluated has been collected. It is the team's responsibility to obtain sufficient information on the components of the present invention within the scope of the assessment for the required areas of the IT organization before any rating can be done.
  • follow-up interviews may occur for clarification.
  • Initial findings are generated from the information collected thus far, and presented to the assessment participants.
  • the purpose of presenting initial findings is to obtain feedback from the individuals who provided information during the various interviews. Ratings are not considered until after the initial findings presentations, as the assessment team is still collecting data.
  • Initial findings are presented in multiple sessions in order to protect the confidentiality of the assessment participants. Feedback is recorded for the team to consider at the conclusion of all of the initial findings presentations.
  • Examples of assessments associated with the foregoing service desk example are as follows:
  • the rating process may begin.
  • the first step in the rating process is to determine if Process Area goals are being met. Process Area goals are considered met when all base practices are performed. Each process attribute for each Process Area within the assessment scope is then rated. Process attributes are rated based on the existence of and compliance to generic practices.
  • Using the Assessment Indicator Rating template, the assessment team identifies assessment indicators for each process area to determine whether or not process attributes are achieved. Ratings are always established based on consensus of the entire assessment team. Questionnaire responses, interview notes, and documentation are used to support ratings; confirmation from two sources in different contexts (e.g., two people in different meetings) ensures compliance of an activity.
  • the team reviews all weaknesses that relate to the associated generic practices. If the team determines that a weakness is strong enough to impact the process attribute, the process attribute is rated "not achieved.” If it is decided that there are no significant weaknesses that have an impact on a process attribute, it is rated "fully achieved.” For a Process Area to be rated “fully achieved,” all process attributes for the Process Area must be rated “fully achieved.” A Process Area may be rated fully achieved, largely achieved, partially achieved, or not achieved.
  • Assignment of a maturity level rating is optional at the discretion ofthe sponsor. For a particular maturity level rating to be achieved, all Process Areas within and below the maturity level must be satisfied. For example, for an IT organization to be rated at maturity level 4, all Process Areas at level 4, level 3 and at level 2 must have been investigated during the assessment, and all Process Areas must have been rated achieved by the assessment team. The final findings presentation is developed by the team to present to the sponsor and the IT organization the strengths and weaknesses observed for each Process Area within the assessment scope, the ratings of each Process Area, and the maturity level rating if desired by sponsor.
• The final assessment results are presented to the client sponsor. During the final presentation, the assessment team must ensure that the IT organization understands the issues that were discovered during the assessment and the key issues that it faces. Operational strengths are presented to validate what the IT organization is doing well. Strengths and weaknesses are presented for each process area within the assessment scope, as well as any issues that affect the process but are unrelated to the present invention. A Process Area profile is presented showing the individual Process Area ratings in detail.
• An executive overview session is held in order to allow the senior IT Operations manager to clarify any issues with the assessment team, to confirm his or her understanding of the operational process issues, and to gain a full understanding of the recommendations report.
• The assessment team collects feedback on the process from the assessment participants and the assessment team, and packages information that needs to be saved for historical purposes.
  • Figure 10 describes the roles and responsibilities of those involved with the assessment process.
• Figure 11 represents the indicator types and their relationship to the determination of the Process Area rating. As shown, evidence of process performance and process capability is provided by assessment indicators. Such assessment indicators, in turn, consist of base practices and generic practices.
• PA Goals: To define services to be delivered (by application and/or business unit).
• KPIs: Key Performance Indicators
• SLA Management involves the creation, management, reporting, and discussion of Service Level Agreements (SLAs) with users and the providers within Information Technology (IT).
• An SLA is a formal agreement between a user who requires information services and the IT organization responsible for providing those services.
  • SLA Management involves the following areas:
  • SLA Definition The SLA document defines, in specific and quantifiable terms, the level of service that is to be delivered to users. In the enterprise environment, many design and
  • OLA Operations Level Agreements with providers within the organization, as well as external suppliers and vendors.
• An OLA is an agreement between the IT organization and those delivering the constituent services of the system.
  • OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement as supporting services are guaranteed in the OLA.
  • OLA Management involves the following:
• OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
• Which provider(s) can supply a service, or part of a service
• Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
• OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
• OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
• OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs are carefully aligned with current business needs and, if necessary, updated in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
• PA's Base Practices: 1.2.1 Determine operational items; 1.2.2 Group related operational items
• PA Goals: To define a quantifiable service level that represents a minimum level of service for each service delivered.
• OLAs contain, e.g., workloads, cost of service, targets, type of support, etc.
• OLAs outline each key business application, e.g., penalties, tools used to maintain the OLA.
• KPIs: Key Performance Indicators
• Are service measurement metrics specified in the OLA? Are targets for the service measurement metrics specified? If so, how are these targets determined; for example, is the supplier capability gauged and considered?
• The Service Desk provides a single point of contact for users with problems or specific service requests.
• The Service Desk forms part of an organization's strategy to enable users and business communities to achieve business objectives through the use of technology.
• The Service Desk's main objectives are:
• The Service Desk consists of the following functions:
• Incident Management: An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible prior to passing them on to other areas.
• Problem Management: A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
• Request Management: Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: request logging, impact analysis, authorization, and prioritization.
• Do budgets include contingencies for unanticipated growth or product/service needs?
  • Base Practice 1.4.9 Prepare, distribute, and maintain a catalogue of service prices for users
• Process Area Description: Service Pricing is comprised of the following areas:
• Service Pricing & Cost: Service Costing & Pricing projects and monitors costs for the management of operations, provision of service, equipment installation, etc. Based upon the projected cost and business needs, a service pricing strategy may be developed to re-allocate costs within the organization. If developed, the service pricing strategy will be documented, communicated to the users, monitored, and adjusted to ensure that it is both comprehensive and fair.
• Billing & Accounting: The purpose of Billing & Accounting is to gather information for calculating actual costs, determine chargeback costs, and bill users for services rendered.
• Process Area Description: Production Scheduling determines the requirements for the execution of scheduled jobs across a distributed environment. A production schedule is then put in place to meet these requirements, taking into consideration other processes occurring throughout the distributed environment (e.g., software and data distribution, and remote backup/restoration of data).
  • Results of any network performance testing across the network (e.g. RMON, SNMP, etc.)
• Process Area Description: Output and Print Management monitors all of the printing done across a distributed environment and is responsible for managing the printers and the printing for both central and remote locations.
• List of equipment/supplies used for non-typical print jobs (e.g., feeders, inks, etc.)
• The file transfer and control system is set up to handle multiple transfers, and both remote systems and the host complete file transfers successfully.
  • Convert file types e.g., VSAM, PDS, etc.
  • Example File Transfer Type Considerations include:
  • Can file types e.g. VSAM, PDS, etc.
  • Can file types e.g. VSAM, PDS, etc.
• Process Area Description: File Transfer and Control initiates and monitors the files being transferred throughout the system as part of the business processing (e.g., nightly batch runs). File transfers can take place in a bi-directional fashion between hosts, servers and workstations.

Abstract

A system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a user administration process area when gauging a maturity of an operations organization. First, a plurality of process attributes are defined. Next, a plurality of generic practices are determined for each of the process attributes. The generic practices include base practices such as receiving information from human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and/or notifying parties periodically of a user administration status. Thereafter, a maturity of an operations organization is determined based at least in part on the achievement of the generic practices.

Description

A SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR OPERATIONAL MATURITY PROCESS ASSESSMENT VIA CAPABILITY LEVEL DETERMINATION IN A USER ADMINISTRATION PROCESS AREA
FIELD OF INVENTION
The present invention relates to IT operations organizations and more particularly to evaluating a maturity of an operations organization by determining capability levels of a user administration process area.
BACKGROUND OF INVENTION
Triggered by a recent technology avalanche and a highly competitive global market, the management of information systems is undergoing a revolutionary change. Both information technology and business directions are driving information systems management to a fundamentally new paradigm. While business bottom lines are more tightly coupled with information technology than ever before, studies indicate that many CEOs and CFOs feel that they are not getting their money's worth from their IT investments. The complexity of this environment demands that a company have a formal way of assessing its IT capabilities, as well as a specific and measurable path for improving them.
In initiatives to address these issues, various frameworks and gap analyses have been used to capture the best practices of IT management and to determine areas of improvement. While the frameworks and gap analyses are intended to capture weaknesses in processes that are observable, they do not provide data with sufficient objectivity and granularity upon which a comprehensive improvement plan can be built.
There is thus a need to add further objectivity and consistency to conventional framework and gap analysis.
SUMMARY OF INVENTION
A system, method, and article of manufacture consistent with the principles of the present invention are provided for determining capability levels of a user administration process area when gauging a maturity of an operations organization. First, a plurality of process attributes are defined. Next, a plurality of generic practices are determined for each of the process attributes. The generic practices include base practices such as receiving information from human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and/or notifying parties periodically of a user administration status. Thereafter, a maturity of an operations organization is determined based at least in part on the achievement of the generic practices.
The present invention provides a basis for organizations to gauge performance, and assists in planning and tracking improvements to the operations environment. The present invention further affords a basis for defining an objective improvement strategy in line with an organization's needs, priorities, and resource availability. The present invention also provides a method for determining the overall operational maturity of an organization based on the capability levels of its processes.
The present invention can thus be used by organizations in a variety of contexts. An organization can use the present invention to assess and improve its processes. An organization can further use the present invention to assess the capability of suppliers in meeting their commitments, and hence better manage the risk associated with outsourcing and sub-contract management. In addition, the present invention may be used to focus on an entire IT organization, on a single functional area such as service management, or on a single process area such as a service desk.
BRIEF DESCRIPTION OF DRAWINGS
The invention may be better understood when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention;
Figure 2 is a flowchart illustrating generally the steps associated with the present invention;
Figure 3 is an illustration showing the relationships of the process category, process area, and base practices of the operations environment dimension in accordance with one embodiment of the present invention;
Figure 4 is an illustration showing a measure of each process area against the capability levels according to one embodiment of the present invention;
Figure 5 is an illustration showing various determinants of operational maturity in accordance with one embodiment of the present invention;
Figure 6 is an illustration showing an overview of the operational maturity model;
Figure 7 is an illustration showing a relationship of capability levels, process attributes, and generic practices in accordance with one embodiment of the present invention;
Figure 8 is an illustration showing a capability rating of various attributes in accordance with one embodiment of the present invention;
Figure 9 is an illustration showing a mapping of attribute ratings to the determination of process capability levels in accordance with one embodiment of the present invention;
Figure 10 is an illustration showing assessment roles and responsibilities in accordance with one embodiment of the present invention; and
Figure 11 is an illustration showing the process area rating in accordance with one embodiment of the present invention.
DISCLOSURE OF INVENTION
The present invention comprises a collection of best practices, both from a technical and management perspective. The collection of best practices is a set of processes that are fundamental to a good operations environment. In other words, the present invention provides a definition of an "ideal" operations environment, and also acts as a road map towards achieving the "ideal" state.
Figure 1 is a schematic diagram of one possible hardware implementation by which the present invention may be carried out. As shown, the present invention may be practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.
A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with one embodiment having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112. The workstation shown in Figure 1 includes a Random Access Memory (RAM) 114, Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, communication adapter 134 for connecting the workstation to a communication network 135 (e.g., a data processing network) and a display adapter 136 for connecting the bus 112 to a display device 138.
The workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system. Those skilled in the art may appreciate that the present invention may also be implemented on other platforms and operating systems.
A preferred embodiment of the present invention is written using Java, C, and C++ and utilizes object oriented programming methodology. Object oriented programming (OOP) has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP.
OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program. An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
In general, OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture. A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point. An object is a single instance of the class of objects, which is often just called a class. A class of objects can be viewed as a blueprint, from which many objects can be formed.
OOP allows the programmer to create an object that is a part of another object. For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston. In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
OOP also allows creation of an object that "depends from" another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition. A ceramic piston engine does not make up a piston engine. Rather, it is merely one kind of piston engine that has one more limitation than the piston engine: its piston is made of ceramic. In this case, the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it. The object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.
When the object or class representing the ceramic piston engine inherits all of the aspects of the object representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class. However, the ceramic piston engine object overrides these thermal characteristics with ceramic-specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons. Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.). To access each of these functions in any piston engine object, a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of the functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
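By way of illustration only (this sketch is not part of the original disclosure, and all class and method names are invented), the piston engine relationship described above could be expressed in Java along the following lines:

    // Base class: a generic piston engine with default characteristics.
    class PistonEngine {
        int pistonCount() { return 4; }
        String pistonThermalCharacteristics() { return "metal piston: higher thermal expansion"; }
        void describe() {
            System.out.println(pistonCount() + " pistons, " + pistonThermalCharacteristics());
        }
    }

    // Derived class: it "depends from" PistonEngine, inheriting everything and
    // overriding only the thermal characteristics of the piston.
    class CeramicPistonEngine extends PistonEngine {
        @Override
        String pistonThermalCharacteristics() { return "ceramic piston: lower thermal expansion"; }
    }

    public class EngineDemo {
        public static void main(String[] args) {
            // Polymorphism: the same call name resolves to different implementations.
            PistonEngine[] engines = { new PistonEngine(), new CeramicPistonEngine() };
            for (PistonEngine e : engines) {
                e.describe();
            }
        }
    }

Calling describe() on either object invokes the same named function, yet the ceramic variant supplies its own implementation behind that name.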
With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
• Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.
• Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.
• An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.
• An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.
With this enormous capability of an object to represent just about any logically separable matter, OOP allows the software developer to design and implement a computer program that is a model of some aspect of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
If 90% of a new OOP software program consists of proven, existing components made from preexisting reusable objects, then only the remaining 10% of the new software project has to be written and tested from scratch. Since 90% already came from an inventory of extensively tested reusable objects, the potential domain from which an error could originate is only 10% of the program. As a result, OOP enables software developers to build objects out of other, previously built objects.
This process closely resembles complex machinery being built out of assemblies and sub-assemblies. OOP technology, therefore, makes software engineering more like hardware engineering in that software is built from existing components, which are available to the developer as objects. All this adds up to an improved quality of the software as well as an increased speed of its development.
Programming languages are beginning to fully support the OOP principles, such as encapsulation, inheritance, polymorphism, and composition-relationship. With the advent of the C++ language, many commercial software developers have embraced OOP. C++ is an OOP language that offers fast, machine-executable code. Furthermore, C++ is suitable for both commercial-application and systems-programming projects. For now, C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
The benefits of object classes can be summarized, as follows:
• Objects and their corresponding classes break down complex programming problems into many smaller, simpler problems.
• Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.
• Subclassing and inheritance make it possible to extend and modify objects through deriving new kinds of objects from the standard classes available in the system. Thus, new capabilities are created without having to start from scratch.
• Polymorphism and multiple inheritance make it possible for different programmers to mix and match characteristics of many different classes and create specialized objects that can still work with related objects in predictable ways.
• Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real- world objects and the relationships among them.
• Libraries of reusable classes are useful in many situations, but they also have some limitations. For example:
• Complexity. In a complex system, the class hierarchies for related classes can become extremely confusing, with many dozens or even hundreds of classes.
• Flow of control. A program written with the aid of class libraries is still responsible for the flow of control (i.e., it must control the interactions among all the objects created from a particular library). The programmer has to decide which functions to call at what times for which kinds of objects.
• Duplication of effort. Although class libraries allow programmers to use and reuse many small pieces of code, each programmer puts those pieces together in a different way.
Two different programmers can use the same set of class libraries to write two programs that do exactly the same thing but whose internal structure (i.e., design) may be quite different, depending on hundreds of small decisions each programmer makes along the way. Inevitably, similar pieces of code end up doing similar things in slightly different ways and do not work as well together as they should.
Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again. A relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small-scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. Frameworks were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers.
Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others. In the early days of procedural programming, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.
The development of graphical user interfaces began to turn this procedural programming arrangement inside out. These interfaces allow the user, rather than program logic, to drive the program and decide when certain actions should be performed. Today, most personal computer software accomplishes this by means of an event loop which monitors the mouse, keyboard, and other sources of external events and calls the appropriate parts of the programmer's code according to actions that the user performs. The programmer no longer determines the order in which events occur. Instead, a program is divided into separate pieces that are called at unpredictable times and in an unpredictable order. By relinquishing control in this way to users, the developer creates a program that is much easier to use. Nevertheless, individual pieces of the program written by the developer still call libraries provided by the operating system to accomplish certain tasks, and the programmer must still determine the flow of control within each piece after it's called by the event loop. Application code still "sits on top of" the system.
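As a purely illustrative sketch (the names are invented and no particular windowing system is implied), an event loop of the kind described above can be reduced to a queue of externally generated events that the loop, rather than the application, decides when to dispatch:

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class EventLoopDemo {
        // A minimal event; real systems carry richer data (coordinates, key codes, etc.).
        record Event(String kind) {}

        public static void main(String[] args) {
            Queue<Event> pending = new ArrayDeque<>();
            pending.add(new Event("mouseClick"));
            pending.add(new Event("keyPress"));
            pending.add(new Event("quit"));

            // The loop monitors the event sources and calls the appropriate handler code.
            while (!pending.isEmpty()) {
                Event e = pending.poll();
                switch (e.kind()) {
                    case "mouseClick" -> System.out.println("handler: mouse clicked");
                    case "keyPress"   -> System.out.println("handler: key pressed");
                    case "quit"       -> System.out.println("handler: shutting down");
                    default           -> System.out.println("unhandled event: " + e.kind());
                }
            }
        }
    }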
Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application. The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
Application frameworks reduce the total amount of code that a programmer has to write from scratch. However, because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit. The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
Thus, as is explained above, a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
There are three main differences between frameworks and class libraries:
• Behavior versus protocol. Class libraries are essentially collections of behaviors that one can call when one wants those individual behaviors in a program. A framework, on the other hand, provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.
• Call versus override. With a class library, the code the programmer writes instantiates objects and calls their member functions. It is possible to instantiate and call objects in the same way with a framework (i.e., to treat the framework as a class library), but to take full advantage of a framework's reusable design, a programmer typically writes code that overrides and is called by the framework. The framework manages the flow of control among its objects. Writing a program involves dividing responsibilities among the various pieces of software that are called by the framework rather than specifying how the different pieces should work together.
• Implementation versus design. With class libraries, programmers reuse only implementations, whereas with frameworks, they reuse design. A framework embodies the way a family of related programs or pieces of software work. It represents a generic design solution that can be adapted to a variety of specific problems in a given domain. For example, a single framework can embody the way a user interface works, even though two different user interfaces created with the same framework might solve quite different interface problems.
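The "call versus override" distinction can be made concrete with a small hypothetical Java sketch (names invented for this example): the framework fixes the flow of control in run() and calls back into hook methods that the application overrides:

    // The "framework": it owns the flow of control.
    abstract class ReportFramework {
        // Hooks the framework calls; the application overrides these.
        protected abstract String loadData();
        protected abstract String formatData(String raw);

        // Default behavior the application may simply inherit.
        protected void publish(String formatted) {
            System.out.println("publishing: " + formatted);
        }

        // The application calls run(); the framework decides when the hooks fire.
        public final void run() {
            String raw = loadData();
            String formatted = formatData(raw);
            publish(formatted);
        }
    }

    // The "application": supplies only the pieces the framework asks for.
    class SalesReport extends ReportFramework {
        @Override protected String loadData() { return "42 units sold"; }
        @Override protected String formatData(String raw) { return "[SALES] " + raw; }
    }

    public class FrameworkDemo {
        public static void main(String[] args) {
            new SalesReport().run();   // control remains with the framework
        }
    }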
Thus, through the development of frameworks for solutions to various problems and programming tasks, significant reductions in the design and development effort for software can be achieved. A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connolly, "RFC 1866: Hypertext Markup Language - 2.0" (Nov. 1995); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol - HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996). HTML is a simple data format used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.
HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing - Text and Office Systems - Standard Generalized Markup Language (SGML).
To date, Web development tools have been limited in their ability to create dynamic Web applications which span from client to server and interoperate with existing computing resources. Until recently, HTML has been the dominant technology used in development of Web-based solutions. However, HTML has proven to be inadequate in the following areas:
• Poor performance;
• Restricted user interface capabilities;
• Can only produce static Web pages;
• Lack of interoperability with existing applications and data; and
• Inability to scale.
Sun Microsystems' Java language solves many of the client-side problems by:
• Improving performance on the client side;
• Enabling the creation of dynamic, real-time Web applications; and
• Providing the ability to create a wide variety of user interface components.
With Java, developers can create robust User Interface (UI) components. Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved. Unlike HTML, Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance. Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.
Sun's Java language has emerged as an industry-recognized language for "programming the Internet." Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language. Java supports programming for the Internet in the form of platform-independent Java applets." Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API), allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g., Netscape Navigator) by copying code from the server to the client. From a language standpoint, Java's core feature set is based on C++. Sun's Java literature states that Java is basically, "C++ with extensions from Objective C for more dynamic method resolution."
Another technology that provides similar function to Java is provided by Microsoft and ActiveX Technologies, which give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers. ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content. The tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies. The group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages. ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, the Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta." ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications. One of ordinary skill in the art readily recognizes that ActiveX could be substituted for Java without undue experimentation to practice the invention.
One embodiment of the present invention includes three different, but complementary, dimensions that together provide a framework which can be used in assessing and rating the IT operations of an organization. The following three dimensions constitute the framework of the present invention: 1) Operations Environment Dimension, 2) Capability Dimension, and 3) Maturity Dimension.
The first dimension describes and organizes the standard operational activities that any IT organization should perform. The second dimension provides a context for evaluating the performance quality of these operational activities. This dimension specifies the qualitative characteristics of an operations environment and orders these characteristics on a scale denoting rising capability. The final dimension uses this capability scale and outlines a method for deriving a capability rating for specific IT process groups and the entire organization.
The Operations Environment and Capability dimensions provide the foundation for determining the quality or capability level of the organization's IT operations. The Operations Environment dimension can be viewed as a descriptive mapping of a model operations environment. In a similar manner, the Capability dimension can be construed as a qualitative mapping of a model operations environment. The Maturity dimension builds on the foundation set by these two dimensions to provide a method for rating the maturity level of the entire IT organization.
Figure 2 is a flow chart illustrating the various steps associated with the different dimensions of the present invention. As shown, a plurality of process areas of an operations organization are first defined in terms of either a goal or a purpose in operation 200. The process areas are then grouped into categories, as indicated in operation 202. It should be noted that the categories are grouped in terms of process areas having common characteristics.
Next, in operation 204, process capabilities are received for the process areas ofthe operations organization. Such data may be generated via a maturity questionnaire which includes a set of questions about the operations environment that sample the base practices in each process area of the present invention. The questionnaire may be used to obtain information on the capability of the IT organization, or a specific IT area or project.
Thereafter, category capabilities are calculated for the categories of the process areas in operation 206. A maturity of the operations organization is subsequently determined based on the category capabilities of the categories in operation 208.
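The flow of operations 200 through 208 can be summarized as a simple aggregation: process areas are defined and grouped into categories, a capability level is received for each process area, a category capability is derived from its areas, and an overall maturity follows from the category capabilities. The Java sketch below is illustrative only; the names, the sample levels, and the use of the minimum as the aggregation rule are assumptions made for this example rather than a definitive implementation of the claimed method:

    import java.util.List;
    import java.util.Map;

    public class MaturitySketch {
        // Operation 204: a capability level (1..5) is received for each process area,
        // e.g. derived from maturity questionnaire responses.
        static int categoryCapability(Map<String, Integer> areaCapabilities) {
            // Operation 206: here a category is assumed to be only as capable
            // as its least capable process area.
            return areaCapabilities.values().stream().min(Integer::compare).orElse(0);
        }

        // Operation 208: overall maturity derived from the category capabilities.
        static int organizationMaturity(List<Integer> categoryCapabilities) {
            return categoryCapabilities.stream().min(Integer::compare).orElse(0);
        }

        public static void main(String[] args) {
            // Operations 200/202: process areas defined and grouped into categories.
            Map<String, Integer> serviceManagement =
                    Map.of("Service Level Management", 3, "Service Desk", 2, "User Administration", 3);
            Map<String, Integer> systemsManagement =
                    Map.of("Production Scheduling", 2, "Output and Print Management", 3);

            List<Integer> categories = List.of(
                    categoryCapability(serviceManagement),
                    categoryCapability(systemsManagement));

            System.out.println("Organizational maturity (illustrative): "
                    + organizationMaturity(categories));   // prints 2
        }
    }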
The user-specified or measured parameters, i.e., the capability of each of the process areas, may be inputted by any input device, such as the keyboard 124, the mouse 126, the microphone 132, a touch screen (not shown), or anything else such as an input port that is capable of relaying such information. Further, the definitions, grouping, calculations and determinations may be carried out manually or via the CPU 110, which in turn may be governed by a computer program stored on a computer readable medium, i.e., the RAM 114, ROM 116, the disk storage units 120, and/or anything else capable of storing the computer program. In the alternative, dedicated hardware such as an application specific integrated circuit (ASIC) may be employed to accomplish the same. As an option, any one or more of the definitions, grouping and determinations may be carried out manually or in combination with the computer.
Further, the outputting of the determination of the maturity of the operations organization may be effected by way of the display 138, the speaker 128, a printer (not shown) or any other output mechanism capable of delivering the output to the user. It should be understood that the foregoing components need not be resident on a single computer, but may also be a component of either a networked client and/or a server.
Operations Environment Dimension
The Operations Environment Dimension is characterized by a set of process areas that are fundamental to the effective technical execution of an operations environment. More particularly, each process is characterized by its goals and purpose, which are the essential measurable objectives of a process. Each process area has a measurable purpose statement, which describes what has to be achieved in order to attain the defined purpose of the process area.
In the present description, goals refer to a summary of the base practices of a process area that can be used to determine whether an organization or project has effectively implemented the process area. The goals signify the scope, boundaries, and intent of each process area.
The process goals and purpose may be achieved in an IT organization through various lower level activities, such as tasks and practices, that are carried out to produce work products. These performed tasks, activities and practices, and the characteristics of the work products produced, are the indicators that demonstrate whether the specific process goals or purpose is being achieved.
In the present description, a work product describes evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or a service level agreement (SLA) report.
The operations environment is partitioned into a three-level hierarchy: Process Categories, Process Areas, and Base Practices, which reflect processes within any IT organization. Figure 3 depicts and summarizes the relationship of the Process Categories 300, Process Areas 302, and Base Practices 304 of the Operations Environment Dimension. This breakdown provides a grouping by type of activity. The activities characterize the performance of a process. The three-level hierarchy is described as follows.
Process Categories (300)
In the present description, a Process Category has a defined purpose and measurable goals and consists of a logically related set of Process Areas that collectively address the purpose and goals in the same general area of activity.
The purpose of Process Categories is to organize Process Areas according to common IT functional characteristics. There are four process categories defined in the present invention: Service Management, Systems Management, Managing Change, and IT Operations Planning. Process Categories are described as follows:
[Tables describing the Process Categories appear as figures in the original document.]
Process Areas (302)
Process Areas are the second level in the operations hierarchy. The elements of this level are a collection of Base Practices that are performed to achieve the defined purpose of the Process Area.
In the present description, Process Areas refer to a collection of Base Practices that are performed sequentially, concurrently and/or iteratively to achieve the defined purpose of the process area. The purpose describes the unique functional objectives of the process area when instantiated in a particular environment. Satisfying the purpose statement of a process area represents the first step in building process area capability.
Examples of Process Areas for the Service Management Category include service level management, operations level management, service desk, user administration, and service pricing. To illustrate further, the purpose of service level management may be to document the information technology services to be delivered to users. Note that this purpose states a unique functional objective (to establish requirements), and provides a context (service level).
Base Practices (304)
Base Practices are the lowest level in the operation hierarchy. Base Practices are essential activities that an IT organization performs to achieve the purpose of a Process Area. A base practice is what an IT organization does.
For example, Base Practices of service level management may be to assess business strategy, audit current service levels, determine service requirements and IT's ability to deliver services, prepare a draft SLA, identify the charge-back structure, and agree to SLAs with customers. The Process Areas are expressed in terms of their goals, whereas Base Practices are tasks that need to be carried out to achieve those goals. Base Practices may have work products associated with them. A work product is evidence of base practice implementation, for example, a completed change control request, a resolved trouble ticket, and/or an SLA report.
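The three-level hierarchy of Process Categories, Process Areas, and Base Practices maps naturally onto a simple containment data model. The following Java sketch is hypothetical (the names and sample entries are chosen for illustration) and is not an implementation taken from the patent:

    import java.util.List;

    public class OperationsModel {
        record BasePractice(String description) {}

        record ProcessArea(String name, String purpose, List<BasePractice> basePractices) {}

        record ProcessCategory(String name, List<ProcessArea> processAreas) {}

        public static void main(String[] args) {
            ProcessArea serviceLevelManagement = new ProcessArea(
                    "Service Level Management",
                    "Document the IT services to be delivered to users",
                    List.of(new BasePractice("Assess business strategy"),
                            new BasePractice("Audit current service levels"),
                            new BasePractice("Prepare a draft SLA"),
                            new BasePractice("Agree SLAs with customers")));

            ProcessCategory serviceManagement = new ProcessCategory(
                    "Service Management", List.of(serviceLevelManagement));

            System.out.println(serviceManagement.name() + " contains "
                    + serviceManagement.processAreas().size() + " process area(s).");
        }
    }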
A service desk example of a process area and associated base practices is as follows:
[Tables presenting the service desk Process Area and its associated Base Practices appear as figures in the original document.]
Capability Dimension
In the present description, Capability Dimension refers to formalizing the process performance into a quantifiable range of expected results based on the process capability level that can be achieved by following the process. The process capability dimension characterizes the level of capability of each process area within an organization. In other words, the process capability dimension describes how well the processes in the process dimension are performed.
The Capability Dimension measures how well an IT organization performs its operational processes. In determining capabilities, the Base Practices are viewed as a guide to what should be done. The related Generic Practices deal with the effectiveness with which the Base Practices are carried out. Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. The present invention has five levels of Process Capability that can be applied to any Process Area. The Capability Dimension provides a means to formalize and quantify the process performance. The Capability Dimension describes how well the processes are performed, as contrasted with Base Practices, which describe what an IT organization does.
The Capability Dimension consists of three components: Capability Levels, Process Attributes, and Generic Practices. These are described below.
Capability Levels
In the present description, Capability Levels indicate increasing levels of process maturity and are comprised of one or more generic practices that work together to provide a major enhancement in the capability to perform the process.
The Capability Level is the highest level of the Capability dimension. The Capability Level of a process determines its performance and effectiveness. Each Capability Level has certain Process Attributes associated with it. A Process Attribute is comprised of a set of Generic Practices that provide criteria for improving performance. A particular Capability Level is achieved when all the Process Attributes associated with it and with preceding levels are present. Therefore, once the Capability Level is determined, those Process Attributes - and associated Generic Practices - that are required to enhance capability can be identified. In other words, Capability Levels offer a staged guideline for improving the capability to perform the defined processes.
Capability Levels provide two benefits: they acknowledge dependencies and relationships among the Base Practices of a Process Area, and they help an IT organization identify which improvements should be performed first, based on a plausible sequence of process implementation.
Each level provides a major enhancement in capability over that provided by its predecessors in the fulfillment of the process purpose. For example, at Capability Level 1, Base Practices are performed. The performance is ad hoc, informal, and unpredictable. At Capability Level 2, the performing of Base Practices is planned and tracked rather than just performed - thereby offering a significant improvement over Level 1 practice.
In this architecture, the Capability Levels are applied to each Process Area independent of other Process Areas. An assessment is performed to determine Process Capability for each Process Area, as illustrated in Figure 4.
In the present description, an assessment refers to a diagnostic performed by a trained team to evaluate aspects of an organization's IT operations environment processes. The trained team determines the state of the operational processes, identifies pressing operational process related issues, and obtains organizational support for a process improvement program.
Therefore, different Process Areas can, and may, exist at different levels of capability. The ability to rate Process Areas independently enables an IT organization to focus on process improvement priorities driven from business goals and strategic directions. An example of this is illustrated in Figure 4.
Process Attributes
In the present description, process attributes refer to features of a process that can be evaluated on a scale of achievement (performed, partially performed, not performed, etc.) which provide a measure of the capability of the process.
Within the framework of the present invention, measures of capability are based on a set of nine Process Attributes. Process Attributes are used to determine whether a process has reached a given capability. The nine Process Attributes are: Process Performance, Performance Management, Work Product Management, Process Definition, Process Resource, Process Measurement, Process Control, Process Change, and Continuous Improvement.
The attributes are evaluated on a four-point scale of achievement. Achieving a given Capability Level depends on the rating assigned to one or more of these attributes.
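The following is a rough sketch of how attribute ratings might map to a capability level, in the spirit of Figures 8 and 9 but with invented names and a deliberately simplified rule: each level is gated here by a single representative attribute and is credited only when that attribute and those of all lower levels are fully achieved. It is an illustrative assumption, not the patented rating method:

    import java.util.Map;

    public class CapabilityRating {
        enum Achievement { NOT_ACHIEVED, PARTIALLY_ACHIEVED, LARGELY_ACHIEVED, FULLY_ACHIEVED }

        // Simplified assumption: one representative attribute per level.
        // The actual model associates one or more of the nine attributes with each level.
        static final Map<Integer, String> LEVEL_ATTRIBUTE = Map.of(
                1, "Process Performance",
                2, "Performance Management",
                3, "Process Definition",
                4, "Process Measurement",
                5, "Continuous Improvement");

        static int capabilityLevel(Map<String, Achievement> ratings) {
            int level = 0;
            // Levels build on one another, so stop at the first level whose
            // attribute is not fully achieved.
            for (int l = 1; l <= 5; l++) {
                Achievement a = ratings.getOrDefault(LEVEL_ATTRIBUTE.get(l), Achievement.NOT_ACHIEVED);
                if (a != Achievement.FULLY_ACHIEVED) break;
                level = l;
            }
            return level;
        }

        public static void main(String[] args) {
            Map<String, Achievement> ratings = Map.of(
                    "Process Performance", Achievement.FULLY_ACHIEVED,
                    "Performance Management", Achievement.FULLY_ACHIEVED,
                    "Process Definition", Achievement.LARGELY_ACHIEVED);
            System.out.println("Capability level: " + capabilityLevel(ratings));  // prints 2
        }
    }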
Generic Practices
In the present description, Generic Practices refer to activities that contribute to the capability of managing and improving the effectiveness of the operations environment Process Areas. A generic practice is applicable to any and all Process Areas. It contributes to overall process management, measurement, and the institutionalization capability of the Process Areas.
For example, the allocation of adequate resources to a process is a Generic Practice and is applicable to all processes. Service Level Management and Migration Control are two different Process Areas with different Base Practices, goals, and purposes. However, they share the same Generic Practice of allocation of adequate resources.
Maturity Dimension
Operational Maturity Dimension characterizes the maturity of an entire operations IT organization. In the present description, maturity refers to the degree of order (structure or systemization) and effectiveness of a process. The degree of order determines its state of maturity. Less mature processes are less ordered and less effective; more mature processes are more ordered and more effective.
The Capability Dimension focuses on the determination of the capability of individual processes, within an operations organization, in achieving their stated goals and purpose. The Operational Maturity Dimension determines the IT organizational maturity by focusing on a collection of processes at a certain level of capability in order to characterize the evolution of the operations IT organization as it improves.
The term Maturity, in the overall context of the present invention, is applied to an IT organization as a whole. The Maturity Level is determined by the Capability Level of the four Process Categories: Service Management, Systems Management, Managing Change, and IT Operations Planning. Operational maturity is defined by a staged model, wherein an operational maturity level 500 cannot be reached until all Process Categories driving it have themselves reached a certain maturity level. Similarly, a category Capability Level 502 cannot be reached until all Process Areas 302 contained in it have reached a certain Process Capability Level 504. This staging is illustrated in Figure 5.
In the present description, Maturity Level refers to a sequence of key intermediate states leading to the goal state. Each state builds incrementally on the preceding state.
Even though it is recommended that an entire operational assessment be conducted, the assessment tool of the present invention is flexible enough to accommodate an assessment of a Process Category or just a Process Area. As shown in Figure 5, an assessment could end at the Process Area Level with the Process Capability Level or Process Area Maturity determined. An assessment could also be performed to assess all the Process Areas within a Process Category to determine the Process Category Maturity Level.
The framework of the present invention, which consists of the three dimensions described previously, is illustrated in Figure 6. The Operations Environment Dimension 600, the box in the center of Figure 6, divides all IT processes into Process Categories 300. Process Categories 300 divide into a finite number of Process Areas 302. Process Areas 302 consist of a finite number of Base Practices 304.
Each Process Area within a category is assigned a Capability Level 504 based on the performance of Process Attributes 601 comprised of a finite number of Generic Practices 602 applicable to that process (shown in the box on the right).
In turn, the IT organization's operational maturity 603 is based on a clustering of process capabilities, as illustrated in the third box to the left. The framework of the present invention is designed to support an IT organization's need to assess and improve its operational capability. The structure of the model enables a consistent appraisal methodology to be used across diverse Process Areas. The distinction between essential operations and process management-focused elements therefore allows a systematic approach to process improvement.
Capability Determination
As described in the previous section, the Capability Dimension of the present invention measures how capable an IT organization is in achieving the purpose of its various Process Areas. Within the context of the present invention, Capability Levels, Process Attributes, and Generic Practices describe the Process Capability. In this section, the Capability Levels, their characteristics, the Process Attributes, and the Generic Practices that comprise them are discussed in more detail.
The present invention has five levels of Process Capability that can be applied to any Process
Area. As mentioned before, Generic Practices are grouped by Process Attributes, and Process Attributes determine the Capability Level. Capability Levels build upon one another; levels cannot, therefore, be skipped.
Figure 7 tabulates the relationship of Generic Practices and Process Attributes to Capability
Levels.
The following section explains in greater detail what is meant by Level 1, Level 2, and so forth. Each Level is described in terms of its characteristics and the Generic Practices (GP) assigned to it.
Level 1: Performed Informally
At this Level, all Base Practices are generally performed, but operations may be ad hoc and occasionally chaotic. Consistent planning and tracking of performance is not performed. Good performance depends on individual knowledge and effort. Operational support and services are generally adequate, but quality and efficiency depend on how well individuals within the IT organization perceive that tasks should be performed. The capability to perform an activity is not generally repeatable or transferable.
Process Attribute
ATT 1A: Process Performance - the extent to which the execution of the process employs a set of practices which uses identifiable input work products to produce identifiable output work products that are adequate to satisfy the purpose of the process.
In order to achieve this capability, Base Practices of the process must be implemented and work products must be produced that satisfy the process purpose. The related Generic Practice is:
GP1.1 Ensure that Base Practices are performed. When all base practices are performed, the purpose of the process area is satisfied. A process may exist but it may be informal and undocumented.
Level 2: Planned and Tracked
At this Level, performance of the Base Practices in the Process Area is planned and tracked. The necessary discipline is in place to repeat earlier successes with similar characteristics.
There is general recognition that the Process Area performance is dependent on how efficiently the Base Practices are implemented. Work products, such as completed change control requests, resolved trouble tickets, etc., which are related to base practice implementation, are periodically reviewed and placed under version control. Corrective action is taken when variances in services and work products occur.
Process Attribute
ATT 2A: Performance Management - the extent to which the execution of the process is managed in order to produce work products within a stated time and resource requirement. The related Generic Practices are:
GP2.1 Establish and maintain a policy for performing operational tasks.
Policy is a visible way for the operations environment personnel and the management team to set expectations. The form of policies varies widely depending on the local culture. Policy typically specifies that plans are documented, managed and controlled, and that reviews are conducted. Policy provides guidance for performing the operational tasks and processes.
GP2.2 Allocate sufficient resources to meet expectations.
Resources include adequate funding, appropriate physical facilities, skilled people, and appropriate tools. This practice ensures that the level of effort, appropriate skills mix, tools, workspace, and other direct resources are available to perform the operational tasks and processes.
GP2.3 Ensure personnel receive the appropriate type and amount of training. Ensure that the individuals are appropriately trained on how to perform the operational tasks and processes. Training provides a common basis for repeatable performance. Even if the operations personnel or management have satisfactory technical skills and knowledge, there is almost always a need to establish a common understanding of the operational process activities and how skills are applied in them. Training, and how it is delivered, may change with process capability due to changes in how the process is performed and managed.
GP2.4 Collect data to measure performance. The use of measurement implies that the metrics have been defined and selected, and data has been collected. Building a history of measures, such as cost and schedule variances, is a foundation for managing by data. Quality measures may be collected and used, but result in maximum impact at Level 4 when they are subjected to quantitative process control.
GP2.5 Maintain communication among team members.
Open communication ensures that there is common understanding, that decisions are consensual, and that team members are kept aware of decisions made. Communication is needed when changes are made to plans, products, processes, activities, requirements, and responsibilities. The commitments, expectations, and responsibilities are documented and agreed upon within the project group. Commitment may be obtained by negotiation, by using input and feedback, or through joint development of solutions to issues. Issues are tracked and resolved within the group. Communication occurs periodically and whenever the status changes. The participants have access to data, status information, and recommended actions.
Process Attribute
ATT 2B: Work Product Management - the extent to which the process is managed to produce work products that are documented and controlled, and that meet their functional and nonfunctional requirements, in line with the work product quality goals of the process. In order to achieve this capability, a process needs to have stated functional and non-functional requirements for work products, including integrity, and to produce work products that fulfill the stated requirements. The related Generic Practices are:
GP2.6 Ensure work products satisfy documented requirements.
Requirements may come from the business customer, policies, standards, laws, regulations, etc. The applicable requirements are documented and available for verification activities.
GP2.7 Employ version control to manage changes to work products. Place identified work products under version control or configuration management to provide a means of controlling work products and services.
Level 3: Well-Defined
At Level 3, Base Practices are performed with the assistance of an available, well-defined, and operations-wide process infrastructure. The processes are tailored to meet the specific needs of a certain practice.
Data from using the process are gathered to determine if modifications or improvements should be made. This information is used in planning and managing the day-to-day execution of multiple projects within the IT organization, and for short and long-term process improvement.
Once the environment is stable, common practices for performing the processes are collected, defined in a consistent manner, and used as the basis for long-term improvement across the operations environment. At this level, the proper mechanism is in place to distribute knowledge and experience throughout the operations environment.
Process Attribute
ATT 3A: Process Resource - the extent to which the execution of the process uses suitably skilled human resources and process infrastructure effectively to contribute to the defined business goals of the operations environment.
In order to achieve this capability, a process needs to have an infrastructure available that fulfills stated needs, and adequate human resources. The related Generic Practices are:
GP3.1 Define policies and procedures at an IT level.
Policies, standards, and procedures are established at an IT level for common use throughout the operations environment.
GP3.2 Define tasks that satisfy the process purpose and business goals consistently and repeatedly. This includes:
• Identifying the standard process from those available in the IT organization that is appropriate to the process purpose and the business goals of the IT organization.
• Tailoring the standard process to obtain a defined process appropriate for the task at hand.
• Implementing the defined process to achieve the process purpose consistently and repeatedly, and to support the business goals of the organization.
Process Attribute
ATT 3B: Process Definition - the extent to which the execution of the process uses a definition, based upon a standard process, that enables it to contribute to the defined business goals of the IT organization.
In order to achieve this capability, a process needs to be executed according to a standard definition that has been suitably tailored to the needs of the process instance. The standard process needs to be capable of supporting the stated business goals of the IT organization. The related Generic Practices are:
GP3.3 Plan for human resources proactively. Unlike training at Capability Level 2, this practice embodies the pro-active planning of personnel. This includes the selection of proper work forces, training, and dissemination.
GP3.4 Provide feedback in order to maintain knowledge and experience. The standard process repository is to be kept up-to-date through a continuous feedback system based on experiences gained from using the defined process.
Level 4: Quantitatively Controlled
At this Level, processes and services are quantitatively measured, understood, and controlled.
Detailed measures of performance are collected and analyzed. Establishing common processes within an operations environment enables more sophisticated methods of performing activities. These activities include controlling processes and results quantitatively, integrating processes across groups, and fine-tuning processes for different services.
At this Level, measurable process goals are established for each defined process and associated services. Detailed measures of performance are collected and analyzed. This data enables quantitative understanding of the processes and an improved ability to predict performance. Performance is objectively managed, the quality of services is quantitatively known, and defects are selectively identified and corrected.
Process Attribute
ATT 4A: Process Measurement - the extent to which measures are used to ensure that the implementation of the process supports its execution, and contributes to the achievement of IT organizational goals.
In order to achieve this capability, a process needs to have defined measures that enable its execution to be controlled. The related Generic Practices are:
GP4.1 Establish measurable quality objectives for the operations environment.
These quality objectives can be tied to the strategic quality goals of the IT organization, the particular needs and priorities of the customer, or the tactical needs of a specific group or project. The measurements referred to here go beyond the traditional service level and end product measurements. They are intended to imply sufficient understanding of the processes being used to enable the IT organization to set and use intermediate goals for work-product quality.
GP4.2 Automate data collection.
Process definitions are modified to reflect the quantitative nature of process performance. Measurements become inherent in the process definition and are collected as the process is being performed.
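Purely for illustration, one way such automated, in-process data collection might look in practice is sketched below. This is not part of the present invention; the decorator, function names, and the metric recorded (step duration) are all hypothetical, and an actual implementation would feed whatever measures the process definition specifies into the measurement repository.

    import time
    from functools import wraps

    collected_metrics = []   # stand-in for the measurement repository

    def measured(step_name):
        # Hypothetical decorator: measurement is built into the process step itself,
        # so data is collected automatically as the step is performed (GP4.2).
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.time()
                result = fn(*args, **kwargs)
                collected_metrics.append({"step": step_name,
                                          "duration_s": time.time() - start})
                return result
            return wrapper
        return decorator

    @measured("resolve_trouble_ticket")
    def resolve_trouble_ticket(ticket_id):
        ...  # the operational task itself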
Process Attribute
ATT 4B: Process Control - the extent to which the execution of the process is controlled through the collection and analysis of measures that correct the performance of the process in order to reliably achieve the defined process goals. The related Generic Practices are:
GP4.3 Provide adequate resources and infrastructure for data collection.
Since the success of Level 4 depends fundamentally on the collection of proper data, automated methods should be in place to collect them. This includes software tools and meaningful placement of appropriate metrics for collection of the relevant data.
GP4.4 Use data analysis methods and tools to manage and improve the process.
This includes the identification of analysis and control techniques appropriate to the process; the provision of adequate resources and infrastructure for analysis and process control; analysis of available measures to identify process control parameters; and, identification of deviations and employment of corrective actions.
Level 5: Continuously Improving
Level 5 is the highest achievement level from the viewpoint of Process Capability.
Continuous process improvement is enabled by quantitative feedback from the process and from pilot studies of innovative ideas and new technology. A focus on widespread, continuous improvement should permeate the IT organization. The IT organization should establish quantitative performance goals for process effectiveness and efficiency, based on its business goals and strategic objectives.
Once critical business objectives are consistently evaluated and compared against process capability, continuous improvement can be institutionalized within the operations environment.
This results in a cycle of continuous learning.
Process Attribute
ATT 5A: Continuous Improvement - the extent to which changes to the process are identified and implemented to ensure continuous improvement in the fulfillment of the defined business goals of the IT organization.
In order to achieve this capability, it is necessary to continuously identify and implement improvements to the tailored process, and provide input to make changes to the standard process definition. The related Generic Practices are:
GP5.1 Continually improve tasks and processes.
Improvements may be based on incremental operational refinements or on innovations, such as new technologies. Improvements may typically be driven by the following activities:
• Identifying and approving changes to the standard process definition on the basis of quantitative understanding of the process.
• Providing adequate resources to effectively implement the approved changes in affected tailored processes.
• Implementing the approved changes to the affected tailored processes.
• Validating the effectiveness of process change on the basis of measurement of actual performance against the process and business goals.
Process Attribute
ATT 5B: Process Change - the extent to which changes to the definition, management, and performance of the process are controlled to better achieve the business goals of the IT organization.
In order to achieve this capability, a process may use quantitative methods to identify and implement changes to the standard process definition. The related Generic Practices are:
GP5.2 Deploy "best practices" across the IT organization. Improved practices must be deployed across the operations environment to allow their benefit to be felt across the IT organization. The deployment activities include:
• Identifying improvement opportunities in a systematic and proactive manner to continuously improve the process.
• Establishing an implementation strategy based on the identified opportunities to improve process performance according to business goals.
• Implementing changes to selected areas of the tailored process according to the implementation strategy.
• Validating the effectiveness of process change on the basis of measurements of actual performance against process and business goals, and then feeding the results back into the standard process definition.
Rating Framework
The rating framework requires identification of objective attributes or characteristics of a practice or work product of an implemented process to validate that Base Practices are performed, and Generic Practices are followed. Assessment Indicators determine Process Attribute ratings which then are used to determine Capability Level.
In the present description, Assessment Indicators refer to objective attributes or characteristics of a practice or work product that support an assessor's judgment of performance of an implemented process.
Process Capability Rating
The cornerstone of a rating framework is the identification and description of Assessment Indicators to help rate the Process Attributes. Assessment Indicators are objective attributes or characteristics of a practice or work product that support an assessor's judgment of performance of an implemented process. Assessment Indicators are evidence that Base Practices are performed, and Generic Practices are followed. The indicators are not intended to be regarded as a mandatory checklist to be followed, but rather are a guide to enhance an assessment team's objectivity in making their judgments of a process's performance and capability. The rating framework adds definition and reliability to the present invention, and thereby improves repeatability.
Assessment Indicators are determinants of Process Attribute ratings for each Process Capability attribute. Each assessed process profile consists of a set of Process Attribute ratings. Each attribute rating represents a judgment by the assessment team of the extent to which the attribute is achieved.
Figure 8 illustrates the Process Attribute rating represented on a four-point scale of achievement.
The indicators determine attribute ratings, which are then used to determine the Capability Level. The rating scale defined below is used to describe the degree of achievement of the defined capability characterized by Process Attributes. Once the appropriate rating for each Process Attribute is determined, ratings can be combined to assign the Capability Level achieved by the assessed process. Figure 9 represents the mapping of attribute ratings to the determination of process Capability Levels.
As an example, to assess the capability of a particular instance of a Service Desk process, the first step is to identify if the appropriate Base Practices are performed at all. The necessary foundation for improving the capability of any process is to at least demonstrate that the Base Practices are being performed. The assessment team may then formulate an objective judgment of the process performance attribute through different means such as analysis of the work products (e.g., reviewing completed trouble tickets), demonstration of evidence of process implementations (e.g., are escalation procedures documented and understood?), interviews with process performers (e.g., discussing daily activities with Service Desk personnel), and other means as appropriate (e.g., does the Service Desk have a dedicated phone number that users should call to report incidents/problems/requests, or a dedicated email address, etc.).
Achievement of Base Practices is an indication that Process Area goals are being met. The increasing capability of a process to effectively achieve its goals and objectives is based upon attribute ratings. The attribute rating is determined by the performance of the associated Generic Practices. Evidence of effective performance of the Generic Practices associated with a Process Attribute supports the assessment team's judgment of the degree of achievement of the attributes.
Operational Maturity Rating
Up to now, the discussion has focused on the capability rating of Process Areas. To determine the maturity level of an organization, the third dimension of the architecture of the present invention, the capability ratings are used.
Process Category capabilities are determined from the capability ratings of their Process Areas. Once all Process Areas of a category are rated, the lowest rating assigned to a Process Area becomes the category rating as well. Similarly, the operational maturity rating is determined from the Process Category ratings within the IT organization. Once all Process Categories are rated, the lowest rating assigned to a Process Category becomes the IT organizational maturity.
For example, if the Process Categories of an IT organization are rated as follows, then this particular IT organization would receive a maturity level rating of "1" (see the sketch following the table).
Process Category Capability Rating
Service Management 2
Systems Management 1
IT Operations Planning 3
Managing Change 2
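The "lowest rating" rule above lends itself to a direct computational expression. The following sketch is illustrative only; the function names are hypothetical, and the sample data simply reproduces the worked example above, yielding a maturity level of 1.

    def category_rating(process_area_ratings):
        # A Process Category's rating is the lowest rating of its Process Areas.
        return min(process_area_ratings)

    def operational_maturity(category_ratings):
        # The IT organizational maturity is the lowest Process Category rating.
        return min(category_ratings.values())

    ratings = {
        "Service Management": 2,
        "Systems Management": 1,
        "IT Operations Planning": 3,
        "Managing Change": 2,
    }
    print(operational_maturity(ratings))  # prints 1, matching the example above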
In the present invention, the concept of capability is applied to processes, and the concept of maturity is applied to IT organizations.
Assessment Process
In performing an assessment, an assessment team collects the evidence on the implementation of the processes being assessed and determines their compatibility as defined in the framework of the present invention. The objective of the assessment is to identify the differences and the gaps between the actual implementations of the processes in the assessed operational IT organization with respect to the present invention. Using the framework of the present invention ensures that results of assessments can be reported in a common context and provides the basis on which comparisons can be based.
The assessment process is used to appraise an organization's IT operations environment process capability. Defining a reference model ensures that results of assessments can be reported in a common context and provides the basis on which comparisons can be based.
An IT organization can perform an assessment for a variety of reasons. An assessment can be performed in order to assess the processes in the IT operations environment with the purpose of improving its own work and service processes. An IT organization can also perform an assessment to determine and better manage the risks associated with outsourcing. In addition, an assessment can be performed to better understand a single functional area such as systems management, a single process area such as performance management, or the entire IT operations environment. Three phases are defined in the assessment model: Planning and Preparing, Performing, and Distributing Results. All phases of the assessment are performed using a team-based approach. Team members include the client sponsor, the assessment team lead, assessment team members, and client participants.
Plan and Prepare for the Assessment
Determine Assessment Scope
In the present description, assessment scope refers to the organizational entities and components selected for inspection. A clear understanding of the purpose of the framework, constraints, roles, responsibilities, and outputs is needed prior to the start of the assessment. Therefore, in preparation for the assessment, the assessment team lead and the client sponsor work together to reach agreement on the scope and goals of the assessment. Once agreement is reached, the assessment team lead ensures that the IT operational processes selected for the assessment are sufficient to meet the assessment purpose and provide output that is representative of the assessment scope.
An assessment plan is developed based on the goals identified by the client sponsor. The plan consists of detailed schedules for the assessment and potential risks identified with performing the assessment. Assessment team members, assessment participants, and areas to be assessed are selected. Work products are identified for initial review, and the logistics for the on-site visit are identified and planned.
Train the Assessment Team
The assessment team members must receive adequate training on the framework of the present invention and the assessment process. It is essential that the assessment team be well-trained on the present invention to ensure that they have the ability to interpret the data obtained during the assessment. The team must have a comprehensive understanding of the assessment process, its underlying principles, the tasks necessary to execute it, and their role in performing the tasks.
Gather Assessment Input
Maturity questionnaires are distributed to participants prior to the client site visit. Maturity questionnaires exist for each process area of the present invention, and tie back to base practices, process attributes, and generic practices. Completed questionnaires provide the assessment team with an overview of the IT operational process capability of the IT organization. The responses assist the team in focusing their investigations, and provide direction for later activities such as interviews and document reviews. Assessment team members prepare exploratory questions based on Interview Aids and responses to the maturity questionnaires.
In the present description, Interview Aids refers to a set of exploratory questions about the operations environment which are used during the interview process to obtain more detailed information on the capability of the IT organization. The interview aids are used by the assessment team to guide them through interview sessions with assessment participants.
Assessment participants prepare documentation for the assessment team members to review. Documentation about the IT operational processes allows the assessment team to tie IT organization data to the present invention.
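As a purely illustrative aside, the way each questionnaire item ties back to the model can be pictured with a simple record structure. The field names and sample entries below are hypothetical (the two sample questions are taken from the Service Desk material later in this description); the structure is not part of the claimed invention.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class QuestionnaireItem:
        process_area: str                # e.g., "1.3 Service Desk"
        base_practice: Optional[str]     # e.g., "1.3.1 Call Attention"; None for generic questions
        generic_practice: Optional[str]  # e.g., "GP2.1"; None for base-practice questions
        question: str

    items = [
        QuestionnaireItem("1.3 Service Desk", "1.3.1 Call Attention", None,
                          "Are all users informed how and when to contact the Service Desk?"),
        QuestionnaireItem("1.3 Service Desk", None, "GP2.1",
                          "Are the policies for Service Desk operation outlined in a document?"),
    ]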
Conduct Assessment
A kick-off meeting is scheduled at the start of the on-site activities. The purpose of the meeting is to provide the participants with an overview of the present invention and the assessment process, to set expectations, and to answer any questions about the process. A client sponsor of the assessment may participate in the presentation to show visible support and stress the importance of the assessment process to everyone involved.
Gather Data
Data for the assessment are obtained from several sources: responses to the maturity questionnaires, interview sessions, work products, and document reviews. Documents are reviewed in order to verify compliance. Interviewing provides an opportunity to gain a deeper understanding of the activities performed, how the work is performed, and the processes currently in use. Interviewing provides the assessment team members with identifiable assessment indicators for each Process Area appraised. Interviewing also provides the opportunity to address all areas of the present invention within the scope of the assessment.
Interviews are scheduled with IT operations managers, supervisors, and operations personnel. IT operations managers and supervisors are interviewed as a group in order to understand their view of how the work is performed in the IT organization, any problem areas of which they are aware, and improvements that they feel need to be made. IT operations personnel are interviewed to collect data within the scope of the assessment and to identify areas that they can and should improve in the IT organization.
Examples of maturity questionnaires associated with the foregoing service desk example are as follows:
Questions
Base Practice: 1.3.1 Call Attention
What methods are available to users for communication with the Service Desk, and do users have access to resources needed for such communication?
Are all users informed how and when to contact the Service Desk? If so, how?
Do all users receive the same level of support? If no, how does support differ?
Do you gather call statistics like total volume of calls and number of abandoned calls? If so, can we access this information?
Is there a need for after-hours support? If so, what type of after-hours support does the Service Desk provide?
Base Practice: 1.3.2 Incident/Request Logging
1. What is the procedure for logging incidents/requests, and is this followed in all cases?
Is a priority level assigned to the incident/request at time of receipt and how is it determined?
Base Practice: 1.3.3 Incident/Request Qualification
Do Service Desk personnel have access to a catalogue/database of frequently occurring incidents and their solutions, and does its format allow for rapid access and search?
How often is this catalogue/database accessed to provide an immediate solution or work-around to the user? (e.g., all calls, some calls, very few calls)
How frequently is this catalogue/database updated?
What other resources exist to aid Service Desk personnel with immediate incident resolution?
Base Practice: 1.3.4 Incident/Request Assignment
Is there a defined time frame within which the incident/request should be assigned and is it usually followed?
Are users notified of receipt, status and approximate time to resolution (if possible) of incident/request and provided with the incident/request ID?
By what process is the appropriate personnel determined for handling an incident/request?
Is a defined system used for assigning responsibility for an incident/request to the appropriate personnel? (e.g. trouble tickets are generated and sent to appropriate personnel)
Is a record made ofthe person to whom the incident/request is assigned?
Base Practice: 1.3.5 Incident & Problem Resolution
Are non-resolved incidents/problems escalated according to procedures defined in SLAs?
2. How are appropriate resources notified that the incident/problem has been escalated?
While problem resolution is in process, is a work-around solution determined and conveyed to the user?
When a problem is escalated or a resolution has been determined, is the log updated?
Does the Service Desk or the party to whom the problem was escalated "own" the problem?
Base Practice: 1.3.6 SLA & OLA Tracking and Monitoring
What is the system for tracking and monitoring the problem resolution process for an incident/request?
What types of issues (e.g. excessive reassignments, deviations from estimated task times) are flagged and what action is taken to address them?
Base Practice: 1.3.7 Resolution Confirmation
Are users notified of incident/request resolution?
Is confirmation sought from the user to verify that incident/request has been resolved satisfactorily?
If such confirmation is not obtained what is done?
Base Practice: 1.3.8 Incident / Request Closure
How is an incident/request closed? What records are made?
If it exists, is a solution database updated with the incident/problem and solution for future reference?
What parties are informed of a closure?
Base Practice: 1.3.9 Trends and Repetitive Incident Analysis
Are incidents analyzed to detect trends and identify underlying problems? If so, by what process?
Are users notified of known incidents proactively before they report the incident?
Base Practice: 1.3.10 Service Level Control
Does the Service Desk generate reports comparing actual service levels (e.g., number of incidents resolved at initial call, resolution time by severity) with target service levels?
Who receives these reports and for what purposes?
How are service levels targets set and what is the process for reviewing/updating them?
Do the users communicate their views of support to the Service Desk and agree with the Service Desk's assessment of incident and problem management?
Base Practice: 1.3.11 Receive Requests
Are requests handled immediately or do they require provisioning/approval?
Does the Service Desk coordinate the approval of requests with the appropriate functions and notify the requester of approval/rejection?
If request requires functions outside the Service Desk, how does the Service Desk pass responsibility to the appropriate personnel?
Do SLAs exist between the Service Desk and the end user community?
Do agreements exist between the Service Desk and the next level of support (internal or external)?
Generic Questions for Process Area
Are the policies for Service Desk operation outlined in a document? How are employees made aware of these policies?
What mechanisms are in place to ensure policies are followed?
How frequently are Service Desk policies reviewed and/or modified? What is the process for such policy updates?
Are the current staff and resources of the Service Desk adequate for satisfactorily meeting user needs? What type of qualification and/or training do Service Desk personnel have?
Are Service Desk operations periodically reviewed in order to identify and implement potential improvements? Who manages this process?
Solidify Information
The purpose of solidifying this information is to summarize and consolidate information into a manageable set of findings. The data is then categorized into Process Areas of the present invention. The assessment team must reach consensus on the validity of the data and whether sufficient information in the areas evaluated has been collected. It is the team's responsibility to obtain sufficient information on the components of the present invention within the scope of the assessment for the required areas of the IT organization before any rating can be done. Follow-up interviews may occur for clarification.
Initial findings are generated from the information collected thus far, and presented to the assessment participants. The purpose of presenting initial findings is to obtain feedback from the individuals who provided information during the various interviews. Ratings are not considered until after the initial findings presentations, as the assessment team is still collecting data. Initial findings are presented in multiple sessions in order to protect the confidentiality of the assessment participants. Feedback is recorded for the team to consider at the conclusion of all of the initial findings presentations. Examples of assessments associated with the foregoing service desk example are as follows:
For each of Level 1 through Level 5, a table of Process Attributes, Generic Practices, and Examples of Assessment Indicators for the Service Desk example is provided in the accompanying figures.
Rating
After the assessment team consolidates all of the data, the rating process may begin. The experience and training that the assessment team has provide them with the knowledge needed to interpret the data obtained during the assessment. The first step in the rating process is to determine if Process Area goals are being met. Process Area goals are considered met when all base practices are performed. Each process attribute for each Process Area within the assessment scope is then rated. Process attributes are rated based on the existence of and compliance to generic practices. Using the Assessment Indicator Rating template, the assessment team identifies assessment indicators for each process area to determine whether or not process attributes are achieved. Ratings are always established based on consensus of the entire assessment team. Questionnaire responses, interview notes, and documentation are used to support ratings; confirmation from two sources in different contexts (e.g., two people in different meetings) ensures compliance of an activity.
For each process attribute, the team reviews all weaknesses that relate to the associated generic practices. If the team determines that a weakness is strong enough to impact the process attribute, the process attribute is rated "not achieved." If it is decided that there are no significant weaknesses that have an impact on a process attribute, it is rated "fully achieved." For a Process Area to be rated "fully achieved," all process attributes for the Process Area must be rated "fully achieved." A Process Area may be rated fully achieved, largely achieved, partially achieved, or not achieved.
Assignment of a maturity level rating is optional at the discretion of the sponsor. For a particular maturity level rating to be achieved, all Process Areas within and below the maturity level must be satisfied. For example, for an IT organization to be rated at maturity level 4, all Process Areas at level 4, level 3, and level 2 must have been investigated during the assessment, and all of these Process Areas must have been rated achieved by the assessment team. The final findings presentation is developed by the team to present to the sponsor and the IT organization the strengths and weaknesses observed for each Process Area within the assessment scope, the ratings of each Process Area, and the maturity level rating if desired by the sponsor.
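For illustration only, the rating logic just described can be summarized in the following sketch. It is deliberately simplified: the intermediate ratings (largely or partially achieved) mentioned earlier are omitted, and all function and variable names are hypothetical rather than part of the present invention.

    def rate_attribute(has_significant_weakness):
        # A process attribute with a significant weakness is rated "not achieved";
        # otherwise it is rated "fully achieved".
        return "not achieved" if has_significant_weakness else "fully achieved"

    def rate_process_area(attribute_ratings):
        # A Process Area is "fully achieved" only if every one of its process
        # attributes is rated "fully achieved".
        return ("fully achieved"
                if all(r == "fully achieved" for r in attribute_ratings)
                else "not achieved")

    def maturity_level_achieved(target_level, area_levels, area_ratings):
        # A maturity level is achieved only if every Process Area at or below the
        # target level was investigated and rated achieved by the assessment team.
        return all(area_ratings.get(area) == "fully achieved"
                   for area, level in area_levels.items()
                   if level <= target_level)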
Wrap up and Distribution of Results
The final assessment results are presented to the client sponsor. During the final presentation, the assessment team must ensure that the IT organization understands the issues that were discovered during the assessment and the key issues that it faces. Operational strengths are presented to validate what the IT organization is doing well. Strengths and weaknesses are presented for each process area within the assessment scope, as well as any issues that affect the process and are unrelated to the present invention. A Process Area profile is presented showing the individual Process Area ratings in detail.
An executive overview session is held in order to allow the senior IT Operations manager to clarify any issues with the assessment team, to confirm his or her understanding ofthe operational process issues, and to gain full understanding ofthe recommendations report.
When the assessment has been completed and findings have been presented, the assessment team collects feedback from the assessment participants and the assessment team on the process, and packages information that needs to be saved for historical purposes.
Figure 10 describes the roles and responsibilities of those involved with the assessment process.
As shown, various roles that may be involved with the execution of the present invention include a client sponsor, assessment participants, an assessment team leader, and assessment team members. It should be noted that any of such roles and responsibilities may be automated per the desires of the user.
Figure 11 represents the indicator types and their relationship to the determination of the Process Area rating. As shown, evidence of process performance and process capability is provided by assessment indicators. Such assessment indicators, in turn, consist of base practices and generic practices. At the next level, the base practices and generic practices are assessed by process implementations, work products, practice performance, and resources and infrastructure.
A plurality of examples of additional process areas and associated generic/base practices will now be set forth. In addition, maturity questionnaires are also provided for each example. Given this information, the foregoing principles of the present invention may be employed for determining capability levels of various process areas for process assessment purposes in an operational maturity investigation.
SLA Management (1.1)
PA Goals
To define services to be delivered (by application and/or business unit).
To define a quantifiable service level that represents a minimum level of service for each service delivered.
To gather and compare actual service statistics, and to identify and resolve service deviations.
To regularly review services being delivered and determine if they are appropriately fulfilling SLA requirements.
To ensure IT can deliver services required by the business.
To regularly report on SLA compliance.
PA's Metrics
Percentage of SLAs signed off on time
Number of iterations of the SLA before sign off
Percentage of SLAs not signed off at the same time as the corresponding OLAs.
Percentage of SLA Reports delivered on time
Base Practices
References
Process Area: SLA Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.1 SLA Management
Questions
Base Practice: 1.1.1 Assess Business Strategy
What actions are taken to incorporate the business strategy into the process of defining service goals and strategy?
What relevant components are drawn from the business strategy (e.g. service measures, volume projections, workloads etc.)?
What parties are involved in this process?
Is there any tie with capacity management and planning? If so, please describe the tie.
How often is the strategy assessed?
Base Practice: 1.1.2 Audit Current Service Levels
As part of the SLA preparation process, what is the procedure for auditing existing service levels? What information is gathered? Is this process carried out in accordance with predefined guidelines? Which service areas are audited?
Who carries out the audit and who receives the audit results?
What type of report or document is the output of the audit process?
Base Practice: 1.1.3 Determine Service Requirements
What is the process by which service requirements are defined? Who is involved in this process? Do the service requirements specify all service items and their associated service levels?
Are Key Performance Indicators (KPIs) and metrics for evaluating service levels determined? How often are service requirements revisited?
Base Practice: 1.1.4 Determine Ability to Deliver Services
Prior to preparing the SLAs, how was IT's ability to deliver services gauged?
Was capability evaluated in all service areas? What types of information were considered? Did this process involve the Capacity Planning & Modeling function?
In what form were the capability evaluation results reported, to whom and for what purposes?
Base Practice: 1.1.5 Prepare Draft SLA
What is the procedure for drafting SLAs? What parties are involved?
What does the SLA contain (e.g. specific applications, workload, cost of service, measure of service, type of support etc.)?
Does the SLA outline each key business application (e.g. penalties for SLA violation, tools to maintain SLAs, manager/owner of SLA etc.)?
Are separate user groups determined based on different service requirements and unique SLAs created for each group? If so, do standard guidelines exist?
Does the process of preparing SLAs include identifying potential suppliers to support the service requirements?
Are provisions for normal/contingency/disaster conditions specified in the SLA?
Are monitoring and reporting procedures defined?
Are escalation procedures defined for instances when SLAs are not met?
Has what constitutes a failed SLA and the penalties for failure been determined?
Are provisions for rewards made for cases when service exceeds requirements?
Base Practice: 1.1.6 Identify Charge Back, Budget or Cost Structure Components
Was a chargeback structure determined as part of the SLA preparation process? If so, for what components is the chargeback determined?
How is the chargeback structure utilized in relation to service level management?
Do you have or do any budgeting or costing that is used in SLA management?
Base Practice: 1.1.7 Agree to SLAs with Users
To what parties are SLAs submitted for approval?
How is approval of the SLA documented?
Where is information about the finalized SLA stored? Are SLA summaries available to users?
Is there a system for users to communicate desired changes to services provided?
Base Practice: 1.1.8 Report on SLA Performance
Are actual statistics required to measure service delivery gathered and in what format are they stored?
Is information on service delivery collected according to prescribed schedules?
Are actual service statistics compared to targets defined in the SLAs?
Are users' input on SLA performance obtained (e.g. surveys)?
What types of reports are produced based on the statistics gathered?
Who reviews these reports and what is the process for ascertaining SLA compliance? What procedures are in place to monitor and address SLA breaches?
Does the need for short-term deviations to SLAs due to business requirements arise, and how is it managed?
Generic Questions for Process Area
How often are SLAs re-examined and updated? Approximately how many hours are allocated to review and discuss SLAs?
Are there personnel who control and manage new and existing SLAs? What relevant qualifications and/or training do they have?
Do you think the resources allocated to managing SLAs are adequate? Please explain.
Is the SLA management process periodically evaluated with the intent of identifying possible improvements? How frequently does this occur and what is the process?
Process Capability Assessment Instrument
Process Area 1.1 SLA Management
Process Area Description: SLA Management involves the creation, management, reporting, and discussion of Service Level Agreements (SLAs) with users and the providers within Information Technology (IT). A SLA is a formal agreement between a user who requires information services and the IT organization responsible for providing those services. SLA Management involves the following areas:
SLA Definition: The SLA document defines, in specific and quantifiable terms, the level of service that is to be delivered to users. In the enterprise environment, many design and
Questionnaire
Process Area 1.1 SLA Management (Business Relationship Management)
Work Product list
Process Area 1.1 SLA Management (Business Relationship Management)
SLA process flow
Sample SLA document
IT capability report
SLA performance reports
User survey results
Charge back structure document
Responsibility matrix
SLA Communication flow
Job description of SLA manager and staff
OLA Management (1.2)
OLA Management involves the creation and management of Operations Level Agreements with providers within the organization, as well as external suppliers and vendors. An OLA is an agreement between the IT organization and those delivering the constituent services of the system. OLAs enable the IT organization to provide the level of service stipulated in a Service Level Agreement, as supporting services are guaranteed in the OLA. OLA Management involves the following:
OLA Definition: An OLA outlines the type of service that will be delivered to the users from each service provider. OLA Definition works with service providers to define:
Whether a particular service level can be met, and how it will be met through operational levels
Which provider(s) can supply a service, or part of a service
Roles and responsibilities
What constitutes a failure to meet the OLA, and corresponding penalties (if appropriate)
Procedures for monitoring operational levels
Cost structures
How the service will be measured
Contractual arrangements with the providers
Formal OLAs are defined for suppliers who are external to the IT organization. They may take the form of maintenance contracts, warranties, or service contracts. Further formal or informal OLAs may also be created for internal suppliers, depending on the size of the organization.
OLA Reporting: The actual production of trend reports is necessary to monitor and meter the effectiveness of an OLA.
OLA Control: It is important that the services described in OLAs are carefully aligned with current business needs, monitored to ensure that they are performed as described, and updated in line with changes to business needs.
OLA Review: The reports generated from tracking OLAs are reviewed to ensure that the OLAs are carefully aligned with current business needs and if necessary updated to be in line with business needs. In enterprise environments, this process becomes more complex as more components are required to perform these services.
PA's Base Practices
1.2.1 Determine operational items
1.2.2 Group related operational items
1.2.3 Identify suppliers of operational items
1.2.4 Finalize service suppliers
1.2.5 Prepare OLAs
1.2.6 Agree to OLAs with suppliers
1.2.7 Report on OLA performance
PA Goals
To define a quantifiable service level that represents a minimum level of service for each service delivered.
To gather and compare provider service statistics, and to identify and resolve service deviations.
To regularly review services being delivered, as specified in the OLA, to determine if they are appropriately fulfilling the requirements.
To regularly report on OLA compliance.
PA's Metrics
Percentage of OLAs signed off on time
Number of iterations of the OLA before sign off
Percentage of OLA Reports delivered on time
Base Practices
References
Process Area: OLA Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.2 OLA Management
Questions
Base Practice: 1.2.1 Determine Operational Items
What is the process by which the key operational items required to support the SLAs are determined?
What personnel are assigned responsibility for identifying these key operational items?
Base Practice: 1.2.2 Group Related Operational Items
1. What criteria are used to group operational items together?
Please describe or list the various groupings of operational items.
Does each defined group of operational items typically fall under one OLA?
Base Practice: 1.2.3 Identify Suppliers of Operational Items
What procedure is used to identify potential service providers?
Do service providers include both internal and external organizations?
What information about the service providers is collected?
Are any preliminary negotiations conducted with the suppliers to determine what type of contractual terms they would consider?
Base Practice: 1.2.4 Finalize Service Suppliers
What selection criteria (e.g. cost, training requirements, tools required) are considered when choosing the service providers?
Does a formal system for evaluating potential suppliers exist to aid in the selection process?
Is a list of alternative or back-up suppliers determined?
Base Practice: 1.2.5 Prepare OLAs
How are OLAs prepared and negotiated with suppliers? Is a standardized procedure followed for each OLA?
What do OLAs contain (e.g. workloads, cost of service, targets, type of support etc.)? Does the OLA outline each key business application (e.g. penalties, tools used to maintain the OLA)?
Has a document specifying standard contents of an OLA been created? Are OLAs prepared according to the specifications in this document?
Are Key Performance Indicators (KPIs) or service measurement metrics specified in the OLA? Are targets for the service measurement metrics specified? If so, how are these targets determined, for example is the supplier capability gauged and considered?
Are OLA monitoring and reporting procedures defined, including the specific reports that will be produced?
Are OLA violation escalation procedures determined?
Is a specification of what constitutes a failed OLA made, and are the penalties (if appropriate) for failure determined?
Are there any provisions for rewards if OLA requirements are exceeded?
Base Practice: 1.2.6 Agree to OLAs with Suppliers
To what parties are OLAs submitted for approval?
Process Capability Assessment Instrument
Questionnaire
Process Area 1.2 OLA Management (Service Partner Management)
Sample OLA document
Service level performance reports
OLA compliance reports
Vendor/supplier selection information
Responsibility matrix
OLA Communication flow
Job Description of OLA manager and staff
Service Desk (1.3)
PA Number 1.3
PA Name Service Desk
PA Purpose The Service Desk provides a single point of contact for users with problems or specific service request. The Service Desk forms part of an organization's strategy to enable users and business communities to achieve business objectives through the use of technology.
The Service Desk main objectives are:
To help users when required.
To manage problem resolution.
To log and document problems types, their frequency, and associated workarounds.
To produce management reports on levels of service and user satisfaction.
The Service Desk consists of the following functions:
Incident Management - An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible prior to passing them on to other areas.
Problem Management - A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
Request Management - Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are:
Request Logging
Impact Analysis
Authorization
Prioritization
Base Practices
Process Area: Service Desk
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.3 Service Desk
Questions
Base Practice: 1.3.1 Call Attention
What methods are available to users for communication with the Service Desk, and do users have access to resources needed for such communication?
Are all users informed how and when to contact the Service Desk? If so, how?
Do all users receive the same level of support? If no, how does support differ?
Do you gather call statistics like total volume of calls and number of abandoned calls? If so, can we access this information?
Is there a need for after-hours support? If so, what type of after-hours support does the Service Desk provide?
Base Practice: 1.3.2 Incident/Request Logging
1. What is the procedure for logging incidents/requests, and is this followed in all cases?
Is a priority level assigned to the incident/request at time of receipt and how is it determined?
Base Practice: 1.3.3 Incident/Request Qualification
Do Service Desk personnel have access to a catalogue/database of frequently occurring incidents and their solutions, and does its format allow for rapid access and search?
How often is this catalogue/database accessed to provide an immediate solution or work-around to the user? (e.g., all calls, some calls, very few calls)
How frequently is this catalogue/database updated?
What other resources exist to aid Service Desk personnel with immediate incident resolution?
Base Practice: 1.3.4 Incident/Request Assignment
Is there a defined time frame within which the incident/request should be assigned and is it usually followed?
Are users notified of receipt, status and approximate time to resolution (if possible) of incident/request and provided with the incident/request ID?
By what process is the appropriate personnel determined for handling an incident/request?
Is a defined system used for assigning responsibility for an incident/request to the appropriate personnel? (e.g. trouble tickets are generated and sent to appropriate personnel)
Is a record made of the person to whom the incident/request is assigned?
Base Practice: 1.3.5 Incident & Problem Resolution
Are non-resolved incidents/problems escalated according to procedures defined in SLAs?
2. How are appropriate resources notified that the incident/problem has been escalated?
While problem resolution is in process, is a work-around solution determined and conveyed to the user?
When a problem is escalated or a resolution has been determined, is the log updated?
Does the Service Desk or the party to whom the problem was escalated "own" the problem?
Base Practice: 1.3.6 SLA & OLA Tracking and Monitoring
What is the system for tracking and monitoring the problem resolution process for an incident/request?
What types of issues (e.g. excessive reassignments, deviations from estimated task times) are flagged and what action is taken to address them?
Base Practice: 1.3.7 Resolution Confirmation
Are users notified of incident/request resolution?
Is confirmation sought from the user to verify that incident/request has been resolved satisfactorily?
If such confirmation is not obtained what is done?
Base Practice: 1.3.8 Incident / Request Closure
How is an incident/request closed? What records are made?
If it exists, is a solution database updated with the incident/problem and solution for future reference?
What parties are informed of a closure?
Base Practice: 1.3.9 Trends and Repetitive Incident Analysis
Are incidents analyzed to detect trends and identify underlying problems? If so, by what process?
Are users notified of known incidents proactively before they report the incident?
Base Practice: 1.3.10 Service Level Control
Does the Service Desk generate reports comparing actual service levels (e.g. number of incidents resolved at initial call, resolution time by severity) with target service levels?
Who receives these reports and for what purposes?
How are service level targets set and what is the process for reviewing/updating them?
Do the users communicate their views of support to the Service Desk and agree with the Service Desk's assessment of incident and problem management?
Base Practice: 1.3.11 Receive Requests
Are requests handled immediately or do they require provisioning/approval?
Does the Service Desk coordinate the approval of requests with the appropriate functions and notify requester of approval/rejection?
If request requires functions outside the Service Desk, how does the Service Desk pass responsibility to the appropriate personnel?
Do SLAs exist between the Service Desk and the end user community?
Do agreements exist between the Service Desk and the next level of support (internal or external)?
Generic Questions for Process Area
Are the policies for Service Desk operation outlined in a document? How are employees made aware of these policies?
What mechanisms are in place to ensure policies are followed?
How frequently are Service Desk policies reviewed and/or modified? What is the process for such policy updates?
Are the current staff and resources of the Service Desk adequate for satisfactorily meeting user needs?
What type of qualification and/or training do Service Desk personnel have?
Are Service Desk operations periodically reviewed in order to identify and implement potential improvements? Who manages this process?
Are any metrics computed to assess the Service Desk performance? If so, please describe them. Are targets for these metrics established and performance assessed against them?
Process Capability Assessment Instrument
Process Area 1.3 Service Desk
Process Area Description: The Service Desk provides a single point of contact for users with problems or specific service requests. The Service Desk forms part of an organization's strategy to enable users and business communities to achieve business objectives through the use of technology.
The Service Desk's main objectives are:
To help users when required.
To manage problem resolution.
To log and document problem types, their frequency, and associated workarounds.
To produce management reports on levels of service and user satisfaction.
The Service Desk consists of the following functions:
Incident Management — An incident is a single occurrence of an issue that affects the delivery of normal or expected services. Incident Management strives to resolve as high a proportion of incidents as possible prior to passing them on to other areas.
Problem Management - A problem is the underlying cause of one or more incidents. Problem Management utilizes the skills of experts and support groups to fix and prevent recurring incidents by determining and fixing the underlying problems causing the incidents.
Request Management - Request Management is responsible for coordinating and controlling all activities necessary to fulfill a request from a user, vendor, or developer. Requests can be raised as change requests with Change Control, or planned, executed, and tracked by the Service Desk. Further sub-functions of Request Management are: Request Logging, Impact Analysis, Authorization, and Prioritization.
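Purely as an illustration and not as part of the claimed method, the following minimal sketch shows one way an incident/request of the kind described above might be logged and assigned a priority at the time of receipt; the Incident fields, the severity values and the severity-to-priority mapping are assumptions introduced for the example.

from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical mapping from reported severity to an assigned priority level.
PRIORITY_BY_SEVERITY = {"critical": 1, "high": 2, "medium": 3, "low": 4}

@dataclass
class Incident:
    """A single logged incident/request handled by the Service Desk."""
    ticket_id: int
    description: str
    severity: str
    status: str = "open"
    priority: int = field(init=False)
    received_at: datetime = field(default_factory=datetime.now)

    def __post_init__(self):
        # Assign a priority at the time of receipt, defaulting to the lowest level.
        self.priority = PRIORITY_BY_SEVERITY.get(self.severity, 4)

def log_incident(log: list, description: str, severity: str) -> Incident:
    """Append a new incident to the log and return it with its assigned ID."""
    incident = Incident(ticket_id=len(log) + 1, description=description, severity=severity)
    log.append(incident)
    return incident

if __name__ == "__main__":
    log = []
    ticket = log_incident(log, "User cannot access shared drive", "high")
    print(ticket.ticket_id, ticket.priority, ticket.status)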
Questionnaire
Process Area 1.3 Service Desk
Work Product list
Process Area 1.3 Service Desk
Trouble ticket
Employee training handbook
User surveys
Performance reports (resolution, response, trending, etc.)
SLA
Sample log record for an incident/request
Staffing plan document
PA's Metrics for Service Pricing & Cost and Billing & Accounting (an illustrative computation follows the list):
Percentage of chargebacks outstanding per month
Percentage of chargebacks paid on time each month
Total cost of software per month
Total cost of hardware per month
Total cost of services/support per month
Total amount of money spent per month by department
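A hedged sketch of how such metrics might be computed from monthly chargeback records appears below; the record fields and sample figures are illustrative assumptions only.

# Each chargeback record is assumed to carry an amount, a department,
# a cost category, and flags for whether it is outstanding or was paid on time.
chargebacks = [
    {"dept": "Sales", "category": "software", "amount": 1200.0, "outstanding": False, "paid_on_time": True},
    {"dept": "Sales", "category": "hardware", "amount": 800.0, "outstanding": True, "paid_on_time": False},
    {"dept": "HR", "category": "services", "amount": 400.0, "outstanding": False, "paid_on_time": True},
]

def pct(records, flag):
    """Percentage of chargeback records for which the given flag is set."""
    return 100.0 * sum(r[flag] for r in records) / len(records) if records else 0.0

def total_by(records, key):
    """Total monthly cost grouped by the given record field."""
    totals = {}
    for r in records:
        totals[r[key]] = totals.get(r[key], 0.0) + r["amount"]
    return totals

print("Chargebacks outstanding (%):", pct(chargebacks, "outstanding"))
print("Chargebacks paid on time (%):", pct(chargebacks, "paid_on_time"))
print("Cost per category:", total_by(chargebacks, "category"))
print("Spend per department:", total_by(chargebacks, "dept"))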
Base Practices
References
Process Area: Service Pricing
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Base Practice | Example of Assessment Indicator | Assessment Indicators at Client
1.4.1 Determine projected service/equipment costs and depreciation schedule for equipment | An annual projection of all equipment/service costs is made and depreciation schedules exist for appropriate distributed technical environment
1.4.2 Determine if chargeback is appropriate | All items with associated chargebacks are specified.
1.4.3 Determine usage trends | Data collected and available on what services are used, frequency of use and time of use.
1.4.4 Prepare budgets and ensure that data is valid and correct | Operating budgets are prepared for each department by a method that ensures accurate forecasting.
1.4.5 Identify product/service options associated with service level objectives | Example of SLA that outlines the product/services that will be provided.
1.4.6 Define products/services in terms useful to customers | Example of notification sent to customers of products/services offered.
1.4.7 Determine service price costs and model/evaluate costs | Cost model or cost recovery approach is available.
1.4.8 Determine cost allocation plans for services and equipment | Cost allocation plans exist that specify how shared and user/department specific costs are to be allocated.
1.4.9 Prepare, distribute, and maintain a catalogue of service prices for users | A list of available services and service costs.
1.4.10 Inform users about costs | Example of the information sent to users on the breakdown (shared vs. individual) of costs allocated to them.
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 1.4 Service Pricing
What is the process for projecting costs of service and equipment capacity enhancements? How frequently does this occur?
Can costs be projected on a customer-group basis?
Can service costs be broken down by implementation, operation and overhead for each service?
How are depreciation schedules determined?
Are projected costs and depreciation figures used to decide between leasing and purchasing?
Currently, what is the approximate percentage of leased and purchased equipment?
Base Practice: 1.4.2 Determine if chargeback is appropriate
What criteria are used to determine which items will be charged back?
Are departments or other appropriate parties informed of the items with associated charges?
Are there any known "hidden costs" (e.g. users spending business time helping other users)?
What types of costs are not charged to department/project/individuals?
Base Practice: 1.4.3 Determine usage trends
What information is collected on service/equipment usage? Where is this information stored?
What type of trending analysis is performed using this data (e.g. frequency of calls to Service Desk per department)
For what purposes are trend data used?
Base Practice: 1.4.4 Prepare budgets and ensure that data is valid and correct
What is the process for creating budgets? Does each department follow a standard procedure?
What information is analyzed while preparing budgets? Are projected service/product costs, expected growth and past budgetary needs considered?
Are periodic audits of the budget performed to ensure the use of accurate and valid data?
Do budgets include contingencies for unanticipated growth or product/service needs?
Base Practice: 1.4.5 Identify product/service options associated with service level objectives
Are SLAs or service level objectives reviewed to verify that all needed products/services are being offered?
At present, are all products/services covered by SLAs?
If a cost cannot be tied back to an SLA, does an evaluation of the need or justification for that service/product occur?
Who is responsible for the process of checking product/service options against SLAs?
Base Practice: 1.4.6 Define products/services in terms useful to customers
How are appropriate parties informed of services/products offered?
Is information sent of additional costs for non-standard products/services?
Base Practice: 1.4.7 Determine service price costs and model/evaluate costs
How are service costs finalized? Who is in charge of this process? What type of cost modeling is done? Why was this strategy settled on?
Has a pricing strategy been defined? If yes, please describe.
Does the pricing strategy map back to the services being provided?
Base Practice: 1.4.8 Determine cost allocation plans for services and equipment
What is the procedure for creating cost allocation plans for services and equipment?
How are costs of shared resources (e.g. service desk, technical infrastructure) allocated?
Base Practice: 1.4.9 Prepare, distribute, and maintain a catalogue of service prices for users
What information does the catalogue of service prices for users contain?
How is the catalogue distributed, and how frequently?
Who receives the catalogue and for what purposes?
How frequently is the catalogue updated?
Base Practice: 1.4.10 Inform users about costs
How are users informed of the breakdown of costs (both individual and shared) allocated to them?
Have you found that informing users about costs affects their service expectations and/or the efficiency with which resources are used/requested?
Base Practice: 1.4.11 Monitor and assess budgetary spending and actual costs vs projected costs
1. What is the procedure for monitoring budgetary spending?
What are the outputs of the budgetary spending assessment process? (i.e. what documents are produced?)
What occurs when spending deviates from budget?
What is the process for notifying management of deviations from proposed spending?
Base Practice: 1.4.12 Review current and planned budgets and cost allocation plans with management/user
Are budgets and allocation plans submitted to management and user representatives for review?
Process Capability Assessment Instrument
Process Area 1.4 Service Pricing
Process Area Description: Service Pricing comprises the following areas:
Service Pricing & Cost: Service Costing & Pricing projects and monitors costs for the management of operations, provision of service, equipment installation, etc. Based upon the projected cost and business needs, a service pricing strategy may be developed to re-allocate costs within the organization. If developed, the service pricing strategy will be documented, communicated to the users, monitored and adjusted to ensure that it is both comprehensive and fair.
Billing & Accounting: The purpose of Billing & Accounting is to gather information for calculating actual cost, determine chargeback costs and bill users for services rendered.
Questionnaire
Process Area | 1.4 Service Pricing
Yes  No  Don't Know  N/A
Depreciation schedules
Sample budget
Service price listing or catalogue
Chargeback algorithm or strategy
Chargeback reports
User Administration (1.5)
Base Practices
References
Process Area: User Administration
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 1.5 User Administration
3. How often are these additions performed and by whom? Who approves these additions and for what purpose/reason are additions performed?
Is there a process for confirming timely and accurate completion of these additions? If so, please describe it.
Base Practice: 1.5.3 Change User Information on all necessary systems
For what purpose are accounts modified?
How are updates and modifications to established user accounts performed?
Who authorizes changes to established user accounts?
How often do these modifications occur? Who performs them?
Is there a means for ensuring that changes are completed in a timely and accurate manner?
Base Practice: 1.5.4 Delete User Information on all necessary systems
What is the process for deleting user accounts from all necessary systems?
Who approves deletions of established user accounts?
How often do these deletions occur? Why are accounts deleted? Are they performed on an as needed basis or are they scheduled? Who performs the deletions?
Does the deletion process take into consideration any security issues regarding accounts and ids? (e.g. an employee is leaving the company and security policy indicates that all accounts need to be disabled and/or deleted by the end of the employee's last day.)
Is there a process for confirming timely and accurate completion of these deletions?
Base Practice: 1.5.5 Notify appropriate parties periodically of user administration status
Do appropriate parties periodically receive status reports regarding user administration maintenance?
Who creates the user administration reports? In what formats are they distributed (e.g. mail, email, meeting, etc.)?
3. How are reports distributed? How often is such information distributed and by whom?
4. Who is the intended audience? How do the different parties utilize this information?
Generic Questions for Process Areas
1. How are the various additions, alterations, and deletions to user accounts prioritized? Are all potential stakeholders involved in the decision process? (e.g. Coordination with Change Control Plan, Human Resources, Operations Personnel)
2. Are current procedures and resources periodically assessed with the intent to promote continuous improvement? What is the approval process for proposed solutions? Are these solutions evaluated for impact?
3. Are there regularly scheduled training programs that address User Administration procedures? If so, what type of training is provided?
4. Do you find that adequate resources are allocated for User Administration? Please elaborate.
Process Capability Assessment Instrument
Questionnaire
Yes  No  Don't Know  N/A
Work Product list
User Administration Maintenance Status Report
New Hire List
Termination List
Change of Name Request Form
Access Control Profile Document
Network Group Access Property Document
Base Practices
References
Process Area: Production Scheduling
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.1 Production Scheduling
1. Describe any workload balancing capabilities provided?
2. Are forecasting mechanisms available? When/how are they used?
3. What reports are produced that provide network traffic data?
4. What tools are used to quantify that the production schedule is meeting goals?
5. What other historical data is used to maintain performance?
Generic Questions for Process Area
1. What are the procedures/policies for the current version of production scheduling? (e.g. Process of submitting a job.)
2. What reports are produced for management, operations and customers that show production performance measurements and verifications? How are these used to manage the production scheduling process?
3. Explain the training provided to the production scheduling staff regarding procedures, systems and interaction with other functions and their importance (e.g. event management, backup and restore, fault recovery, etc.)?
4. Is the process/procedure for production scheduling reviewed for continuous improvement? If yes, how?
5. Has there been a shortage of resources while performing the production scheduling process?
6. When continuous improvements are executed, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
7. What objectives are established to measure the quality of operation standards and processes?
8. What reports are distributed to customers, management and staff that provide feedback/verify adherence regarding the production scheduling process/procedure?
Process Capability Assessment Instrument
Process Area 2.1 Production Scheduling
Process Area Description: Production Scheduling determines the requirements for the execution of scheduled jobs across a distributed environment. A production schedule is then put in place to meet these requirements, taking into consideration other processes occurring throughout the distributed environment (e.g., software and data distribution, and remote backup/restoration of data).
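By way of illustration only, a production schedule of the kind described might be derived by ordering jobs so that each runs after its prerequisites, as in the sketch below; the job names, their dependencies and the use of Python 3.9's standard-library topological sorter are assumptions for the example.

from graphlib import TopologicalSorter  # standard library in Python 3.9+

# Hypothetical nightly jobs and their dependencies: each job lists the jobs
# that must complete before it can be placed on the production schedule.
jobs = {
    "extract_orders": set(),
    "load_warehouse": {"extract_orders"},
    "nightly_backup": set(),
    "billing_batch": {"load_warehouse"},
    "reports": {"billing_batch", "nightly_backup"},
}

def build_schedule(dependencies: dict) -> list:
    """Return a run order in which every job follows its prerequisites."""
    return list(TopologicalSorter(dependencies).static_order())

if __name__ == "__main__":
    print(build_schedule(jobs))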
Questionnaire
Process Area | 2.1 Production Scheduling
Work Product list
Process Area | 2.1 Production Scheduling
Example of an existing production schedule and work flow diagrams
Existing operating procedure manuals
Scheduling software documentation, detailed and quick reference
Examples of custom (or packaged) screens prompting for scheduling information needed to execute jobs or job streams
Phone list of who to call for different types of problems
Existing reports that analyze business customers performance
Existing reports that review network traffic and hardware during the monitoring process
Existing reports that review network traffic trend data to validate job performance
Results of any network performance testing across the network, (e.g. RMON, SNMP, etc.)
Base Practices
References
Process Area: Print Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 2.2 Print Management
1. Can print jobs be sent from one print queue to another without customer intervention? If so, how? Who redirects the print job?
What are the reasons that a print job would be redirected to another printer (e.g. off-line, out of paper, powered off, busy)?
Base Practice: 2.2.6 Batch print jobs
1. Are customers made aware of the batch print feature? How?
2. What is the typical length of time and size of a batch print job?
3. Do you schedule your batch print jobs during certain hours of the day? If yes, when?
4. Is there a software program that manages and monitors this process or does an administrator need to schedule and oversee? If yes, what is the process?
Base Practice: 2.2.7 Print forms
1. Do the output/print management personnel review forms (and reports) prior to their use throughout the distributed system? How is this process of approval managed (e.g. meetings, requests, sample stock, test runs, etc.)?
2. Can forms be collated and packed?
3. Are there certain printers on the network where confidential forms/output is directed to? If yes, how are those printers managed (e.g. locked closet, attendant, specific time frame, etc.)?
4. How many different types of preprinted forms are used? How many standard paper stock reports are produced? Of these forms, how many are multi-part?
Generic Questions for Base Practice
1. Are customers able to access a master map or listing of printer types and locations available to them? If yes, who updates this information?
2. Are customers notified of any delays or problems with regards to their print jobs? If yes, how are they informed (e.g. broadcasts messages, e-mails, phone mail, etc.)?
3. What type of training is provided on the procedure/policy regarding output/print management? Is it followed? When is it provided (e.g. orientation, new hire review, process change meetings, etc.)?
4. Is there a standard procedure provided on how to perform output/print management? How is this documented/maintained (e.g. hardcopy, manual, service procedure updates)?
5. What measurements are used to qualify and quantify the output/print management process?
6. Are there enough resources for output/print management (e.g. printers, supplies, personnel, software, etc.)?
7. Are there any business or processing goals for the output/print management process? If yes, what are they? How are they qualified and quantified?
8. Is output/print management consistently reviewed for continuous improvement of business and process aspects? If yes, are these recommendations acted on and tracked for their results?
Process Capability Assessment Instrument
Process Area 2.2 Print Management
Process Area Description: Output and Print Management monitors all of the printing done across a distributed environment and is responsible for managing the printers and the printing for both central and remote locations.
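To make the redirection behaviour discussed in the interview questions above concrete, the following hedged sketch reroutes a queued print job when its target printer cannot print; the printer names and status values are invented for the illustration.

# Assumed printer status table: a job is redirected when its target printer
# is off-line, out of paper, powered off, or busy.
printer_status = {"floor1_laser": "out of paper", "floor2_laser": "ready", "mailroom": "ready"}

def redirect_if_needed(job: dict, status: dict) -> dict:
    """Send the job to the first available printer if its target cannot print."""
    if status.get(job["printer"]) == "ready":
        return job
    for name, state in status.items():
        if state == "ready":
            return {**job, "printer": name, "redirected_from": job["printer"]}
    raise RuntimeError("No printer is currently available for this job")

job = {"job_id": 42, "printer": "floor1_laser", "pages": 10}
print(redirect_if_needed(job, printer_status))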
Questionnaire
Process Area | 2.2 Print Management
Are there any special forms, equipment, and/or supplies needed to produce some print jobs?
Work Product list
Process Area | 2.2 Print Management
Operator's manual for output/print management personnel
Customer's manual for available output/print resources
Examples of any forms/paper stock used for non-typical print jobs
List of equipment/supplies used for non-typical print jobs (e.g. feeders, inks, etc.)
File Transfer and Control (2.3)
Base Practices
The file transfer and control system is set up to handle multiple transfers, and both remote systems and the host complete file transfers successfully.
BP Number 2.3.4
BP Name Location, format, and file verification
BP Description Determine if the file to be transferred exists
Determine and check the version ofthe file to be transferred
Determine if there is room on the recipient machine for the file
Dynamically allocate space for file
Convert file types (e.g., VSAM, PDS, etc.)
Convert file formats (e.g., ASCII to EBCDIC)
Encrypt/decrypt file being transferred
Compress/decompress file at source and at target
Rename file at source and/or target
Create, write over or delete files
Merge or append to transferred files
Example File Transfer Type Considerations include:
Host to Host
Remote System to Host
Remote System to Remote System
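A minimal sketch of the pre-transfer checks listed under base practice 2.3.4 follows, assuming local paths and a simple free-space comparison; a real transfer between hosts would add the conversion, encryption and compression steps noted above.

import os
import shutil

def verify_transfer(source_path: str, target_dir: str) -> dict:
    """Perform basic location/format checks before a file transfer:
    the source must exist and the target must have room for it."""
    checks = {"source_exists": os.path.isfile(source_path)}
    size = os.path.getsize(source_path) if checks["source_exists"] else 0
    free = shutil.disk_usage(target_dir).free if os.path.isdir(target_dir) else 0
    checks["target_dir_exists"] = os.path.isdir(target_dir)
    checks["enough_space"] = free >= size
    checks["ok_to_transfer"] = all(checks.values())
    return checks

if __name__ == "__main__":
    # Illustrative call; the paths are placeholders, not part of the specification.
    print(verify_transfer("/tmp/example.dat", "/tmp"))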
References
Process Area: File Transfer and Control
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.3 File Transfer and Control
Questions
Base Practice: 2.3.1 Transfer files on a scheduled basis
1. Has the schedule of file transfers to and from devices been determined? If yes, what is the schedule? Who is responsible for this task? Is it under version control? Does the schedule encompass all aspects of the service provider at the organizational level?
Can file transfers be initiated by the sender and/or the receiver? What is their customer level (e.g. administrator, all customers, some customers, etc.) and do they write scripts or assign priority levels via an interface?
Can concurrent file transfers be performed? If yes, please explain how?
Can automated conditional file transfers be performed? If yes, please explain how?
Base Practice: 2.3.2 Determine backup and recovery scheme
Are file transfer events logged? If yes, how, and is this information kept for historical purposes?
Are failed file transfers retried? If yes, by whom or is it automatic?
Has the backup/recovery scheme for a file transfer been invoked? If no, why? If yes, what was the end result (e.g. lost data, transfer complete, etc.)? Who is responsible for creating the scheme and is it under version control?
Is there notification of a successful/failed file transfer? If yes, how is this performed (e.g. e-mail, banner message, report, etc.) and to whom (e.g. administrator, initiator, etc.)? Is fault management made aware of failures? If yes, how?
Is there a check for successful file transfers? If yes, how are these checks performed and logged?
Base Practice: 2.3.3 Transfer files on an ad hoc basis
Are files transferred on an ad hoc basis? If yes, what are the most common reasons and by whom? Do these transfers interfere with other process areas (e.g. production scheduling, output/print management, etc.)?
Who can perform or initiate an ad hoc file transfer (e.g. administrators, all customers, customers with permission, etc.)? Is it performed by senders, receivers or both?
Can ad hoc files be transferred concurrently? If yes, please explain how this is being done?
Base Practice: 2.3.4 Location, format, and file verification
Can space for a transferred file be dynamically allocated? If no, what is the customer's recourse if there is a problem?
Can file types (e.g. VSAM, PDS, etc.) be converted? If yes, what is the most common? How are they converted? What tools do you use to convert them?
Have file formats (e.g. ASCII to EBCDIC) been converted? If yes, what is the most common? What tools do you use and how are they converted?
4. Are files being compressed/decompressed at source and at target? If yes, how?
5. Can files be renamed at source and/or target? Can files be created, written over or deleted? If yes to either, please explain the process of how this is done.
6. Can transferred files be merged or appended to? If yes, is this method used often?
What are the most common platforms encountered during file transfer? Has there been a problem with any particular platform? If yes, explain.
Are files transferred being encrypted/decrypted? If no, why? If yes, please explain how? What tools are being used?
Generic Questions for Process Area
Are file transfer times defined and/or evaluated for number of destinations, machines and platforms? If yes, explain?
Is there a policy established and maintained for file transfer and control? Is this process followed?
3. Are adequate resources available for file transfer and control? If no, explain?
Is training provided for all new employees within file transfer and control? If not, explain. Are subsequent training times available for file transfer and control personnel to learn new processes, technologies, etc.? If yes, explain. Are proactive plans made for future personnel needs? If yes, explain.
5. Are reports to customers, administration and other groups provided as a means for process update and feedback? If yes, who gets these reports? If no, explain how feedback is provided.
Is the file transfer and control process and procedure reviewed for continuous improvement purposes? Are these improvements deployed and measured against process and business goals?
Are strategic goals in place for file transfer and control? If yes, what are they and can they be measured? Are metrics collected on the file transfer and control process? Is this process automated with use of software, tools, etc.? Are the metrics analyzed for process parameters and deviation identification?
Process Capability Assessment Instrument
Process Area 2.3 File Transfer and Control
Process Area Description: File Transfer and Control initiates and monitors the files being transferred throughout the system as part of the business processing (e.g., nightly batch runs). File transfers can take place in a bi-directional fashion between hosts, servers and workstations.
Questionnaire
Process Area 2.3 File Transfer and Control
Work Product list
Process Area 2.3 File Transfer and Control
Sample of a file transfer and control schedule
Sample of a backup and recovery scheme
List of file types and formats used during file conversions
Reports, metrics, concerns and/or issues regarding file transfer and control
Base Practices
References
Process Area: Network Services
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 2.4 Network Services
Questions
Base Practice: 2.4.1 Populate Directories
What is the process for adding first time directory information to new directories? Is there a different process for populating old directories? If so, please describe.
How often does populating new directories occur and who approves this?
How are directory permission properties defined and gathered?
How often are directory permission properties surveyed and altered?
Does the process of populating existing directories take various system needs into consideration? (E.g. Does directory population follow a convenient and logical schedule?)
Base Practice: 2.4.2 Manage Directories
Who is responsible for managing the network directories? What is the overall process for managing the directories?
How is the directory content volume monitored and managed?
How are the relationships between directories managed?
How often is the interface between different directories updated?
How is the content of different directories maintained?
Do you have directories that require synchronization? What is the process for synchronizing the directories?
Base Practice: 2.4.3 Determine Organizational Impacts
Are organizational and business impacts taken into consideration when determining and designing various network services? (e.g. directory structure, permissions, etc.) If yes, how?
What processes are in place to determine organizational impacts?
Base Practice: 2.4.4 Extract Information from Directories
What type of information do you gather or extract from directories (e.g. authentication information, access control profiles, etc.)?
How do you store the information collected from directories?
Are you creating reports from this data? If yes, what types of reports are you creating?
Is anyone managing inconsistencies or flagging abnormalities? If yes, who, and how are they flagging or correcting the abnormalities? Is there communication between the Network Services and Fault Management or Monitoring teams when severe abnormalities occur?
Base Practice: 2.4.5 Identify Component Options
What physical and logical components have you identified in your environment? How did you determine what components were needed for your environment?
Is there a process for categorizing different network components? Are different people responsible for the different types of components? If yes, who are they and do they just receive training on the specific component types they are responsible for?
Base Practice: 2.4.6 Document Strategic Drivers (e.g. geography, security, etc.)
What are some of the strategic drivers identified for providing the optimum network services? Is there an order of importance for the strategic drivers you have identified? If yes, please elaborate.
Are your strategic drivers documented? Are they revisited when a business or organizational change happens? How are they kept in line with the business or organizational needs?
Base Practice: 2.4.7 Outline Guiding Principles for Communication Address Planning
Do you have any guiding principles in place that allow the address team to develop and share a common vision for all addressing functions? If yes, what are some of these guiding principles?
Are there common processes and practices across several of your networking functions? If yes, which ones?
Is there a lot of cross functionality between your network groups? If yes, please explain the cross functionality?
Base Practice: 2.4.3 Address and Domain Maintenance
How often is address maintenance performed? What processes are used for the addition, deletion, maintenance, and modification of addresses?
How often is domain maintenance performed? What processes are used for the addition, deletion, maintenance, and modification of domains?
How are the address tables maintained?
What is the process for maintaining DNS?
Base Practice: 2.4.5 Address Design Process
1. Are address design and technical network diagrams created? Are they updated? If so, how often?
2. Are conflicts or network issues taken into consideration when the address system is being designed? If yes, what conflicts or issues are considered and how are the network solutions modified? Is there a process to follow for making changes?
Base Practice: 2.4.6 IP Technology Research Process
How often is emerging technology considered and evaluated for the current network?
Are there defined processes that determine whether a new technology would enhance or improve the current network system? If so, what are they?
If a new technology is being considered what type of testing or research is done to ensure that the technology meets the business needs?
Generic Questions for Process Area
1. Are training classes provided and do all new Network Services personnel attend training on the defined Directory Maintenance and Communication Address Planning processes? If so what type of training ensures adequate execution of these established directory management and address servicing procedures?
2. Are current resources and procedures periodically assessed with the intent to promote continuous improvement? What is the approval process for proposed solutions? Are all potential stakeholders involved in the decision process? How often are these solutions implemented and by whom?
3. How are routine network services and continuous improvement solutions evaluated for impact?
4. Do you find that the resources allocated to network services are adequate? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.4 Network Services
Process Area Description: The Network Services process area comprises the following two areas:
Directory Services: the function of publishing and maintaining organized inventories of information resources to make them available to networked customers. Directory Management can apply to internal directories as well as the publishing of directory information for global directory services.
DNS: ensures that IP services are provided to devices within an enterprise. Whether dealing with a new or existing capability, the communications address management function demands that high-level business requirements be taken into consideration.
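As a hedged illustration of communications address planning, the sketch below allocates host addresses from an assumed private subnet and keeps a simple address table; the subnet, host names and pool behaviour are examples only.

import ipaddress

class AddressPool:
    """Allocates host addresses from a subnet and records them in an address table."""

    def __init__(self, cidr: str):
        self._free = list(ipaddress.ip_network(cidr).hosts())
        self.table = {}  # hostname -> assigned IP address

    def allocate(self, hostname: str):
        if hostname in self.table:
            return self.table[hostname]          # idempotent re-allocation
        if not self._free:
            raise RuntimeError("Address pool exhausted")
        self.table[hostname] = self._free.pop(0)
        return self.table[hostname]

    def release(self, hostname: str):
        # Return the address to the pool when a device is retired.
        self._free.append(self.table.pop(hostname))

pool = AddressPool("192.168.10.0/29")
print(pool.allocate("printer-01"), pool.allocate("desk-07"))
pool.release("printer-01")
print(pool.table)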
Questionnaire
Process Area 2.4 Network Services
Work Product list
Process Area 2.4 Network Services
Access Control Profiles
Network Traffic Flow Diagrams
IP Address Availability Report
DHCP Address Lease Contracts
IP Address Tables
Copy of current documented Address Plan
Backup/Restore/Archiving (2.5)
Base Practices
References
Process Area: Backup/Restore/Archiving
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 2.5 Backup/Restore/Archiving
Questions
Base Practice: 2.5.1 Test Central/Remote Backup/Restore/Archival Procedure Periodically
What type of periodic testing of the backup/restore/archival procedures is performed?
Are both central and remote backup/restore/archiving tested?
How (in what format) and to whom are the testing results reported?
Have your tests typically been successful? What constitutes a successful test?
Base Practice: 2.5.2 File Backup Steps and Considerations
Have the backup requirements been defined and documented for the following items:
Customer, operations, applications responsibilities
Remote vs. central backups
Frequency of backups
Components to be backed up
What type of application or automated process is used for backup?
Are backup and restore processes managed centrally or remotely?
What type of backup (full, incremental, export) is performed and how often?
What media (tape, magnetic disc, cartridge etc.) is used for backup? Why was this medium chosen? If the system is unavailable to customers during backups, how is system unavailability managed? If parts of the system are down during a scheduled backup, is a manual backup performed when the system gets back online?
Where is backed-up/archived data stored? For what length of time is data stored?
Does the backup and restore process require manual intervention?
What type of monitoring of the backup process is performed?
Are backup records made? If yes, what information is documented?
Base Practice: 2.5.3 File Restoration Steps and Considerations
What events warrant a restoration and how is the process initiated? Are these policies documented? Can customers submit requests for particular files to be restored? How are customer requests logged and tracked?
Can single/multiple objects be restored from the backup media?
Can a full/incremental backup be restored centrally and remotely?
What type of monitoring is done of the restoration process?
Are notification procedures in place to inform customers and service providers of success/failure of restoration?
Base Practice: 2.5.4 Compress and Index Information Being Archived
Is archiving triggered automatically or must it be manually initiated?
How is data compressed and indexed prior to being archived?
Base Practice: 2.5.5 Notify that Backup/Restoration/ Archival Process has been Completed Successfully/Failed
Who receives notification of the outcome of the backup/restore/archival process?
How is this notification sent?
What action, if any, is taken on receipt of the notification?
Base Practice: 2.5.6 Perform Housekeeping on the Backup/ Archival Library
What maintenance tasks are performed on the backup/archival library? Who is responsible for maintaining the library?
Is storage media labeled? What information is recorded on the label? Does labeling follow documented specifications?
How many copies of backup data are made, and how many generations are maintained? Are copies stored in different locations?
How is integrity of stored and retrieved files ensured (e.g. resurrecting relationships)?
Base Practice: 2.5.7 Synchronize Backups and Restores
Does a predefined schedule for regular backups and restores exist? If so, when do backups and restores occur?
What is the process for scheduling a backup/restore not regularly planned? Who manages this process?
Are there any indicators in the application that can help signal when a backup is needed if it does not fall on one of the scheduled backup times?
Generic Questions for Process Area
Are any quantitative targets set with regard to the backup/restore/archive process (e.g. % of successful backups per month)? If so, what are they? Are these targets achieved? How frequently are they evaluated?
Is the backup/restore/archive process periodically reviewed and new technologies evaluated with the purpose of identifying potential improvements? How frequently does this occur?
Do you find that adequate resources are allocated to managing the backup/restore/archive process?
What type of training do backup/restore/archive personnel receive?
Process Capability Assessment Instrument
Process Area 2.5 Backup/Restore/Archiving
Process Area Description: Backup/Restore/Archive Management considers all of the backups and restorations that need to take place across a distributed system for master copies of data. Archiving saves and stores information across the distributed environment. These processes may occur centrally or in distributed locations.
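To illustrate the housekeeping of backup generations referred to in base practice 2.5.6, the following sketch retains only the most recent generations and reports the rest for deletion; the retention count, dates and labels are assumptions for the example.

from datetime import date, timedelta

def prune_backups(backups: list, keep_generations: int) -> tuple:
    """Keep the newest `keep_generations` backups and report the rest for deletion.
    Each backup is a (label, date) pair; newest first after sorting."""
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)
    return ordered[:keep_generations], ordered[keep_generations:]

# Illustrative nightly full backups over five days, retaining three generations.
today = date(2000, 7, 24)
backups = [(f"full-{(today - timedelta(days=i)).isoformat()}", today - timedelta(days=i)) for i in range(5)]
kept, to_delete = prune_backups(backups, keep_generations=3)
print("keep:", [b[0] for b in kept])
print("delete:", [b[0] for b in to_delete])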
Questionnaire
Process Area | 2.5 Backup/Restore/Archiving
Work Product list
Process Area | 2.5 Backup/Restore/Archiving
Backup requirements document
Sample backup log
Document outlining schedule of backups (e.g. full, incremental, differential)
SLA outlining backup and restore agreements
Base Practices
References
Process Area: Monitoring
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 2.6 Monitoring
Questions
Base Practice: 2.6.1 Poll for Current Status, if necessary
How is polling of the current status of the network done?
Does polling impact the performance of the network? If so, how?
Base Practice: 2.6.2 Gather and Document Monitoring Information
From what sources is monitoring information gathered (e.g. element management systems, network components)?
Has a document been created that specifies the type of information that should be collected for monitoring purposes? Are these specifications followed?
In what format is monitoring information stored?
Base Practice: 2.6.3 Classify events/Assign severity levels/Assess impact
How do you classify or define your events?
What system or applications do you use for gathering, defining, and classifying events?
How are severity levels and system impact determined?
Base Practice: 2.6.4 Analyze Faults
What type of preliminary analysis of a fault event occurs? Is the extent of the fault investigated? If so is this process automated?
Does your monitoring tool have the capability to correlate multiple events?
Can your tool provide a high level view and then enable "drilling down" to analyze a fault?
Base Practice: 2.6.5 Route Faults to be Corrected
How is routing of faults to the appropriate resource managed?
Are fault notifications anticipated due to other errors being received?
Is a determination of the customers/devices affected by the fault made and are those customers notified?
If a fault puts the system at risk, are appropriate resources (e.g. help desk) notified? Once the fault is identified, are associated alarms suppressed?
Is fault handling tracked to ensure successful resolution? (e.g. trouble ticket logged)
Does a fault log exist, and is the appropriate level of documentation made? If so, please describe the information recorded.
Are fault statistics reported and managed? Are targets set for statistics relating to fault management and how well are these met?
Base Practice: 2.6.6 Map Event Types to Pre-defined Procedures
What types of events activate pre-defined resolution procedures?
How are these pre-defined procedures managed? How was the decision made of which events to set up with pre-defined solutions? How frequently is the collection of such events updated?
What mechanism is in place to check for successful execution of pre-defined procedures when necessary?
Base Practice: 2.6.7 Log Events Locally and/or Remotely
Where are event records stored?
For what time length is event data stored?
Who accesses the event log and for what purposes?
Base Practice: 2.6.8 Suppress Duplicated informational Messages Until Thresholds are Reached
What mechanism checks for duplicated/informational messages and clears them from the event log unless a threshold is reached?
Base Practice: 2.6.9 Display Status Information on Console(s) in Multiple Formats
1. What types of current status information can be obtained?
In what formats can such status information be viewed (e.g. graph, map, log)?
Base Practice: 2.6.10 Display Status Information in Multiple Locations
In what locations is status information displayed?
Do personnel other than operations staff access this status information? If so who does and for what purposes?
Base Practice: 2.6.11 Issue Commands on Remote Processors/Hosts
What types of commands can be run on remote processors/hosts?
Can commands to remote processors/hosts be initiated both manually and by an application?
Base Practice: 2.6.12 Set up and Change Local and/or Remote Filters
For what types of purposes are router filters set up?
How frequently does the need arise for these filters to be changed?
What is the procedure for changing filters? Who manages this process?
Base Practice: 2.6.13 Set up and Change Local and/or Remote Threshold Schemes
How are thresholds determined for critical nodes?
Do these thresholds meet SLAs?
Under what circumstances are these thresholds changed?
What is the procedure for changing threshold schemes? Who controls this process?
Base Practice: 2.6.14 Analyze Traffic Patterns
What information about network traffic is collected?
What types of conclusions are sought in analyzing the traffic data? Are there predefined guidelines for the analysis that needs to be done?
Who performs this analysis and how frequently?
Base Practice: 2.6.15 Send Broadcast Messages
Are there provisions for sending broadcast messages?
What circumstances necessitate broadcast messages?
Who has the ability/responsibility for sending broadcast messages?
How frequently are broadcast messages sent?
Generic Questions for Process Area
What personnel are involved in the monitoring process? What roles do they play? What type of relevant qualification/training do they have?
Are personnel trained to decipher monitoring data, understand the processes involved in monitoring a distributed environment, and how to make changes to the monitoring system?
Are the monitoring software and process periodically evaluated with the intent of identifying potential improvements? Who facilitates this evaluation process?
Do you feel that adequate resources are allocated for monitoring purposes? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.6 Monitoring
Process Area Description: Monitoring verifies that the system is continually functioning in accordance with defined SLAs. Monitoring consists of the following functions:
Event Management: receives, logs, classifies and presents event messages on a console(s) based on pre-established filters or thresholds. Event information is sent from such components as: hardware, applications/system software, communications resources, etc. If an event is classified as "negative" (i.e., a fault), event management forwards the event on to fault management for diagnosis and correction.
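A minimal sketch of the event-management behaviour just described is given below: events are classified against an assumed severity table, faults are forwarded, and duplicate informational messages are suppressed until a threshold is reached (compare base practice 2.6.8); the severity table and threshold are illustrative assumptions.

from collections import Counter

# Assumed classification of raw event types into severities; events classified
# as faults are forwarded to Fault Management for diagnosis and correction.
SEVERITY = {"link_down": "fault", "disk_full": "fault", "login_ok": "info", "heartbeat": "info"}
DUPLICATE_THRESHOLD = 3  # repeated informational messages are suppressed below this count

def process_events(events):
    """Classify events, forward faults, and suppress duplicate informational messages."""
    console, counts = [], Counter()
    for event in events:
        kind = SEVERITY.get(event, "info")
        counts[event] += 1
        if kind == "fault":
            console.append(("FAULT -> Fault Management", event))
        elif counts[event] == 1:
            console.append(("INFO", event))
        elif counts[event] == DUPLICATE_THRESHOLD:
            console.append((f"INFO repeated {DUPLICATE_THRESHOLD}x", event))
        # duplicates between the first occurrence and the threshold are suppressed
    return console

print(process_events(["heartbeat", "heartbeat", "link_down", "heartbeat", "login_ok"]))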
Fault Management: once a negative event has been brought to the attention of the system, actions are undertaken within Fault Management to define, diagnose and correct the fault. Although it may be possible to automate this process, human intervention may be required to perform at least some of these management tasks.
Questionnaire
Sample of event log
Network status map
Reports on traffic patterns
Reports on faults
References
Process Area: Performance Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 2.7 Performance Management
Questions
Base Practice: 2.7.1 Monitor Resources Utilization/Performance to Ensure Adequacy of Resources
How are systems/applications/network workloads monitored to check for adequacy?
What condition qualifies a resource as inadequate, and what action occurs if an inadequacy is noted? Are these procedural policies documented?
Who is responsible for monitoring adequacy of resources?
How is trending data reported to the service provider for planning?
Base Practice: 2.7.2 Establish Thresholds for Each Critical Node
How are thresholds measured and determined for managed resources?
Do these thresholds meet SLAs?
Base Practice: 2.7.3 Prioritize Information and Flag Abnormalities
How is utilization monitored vis-a-vis thresholds?
As utilization is monitored, what types of abnormalities are flagged?
What is the procedure for handling abnormalities and who is responsible for ensuring that the necessary action occurs?
Base Practice: 2.7.4 Capture, Save, Summarize and Collate Necessary Capacity Statistics
Are capacity statistics collected on an on-going basis?
For how long is this capacity data saved?
What types of summary or trend reports on capacity are generated? How often?
Who reviews these reports and for what purposes?
Base Practice: 2.7.5 Create Reports on Utilization/Capacity/Performance
What types of reports on utilization/capacity/performance are generated?
Are guidelines for the format and contents of regular reports documented?
Base Practice: 2.7.6 Disseminate Reports to Appropriate Parties
Who receives the utilization/capacity/performance reports and for what purposes?
How frequently are these reports distributed?
Base Practice: 2.7.7 Determine Where Performance Requires Short-term Adjustments
Are adjustments to performance data made to account for down time related to repairs, upgrades, etc. (to ensure trending information is not skewed)? If so, in what situations are adjustments made?
Who decides on the appropriate adjustments, and on what basis?
Base Practice: 2.7.8 Isolate the Cause ofthe Performance Problem
Is system-wide data gathered and analyzed to identify the source of a performance problem? How is this data reported? Does any trending occur?
What is the mechanism or procedure by which the cause of a performance problem is isolated using system-wide data?
Generic Questions for Process Area
What personnel are involved in the Performance Management process? What roles do they play? What type of relevant qualification/training do they have?
Is a documented set of procedural policies followed in activities related to managing performance?
Are any data collected for use in assessing performance management? If so, please describe the information collected and any metrics that are computed. Are targets for the metrics set and performance evaluated against those targets?
Do you feel that adequate resources are allocated to performance management? Please elaborate.
Process Capability Assessment Instrument
Process Area 2.7 Performance Management
Process Area Description: Performance Management ensures that the required resources are available at all times throughout the distributed system to meet the agreed-upon SLAs. This includes the monitoring and management of end-to-end performance based on utilization, capacity and overall performance statistics. If necessary, Performance Management can make adjustments to the production environments to either enhance performance or to rectify degraded performance.
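As a hedged sketch of threshold setting and abnormality flagging (base practices 2.7.2 and 2.7.3), the code below compares sampled utilization against per-resource thresholds and flags anything over its limit; the resources, samples and threshold values are assumptions made for the example.

# Assumed utilization thresholds (percent) for critical nodes, e.g. derived from SLAs.
THRESHOLDS = {"db_server_cpu": 85.0, "file_server_disk": 90.0, "wan_link": 70.0}

def flag_abnormalities(samples: dict) -> list:
    """Return (resource, utilization, threshold) for every sample over its threshold,
    ordered worst-first so the most serious abnormality is handled first."""
    flagged = [
        (name, value, THRESHOLDS[name])
        for name, value in samples.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]
    return sorted(flagged, key=lambda item: item[1] - item[2], reverse=True)

current = {"db_server_cpu": 91.5, "file_server_disk": 60.0, "wan_link": 88.0}
print(flag_abnormalities(current))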
Questionnaire
Process Area | 2.7 Performance Management
Work Product list
Process Area 2.7 Performance Management
Capacity reports
Utilization reports
Performance reports
Document listing thresholds for managed resources
PA's Metrics:
% of individuals with multiple IDs and passwords
Number of security modifications made per month
Number of security violations per month
Mean number of accounts deleted and created per month
Base Practices
References
Process Area: Security Planning & Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 2.8 Security Management & Planning
Questions
Base Practice: 2.8.1 Define Security Objectives
What types of issues are covered by the formal security policy?
Was the security policy submitted to management for approval?
Is the security policy documented and available to customers and management?
Base Practice: 2.8.2 Develop security plan and policies
Please describe the contents of the security plan?
What was the process for creating the security plan and policies?
Who is involved in the creation of the security plan/policies and who views the completed document?
Base Practice: 2.8.3 Obtain feedback & update security plan
What is the procedure by which new factors that affect the system's security are determined and incorporated into security planning?
Who is responsible for identifying and monitoring factors that might necessitate changes to the current security plan?
How does the security planning function receive information on planned changes to the distributed environment? Who is responsible for communicating such information?
How are developments of new technology (that threatens or enhances security) tracked and taken into consideration for security planning?
Base Practice: 2.8.4 Establish Security
List all security software (encryption, authentication, virus protection, remote access, proactive evaluation etc. ) that currently protects your system?
What other types of security measures have been implemented?
How are customers informed of the importance of network security and their responsibilities in supporting security?
Base Practice: 2.8.3 Receive Information from Human Resources Regarding Employee Comings and Goings
How is information on employee comings and goings communicated by Human Resources? How long after an employee's departure is the account disabled?
Who is responsible for creating and deleting accounts?
Base Practice: 2.8.4 Maintain Accounts and Ids
Who is responsible for maintaining accounts, passwords and IDs?
Are customer, supervisor and resource profiles maintained?
Do any shared login ids exist on the system? If so, for what purposes?
Does a default "guest" login ID exist on the system? If so, for what purpose and how are access rights controlled?
Are there any specifications for valid customer passwords, such as minimum length, character combinations etc.?
How frequently are customers required to change their passwords? Are customers required to change their password after an administrative reset (e.g. customer forgets password)?
Are customer accounts locked out when consecutive failed logins occur? If yes, how many failed login attempts cause a lock-out? How long is the account locked before it is reset automatically? Are customer accounts disabled when they are inactive for a set period of time? If so, what is this time period?
Base Practice: 2.8.5 Log Security Events
What types of event information are logged for security monitoring purposes?
Where are these logs stored and for what time period?
Who has access to the security event logs and for what purposes?
How are log records protected from alteration by unauthorized personnel?
Base Practice: 2.8.6 Check for Viruses and Clean up any Found
What forms of virus protection does your system have?
Are viruses checked for only when a virus scan is explicitly ordered by the customer, or does the virus checker implicitly monitor all file accesses? If the former is the case, is there a mechanism to ensure customers routinely run virus scans?
How frequently are updates to the anti-virus product received?
Base Practice: 2.8.7 Audit Logs
Is the security log monitoring process automated? If so, what types of events generate alerts?
Are the logs reviewed regularly for abnormalities that might not be automatically flagged?
What types of summary reports are created from the log information? Who receives these reports and for what purposes?
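A minimal sketch of the automated log review discussed above is given below, assuming a simple list-of-dictionaries log format; the alert event names are invented for illustration.

    # Minimal sketch, assuming a list-of-dicts log format: certain event types raise
    # alerts and a summary report is produced for reviewers. Event names are assumed.
    from collections import Counter

    ALERT_EVENTS = {"failed_login_burst", "privilege_escalation", "file_tamper"}

    def review_security_log(entries):
        alerts = [e for e in entries if e["event"] in ALERT_EVENTS]
        summary = Counter(e["event"] for e in entries)
        return alerts, summary

    log = [{"event": "login", "user": "jdoe"},
           {"event": "privilege_escalation", "user": "guest"},
           {"event": "login", "user": "asmith"}]
    alerts, summary = review_security_log(log)
    print([a["user"] for a in alerts])   # ['guest']
    print(dict(summary))                 # {'login': 2, 'privilege_escalation': 1}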
Base Practice: 2.8.8 Take Corrective Actions for Security Violations
What is the procedure for dealing with security violations? Are these procedural guidelines documented and viewed by security personnel?
Are security violations handled off-line?
When are security violations escalated and what is the process for doing so? Are escalation policies documented?
What types of reports are generated on security violations? Who reviews these reports and for what purposes?
Base Practice: 2.8.9 Monitor Security Plan for its Effectiveness
At time of security plan creation, were any means for judging plan effectiveness specified? If so, what are these methods, and are they routinely employed?
How frequently are security data reviewed to assess effectiveness of security plan? Who is responsible for performing these reviews?
Are any quantitative targets related to security set? Are these typically met? If they are not met, what is done?
What types of explicit testing (e.g. running hacker tools) of the system's security are performed? How frequently?
Generic Questions for Process Area
Do you find that adequate resources are devoted to planning, implementing and monitoring system security?
Are security policies and procedures documented and communicated to appropriate personnel? What type of training do security personnel receive?
Process Capability Assessment Instrument
Process Area 2.8 Security Planning & Management
Process Area Description: Security Planning initially involves defining the organization's security policy and developing a security "plan of action". An ongoing function of Security Planning is to evaluate the effectiveness of the existing security plan - particularly in the context of changing technologies - and plan for future security needs.
Security Management controls both physical and logical security for the distributed system. Due to the nature of a distributed environment, security may need to be managed centrally, remotely or through a combination of the two methods. Security Management also handles the logging of proper and illegal access, provides a way to audit security information, rectify security breaches and address unauthorized use of the system.
Process Capability Assessment Instrument: Questionnaire
Process Area | 2.8 Security Planning & Management
Figure imgf000128_0001
Figure imgf000129_0001
Process Capability Assessment Instrument: Work Product list
Process Area | 2.8 Security Planning & Management
Security policy document
Security plans and procedures document
Sample of security log
Security violations reports
Report on any tests of the security system
Figure imgf000129_0002
Base Practices
Figure imgf000129_0003
Figure imgf000130_0001
References
Figure imgf000130_0002
Process Area: Physical Site Planning & Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000130_0003
Figure imgf000131_0001
Level 2
Figure imgf000131_0002
Level 3 Assessment Indicators
Figure imgf000131_0003
Figure imgf000132_0001
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement | GP4.1 Establish measurable quality objectives for the operations environment | Addressing and responding to physical site management issues based on strategic business needs vs. industry standards. |
Process Measurement | GP4.2 Automate data collection | Metrics are automatically collected from physical site management vs. a manual collection, for example: time of day and reason UPS usage occurred, breakdown of tests performed according to plan and their results, etc. |
Process Measurement | GP4.3 Provide adequate resources and infrastructure for data collection | Metrics collected by physical site management personnel are analyzed and reported. |
Process Control | GP4.4 Use data analysis methods and tools to manage and improve the process | Physical site management is evaluated against performance goals and metrics for suggested improvements and revisions to the process. |
Level 5 Assessment Indicators
Figure imgf000132_0002
Interview Guide
Process Area | 2.9 Physical Site Planning & Management
Questions
Base Practice: 2.9.0 Determine physical site needs
Is there a procedure in place that plans for the control and management of construction, development or changes to the physical site? If yes, what is it? Is it followed? Who is responsible for this plan?
Is the physical site planning handled via one plan or several? If more than one, why? Is feedback collected for one or all plans? If yes, how often and by whom?
Are plans determined by balancing implementation costs with estimated business benefits? If yes, by whom (e.g. team, individual, management, etc.)?
Does planning consider the following requirements and functions: hardware capacity and layout, HVAC and fire suppression, power, structural planning (i.e. mitigating man-made or natural disasters), and integration with security planning & management?
If yes, explain.
Are business goals established for physical site planning and incorporated? If yes, by whom? How often is the plan reviewed?
Base Practice: 2.9.1 Test environmental regulatory control plans periodically on a per-site basis
1. Is testing performed regarding environmental and regulatory controls on a periodic basis? If yes, how often for each site and by whom? If no, explain?
What are the main environmental/regulatory concerns for each site? Please prioritize and explain?
Are the plans for testing updated to include new equipment, regulations, etc.? If yes, how often are they reviewed and by whom?
Base Practice: 2.9.2 Notify appropriate party of environmental failure on a per-site basis
When a failure is encountered are there identified contacts who you notify for each site? If yes, how is notification done (e.g. pager, e-mail, phone, etc.)?
What are the most common failures within each site and how often do they occur? How is feedback from the various sites collected (e.g. reports, conference calls, e-mail, etc.)?
Are data collected regarding the types of failures, response time, locations, reasons, etc? If yes, what data are collected and who receives this data? Are data collected on a manual basis or automatically?
Base Practice: 2.9.3 Monitor progress of corrective actions to failure on a per-site basis
Are corrective actions, in response to previous failures, monitored per site? If yes, how are they monitored and who is responsible for this? Are other related groups notified of changes or issues concerning any corrective action? If yes, how and when? If no, explain?
Are metrics collected on the progress or status of physical site management procedures for each site? If yes, how often and are these collections done manually or are there software/automation tools in use? Are these metrics analyzed against goals and quantified objectives? If yes, by whom?
Base Practice: 2.9.4 Monitor physical site management plan for its effectiveness on a per-site basis
Are business goals and strategies for each site used to measure the success or failure of corrections and/or the general operation procedures for physical site management?
Are the physical site management tasks continuously improved? If yes, are these improvements deployed and measured for effectiveness?
Are enough resources available, in terms of equipment, space, procedures, software and/or personnel, at each site? If no, explain how the addition of resources would improve the effectiveness of a site (e.g. better monitoring, quicker response time, accurate data, etc.)?
Base Practice: 2.9.5 Provide feedback on physical site management to physical site planning function
Is feedback from physical site management forwarded to physical site planning? If yes, how (e.g. conference calls, reports, e-mail, etc.)?
Are the plans, procedure reviews, issues and problems for each site collected and addressed via one centralized group or is each site a completely separate entity? If separate, does each communicate with physical site planning?
Generic Questions for Process Area
Is there a written policy regarding physical site management's procedures? If yes, is it followed? Is version control enacted on this plan? Are change control documents regarding the plan cut and forwarded to appropriate departments?
Is training made available to new hires within physical site management? Is follow-up training covering new technologies, procedures, etc. provided? Are plans made for future employment needs within physical site management?
Is the entire physical site management process reviewed for continuous improvement? If yes, by whom and how often? Are the improvements deployed and measured against business goals and metrics? If yes, by whom?
Process Capability Assessment Instrument
Figure imgf000133_0001
Questionnaire
Process Area | 2.9 Physical Site Planning & Management
Figure imgf000134_0001
Work Product list
Process Area | 2.9 Physical Site Planning & Management
Procedures noting physical site planning (e.g. expansion, new layout, etc.)
Procedures regarding environmental regulatory control plans for each site
Failure monitoring/reporting procedures for each site
Reports noting status of physical site management for each site
List of risk issues for physical site management (e.g. earthquakes, wild fires, temperature extremes, brown/black outs, frequency of lightning strikes, tornadoes, etc.) for each site
Figure imgf000134_0002
Base Practices
BP Number 2.10.1
BP Name Monitor and control storage usage
BP Description Key factors in monitoring and controlling storage usage:
Team of knowledgeable people
Mass Storage tools to support items such as:
Multiple platforms
Multiple addressing media forms
Media-sharing
Scalability/flexibility
Automated media management
Use of hierarchical storage management (reduce online storage and tape backup requirements)
Example Qualified database administrators oversee the operation of a tool able to support the operating systems within the distributed environment, to determine available space, to assess the physical file placement and volume mappings, to eliminate fragmentation, and to manage the number/type/location of storage devices.
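As a hedged illustration of this base practice, the sketch below reports usage for a set of mount points and flags any that exceed an assumed ceiling; the paths and the 85% threshold are examples only, not values prescribed by the model.

    # Illustrative sketch of BP 2.10.1-style monitoring: report usage for a set of
    # mount points and flag any that exceed an assumed threshold.
    import os, shutil

    MOUNT_POINTS = ["/", "/var", "/home"]    # assumed, platform-specific paths
    USAGE_CEILING = 0.85                     # assumed alert threshold

    def check_storage(mounts=MOUNT_POINTS, ceiling=USAGE_CEILING):
        findings = []
        for mount in mounts:
            if not os.path.exists(mount):
                continue                     # skip mounts not present on this platform
            usage = shutil.disk_usage(mount)
            fraction = usage.used / usage.total
            if fraction > ceiling:
                findings.append((mount, fraction))
        return findings

    for mount, fraction in check_storage():
        print(f"{mount}: {fraction:.1%} used - exceeds ceiling, notify storage management")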
BP Number 2.10.2
BP Name Define usage standards for storage media
BP Description Usage standards and support for storage media are typically defined in the following manner:
System description documents
Operational procedures documents
Software description documents
Contact list for help desk and problem resolution personnel
Detailed and quick-reference Mass Storage Management software documentation
Mass Storage Management software on-line help or context-sensitive help
Operating systems on-line help or manual pages
Hard copy backups of Mass Storage Management configuration files or customized scripts
Disaster recovery plan
Example Storage policies, naming standards and storage hardware configurations and characteristics (e.g. maximum usage level per device) are registered in the storage information database.
BP Number 2.10.3
BP Name Disk Space Management for Mass Storage
BP Description Determine the requirements for shared disk space. Partition the disk space as necessary.
Example A set of usage profiles is established to aid in allocating disk space. Once space is partitioned, drive access is restricted appropriately. Usage requirements are tracked so disk space allocation can be updated to reflect changes.
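The usage-profile idea in this example might look roughly as follows; the profile names and shares are assumptions for illustration only.

    # Hedged sketch of allocating shared disk space by named usage profiles; the
    # allocation can be recomputed as tracked requirements change.
    USAGE_PROFILES = {"database": 0.50, "home_directories": 0.30, "scratch": 0.20}

    def allocate_disk(total_gb, profiles=USAGE_PROFILES):
        if abs(sum(profiles.values()) - 1.0) > 1e-9:
            raise ValueError("profile shares must sum to 1.0")
        return {name: round(total_gb * share, 1) for name, share in profiles.items()}

    print(allocate_disk(2000))
    # {'database': 1000.0, 'home_directories': 600.0, 'scratch': 400.0}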
BP Number 2.10.4
BP Name Rectify problems with stateless file systems (e.g., hanging)
BP Description Some ways to rectify stateless file system problems:
Identify whether a file has been or is being changed during backup.
Schedule backups during times when files are least likely to be in use.
Lock files during backup.
Example During backup a log is generated to track all stateless files that could not be backed up. Additional attempts to back up those files are made until a predefined threshold is reached. At that point, notification of unsuccessful backup of specific files is sent to appropriate parties.
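A minimal sketch of the retry-then-notify behaviour in this example is shown below; attempt_backup() and notify() are stand-ins for site-specific mechanisms, not real APIs, and the threshold is assumed.

    # Minimal sketch of retrying failed backups up to a threshold, then notifying.
    RETRY_THRESHOLD = 3   # assumed number of attempts before escalation

    def backup_with_retries(files, attempt_backup, notify, threshold=RETRY_THRESHOLD):
        failed = []
        for path in files:
            for attempt in range(1, threshold + 1):
                if attempt_backup(path):
                    break                 # file backed up successfully
            else:
                failed.append(path)       # threshold reached without success
        if failed:
            notify(f"Backup failed after {threshold} attempts: {failed}")
        return failed

    # Example with stand-in callables: this file keeps failing within the threshold.
    attempts = {"/data/open_file.db": 0}
    def attempt_backup(path):
        attempts[path] = attempts.get(path, 0) + 1
        return attempts[path] > 3
    backup_with_retries(["/data/open_file.db"], attempt_backup, print)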
BP Number 2.10.5
BP Name Locate Datasheets According to Access Priority
BP Description The media hierarchy is typically broken up into the following levels:
Frequently accessed data: On-line storage (e.g. hard drive)
Moderately accessed data: Near-line storage (e.g. CD-ROM)
Rarely accessed data: Off-line storage (e.g. tape)
Determine how often data is accessed and decide what storage level is appropriate for that data.
The data can be moved up and down this hierarchical format to fit the access needs of the data.
Example A Hierarchical Storage Management (HSM) system is installed that determines which files are being accessed and which files should be moved to another form of storage. The HSM software looks at how often the customers are accessing various files, then categorizes them according to the age of the last access. The more frequently a file is accessed, the more likely it is to remain on readily accessible (on-line) storage.
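The access-age categorization described in this example might be approximated as follows; the day thresholds are assumptions that a real HSM tool would take from policy.

    # Sketch only: categorising files into the storage tiers described above by the
    # age of their last access. Thresholds are assumed.
    import os, tempfile, time

    TIER_RULES = [(30, "on-line"), (180, "near-line")]   # days; anything older goes off-line

    def storage_tier(path, now=None):
        now = now or time.time()
        age_days = (now - os.path.getatime(path)) / 86400
        for limit, tier in TIER_RULES:
            if age_days <= limit:
                return tier
        return "off-line"

    # Example: a freshly created file falls into the on-line tier.
    with tempfile.NamedTemporaryFile(delete=False) as handle:
        sample = handle.name
    print(storage_tier(sample))   # "on-line"
    os.remove(sample)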
Figure imgf000136_0002
References
Figure imgf000136_0003
Process Area: Mass Storage Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000136_0004
Level 2
Figure imgf000136_0001
Figure imgf000137_0001
Level 3 Assessment Indicators
Figure imgf000137_0002
Level 4 Assessment Indicators
Process Attribute | Generic Practice | Example of Assessment Indicator | Assessment Indicators at Client
Process Measurement | GP4.1 Establish measurable quality objectives for the operations environment | Quantitative targets for mass storage management performance are periodically set. |
Process Measurement | GP4.2 Automate data collection | The storage management tool automatically collects data on how frequently files are accessed and reassigns storage locations within the media hierarchy accordingly. |
Process Measurement | GP4.3 Provide adequate resources and infrastructure for data collection | All data specified as necessary for mass storage management (e.g. available disk space, access frequencies) or for assessing the process are collected. |
Process Control | GP4.4 Use data analysis methods and tools to manage and improve the process | Assessment metrics collected are compared to targets and discrepancies addressed. |
Level 5 Assessment Indicators
Figure imgf000137_0003
Figure imgf000138_0001
Process Capability Assessment Instrument: Interview Guide
Process Area | 2.10 Mass Storage Management
Questions
Base Practice: 2.10.1 Monitor and Control Storage Usage
What type of system or tool do you have in place for monitoring and controlling storage usage? What utilities does it have?
Can the tool support all the operating systems within the distributed environment?
Does the tool have the ability to assess the physical file placement and determine space availability? Does the tool allow for reordering of files to eliminate fragmentation?
What media types are used for storage? Can the tool monitor all these media types?
Who oversees or manages the monitoring and control process? What are their responsibilities?
Base Practice: 2.10.2 Define Usage Standards for Storage Media
What information is specified as part of the storage media's usage standards? Are system descriptions, operational procedures, help-desk/problem resolution contacts, Mass Storage Management configuration files etc. included?
Where is the usage standards documentation stored and who accesses these documents? Who is responsible for maintaining usage standards documentation?
How frequently are usage standards reviewed and updated? What is the process for doing so?
Base Practice: 2.10.3 Disk Space Management for Mass Storage
What is the procedure for determining shared disk space requirements?
On what basis is disk-space partitioning done?
How is disk space allocation tracked?
How frequently are disk space requirements reevaluated and space reallocated?
Base Practice: 2.10.4 Rectify Problems with Stateless File Systems
What mechanisms are employed to rectify backup problems resulting from stateless file systems? Has an assessment been made of how well these mechanisms deal with the problem? If so, what was the outcome of the assessment?
Base Practice: 2.10.5 Locate Datasheets According to Access Priority
Does a storage media hierarchy (based on ease of access) exist and is data stored at particular levels based on defined strategies or priorities? If so, what are the levels of the hierarchy (e.g. online, nearline, offline) and how is data assigned to a particular level?
Are data moved around within the hierarchy? What circumstances initiate such location changes? Is there an automated process for discerning what datasheets should be moved? (e.g. the storage management software keeps track of the number of times particular files are accessed and determines which files should be moved to make retrieval more efficient)? If manual intervention is required, what needs to be done and who does it?
Do you have any means of gauging the efficiency of your data organization at a particular time? If so how frequently is the efficiency assessed? Are any efficiency-related targets set?
Base Practice: 2.10.6 Tape Management
What is your procedure for requesting, locating and loading tapes?
Where are tapes stored? How is the location of each tape in storage tracked?
How do you ensure that all tapes are labeled? What information is recorded on the label?
Generic Questions for Process Area
Are problems ever experienced in running backups due to large data volumes, inadequate bandwidth or sub-optimal hardware/software support?
What type of training do storage management personnel receive on standards, policies and actual operation of the mass storage management system?
Are procedures audited to verify that standards and policies are being followed? Are storage management operations periodically reviewed with the purpose of identifying potential improvements?
Do you find that the resources devoted to mass storage management satisfactorily meet the storage needs of the organization?
Process Capability Assessment Instrument
Figure imgf000139_0001
Questionnaire
Process Area | 2.10 Mass Storage Management
Figure imgf000139_0002
Work Product list
Process Area | 2.10 Mass Storage Management
Storage policies document
Naming standards document
Tape management procedures
Usage level reports
Release Management (3.1)
PA Number 3.1
PA Name Release Management
PA Purpose Release Management is the overall process of delivering an on-time release into production. Release Management is broken down into several areas, which are described below:
Release Planning
Release Planning coordinates the release of updates to the distributed and central sites. Due to the fact that any change in the distributed environment may impact other components, releases must be planned carefully to ensure that a change will not negatively impact the distributed system.
Figure imgf000140_0001
Base Practices
Figure imgf000140_0002
Figure imgf000141_0001
References
Figure imgf000141_0002
Process Area: Release Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000141_0003
Level 2
Figure imgf000141_0004
Figure imgf000142_0001
Level 3 Assessment Indicators
Figure imgf000142_0002
Level 4 Assessment Indicators
Figure imgf000142_0003
Level 5 Assessment Indicators
Figure imgf000142_0004
Figure imgf000143_0002
Process Capability Assessment Instrument: Interview Guide
Process Area 3.1 Release Management
Figure imgf000143_0001
Generic Questions for Process Area
1. Are there release management procedures/policies noting how change orders, schedules, reports, analysis and feedback are processed? If yes, what are the procedures? Is this followed? If not, why?
2. Please describe training for release management personnel. Is this enacted for new as well as existing staff members?
3. What metrics are collected on the release management process to measure success/completion/failure? Are adequate resources provided to gather these statistics? What is done with these metrics?
4. Are standardized checklists, processes and required deliverables noted to personnel who perform the release process? If yes, what is used (e.g. checklists, process, etc.)?
5. Is the release management process reviewed for continuous improvement and are these improvements enacted? If yes, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
Process Capability Assessment Instrument
Process Area 3.1 Release Management
Process Area Description: Release Management is the overall process of delivering an on-time release into production. Release Management is broken down into several areas, which are described below:
Release Planning
Release Planning coordinates the release of updates to the distributed and central sites. Due to the fact that any change in the distributed environment may impact other components, releases must be planned carefully to ensure that a change will not negatively impact the distributed system.
Release Planning defines the content of a release; groups new or changed software, data, procedures, training material and upgrade packages for distribution and implementation; applies versions to the release components, and creates a release schedule.
Release Tracking
Release Tracking is the process of monitoring the progress of release contents and all releases.
Questionnaire
Process Area | 3.1 Release Management
Figure imgf000144_0001
Work Product list
Process Area | 3.1 Release Management
Documented release procedures
Example of a past release schedule
Example of configuration parameters
Example of build procedures and scripts
Example of operations procedures
Example of customer procedures
Example of customer training materials
Example of legacy data interfaces
Example of early release rollout process successes and failures.
Figure imgf000145_0001
Base Practices
BP Number 3.2.1
BP Name Change Initiation
BP Description The change request serves as a formal record to document and track the status of a change from identification to its eventual completion. In this activity, a change request is created and logged, and the criticality of the change is determined. Receipt of the change request is confirmed with the requestor.
Example The requester completes and submits a 'Change Request Form' that contains information such as the date the Change Request Form was completed, the name and signature of the employee requesting the change, the type of change request (systems, security, or other), the description of the change, rationale for the change, and the priority. The Change Request Form is logged in the change control database.
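A hedged sketch of such a change request record, logged to a stand-in change control database, is given below; the field names follow the example text, while the storage mechanism and priority labels are assumed.

    # Sketch of a change request record and a simple in-memory change control log.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ChangeRequest:
        requester: str
        request_type: str        # "systems", "security", or "other"
        description: str
        rationale: str
        priority: str            # e.g. "emergency", "high", "routine" (assumed labels)
        date_completed: date = field(default_factory=date.today)
        status: str = "logged"

    change_control_db = []       # stand-in for the change control database

    def log_change_request(request):
        change_control_db.append(request)
        return len(change_control_db)    # simple sequential request number

    number = log_change_request(ChangeRequest(
        requester="J. Doe", request_type="security",
        description="Add proxy rule", rationale="New remote site", priority="routine"))
    print(number, change_control_db[0].status)   # 1 logged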
BP Number 3.2.2
Figure imgf000146_0001
Figure imgf000147_0001
Process Area: Change Control
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000147_0002
Level 2
Figure imgf000147_0003
Figure imgf000148_0001
Level 3 Assessment Indicators
Figure imgf000148_0002
Level 4 Assessment Indicators
Figure imgf000148_0003
Level 5 Assessment Indicators
Figure imgf000148_0004
Figure imgf000149_0001
Process Capability Assessment Instrument: Interview Guide
Process Area 3.2 Change Control
Questions
Base Practice: 3.2.1 Change Initiation
How is a change initiated? Is a change-request form completed and submitted? What information is required on a change-request form?
Is confirmation of request receipt sent?
Where is a change-request logged? What information is recorded when a change-request is logged? Does each change-request receive a priority level? If so, what are the various priority levels and what action or service level does a particular priority level warrant? Does a documented policy specify these actions/levels?
Does the requestor specify the criticality of the change or do change control personnel determine the request's priority level? If the latter is the case, on what basis is a criticality level assigned to the request?
Base Practice: 3.2.2 Change Impact Analysis/Assessment
What type of analysis of the change's impact is performed? What issues are considered? Are both technical and business implications taken into consideration?
Is the effort required to complete the change determined?
Who performs the analysis and who reviews it?
What are the consequences of the change impact analysis (i.e. is the change request rejected if the change analysis yields particular results)?
Base Practice: 3.2.3 Change Approval
Whose approval is needed before a change request can be implemented? Does the person(s) whose approval is necessary depend on the scope or priority level of the change?
How is approval obtained and documented?
Is the change requestor notified of change approval or rejection?
Base Practice: 3.2.4 Change Communication and Scheduling
Once approval is obtained, what is the process for estimating the time and scheduling the change? Are other completion times and dates factored into the estimated time of a change to be implemented?
Does a master schedule exist on which the change is noted, or how is the scheduled change communicated to appropriate parties?
Base Practice: 3.2.5 Change Implementation Planning and Preparation
Who is notified of an impending change?
How does change notification take place?
How much time before the implementation of the change does notification occur?
If the system or parts of the system will be unavailable during the change implementation, how is this unavailability managed?
Base Practice: 3.2.6 Change Request Tracking
What is the process for tracking the implementation of a change request?
What events or conditions related to the change request are logged, i.e. when is the change request status updated?
Is the log reviewed to identify changes that might be overdue or that require additional action?
Base Practice: 3.2.7 Change Implementation
If necessary are change requests escalated/re-routed? What is the process for doing so? Is this process documented and followed?
How is successful completion of the requested change tested or verified?
Who is responsible for verifying the successful completion of the change?
Base Practice: 3.2.8 Change Backout and Contingency Planning
For what types of changes are back-out or contingency plans devised? Does a policy exist specifying changes that require such plans?
Where are back-out/contingency plans documented?
How frequently (often, rarely, never) are these back-out or contingency plans utilized?
Base Practice: 3.2.9 Change Reporting
What reports are generated pertaining to changes? What are the contents of these reports? Do the reports follow documented guidelines on format and content?
How frequently are these reports created and disseminated?
Who views these reports and for what purposes?
Base Practice: 3.2.10 Change Post-Implementation Reviews
Is the requestor notified of change completion, and is confirmation received?
What is the process for closing a change request?
Is an audit trail of each change request stored? If so, what documentation is saved?
Can the audit trail for a particular change request be obtained? If so, how?
Generic Questions for Process Area
Are any metrics (e.g. percent of change requests completed on time, percent of requests put on hold) collected to measure performance of the change control process? If so, what are they?
Are any quantitative performance targets set for change control? If so, please describe them. Is performance evaluated against these targets?
What type of training do change control personnel receive? Are employees aware of all document policies and procedures?
Is the change control process periodically reviewed/evaluated with the intent of identifying potential improvements?
Process Capability Assessment Instrument
Process Area 3.2 Change Control
Process Area Description: Change Control is responsible for coordinating and controlling all change administration activities within the enterprise environment (i.e. document, impact, authorize, schedule, implementation control). Change Control determines if and when a change will be carried through in the enterprise environment. Change potentially covers all events that impact application software, systems software, or hardware.
Changes may often be divided into categories, for example:
New capability, such as new applications or hardware components.
Modifications, which can change functionality, improve performance, etc.
Maintenance, typically to correct errors.
Emergency, which require immediate attention and correction/implementation.
Questionnaire
Process Area 3.2 Change Control
Figure imgf000150_0001
Figure imgf000151_0001
Work Product list
Process Area 3.2 Change Control
Change request form
Sample change control log record
Change control reports
Complete audit trail of a change request
Impact analysis results
Master change control schedule
Example of back-out/contingency plan
Validation (3.3)
Figure imgf000151_0002
Base Practices
Figure imgf000151_0003
Figure imgf000152_0001
References
Figure imgf000152_0002
Process Area: Validation
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000153_0001
Level 2
Figure imgf000153_0002
Level 3 Assessment Indicators
Figure imgf000153_0003
Figure imgf000154_0001
Level 4 Assessment Indicators
Figure imgf000154_0002
Level 5 Assessment Indicators
Figure imgf000154_0003
Process Capability Assessment Instrument: Interview Guide
Process Area 3.3 Validation
Questions
Base Practice: 3.3.1 Determine what needs to be tested for the product
What is the process for identifying all that needs to be tested for a new product? Are business requirements reviewed and taken into consideration?
Has a general set of technical standards been defined for components of the distributed environment? If so, are the testing requirements defined to ensure that compliance with these standards will be tested?
For any product are there certain standard tests performed (e.g. capacity, operability, compatibility etc.)? If so, what are these tests?
Base Practice: 3.3.2 Prepare test plans
What tasks are completed while preparing test plans?
Is a test environment specified, and the necessary preparations detailed?
How is the appropriate testing approach and test model developed?
What test plan documents are produced? Are these a standard set of documents produced for every testing project? If not, how might they vary?
Are all resources required for the testing process identified? Who is in charge of identifying them, (i.e. are others consulted for this decision or is this just done by the validation team)?
Who is involved in creating the final test plans? Who reviews the final test plan documents?
Base Practice: 3.3.3 Document test inputs and expected results
What document(s) are prepared detailing all test inputs to be used and the expected results? What other information do these documents contain? Are these documents prepared according to predefined specifications?
Are the test inputs/expected results directly linked back to individual testing requirements identified earlier?
Base Practice: 3.3.4 Install new product in test environment
Please describe the test environment used for testing. Does a single environment exist for all testing purposes?
Does the test environment cover all operating systems, configurations, applications, etc. that are in the production environment?
What tasks or activities are involved in preparing the test environment for the installation of a new product (e.g. verifying proper setup of hardware, software, network, clear data from previous tests, load test data in appropriate regions)? Are these procedures documented?
Can information be copied from the production environment to the test environment? If so, typically what information is transferred? How is this information transferred?
Is the product's installation method documented and installation issues noted? Does the installation follow a standard process or policy for all new installations? If yes, please describe this policy or process.
Base Practice: 3.3.5 Test product and evaluate results
Are all predefined testing requirements tested? Are any mechanisms in place to ensure that all specified test cases are run? If yes, what are these mechanisms?
Are any tools used for automated testing? If so, please describe them. Approximately what proportion of testing is automated and what proportion is performed manually?
Who manages/controls the testing process? What are his/her responsibilities?
In addition to testing the product functionality, is the product's business functionality verified (i.e. does the product meet the business requirements for which it is intended)? If so, what is the process for doing so?
If appropriate, is the product tested on customers to check system navigation/ease of use and adequacy of training/job aids that accompany the product?
What reports or documents are produced as the output of the testing process? What information is presented and who receives this information? Have reporting guidelines been defined?
Base Practice: 3.3.6 Perform regression testing on environment and system's functionality
What is the process for identifying the requirements for regression testing?
Is any tool employed for automated regression testing? If so, please elaborate. Does this tool meet all regression testing requirements? If not, where does it fall short? How are these shortcomings addressed?
Are any manual or automated test scripts created and retained for reuse during future regression testing activities? If yes, are these test scripts periodically updated or changed to accommodate new processes or requirements? Who updates these scripts?
If regression testing results show that the product has unintended impacts on other areas, what is done? Is the change rolled back? Who decides that a roll back should occur and at what point during the process does this happen?
Generic Questions for Process Area
Does a designated "validation team" exist? If so, please describe the roles and responsibilities of members of the team. How does the team coordinate its activities?
What other groups does validation interface with? Where do requests for testing of a particular product originate?
Are the testing process and new technologies periodically evaluated to identify potential improvements? Are associated future human resource requirements considered? How frequently does such a review occur? Who is involved in the process?
What type of training do testing personnel receive? Does formal training occur or does training primarily occur on-the-job?
Are any statistics collected for purposes of evaluating the testing process (e.g. percent of successful migrations of tested products)? If so please describe them and the method by which they are collected. Are targets for these metrics set? What is the process for assessing performance against these targets? How has performance been vis-a-vis the targets defined?
Do you find that adequate resources are allocated for validation activities? Please elaborate.
Process Capability Assessment Instrument
Process Area 3.3 Validation
Process Area Description: Validation involves testing potential hardware and software for the distributed environment prior to procurement to determine how well a product will fulfill the requirements identified. Validation also ensures that the implementation of a new product will not adversely affect the existing environment.
Questionnaire
Process Area 3.3 Validation
Figure imgf000156_0002
Work Product list
Process Area 3.3 Validation
Figure imgf000156_0003
Figure imgf000156_0001
PA's Metrics:
Total number of batch rollouts scheduled per month
Number of hours used per month for rollouts
% of time that rollout schedule is successfully adhered to
Ratio of emergency versus planned changes
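These metrics might be computed from a simple rollout log as sketched below; the record layout and field names are assumptions for illustration.

    # Illustrative sketch: deriving the deployment metrics above from a rollout log.
    def deployment_metrics(rollouts):
        """rollouts: list of dicts with keys 'hours', 'on_schedule', 'emergency'."""
        total = len(rollouts)
        if total == 0:
            return {}
        planned = sum(1 for r in rollouts if not r["emergency"])
        return {
            "rollouts_scheduled": total,
            "hours_used": sum(r["hours"] for r in rollouts),
            "pct_on_schedule": 100.0 * sum(r["on_schedule"] for r in rollouts) / total,
            "emergency_vs_planned": sum(r["emergency"] for r in rollouts) / max(planned, 1),
        }

    month = [{"hours": 4, "on_schedule": True, "emergency": False},
             {"hours": 9, "on_schedule": False, "emergency": True}]
    print(deployment_metrics(month))
    # {'rollouts_scheduled': 2, 'hours_used': 13, 'pct_on_schedule': 50.0, 'emergency_vs_planned': 1.0}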
Base Practices
Figure imgf000157_0001
References
Figure imgf000157_0002
Process Area: Deployment
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000158_0001
Level 2
Figure imgf000158_0002
Level 3 Assessment Indicators
Figure imgf000158_0003
Figure imgf000159_0001
Level 4 Assessment Indicators
Figure imgf000159_0002
Level 5 Assessment Indicators
Figure imgf000159_0003
Process Capability Assessment Instrument: Interview Guide
Process Area | 3.4 Deployment
Questions
Base Practice: 3.4.1 Confirm schedule with all key groups periodically
Figure imgf000160_0001
Is training provided for all customers affected by the deployment? If yes, describe the training.
Are the deployment activities and processes monitored for continuous improvement? If yes, how?
Have any changes been enacted and validated after they have been identified as a continuous improvement area?
Process Capability Assessment Instrument
Process Area 3.4 Deployment
Process Area Description: Deployment monitors the rollout schedule against the activities taking place to ensure that rollout happens smoothly according to the planned schedule. As there are many dependencies within a distributed system, deployment can become highly complex and must be synchronized.
In addition, numerous groups within and external to the organization will be involved in the rollout. Deployment is responsible for managing these groups, coordinating the information received from these groups, and determining whether or not the schedule will be negatively impacted by any activity taking place. If changes to the schedule are required, Deployment is responsible for coordinating the changes across all of the groups involved and seeking management approval for the changes.
Questionnaire
Process Area | 3.4 Deployment
Figure imgf000161_0001
Work Product list
Process Area | 3.4 Deployment
Example of a previous deployment plan
Example of training schedule/materials that were provided to employees who recently received a deployed application
Example of previous deployment reports
A copy of the standard procedures regarding deployment
Example of a backout strategy if deployment is not successful
Software & Data Distribution (3.5)
PA Number 3.5
PA Name Software & Data Distribution
PA Purpose The Software and Data Distribution process allows software and data to be installed or
Figure imgf000162_0001
Base Practices
Figure imgf000162_0002
References
Figure imgf000162_0003
Process Area: Software & Data Distribution
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000163_0001
Level 2
Figure imgf000163_0002
Level 3 Assessment Indicators
Figure imgf000163_0003
Figure imgf000164_0002
Level 4 Assessment Indicators
Figure imgf000164_0003
Level 5 Assessment Indicators
Figure imgf000164_0004
Process Capability Assessment Instrument
Figure imgf000164_0001
Questionnaire
Process Area 3.5 Software & Data Distribution
Figure imgf000165_0001
Work Product list
Process Area 3.5 Software & Data Distribution
Example of Software Performance Evaluation
Example of "Manual" Distribution Package sent to Users
Example output of Software/Data Distribution Reports (Successes/Failures/etc.)
Example of Asset Inventory Report for Software & Data Distribution
Current copy of Detailed Design Plan
Example of Change Control Document
Figure imgf000165_0002
Base Practices
BP Number 3.6.1
BP Name Assemble the release package
BP Description The purpose of this activity is to bundle the required components of a release and ensure that the release is correct and complete.
Example Assurances are made that the tools, testing, software, space and version control are in place before a package is released.
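The pre-release assurance step in this example can be illustrated as follows; the element names mirror the text, and the check inputs are assumed stand-ins for site-specific verifications.

    # Minimal sketch: every required element must be confirmed before release.
    REQUIRED_ELEMENTS = ["tools", "testing", "software", "space", "version_control"]

    def ready_for_release(checks):
        """checks: dict mapping element name -> bool (confirmed in place)."""
        missing = [name for name in REQUIRED_ELEMENTS if not checks.get(name, False)]
        return (len(missing) == 0), missing

    ok, missing = ready_for_release({"tools": True, "testing": True, "software": True,
                                     "space": True, "version_control": False})
    print(ok, missing)   # False ['version_control']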
BP Number 3.6.2
BP Name Maintain integrity of all master release packages
Figure imgf000166_0001
References
Figure imgf000166_0002
Process Area: Migration Control
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000166_0003
Figure imgf000167_0001
Level 2
Figure imgf000167_0002
Level 3 Assessment Indicators
Figure imgf000167_0003
Figure imgf000168_0001
Level 4 Assessment Indicators
Figure imgf000168_0002
Level 5 Assessment Indicators
Figure imgf000168_0003
Interview Guide
Process Area | 3.6 Migration Control
Questions
Base Practice: 3.6.1 Assemble the release package
Are tools, software, space and version controls always in place to secure a complete and bundled release? If yes, who does this and how? If no, explain.
Who does migration control coordinate this process with (e.g. Change Control, Validation, Deployment, Software and Data Distribution, etc.)? Explain the interactions.
Base Practice 3.6.2 Maintain integrity of all master release packages
Are all master release packages maintained in their own file and directory structure? If no, explain.
Are all documents for the master release package archived/maintained? If yes, by whom (e.g. owners, developers, programmers, etc.) and are they accessible?
Base Practice: 3.6.3 Implement version control on release received from development
Is version control maintained on release software from development? If yes, how and who is responsible? How is feedback provided (e.g. reports, form provided, etc.)?
Is change control made aware of releases received from development? If yes, how? If no, explain.
Base Practice: 3.6.4 Migrate proper versions of release from development to test environment
Are versions validated to ensure that the correct versions of releases are migrated into the test environment? If yes, how and by whom?
Is validation made aware of release migration into the environment? If yes, how? If no, explain.
Base Practice: 3.6.5 Receive confirmation that release package has been tested successfully
How is confirmation received regarding successful testing? By whom and to whom is this information sent?
2. Are all schedules updated with this information? If yes, which ones? If no, why?
Base Practice: 3.6.6 Notify appropriate parties of status of release package's migration
How are other parties notified of release package's migration? Who would be the typical receivers of such information?
Do other parties supply feedback to migration control regarding concerns, problems or collaborative efforts? If yes, how is typical communication handled (e.g. e-mail, reports, meetings, etc.)?
Base Practice: 3.6.7 Maintain migration libraries
Are migration libraries maintained? If yes, by whom and how? If no, explain how historical software or versions are kept.
2. How long are migration libraries maintained for?
Generic Questions for Process Area
Is there a formal policy in place that covers the entire migration control process? If yes, is it followed, and who is responsible for its maintenance? If no, explain.
Is there training in place for new employees? If yes, explain the training provided (e.g. ad hoc, on the job, formal, lecture)? Is follow-up training provided on new technologies and procedures for all migration control employees? Explain.
Are data collected on the migration process? If yes, is this automated? Are metrics gathered noting more statistical information? If yes, explain what metrics are collected and what tools are used (e.g. software, programs, etc.).
Are strategic goals in place for migration control? If yes, what are they and are they measured against metrics? Are these metrics analyzed against business goals and reported on? If yes, how and by whom? If no, explain.
Is the migration control process reviewed for continuous improvement? If yes, are these improvements ever deployed and measured against metrics and business goals?
Are there enough resources provided for the migration control process (e.g. software, tools, personnel, etc.)? If no, explain?
Process Capability Assessment Instrument
Process Area 3.6 Migration Control
Process Area Description: Migration Control is the process of testing updates to the distributed system prior to being released into the distributed environment. To control the updates as they move from the development into the production environment, Migration Control ensures that the proper updates are:
received from development
versioned according to the version strategy of Release Planning
moved into the test environment
moved from the test environment into the production environment after the pre-release tests have been successfully completed.
Questionnaire
Process Area | 3.6 Migration Control
Figure imgf000170_0001
Work Product list
Process Area | 3.6 Migration Control
A copy of the policy or procedure guide regarding migration control
Samples of changes requests noting migration control information
Samples of reports noting migration control status and future schedules
A copy of a migration control schedule/calendar for a typical software migration process
Figure imgf000170_0002
Base Practices
Figure imgf000170_0003
Figure imgf000171_0001
References
Figure imgf000171_0002
Figure imgf000171_0003
Base Practices
Figure imgf000171_0004
Figure imgf000172_0001
References
Figure imgf000172_0002
Process Area: Content Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000172_0003
Figure imgf000173_0001
Level 2
Figure imgf000173_0002
Level 3 Assessment Indicators
Figure imgf000173_0003
Level 4 Assessment Indicators
Figure imgf000173_0004
Figure imgf000174_0001
Level 5 Assessment Indicators
Figure imgf000174_0002
Process Capability Assessment Instrument: Interview Guide
Process Area 3.8 Content Management
Questions
Base Practice: 3.8.1 Content Development
Are meetings held to discuss verbal and graphical content of each application? If yes, who attends? How often are these meetings held?
Is a web template used to standardize information and aesthetics for every application? Who developed the template? Is there a purpose for its specific design?
Has a standardized list of approved text, image and multi-media formats been agreed upon? If yes, what are they? What was the process of composing this list?
Is there a procedure/policy regarding the content development? If yes, what is the procedure? Is the procedure followed?
Base Practice: 3.8.2 Content Approval
1. Is there a procedure for content approval? If yes, what is it?
Who reviews content for approval purposes, and whose concerns do they represent (e.g. legal, marketing, engineering, etc.)?
Are meetings held on a scheduled basis for content approval matters? If yes, who attends?
Is version control established for all web-related documents?
Base Practice: 3.8.3 Content Integration
Who is responsible for migrating documents into the production environment? Is migration performed on an ad-hoc basis or on a scheduled basis? What is the process for migrating documents?
How is old or outdated material archived/stored when new data is migrated onto the system to replace it?
Base Practice: 3.8.4 Technical Review
Are technical standards and procedures established for content review? If yes, what are they? Who conducts these reviews?
How are technical problems/concerns reported to the author, customers, content management or the web master (e.g. meetings, reports, e-mails)? Does Content Management coordinate an action plan for corrections with the author (e.g. scheduled, prioritized, ad hoc, etc.)?
What are the most common technical problems encountered? What are the future technical threats or issues to be considered? How are these problems fixed or resolved?
Base Practice: 3.8.5 Content Testing
1. Is the content tested before or after it is integrated into the production environment?
When testing content, which environments/platforms are checked for problems/issues (e.g. unix, standalone, network)?
Who is responsible for testing? How is feedback provided from and to content management, customers, authors, web masters, etc.?
Base Practice: 3.8.6 Content Restoration
Has any part or all of an archived web site ever been migrated into a production environment? If yes, explain the reason?
Who handles content restoration? What are the most common problems encountered when replacing current pages with older versions?
Is there an approval procedure as to what is restored and when? If yes, what is the process?
Base Practice: 3.8.7 Content Aging
Does the web site contain date sensitive/volatile content that must be updated often? If yes, how often and by whom?
Is the site checked for relevant and current information on a scheduled basis? If yes, by whom? How frequently does such a check occur?
Are files removed from a site (e.g. erased, archived), updated to include historical information/content or both? Is content volume an issue?
Are metrics gathered regarding content management? If yes, explain what data is gathered, why, and who is it distributed to?
Generic Questions for Process Area
Is a policy established, maintained and followed for the entire content management process? If yes, please describe it.
Are there enough personnel available in content management to perform all necessary tasks and manage the different types of contents (video, voice, etc.)? If no, why?
Is training provided for new content management personnel? If yes, how is it performed (e.g. on the job, scheduled, ad-hoc)?
Is formal training provided on a continuous basis for all content management personnel? If yes, describe training.
Are metrics collected? Is software used to perform metric collection on an automated basis? If yes, what programs are used? What data is being collected?
Is the content management process reviewed for continuous improvement? If yes, is this process measured? How?
Are all documents processed through the content management personnel prior to migration in a production environment? If no, why?
Are strategic goals established for content management? Are these measured? If yes, how?
Is the content management process compared against goals and metrics? Do these comparisons lead to suggested improvements for the process? Are deployed improvements then validated via metrics?
Does content management lack any resources that are needed to perform tasks and follow procedure? If yes, which resources?
Process Capability Assessment Instrument
Process Area 3.8 Content Management
Process Area Description: Content Management represents the people, processes, and technologies that allow a net-centric site to maintain up-to-date, secure, and valid contents for its customers.
Questionnaire
Process Area | 3.8 Content Management
Figure imgf000176_0001
Work Product List
Process Area | 3.8 Content Management
Content Management Manual
Example of any Content Management Reports
Example of a web page that progressed through the Content Management cycle
Metrics collected for the Content Management process
Examples of tracking documents/reports noting the status of web pages throughout the Content Management process
Figure imgf000176_0002
Base Practices
BP Number 3.9.1
Figure imgf000177_0001
References
Figure imgf000177_0002
Process Area: License Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000177_0003
Level 2
Figure imgf000177_0004
Figure imgf000178_0001
Level 3 Assessment Indicators
Figure imgf000178_0002
Level 4 Assessment Indicators
Figure imgf000178_0003
Level 5 Assessment Indicators
Figure imgf000179_0002
Process Capability Assessment Instrument: Interview Guide
Process Area | 3.9 License Management
Questions
Base Practice: 3.9.1 Acquire New/Increased Number of Licenses
How are new/increased number of licenses acquired? By whom?
Are the software programs used authorized by the original manufacturer? If no, explain
Are housekeeping duties performed on license information? If yes, when? How?
Is the ability available to track, run detailed reports with version information, and measure the license management process regarding software licenses? If yes, how?
Does license management authorize license use? If yes, how?
Base Practice: 3.9.2 Delete expired software and corresponding licenses
Is there a process in place for removing software with expired licenses? If yes, what is the process? How often does this occur?
Are there any reports or data collected on software where the license has expired? If so, what detailed information is collected on the expired software? What is done with the data?
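As an illustration of the practice probed above, the sketch below flags installed software whose license has expired; the inventory record format and field names are assumptions.

    # Sketch only: flagging installed software whose license has expired.
    from datetime import date

    def expired_licenses(inventory, today=None):
        today = today or date.today()
        return [item for item in inventory if item["expires"] < today]

    inventory = [{"name": "ReportTool", "expires": date(2000, 6, 30), "seats": 25},
                 {"name": "MailServer", "expires": date(2001, 1, 1), "seats": 500}]
    for item in expired_licenses(inventory, today=date(2000, 7, 26)):
        print(f"{item['name']}: license expired {item['expires']}, schedule removal")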
Base Practice: 3.9.3 Support Various License Types
1. Are various license types supported? If yes, please identify them.
How are license renewals handled? By whom?
Are notices sent when license expiration dates are near? If yes, how is notification sent?
Is unlicensed software searched for? If yes, how (physical, system)?
What is done when unlicensed software is discovered?
Generic Questions for Process Area
What is the license management process?
2. Are reviews for the license management process conducted for continual improvement?
3. If improvements are implemented, how are the outcomes measured?
4. What training is provided to new and existing personnel regarding the license management process?
5. What license management reports are generated to management for review/feedback?
6. What policy, standards or procedures have been established for license management?
7. What are the needs, priorities and quantitative goals for license management?
8. Are any resources lacking that would facilitate data collection regarding license management?
Process Capability Assessment Instrument
Figure imgf000179_0001
Figure imgf000180_0001
Questionnaire
Process Area | 3.9 License Management
Figure imgf000180_0002
Work Product list
Process Area | 3.9 License Management
Sample Software License Agreement
Sample of Software License Purchases
List of available software with details (expiration date, number of customers, etc.)
Customer's Guide for Software Tracking Program
Figure imgf000180_0003
Base Practices
BP Number 3.10.1
BP Name Manage and maintain asset information
BP Description Update and delete asset information locally or remotely. The purpose of this activity is to ensure that asset information is accurate.
Example Software packages, such as ValuWise, can allow an Asset Management team to maintain a database of which assets are assigned to whom.
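A hedged stand-in (not the ValuWise product mentioned above) for this kind of asset-to-assignee record keeping is sketched below; the database is a simple in-memory dictionary for illustration, and the field names are assumed.

    # Sketch of maintaining asset records: add, update and retire entries through
    # one interface, keyed by an asset tag.
    asset_db = {}   # asset tag -> record; stand-in for the asset database

    def upsert_asset(tag, description, assigned_to, location):
        asset_db[tag] = {"description": description,
                         "assigned_to": assigned_to,
                         "location": location}

    def retire_asset(tag):
        return asset_db.pop(tag, None)   # returns the removed record, if any

    upsert_asset("LT-0042", "Laptop", "jdoe", "Chicago office")
    upsert_asset("LT-0042", "Laptop", "asmith", "Dallas office")   # reassignment
    print(asset_db["LT-0042"]["assigned_to"])   # asmith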
Figure imgf000181_0001
References
Figure imgf000181_0002
Process Area: Asset Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Figure imgf000181_0003
Level 2
Figure imgf000181_0004
Figure imgf000182_0001
Level 3 Assessment Indicators
Figure imgf000182_0002
Level 4 Assessment Indicators
Figure imgf000182_0003
Level 5 Assessment Indicators
Figure imgf000183_0001
Process Capability Assessment Instrument: Interview Guide
Process Area | 3.10 Asset Management
Questions
Base Practice: 3.10.1 Manage and Maintain Asset Information
What tool or system is used to maintain asset information?
What attribute information is initially recorded about the assets? What types of updates are made, and how frequently?
For what purposes is asset information used (e.g. financial reporting, managing service levels etc.)? How does the asset management system interface with the other functions (such as accounting) that need access to asset information?
Does the tool enable detection and tracking of all hardware and software components installed on the network?
Can asset information be updated/deleted/browsed remotely and/or locally?
Base Practice: 3.10.2 Audit Information in System
How is information in the system audited for correctness, completeness and accuracy? How frequently do audits occur?
Can asset information be searched based on customer-defined parameters?
Who is responsible for overseeing the audit process?
Base Practice: 3.10.3 Report on Discrepancies
What reports are generated based on discrepancies identified during the audit process? What information do these reports contain?
Are the content and format of these reports based on documented standards?
Who receives these reports and for what purposes?
What action is taken if discrepancies are identified? Does the action depend on the severity of the discrepancy? Are these procedures documented?
How frequently does this reporting process occur?
Base Practice: 3.10.4 Archive Asset Information
How long is asset information stored for? In what format and where is old asset information archived?
For what purposes and how frequently is archived asset information accessed?
Base Practice: 3.10.5 Log all Assets in Inventory
How is it ensured that, in addition to assets in use, all assets in inventory are logged on the asset management system?
What is the updating process when an asset in inventory is moved for use?
Does the process for auditing informational accuracy cover assets in inventory?
Generic Questions for Process Area
Is the asset management tool/process periodically reviewed to identify potential improvements? If so, how frequently does this occur and who controls this process?
How is performance of asset management functions measured? Are any performance targets (e.g. percent of incorrect asset data in system) for the asset management process defined? If so, what are they and how is performance assessed against these targets?
Do you find that the existing asset management system adequately meets the organization's asset information needs?
What type of relevant qualifications and training do asset management personnel have?
Process Capability Assessment Instrument
Process Area 3.10 Asset Management
Process Area Description: Asset Management ensures that all assets are registered within the inventory system and that detailed information for registered assets is updated and validated throughout the asset's lifetime. This information will be required for such activities as managing service levels, managing change, assisting in incident and problem resolution and providing necessary financial information to the organization.
Questionnaire
Process Area 3.10 Asset Management
Work Product list
Process Area | 3.10 Asset Management
Example list of assets and details related to each asset
Sample asset log
Audit reports
Discrepancy reports (if different from above)
Procurement (3.11)
PA Number 3.11
PA Name Procurement
PA Purpose Procurement is responsible for ensuring that the necessary quantities of equipment (both hardware and software) are purchased and delivered on time to the appropriate locations. Procurement is also responsible for logging all assets into the inventory as they are received.
PA's Base Practices
Maintain vendor information
Receive and log request
Identify vendor and place order
Track orders
Ensure timely/accurate delivery & log assets received
Manage returns and replacements
Report on procurement activities and assess procurement strategy
PA Goals To procure and deliver assets on time and at the lowest possible cost. To maintain accurate vendor information. To ensure all assets purchased are entered into asset management system.
PA's Metrics
Differential between actual and budgeted equipment costs
Percentage of requested items delivered on time
Costs incurred from returns due to incorrect purchases
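A minimal sketch of how the three PA metrics listed above might be computed from an order log. This is illustrative only; the order record fields (actual_cost, budgeted_cost, promised, delivered, return_cost) are assumptions, not terms from the original text.

```python
from datetime import date

def procurement_metrics(orders):
    """Compute the three procurement metrics from a list of order records (assumed fields)."""
    actual = sum(o["actual_cost"] for o in orders)
    budgeted = sum(o["budgeted_cost"] for o in orders)
    on_time = sum(1 for o in orders if o["delivered"] <= o["promised"])
    return {
        "cost_differential": actual - budgeted,                        # actual vs. budgeted equipment costs
        "pct_on_time": 100.0 * on_time / len(orders) if orders else 0.0,
        "return_costs": sum(o.get("return_cost", 0.0) for o in orders),
    }

orders = [
    {"actual_cost": 1200.0, "budgeted_cost": 1000.0,
     "promised": date(2000, 6, 1), "delivered": date(2000, 5, 28)},
    {"actual_cost": 800.0, "budgeted_cost": 900.0,
     "promised": date(2000, 6, 15), "delivered": date(2000, 6, 20), "return_cost": 150.0},
]
print(procurement_metrics(orders))
# {'cost_differential': 100.0, 'pct_on_time': 50.0, 'return_costs': 150.0}
```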
Base Practices
References
Process Area: Procurement
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 3.11 Procurement
Questions
Base Practice: 3.11.1 Maintain vendor information
What was the process for creating a list of approved vendors? Have vendors been identified for each type of standard equipment? Does the list include more than one potential vendor for each type of standard equipment?
What information about potential vendors and those used in the past is stored? For example, is the history of transactions and quality of service received noted? Are special terms or conditions that apply to a vendor recorded?
Is information maintained on any regulatory requirements or existing contracts that could affect vendor selection?
When does vendor information get entered and who is responsible for maintaining it?
Who accesses the vendor information and for what purposes?
Base Practice: 3.11.2 Receive and log request
In what format does procurement receive a purchase request (e.g. a request form, on-line etc.)?
What information does the purchase request contain?
Does procurement verify that the request carries the necessary approval or authorization? How is this done? Whose approval is required for purchases? Does a documented policy describe the necessary authorizations?
For non-standard orders, does procurement verify the technical compatibility of the equipment/software requested? What is the process for verifying compatibility?
Is every request logged when received? If so how? Are these procedures documented?
Base Practice: 3.11.3 Identify vendor and place order
What is the process for selecting a vendor for a particular order? Is the vendor listing and information used?
Does negotiation of specific terms occur with the vendor after selection, or does preliminary negotiation occur with several potential vendors, with the outcomes then considered during selection?
Who is responsible for placing an order? Is a purchase order or other document used? If so, please describe. Is the log updated when the order is placed?
Is the requester notified of the order placement and estimated delivery date?
Base Practice: 3.11.4 Track orders
How are open orders tracked? Do specified checkpoints exist when all open orders are reviewed to identify any over-due deliveries?
Is backlog and backorder information maintained? If yes, by whom?
In what instances does procurement need to communicate with rollout/release management? What information is exchanged?
What action is taken if an order is overdue?
Base Practice: 3.11.5 Ensure timely/accurate delivery & log assets received
What is the procedure for handling receipt of equipment delivered? How is procurement involved?
Are any proactive steps taken to ensure timely delivery (e.g. is the supplier contacted shortly before the delivery date to verify the delivery)?
Does procurement verify that the correct equipment was received? How?
Is the receipt logged and the request record closed? What is the procedure for this? Who is responsible for logging all assets received in the asset management system?
Process Capability Assessment Instrument
Process Area 3.11 Procurement
Process Area Description: Procurement is responsible for ensuring that the necessary quantities of equipment (both hardware and software) are purchased and delivered on time to the appropriate locations. Procurement is also responsible for logging all assets into the inventory as they are received.
Questionnaire
Process Area 3.11 Procurement
Work Product list
Process Area | 3.11 Procurement
Purchase request form
Purchase order
Sample vendor profile
Procurement reports
Current Procurement catalogue of vendors/suppliers
Process Area: Quality Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 4.3 Quality Management
Process Capability Assessment Instrument
Process Area 4.3 Quality Management
Process Area Description: Quality Management is an on-going process which monitors how well the distributed environment is being managed, and looks toward continually improving its management capabilities and service. Within this process, quality improvement actions are determined, agreed upon, planned and monitored.
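As an illustration of "determined, agreed upon, planned and monitored", the sketch below tracks a quality improvement action through simple status transitions. It is a hypothetical example only; the status names, transitions, and fields are assumptions, not part of the original text.

```python
# Minimal tracker for quality improvement actions; statuses are assumed example values.
ALLOWED_TRANSITIONS = {
    "determined": {"agreed"},
    "agreed": {"planned"},
    "planned": {"monitored"},
    "monitored": {"closed"},
}

class ImprovementAction:
    def __init__(self, description, owner):
        self.description, self.owner, self.status = description, owner, "determined"

    def advance(self, new_status):
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.status = new_status

action = ImprovementAction("Reduce incident reopen rate", owner="ops-quality")
for step in ("agreed", "planned", "monitored"):
    action.advance(step)
print(action.description, "->", action.status)  # Reduce incident reopen rate -> monitored
```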
Questionnaire
Process Area | 4.3 Quality Management
Work Product list
Process Area | 4.3 Quality Management
Quality improvement action plan
Quality improvement action schedule
Quality assessment reports
Organizational chart or hiring matrix of quality assessment team
Base Practices
References
Process Area: Legal Issues Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 4.5 Legal Issues Management
Questions
Base Practice: 4.5.1 Identify legal risk areas
1. Is the web site reviewed for legal risk issues prior to publishing? If yes, by whom and how often? If no, why?
What issues have provided the most concern? Have these concerns been made known to and been addressed by the web master, content management or other related operational areas? If yes, how are they made known (e.g. symposiums, reports, conferences, phone mail, etc.) and addressed (e.g. policy, procedures, reviews, etc.)?
Are legal issues personnel consistently made aware of new issues, litigation and laws that might affect future web publishing? If yes, what is of concern?
Does the web site contain any disclaimers that would remove you from liability issues? If yes, what are they and what prompted their use?
Are legal issues reviewed on a state, domestic or worldwide scope? How has this view helped or hindered the process? Is jurisdiction a justification for the chosen scope?
Base Practice: 4.5.2 Identify types of content where one may be legally at risk
Do the legal issues personnel review the different types of content (e.g. graphics, video, audio, Java applets, etc.) for risk? If yes, what types provide the most and least concern? Who is responsible for this review? How often is it done?
Is there a process in place to gain permission to use/publish copyrighted material? If yes, what is it? Is it consistently followed? Who is responsible for this? What types of content are the most protected/least protected?
Does the site allow any customers to download/FTP software? If yes, what software and what legal notifications are provided to the customers?
Are the graphics/text for any sales products provided with a disclaimer (e.g. color may be different than actual, size may be different, quantities are limited, etc.)? If yes, what are they?
Base Practice: 4.5.3 Identify customers
Are pages evaluated with customers, laws, business goals, and employees in mind? If yes, what are the areas of concentration/review for each of these audiences?
Do customers communicate with the firm regarding legal concerns or complaints? If yes, how do they do this? To whom is this communication directed? Which group of customers seem to be the most vocal about the content (e.g. system, public, corporations, government, etc.)?
Do all customers, who are not employed with the firm, have the ability to gain access to all parts of the web site (e.g. chat rooms, join e-mail lists, place orders, view inventory, etc.)? If yes, what are the most popular destinations and peak times? Are surveys offered to these customers? If no, what type of access control do you provide (log-on and password, return e-mail address, etc.) and are legal disclaimers provided for any legally sensitive areas? Please explain.
When responding to complaints or legal instruments initiated by a customer, do the legal issues management personnel meet with other counsel to respond or is the issue handed off to another department? In your experience, has this happened before and what were the circumstances?
Base Practice: 4.5.4 Legal process setup and refinement
Is the legal issues management procedure/policy maintained to address new net centric issues? If yes, by whom and how often? Is it consistently followed?
Do the legal issues management personnel forward documents in question to corporate counsel for review, approval/change and/or resolution? If yes, explain the procedure. Who is responsible for tracking the document once it is transferred to corporate counsel? Explain this tracking.
What legal requirements and issues (e.g. privacy, censorship, freedom of information, intellectual property, etc.) are gathered on an on-going basis to ensure legal credibility for the site?
Are new business offerings by the firm viewed for operational legal requirements? If yes, by whom and how often (e.g. scheduled vs. ad hoc)?
Does the legal issues group maintain contracts and ensure their deployment for compliance? If yes, who is responsible for this and how often are reviews performed?
Generic Questions for Process Area
What is the standard procedure/policy with regard to legal issues management tasks and procedures? Is it followed? At any time are some procedures done in an ad hoc manner? If yes, please explain.
Are adequate tools and personnel available for legal issues management tasks and procedures? What are the tools and who are the personnel?
Is training held for new employees within the legal issues management group? If yes, is this done on the job or during formal training sessions? Are classes / training provided to all legal issues personnel which cover new issues/procedures/tasks etc.? If yes, how often is this planned?
Have measures been defined, selected and subsequent data collected for legal issues management? If yes, what type and how often?
What reports are provided to various departments within the firm from legal issues management regarding pertinent issues (e.g. changes to plans, decisions, process, requirements, etc.)? To whom do they go and how often? Do recipients of these reports provide feedback to legal issues management? If yes, what method is used (e.g. e-mail, meetings, hardcopy, etc.)?
Does the legal issues management group provide web pages with version control numbers and change order requests for updated page content?
Are all change order requests for web pages signed off by legal issues management? If no, why? If yes, by whom? How often is this done?
Are metrics automatically collected from the web site for use by legal issues personnel? If yes, what is it? How is it collected (e.g. automated, manually, both)?
Are the legal issues management processes continually improved? If yes, how? Are the improvements validated and quantified against business goals and objectives?
Is your legal issues team made up of qualified lawyers? What type of continuing education do they pursue?
Process Capability Assessment Instrument
Process Area 4.5 Legal Issues Management
Process Area Description: Legal Issues Management addresses the legal liability considerations associated with doing business on a public network. To ensure that legal risk is limited, there is a need for a close tie between the Service Provider's Operations department and Legal department.
Questionnaire
Process Area | 4.5 Legal Issues Management
Work Product list
Process Area | 4.5 Legal Issues Management
Legal Issues Management procedure manual /policy
Examples of bulletins/notifications regarding new legislation that would affect content.
Sample reports from legal issues group noting complaints, issues or concerns for existing and future web development.
Example of a legal issues tracking document for web pages/sites showing the progression of the page(s) through the review/approval cycle.
Base Practices
Process Area: Capacity Modeling & Planning
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 2
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area | 4.6 Capacity Modeling & Planning
Questions
Base Practice: 4.6.1 Define Overall Capacity Modeling & Planning Requirements
Has a base level model of the system's capacity been created and verified based on information from vendors, independent tests, etc.? Are service measures used as comparisons? If yes, what are they? If no, explain.
Explain your standard capacity planning process/policy, including CPU, memory, I/O and router usage and needs. All existing or future mainframe and server processors, storage, network configurations, and peripheral requirements should be addressed.
Are the capacity requirements coordinated across the distributed system based on SLAs/OLAs? If yes, explain. Are there outstanding SLA/OLA issues to be resolved? If yes, explain.
Are alarms activated when an SLA/OLA is not met? If yes, how and to whose attention? If no, explain.
Are workload balancing forecasts/plans in place? If yes, do they consider key transactions that have been collected and verified? Explain.
What are the existing and future applications/data requirements that drive the capacity plan?
What are the functional requirements/data that drive the capacity plan?
Is there a policy in place to ensure the capacity plan is updated regularly (semi-annually/annually/bi-annually) or only when changes/deviations are encountered? Please describe the policy.
Are possible future threats/changes to service levels noted in the capacity plan?
What is the plan of action for identified threats?
Base Practice: 4.6.2 Collect All Capacity Information (Based on Business Requirements)
What are the business drivers that affect the capacity model?
What are the verified capacity plan requirements for the networks/distributed system? (e.g. financial, physical, operational, software, vendor, applications, constraints/limits.)
Is the current system/version reviewed on a scheduled, documented basis to see how well it is being utilized? How often?
Is performance/cost benefit analysis performed and tracked for each configuration? If yes, who does this and how often?
What tools have been used to measure the system's capacity?
What reports are produced regarding capacity planning? Who receives these reports? Are the accuracy of assumptions, forecasts and results tracked?
Base Practice: 4.6.3 Determine Ongoing Support Requirements
What projections have been created and reviewed that address ongoing support requirements for operations, personnel and functions?
2. Has the impact of planned business growth been evaluated with regards to support? If so, how?
3. Has the impact of planned future locations been evaluated with regards to support? If so, how?
Base Practice: 4.6.4 Build and Test Model
1. How is the base model calibrated prior to adding forecast parameters? (e.g. verify model parameters, account for discrepancies, verify accuracy of base model, etc.)
2. What forecast parameters/assumptions were added to the base model?
3. How are capacity shortfalls identified?
4. What model solutions address capacity shortfalls?
5. Have assumptions and strategies been documented?
Base Practice: 4.6.5 Deploy Model, and Adjust as Appropriate
1. How often are reports disseminated to appropriate parties (e.g. weekly, monthly, etc.)? Is feedback received on utilization, capacity and performance?
Do management, development, and customers receive status reports that compare actual to planned utilization for review/discussion?
Does management review, revise and approve capacity plans? If no, explain.
What is the course of action/process regarding the capacity plan if major changes to the system or business occur? Are other groups/process informed (e.g. release management, SLA, procurement, security, etc.)? Explain.
Generic Questions for Process Area
1. Are training sessions held for personnel on a scheduled basis regarding the capacity planning process and its defined tasks? If so, what type of training is provided to personnel to ensure adequate/competent execution of the capacity plan?
2. Is there written documentation that covers the established capacity plan procedures for personnel?
3. How often is the capacity process reviewed for continuous improvement purposes? How often are improvements implemented and by whom?
4. When continuous improvement strategies are executed, how is the improvement validated against business and performance goals (e.g. benchmarks, basic measurements, etc.)?
Process Capability Assessment Instrument:
Process Area 4.6 Capacity Modeling & Planning
Process Area Description: Capacity Planning attempts to ensure that adequate resources will be in place to meet SLA requirements. Resources include physical facilities, computers, memory, disk space, communications equipment, and personnel. Capacity Planning must be done for the system as a whole so that the planners can understand how the capacity of one portion of the system affects the capacity of another. Due to the large number of components typically found within a system, the interdependencies between business functions and resource components must be clearly defined.
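To make the forecasting idea behind this process area concrete, the sketch below applies a growth assumption to current component utilization and flags projected capacity shortfalls. It is illustrative only; the component names, growth rate, and headroom threshold are assumptions, not values from the original text.

```python
# Toy capacity model: project utilization forward and flag components expected
# to exceed a headroom threshold. All inputs here are assumed example values.
def forecast_shortfalls(components, growth_rate, periods, headroom=0.80):
    """components: list of {'name': str, 'utilization': float in 0..1} (assumed fields)."""
    shortfalls = []
    for c in components:
        projected = c["utilization"] * (1.0 + growth_rate) ** periods
        if projected > headroom:
            shortfalls.append((c["name"], round(projected, 2)))
    return shortfalls

components = [
    {"name": "mail-server CPU", "utilization": 0.55},
    {"name": "SAN storage",     "utilization": 0.70},
]
print(forecast_shortfalls(components, growth_rate=0.05, periods=6))
# [('SAN storage', 0.94)] -- a projected shortfall to address in the capacity plan
```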
Questionnaire
Process Area | 4.6 Capacity Modeling and Planning
Work Product list
Process Area | 4.6 Capacity Modeling and Planning
Example of an Existing Capacity Plan/Reports
List of SLA/OLA requirements
List of resources referenced in the Capacity Plan (e.g. physical facilities, computers, memory, disk space, communication equipment and personnel)
PA Number 4.7
Base Practices
BP Number 4.7.1
BP Name Determine what disaster recovery requirements are based on SLAs
BP Description Service Level Agreements may require compliance with system recovery timetables for various operations, servers, departments, etc.
Example The service desk may require a 4 hour recovery from network loss within an organization, whereas a mail systems loss may require an 8 hour recovery.
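A minimal sketch of how recovery-time results (for example, from a recovery test) might be checked against SLA targets such as those in the example above. The service names and figures are illustrative assumptions only.

```python
# SLA recovery targets in hours; the entries echo the example above but are assumptions.
sla_recovery_hours = {"service desk network": 4, "mail systems": 8}

def sla_breaches(observed_recovery_hours):
    """Return services whose observed recovery time exceeded the SLA target."""
    return {
        service: (observed, sla_recovery_hours[service])
        for service, observed in observed_recovery_hours.items()
        if observed > sla_recovery_hours.get(service, float("inf"))
    }

print(sla_breaches({"service desk network": 5.5, "mail systems": 6.0}))
# {'service desk network': (5.5, 4)} -- recovery exceeded the 4 hour target
```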
BP Number 4.7.2
BP Name Perform business and system risk assessment
BP Description This process identifies business and system risks for insurance assessment. It also addresses the cost-benefit of each recovery plan to ensure effective gain compared with the monetary commitment.
Example Insurance needs to address accident, malicious intent and component failure when seeking and obtaining coverage. A cost-benefit analysis also needs to be done: a hot site with hot standby may be too expensive for the service provided, whereas a hot site with cold standby may give more efficient service for the dollar.
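The cost-benefit comparison in the example above can be sketched as follows. This is a hypothetical illustration only; the option names, probabilities, costs, and loss rates are assumptions introduced here, not figures from the original text.

```python
# Compare recovery options by annual cost plus expected outage loss (all values assumed).
def expected_annual_loss(outage_probability, hours_down, loss_per_hour):
    return outage_probability * hours_down * loss_per_hour

options = {
    # option name: (annual cost of the option, expected recovery hours if invoked)
    "hot site, hot standby":  (500_000, 1),
    "hot site, cold standby": (200_000, 8),
    "no standby":             (0, 72),
}

outage_probability, loss_per_hour = 0.10, 50_000  # assumed risk-assessment inputs
for name, (annual_cost, hours_down) in options.items():
    total = annual_cost + expected_annual_loss(outage_probability, hours_down, loss_per_hour)
    print(f"{name}: expected annual cost {total:,.0f}")
# With these assumptions 'hot site, cold standby' has the lowest combined cost,
# matching the trade-off described in the example.
```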
BP Number 4.7.3
BP Name Determine recovery implementation plan
BP Description The plan needs to include all recovery procedures for each site. Personnel, tasks, equipment and timetables need to be included. Different scenarios also need to be accounted for, and appropriate procedures should be incorporated for these.
Example The recovery plan from a site loss will be different than a major data loss. Personnel, time, hardware and backup requirements will be different and reflected within the plan.
BP Number 4.7.4
BP Name Review recovery plan with management
BP Description This base practice allows for management to review resource capabilities, issues, and progress for each site. SLAs and cost-benefit analysis will also be examined.
Example Management's review of the recovery plans will show the readiness of the Service Provider's Operations and the clear procedures needed to minimize any loss. Management review will also allow for cohesiveness so that the process area goals are achieved.
BP Number 4.7.5
BP Name Plan disaster recovery testing procedures
References
Process Area: Business/Disaster Recovery Planning & Management
Level 1
Assessment Indicators: Process Performance
Generic Practice: Ensure that Base practices are performed
Level 3 Assessment Indicators
Level 4 Assessment Indicators
Level 5 Assessment Indicators
Process Capability Assessment Instrument: Interview Guide
Process Area 4.7 Business/Disaster Recovery Planning & Management
Questions
Base Practice: 4.7.1 Determine what disaster recovery requirements are based on SLAs
Are business/disaster recovery plans based on SLAs or documented business requirements? If yes, how are these communicated to the group and how often?
What SLA requirements are difficult to address or have not been addressed thus far? Are these issues being examined for possible solutions? If yes, by whom?
Do SLA requirements note speed of recovery and capacity? Are they prioritized? If no, explain.
Base Practice: 4.7.2 Perform business and system risk assessment
Are business and system risk assessments done? If yes, by whom and how often? Is potential revenue loss considered during system failure or loss?
Is cost-benefit analysis performed when additions or changes are made to the recovery plan? Is this based on servers, applications, SLAs? Explain.
Are business goals developed during the risk assessment? If yes, what are they?
Has it been determined what critical data should be moved off site when performing the risk assessment? If yes, how is this determined?
Are business risk assessments performed considering security management, political instability and malicious intent? If yes, by whom and how?
Base Practice: 4.7.3 Determine recovery implementation plan
Is there a formal policy regarding the recovery plan at all sites? If yes, is it followed? Is it accessible to all recovery personnel? If no, explain. If yes, is it in multiple locations? Which sites? Is revision control maintained?
Are teams established within the plan for notification and at a predetermined location in case of a disaster declaration? If yes, explain.
Are metrics collected regarding the recovery plan? If yes, how often and what are they? Are they collected automatically or manually?
Are lists maintained showing hardware and supplies needed during a disaster? If yes, where is this list? Are copies maintained for each site and at a remote location for safeguard? Who is aware of these lists?
Does the plan examine the recovery of dependent or independent applications? If yes, which ones? Has a cost analysis been performed on the loss of each application?
Are any recovery procedures performed by hot/cold sites? If yes, do they have back-ups, procedures and schedules? If yes, how are these maintained/updated?
How often is the plan reviewed? Do other process area personnel (e.g. Backup/Restore/Archive, Fault Management, Monitoring) review the plan? If yes, explain the process and describe who participates in the review.
Base Practice: 4.7.4 Review recovery plan with management
Does the management team review business/disaster recovery plans? If yes, how often? Is the management team static or dynamic?
Does the plan call for the management team to resolve resource conflicts? If yes, is a procedure noted for each site?
Base Practice: 4.7.5 Plan disaster recovery testing procedures
Are tests performed on the business/disaster recovery procedures/tasks at each site? If yes, how often?
Explain what procedures pose the most concern (e.g. business or disaster) during the testing phase. Have modifications been implemented to improve the process? If yes, what has been the outcome?
Are other departments brought into the testing environment for an end-to-end run through (e.g. Fault Management, Back-up/Restore/ Archive, Monitoring, Physical Site Management, etc.)? If yes, which ones and how? Are other process areas tied with business/disaster recovery systems for automatic notification or metrics collection? If yes, explain.
Base Practice: 4.7.6 Produce and disseminate report on disaster recovery
Are reports produced and disseminated regarding the business/disaster recovery plan? If yes, to whom and how often? If no, explain.
What are the contents of the reports that are disseminated?
Do reports include the latest testing results? Metrics? If yes, which ones?
Base Practice: 4.7.7 Receive feedback on disaster recovery strategy
Is feedback sought and collected regarding the business/disaster recovery plan? If yes, by whom and how?
Is the feedback used for continuous improvement reasons? If yes, has this proven to be beneficial? If no, how could the feedback process be changed to provide benefit?
Generic Questions for Process Area
Is training provided to new business/disaster recovery personnel? If yes, in what format (e.g. on the job, formal training, computer based training, etc.)?
Are adequate resources (e.g. personnel, equipment, software, etc.) provided to perform the necessary recovery procedures?
Process Capability Assessment Instrument
Process Area 4.7 Business/Disaster Recovery Planning & Management
Process Area Description: Determines what the requirements are for disaster recovery based upon agreed-upon SLAs, strategies and plans to restore a business or service after it has been interrupted or failed. This planning process develops the strategy for recovering a system or a portion of the system. The contingency plans must consider failure of both centralized and remote components and strategies for the recovery of these systems.
Questionnaire
Process Area | 4.7 Business/Disaster Recovery Planning & Management
Work Product list
Process Area 4.7 Business/Disaster Recovery Planning & Management
1. Example of an existing business/disaster recovery procedure for each of the sites (on site copy and off site copy should be the same).
2. Example of a business/disaster recovery plan report.
3. List of SLAs prioritized for business/disaster recovery management.
4. Schedule of Back-up/Restore/ Archive tasks for each site.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

What is claimed is:
1. A method for determining capability levels of a user administration process area when gauging a maturity of an operations organization comprising the steps of:
(a) defining a plurality of process attributes;
(b) determining a plurality of generic practices for each of the process attributes, the generic practices including base practices selected from the group consisting of receiving information from a human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and notifying parties periodically of a user administration status; and
(c) calculating a maturity of an operations organization based at least in part on the achievement of the generic practices.
2. The method as set forth in claim 1, and further comprising the steps of: defining a plurality of capability levels in terms of groups of the process attributes, rating each of the process attributes based on achievement of the corresponding generic practices, and determining which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels, wherein the maturity of the operations organization is calculated based on the capability level that is achieved.
3. The method as set forth in claim 2, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
4. The method as set forth in claim 1, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
5. The method as set forth in claim 2, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
6. The method as set forth in claim 1, wherein the generic practices are further selected from the group consisting of establishing and maintaining a policy for performing operational tasks, allocating resources to meet expectations, ensuring personnel receive the appropriate type and amount of training, collecting data to measure performance, maintaining communication among team members, ensuring work products satisfy documented requirements, employing version control to manage changes to work products.
7. The method as set forth in claim 1, wherein the base practices include receiving information from a human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and notifying parties periodically of a user administration status.
8. A computer program embodied on a computer readable medium for determining capability levels of a user administration process area when gauging a maturity of an operations organization comprising:
(a) a code segment that defines a plurality of process attributes;
(b) a code segment that determines a plurality of generic practices for each of the process attributes, the generic practices including base practices selected from the group consisting of receiving information from a human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and notifying parties periodically of a user administration status; and
(c) a code segment that calculates a maturity of an operations organization based at least in part on the achievement of the generic practices.
9. The computer program as set forth in claim 8, and further comprising a code segment for defining a plurality of capability levels in terms of groups of the process attributes, rating each of the process attributes based on achievement of the corresponding generic practices, and determining which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels, wherein the maturity of the operations organization is calculated based on the capability level that is achieved.
10. The computer program as set forth in claim 9, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
11. The computer program as set forth in claim 8, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
12. The computer program as set forth in claim 9, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
13. The computer program as set forth in claim 8, wherein the generic practices are further selected from the group consisting of establishing and maintaining a policy for performing operational tasks, allocating resources to meet expectations, ensuring personnel receive the appropriate type and amount of training, collecting data to measure performance, maintaining communication among team members, ensuring work products satisfy documented requirements, employing version control to manage changes to work products.
14. The computer program as set forth in claim 8, wherein the base practices include receiving information from a human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and notifying parties periodically of a user administration status.
15. A system for determining capability levels of a user administration process area when gauging a maturity of an operations organization comprising:
(a) logic that defines a plurality of process attributes;
(b) logic that determines a plurality of generic practices for each of the process attributes, the generic practices including base practices selected from the group consisting of receiving information from a human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and notifying parties periodically of a user administration status; and
(c) logic that calculates a maturity of an operations organization based at least in part on the achievement of the generic practices.
16. The system as set forth in claim 15, and further comprising logic for defining a plurality of capability levels in terms of groups of the process attributes, rating each of the process attributes based on achievement of the corresponding generic practices, and determining which of the capability levels is achieved by a process area based on the rating of the process attributes of the capability levels, wherein the maturity of the operations organization is calculated based on the capability level that is achieved.
17. The system as set forth in claim 16, wherein each capability level is defined by the process attributes of a lower capability level and is further defined by at least one more process attribute.
18. The system as set forth in claim 15, wherein the process attributes include process attributes selected from the group of process attributes consisting of process performance, performance management, work product management, process definition, process resource, process measurement, process control, continuous improvement, and process change.
19. The system as set forth in claim 16, wherein the capability levels include capability levels selected from the group of capability levels consisting of performed informally, planned and tracked, well defined, quantitatively controlled, and continuously improving.
20. The system as set forth in claim 15, wherein the base practices include receiving information from a human resources regarding employee events, adding users to a plurality of systems, changing user information on each of the systems, deleting user information on each of the systems, and notifying parties periodically of a user administration status.
PCT/US2000/020238 1999-07-26 2000-07-26 A system, method and computer program for determining capability level of processes to evaluate operational maturity in an administration process area WO2001008035A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU62372/00A AU6237200A (en) 1999-07-26 2000-07-26 A system, method and article of manufacture for operational maturity process assessment via capability level determination in a user administration process area

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US36092899A 1999-07-26 1999-07-26
US09/360,928 1999-07-26

Publications (2)

Publication Number Publication Date
WO2001008035A2 true WO2001008035A2 (en) 2001-02-01
WO2001008035A3 WO2001008035A3 (en) 2002-07-11

Family

ID=23419965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/020238 WO2001008035A2 (en) 1999-07-26 2000-07-26 A system, method and computer program for determining capability level of processes to evaluate operational maturity in an administration process area

Country Status (2)

Country Link
AU (1) AU6237200A (en)
WO (1) WO2001008035A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283713A (en) * 2021-05-08 2021-08-20 上海华兴数字科技有限公司 Method and system for analyzing operation and control behaviors of engineering machinery manipulator
US11605144B1 (en) * 2012-11-29 2023-03-14 Priority 5 Holdings, Inc. System and methods for planning and optimizing the recovery of critical infrastructure/key resources

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5630069A (en) * 1993-01-15 1997-05-13 Action Technologies, Inc. Method and apparatus for creating workflow maps of business processes

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5630069A (en) * 1993-01-15 1997-05-13 Action Technologies, Inc. Method and apparatus for creating workflow maps of business processes

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MCGARRY F ET AL: "Measuring the impacts individual process maturity attributes have on software products" PROCEEDINGS FIFTH INTERNATIONAL SOFTWARE METRICS SYMPOSIUM. METRICS (CAT. NO.98TB100262), PROCEEDINGS FIFTH INTERNATIONAL SOFTWARE METRICS SYMPOSIUM. METRICS 1998, BETHESDA, MD, USA, 20-21 NOV. 1998, pages 52-60, XP002185627 1998, Los Alamitos, CA, USA, IEEE Comput. Soc, USA ISBN: 0-8186-9201-4 *
NIESSINK F ET AL: "Towards mature measurement programs" PROCEEDINGS OF THE SECOND EUROMICRO CONFERENCE ON SOFTWARE MAINTENANCE AND REENGINEERING (CAT. NO.98EX143), PROCEEDINGS OF THE SECOND EUROMICRO CONFERENCE ON SOFTWARE MAINTENANCE AND REENGINEERING, FLORENCE, ITALY, 8-11 MARCH 1998, pages 82-88, XP002185625 1998, Los Alamitos, CA, USA, IEEE Comput. Soc, USA ISBN: 0-8186-8421-6 *
ROJAS T ET AL: "The capabilities and maturity model (CMM): a case study" 1997 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS. COMPUTATIONAL CYBERNETICS AND SIMULATION (CAT. NO.97CH36088-5), 1997 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS. COMPUTATIONAL CYBERNETICS AND SIMULATION, ORLAND, pages 1285-1290 vol.2, XP002185624 1997, New York, NY, USA, IEEE, USA ISBN: 0-7803-4053-1 *
VARKOI T K ET AL: "Case study of CMM and SPICE comparison in software process assessment" IEMC '98 PROCEEDINGS. INTERNATIONAL CONFERENCE ON ENGINEERING AND TECHNOLOGY MANAGEMENT. PIONEERING NEW TECHNOLOGIES: MANAGEMENT ISSUES AND CHALLENGES IN THE THIRD MILLENNIUM (CAT. NO.98CH36266), IEMC '98 PROCEEDINGS. INTERNATIONAL CONFERENCE ON ENGI, pages 477-482, XP002185626 1998, New York, NY, USA, IEEE, USA ISBN: 0-7803-5082-0 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11605144B1 (en) * 2012-11-29 2023-03-14 Priority 5 Holdings, Inc. System and methods for planning and optimizing the recovery of critical infrastructure/key resources
CN113283713A (en) * 2021-05-08 2021-08-20 上海华兴数字科技有限公司 Method and system for analyzing operation and control behaviors of engineering machinery manipulator

Also Published As

Publication number Publication date
WO2001008035A3 (en) 2002-07-11
AU6237200A (en) 2001-02-13

Similar Documents

Publication Publication Date Title
US6738736B1 (en) Method and estimator for providing capacacity modeling and planning
US7810067B2 (en) Development processes representation and management
US20060161444A1 (en) Methods for standards management
US20060161879A1 (en) Methods for managing standards
US8140367B2 (en) Open marketplace for distributed service arbitrage with integrated risk management
JP5694200B2 (en) Method and system for workflow integration
US8200527B1 (en) Method for prioritizing and presenting recommendations regarding organizaion&#39;s customer care capabilities
Ng et al. Maintaining ERP packaged software–a revelatory case study
US20150356477A1 (en) Method and system for technology risk and control
WO2001025877A2 (en) Organization of information technology functions
Niessink et al. The IT service capability maturity model
US20070073572A1 (en) Data collection and distribution system
US20030055697A1 (en) Systems and methods to facilitate migration of a process via a process migration template
US20080091676A1 (en) System and method of automatic data search to determine compliance with an international standard
US10460265B2 (en) Global IT transformation
WO2007030633A2 (en) Method and system for remotely monitoring and managing computer networks
WO2001008035A2 (en) A system, method and computer program for determining capability level of processes to evaluate operational maturity in an administration process area
Spencer et al. Technology best practices
WO2001008004A2 (en) A system, method and article of manufacture for determining capability levels of a monitoring process area for process assessment purposes in an operational maturity investigation
WO2001008038A2 (en) A system, method and computer program for determining operationalmaturity of an organization
WO2001008074A2 (en) A system, method and article of manufacture for determining capability levels of a release management process area for process assessment purposes in an operational maturity investigation
WO2001008037A2 (en) A system, method and computer program for determining capability levels of processes to evaluate operational maturity of an organization
Bose et al. Interpreting SLA and related nomenclature in terms of Cloud Computing: a layered approach to understanding service level agreements in the context of cloud computing
Spasic et al. Information and Communication Technology Unit Service Management in a Non-Profit Organization Using ITIL Standards.
De Jong ITIL® V3 Foundation Exam-The Study Guide

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP