WO2004006092A2 - Method, system, and apparatus for automating the creation of customer-centric interface - Google Patents

Method, system, and apparatus for automating the creation of customer-centric interface

Info

Publication number
WO2004006092A2
Authority
WO
WIPO (PCT)
Prior art keywords
customer
task
tasks
user
rules
Prior art date
Application number
PCT/US2003/019835
Other languages
French (fr)
Other versions
WO2004006092A8 (en)
Inventor
Robert R. Bushey
Gregory W. Edwards
Kurt M. Joseph
Benjamin A. Knott
John M. Martin
Scott H. Mills
Theodore B. Pasquale
Original Assignee
SBC Properties, L.P.
SBC Technology Resources, Inc.
Priority date
Filing date
Publication date
Application filed by SBC Properties, L.P. and SBC Technology Resources, Inc.
Priority to AU2003253680A1
Publication of WO2004006092A2
Publication of WO2004006092A8

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/08 - Speech classification or search
    • G10L 15/18 - Speech classification or search using natural language modelling
    • G10L 15/1822 - Parsing for meaning understanding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 - Arrangements for software engineering
    • G06F 8/20 - Software design
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/10 - Office automation; Time management
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 15/00 - Arrangements for metering, time-control or time indication; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP

Definitions

  • the present invention relates generally to interface designs, and more specifically relates to a system and method for implementing customer-centric interfaces.
  • In order to maintain a high level of customer satisfaction, an interactive voice response (IVR) system must be designed so that customers can easily navigate the various menus and accomplish their tasks without spending too much time on the telephone and becoming frustrated and unsatisfied with the company and its customer service. Therefore, companies must design and continually test, update, and improve IVR systems, including the IVR menus, so that the systems function efficiently and customers remain satisfied with the level of customer service.

BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGURE 1 illustrates a block diagram showing a system incorporating teachings of the present invention
  • FIGURE 2 depicts a flow diagram of a method for automating the creation of a customer-centric interface
  • FIGURE 3 depicts an example task frequency table
  • FIGURE 4 illustrates a block flow diagram of various components of the system for the automated creation of a customer-centric interface
  • FIGURE 5 illustrates a flow diagram of a method for creating customer-centric menu prompts
  • FIGURE 8 illustrates an example graphical user interface for the analysis of performance data within a customer-centric interface
  • FIGURE 9 depicts an example log file including performance data
  • the customer listens to one or more prerecorded menus or prompts and provides responses using touch-tone input and/or speech input in order to accomplish their task. Therefore, the content and structure of the IVR, including the prerecorded menus or prompts, need to allow customers to easily and quickly accomplish their tasks with little frustration.
  • the IVR interface is tested to ensure functionality and that it is error free.
  • the inclusion of customers in the design process occurs late in the development phase, if at all, through usability testing. But much of the customer input gathered in the usability testing will not be implemented into the IVR interface because of the costs involved with making changes late in the development phase. Only significant errors discovered through the usability testing are generally corrected.
  • the result is an IVR interface having a business-centric organization and structure, where the menu options and prompts are structured according to the organization of the company and are worded using company terminology.
  • a single persona may be assigned to the IVR system.
  • users of automated systems tend to view automated systems more favorably when the persona or personality of the system matches the user's personality.
  • introverts and extroverts tend to be more satisfied with, likely to trust and likely to make a purchase from an automated system possessing voice characteristics similar to their own.
  • the usability of IVR systems is tested and improved by conducting laboratory studies or tests where test participants are asked to accomplish sets of tasks using the IVR system.
  • An example of a task to accomplish may be, "Call Telephone
  • the participants use telephones to interact with an IVR simulation application which is presented by a laboratory computer.
  • the simulated IVR application plays prerecorded announcements or prompts to the participants in the form of a series of menus and records information regarding the participants' responses such as the menu name, the amount of time the prerecorded prompt played before the participant made a selection or pressed a key, and the key that the participant pressed.
  • the recorded information regarding the participants' responses or the performance data is compiled into a log file with information regarding each task test stored as an individual performance data set.
  • the company may score participants' call routing performance based on two factors: accomplishment of the task and the time spent in the IVR simulation application attempting to accomplish the task.
  • Analysis of the log file and the performance data is typically a manual process in which one or more persons examine each performance data set, noting the task, determining whether the participant accomplished the task, and calculating the time spent listening to the menus or prompts, and then manually create an output file containing the findings of the IVR simulation.
  • the manual analysis of the performance data is a very time consuming, labor intensive, and resource intensive process.
  • the manual analysis of the performance data is also subject to human error, such as math errors in calculating time spent in the menus and omissions of particular data points.
  • many companies often track statements made by customers when the customers contact the company with problems or questions about a product or service or to alter a product or service.
  • When a customer calls a service number and speaks to a customer service representative (CSR), the customer typically tells the CSR the purpose of the call in the first substantive statement the customer makes.
  • a customer may contact a company via the company web site or email and generally the first substantive statement made in the email or web site response includes the customer's purpose for contacting the company.
  • These initial statements containing the purpose of the customer's call are often referred to as opening statements.
  • opening statements can be used by companies to better design IVR systems, web sites, and any other customer interfaces between a company and the customers and allow for a more customer-centric interface design.
  • One effective way to design an IVR system or a web site interface is to analyze the scripts of incoming calls or emails to a customer support center or call center to locate the opening statements and identify the purpose of each call or email by classifying or categorizing each opening statement. Once categorized, a frequency report can be created that details how often customers are calling with specific problems or questions about specific products or services. For example, a telephone company may want to know how many customers are calling or emailing about a problem with their bill or to add a new product to their telephone service.
  • an IVR system can be designed that incorporates the frequencies so that customers calling with common problems, complaints, or questions can be serviced quickly and efficiently. For example, a company would be able to determine what percentage of the 5,000 service calls received in one month were about particular topics and to rank the reasons why customers called or emailed customer support. In order to maximize the utilization of the statements given by the customers in a customer-centric interface design, a company therefore needs to track and categorize the statements. Typically, companies have manually tracked and manually categorized opening statements. The company manually tracks each call and manually records and transcribes each opening statement spoken to a CSR or received via email and then creates a list of opening statements.
  • An employee of the company reads the long list of opening statements with a list of categories in front of him or her and assigns a category label to each opening statement. This is a very time-consuming and costly process: one or more people manually examining every opening statement and deciding how to categorize it against multiple category labels requires a large amount of employee time, which is expensive and would be better spent on revenue generating tasks.
  • the category labels used to manually categorize the opening statements are generally designed to be objective, but when applied by a person, the person's subjective thinking and opinions affect how the opening statements are categorized. For instance, an opening statement such as "I am calling about my bill for the charges for Call Waiting" may be categorized by one person as a billing inquiry and by another as a call waiting inquiry. Therefore, even though multiple people may use the same category labels to categorize the opening statements, they might categorize the same opening statement differently because the categorization is partly a matter of opinion.
  • This human opinion factor and subjectiveness creates an inconsistency in the categorization data and frequency reports that results in unreliable data and a customer interface design that is not optimized with respect to the opening statements and the way customers think.
  • the example embodiment described herein allows for the automated creation of a customer-centric interface.
  • the customer-centric interface is designed to best represent the customers' preferences and levels of knowledge and understanding. Additionally, the example embodiment allows for the inclusion of the customers in the design process from the beginning to ensure that the customer-centric interface is both usable and useful for the customers.
  • the customer-centric interface allows for the customers to quickly and easily navigate the various menus within the customer-centric interface to accomplish their tasks with high levels of customer satisfaction.
  • the customer-centric design also allows for increased call routing accuracy and a reduction in the number of misdirected calls. Therefore, companies save time and money because less time is spent dealing with misdirected calls and less resources are used by the customers since the customers spend less time within the customer-centric interface accomplishing their tasks.
  • the example embodiment described herein allows for the automated analysis of performance data to better enable the creation of a customer-centric interface. Additionally, the example embodiment allows for the consistent analysis of performance data free of human error. Time and money are saved because employees no longer manually examine the performance data to determine whether each task was accomplished or manually calculate the time required to accomplish each task. Therefore, employees' time may be better utilized in other revenue generating projects since less time is required to analyze the performance data. Furthermore, the analysis of the performance data is more reliable because the analysis is not subject to human error such as calculation errors, and different people are not interpreting the performance data in different manners.
  • FIGURE 1 generally illustrates one embodiment of a customer-centric interface solution incorporating teachings of the present invention and operable to provide automated or computer based customer service to callers using an interactive voice response (IVR) system.
  • system 10 preferably includes at least one IVR system 12.
  • IVR system 12 may include one or more traffic handling devices 14. Traffic handling devices may include, but are not limited to, such devices as routers, switches, hubs, bridges, content accelerators, or other similar devices.
  • one or more traffic handling devices 14 may be coupled between communications link 16 and computer system 18.
  • Computer system 18 may be a personal computer, a server, or any other appropriate computing device.
  • Communications technologies which may be used as communications link 16 include, but are not limited to, a PSTN (public switched telephone network), the Internet using voice over IP (Internet Protocol), such mobile technologies as satellite and PCS (personal communication service), as well as others.
  • For an IVR system 12 having a component or storage system 20 which is maintained separately from computer system 18, as depicted in FIGURE 1, one or more traffic handling devices 14 may be included and coupled between computer system 18 and such a storage system 20.
  • storage system 20 or portions thereof may be incorporated into computer system 18, according to teachings of the present invention.
  • Computer system 18 may be constructed according to a variety of configurations. Preferably, however, computer system 18 includes one or more processors or microprocessors 22.
  • Processors or microprocessors 22 may include such computer processing devices as those manufactured by Intel, Advanced Micro Devices, Motorola, Transmeta, as well as others.
  • Operably coupled to microprocessor(s) 22 are one or more memory devices 24.
  • Memory devices 24 may include, but are not limited to, such memory devices as SDRAM (synchronous dynamic random access memory), RDRAM (Rambus dynamic random access memory), FLASH memory, or other memory devices operable to function with the microprocessor(s) 22 of choice.
  • component systems interfaces 28 are also preferably included and coupled to microprocessor 22. According to teachings of the present invention, component systems interfaces 28 preferably couple one or more component systems to microprocessor(s) 22 such that microprocessor(s) 22 may access the functionality included therein. Examples of component systems include storage system 20, video displays, storage devices, scanners, CD-ROM (compact disc read-only memory) systems, input/output devices, etc.
  • Component systems interfaces 28 may include, for example, ISA (industry standard architecture) connections, PCI (peripheral component interconnect) connections, PCI-X (peripheral component interconnect-extended) connections, SCSI (small computer systems interface) connections, USB (universal serial bus) connections, FC-AL (fibre-channel arbitrated loop) connections, serial connections, parallel connections, Ethernet connections, IEEE 802.11b receivers/transmitters, Bluetooth receivers/transmitters, as well as others.
  • component systems interfaces 28 may be provided to couple one or more component systems internal to computer system 18, such as hard disc drive (HDD) devices, CD-ROM read/write devices, etc., to microprocessor(s) 22.
  • Computer system 18 further includes hard disk drive (HDD) 30 containing databases 32, 33, 34, 36, 38, and 40. Processor 22, memory 24, communications interface 26, component systems interface 28, and HDD 30 communicate and may work together via bus 42 to provide the desired functionality.
  • the various hardware and software components may also be referred to as processing resources.
  • Computer system 18 further includes display 44 for presenting graphical user interface (GUI) 46 and input and output devices such as a mouse and a keyboard.
  • Computer system 18 also includes rule engine 48, task engine 50, collection engine 52, customer language engine 54, performance engine 56, and customer structure engine 58, which reside in memory such as HDD 30 and are executable by processor 22 through bus 42.
  • HDD 30 may include more or fewer than six databases, and the databases may be remotely located in storage system 20.
  • one or more traffic handling devices 14 may be coupled between computer system 18 and storage system 20.
  • storage system 20 may be included within or internal to computer system 18.
  • storage system 20 or one or more components thereof may be directly coupled to the one or more component systems interfaces 28.
  • Component or storage system 20 may include a variety of computing devices and is preferably not limited to one or more types of storage device.
  • a plurality of storage devices, preferably storing one or more applications and databases for use in accordance with teachings of the present invention, may be provided.
  • component or storage system 20 may include one or more supplemental hard disc drive (HDD) devices 60, digital linear tape (DLT) libraries (not expressly shown), CD-ROM libraries and/or one or more storage area networks (SANs) 62.
  • one or more HDD devices 60 may be included in computer system 18 with one or more SANs 62 included in storage system 20.
  • a variety of applications 64 may be used to leverage the functionality or processing capability of computer system 18.
  • a plurality of applications 64 may be effectively included in storage system 20, on one or more HDD devices 60 and/or on one or more SANs 62.
  • one or more communications applications operable to establish a communication connection with one or more users via communication link 16 may be included in storage system 20.
  • one or more speech recognition or voice analysis applications are included on HDD devices 60 and/or SAN 62 for use as described below.
  • a variety of additional applications 64 may also be included on one or more of HDD devices 60 and/or SANs 62.
  • the library may include a first subset of prompts or scripted dialog designed to help novice callers, a second subset designed to help expert callers, a third subset designed to sound sympathetic and soothing and a fourth subset designed to be more abrupt.
  • These different subsets or styles may be produced by altering characteristics of the persona such as speaking rate, choice of formal or informal words, use of terse or verbose utterances, etc.
  • IVR system 12 may dynamically change from one style to another in response to detected changes in the speech characteristics of a caller.
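  • The excerpt does not specify how such a style change would be decided; purely as illustration, a minimal Python sketch, assuming a speech analysis application has already estimated the caller's speaking rate (the thresholds and style names below are hypothetical, not taken from the patent):

      # Hypothetical style selector keyed off an estimated speaking rate.
      def select_prompt_style(words_per_minute: float) -> str:
          if words_per_minute > 180:
              return "terse"      # fast speakers get abrupt, expert-style prompts
          if words_per_minute < 110:
              return "soothing"   # slow speakers get verbose, novice-style prompts
          return "standard"

      # Re-evaluating after each caller utterance would let IVR system 12
      # switch styles dynamically during a call.
      print(select_prompt_style(200))  # 'terse'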
  • FIGURE 2 depicts a flow diagram of a method for automating the creation of a customer-centric interface.
  • the method begins at step 70 and at step 72 collection engine 52 collects a plurality of customer opening statements.
  • a customer calls a service number and speaks to a CSR
  • the customer typically tells the CSR the purpose of the call in the first substantive statement the customer makes.
  • a customer may contact a company via the company web site or email and generally the first substantive statement made in the email or web site response includes the customer's purpose for contacting the company.
  • These initial statements containing the purpose of the customer's call are often referred to as customer opening statements.
  • Collection engine 52 collects the customer opening statements from customer service centers and stores the customer opening statements in customer opening statement database 32.
  • the tasks may include such tasks as "telephone line is not working,” “question about my bill,” “order a new service,” or any other appropriate reason for a customer to call seeking assistance regarding a product or service.
  • task engine 50 determines a task frequency of occurrence for each task at step 78.
  • the task frequency of occurrence allows system 10 to recognize which tasks customers are calling about the most and which tasks the customers are calling about the least.
  • Task engine 50 determines the task frequency of occurrence by examining and categorizing the customer opening statements. Each customer opening statement is examined to identify the purpose of the call and is then categorized as a particular task.
  • Task frequency table 120 is ordered in descending frequency order and is a statistically valid representation of the tasks that the customers inquire about when calling customer service centers. Because having a menu prompt for every single task results in numerous menu prompts making customer navigation of the customer-centric interface burdensome and slow, at step 80 task engine 50 determines which tasks are to be included in the customer-centric interface. In order to allow easy and quick navigation for the customers but at the same time not utilize too many company resources operating the customer-centric interface, only the most frequently occurring tasks are included within the customer-centric interface. Task engine 50 utilizes task frequency table 120 to determine which tasks are to be included in the customer-centric interface. In one embodiment, task engine 50 includes only the tasks that have a frequency of occurrence of 1% or higher.
  • Task frequency table 120 includes only the tasks having a frequency of occurrence of 1% or higher; its eighteen tasks account for 80.20% of the tasks represented in the customer opening statements.
  • task engine 50 includes tasks so that the total number of included tasks accounts for a specified percentage coverage of the tasks represented in the customer opening statements.
  • task engine 50 may include a specified number of tasks so that the total frequency of occurrence is a specific total percentage coverage value such as 85%, 90% or any other appropriate percentage of coverage.
  • Either embodiment typically allows for between fifteen and twenty tasks to be included in the customer-centric interface.
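  • As an illustration of the two inclusion criteria just described (a 1% per-task frequency floor, or a cumulative coverage target such as 85%), consider the following Python sketch; the task names and frequencies are invented for illustration, not figures from task frequency table 120:

      # Tasks as (name, frequency) pairs, frequencies being fractions of all
      # customer opening statements. Values are illustrative only.
      tasks = [("question about my bill", 0.30),
               ("telephone line is not working", 0.25),
               ("order a new service", 0.20),
               ("change billing address", 0.15),
               ("disconnect service", 0.06),
               ("other inquiries", 0.032),
               ("directory listing", 0.008)]
      tasks.sort(key=lambda t: t[1], reverse=True)  # descending frequency order

      # Criterion 1: include every task at or above a 1% frequency floor.
      by_floor = [t for t in tasks if t[1] >= 0.01]      # first six tasks

      # Criterion 2: include tasks until a total coverage target is reached.
      by_coverage, total = [], 0.0
      for task in tasks:
          if total >= 0.85:
              break
          by_coverage.append(task)
          total += task[1]                               # first four tasks, 90%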
  • the customer-centric interface does not include an opening customer-centric menu prompt listing all of the included tasks in frequency order. Such an opening menu prompt would take too long for the customers to listen to and would not allow for quick and easy navigation of the customer-centric interface. Therefore, the customer-centric interface is of a hierarchical design with the tasks grouped together by task relationships.
  • customer structure engine 58 elicits from one or more test customers each customer's perceptions as to how the included tasks relate to each other in order to create interface structure for the customer-centric interface.
  • Interface structure is how the tasks are placed within the customer-centric interface and organized and grouped within the customer-centric menu prompts. For instance, the interface structure of a web page refers to how the pages, objects, menu items, and information are organized relative to each other, while the interface structure for an IVR system refers to the sequence and grouping of the tasks within the customer-centric menu prompts.
  • Customer structure engine 58 uses tasks 127-161 from task frequency table 120 and performs customer exercises with the customers to elicit customer feedback regarding how the customers relate and group together tasks 127-161. For instance, customer structure engine 58 may require a group of test customers to group tasks 127-161 into one or more groups of related tasks. In addition, customer structure engine 58 may also require the test customers to make comparative judgments regarding the similarity of two or more of the tasks, where the test customers state how related or unrelated they believe the tasks to be. Furthermore, customer structure engine 58 may require the test customers to rate the relatedness of the tasks on a scale.
  • Customer structure engine 58 performs the customer exercises using a test IVR system, a web site, or any other appropriate testing means. In addition to eliciting task relationships, customer structure engine 58 also elicits from the test customers general names or headings that can be used to describe the groups of tasks in the customers' own language or terminology. Once customer structure engine 58 elicits from the test customers how the customers perceive tasks 127-161 to relate to each other, customer structure engine 58 aggregates the customer feedback and analyzes it to determine customer perceived task relationships. The customer perceived task relationships are how the customers perceive the tasks to be related. Customer structure engine 58 represents the customer perceived task relationships in a numerical data matrix of relatedness scores that collectively represents the customers' perceived relatedness of the included tasks.
  • customer structure engine 58 utilizes the customer perceived task relationships and the numerical data matrix and combines the included tasks into one or more groups of related tasks. For example, using the customer feedback from the customer exercises, customer structure engine 58 determines that the customers perceive tasks 133, 155, and 159 as related (group one); tasks 147, 149, and 157 as related (group two); tasks 127, 129, 131, 135, 139, 141, 143, 145, 153, and 161 as related (group three); and tasks 137 and 151 as related (group four). To aid in the grouping of the tasks and to better enable the company to understand the structure and grouping of the tasks, customer structure engine 58 represents the customer perceived task relationships and numerical data matrix in a graphical form. For instance, customer structure engine 58 may generate a flow chart or dendrogram illustrating a customer-centric call flow for the groups of tasks.
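  • The excerpt does not name the algorithm used to turn the numerical data matrix into groups; one plausible, simplified reading is single-linkage grouping over a relatedness threshold, sketched below with invented scores for a few of the task numbers mentioned above:

      # Naive single-linkage grouping over pairwise relatedness scores.
      # Scores are invented; a real system might use hierarchical clustering.
      relatedness = {(133, 155): 0.90, (133, 159): 0.80, (155, 159): 0.85,
                     (147, 149): 0.90, (133, 147): 0.10, (155, 147): 0.20}
      THRESHOLD = 0.5

      def related(a, b):
          return relatedness.get((a, b), relatedness.get((b, a), 0.0)) >= THRESHOLD

      def group_tasks(task_ids):
          groups = []
          for t in task_ids:
              for g in groups:
                  if any(related(t, member) for member in g):
                      g.append(t)   # join the first group containing a related task
                      break
              else:
                  groups.append([t])  # no related group found: start a new one
          return groups

      print(group_tasks([133, 155, 159, 147, 149]))  # [[133, 155, 159], [147, 149]]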
  • Task engine 50 orders the groups within the customer-centric interface in descending frequency order so that the tasks having the highest frequency of occurrence are heard first when the customers listen to the customer-centric menu prompts. Since 59.4% of the customers will be calling about a task in group three, task engine 50 orders group three first, followed by group one, group two, and group four.
  • In addition to ordering the groups of tasks, task engine 50 also orders the tasks within each group according to each task's frequency of occurrence, from the highest to the lowest. For instance, the tasks in group one are ordered as task 133, task 155, and task 159. The tasks in group two are ordered as task 147, task 149, and task 157. The tasks in group three are ordered as task 127, task 129, task 131, task 135, task 139, task 141, task 143, task 145, task 153, and task 161. The tasks in group four are ordered as task 137 and task 151. The grouping and ordering of the tasks make the high frequency tasks more accessible to the customers than the low frequency tasks by placing the tasks having higher frequencies of occurrence earlier in the customer-centric interface menu prompts, as the sketch below illustrates.
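  • A short sketch of this two-level ordering, with invented frequencies for a few of the tasks above:

      # Order tasks inside each group by frequency, then order the groups by
      # their total frequency, so high-frequency tasks are heard first.
      freq = {133: 0.05, 155: 0.03, 159: 0.01, 147: 0.20, 149: 0.08}
      groups = [[159, 133, 155], [149, 147]]

      for g in groups:
          g.sort(key=lambda t: freq[t], reverse=True)
      groups.sort(key=lambda g: sum(freq[t] for t in g), reverse=True)
      print(groups)  # [[147, 149], [133, 155, 159]]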
  • customer language engine 54, task engine 50, and customer structure engine 58 work together to create and order the customer-centric menu prompts for the customer-centric interface.
  • Task engine 50 and customer structure engine 58 do not take into account customer terminology when calculating task frequencies, grouping the tasks, and ordering the tasks. So once task engine 50 and customer structure engine 58 create the interface structure, including ordering the included tasks, customer language engine 54 creates customer-centric menu prompts using the customers' own terminology. Customer-centric menu prompts in the language of the customers allow the customers to more easily recognize what each menu prompt is asking and to accomplish their tasks quickly and with little frustration. In other embodiments, customer language engine 54 may create customer-centric menu prompts using action specific object words in addition to the customers' own terminology.
  • the customer-centric interface plays the prerecorded customer-centric menu prompts to the test customers and performance engine 56 records information regarding the test customers' responses such as the menu name for the menus accessed, the amount of time the prerecorded menu prompt played before the test customer made a selection or pressed a key, and the key that the test customer pressed.
  • performance engine 56 analyzes the results of the usability tests. With respect to the results, performance engine 56 focuses on three different usability test results: customer satisfaction, task accomplishment, and response times.
  • Customer satisfaction is whether or not the test customer was satisfied using the customer-centric interface.
  • Performance engine 56 gathers customer satisfaction data by asking the test customers a variety of questions regarding their experiences in interacting with the customer-centric interface, such as how satisfied the test customer was in accomplishing the assigned tasks, how confident the test customer was about being correctly routed, the level of agreement between the selected menu prompts and the test customers' assigned tasks, and whether the test customers would want to use the customer-centric interface again.
  • Performance engine 56 also determines a task accomplishment or call routing accuracy score. Task accomplishment measures whether a test customer successfully completes an assigned task and is based on a sequence of key presses necessary to navigate the customer-centric interface and accomplish the task.
  • Performance engine 56 determines if the test customers actually accomplished their assigned tasks. For example, if a test customer was assigned the task of using the customer-centric interface to inquire about a bill, performance engine 56 determines whether the test customer correctly navigated the customer-centric menu prompts in order to inquire about the bill. Performance engine 56 examines all the different menu prompts accessed by the test customers and compares the test customer key sequences with the correct key sequences in order to determine if the test customers accomplished the assigned tasks.
  • performance engine 56 calculates a response time or cumulative response time (CRT) for each customer-centric menu prompt accessed by the test customers.
  • the response time indicates the amount of time a test customer spends interacting with each customer-centric menu prompt and the customer-centric interface.
  • the response times reflect the amount of time the test customers listen to a menu prompt versus the amount of time it takes for the menu prompt to play in its entirety.
  • the amount of time the test customers spend listening to the menu prompt is not a very valuable number unless menu duration times are also taken into account.
  • a menu duration time is the amount of time it takes for a menu prompt to play in its entirety. For instance, a menu prompt may have five different options to choose from and the menu duration time is the amount of time it takes for the menu prompt to play through all five options.
  • Performance engine 56 records a listening time for each test customer for each menu prompt.
  • the listening time is the time the test customers actually spend listening to a menu prompt before making a selection.
  • Performance engine 56 also has access to the menu duration times for all of the customer-centric menu prompts in the customer-centric interface.
  • performance engine 56 may also calculate response times for entire tasks for each test customer by summing the menu duration times and the listening times for each menu prompt required to accomplish the task and subtracting the total menu duration time from the total listening time.
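  • The response-time arithmetic described above reduces to a subtraction per menu and a subtraction per task; a short sketch with invented menu names and times:

      # Menu duration times (seconds) for the prompts in the interface.
      menu_durations = {"BMainMenu": 30.0, "B20": 20.0, "B21": 25.0}

      # (menu, listening time) pairs recorded for one test customer on one task.
      visits = [("BMainMenu", 30.0), ("B20", 15.0), ("B21", 40.0)]

      # Per-menu response time: listening time minus menu duration time.
      per_menu = {m: listen - menu_durations[m] for m, listen in visits}
      # {'BMainMenu': 0.0, 'B20': -5.0, 'B21': 15.0}

      # Task-level response time: total listening time minus total duration time.
      task_response = (sum(listen for _, listen in visits)
                       - sum(menu_durations[m] for m, _ in visits))
      # 85.0 - 75.0 = 10.0 seconds: the customer used more time than required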
  • Once performance engine 56 has determined customer satisfaction, task accomplishment, and response times, performance engine 56 generates a performance matrix which charts customer satisfaction, task accomplishment, and response times for each test customer, each customer-centric menu prompt, and each task.
  • the performance matrix allows performance engine 56 to determine if any of the customer-centric menu prompts or tasks have unsatisfactory performance at step 94 by examining the combination of customer satisfaction, task accomplishment, and response times and thereby evaluating how well the customer-centric interface performs.
  • If a customer-centric menu prompt or task has unsatisfactory performance at step 94, then at step 96 performance engine 56 selects the menu prompt or task, at step 98 determines the reason for the unsatisfactory performance, and at step 100 modifies the customer-centric menu prompt or task to correct for the unsatisfactory performance.
  • a task may have a high level of customer satisfaction and high rate of task accomplishment but a positive response time. The test customers are accomplishing the task and are satisfied when interacting with the customer-centric interface but are spending too much time interacting with the customer-centric interface as indicated by the positive response time. The positive response time is not good for the customer-centric interface because the customers are using unnecessary resources from the customer-centric interface in the form of too much time in accomplishing the task.
  • By examining the menu prompts for the task, performance engine 56 determines that the terminology used in the menu prompts for the task is not the terminology used by the customers. Therefore, performance engine 56 alerts customer language engine 54 to the terminology problem and customer language engine 54 rewords the menu prompts for the task using the customers' own terminology.
  • performance engine 56 determines if there are additional menu prompts or tasks that have unsatisfactory performance at step 102. If at step 102 there are additional menu prompts or tasks having unsatisfactory performance, then at step 104 performance engine 56 selects the next menu prompt or task having unsatisfactory performance and returns to step 98.
  • Performance engine 56 repeats steps 98, 100, 102, and 104 until there are no additional menu prompts or tasks at step 102 having unsatisfactory performance. When there are no additional menu prompts or tasks having unsatisfactory performance at step 102, the process returns to step 90 and performance engine 56 tests the customer-centric interface having the modified menu prompts or tasks. Performance engine 56 repeats steps 90, 92, 94, 96, 98, 100, 102, and 104 until there are no customer-centric menu prompts or tasks having unsatisfactory performance at step 94.
  • system 10 monitors the customer-centric interface performance and modifies the customer-centric interface to allow for customer-centric menu prompts that are worded in the terminology of the customers, that directly match the tasks that the customers are trying to accomplish, and that are ordered and grouped by customer task frequencies and the customers' perceptions of task relationships.
  • FIGURE 4 illustrates a block flow diagram of how collection engine 52, customer language engine 54, task engine 50, customer structure engine 58, and performance engine 56 of system 10 interact and interoperate to automatically create the customer-centric interface.
  • FIGURE 4 also represents the various functions for collection engine 52, customer language engine 54, task engine 50, customer structure engine 58, and performance engine 56.
  • Collection engine 52 gathers customer intention information from the customer opening statements and includes customer task model 128 which includes the list of tasks for which the customers access and use the customer-centric interface.
  • Customer language engine 54, task engine 50, and customer structure engine 58 perform their various functions by processing and manipulating the customer intention information and task list.
  • Customer language engine 54 develops customer-centric menu prompts for the customer-centric interface using the customers' own terminology. Customer language engine 54 analyzes the customers' language by analyzing and tracking every word used by the customers in the customer opening statements to get a feel for how the customers refer to each of the tasks. Customer language engine 54 counts each word in each customer opening statement to determine which words the customers use the most and thereby recognize which of the customers' words are best to use in creating customer-centric menu prompts in the customers' own terminology, as sketched below.
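  • A minimal sketch of this word counting; the statements below are illustrative, whereas a real run would draw on customer opening statement database 32:

      # Count every word across the customer opening statements to surface
      # the customers' own terminology.
      from collections import Counter
      import re

      statements = [
          "I am calling about my bill for the charges for Call Waiting",
          "I want to inquire about my bill",
          "My telephone line is not working",
      ]

      counts = Counter()
      for s in statements:
          counts.update(re.findall(r"[a-z']+", s.lower()))

      print(counts.most_common(5))  # most-used customer words first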
  • customer language engine 54 may also create customer-centric menu prompts using action specific object words taken from the customer opening statements.
  • FIGURE 5 illustrates a flow diagram for creating customer-centric menu prompts utilizing action specific object words.
  • Customer wordings of tasks in customer opening statements are generally in four different styles: action-object ("I need to order CALLNOTES"); action ("I need to make changes"); object ("I don't understand my bill"); and general ("I have some questions").
  • Menu prompts are typically worded in one of four styles: action specific object ("To order CALLNOTES press one"); specific object ("For CALLNOTES press two"); general object ("To order a service press three"); and action general object ("For all other questions press four").
  • customer language engine 54 determines the action words and object words used by the customers.
  • customer language engine 54 analyzes the customer opening statements in customer opening statement database 32 in order to identify the action words and the object words used by the customers in their opening statements.
  • customer language engine 54 also determines which of the action words are specific action words and which of the object words are specific object words. For instance, "order" and "pay" are specific action words and "CALLNOTES" and "Call Waiting" are specific object words, while "service" and "question" are not specific object words.
  • customer language engine 54 saves the specific action words in specific action database 34 and the specific object words in specific object database 36.
  • customer language engine 54 identifies and maintains the relationships between the specific action words and the specific object words by linking the specific action words with the specific object words that were used together by the customers as shown by arrows 199 in FIGURE 5. For example, for the customer opening statements of "I want to buy CALLNOTES" and “I want to inquire about my bill,” "buy” and “inquire” are the specific action words and "CALLNOTES" and “bill” are the specific object words.
  • In addition to storing the specific action words and the specific object words in databases 34 and 36, customer language engine 54 also calculates a frequency of occurrence for each specific action word and each specific object word and stores the specific action words and the specific object words in databases 34 and 36 in descending frequency order. Therefore, the specific action words having the highest frequency of occurrence are stored at the top of specific action database 34 and the specific object words having the highest frequency of occurrence are stored at the top of specific object database 36.
  • customer language engine 54 generalizes the specific action words into general groups of specific action words and generalizes the specific object words into general groups of specific object words.
  • Customer language engine 54 examines the specific action words and the specific object words for commonalities and then groups them together based on those commonalities. For example, the specific action words "buy," "order," and "purchase" all share the commonality of acquiring something and may be grouped together. The specific object words "CALLNOTES" and "Call Waiting" share the commonality of being residential telephone services and therefore may be grouped together. Customer language engine 54 assigns names for each of the general groups of specific action words and specific object words and saves the general action words in general action database 38 and the general object words in general object database 40 at step 138.
  • Having specific action database 34, specific object database 36, general action database 38, and general object database 40 gives customer language engine 54 a great resource for locating customer terminology when creating customer-centric menu prompts.
  • customer language engine 54 uses words from general action database 38 and general object database 40.
  • customer language engine 54 uses words from specific action database 34 and specific object database 36. Because the specific action words and the specific object words are ordered by frequency in databases 34 and 36, customer language engine 54 can create action specific object menu prompts using the customer terminology most often used by the customers.
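  • The parsing mechanics are not disclosed in this excerpt; purely as a toy illustration, extraction and linking of specific action and object words could look like the following, with the small vocabularies standing in for databases 34 through 40:

      # Toy extraction of specific action/object words and their links.
      from collections import Counter

      SPECIFIC_ACTIONS = {"order", "buy", "purchase", "pay", "inquire"}
      SPECIFIC_OBJECTS = {"callnotes", "call waiting", "bill"}

      def action_object_links(statement):
          s = statement.lower()
          actions = [a for a in SPECIFIC_ACTIONS if a in s.split()]
          objects = [o for o in SPECIFIC_OBJECTS if o in s]
          return [(a, o) for a in actions for o in objects]

      links = Counter()
      for s in ["I want to buy CALLNOTES", "I want to inquire about my bill"]:
          links.update(action_object_links(s))
      # Counter({('buy', 'callnotes'): 1, ('inquire', 'bill'): 1})

      # Generalization step: map specific words into named general groups.
      GENERAL_ACTIONS = {"buy": "acquire", "order": "acquire", "purchase": "acquire"}
      GENERAL_OBJECTS = {"callnotes": "residential services",
                         "call waiting": "residential services"}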
  • FIGURE 7 depicts a flow diagram of a method for the automated categorization of statements.
  • the method begins at step 180 and at step 182 a user selects the statements to be categorized. Before system 10 can automatically categorize the statements, the user must have one or more statements to categorize and load the list of statements into system 10.
  • the statements may be opening statements as defined above, written statements from a training session, survey responses, search statements from a web site or pop-up window, statements evaluating a customer's experience and satisfaction in a test environment, or any other appropriate response to an open-ended question that can be analyzed using content text analysis.
  • New rules are desirable when there have been new products or services recently made available to the customers and the existing rules do not reflect these new products or services or when the statements are from a new domain not covered by the existing rules, such as survey responses where all the existing rules pertain to statements from customer service call centers.
  • the user utilizes rule engine 48 and rule creation screen 160 to create new rules and then edit the newly created rules. Creation of the rules involves the use of four include boxes 162, 163, 165, and 167 and two exclude boxes 169 and 171. In other embodiments, there may be more or fewer than four include boxes and more or fewer than two exclude boxes.
  • the user inputs, in include boxes 162, 163, 165, and 167, combinations of words and text strings that must be present in a statement for the statement to satisfy the rule, and, in exclude boxes 169 and 171, combinations of words and text strings that must not be present in the statement for the statement to satisfy the rule. Each rule is also associated with a particular category label which the user enters in category label box 164.
  • a user may want to create a new rule to categorize statements with respect to the late payment of customer bills. Therefore "late" may be entered in include box 162, "bill" may be entered in include box 163, "paid" may be entered in exclude box 169, and "labill" may be entered in category label box 164. This allows for a rule that finds statements that contain the words "late" and "bill" but do not contain the word "paid."
  • the user may group only newly created rules together in a group or group together newly created rules with existing rules when creating sets of rules.
  • the rules must be arranged in a rule order in accordance with a rule hierarchy enabling performance engine 56 to apply the rules in the correct order thereby preventing inconsistent results.
  • the rule hierarchy is from specific rules to general rules but can be any other appropriate way of ordering the rules. For a specific to general rule hierarchy, performance engine 56 applies the most specific rules first to a statement and then applies the more general rules if the statement does not satisfy any of the specific rules.
  • a rule specifying "telephone” needs to be above the rule specifying "phone” in the rule hierarchy so that the "telephone” rule is applied to a statement before the "phone” rule is applied to a statement. If the "phone” rule is applied before the "telephone” rule, then when performance engine 56 locates a statement containing the word "telephone,” performance engine 56 will find “phone” in “telephone” and categorize the statement with the "phone” category label instead of the "telephone” category label and the statement will be incorrectly categorized.
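  • As a sketch of how such a rule might be represented in software (the patent describes screens and boxes rather than code, so the class and field names below are hypothetical), note that matching is by substring, which is exactly why the "telephone" rule must precede the "phone" rule:

      # Hypothetical rule object mirroring the include/exclude boxes and the
      # category label box described above.
      from dataclasses import dataclass, field

      @dataclass
      class Rule:
          label: str                                    # category label box 164
          include: list = field(default_factory=list)   # include boxes 162-167
          exclude: list = field(default_factory=list)   # exclude boxes 169, 171

          def matches(self, statement: str) -> bool:
              s = statement.lower()
              return (all(w in s for w in self.include)       # substring match
                      and not any(w in s for w in self.exclude))

      # Rule hierarchy: specific before general, e.g. "telephone" before "phone".
      rules = [Rule("labill", include=["late", "bill"], exclude=["paid"]),
               Rule("telephone", include=["telephone"]),
               Rule("phone", include=["phone"])]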
  • rule engine 48 stores the newly created rules, sets of rules, and rule hierarchy in database 33 at step 192 so that users and performance engine 56 may later access the rules.
  • The user may edit an existing rule, such as rule 173, by selecting it in rule screen 170 and clicking edit rule button 156.
  • the rule then appears in rule creation screen 160 and the user may modify include boxes 162, 163, 165, and 167 and exclude boxes 169 and 171.
  • At step 198 the user selects run button 148 and performance engine 56 applies the selected rules to the list of statements in order to determine a category label for each statement.
  • Performance engine 56 cycles through the list of statements one statement at a time applying the rules to a statement until each statement satisfies a rule.
  • Performance engine 56 begins applying the rules to the list of statements at step 200 by applying the first rule in the rule hierarchy to the first statement in the list of statements. When performance engine 56 applies the rules to the statements, performance engine 56 strips the punctuation off the statements so that "bill,” and "bill" do not appear as two different text strings.
  • performance engine 56 determines if the statement satisfies the first rule. Performance engine 56 determines if a statement satisfies a rule by searching the statement for the presence of particular text string combinations or words and the exclusion of other text string combinations or words. For instance, rule 173 is the highest rule in the rule hierarchy shown in rule screen 170. Therefore, performance engine 56 searches the first statement to see if the text string "dsl" is present in the first statement. If "dsl" is not present in the first statement, then the first statement does not satisfy rule 173. If the statement does not satisfy the rule, then at step 204 performance engine 56 checks to see if there are additional rules in the set of rules to apply to the statement.
  • At step 206 performance engine 56 applies the next rule in the rule hierarchy to the statement and the process returns to step 202, where performance engine 56 determines if the statement satisfies this rule. Steps 202, 204, and 206 repeat until either the statement satisfies a rule at step 202 or the statement does not satisfy any of the rules at step 202 and there are no more rules to apply to the statement at step 204.
  • Performance engine 56 labels the statement as catch-all so that the statement may be examined at a later date to determine if the statement really does not satisfy any of the rules or if there is a malfunction of system 10 which resulted in the statement not satisfying any of the rules.
  • a high number of catch-all category labels may indicate that system 10, rule engine 48, or performance engine 56 are not operating correctly and require attention.
  • After performance engine 56 assigns a category label to the statement at either step 208 or step 210, at step 212 performance engine 56 checks to see if there are additional statements in the list of statements that require categorization. If there are additional statements to be categorized at step 212, then at step 214 performance engine 56 selects the next statement to be categorized, applies the first rule in the rule hierarchy to the statement, and then determines if the statement satisfies the rule at step 202. Performance engine 56 repeats steps 202-212 until performance engine 56 determines at step 212 that there are no additional statements to be categorized.
  • Performance engine 56 applies the first rule in rule screen 170, rule 173, to the statement. Performance engine 56 applies rule 173 by searching the statement "I cannot access my email account” for the text string "dsl.” Performance engine 56 determines that the statement does not contain the text string "dsl” and therefore the statement does not satisfy rule 173. Performance engine 56 then applies each rule below rule 173 to the statement one rule at a time until the statement satisfies a rule.
  • When performance engine 56 gets to rule 175 and applies it to the statement, performance engine 56 determines that the statement includes the text string "email" and does not include the text strings "bill" and "can't comm." Therefore, the statement satisfies rule 175 and performance engine 56 assigns the category label "email" to the statement.
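  • A minimal sketch of this categorization loop, reusing the hypothetical Rule class sketched earlier; the rule contents here echo the examples above, including the catch-all fallback:

      # Apply rules in hierarchy order; statements matching no rule get the
      # catch-all label so they can be reviewed later.
      import string

      def categorize(statement, rules):
          # strip punctuation so "bill," and "bill" compare as the same string
          cleaned = statement.translate(str.maketrans("", "", string.punctuation))
          for rule in rules:              # rules are already in hierarchy order
              if rule.matches(cleaned):
                  return rule.label
          return "catch-all"

      email_rule = Rule("email", include=["email"], exclude=["bill"])
      print(categorize("I cannot access my email account", [email_rule]))  # 'email'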
  • system 10 begins to automatically analyze the performance data contained in the log file.
  • system 10 selects the first performance data set in the log file to analyze.
  • a performance data set is the recorded data and information regarding one specific participant and one specific task for that participant. Generally in an IVR test, a participant is given four different tasks to accomplish such as "order DSL service" or "change your billing address.” For example, a performance data set would contain the recorded information for participant A and the task of ordering DSL service.
  • An example log file 250 including two example performance data sets 252 and 254 is shown in FIGURE 10.
  • a performance data set includes such information as the start time of the task, each menu accessed by the participant within the IVR, the time each menu was accessed, how long the participant listened to each menu, the key the participant pressed in response to the menu, and the total time the participant interacted with the IVR system.
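  • The exact layout of log file 250 is not reproduced in this excerpt; assuming a hypothetical one-line-per-menu-visit format of "menu name, listening time in seconds, key pressed", terminated by an END marker, a performance data set could be parsed like this:

      # Parse one performance data set from an iterator of log lines.
      # The line format here is an assumption, not the format of the figure.
      def parse_data_set(lines):
          visits = []
          for line in lines:
              if line.startswith("END"):
                  break
              menu, listen, key = line.strip().split(",")
              visits.append((menu, float(listen), key))
          return visits

      sample = ["BMainMenu,30,3", "B20,15,2", "B21,40,4", "END"]
      print(parse_data_set(sample))
      # [('BMainMenu', 30.0, '3'), ('B20', 15.0, '2'), ('B21', 40.0, '4')]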
  • task engine 50 determines a task code and task for the selected data set at step 316.
  • the performance data sets do not contain a participant number identifying the participant or the task. But the participant number is stored in database 33 in a log-in call record file.
  • system 10 stores in database 33 each participant's participant number and the tasks they are to accomplish. Participants are generally given more than one task to accomplish and the participants are to attempt the tasks in a pre-specified order and the log files reflect this specified order of tasks. For example, if each participant is given four tasks to accomplish, then the log file includes four performance data sets for each participant where the four performance data sets for each participant are grouped together in the same sequence as the participant attempted each task.
  • task engine 50 locates the participant number in database 33, determines what tasks the participant was supposed to accomplish and the order the tasks were to be accomplished, and determines which performance data sets correlate with which participants and tasks. After task engine 50 determines the correct task for the selected performance data set, at step 318 task engine 50 retrieves from database 33 the correct key sequence for the corresponding task for the selected performance data set.
  • Each task has a distinct correct key sequence so that, for example, the correct key sequence for "ordering DSL service" is different from the correct key sequence for "changing your billing address."
  • the correct key sequence is the keys pressed in response to the IVR menu prompts that allows the participant to navigate the IVR menus and successfully accomplish the assigned task.
  • the task of "ordering DSL service” requires the participant to navigate through and listen to three different menus in order to order DSL service. After the first menu, the participant needs to press the "3" key which sends the participant to the second menu. After the second menu the participant needs to press the "2" key which sends the participant to the third menu. After the third menu the participant needs to press the "4" key after which the participant has ordered DSL service and successfully completed the task. Therefore the correct key sequence for the task of "order DSL service” is "3, 2, 4.”
  • Performance engine 56, having the correct key sequence from task engine 50, searches the selected performance data set for the correct key sequence.
  • Performance engine 56 searches the last few keys 280 for the correct key sequence.
  • Performance engine 56 starts with the line right above end line 277 and begins searching up the lines 275, 273, 271, 269, and 267 to start line 265 looking for the correct key sequence.
  • Performance engine 56 examines the end of the selected performance data set because that is the only location where the correct key sequence may be found: when the participant enters the correct key sequence, the task is accomplished, the performance data set ends, and the participant moves on to the next assigned task. Therefore, once the participant enters the last key of the correct key sequence, the next line in the performance data set is end line 277 and a new performance data set begins.
  • Performance engine 56 compares the recorded key sequence entered by the participant with the correct key sequence at step 322. For example, performance data set 254 is for the task of "changing your billing address" and the task has a correct key sequence of "2, 2, 1, 1, 5.” Performance engine 56 compares the correct key sequence with the recorded key sequence in performance data set 254 beginning with line 275 which has "5" as key 280. Performance engine 56 then moves up to line 273 to look for "1" as key 280 and finds "1” as key 280. Performance engine 56 repeats this process for lines 271, 269, and 267 until a line does not have the correct key 280 or until performance engine 56 determines that the recorded key sequence of performance data set 254 is the same as the correct key sequence.
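  • The accomplishment check amounts to comparing the tail of the recorded key sequence against the task's correct key sequence; a sketch using the visit tuples from the parsing sketch above:

      # A task is accomplished when the last keys pressed match the correct
      # key sequence, since entering the final correct key ends the data set.
      def task_accomplished(visits, correct_keys):
          pressed = [key for _, _, key in visits]
          return pressed[-len(correct_keys):] == list(correct_keys)

      visits = [("BMainMenu", 30.0, "3"), ("B20", 15.0, "2"), ("B21", 40.0, "4")]
      print(task_accomplished(visits, ["3", "2", "4"]))  # True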
  • system 10 also provides another objective performance measure: the amount of time the participant listens to each IVR menu and the total amount of time spent attempting to accomplish or accomplishing the assigned task.
  • the amount of time the participant spends listening to a menu is not meaningful on its own; menu duration times must also be taken into account.
  • a menu duration time is the amount of time it takes for a menu to play in its entirety. For instance, a menu may have five different options to choose from and the menu duration time is the amount of time it takes for the menu to play through all five options.
  • performance engine 56 obtains the menu duration time from database 33 for the first menu in the selected performance data set. Performance engine 56 also obtains the listening time for the first menu in the selected performance data set.
  • the listening time is the time a participant actually spends listening to a menu before making a selection.
  • performance data set 254 contains the first menu BMainMenu, which has listening time 278 of 30 seconds (line 267).
  • performance engine 56 retrieves from database 33 that menu BMainMenu has a menu duration time of 30 seconds.
  • performance engine 56 calculates the response time or the cumulative response time (CRT) for the first menu at step 332.
  • the response time is the difference between the listening time and the menu duration time.
  • Performance engine 56 calculates the response time by subtracting the menu duration time from the listening time.
  • a response time of zero indicates that the participant listened to the whole menu and then immediately made a selection
  • performance engine 56 at step 334 determines if the selected performance data set has additional menus for which a response time needs to be calculated. If there are additional menus within the selected performance data set at step 334, then at step 336 performance engine 56 obtains the menu duration time from database 33 for the next menu and the listening time for the next menu in the same manner as performance engine 56 obtained the menu duration time and listening time for the first menu at step 330. So for performance data set 254, performance engine 56 obtains the menu duration time and listening time for line 269 and menu "B20.” Once performance engine 56 obtains the menu duration time and the listening time for the next menu, at step 338 performance engine 56 calculates the response time for the next menu in the same manner as described above at step 332.
  • at step 334, performance engine 56 determines if the selected performance data set has additional menus that have not yet been analyzed. Steps 334, 336, and 338 repeat until there are no additional menus to be analyzed within the selected performance data set. If there are no additional menus within the selected performance data set at step 334, then at step 340 performance engine 56 calculates the total response time for the selected performance data set. The total response time is the difference between the total listening time and the total menu duration time. Performance engine 56 calculates the total response time by first summing the menu duration times and the listening times for each menu within the selected performance data set.
  • performance engine 56 calculates the total response time for the selected performance data set by subtracting the total menu duration time from the total listening time.
  • a negative total response time indicates that the participant used less time than required to accomplish the task
  • a total response time of zero indicates that the participant used exactly the amount of time required to accomplish the task
  • a positive total response time indicates that the participant used more time than required to accomplish the task.
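  • A minimal sketch of the response-time arithmetic described above, assuming the same illustrative MenuLine record (redefined here so the sketch stands alone) and a menu_durations lookup standing in for the menu duration times held in database 33:

```python
from dataclasses import dataclass

@dataclass
class MenuLine:
    menu_name: str
    listening_time: float  # seconds the participant listened before responding
    key: str

def response_times(data_set: list[MenuLine], menu_durations: dict[str, float]):
    """Per-menu and total response times, where
    response time = listening time - menu duration time.
    Negative: the participant responded before the menu finished playing.
    Zero: the participant responded exactly as the menu ended.
    Positive: the participant listened longer than one full pass."""
    per_menu = [(line.menu_name,
                 line.listening_time - menu_durations[line.menu_name])
                for line in data_set]
    # total response time = total listening time - total menu duration time,
    # which equals the sum of the per-menu response times
    total = sum(rt for _, rt in per_menu)
    return per_menu, total

per_menu, total = response_times(
    [MenuLine("BMainMenu", 30.0, "2"), MenuLine("B20", 12.5, "2")],
    {"BMainMenu": 30.0, "B20": 20.0})
# per_menu == [("BMainMenu", 0.0), ("B20", -7.5)], total == -7.5
```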
  • system 10 determines if there are additional performance data sets within the log file to be analyzed. If at step 342 there are additional performance data sets, then system 10 selects the next performance data set at step 344 and the method returns to step 316 and repeats as described above until there are no additional performance data sets in the log file or files to be analyzed at step 342.
  • When there are no additional performance data sets to be analyzed at step 342, system 10 and performance engine 56 generate an output file at step 346 and the method ends at step 348.
  • the output file is similar in structure to the log file and performance data and is sorted by participant and sequence of task.
  • the output file includes all the information in the log file as well as additional information such as the participant number, the assigned task, the IVR system used by the participant, the response time for each menu, the total response time, and whether the task was successfully accomplished.
  • the output file may also contain the correct key sequence for each performance data set.
  • the output file allows a user of system 10 to determine which IVR menu and tasks may need to be redesigned based on high positive response times or failures to accomplish tasks.
  • a performance data set for a particular task that was successfully accomplished but has very high response times may indicate that the menus need to be redesigned or reworded because although the participants accomplished the task, they had to listen to the menus several times before being able to make a selection.
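  • A rough sketch of assembling such an output file; the CSV layout and field names are editorial assumptions mirroring the items listed above, as the text does not mandate any particular file format:

```python
import csv

def write_output_file(path: str, analyzed_sets: list[dict]) -> None:
    """Write one row per performance data set, sorted by participant and by
    the sequence in which each participant attempted the assigned tasks."""
    fields = ["participant", "task_order", "task", "ivr_system",
              "accomplished", "correct_key_sequence", "total_response_time"]
    rows = sorted(analyzed_sets,
                  key=lambda r: (r["participant"], r["task_order"]))
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
```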
  • GUI 46 has an additional feature that allows a user of system 10 to quickly determine the reliability of IVR test results.
  • Summary window 240 allows the user to quickly determine the pass/fail results for task accomplishment for each participant. Some participants may not take the IVR test seriously, and others may take the test only to be paid, so not every participant actually attempts to accomplish the assigned tasks. A participant who intentionally fails every assigned task skews the overall test results and distorts the analysis of the IVR system. A participant failing all of his or her assigned tasks is a strong indication that the participant did not really try and that the associated performance data should be ignored when analyzing the output file. Summary window 240 allows the user to quickly peruse each participant's pass/fail results and call-routing accuracy without having to examine the output file, and therefore to determine which performance data should be disregarded and which tasks need to be tested again.
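  • One hedged sketch of automating that screening: flag any participant whose performance data sets contain no successful task, so the analyst can disregard that data or re-test those tasks. The dictionary field names are illustrative:

```python
def unreliable_participants(results: list[dict]) -> set[str]:
    """Participants who failed every assigned task; their performance data
    is a candidate for exclusion when the output file is analyzed."""
    outcomes: dict[str, list[bool]] = {}
    for row in results:
        outcomes.setdefault(row["participant"], []).append(row["accomplished"])
    return {p for p, passed in outcomes.items() if not any(passed)}
```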
  • the call-routing and response time results of the IVR usability test yield important information that can be used in the further development and refinement of IVR systems. Based on these measures, companies can select IVR designs associated with the best objective performance and usability score and have an IVR system that is efficient and satisfactory to the customers.
  • in FIGURES 11 through 13, a method for conducting a dialog exchange between a user and IVR system 12 is generally depicted.
  • the method preferably enables an operator of a customer service call center, for example, to achieve, among other benefits, greater numbers of favorable responses to system prompts by matching the active persona of the IVR system 12 to one or more personality traits or characteristics of a current caller into the call center.
  • method 360 preferably identifies one or more personality traits or characteristics of a caller or user and activates an IVR system persona likely to put the user at ease during the user's interaction with the IVR system 12, elicit desirable responses to IVR system 12 prompts, such as sales prompts, as well as achieve other benefits.
  • Method 360 may be implemented in a variety of ways.
  • method 360 may be implemented in the form of a program of instructions storable on and readable or executable from one or more computer readable media such as floppy discs, CD-ROM, HDD devices, FLASH memory, etc.
  • method 360 may be implemented in one or more ASICs (application specific integrated circuits).
  • method 360 may be implemented using both ASIC and computer readable media.
  • Other methods of enabling method 360 to be stored and/or executed by a computer system, such as system 18, are contemplated and considered within the scope of the present invention.
  • Other embodiments of the invention also include computer-usable media encoding logic such as computer instructions for performing the operations of the invention.
  • Such computer-usable media may include, without limitation, storage media such as floppy disks, hard disks, CD-ROMs, read-only memory, and random access memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic or optical carriers.
  • the control logic may also be referred to as a program product. Specifically referring to FIGURE 11, method 360 begins at 362 where IVR system 12 is preferably initialized. Upon initialization of IVR system 12 at 362, method 360 preferably proceeds to 364.
  • method 360 preferably proceeds to 366 where a communication connection between IVR system 12 and user communication device 17 may be established.
  • User communication device 17 is preferably operable to allow a user to submit voice, touch-tone or other responses to prompts communicated from IVR system 12. Examples of user communication devices include, but are not limited to, telephones, mobile phones, PDAs (personal digital assistants), personal computers, portable computers, etc.
  • method 360 preferably then proceeds to 370 where a first prompt for the user may be generated.
  • the first prompt generated by IVR system 12 includes a request for a user response.
  • the request for a user response will preferably encourage the user to respond with a spoken or verbal response.
  • the request for a user response may assume other preferred constructs.
  • the text, voice, gender, rate of speech and other characteristics of the first prompt may be determined or dictated by the information gathered from the user's incoming call, from one or more IVR system 12 settings, as well as from other factors.
  • IVR system 12 may identify the user from one or more communication link characteristics of the user's incoming call and access a stored user persona profile for the calling user, such as a persona from stored user persona profiles 68. For example, ANI (automatic number identification) information may be used to identify the caller.
  • the stored user persona profile may contain the IVR system 12 persona used during the user's last call, for example.
  • the first prompt may be generated based on one or more speech parameters identified in the stored user persona profile. Upon generation of a first user prompt at 370, method 360 preferably proceeds to 372. At 372, the first user prompt may be communicated to the user.
  • IVR system 12 preferably communicates the first prompt to the user over communications link 16 to user communication device 17 via communications interface 26.
  • the first prompt may be generated using one or more speech generation applications and/or hardware devices and according to the IVR system persona then in effect, e.g., a default IVR system persona or an IVR system persona identified from one or more call characteristics.
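  • The profile lookup described above might be sketched as follows, with ANI used as the identifying call characteristic; the function and argument names are hypothetical:

```python
def initial_persona(ani, stored_profiles: dict, default_persona):
    """Reuse the persona recorded for a repeat caller (identified here by
    ANI) from stored user persona profiles 68; otherwise fall back to the
    default IVR system persona for the first prompt."""
    if ani is not None and ani in stored_profiles:
        return stored_profiles[ani]  # e.g. the persona used on the last call
    return default_persona
```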
  • Upon communication of the first prompt to the user at 372, method 360 preferably proceeds to 374.
  • IVR system 12 preferably awaits a user response to the first prompt. To avoid trapping IVR system 12 in a loop waiting for the current user to respond to the first prompt, if no response is detected within a reasonable delay after prompting, method 360 preferably proceeds to 376.
  • method 360 preferably proceeds to 378.
  • IVR system 12 may determine whether a predetermined number of first prompt communication attempts have been exhausted. Again, to aid in the avoidance of locking IVR system 12 in a loop waiting for a user response, a limit to the number of first prompt communication attempts may be implemented in method 360. If at 374 a user response has not been received, at 376 the predetermined wait period for the most recent first prompt communication has been exhausted, and at 378 the number of first prompt communication retries has not been exhausted, method 360 preferably returns to 372 where the first prompt may again be communicated to the user. Upon re-prompting the user, method 360 preferably reiterates through the processes indicated at 374, 376 and 378.
  • method 360 preferably proceeds to 380 where the communication connection with the current user is preferably severed. Once the communications link between the current user and IVR system 12 has been severed, method 360 preferably returns to 364 where the next user call may be awaited.
  • IVR system 12 may transfer the caller to a human operator.
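  • A compact sketch of the wait/retry logic of 372 through 380, assuming callback functions (play_prompt, poll_response, transfer_to_operator) that stand in for the IVR's prompt playback, response detection, and operator hand-off; the retry and wait limits are illustrative values, not prescribed by the text:

```python
import time

MAX_ATTEMPTS = 3     # assumed limit on first-prompt communication attempts
WAIT_SECONDS = 6.0   # assumed per-attempt wait period for a user response

def first_prompt_exchange(play_prompt, poll_response, transfer_to_operator):
    """Play the first prompt and await a response (374/376/378 of FIGURE 11).
    Re-prompt until the retry budget is spent, then hand the caller to a
    human operator (or, alternatively, sever the connection)."""
    for _ in range(MAX_ATTEMPTS):
        play_prompt()
        deadline = time.monotonic() + WAIT_SECONDS
        while time.monotonic() < deadline:
            response = poll_response()  # returns None until the user responds
            if response is not None:
                return response         # proceed to response analysis
            time.sleep(0.1)
    transfer_to_operator()              # avoids trapping the IVR in a loop
    return None
```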
  • Method 360 preferably proceeds to 382 of FIGURE 12 as a result of the detection of a user response to the first IVR system prompt at 374 of FIGURE 11.
  • IVR system 12 may begin processing the user response at 384 generally concurrently with the analysis of the user response at 382. For example, if the first prompt generated by IVR system 12 at 370 presented a plurality of transaction options from which the user was to select one, processing the user's response and selection of a desired transaction at 384 would allow IVR system 12 to initiate the desired user transaction.
  • any information associated with the user identifier stored by the IVR system 12 may be retrieved at 384 before, after or generally concurrent with the analysis and identification of the speech characteristics of the user's response at 382.
  • IVR system 12 may interrogate one or more of the persona libraries 66 preferably stored in storage system 20 on HDD device 60 and/or SAN 62.
  • One goal of the persona library 66 interrogation at 386 is for IVR system 12 to identify a persona available in a persona library 66 which best comports with or matches the current personality or demeanor of the user.
  • IVR system 12 may be configured to select from a plurality of persona characteristics to create an IVR system persona which best matches the current personality or demeanor of the user.
  • the personality or demeanor of a user may be defined in a variety of ways.
  • analysis of a user's personality or demeanor may include IVR system 12 determining whether the user is likely a novice or an experienced IVR system user.
  • a user's personality or demeanor may include the gender of the caller, whether the caller may be characterized as an introvert or an extrovert, and whether the user is agitated, seems confused or is questioning the system.
  • IVR system 12 may be configured to identify when a user is struggling with the system by recognizing that a user has increased the duration and amplitude of their speech. Further, IVR system 12 may be configured to identify tension in a user's voice.
  • Other categories or types of user personalities or demeanors are considered within the scope of the present invention.
  • an IVR system persona is selected or created at 388.
  • the IVR system 12 persona may be activated at 390.
  • Activation of an IVR system persona may include, but is not limited to, loading one or more persona characteristics, e.g., gender, speech rate, etc., into a memory accessible to voice generation software or hardware.
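  • A speculative sketch of the interrogation, selection, and activation steps at 386 through 390; the Persona fields, trait names, scoring weights, and the speech_engine.configure call are editorial assumptions rather than the patent's prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    gender: str         # "male" / "female"
    speech_rate: float  # e.g. words per minute
    style: str          # e.g. "soothing", "terse", "verbose-help"

def select_persona(demeanor: dict, library: list[Persona]) -> Persona:
    """Score each persona in the (non-empty) library against the detected
    demeanor and return the best match: the interrogation at 386 and the
    selection at 388."""
    def score(p: Persona) -> int:
        s = 0
        if p.gender == demeanor.get("gender"):
            s += 2  # gender match weighted slightly higher (an assumption)
        if demeanor.get("agitated") and p.style == "soothing":
            s += 1
        if demeanor.get("novice") and p.style == "verbose-help":
            s += 1
        return s
    return max(library, key=score)

def activate_persona(persona: Persona, speech_engine) -> None:
    """Load the persona's characteristics into the voice generation layer
    (the activation at 390); configure() is a hypothetical engine call."""
    speech_engine.configure(gender=persona.gender, rate=persona.speech_rate)
```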
  • the persona of IVR system 12 can have a significant impact on the responsiveness of a user. Accordingly, selection of a preferable, optimal or appropriate IVR system persona and the subsequent dialog exchange with the user in accordance with that persona give computer-based call centers an advantage over call centers based on a single-persona IVR system.
  • the plurality of user prompts generated at 392 are generally directed to completing a user desired transaction, i.e., the purpose for which the current user contacted IVR system 12, such as to check a balance or pay on an account.
  • method 360 preferably proceeds to 394 where the one or more user prompts may be communicated to the user.
  • if IVR system 12 determines at 72 that the current user is a shy male seeking to check an account balance, the selected IVR system persona may have the characteristics of a soft-spoken male that prompts the user for an account number, asks whether the user would like an account statement mailed to his address of record or whether the user would like his balance spoken to him over the communications link, etc.
  • method 360 preferably proceeds to 396 where a user response to the prompt is awaited. In the event a user response is not detected within a reasonable delay after prompting, method 360 preferably proceeds to 398. At 398, a determination is made as to whether a predetermined overall or total wait period for a user response to the IVR system 12 prompt has been exhausted. In the event that the predetermined time period has not been exhausted, method 360 preferably loops at 398 until the predetermined time period has been exhausted. Once the predetermined time period has been exhausted, method 360 preferably proceeds to 400.
  • Method 360 preferably proceeds to 404 of FIGURE 13 in response to detection or reception of a user response to the IVR system 12 prompt directed to completing the desired user transaction communicated at 394.
  • one or more parameters or characteristics of the user's response are preferably identified, analyzed or otherwise isolated.
  • each user response to an IVR system prompt may be evaluated for a change in the user's personality or demeanor.
  • only selected user responses to IVR system prompts may be evaluated for a change in the user's personality or demeanor.
  • IVR system 12 may process the user response in furtherance of the desired user transaction, as indicated at 406.
  • IVR system 12 preferably compares or otherwise determines whether any differences exist between the user's current personality or demeanor and the personality or demeanor previously detected, e.g., at 382 of FIGURE 12. Specifically, according to teachings of the present invention, IVR system 12 is attempting to monitor the user's personality or demeanor to determine whether a new IVR system persona or change in style of the current persona is likely to elicit more favorable responses from the user, put the user at ease, or otherwise enhance the user's interaction with IVR system 12. In addition, IVR system 12 may be configured to detect whether the user is having difficulty using or interacting with the system and to access and communicate one or more help prompts to aid the user in such instances.
  • method 360 may return to 386 of FIGURE 12 where the one or more persona libraries 66 may again be interrogated to identify one or more IVR system personas which best comport or match the user's current demeanor or personality.
  • the style of the current persona may be changed or one or more persona characteristics may be compiled to create an overall IVR system persona which best matches or comports with the user's present demeanor or personality.
  • method 360 preferably again proceeds through selection at 388 and activation at 390 of a new IVR system persona or style. If at 408 no change in the user's demeanor or personality is detected, method 360 preferably proceeds to 410.
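  • The comparison at 408 might be sketched as a simple trait-by-trait check between the demeanor the active persona was chosen for and the demeanor detected in the latest response; the monitored trait set is an assumption:

```python
MONITORED_TRAITS = ("agitated", "confused", "struggling", "novice")

def demeanor_changed(previous: dict, current: dict) -> bool:
    """Any difference on a monitored trait sends method 360 back to 386 so
    the persona libraries can be re-interrogated and a new persona or style
    selected."""
    return any(previous.get(t) != current.get(t) for t in MONITORED_TRAITS)
```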
  • IVR system 12 preferably determines whether the desired user transaction has been completed, i.e., whether the user has received all desired information or whether the user has provided all of the information requested by IVR system 12. If it is determined at 410 that the desired user transaction is incomplete, method 360 preferably returns to 392 of FIGURE 12 where the next prompt in the sequence of prompts directed to completing a desired user transaction may be generated for communication at 394.
  • method 360 may proceed to 412.
  • personas for users of IVR system 12 may be stored for use during subsequent transactions or dialog exchanges with the user.
  • the persona for the last transaction with the current user, for example, may be stored in one or more stored user persona profiles 68 on one or more HDD devices 60 or SANs 62.
  • stored user persona profiles 68 may be used by IVR system 12 in those instances where the caller can be identified prior to the communication of the first prompt to the user as well as in other instances.
  • method 360 preferably proceeds to 414 where the communications link with the user may be severed. Once the communications link has been effectively severed, method 360 preferably returns to 364 of FIGURE 11 where IVR system 12 may await the next incoming call.
  • System 10 allows for the automated creation of a customer-centric interface that directly matches menu prompts with customer tasks, orders and groups the tasks and menu options by task frequency of occurrence and customer perceived task relationships, and states the menu prompts using the customers' own terminology.
  • system 10 may also be utilized for the automated creation of customer-centric interfaces for web sites, with respect to developing web site content, designing the web pages, and deciding which tasks to locate on different web pages.
  • IVR system 12 and computer system 18 may also automatically analyze performance data from other systems in addition to IVR systems as well as any other appropriate type of data.

Abstract

A method, system, and apparatus for automating the creation of customer-centric interfaces includes an IVR system and a computer system automatically creating a customer-centric interface design. A plurality of opening statements are collected and analyzed in order to determine the customers' terminology, the tasks for which the customers contact the interface, and how the tasks are related. Analysis and categorization of the customer opening statements (using one or more rules), the tasks, and the task relationships allows for menu prompts and dialog exchanges within the interface to be in the customers' own terminology and arranged in an order familiar to the customers. Furthermore, a plurality of personas are available to interact with the customers in accordance with the customers' personality or demeanor. The performance of the customer-centric interface is tested, and the test results are automatically analyzed so that the customer-centric interface can be modified for optimal performance.

Description

METHOD, SYSTEM, AND APPARATUS FOR AUTOMATING THE CREATION OF CUSTOMER-CENTRIC INTERFACE
TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to interface designs, and more specifically relates to a system and method for implementing customer-centric interfaces.
BACKGROUND OF THE INVENTION
Every year, company service centers typically receive numerous telephone calls from customers seeking assistance with particular tasks. The customers often speak with customer service representatives (CSR) to complete their tasks. Because of the cost associated with CSR time, companies are switching over to automated systems such as interactive voice response (IVR) systems where IVR systems answer the customer phone calls and direct the customer phone calls to the correct service center using one or more menus of options. The IVR systems allow customers to complete their tasks without the assistance of a CSR.
In order to maintain a high level of customer satisfaction, an IVR system must be designed so that customers can easily navigate the various menus and accomplish their tasks without spending too much time on the telephone and becoming frustrated and unsatisfied with the company and its customer service. Therefore, companies must design and continually test, update, and improve the IVR systems including the IVR menus so that the IVR systems function efficiently and customers remain satisfied with the level of customer service.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
FIGURE 1 illustrates a block diagram showing a system incorporating teachings of the present invention;
FIGURE 2 depicts a flow diagram of a method for automating the creation of a customer-centric interface;
FIGURE 3 depicts an example task frequency table;
FIGURE 4 illustrates a block flow diagram of various components of the system for the automated creation of a customer-centric interface;
FIGURE 5 illustrates a flow diagram of a method for creating customer-centric menu prompts;
FIGURE 6 illustrates an example graphical user interface for the categorization of statements within a customer-centric interface;
FIGURE 7 depicts a flow diagram of a method for the automated categorization of statements within a customer-centric interface;
FIGURE 8 illustrates an example graphical user interface for the analysis of performance data within a customer-centric interface;
FIGURE 9 depicts an example log file including performance data;
FIGURE 10 illustrates a flow diagram of a method for the automated analysis of performance data within a customer-centric interface; and
FIGURES 11 - 13 illustrate flow diagrams depicting a method for conducting a dialog exchange between an interactive voice response system and a user.
DETAILED DESCRIPTION OF THE INVENTION
Preferred embodiments and their advantages are best understood with reference to the figures, wherein like numbers may be used to indicate like and corresponding parts.
Many companies that have customer service programs and/or call centers, such as telephone companies, Internet service providers, and credit card companies, typically have automated systems such as interactive voice response (IVR) systems that answer and direct customer phone calls when a customer calls seeking assistance for a particular task such as to change an address or inquire about payment of a bill. If a customer does not reach an IVR system when calling a service number, the customer may speak with a customer service representative (CSR) who either helps the customer or transfers the customer to an IVR. Within the IVR, the customer listens to one or more prerecorded menus or prompts and provides responses using touch-tone input and/or speech input in order to accomplish their task. Therefore, the content and structure of the IVR including the prerecorded menus or prompts needs to allow for customers to easily and quickly accomplish their tasks with little frustration.
The typical approach to IVR system interface design involves a company design team creating a set of requirements where the design team is comprised of various individuals representing different departments within the company. The design team incorporates various perspectives and documents from the team members in designing the IVR interface. The design team decides how best to structure the IVR interface based on their understanding of the underlying system and/or the organization of the company. The customers' preferences and level of knowledge are generally not taken into account.
Once designed, the IVR interface is tested to ensure functionality and that it is error free. The inclusion of customers into the design process occurs late in the development phase, if at all, through usability testing. But much of the customer input gathered in the usability testing will not be implemented into the IVR interface because of the costs involved with making changes late in the development phase. Only significant errors discovered through the usability testing are generally corrected. The result is an IVR interface having a business-centric organization and structure, where the menu options and prompts are structured according to the organization of the company and are worded using company terminology.
When calling a customer service number, customers know why they are calling (to accomplish a specific task) but typically do not know which department within a company handles specific tasks. Therefore, business-centric interfaces generally do not allow for customers to easily and quickly navigate and accomplish their tasks with little frustration, since business-centric interfaces are designed around a company's organization and way of thinking. When customers cannot quickly and easily accomplish their tasks, they generally make incorrect selections within the IVR interface, resulting in misdirected calls. Misdirected calls are expensive to companies both in the time and money spent dealing with a misdirected call and in lower levels of customer satisfaction resulting from unpleasant customer experiences with business-centric interfaces, which can lead to negative feelings towards the company.
In most IVR system implementations, a single persona may be assigned to the IVR system. However, according to behavioral research, users of automated systems tend to view automated systems more favorably when the persona or personality of the system matches the user's personality. For example, a recent study suggests that introverts and extroverts tend to be more satisfied with, likely to trust and likely to make a purchase from an automated system possessing voice characteristics similar to their own. Similarly, evidence exists showing that many IVR system users prefer an IVR system having a system voice matching their own gender.
In order for IVR systems to meet customers' needs and be more customer-centric, the usability of IVR systems is tested and improved by conducting laboratory studies or tests where test participants are asked to accomplish sets of tasks using the IVR system. An example of a task to accomplish may be, "Call Telephone Company at 555-1111 and change your billing address." In these studies or tests, the participants use telephones to interact with an IVR simulation application which is presented by a laboratory computer. The simulated IVR application plays prerecorded announcements or prompts to the participants in the form of a series of menus and records information regarding the participants' responses such as the menu name, the amount of time the prerecorded prompt played before the participant made a selection or pressed a key, and the key that the participant pressed. Once the study is completed, the recorded information regarding the participants' responses or the performance data is compiled into a log file with information regarding each task test stored as an individual performance data set.
To analyze the performance data collected by the IVR simulation application in the log file, the company may score participants' call-routing performance based on two factors - accomplishment of the task and the time spent in the IVR simulation application attempting to accomplish the task. Analysis of the log file and the performance data is typically done as a manual process where one or more persons manually examine each performance data set noting the task, determining if the participant accomplished the task, and calculating the time spent listening to the menus or prompts and then manually creating an output file containing the findings of the IVR simulation. Given that a typical IVR study generally includes many participants each performing several different tasks, the manual analysis of the performance data is a very time consuming, labor intensive, and resource intensive process. In addition, the manual analysis of the performance data is also subject to human error such as math errors in calculating time spent in the menus and in omitting particular data points.
Furthermore, many companies often track statements made by customers when the customers contact the company with problems or questions about a product or service or to alter a product or service. When a customer calls a service number and speaks to a CSR, the customer typically tells the CSR the purpose of the call in the first substantive statement the customer makes. Alternatively, a customer may contact a company via the company web site or email and generally the first substantive statement made in the email or web site response includes the customer's purpose for contacting the company. These initial statements containing the purpose of the customer's call are often referred to as opening statements.
These opening statements can be used by companies to better design IVR systems, web sites, and any other customer interfaces between a company and the customers and allow for a more customer-centric interface design. One effective way to design an IVR system or a web site interface is to analyze the scripts of incoming calls or emails to a customer support center or call center to locate the opening statements and identify the purpose of each call or email by classifying or categorizing each opening statement. Once categorized, a frequency report can be created that details how often customers are calling with specific problems or questions about specific products or services. For example, a telephone company may want to know how many customers are calling or emailing about a problem with their bill or to add a new product to their telephone service. Once a company knows the frequency of customer complaints and questions, an IVR system can be designed that incorporates the frequencies so that customers calling with common problems, complaints, or questions can be serviced quickly and efficiently. For example, a company would be able to determine, of the 5,000 service calls received in one month, what percentage of the calls were about particular topics, and also rank the reasons why the customers called or emailed customer support. In order to maximize the utilization of the statements given by the customers in a customer-centric interface design, a company therefore needs to track and categorize the statements.
Typically, companies have manually tracked and manually categorized opening statements. The company manually tracks each call and manually records and transcribes each opening statement spoken to a CSR or received via email and then creates a list of opening statements. An employee of the company reads the long list of opening statements with a list of categories in front of him/her and assigns a category label to each opening statement. This is a very time consuming and costly process: one or more people manually examining every opening statement and deciding how to categorize each statement in accordance with multiple category labels requires a large amount of employee time, which is expensive and would be better spent on revenue generating tasks.
In addition to the cost and man-power required for the manual categorization of opening statements, there is also a subjective element to the manual categorization of opening statements which affects the reliability of the categorization results. The category labels used to manually categorize the opening statements are generally designed to be objective but when applied by a person, the person's subjective thinking and opinions affect how they categorize the opening statements. For instance, an opening statement such as "I am calling about my bill for the charges for Call Waiting" may be categorized by one person as a billing inquiry and another person as a call waiting inquiry. Therefore, even though multiple people may use the same category labels to categorize the opening statements, they might categorize the same opening statement differently because the categorization is partly a matter of opinion. This human opinion factor and subjectiveness creates an inconsistency in the categorization data and frequency reports that results in unreliable data and a customer interface design that is not optimized with respect to the opening statements and the way customers think.
By contrast, the example embodiment described herein allows for the automated creation of a customer-centric interface. The customer-centric interface is designed to best represent the customers' preferences and levels of knowledge and understanding. Additionally, the example embodiment allows for the inclusion of the customers in the design process from the beginning to ensure that the customer-centric interface is both usable and useful for the customers. The customer-centric interface allows for the customers to quickly and easily navigate the various menus within the customer-centric interface to accomplish their tasks with high levels of customer satisfaction. The customer-centric design also allows for increased call routing accuracy and a reduction in the number of misdirected calls. Therefore, companies save time and money because less time is spent dealing with misdirected calls and fewer resources are used by the customers since the customers spend less time within the customer-centric interface accomplishing their tasks.
Furthermore, the example embodiment described herein allows for the automated categorization of statements to better enable the creation of a customer-centric interface. Additionally, the example embodiment allows for the creation of objective rules to categorize the statements, which results in reliable and consistent categorization data. Time and money are saved because employees are no longer manually looking through lists of statements trying to categorize the statements using only category labels. Therefore, employees' time may be better utilized in revenue generating projects. Furthermore, the objective rules for categorizing the statements eliminate the subjective aspect of the categorization scheme, allowing for the same statement to be categorized with the same category label as long as the same set of rules is used to categorize the statements. This results in consistent and reliable categorization and frequency data which can be used in the design and creation of customer interfaces that reflect the customers' view of how the interface should operate.
Furthermore, the example embodiment described herein allows for the automated analysis of performance data to better enable the creation of a customer-centric interface. Additionally, the example embodiment allows for the consistent analysis of performance data free of human error. Time and money are saved because employees no longer manually examine the performance data determining if the task was accomplished and manually calculating the time required to accomplish each task. Therefore, employees' time may be better utilized in other revenue generating projects since less time is required to analyze the performance data. Furthermore, the analysis of the performance data is more reliable because the analysis is not subject to human error such as calculation errors and different people are not interpreting the performance data in different manners.
FIGURE 1 generally illustrates one embodiment of a customer-centric interface solution incorporating teachings of the present invention and operable to provide automated or computer based customer service to callers using an interactive voice response (IVR) system. As depicted in FIGURE 1, system 10 preferably includes at least one IVR system 12. In one embodiment, IVR system 12 may include one or more traffic handling devices 14. Traffic handling devices may include, but are not limited to, such devices as routers, switches, hubs, bridges, content accelerators, or other similar devices. As depicted, one or more traffic handling devices 14 may be coupled between communications link 16 and computer system 18. Computer system 18 may be a personal computer, a server, or any other appropriate computing device. Communications technologies which may be used as communications link 16 include, but are not limited to, a PSTN (public switched telephone network), the Internet using voice over IP (Internet Protocol), such mobile technologies as satellite and PCS (personal communication service), as well as others.
In an embodiment of IVR system 12 having a component or storage system 20 which is maintained separately from computer system 18, as depicted in FIGURE 1, one or more traffic handling devices 14 may be included and coupled between computer system 18 and such a storage system 20. As described below, storage system 20 or portions thereof may be incorporated into computer system 18, according to teachings of the present invention.
Computer system 18 may be constructed according to a variety of configurations. Preferably, however, computer system 18 includes one or more processors or microprocessors 22. Processors or microprocessors 22 may include such computer processing devices as those manufactured by Intel, Advanced Micro Devices, Motorola, Transmeta, as well as others. Operably coupled to microprocessor(s) 22 are one or more memory devices 24. Memory devices 24 may include, but are not limited to, such memory devices as SDRAM (synchronous dynamic random access memory), RDRAM (Rambus dynamic random access memory), FLASH memory, or other memory devices operable to function with the microprocessor(s) 22 of choice.
Also operably coupled to microprocessor(s) 22 are one or more communications interfaces 26. Communications interface 26 may employ wire-line and/or wireless technologies. For example, wire-line based communications interfaces 26 may include, but are not limited to, such wire-line technologies as PSTN (public switched telephone networks), Ethernet, Token-Ring, coaxial, fiber optic, as well as others. Examples of wireless technology based communications interfaces 26 may include, but are not limited to, such wireless technologies as Bluetooth and IEEE (Institute of Electrical and Electronic Engineers) 802.11b, as well as others.
One or more component systems interfaces 28 are also preferably included and coupled to microprocessor 22. According to teachings of the present invention, component systems interfaces 28 preferably couple one or more component systems to microprocessor(s) 22 such that microprocessor(s) 22 may access the functionality included therein. Examples of component systems include storage system 20, video displays, storage devices, scanners, CD-ROM (compact-disc-read only memory) systems, input/output devices, etc. Component systems interfaces 28 may include, for example, ISA (industry standard architecture) connections, PCI (peripheral component interconnect) connections, PCI-X (peripheral component interconnect-extended) connections, SCSI (small computer systems interface) connections, USB (universal serial bus) connections, FC-AL (fibre-channel arbitrated loop) connections, serial connections, parallel connections, Ethernet connections, IEEE 802.11b receivers/transmitters, Bluetooth receivers/transmitters, as well as others. In addition, component systems interfaces 28 may be provided to couple one or more component systems internal to computer system 18, such as hard disc drive (HDD) devices, CD-ROM read/write devices, etc., to microprocessor(s) 22.
Computer system 18 further includes hard disk drive (HDD) 30 containing databases 32, 33, 34, 36, 38, and 40. Processor 22, memory 24, communications interface 26, component systems interface 28, and HDD 30 communicate and may work together via bus 42 to provide the desired functionality. The various hardware and software components may also be referred to as processing resources. Computer system 18 further includes display 44 for presenting graphical user interface (GUI) 46 and input and output devices such as a mouse and a keyboard. Computer system 18 also includes rule engine 48, task engine 50, collection engine 52, customer language engine 54, performance engine 56, and customer structure engine 58, which reside in memory such as HDD 30 and are executable by processor 22 through bus 42. In other embodiments, HDD 30 may include more or fewer than six databases and may be remotely located in storage system 20. Display 44 presents GUI 46, which allows a user or an operator to interact with IVR system 12 and computer system 18. Shown in FIGURES 6 and 8 are various example GUIs 46. GUI 46 includes a plurality of screens and buttons that allow the users and the operators to access and control the operation of IVR system 12 and computer system 18.
As illustrated in FIGURE 1 and mentioned above, one or more traffic handling devices 14 may be coupled between computer system 18 and storage system 20. In another embodiment, however, storage system 20 may be included within or internal to computer system 18. In such an embodiment, storage system 20 or one or more components thereof may be directly coupled to the one or more component systems interfaces 28.
Component or storage system 20 may include a variety of computing devices and is preferably not limited to one or more types of storage device. In the embodiment of storage system 20 illustrated in FIGURE 1, a plurality of storage devices, preferably storing one or more applications and databases for use in accordance with teachings of the present invention, may be provided. Specifically, component or storage system 20 may include one or more supplemental hard disc drive (HDD) devices 60, digital linear tape (DLT) libraries (not expressly shown), CD-ROM libraries and/or one or more storage area networks (SAN) 62. In yet another embodiment of IVR system 12, one or more HDD devices 60 may be included in computer system 18 with one or more SANs 62 included in storage system 20.
As with many computer systems, a variety of applications 64 may be used to leverage the functionality or processing capability of computer system 18. In the present invention, a plurality of applications 64 may be effectively included in storage system 20, on one or more HDD devices 60 and/or on one or more SANs 62. For example, one or more communications applications operable to establish a communication connection with one or more users via communication link 16 may be included in storage system 20. In addition, one or more speech recognition or voice analysis applications are included on HDD devices 60 and/or SAN 62 for use as described below. A variety of additional applications 64 may also be included on one or more of HDD devices 60 and/or SANs 62.
As will be described in more detail below with respect to one embodiment of a method according to the present invention, one or more persona libraries 66 are preferably included on storage system 20. Persona libraries 66 preferably include a plurality of IVR system personas, one or more of which may be selected for use during a transaction with a given user. In one embodiment, the personas stored in persona libraries 66 may be pre-existing, i.e., a complete persona or one having a defined gender, rate of speech, system prompt menu, etc., needing only to be selected and activated for use in the IVR system. Each persona in the library may also include a number of styles or strategies. For example, within a persona designed to communicate like a calm, caring, mature female (i.e., a motherly persona), the library may include a first subset of prompts or scripted dialog designed to help novice callers, a second subset designed to help expert callers, a third subset designed to sound sympathetic and soothing and a fourth subset designed to be more abrupt. These different subsets or styles may be produced by altering characteristics of the persona such as speaking rate, choice of formal or informal words, use of terse or verbose utterances, etc. As described below, IVR system 12 may dynamically change from one style to another in response to detected changes in the speech characteristics of a caller.
In another embodiment, persona libraries 66 may contain a plurality of IVR system persona components, such as gender, rate of speech, tone, inflection, prompt menus, etc. An overall IVR system persona may be selected and compiled from selected components to create an IVR system persona which has been determined, according to teachings of the present invention, to be the persona most likely to elicit favorable responses from the user as well as to achieve other benefits.
Also as described below with respect to one embodiment of a method of the present invention, one or more user persona profiles 68 may be stored on HDD 30, HDD devices 60, and/or on SANs 62. According to teachings of the present invention, when a repeat user contacts IVR system 12, the IVR system 12 may be implemented such that the user can be identified, e.g., from one or more call characteristics, and the user's preferred or most recent IVR system persona may be initiated by the IVR system 12. As will be described in more detail below, one or more responses from the user to prompts from the user's stored persona 68 may initiate a change in the persona used to complete the user's desired transaction. In other embodiments, the above may be stored in HDD 30 instead of storage system 20.
FIGURE 2 depicts a flow diagram of a method for automating the creation of a customer-centric interface. The method begins at step 70 and at step 72 collection engine 52 collects a plurality of customer opening statements. When a customer calls a service number and speaks to a CSR, the customer typically tells the CSR the purpose of the call in the first substantive statement the customer makes. Alternatively, a customer may contact a company via the company web site or email and generally the first substantive statement made in the email or web site response includes the customer's purpose for contacting the company. These initial statements containing the purpose of the customer's call are often referred to as customer opening statements. Collection engine 52 collects the customer opening statements from customer service centers and stores the customer opening statements in customer opening statement database 32.
The customer opening statements provide insight into the tasks that the customers inquire about as well as the language or terminology the customers use to describe the tasks. At step 74, customer language engine 54 analyzes the customer opening statements to determine the language or terminology used by the customers when referring to particular tasks. When customers call a service number, they are not concerned with how the company is going to accomplish the task, just that the task gets accomplished. Therefore, customer language engine 54 must learn and use the terminology of the customers in creating customer-centric menu prompts so that customers will be able to easily understand and identify how to accomplish their tasks when using the customer-centric interface. At step 76, customer task model 128 within collection engine 52 determines the different reasons why the customers contact the company in order to create a list of tasks for which the customers access the customer-centric interface. Analysis of the customer opening statements allows for the determined tasks to be tested to see if the list of tasks accounts for a majority of the reasons why the customers contact the company. The tasks may include such tasks as "telephone line is not working," "question about my bill," "order a new service," or any other appropriate reason for a customer to call seeking assistance regarding a product or service.
Once the list of tasks has been created and determined to cover the majority of the customers' reasons for calling, task engine 50 determines a task frequency of occurrence for each task at step 78. The task frequency of occurrence allows system 10 to recognize which tasks customers are calling about the most and which tasks the customers are calling about the least. Task engine 50 determines the task frequency of occurrence by examining and categorizing the customer opening statements. Each customer opening statement is examined to identify the purpose of the call and is then categorized as a particular task.
Once the customer opening statements have been categorized, task engine 50 creates a task frequency table that ranks the tasks according to the task frequency of occurrence. The task frequency table details how often customers call with specific problems or questions about each particular task. An example task frequency table 120 for eighteen tasks 127 - 161 is shown in FIGURE 3 and includes column 122 for the frequency rank of the task, column 124 for the task, and column 126 for the frequency value. In other embodiments, task frequency table 120 may include more or fewer than eighteen tasks. Task frequency table 120 shows that eighteen tasks account for more than 80% of the customer opening statements or service calls received from the customers. Task frequency table 120 allows system 10 to determine which tasks the customers call about the most and provides valuable information on how to arrange the customer-centric menu prompts within the customer-centric interface.
Task frequency table 120 is ordered in descending frequency order and is a statistically valid representation of the tasks that the customers inquire about when calling customer service centers. Because having a menu prompt for every single task results in numerous menu prompts, making customer navigation of the customer-centric interface burdensome and slow, at step 80 task engine 50 determines which tasks are to be included in the customer-centric interface. In order to allow easy and quick navigation for the customers but at the same time not utilize too many company resources operating the customer-centric interface, only the most frequently occurring tasks are included within the customer-centric interface. Task engine 50 utilizes task frequency table 120 to determine which tasks are to be included in the customer-centric interface. In one embodiment, task engine 50 includes only the tasks that have a frequency of occurrence of 1% or higher. Task frequency table 120 includes only the tasks having a frequency of occurrence of 1% or higher and includes eighteen tasks accounting for 80.20% of the tasks represented in the customer opening statements. In another embodiment, task engine 50 includes tasks so that the total number of included tasks accounts for a specified percentage coverage of the tasks represented in the customer opening statements. For instance, task engine 50 may include a specified number of tasks so that the total frequency of occurrence is a specific total percentage coverage value such as 85%, 90% or any other appropriate percentage of coverage. Either embodiment typically allows for between fifteen and twenty tasks to be included in the customer-centric interface.
For efficient operation, the customer-centric interface does not include an opening customer-centric menu prompt listing all of the included tasks in frequency order. Such an opening menu prompt would take too long for the customers to listen to and would not allow for quick and easy navigation of the customer-centric interface. Therefore, the customer-centric interface is of a hierarchical design with the tasks grouped together by task relationships.
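As a non-authoritative illustration of the two inclusion rules just described, the frequency table and the task-selection step might be sketched in Python as follows, where the task labels are the categorized opening statements; the function names and thresholds are editorial assumptions:

```python
from collections import Counter

def build_frequency_table(task_labels: list[str]) -> list[tuple[str, float]]:
    """Rank tasks by frequency of occurrence among the categorized customer
    opening statements, in descending order (as in task frequency table 120)."""
    counts = Counter(task_labels)
    total = sum(counts.values())
    return sorted(((task, count / total) for task, count in counts.items()),
                  key=lambda entry: entry[1], reverse=True)

def include_tasks(table, min_frequency=0.01, target_coverage=None):
    """Apply either inclusion rule from the text: a per-task frequency floor
    (e.g. 1%) or a cumulative-coverage target (e.g. 0.85 or 0.90)."""
    if target_coverage is not None:
        included, covered = [], 0.0
        for task, freq in table:
            if covered >= target_coverage:
                break
            included.append((task, freq))
            covered += freq
        return included
    return [entry for entry in table if entry[1] >= min_frequency]
```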
In order for the customer-centric interface to be organized from the vantage of the customers, the included tasks need to be grouped according to how the customers perceive the tasks to be related. Therefore, at step 82, customer structure engine 58 elicits from one or more test customers each customer's perceptions as to how the included tasks relate to each other in order to create the interface structure for the customer-centric interface. Interface structure is how the tasks are placed within the customer-centric interface and organized and grouped within the customer-centric menu prompts. For instance, the interface structure of a web page refers to how the pages, objects, menu items, and information are organized relative to each other, while the interface structure for an IVR system refers to the sequence and grouping of the tasks within the customer-centric menu prompts. The interface structure for the customer-centric interface needs to allow the customers to find information and complete tasks as quickly as possible without confusion.
Customer structure engine 58 uses tasks 127 - 161 from task frequency table 120 and performs customer exercises with the customers to elicit customer feedback regarding how the customers relate and group together tasks 127 - 161. For instance, customer structure engine 58 may require a group of test customers to group tasks 127 - 161 into one or more groups of related tasks. In addition, customer structure engine 58 may also require the test customers to make comparative judgments regarding the similarity of two or more of the tasks, where the test customers state how related or unrelated they believe the tasks to be. Furthermore, customer structure engine 58 may require the test customers to rate the relatedness of the tasks on a scale. Customer structure engine 58 performs the customer exercises using a test IVR system, a web site, or any other appropriate testing means. In addition to eliciting task relationships, customer structure engine 58 also elicits from the test customers general names or headings that can be used to describe the groups of tasks in the customers' own language or terminology.
Once customer structure engine 58 elicits from the test customers how the customers perceive tasks 127 - 161 to relate to each other, customer structure engine 58 aggregates and analyzes the customer feedback to determine customer perceived task relationships. The customer perceived task relationships are how the customers perceive the tasks to be related. Customer structure engine 58 represents the customer perceived task relationships in a numerical data matrix of relatedness scores that collectively represents the customers' perceived relatedness of the included tasks.
At step 84, customer structure engine 58 utilizes the customer perceived task relationships and the numerical data matrix and combines the included tasks into one or more groups of related tasks. For example, using the customer feedback from the customer exercises, customer structure engine 58 determines that the customers perceive tasks 133, 155, and 159 as related and group one, tasks 147, 149, and 157 as related and group two, tasks 127, 129, 131, 135, 139, 141, 143, 145, 153, and 161 as related and group three, and tasks 137 and 151 as related and group four.
To aid in the grouping of the tasks and to better enable the company to understand the structure and grouping of the tasks, customer structure engine 58 represents the customer perceived task relationships and numerical data matrix in a graphical form. For instance, customer structure engine 58 may generate a flow chart or dendrogram illustrating a customer-centric call flow for the groups of tasks.
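The numerical data matrix of relatedness scores can be illustrated with a minimal sketch, assuming each test customer's exercise yields a partition of the tasks into groups and that two tasks gain one point of relatedness each time a customer groups them together; this particular scoring scheme is an assumption, not a detail from the patent.

    # Minimal sketch: aggregate per-customer groupings into a relatedness matrix.
    from itertools import combinations

    def relatedness_matrix(customer_groupings, tasks):
        idx = {t: i for i, t in enumerate(tasks)}
        n = len(tasks)
        scores = [[0] * n for _ in range(n)]
        for groups in customer_groupings:          # one grouping per test customer
            for group in groups:
                for a, b in combinations(group, 2):
                    scores[idx[a]][idx[b]] += 1
                    scores[idx[b]][idx[a]] += 1
        return scores

    tasks = [133, 155, 159, 147]                   # task numbers from the table
    groupings = [[[133, 155, 159], [147]],         # test customer 1
                 [[133, 155], [159, 147]]]         # test customer 2
    print(relatedness_matrix(groupings, tasks))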
At step 86, task engine 50 orders the groups of tasks and the tasks within each group based on the task frequency of occurrence. Task engine 50 determines a frequency of occurrence for each group of tasks by summing the individual frequencies of occurrence for each task within each group. From the example above, group one has a group frequency of occurrence of 8.9% (6.7% + 1.1% + 1.1%), group two has a group frequency of occurrence of 6.2% (3% + 2.1% + 1.1%), group three has a group frequency of occurrence of 59.4% (14% + 11.6% + 11.3% + 5.6% + 3.8% + 3.8% + 3.5% + 3.4% + 1.4% + 1.0%), and group four has a group frequency of occurrence of 5.7% (3.8% + 1.9%). Task engine 50 orders the groups within the customer-centric interface in descending frequency order so that the tasks having the highest frequency of occurrence are heard first by the customers when the customers listen to the customer-centric menu prompts within the customer-centric interface. Since 59.4% of the customers will be calling about a task in group three, task engine 50 orders group three first followed by group one, group two, and group four.
In addition to ordering the groups of tasks, task engine 50 also orders the tasks within each group. Task engine 50 orders the tasks within each group according to each task's frequency of occurrence, from the highest frequency of occurrence to the lowest frequency of occurrence. For instance, the tasks in group one are ordered as task 133, task 155, and task 159. The tasks in group two are ordered as task 147, task 149, and task 157. The tasks in group three are ordered as task 127, task 129, task 131, task 135, task 139, task 141, task 143, task 145, task 153, and task 161. The tasks in group four are ordered as task 137 and task 151. The grouping and ordering of the tasks allow the high-frequency tasks to be more accessible to the customers than the low-frequency tasks by placing the tasks having higher frequencies of occurrence higher or earlier in the customer-centric interface menu prompts.
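A minimal sketch of the ordering at step 86, using the group one, group two, and group four frequencies from the worked example above (group three is omitted for brevity); again, this is illustrative only:

    # Minimal sketch of step 86: a group's frequency is the sum of its tasks'
    # frequencies; groups and tasks are both ordered in descending frequency.

    def order_interface(groups, task_freq):
        ordered_groups = []
        for group in groups:
            tasks = sorted(group, key=lambda t: task_freq[t], reverse=True)
            ordered_groups.append((sum(task_freq[t] for t in tasks), tasks))
        ordered_groups.sort(key=lambda g: g[0], reverse=True)
        return ordered_groups

    # Frequencies (in percent) taken from the example in the text.
    task_freq = {133: 6.7, 155: 1.1, 159: 1.1,     # group one: 8.9%
                 147: 3.0, 149: 2.1, 157: 1.1,     # group two: 6.2%
                 137: 3.8, 151: 1.9}               # group four: 5.7%
    groups = [[133, 155, 159], [147, 149, 157], [137, 151]]
    for freq, tasks in order_interface(groups, task_freq):
        print(f"{freq:.1f}%: {tasks}")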
At step 88, customer language engine 54, task engine 50, and customer structure engine 58 work together to create and order the customer-centric menu prompts for the customer-centric interface. Task engine 50 and customer structure engine 58 do not take into account customer terminology when calculating task frequencies, grouping the tasks, and ordering the tasks. So once task engine 50 and customer structure engine 58 create the interface structure, including ordering the included tasks, customer language engine 54 creates customer-centric menu prompts using the customers' own terminology. Customer-centric menu prompts in the language of the customers allow the customers to more easily recognize what each menu prompt is asking and to accomplish their tasks quickly and with little frustration. In other embodiments, customer language engine 54 may create customer-centric menu prompts using action specific object words in addition to the customers' own terminology. The use of action specific object words to create menu prompts is described in further detail below with respect to FIGURE 5. Once system 10 creates the customer-centric menu prompts and the customer-centric interface, performance engine 56 tests the customer-centric interface at step 90 by performing usability tests. Performance engine 56 performs the usability tests in order to locate and fix any problems with the customer-centric interface before the customer-centric interface is implemented for use by all customers. The usability tests involve laboratory tests where test customers are asked to accomplish sets of tasks using the customer-centric interface such as "Call Telephone Company at 555-1111 and change your billing address." In these tests, the test customers use telephones to interact with the customer-centric interface. The customer-centric interface plays the prerecorded customer-centric menu prompts to the test customers and performance engine 56 records information regarding the test customers' responses such as the menu name for the menus accessed, the amount of time the prerecorded menu prompt played before the test customer made a selection or pressed a key, and the key that the test customer pressed.
When the usability tests conclude, at step 92 performance engine 56 analyzes the results of the usability tests. Performance engine 56 focuses on three different usability test results: customer satisfaction, task accomplishment, and response times. Customer satisfaction is whether or not the test customer was satisfied using the customer-centric interface. Performance engine 56 gathers customer satisfaction data by asking the test customers a variety of questions regarding their experiences in interacting with the customer-centric interface, such as how satisfied the test customer was in accomplishing the assigned tasks, how confident the test customer was about being correctly routed, the level of agreement between the selected menu prompts and the test customers' assigned tasks, and whether the test customers would want to use the customer-centric interface again. Performance engine 56 also determines a task accomplishment or call routing accuracy score. Task accomplishment measures whether a test customer successfully completes an assigned task and is based on the sequence of key presses necessary to navigate the customer-centric interface and accomplish the task.
Performance engine 56 determines if the test customers actually accomplished their assigned tasks. For example, if a test customer was assigned the task of using the customer-centric interface to inquire about a bill, performance engine 56 determines whether the test customer correctly navigated the customer-centric menu prompts in order to inquire about the bill. Performance engine 56 examines all the different menu prompts accessed by the test customers and compares the test customer key sequences with the correct key sequences in order to determine if the test customers accomplished the assigned tasks.
In addition to customer satisfaction and task accomplishment, performance engine 56 also calculates a response time or cumulative response time (CRT) for each customer-centric menu prompt accessed by the test customers. The response time indicates the amount of time a test customer spends interacting with each customer-centric menu prompt and the customer-centric interface. The response time reflects the amount of time the test customers listen to a menu prompt versus the amount of time it takes for the menu prompt to play in its entirety. The amount of time the test customers spend listening to the menu prompt is not very meaningful unless menu duration times are also taken into account. A menu duration time is the amount of time it takes for a menu prompt to play in its entirety. For instance, a menu prompt may have five different options to choose from, and the menu duration time is the amount of time it takes for the menu prompt to play through all five options.
Performance engine 56 records a listening time for each test customer for each menu prompt. The listening time is the time the test customers actually spend listening to a menu prompt before making a selection. Performance engine 56 also has access to the menu duration times for all of the customer-centric menu prompts in the customer-centric interface. Performance engine 56 calculates a response time for a menu prompt, which is the difference between the listening time and the menu duration time, by subtracting the menu duration time from the listening time. For example, if the introductory menu prompt of the customer-centric interface requires 20 seconds to play in its entirety (menu duration time) and the test customer listens to the whole menu and then makes a selection, the test customer has a listening time of 20 seconds and receives a CRT score or response time of 0 (20 - 20 = 0). If the test customer only listens to part of the menu prompt, hears his choice, and chooses an option before the whole menu plays, then the test customer receives a negative CRT score or response time. For instance, if the test customer chooses option three 15 seconds (listening time) into the four-option, 20-second menu prompt, the test customer receives a CRT score or response time of -5 (15 - 20 = -5). Conversely, the test customer has a response time of +15 if the test customer repeats the menu prompt after hearing it once and then chooses option three 15 seconds (35-second listening time) into the second playing of the menu (35 - 20 = 15).
A negative response time is good because the test customers spent less time in the customer-centric interface than they could have, and a positive response time is bad because the test customers spent more time than they should have in the customer-centric interface. In addition to calculating response times for individual menu prompts, performance engine 56 may also calculate response times for entire tasks for each test customer by summing the menu duration times and the listening times for each menu prompt required to accomplish the task and subtracting the total menu duration time from the total listening time.
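The CRT arithmetic is compact enough to state directly. The following sketch simply restates the calculation and the worked examples above; it is illustrative, not an implementation drawn from the patent.

    # Minimal sketch of the CRT arithmetic: response time is listening time
    # minus menu duration time, per prompt and summed per task.

    def response_time(listening_time, menu_duration):
        return listening_time - menu_duration

    def task_response_time(menu_events):
        """menu_events: (listening_time, menu_duration) pairs for every prompt
        the test customer heard while attempting one task."""
        total_listen = sum(l for l, _ in menu_events)
        total_duration = sum(d for _, d in menu_events)
        return total_listen - total_duration

    print(response_time(20, 20))   #  0: listened to the whole 20-second prompt
    print(response_time(15, 20))   # -5: chose option three 15 seconds in
    print(response_time(35, 20))   # 15: repeated the prompt before choosing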
Once performance engine 56 has determined customer satisfaction, task accomplishment, and response times, performance engine 56 generates a performance matrix which charts customer satisfaction, task accomplishment, and response times for each test customer, each customer-centric menu prompt, and each task. The performance matrix allows performance engine 56 to determine if any of the customer-centric menu prompts or tasks have unsatisfactory performance at step 94 by examining the combination of customer satisfaction, task accomplishment, and response times and thereby evaluating how well the customer-centric interface performs.
Ideally, a customer-centric menu prompt or task has a high level of customer satisfaction, a negative or zero response time, and a high rate of task accomplishment. For unsatisfactory performance, performance engine 56 looks for low customer satisfaction, low task accomplishment, or a high positive response time. By charting the customer satisfaction, task accomplishment, and response times on the performance matrix, performance engine 56 can determine when one of the test results is not satisfactory.
If a customer-centric menu prompt or task has unsatisfactory performance at step 94, then at step 96 performance engine 56 selects the menu prompt or task, at step 98 determines the reason for the unsatisfactory performance, and at step 100 modifies the customer-centric menu prompt or task to correct for the unsatisfactory performance. For example, a task may have a high level of customer satisfaction and a high rate of task accomplishment but a positive response time. The test customers are accomplishing the task and are satisfied when interacting with the customer-centric interface but are spending too much time interacting with the customer-centric interface, as indicated by the positive response time. The positive response time is undesirable because the customers are consuming unnecessary resources of the customer-centric interface in the form of too much time spent accomplishing the task. By examining the menu prompts for the task, performance engine 56 determines that the terminology used in the menu prompts for the task is not the terminology used by the customers. Therefore, performance engine 56 alerts customer language engine 54 to the terminology problem and customer language engine 54 rewords the menu prompts for the task using the customers' own terminology.
Once performance engine 56 locates and corrects the problem, performance engine 56 determines if there are additional menu prompts or tasks that have unsatisfactory performance at step 102. If at step 102 there are additional menu prompts or tasks having unsatisfactory performance, then at step 104 performance engine 56 selects the next menu prompt or task having unsatisfactory performance and returns to step 98.
Performance engine 56 repeats steps 98, 100, 102, and 104 until there are no additional menu prompts or tasks at step 102 having unsatisfactory performance. When there are no additional menu prompts or tasks having unsatisfactory performance at step 102, the process returns to step 90 and performance engine 56 tests the customer-centric interface having the modified menu prompts or tasks. Performance engine 56 repeats steps 90, 92, 94, 96, 98, 100, 102, and 104 until there are no customer-centric menu prompts or tasks having unsatisfactory performance at step 94.
When there are no customer-centric menu prompts or tasks having unsatisfactory performance at step 94, at step 106 system 10 implements the customer-centric interface for use by the customers. As customers use the customer-centric interface, system 10 and performance engine 56 continually monitor the performance of the customer-centric interface, checking for low customer satisfaction levels, low task completion rates, or high positive response times at step 108. When system 10 discovers an unsatisfactory post-implementation result such as those described above, system 10 determines the cause of the problem and modifies the customer-centric interface to correct the problem at step 110. As long as the customer-centric interface is accessible by the customers, system 10 monitors the customer-centric interface performance and modifies the customer-centric interface to allow for customer-centric menu prompts that are worded in the terminology of the customers, that directly match the tasks that the customers are trying to accomplish, and that are ordered and grouped by customer task frequencies and the customers' perceptions of task relationships.
FIGURE 4 illustrates a block flow diagram of how collection engine 52, customer language engine 54, task engine 50, customer structure engine 58, and performance engine 56 of system 10 interact and interoperate to automatically create the customer-centric interface. In addition, FIGURE 4 also represents the various functions for collection engine 52, customer language engine 54, task engine 50, customer structure engine 58, and performance engine 56.
Collection engine 52 gathers customer intention information from the customer opening statements and includes customer task model 128, which contains the list of tasks for which the customers access and use the customer-centric interface. Customer language engine 54, task engine 50, and customer structure engine 58 perform their various functions by processing and manipulating the customer intention information and task list.
Customer language engine 54 develops customer-centric menu prompts for the customer-centric interface using the customers' own terminology. Customer language engine 54 analyzes the customers' language by analyzing and tracking every word used by the customers in the customer opening statements to determine how the customers refer to each of the tasks. Customer language engine 54 counts each word in each customer opening statement to determine which words the customers use the most and thereby recognize which of the customers' words are best to use in creating customer-centric menu prompts using the customers' own terminology.
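A minimal sketch of this word-counting analysis, assuming the opening statements are available as plain transcribed strings; the statements below are hypothetical.

    # Minimal sketch of the word-frequency analysis: count every word across
    # the customer opening statements to surface the customers' own terminology.
    from collections import Counter
    import re

    def customer_terminology(opening_statements):
        counts = Counter()
        for statement in opening_statements:
            counts.update(re.findall(r"[a-z']+", statement.lower()))
        return counts

    statements = ["I want to order CALLNOTES",        # hypothetical statements
                  "I have a question about my bill",
                  "I need to order call waiting"]
    print(customer_terminology(statements).most_common(5))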
In addition to creating customer-centric menu prompts using the customers' own terminology, in other embodiments of system 10 customer language engine 54 may also create customer-centric menu prompts using action specific object words taken from the customer opening statements. FIGURE 5 illustrates a flow diagram for creating customer-centric menu prompts utilizing action specific object words. Customer wordings of tasks in customer opening statements generally fall into four different styles: action-object ("I need to order CALLNOTES"); action ("I need to make changes"); object ("I don't understand my bill"); and general ("I have some questions"). Menu prompts are typically worded in one of four styles: action specific object ("To order CALLNOTES press one"); specific object ("For CALLNOTES press two"); general object ("To order a service press three"); and action general object ("For all other questions press four").
The style of the menu prompt wording can have an effect on the performance of the menu prompt due to the customers' interaction with the menu prompt. Wording menu prompts as action specific object is typically the best way to word customer-centric menu prompts because upon hearing an action specific object menu prompt, the customer generally knows that it is the menu prompt they want to select; response times therefore decrease because customers do not have to repeat the menu prompts in order to make a selection. For example, if a customer calls wanting to order CALLNOTES and the second option in the six-option menu prompt is "To order CALLNOTES press two," then the customer will typically press two without listening to the rest of the menu prompt and therefore have a negative response time, high customer satisfaction, and a high task accomplishment rate.
In order to create customer-centric menu prompts using action specific object words, customer language engine 54 determines the action words and object words used by the customers. At step 132, customer language engine 54 analyzes the customer opening statements in customer opening statement database 32 in order to identify the action words and the object words used by the customers in their opening statements. In addition to identifying the action words and the object words, customer language engine 54 also determines which of the action words are specific action words and which of the object words are specific object words. For instance, "order" and "pay" are specific action words and "CALLNOTES" and "Call Waiting" are specific object words, while "service" and "question" are not specific object words.
At step 134, customer language engine 54 saves the specific action words in specific action database 34 and the specific object words in specific object database 36. When saving the specific action words and the specific object words, customer language engine 54 identifies and maintains the relationships between the specific action words and the specific object words by linking the specific action words with the specific object words that were used together by the customers, as shown by arrows 199 in FIGURE 5. For example, for the customer opening statements of "I want to buy CALLNOTES" and "I want to inquire about my bill," "buy" and "inquire" are the specific action words and "CALLNOTES" and "bill" are the specific object words. When customer language engine 54 saves the respective specific action words and specific object words in databases 34 and 36, a link will be maintained between "buy" and "CALLNOTES" and between "inquire" and "bill." Maintaining in databases 34 and 36 how the customers use the action words and object words prevents erroneous combinations of specific action words and specific object words when creating customer-centric menu prompts. An example of an erroneously combined menu prompt is "To buy a bill press one," a statement that would not make sense to the customer. Linking the specific action words with the specific object words which the customers used together allows for the formation of correct customer-centric menu prompts that make sense to the customers.
In addition to storing the specific action words and the specific object words in databases 34 and 36, customer language engine 54 also calculates a frequency of occurrence for each specific action word and each specific object word and stores the specific action words and the specific object words in databases 34 and 36 in descending frequency order. Therefore, the specific action words having the highest frequency of occurrence are stored at the top of specific action database 34 and the specific object words having the highest frequency of occurrence are stored at the top of specific object database 36. Once customer language engine 54 determines the frequencies of occurrence and stores the specific action words and the specific object words, at step 136 customer language engine 54 generalizes the specific action words into general groups of specific action words and generalizes the specific object words into general groups of specific object words. Customer language engine 54 examines the specific action words and the specific object words for commonalities and then groups the specific action words and the specific object words together in groups based on the commonalities. For example, the specific action words of "buy," "order," and "purchase" all share the commonality of acquiring something and may be grouped together. The specific object words of "CALLNOTES" and "Call Waiting" share the commonality of being residential telephone services and therefore may be grouped together. Customer language engine 54 assigns names to each of the general groups of specific action words and specific object words and saves the general action words in general action database 38 and the general object words in general object database 40 at step 138.
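A minimal sketch of databases 34 and 36, assuming the opening statements have already been reduced to (specific action, specific object) pairs; the pairing step itself is not shown, and the pairs below are hypothetical.

    # Minimal sketch: frequency-ordered action/object words plus the preserved
    # links between words the customers actually used together.
    from collections import Counter

    pairs = [("order", "CALLNOTES"), ("pay", "bill"),     # hypothetical pairs
             ("order", "Call Waiting"), ("inquire", "bill")]

    action_freq = Counter(a for a, _ in pairs)    # specific action database 34
    object_freq = Counter(o for _, o in pairs)    # specific object database 36
    links = set(pairs)                            # valid action-object links

    def valid_prompt(action, obj):
        """Only word a menu prompt from a pairing customers actually used,
        so prompts like 'To buy a bill press one' are never generated."""
        return (action, obj) in links

    print(action_freq.most_common())
    print(valid_prompt("order", "CALLNOTES"), valid_prompt("pay", "CALLNOTES"))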
Specific action database 34, specific object database 36, general action database 38, and general object database 40 provide a valuable resource for customer language engine 54 to locate customer terminology when creating customer-centric menu prompts. For creating upper level hierarchical menu prompts, customer language engine 54 uses words from general action database 38 and general object database 40. To create action specific object menu prompts in the words of the customers for lower level hierarchical menu prompts, customer language engine 54 uses words from specific action database 34 and specific object database 36. Because the specific action words and the specific object words are ordered by frequency in databases 34 and 36, customer language engine 54 can create action specific object menu prompts using the customer terminology most often used by the customers.
While customer language engine 54 determines the customer terminology and wording to use for the customer-centric menu prompts, task engine 50 determines the frequency of occurrence for the tasks that the customers call about and also determines which tasks will be included in the customer-centric interface. Generally the customer opening statements are from more than one call center, so when determining the frequency of occurrence for each task, task engine 50 takes into account the volume of calls into each call center when constructing the task frequency table so that the frequency results are accurate. Frequency of occurrence data must be weighted so that a call center receiving three million calls does not have the same weight as a call center receiving ten million calls.
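A minimal sketch of the volume weighting, assuming each call center reports its call volume and per-task frequencies; the numbers below are hypothetical.

    # Minimal sketch of volume-weighted task frequencies: each call center's
    # per-task rates are weighted by that center's share of total call volume.

    def weighted_task_frequency(center_stats):
        """center_stats: list of (call_volume, {task: frequency}) per call center."""
        total_volume = sum(v for v, _ in center_stats)
        combined = {}
        for volume, freqs in center_stats:
            for task, f in freqs.items():
                combined[task] = combined.get(task, 0.0) + f * volume / total_volume
        return combined

    # Hypothetical numbers: a 10M-call center outweighs a 3M-call center.
    centers = [(10_000_000, {"billing": 0.20, "repair": 0.10}),
               (3_000_000,  {"billing": 0.05, "repair": 0.30})]
    print(weighted_task_frequency(centers))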
Once task engine 50 determines the tasks to be included in the customer-centric interface, including all tasks down to a 1% frequency of occurrence or to a specified percentage coverage, customer structure engine 58 elicits customer perceived task relationships for the included tasks as described above. Utilizing the customer perceived task relationships, customer structure engine 58 creates the interface structure for the customer-centric interface and represents the interface structure both as a numerical data matrix and as a graphical representation.
At box 130, customer language engine 54, task engine 50, and customer structure engine 58 work together to automatically create the customer-centric interface. Customer language engine 54 contributes the wording of the customer-centric menu prompts in the customers' own terminology for the customer-centric interface. Task engine 50 provides the tasks that are to be included in the customer-centric interface, the ordering of the groups of tasks in the menu prompts, and the ordering of the tasks within the groups of tasks. Customer structure engine 58 provides the interface structure or grouping of tasks for the customer-centric interface. After the automated creation of the customer-centric interface, performance engine 56 performs usability tests on the customer-centric interface as described above and evaluates and reconfigures the customer-centric interface based on customer satisfaction, task accomplishment, and response times during both the testing phase and implementation.
As described above, when creating a customer-centric interface, the customers' opening statements need to be categorized in order to determine what tasks the customers are calling about. Therefore, FIGURE 7 depicts a flow diagram of a method for the automated categorization of statements. The method begins at step 180, and at step 182 a user selects the statements to be categorized. Before system 10 can automatically categorize the statements, the user must have one or more statements to categorize and load the list of statements into system 10. The statements may be opening statements as defined above, written statements from a training session, survey responses, search statements from a web site or pop-up window, statements evaluating a customer's experience and satisfaction in a test environment, or any other appropriate response to an open-ended question that can be analyzed using content text analysis.
Typically, the statements are recorded, transcribed, configured in a format that can be understood by system 10, and then placed in a text file which may be stored in database 32. Because there may be more than one list of statements and therefore more than one text file, the user chooses which list of statements to categorize by selecting a text file using open file button 144. Open file button 144 allows the user to view all the available files containing statements and then select the file containing the list of statements to be categorized. Once the list of statements has been selected, system 10 reads the list of statements from database 32. After the selection of the statements to be categorized, at step 184 the user decides whether to use rule engine 48 to create new rules to categorize the statements or to use existing rules already stored in database 33. If at step 184 the user decides to create new rules, then at step 186 the user accesses rule engine 48 to create new rules. New rules are desirable when new products or services have recently been made available to the customers and the existing rules do not reflect these new products or services, or when the statements are from a new domain not covered by the existing rules, such as survey responses where all the existing rules pertain to statements from customer service call centers.
The user utilizes rule engine 48 and rule creation screen 160 to create new rules and then edit the newly created rules. Creation of the rules involves the use of four include boxes 162, 163, 165, and 167 and two exclude boxes 169 and 171. In other embodiments, there may be more or fewer than four include boxes and more or fewer than two exclude boxes. The user inputs, in include boxes 162, 163, 165, and 167, combinations of words and text strings that must be present in a statement in order for the statement to satisfy the rule, and inputs, in exclude boxes 169 and 171, combinations of words and text strings that must not be present in the statement in order for the statement to satisfy the rule. Each rule is also associated with a particular category label which the user enters in category label box 164.
For example, a user may want to create a new rule to categorize statements with respect to the late payment of customer bills. Therefore "late" may be entered in include box 162, "bill" may be entered in include box 163, "paid" may be entered in exclude box 169, and "labill" may be entered in category label box 164. This allows for a rule that finds statements that contain the words "late" and "bill" but do not contain the word
"paid." If a statement contains the words "late" and "bill" and does not include the word "paid, " then the statement would be categorized with the category label "labill," meaning the purpose of the statement is to inquire about a late bill that has not yet been paid. Once a user enters in the desired words or text strings in include boxes 162, 163, 165, and 167 and exclude boxes 169 and 171, the user selects apply rule button 166 and the rule appears in rule screen 170 and is available to be edited and used to categorize the statements. The user may then repeat the above process to create as many rules as needed. In addition, other embodiments allow for rules where a noun in the singular form in include box 162 includes all forms of the noun (singular and plural) and a verb in the present tense in include box 162 includes all tenses and forms of that verb. This allows for a bigger hit rate when applying the rules to the statements since one rule is satisfied by a statements containing any form of the noun or verb and saves time because multiple rules are not required for each form of the noun or verb. After the creation of the rules, at step 188 the user groups the rules into sets of rules. There may be different sets of rules for different applications or divisions of a company. For example, the marketing division may have a set of rules to categorize a list of statements while the product development division may have a different set of rules to categorize the same list of statements. This is because different users may be interested in different terms with respect to a list of statements. In addition, different sets of rules may also be necessary for different kinds of statements or statements from different domains. A user may use one set of rules to categorize opening statements from a call center and a different set of rules to categorize survey responses from a web survey questionnaire. Therefore, rule engine 48 allows for the rules to be grouped into different sets of rules with the name for each set of rules displayed in set box 168 and the sets of rules saved in database 33. In addition, the user may group only newly created rules together in a group or group together newly created rules with existing rules when creating sets of rules. At step 190, the rules must be arranged in a rule order in accordance with a rule hierarchy enabling performance engine 56 to apply the rules in the correct order thereby preventing inconsistent results. Typically the rule hierarchy is from specific rules to general rules but can be any other appropriate way of ordering the rules. For a specific to general rule hierarchy, performance engine 56 applies the most specific rules first to a statement and then applies the more general rules if the statement does not satisfy any of the specific rules.
For example, a user may want to find both "phone" and "telephone" separately. A rule specifying "telephone" needs to be above the rule specifying "phone" in the rule hierarchy so that the "telephone" rule is applied to a statement before the "phone" rule is applied. If the "phone" rule is applied before the "telephone" rule, then when performance engine 56 encounters a statement containing the word "telephone," performance engine 56 will find "phone" within "telephone" and categorize the statement with the "phone" category label instead of the "telephone" category label, and the statement will be incorrectly categorized. But if the "telephone" rule is placed above the "phone" rule in the rule hierarchy, then performance engine 56 will find "telephone" in the statement, categorize that statement with the "telephone" category label, and move on to the next statement without applying the "phone" rule. Therefore, the most specific rules need to be placed at the top of the rule hierarchy and the most general rules need to be placed at the very end or bottom of the rule hierarchy, with a gradual gradient from specific to general in between.

Once the rules have been grouped and ordered in a correct rule hierarchy, rule engine 48 stores the newly created rules, sets of rules, and rule hierarchy in database 33 at step 192 so that users and performance engine 56 may later access the rules. After rule engine 48 saves the rules, at step 194 the user selects the rule or the set of rules that the user wants performance engine 56 to apply to the list of statements.

If at step 184 the user decides not to create any new rules but instead to use existing rules, then at step 196 the user selects and edits rules from the lists of existing rules stored in database 33. Existing rules include rules that have already been created and saved by the process outlined above at steps 186 through 194. If a user has already created a set of rules that has worked well in the past in categorizing statements, then the user may want to use these rules instead of creating new rules. The user selects from the list of rules in set box 168 and the rules from the selected set of rules appear in rule screen 170. Once the rules appear in rule screen 170, the user may edit an existing rule such as rule 173 by selecting it in rule screen 170 and clicking edit rule button 156. The rule then appears in rule creation screen 160 and the user may modify include boxes 162, 163, 165, and 167 and exclude boxes 169 and 171. Once the user has a set of rules for performance engine 56 to apply to the list of statements, the process continues to step 198.

At step 198, the user selects run button 148 and performance engine 56 applies the selected rules to the list of statements in order to determine a category label for each statement. Performance engine 56 cycles through the list of statements one statement at a time, applying the rules to a statement until the statement satisfies a rule. Performance engine 56 begins applying the rules to the list of statements at step 200 by applying the first rule in the rule hierarchy to the first statement in the list of statements. When performance engine 56 applies the rules to the statements, performance engine 56 strips the punctuation off the statements so that "bill," and "bill" do not appear as two different text strings.
At step 202, performance engine 56 determines if the statement satisfies the first rule. Performance engine 56 determines if a statement satisfies a rule by searching the statement for the presence of particular text string combinations or words and the exclusion of other text string combinations or words. For instance, rule 173 is the highest rule in the rule hierarchy shown in rule screen 170. Therefore, performance engine 56 searches the first statement to see if the text string "dsl" is present in the first statement. If "dsl" is not present in the first statement, then the first statement does not satisfy rule 173. If the statement does not satisfy the rule, then at step 204 performance engine 56 checks to see if there are additional rules in the set of rules to apply to the statement. If there are additional rules to apply to the statement, then at step 206 performance engine 56 applies the next rule in the rule hierarchy to the statement and the process returns to step 202 where performance engine 56 determines if the statement satisfies this rule. Steps 202, 204, and 206 repeat until either the statement satisfies a rule at step 202 or until the statement does not satisfy any of the rules at step 202 and there are no more rules to apply to the statement at step 204.
If the statement satisfies a rule at step 202, then at step 208 performance engine 56 assigns the category label associated with the satisfied rule to the statement. So if the statement contained the text string "dsl," then performance engine 56 assigns the "dsl" category label to the statement. But if the statement does not satisfy any of the rules at step 202 and there are no more rules left to apply at step 204, then performance engine 56 applies a catch-all rule to the statement and labels the statement with the catch-all category label at step 210. The catch-all rule and category label are designed for statements that do not fit within any of the other rules. Performance engine 56 labels the statement as catch-all so that the statement may be examined at a later date to determine if the statement really does not satisfy any of the rules or if a malfunction of system 10 resulted in the statement not satisfying any of the rules. A high number of catch-all category labels may indicate that system 10, rule engine 48, or performance engine 56 is not operating correctly and requires attention.
After performance engine 56 assigns a category label to the statement at either step 208 or step 210, at step 212 performance engine 56 checks to see if there are additional statements in the list of statements that require categorization. If there are additional statements to be categorized at step 212, then at step 214 performance engine 56 selects the next statement to be categorized, applies the first rule in the rule hierarchy to the statement, and then determines if the statement satisfies the rule at step 202. Performance engine 56 repeats steps 202-212 until performance engine 56 determines at step 212 that there are no additional statements to be categorized.
For instance, a statement to be categorized is "I cannot access my email account." Performance engine 56 applies the first rule in rule screen 170, rule 173, to the statement. Performance engine 56 applies rule 173 by searching the statement "I cannot access my email account" for the text string "dsl." Performance engine 56 determines that the statement does not contain the text string "dsl" and therefore the statement does not satisfy rule 173. Performance engine 56 then applies each rule below rule 173 to the statement one rule at a time until the statement satisfies a rule. When performance engine 56 gets to rule 175 and applies rule 175 to the statement, performance engine 56 determines that the statement includes the text string "email" and does not include the text strings "bill" and "can't comm." Therefore, the statement satisfies rule 175 and performance engine 56 assigns category label "email" to the statement.
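Putting the pieces together, the following is a minimal sketch of the categorization loop of steps 200 through 210, with rules held in hierarchy order (most specific first), punctuation stripped from each statement, and a catch-all fallback. The rule tuples are hypothetical stand-ins for the rules shown in rule screen 170.

    # Minimal sketch of the categorization loop: apply rules in hierarchy
    # order and fall through to a catch-all label when no rule is satisfied.
    import string

    # Rules in hierarchy order, most specific first: (label, include, exclude).
    RULES = [("dsl", ["dsl"], []),
             ("telephone", ["telephone"], []),   # must precede the "phone" rule
             ("phone", ["phone"], []),
             ("email", ["email"], ["bill"])]

    def categorize(statements, rules, catch_all="catch-all"):
        labeled = []
        for statement in statements:
            # Strip punctuation so 'bill,' and 'bill' are the same text string.
            clean = statement.lower().translate(
                str.maketrans("", "", string.punctuation))
            for label, include, exclude in rules:
                if (all(w in clean for w in include)
                        and not any(w in clean for w in exclude)):
                    labeled.append((statement, label))
                    break
            else:                                # no rule satisfied: catch-all
                labeled.append((statement, catch_all))
        return labeled

    print(categorize(["I cannot access my email account.",
                      "My telephone is broken",
                      "Where is my rebate"], RULES))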
When there are no additional statements to be categorized, performance engine 56 creates an output file at step 216 and the process ends at step 218. The output file includes all the statements from the list of statements and each corresponding category label. An example output file with three statements is shown in Table 1.
[Table 1 (rendered as an image in the original): an example output file listing three statements and their assigned category labels.]
The output file allows system 10 or a user to determine the frequency of occurrence for each category label and therefore determine which categories the customers are calling about the most. Knowing which categories the customers are calling about the most allows for a customer-centric interface design that takes into account the customers' way of thinking and is therefore easier for the customer to use. An interface design that is easier for the customer to use allows the customer to accomplish tasks in less time and in a more efficient manner, resulting in fewer company resources being used in servicing the customers and therefore lower costs for the company.
In order to make the customer-centric interface accessible and easy to use for the customers, the customer-centric interface needs to be continually tested and modified using both actual and test data. FIGURE 10 depicts a flow diagram of a method for the automated analysis of performance data. The method begins at step 310, and at step 312 a user or an operator of system 10 selects the performance data to be analyzed. System 10 allows for up to three different log files to be analyzed at one time. In other embodiments, system 10 may analyze more than three log files at the same time. Each time an IVR study or test occurs, a log file containing performance data from that test is created. So if there are three IVR tests in one day - one in the morning, one in the afternoon, and one in the evening - then there will be three log files at the end of the day. System 10 and GUI 46 allow for simultaneous analysis of the three log files to provide more efficient operation of system 10.
To analyze more than one log file at a time, the user selects the log file to be analyzed in input windows 230, 232, and 234. If only one log file is to be analyzed, the user selects the log file in input window 230. If more than one log file is to be analyzed, the first log file is selected in input window 230, the second log file is selected in input window 232, and the third log file is selected in input window 234. When selecting the log files to be analyzed, the user may also want to select the location to save the output file, which can be done in output window 236.
Once the log files to be analyzed have been selected, the user presses process button 238 and system 10 begins to automatically analyze the performance data contained in the log file. At step 314, system 10 selects the first performance data set in the log file to analyze. A performance data set is the recorded data and information regarding one specific participant and one specific task for that participant. Generally in an IVR test, a participant is given four different tasks to accomplish such as "order DSL service" or "change your billing address." For example, a performance data set would contain the recorded information for participant A and the task of ordering DSL service.
An example log file 250 including two example performance data sets 252 and 254 is shown in FIGURE 10. A performance data set includes such information as the start time of the task, each menu accessed by the participant within the IVR, the time each menu was accessed, how long the participant listened to each menu, the key the participant pressed in response to the menu, and the total time the participant interacted with the IVR system.
Performance data sets are separated in a log file by start lines and end lines. Performance data set 252 includes start line 251 and end line 263, while performance data set 254 includes start line 265 and end line 277. Start lines 251 and 265 include the date of the IVR test, what IVR system is being tested, and the time that the first IVR menu begins to play. In start line 251, the date of the test is April 5, 2002, the IVR system being tested is Yahoo2 - Version B, and the first menu began playing at 8:23:53 AM. End lines 263 and 277 include total listening times 276 and 298, which are the total times that the participant spends listening to the menus and interacting with the IVR system. Performance data set 252 has a total listening time 276 of 83 seconds and performance data set 254 has a total listening time 298 of 64 seconds. Each line between start line 251 and end line 263, and between start line 265 and end line 277, provides information regarding the various submenus within the IVR accessed by the participant. For performance data set 252 and line 253, BMainMenu was accessed at 8:23:53 AM, the participant listened to BMainMenu for 30 seconds (listening time 256), pressed the "2" key (key 258), and BMainMenu stopped playing at 8:24:23 AM. Lines 255, 257, 259, and 261 supply the same type of information for each respective menu. Key 258 in line 261 is "TO," which indicates that the participant never made a selection in response to the "B22110" menu and therefore the participant was timed out of the menu.
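A minimal sketch of splitting a log file into performance data sets follows; the patent describes the fields (a start line, per-menu lines with listening time and key, an end line with total listening time) but not a concrete layout, so the line format below is an assumption.

    # Minimal sketch: split a log file into performance data sets, one per
    # start/end line pair; the textual layout here is hypothetical.

    def parse_log(lines):
        data_sets, current = [], None
        for line in lines:
            if line.startswith("START"):
                current = {"header": line, "menus": []}
            elif line.startswith("END"):
                current["total_listening"] = int(line.split()[1])
                data_sets.append(current)
                current = None
            elif current is not None:
                menu, listen, key = line.split()   # menu name, seconds, key
                current["menus"].append((menu, int(listen), key))
        return data_sets

    log = ["START 04/05/2002 Yahoo2-B 08:23:53",   # hypothetical layout
           "BMainMenu 30 2",
           "B2000 12 2",
           "END 83"]
    print(parse_log(log))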
Once system 10 has selected the performance data set to be analyzed, task engine 50 determines a task code and task for the selected data set at step 316. The performance data sets do not contain a participant number identifying the participant or the task. But the participant number is stored in database 33 in a log-in call record file. When the participants access the IVR simulation application, system 10 stores in database 33 each participant's participant number and the tasks they are to accomplish. Participants are generally given more than one task to accomplish, the participants are to attempt the tasks in a pre-specified order, and the log files reflect this specified order of tasks. For example, if each participant is given four tasks to accomplish, then the log file includes four performance data sets for each participant, where the four performance data sets for each participant are grouped together in the same sequence as the participant attempted each task. So if participant A was given the four tasks of "order DSL service," "change your billing address," "inquire about a bill payment," and "add call-forwarding," the log file has the four performance data sets for participant A one after the other in the same order as participant A was specified to attempt the tasks. Therefore, task engine 50 locates the participant number in database 33, determines what tasks the participant was supposed to accomplish and the order in which the tasks were to be accomplished, and determines which performance data sets correlate with which participants and tasks.

After task engine 50 determines the correct task for the selected performance data set, at step 318 task engine 50 retrieves from database 33 the correct key sequence for the corresponding task. Each task has a distinct correct key sequence, so that, for example, the correct key sequence for "ordering DSL service" is different from the correct key sequence for "changing your billing address." The correct key sequence is the sequence of keys pressed in response to the IVR menu prompts that allows the participant to navigate the IVR menus and successfully accomplish the assigned task. For instance, the task of "ordering DSL service" requires the participant to navigate through and listen to three different menus in order to order DSL service. After the first menu, the participant needs to press the "3" key, which sends the participant to the second menu. After the second menu, the participant needs to press the "2" key, which sends the participant to the third menu. After the third menu, the participant needs to press the "4" key, after which the participant has ordered DSL service and successfully completed the task. Therefore the correct key sequence for the task of "order DSL service" is "3, 2, 4."
At step 320, performance engine 56, having the correct key sequence from task engine 50, searches the selected performance data set for the correct key sequence. Performance engine 56 searches the last few keys 280 for the correct key sequence. Performance engine 56 starts with the line right above end line 277 and searches up through lines 275, 273, 271, 269, and 267 to start line 265 looking for the correct key sequence. Performance engine 56 examines the end of the selected performance data set because that is the only location where the correct key sequence may be located: when the participant enters the correct key sequence, the task is accomplished, the performance data set ends, and the participant moves on to the next assigned task. Therefore, once the participant enters the last key of the correct key sequence, the next line in the performance data set is end line 277 and a new performance data set begins.
Performance engine 56 compares the recorded key sequence entered by the participant with the correct key sequence at step 322. For example, performance data set 254 is for the task of "changing your billing address" and the task has a correct key sequence of "2, 2, 1, 1, 5." Performance engine 56 compares the correct key sequence with the recorded key sequence in performance data set 254 beginning with line 275 which has "5" as key 280. Performance engine 56 then moves up to line 273 to look for "1" as key 280 and finds "1" as key 280. Performance engine 56 repeats this process for lines 271, 269, and 267 until a line does not have the correct key 280 or until performance engine 56 determines that the recorded key sequence of performance data set 254 is the same as the correct key sequence.
Once performance engine 56 compares the correct key sequence with the recorded key sequence for the selected performance data set at step 322, at step 324 performance engine 56 determines if the task for the selected performance data set was successfully accomplished. The task is successfully accomplished if the recorded key sequence includes the correct key sequence. The task is not successfully accomplished, or is a failure, if the recorded key sequence does not include the correct key sequence. If the task is not accomplished, then at step 326 performance engine 56 marks the selected performance data set as a failure. If the task is successfully accomplished, then at step 328 performance engine 56 marks the selected performance data set as a success or as passing. For example, performance data set 252 timed out ("TO") in line 261 because the participant made no selection; therefore performance data set 252 cannot contain the correct key sequence and performance engine 56 marks performance data set 252 as failing. Determining whether the selected performance data set accomplished the task provides an objective performance measure of call-routing accuracy.
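A minimal sketch of the pass/fail determination of steps 320 through 328, assuming the recorded key presses for a performance data set are available as a list; matching the tail of the list mirrors performance engine 56's search upward from the end line.

    # Minimal sketch: a task passes only if the recorded key presses end
    # with the task's correct key sequence.

    def task_accomplished(recorded_keys, correct_sequence):
        if "TO" in recorded_keys:      # a time-out can never complete a task
            return False
        n = len(correct_sequence)
        return recorded_keys[-n:] == correct_sequence

    # "Changing your billing address" has correct key sequence 2, 2, 1, 1, 5.
    print(task_accomplished(["2", "2", "1", "1", "5"],
                            ["2", "2", "1", "1", "5"]))   # True: success
    print(task_accomplished(["2", "3", "2", "TO"],
                            ["3", "2", "4"]))             # False: timed out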
In addition to call-routing accuracy, system 10 also provides another objective performance measure - the amount of time the participant listens to each IVR menu and the total amount of time spent attempting to accomplish or accomplishing the assigned task. The amount of time the participant spends listening to the menu is not very meaningful unless menu duration times are also taken into account. A menu duration time is the amount of time it takes for a menu to play in its entirety. For instance, a menu may have five different options to choose from, and the menu duration time is the amount of time it takes for the menu to play through all five options.

At step 330, performance engine 56 obtains the menu duration time from database 33 for the first menu in the selected performance data set. Performance engine 56 also obtains the listening time for the first menu in the selected performance data set. The listening time is the time a participant actually spends listening to a menu before making a selection. For instance, performance data set 254 contains the first menu BMainMenu, which has listening time 278 of 30 seconds (line 267). From database 33, performance engine 56 retrieves that menu BMainMenu has a menu duration time of 30 seconds. Once performance engine 56 obtains both the listening time and the menu duration time, performance engine 56 calculates the response time or the cumulative response time (CRT) for the first menu at step 332. The response time is the difference between the listening time and the menu duration time. Performance engine 56 calculates the response time by subtracting the menu duration time from the listening time. For example, if the main menu of the IVR is 20 seconds in length, and the participant listens to the whole menu and then makes a selection, the participant has a listening time of 20 seconds and receives a CRT score or response time of 0 (20 - 20 = 0). If the participant only listens to part of a menu, hears their choice, and chooses an option before the whole menu plays, then the participant receives a negative CRT score or response time. For instance, if the participant chooses option three 15 seconds (listening time) into a four-option, 20-second menu, the participant receives a CRT score or response time of -5 (15 - 20 = -5). Conversely, the participant has a response time of +15 if the participant were to repeat the menu after hearing it once and then choose option three 15 seconds (35-second listening time) into the second playing of the menu (35 - 20 = 15). For performance data set 254 and line 267, the participant has a response time or CRT score of 0 because the participant has a listening time of 30 seconds and the BMainMenu menu has a menu duration time of 30 seconds (30 - 30 = 0).
After the calculation of the response time for the first menu, performance engine 56 at step 334 determines if the selected performance data set has additional menus for which a response time needs to be calculated. If there are additional menus within the selected performance data set at step 334, then at step 336 performance engine 56 obtains the menu duration time from database 33 for the next menu and the listening time for the next menu in the same manner as performance engine 56 obtained the menu duration time and listening time for the first menu at step 330. So for performance data set 254, performance engine 56 obtains the menu duration time and listening time for line 269 and menu "B20." Once performance engine 56 obtains the menu duration time and the listening time for the next menu, at step 338 performance engine 56 calculates the response time for the next menu in the same manner as described above at step 332. The method then returns to step 334, where performance engine 56 determines if the selected performance data set has additional menus that have not yet been analyzed. Steps 334, 336, and 338 repeat until there are no additional menus to be analyzed within the selected performance data set.

If there are no additional menus within the selected performance data set at step 334, then at step 340 performance engine 56 calculates the total response time for the selected performance data set. The total response time is the difference between the total listening time and the total menu duration time. Performance engine 56 calculates the total response time by first summing the menu duration times and the listening times for each menu within the selected performance data set. Once performance engine 56 has both a total menu duration time and a total listening time, performance engine 56 calculates the total response time for the selected performance data set by subtracting the total menu duration time from the total listening time. A negative total response time indicates that less time was used than required to accomplish the task, a zero response time indicates that the participant used exactly the amount of time required to accomplish the task, and a positive response time indicates that more time was used than required to accomplish the task. For instance, performance data set 254 has a total listening time 298 of 64 seconds and a total menu duration time of 75 seconds. Therefore, performance data set 254 has a total response time of -11 seconds (64 - 75 = -11).

Once performance engine 56 calculates the total response time for the selected performance data set, at step 342 system 10 determines if there are additional performance data sets within the log file to be analyzed. If at step 342 there are additional performance data sets, then system 10 selects the next performance data set at step 344 and the method returns to step 316 and repeats as described above until there are no additional performance data sets in the log file or files to be analyzed at step 342.
When there are no additional performance data sets to be analyzed at step 342, system 10 and performance engine 56 generate an output file at step 346 and the method ends at step 348. The output file is similar in structure to the log file and performance data and is sorted by participant and sequence of task. The output file includes all the information in the log file as well as additional information such as the participant number, the assigned task, the IVR system used by the participant, the response time for each menu, the total response time, and whether the task was successfully accomplished. The output file may also contain the correct key sequence for each performance data set. The output file allows a user of system 10 to determine which IVR menu and tasks may need to be redesigned based on high positive response times or failures to accomplish tasks. For example, a performance data set for a particular task that was successfully accomplished but has very high response times may indicate that the menus need to be redesigned or reworded because although the participants accomplished the task, they had to listen to the menus several times before being able to make a selection.
In addition to the output file, GUI 46 has an additional feature that allows a user of system 10 to quickly determine the reliability of IVR test results. Summary window 240 allows the user to quickly determine the pass/fail results for task accomplishment for each participant. Because some participants may not take the IVR test seriously and others may only be taking the test to be paid, not all of the participants actually attempt to accomplish the assigned tasks. A participant intentionally failing all assigned tasks harms the overall test results and affects the analysis of the IVR system. A participant failing all of their assigned tasks is a good indication that the participant did not really try and that the associated performance data should be ignored when analyzing the output file. Summary window 240 allows the user to quickly peruse each participant's pass/fail results and call-routing accuracy without having to examine the output file and therefore determine which performance data should be disregarded and which tasks need to be tested again.
The call-routing and response time results of the IVR usability test yield important information that can be used in the further development and refinement of IVR systems. Based on these measures, companies can select IVR designs associated with the best objective performance and usability score and have an IVR system that is efficient and satisfactory to the customers.
Further enabling a customer-centric interface design is the ability to tailor the persona of the IVR system to each individual customer. Referring now to FIGURES 11 through 13, a method for conducting a dialog exchange between a user and an IVR system 12 is generally depicted. The method preferably enables an operator of a customer service call center, for example, to achieve, among other benefits, greater numbers of favorable responses to system prompts by matching the active persona of the IVR system 12 to one or more personality traits or characteristics of a current caller into the call center. In general, method 360 preferably identifies one or more personality traits or characteristics of a caller or user and activates an IVR system persona likely to put the user at ease during the user's interaction with the IVR system 12, elicit desirable responses to IVR system 12 prompts, such as sales prompts, as well as achieve other benefits.
Method 360 may be implemented in a variety of ways. For example, method 360 may be implemented in the form of a program of instructions storable on and readable or executable from one or more computer readable media such as floppy disks, CD-ROM, HDD devices, FLASH memory, etc. Alternatively, method 360 may be implemented in one or more ASICs (application-specific integrated circuits). In a further embodiment, method 360 may be implemented using both ASICs and computer readable media. Other methods of enabling method 360 to be stored and/or executed by a computer system, such as system 18, are contemplated and considered within the scope of the present invention. Other embodiments of the invention also include computer-usable media encoding logic such as computer instructions for performing the operations of the invention. Such computer-usable media may include, without limitation, storage media such as floppy disks, hard disks, CD-ROMs, read-only memory, and random access memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic or optical carriers. The control logic may also be referred to as a program product.

Specifically referring to FIGURE 11, method 360 begins at 362 where IVR system 12 is preferably initialized. Upon initialization of IVR system 12 at 362, method 360 preferably proceeds to 364.
At 364, method 360 preferably remains or loops in a wait-state where a call from a user may be awaited. As with many computer-based systems, IVR system 12 may perform additional tasks, i.e., multi-task, while in a wait-state at 364. In other words, IVR system 12 may perform one or more other computing or data processing functions while awaiting an incoming call at 364. Method 360 preferably maintains IVR system 12 in a wait-state at 364 while there is no call detected or being received on communications link 16. As with many computer-based systems, one or more escape routines may be run alongside or in conjunction with method 360 which will enable a system administrator or other IVR system 12 operator to interrupt method 360 and thereby free up one or more resources of IVR system 12.
Once an incoming call is detected or being received at 364, method 360 preferably proceeds to 366 where a communication connection between IVR system 12 and the user's communication device 17 may be established. User communication device 17 is preferably operable to allow a user to submit voice, touch-tone or other responses to prompts communicated from IVR system 12. Examples of user communication devices include, but are not limited to, telephones, mobile phones, PDAs (personal digital assistants), personal computers, portable computers, etc.
As mentioned above, a user may contact IVR system 12 via communications link 16. Establishing a communications connection by IVR system 12 can include initiating a program or software sequence operable to accept an incoming call from a calling user. Alternatively, IVR system 12 may include functionality operable to permit IVR system 12 to initiate contact with one or more users. For example, IVR system 12 may be configured with auto-dialer type capabilities. Other methods of establishing a communication connection between IVR system 12 and a user communication device 17 are contemplated within the scope and spirit of the present invention.
In one embodiment of method 360, IVR system 12 may be configured to evaluate or identify one or more characteristics of the call or communication connection from the current user at 368. For example, if the user dialed into a specific one of a plurality of IVR system 12 access numbers, the specific number dialed might be associated with users from a specific geographic region, a specific service, sale, lease or use of a specific product, etc. In addition, IVR system 12 may also be configured to determine whether the user is calling during a holiday, at a particular time of day, etc. IVR system 12 may be further configured with automatic number identification (ANI), enabling IVR system 12 to identify one or more personal, geographic or other characteristics of the user from their calling number by referring to a customer database. Using either the information gathered from the user's incoming call or from one or more IVR system 12 configurations and settings, method 360 preferably then proceeds to 370 where a first prompt for the user may be generated. Preferably, the first prompt generated by IVR system 12 includes a request for a user response.
Further, the request for a user response will preferably encourage the user to respond with a spoken or verbal response. Depending on the type of communications link 16 and user communication device 17, the request for a user response may assume other preferred constructs. The text, voice, gender, rate of speech and other characteristics of the first prompt may be determined or dictated by the information gathered from the user's incoming call, from one or more IVR system 12 settings, as well as from other factors.
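By way of illustration only, the call-characteristic evaluation at 368 might resemble the following sketch, in which the access numbers, customer database interface, and attribute names are all hypothetical:

    # Hypothetical mapping of dialed access numbers to caller attributes.
    ACCESS_NUMBER_REGION = {"8005550100": "southwest", "8005550199": "northeast"}

    def call_characteristics(dialed_number, calling_number, customer_db, now):
        # now: a datetime.datetime used for holiday/time-of-day checks.
        traits = {"region": ACCESS_NUMBER_REGION.get(dialed_number),
                  "hour_of_day": now.hour}
        # ANI lookup: the calling number may identify personal or
        # geographic characteristics of the user in a customer database.
        customer = customer_db.get(calling_number)
        if customer:
            traits.update(customer)
        return traits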
In one embodiment of the present invention, IVR system 12 may identify the user from one or more communication link characteristics of the user's incoming call and access a stored user persona profile for the calling user, such as a persona from stored user persona profiles 68. For example, ANI information may be used to identify the caller. The stored user persona profile may contain the IVR system 12 persona used during the user's last call, for example. The first prompt may be generated based on one or more speech parameters identified in the stored user persona profile. Upon generation of a first user prompt at 370, method 360 preferably proceeds to 372.

At 372, the first user prompt may be communicated to the user. In general, IVR system 12 preferably communicates the first prompt to the user over communications link 16 to user communication device 17 via communications interface 26. The first prompt may be generated using one or more speech generation applications and/or hardware devices and according to the IVR system persona then in effect, e.g., a default IVR system persona or an IVR system persona identified from one or more call characteristics. Upon communication of the first prompt to the user at 372, method 360 preferably proceeds to 374.

At 374, IVR system 12 preferably awaits a user response to the first prompt. To avoid trapping IVR system 12 in a loop waiting for the current user to respond to the first prompt, if no response is detected within a reasonable delay after prompting, method 360 preferably proceeds to 376. At 376, a determination is made as to whether a predetermined amount of overall or total wait time for a user response has been exhausted. If the predetermined amount of overall or total wait time has not been exhausted, method 360 preferably loops at 376 until such an amount of time has lapsed or passed.
Once the predetermined wait time has been exhausted at 376, method 360 preferably proceeds to 378. At 378, IVR system 12 may determine whether a predetermined number of first prompt communication attempts has been exhausted. Again, to aid in the avoidance of locking IVR system 12 in a loop waiting for a user response, a limit to the number of first prompt communication attempts may be implemented in method 360. If at 374 a user response has not been received, at 376 the predetermined wait period for the most recent first prompt communication has been exhausted and at 378 the number of first prompt communication retries has not been exhausted, method 360 preferably returns to 372 where the first prompt may again be communicated to the user. Upon re-prompting the user, method 360 preferably reiterates through the processes indicated at 374, 376 and 378. However, if at 378 it is determined that the number of first prompt communication attempts has been exhausted, method 360 preferably proceeds to 380 where the communication connection with the current user is preferably severed. Once the communications link between the current user and IVR system 12 has been severed, method 360 preferably returns to 364 where the next user call may be awaited. Other implementations of preventing IVR system 12 from being trapped in a loop awaiting a user response to an IVR system 12 prompt are contemplated and should be included within the spirit and scope of the present invention. For example, at block 380, IVR system 12 may transfer the caller to a human operator.
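The wait-and-retry logic of 372 through 380 may be sketched as follows; the retry limit and wait time are illustrative values, and the prompt, response, and disconnect operations are stubbed out as injected callables:

    MAX_ATTEMPTS = 3      # hypothetical limit on first prompt communications
    WAIT_SECONDS = 10.0   # hypothetical per-attempt wait for a response

    def prompt_with_retries(play_prompt, await_response, hang_up):
        for _attempt in range(MAX_ATTEMPTS):
            play_prompt()                             # step 372
            response = await_response(WAIT_SECONDS)   # steps 374 and 376
            if response is not None:
                return response
        # Step 380: sever the connection (or transfer to a human operator).
        hang_up()
        return None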
Referring now to FIGURE 12, a flow diagram depicting one embodiment of the continuation of method 360 is illustrated. Method 360 preferably proceeds to 382 of FIGURE 12 as a result of the detection of a user response to the first IVR system prompt at 374 of FIGURE 11.
At 382, the detected and received user response is preferably interrogated or otherwise analyzed to identify one or more of its characteristics. For example, if the user response is verbal or spoken, IVR system 12 will preferably identify one or more speech characteristics associated with the verbal response. The characteristics of speech which may be analyzed by IVR system 12 include, but are not limited to, the speaker's gender, rate of speech, fundamental frequency, frequency range, and amplitude. According to behavioral research, for example, an introvert can be discerned from an extrovert by analyzing the rate, fundamental frequency, frequency range and amplitude of the speaker's speech. Many other speech characteristics may be identified from a speaker's speech and used in the method of the present invention. The delay between an IVR system 12 prompt and the user response may also be monitored and analyzed by IVR system 12, for example, to determine whether the current user is a novice or experienced IVR system 12 user. Additional characteristics or parameters of a user's responses to IVR system 12 prompts may be monitored and analyzed without departing from the spirit and scope of the present invention.

In one embodiment of method 360 of the present invention, IVR system 12 may begin processing the user response at 384 generally concurrently with the analysis of the user response at 382. For example, if the first prompt generated by IVR system 12 at 370 presented a plurality of transaction options from which the user was to select one, processing the user's response and selection of a desired transaction at 384 would allow IVR system 12 to initiate the desired user transaction. Alternatively, if the first prompt generated by IVR system 12 requested the caller's IVR system 12 user identifier, for example, any information associated with the user identifier stored by the IVR system 12 may be retrieved at 384 before, after or generally concurrent with the analysis and identification of the speech characteristics of the user's response at 382.
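Purely as a sketch of the kind of speech-characteristic analysis performed at 382, the following classifier illustrates the idea; every threshold is invented for the example and is not taken from the behavioral research mentioned above:

    def classify_demeanor(rate_wpm, f0_hz, f0_range_hz, amplitude_db):
        traits = set()
        # Fast, wide-ranging, loud speech is treated here as extroverted.
        if rate_wpm > 180 and f0_range_hz > 100 and amplitude_db > 65:
            traits.add("extrovert")
        else:
            traits.add("introvert")
        # Crude gender heuristic from fundamental frequency, for the sketch only.
        traits.add("female" if f0_hz > 165 else "male")
        return traits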
Once the IVR system 12 has identified one or more characteristics or parameters of the user response relevant for its determination of the user's personality or demeanor at 382, method 360 preferably proceeds to 386. At 386, IVR system 12 may interrogate one or more of the persona libraries 66 preferably stored in storage system 20 on HDD device 60 and/or SAN 62. One goal of the persona library 66 interrogation at 386 is for IVR system 12 to identify a persona available in a persona library 66 which best comports with or matches the current personality or demeanor of the user. Alternatively, IVR system 12 may be configured to select from a plurality of persona characteristics to create an IVR system persona which best matches the current personality or demeanor of the user. By activating an IVR system 12 persona that comports with or matches the current personality or demeanor of the user, according to teachings of the present invention, the user's interaction with IVR system 12 is more likely to be relaxed and the user is more likely to favorably respond to IVR system 12 prompts.

According to teachings of the present invention, the personality or demeanor of a user may be defined in a variety of ways. For example, a user personality or demeanor may include IVR system 12 analysis to determine whether a user is likely a novice or experienced IVR system 12 user. Further, according to aspects of the present invention, a user's personality or demeanor may include the gender of the caller, whether the caller may be characterized as an introvert or extrovert, whether the user is agitated, seems confused or is questioning the system. For example, IVR system 12 may be configured to identify when a user is struggling with the system by recognizing that a user has increased the duration and amplitude of their speech. Further, IVR system 12 may be configured to identify tension in a user's voice. Other categories or types of user personalities or demeanors are considered within the scope of the present invention.
After interrogating one or more of the persona libraries 66 preferably included on one or more HDD device 60 or SAN 62 at 386 to identify at least one preferred or optimal IVR system persona, an IVR system persona is selected or created at 388. Upon selection of an IVR system persona at 388, the IVR system 12 persona may be activated at 390. Activation of an IVR system persona may include, but is not limited to, loading one or more persona characteristics, e.g., gender, speech rate, etc., into a memory accessible to voice generation software or hardware. Once the selected IVR system 12 persona is activated at 390, method 360 preferably proceeds to 392.
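The interrogation of persona libraries 66 at 386 and the activation at 390 might be sketched as follows; the library entries, the overlap-based scoring rule, and the voice-engine interface are all hypothetical:

    # Hypothetical persona library: each persona lists the user traits it suits.
    PERSONA_LIBRARY = [
        {"name": "calm_detailed", "traits": {"novice", "confused"},
         "gender": "female", "speech_rate": "slow"},
        {"name": "brisk_expert", "traits": {"experienced", "extrovert"},
         "gender": "male", "speech_rate": "fast"},
    ]

    def select_persona(user_traits):
        # Step 388: pick the persona whose traits best overlap the user's.
        return max(PERSONA_LIBRARY, key=lambda p: len(p["traits"] & user_traits))

    def activate_persona(persona, voice_engine):
        # Step 390: load persona characteristics (gender, speech rate, etc.)
        # into the voice-generation layer; voice_engine is a stand-in interface.
        voice_engine.configure(gender=persona["gender"],
                               rate=persona["speech_rate"])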
According to teachings of the present invention, the persona of IVR system 12 can have a significant impact on the responsiveness of a user. Accordingly, selection of a preferable, optimal or appropriate IVR system persona and the subsequent dialog exchange with the user in accordance with the IVR system persona provide computer-based call centers an advantage over single persona IVR system 12 based call centers.
Generally at 392, a plurality of user prompts may be generated by IVR system 12 in accordance with the IVR system persona selected at 388. For example, if it is determined that the current user is an experienced IVR system 12 user, quick, brief system prompts may be included in the preferred persona. Similarly, if it is determined that the user is a novice user or is struggling with the system, slow, detailed instructions may be provided in accordance with the selected or created persona.
The plurality of user prompts generated at 392 are generally directed to completing a user desired transaction, i.e., the purpose for which the current user contacted IVR system 12, such as to check a balance or pay on an account. Upon the generation of one or more user prompts directed to completing a desired user transaction, method 360 preferably proceeds to 394 where the one or more user prompts may be communicated to the user. For example, if IVR system 12 determines at 382 that the current user is a shy male seeking to check an account balance, the selected IVR system persona may have the characteristics of being a soft-spoken male that prompts the user for an account number, asks whether the user would like an account statement mailed to his address of record or whether the user would like his balance spoken to him over the communications link, etc.

Once the next user prompt has been communicated to the user at 394, method 360 preferably proceeds to 396 where a user response to the prompt is awaited. In the event a user response is not detected within a reasonable delay after prompting, method 360 preferably proceeds to 398. At 398, a determination is made as to whether a predetermined overall or total wait period for a user response to the IVR system 12 prompt has been exhausted. In the event that the predetermined time period has not been exhausted, method 360 preferably loops at 398 until the predetermined time period has been exhausted. Once the predetermined time period has been exhausted, method 360 preferably proceeds to 400.
At 400, a determination is made as to whether a predetermined total number of IVR system 12 prompt retries has been exhausted. If the predetermined total number of IVR system 12 prompt retries has not been exhausted, method 360 preferably returns to 394 where the IVR system 12 prompt directed to completing the user desired transaction is preferably repeated to the user. However, if at 400 it is determined that the predetermined number of IVR system 12 prompt retries has been exhausted, method 360 preferably proceeds to 402 where the communications connection with the current user may be severed or a bail-out to a human operator effected. Upon severance of the current user's communications connection at 402, method 360 preferably returns to 364 where the next incoming call from a user is awaited.
Referring now to FIGURE 13, a continuation of method 360, as illustrated in FIGURES 11 and 12, is shown according to teachings of the present invention. Method 360 preferably proceeds to 404 of FIGURE 13 in response to detection or reception of a user response to the IVR system 12 prompt directed to completing the desired user transaction communicated at 394.
At 404, one or more parameters or characteristics of the user's response are preferably identified, analyzed or otherwise isolated. In one embodiment of the present invention, each user response to an IVR system prompt may be evaluated for a change in the user's personality or demeanor. In a further embodiment, only selected user responses to IVR system prompts may be evaluated for a change in the user's personality or demeanor. As mentioned above, generally concurrently with or after receipt of a user response, IVR system 12 may process the user response in furtherance of the desired user transaction, as indicated at 406.
At 408, IVR system 12 preferably compares or otherwise determines whether any differences exist between the user's current personality or demeanor and the personality or demeanor previously detected, e.g., at 382 of FIGURE 12. Specifically, according to teachings of the present invention, IVR system 12 is attempting to monitor the user's personality or demeanor to determine whether a new IVR system persona or change in style of the current persona is likely to elicit more favorable responses from the user, put the user at ease, or otherwise enhance the user's interaction with IVR system 12. In addition, IVR system 12 may be configured to detect whether the user is having difficulty using or interacting with the system and to access and communicate one or more help prompts to aid the user in such instances.

If at 408 a change is detected in the user's demeanor or personality, method 360 may return to 386 of FIGURE 12 where the one or more persona libraries 66 may again be interrogated to identify one or more IVR system personas which best comport or match the user's current demeanor or personality. Alternatively, as mentioned above, the style of the current persona may be changed or one or more persona characteristics may be compiled to create an overall IVR system persona which best matches or comports with the user's present demeanor or personality. Upon a return to 386, method 360 preferably again proceeds through selection at 388 and activation at 390 of a new IVR system persona or style. If at 408 no change in the user's demeanor or personality is detected, method 360 preferably proceeds to 410.
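A minimal sketch of the change detection at 408, with the trait analysis of 404 and the persona re-selection of 386 supplied as injected callables, might read:

    def monitor_demeanor(previous_traits, response, identify_traits, reselect):
        current = identify_traits(response)   # analysis at 404
        if current != previous_traits:        # change detected at 408
            reselect(current)                 # return to 386: new persona or style
        return current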
At 410, IVR system 12 preferably determines whether the desired user transaction has been completed, i.e., whether the user has received all desired information or whether the user has provided all of the information requested by IVR system 12. If it is determined at 410 that the desired user transaction is incomplete, method 360 preferably returns to 392 of FIGURE 12 where the next prompt in the sequence of prompts directed to completing a desired user transaction may be generated for communication at 394.
However, if at 410 it is determined that the desired user transaction has been completed, method 360 may proceed to 412. In one embodiment of the present invention, personas for users of IVR system 12 may be stored for use during subsequent transactions or dialog exchanges with the user. In such an IVR system 12, the persona for the last transaction with the current user, for example, may be stored in one or more stored user persona profiles 68 on one or more HDD devices 60 or SANs 62. As mentioned above, such stored user persona profiles 68 may be used by IVR system 12 in those instances where the caller can be identified prior to the communication of the first prompt to the user as well as in other instances.
After the persona for the current user has been stored, method 360 preferably proceeds to 414 where the communications link with the user may be severed. Once the communications link has been effectively severed, method 360 preferably returns to 364 of FIGURE 11 where IVR system 12 may await the next incoming call.
In an embodiment of the present invention, a stored user persona 68 may be used to implement one or more security measures. For example, if a stored user persona 68 is supposed to be used by only one user, when IVR system 12 detects a suspect voice pattern, a security alert may be generated. Such a security alert might prompt the user to enter an additional password.
Alternatively, such an alert might notify IVR system 12 personnel of the potential breach and leave the matter for the personnel to address. Other embodiments of securing a user account using teachings of the present invention are contemplated and considered within the scope hereof.

System 10 allows for the automated creation of a customer-centric interface that directly matches menu prompts with customer tasks, orders and groups the tasks and menu options by the task frequency of occurrence and the customer perceived task relationships, and states the menu prompts using the customers' own terminology.
Although the present invention has been described in detail with respect to an IVR system, system 10 may also be utilized for the automated creation of customer-centric interfaces for web sites with respect to developing content for the web site, designing the web pages, and determining which tasks to locate on different web pages.
The present invention allows for the automated analysis of one or more log files containing performance data and the generation of an output file including the results of the analysis on the performance data.
Although the example embodiment is described in reference to IVR performance data, in other embodiments IVR system 12 and computer system 18 may also automatically analyze performance data from systems other than IVR systems, as well as any other appropriate type of data.
Although the present invention has been described in detail, it should be understood that various changes, substitutions and alterations can be made hereto without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method for the automated creation of a customer-centric interface, the method comprising: collecting a plurality of customer opening statements; creating a list of a plurality of tasks for which a plurality of customers access and use the customer-centric interface; analyzing the customer opening statements to determine a plurality of customer terminology used by the customers; eliciting a plurality of customer feedback regarding how each task relates to each of the other tasks; determining a frequency of occurrence for each task based on the customer opening statements; determining which tasks to include in the customer-centric interface system based on the frequency of occurrence for the tasks; ordering the tasks within the customer-centric interface system based on the frequency of occurrence for the tasks; combining the tasks into one or more groups of tasks based on a plurality of customer perceived task relationships and the frequency of occurrence for the tasks; creating a plurality of menu prompts for the customer-centric interface using the customer terminology, one or more action specific objects, and the customer perceived task relationships; testing the customer-centric interface with one or more test customers; providing the test customers with one or more tasks to perform and accomplish using the customer-centric interface; determining if each test customer accomplished the provided task; determining a customer satisfaction value for each test customer; calculating a response time for each provided task; generating a performance matrix including the response times, the customer satisfaction values, and the task accomplishment; automatically reconfiguring the customer-centric interface system based on the performance matrix; implementing the customer-centric interface for access and use by the customers; monitoring the performance of the customer-centric interface; and modifying the customer-centric interface based on a plurality of post-implementation results.
2. A method for automating the creation of a customer-centric interface, the method comprising: gathering a plurality of customer intention information; automatically analyzing the customer intention information to determine a plurality of customer terminology, a frequency of occurrence for a plurality of tasks, and a plurality of customer perceived task relationships; generating a plurality of menu prompts utilizing the customer terminology and the frequency of occurrence for the tasks; ordering the menu prompts based on the frequency of occurrence for the tasks and the customer perceived task relationships; testing the order and the terminology of the menu prompts; and modifying the order and the terminology of the menu prompts based on the testing of the menu prompts.
3. The method of Claim 2 wherein ordering the menu prompts based on the frequency of occurrence for the tasks and the customer perceived task relationships comprises grouping the tasks into one or more groups of related tasks.
4. The method of Claim 2 wherein gathering a plurality of customer intention information comprises collecting a plurality of customer opening statements.
5. The method of Claim 2 wherein gathering a plurality of customer intention information comprises creating a list of the tasks for which a plurality of customers access and use the customer-centric interface.
6. The method of Claim 2 wherein gathering a plurality of customer intention information comprises eliciting a plurality of customer feedback regarding how each task relates to each of the other tasks.
7. The method of Claim 2 wherein generating a plurality of menu prompts based on the customer terminology and the frequency of occurrence for the tasks comprises determining which tasks to create menu prompts for based on the frequency of occurrence for the tasks.
8. The method of Claim 2 wherein testing the order and the terminology of the menu prompts comprises providing one or more test customers with one or more tasks to accomplish using the customer-centric interface.
9. The method of Claim 8 wherein testing the order and the terminology of the menu prompts comprises: determining a customer satisfaction value for each test customer; calculating a response time for each provided task; and determining if each test customer accomplishes the provided task.
10. The method of Claim 9 wherein testing the order and the terminology of the menu prompts comprises generating a performance matrix including the response time, the customer satisfaction value, and the task accomplishment for each task.
11. The method of Claim 2 further comprising: implementing the menu prompts as the customer-centric interface for access and use by a plurality of customers; monitoring the performance of the customer-centric interface; and modifying the menu prompts based on a plurality of post-implementation results.
12. A system for the automated creation of a customer-centric interface, the system comprising: a collection engine operable to gather a plurality of customer intention information; a customer language engine associated with the collection engine, the customer language engine operable to determine a plurality of customer terminology and generate a plurality of customer-centric menu prompts for a plurality of tasks using the customer terminology; a task engine associated with the collection engine, the task engine operable to calculate a frequency of occurrence for each task and order the customer-centric menu prompts based on the frequency of occurrence; a customer structure engine associated with the task engine, the customer structure engine operable to elicit a plurality of customer feedback regarding how each task relates to each of the other tasks and generate a plurality of customer perceived task relationships; and a performance engine associated with the customer language engine, the task engine, and the customer structure engine, the performance engine operable to test the customer-centric menu prompts and modify the customer-centric menu prompts.
13. The system of Claim 12 wherein the customer structure engine is further operable to combine the tasks into one or more groups of tasks based on the customer perceived task relationships.
14. The system of Claim 12 wherein the customer intention information comprises a plurality of customer opening statements.
15. The system of Claim 12 wherein the customer language engine is further operable to: automatically identify one or more action words and one or more specific object words used by a plurality of customers interfacing with the customer-centric interface; and identify a relationship between the action words and the specific object words as used by the customers.
16. The system of Claim 15 wherein the customer language engine is further operable to generate the customer-centric menu prompts utilizing one or more of the action words and one or more of the specific object words.
17. The system of Claim 12 wherein the performance engine is further operable to calculate a response time for each customer-centric menu prompt tested.
18. The system of Claim 12 wherein the performance engine is further operable to generate a performance matrix including one or more test results from the testing of the customer-centric menu prompts.
19. A method for automatically creating and integrating interface structure into a customer-centric interface, the method comprising: compiling a list of a plurality of tasks for which a plurality of customers access and use the customer-centric interface; eliciting a plurality of customer feedback regarding how each task in the list relates to each of the other tasks in the list; aggregating the customer feedback; automatically analyzing the aggregated customer feedback to determine one or more customer perceived task relationships; and structuring the customer-centric interface in accordance with the customer perceived task relationships.
20. The method of Claim 19 wherein compiling a list of a plurality of tasks for which a plurality of customers access and use the customer-centric interface system comprises utilizing a task frequency of occurrence to determine the tasks that are included in the list of the tasks.
21. The method of Claim 19 wherein eliciting a plurality of customer feedback comprises performing one or more customer exercises designed to reveal how the customers perceive one or more relationships between the tasks.
22. The method of Claim 19 wherein automatically analyzing the aggregated customer feedback to determine one or more customer perceived task relationships comprises determining which of the tasks can be combined into one or more groups of tasks based on the customer perceived task relationships.
23. The method of Claim 22 further comprising eliciting customer feedback regarding one or more labels for each of the groups of tasks.
24. The method of Claim 19 wherein automatically analyzing the aggregated customer feedback to determine one or more customer perceived task relationships comprises representing the customer perceived task relationships in a numerical data matrix.
25. The method of Claim 24 further comprising automatically generating a graphical representation of the customer perceived task relationships based on the numerical data matrix to enable the application of the customer perceived task relationships to the interface structure of the customer-centric interface.
26. The method of Claim 19 wherein structuring the customer-centric interface in accordance with the customer perceived task relationships comprises organizing the tasks within the customer-centric interface in accordance with the customer perceived task relationships.
27. The method of Claim 19 wherein structuring the customer-centric interface system in accordance with the customer perceived task relationships comprises combining the tasks into one or more groups of tasks based on the customer perceived task relationships.
28. The method of Claim 27 wherein combining the tasks into one or more groups of tasks based on the customer perceived task relationships comprises: ordering the groups of tasks; and ordering the tasks within each group of tasks.
29. A method for automatically developing customer-centric menu prompts for a customer-centric interface, the method comprising: automatically identifying one or more action words and one or more object words used by a plurality of customers interfacing with the customer-centric interface; determining which of the action words are specific and which of the object words are specific; identifying a relationship between the action words and the object words as used by the customers; determining a frequency of occurrence for each of the object words and each of the action words; storing the specific action words and the specific object words in one or more databases; and automatically generalizing the specific action words into one or more general groups of specific action words and the specific object words into one or more general groups of specific object words.
30. The method of Claim 29 wherein automatically identifying one or more action words and one or more object words comprises locating one or more of the object words and one or more of the action words in a plurality of customer opening statements.
31. The method of Claim 29 wherein storing the specific action words and the specific object words comprises storing the specific action words and the specific object words in accordance with the frequency of occurrence.
32. The method of Claim 29 wherein storing the specific action words and the specific object words in one or more databases comprises storing the specific action words in a specific action database.
33. The method of Claim 29 wherein storing the specific action words and the specific object words in one or more databases comprises storing the specific object words in a specific object database.
34. The method of Claim 29 wherein storing the specific action words and the specific object words in one or more databases comprises maintaining the relationship between the specific action words and the specific object words when storing the specific action words and the specific object words.
35. The method of Claim 34 wherein maintaining the relationship between the action words and the object words comprises linking one or more of the specific action words in a specific action database to one or more related specific object words in a specific object database.
36. The method of Claim 29 wherein generalizing the specific action words into one or more groups of specific action words and the specific object words into one or more groups of specific object words comprises: determining one or more commonalities between the specific action words; determining one or more commonalities between the specific object words; grouping together the specific action words having the commonality; and grouping together the specific object words having the commonality.
37. The method of Claim 29 wherein automatically generalizing the specific action words into one or more groups of specific action words and the specific object words into one or more groups of specific object words comprises determining a particular general object word for each group of specific object words and a particular general action word for each group of specific action words.
38. The method of Claim 37 further comprising: storing the general object words in a general object database; and storing the general action words in a general action database.
39. The method of Claim 29 wherein identifying a relationship between the action words and the object words comprises tracking how the customers use the action words and the object words together.
40. The method of Claim 29 further comprising creating a plurality of customer-centric menu prompts using one or more of the specific action words and one or more of the specific object words.
41. The method of Claim 40 wherein creating the customer-centric menu prompts comprises automatically retrieving one or more of the specific action words and one or more of the specific object words from one or more of the databases.
42. Software for automating the creation of a customer-centric interface, the software embodied in a computer-readable medium and operable to: gather a plurality of customer intention information; automatically analyze the customer intention information to determine a plurality of customer terminology, a frequency of occurrence for a plurality of tasks, and a plurality of customer perceived task relationships; generate a plurality of menu prompts utilizing the customer terminology and the frequency of occurrence for the tasks; order the menu prompts based on the frequency of occurrence for the tasks and the customer perceived task relationships; test the menu prompts; and modify the menu prompts based on the testing of the menu prompts.
43. The software of Claim 42 wherein ordering the menu prompts based on the frequency of occurrence for the tasks and the customer perceived task relationships comprises grouping the tasks into one or more groups of related tasks.
44. The software of Claim 42 wherein generating a plurality of menu prompts based on the customer terminology and the frequency of occurrence for the tasks comprises determining which tasks to create menu prompts for based on the frequency of occurrence for the tasks.
45. The software of Claim 42 wherein testing the menu prompts comprises: determining a customer satisfaction value; calculating a response time for each task; and determining a task accomplishment for each task.
46. The software of Claim 45 wherein testing the menu prompts comprises generating a performance matrix including the response time, the customer satisfaction value, and the task accomplishment for each task.
47. The software of Claim 42 further operable to: implement the menu prompts as the customer-centric interface for access and use by a plurality of customers; monitor the performance of the customer-centric interface; and modify the menu prompts based on a plurality of post-implementation results.
48. A method for the automated analysis of interactive voice response system performance data, the method comprising: determining a task for a performance data set; retrieving a correct key sequence for the task; searching for the correct key sequence at the end of the performance data set; automatically comparing the correct key sequence for the task with a recorded key sequence of the performance data set; determining if the task was successfully completed based on whether the recorded key sequence corresponds with the correct key sequence; scoring the performance data set as successful or unsuccessful; obtaining one or more menu duration times; automatically calculating one or more response times for one or more menus by subtracting the menu duration time from a listening time for each menu; calculating a total response time for the performance data set; and generating an output file sorted by a participant number and by a task number.
49. A method for the automated analysis of performance data, the method comprising: determining a task for a performance data set; retrieving a correct key sequence for the task; automatically comparing the correct key sequence for the task with a recorded key sequence of the performance data set; and automatically calculating one or more response times.
50. The method of Claim 49 wherein calculating one or more response times comprises calculating a response time for an individual menu.
51. The method of Claim 49 wherein calculating one or more response times comprises calculating a total response time for the performance data set.
52. The method of Claim 49 wherein calculating the response time comprises obtaining one or more menu duration times.
53. The method of Claim 52 wherein calculating a response time comprises subtracting the menu duration time from a listening time.
54. The method of Claim 49 wherein determining a task for the performance data set comprises matching a participant number of the performance data set and a sequence number of the performance data set to one of a plurality of tasks in a database.
55. The method of Claim 49 wherein comparing the correct key sequence with a recorded key sequence comprises determining the task is successfully completed if the recorded key sequence corresponds with the correct key sequence.
56. The method of Claim 49 wherein comparing the correct key sequence with a recorded key sequence comprises searching for the correct key sequence at the end of the performance data set.
57. The method of Claim 49 further comprising generating an output file.
58. The method of Claim 57 wherein generating the output file comprises sorting the output file by a participant number and by the order in which a participant performed the task.
59. Software for the automated analysis of performance data, the software embodied in a computer- readable medium and operable to: determine a task for a performance data set; retrieve a correct key sequence for the task; automatically compare the correct key sequence for the task with a recorded key sequence of the performance data set; and automatically calculate one or more response times.
60. The software of Claim 59 wherein calculating one or more response times comprises calculating a response time for one or more menus.
61. The software of Claim 59 wherein calculating one or more response times comprises calculating a total response time for the performance data set.
62. The software of Claim 59 wherein calculating the response time comprises obtaining one or more menu duration times.
63. The software of Claim 62 wherein calculating a response time comprises subtracting the menu duration time from a listening time.
64. The software of Claim 59 wherein determining a task for the performance data set comprises matching a participant number of the performance data set and a sequence number of the performance data set to one of a plurality of tasks in a database.
65. The software of Claim 59 wherein comparing the correct key sequence with a recorded key sequence comprises determining the task is successfully completed if the recorded key sequence corresponds with the correct key sequence.
66. The software of Claim 59 wherein comparing the correct key sequence with a recorded key sequence comprises searching for the correct key sequence at the end of the performance data set.
67. The software of Claim 59 further operable to generate an output file sorted by a participant number and by the order in which a participant performed the task.
68. A system for the automated analysis of performance data, the system comprising: a plurality of performance data sets; a task engine operable to determine a task for each performance data set and retrieve a correct key sequence for each task; and a performance engine associated with the task engine, the performance engine operable to compare the correct key sequence for the task with a recorded key sequence of the performance data set and calculate one or more response times.
69. The system of Claim 68 wherein the performance engine calculates a response time for one or more menus.
70. The system of Claim 68 wherein the performance engine calculates a total response time for each performance data set.
71. The system of Claim 68 wherein the performance engine obtains one or more menu duration times.
72. The system of Claim 71 wherein the performance engine subtracts the menu duration time from a listening time in order to calculate the response time.
73. The system of Claim 68 wherein the performance engine subtracts a menu duration time from a listening time in order to calculate the response time.
74. The system of Claim 68 wherein the task engine matches a participant number of the performance data set and a sequence number of the performance data set to one of a plurality of tasks in a database.
75. The system of Claim 68 wherein the performance engine is further operable to determine if the task was successfully completed based on the recorded key sequence corresponding with the correct key sequence.
76. The system of Claim 68 wherein the performance engine is further operable to generate an output file.
77. The system of Claim 68 wherein the tasks comprise a plurality of simulated interactions with an interactive voice response system.
78. The system of Claim 68 further comprising a graphical user interface associated with the task engine and the performance engine, the graphical user interface having a summary window and operable to allow for the selection of the performance data set to be analyzed.
79. A method of operating an interactive voice response (IVR) system comprising: establishing a communication connection with a user; generating a first prompt, the first prompt operable to elicit a response; communicating the first prompt to the user; awaiting a response from the user to the first prompt; analyzing at least one aspect of the response to identify at least one characteristic relevant to IVR system persona selection; interrogating a library of personas to identify a persona which comports with the at least one identified characteristic; selecting the identified persona from the library of personas; activating the selected persona, the selected persona operable to generate and communicate one or more subsequent IVR system prompts directed to continuing a desired user transaction; monitoring responses from the user to the subsequent IVR system prompts; analyzing at least one user response to the subsequent prompts to identify a change in a user characteristic relevant to IVR system persona selection; and modifying a style of the selected persona in response to the identified change in the user characteristic relevant to IVR system persona selection.
80. A method for conducting a dialog exchange between an interactive voice response (IVR) system and a user comprising: identifying at least one user personality trait; automatically selecting an IVR system persona based on the identified user personality trait; and activating the selected IVR system persona.
81. The method of Claim 80, further comprising: generating, by the IVR system, a first prompt requesting a user response; and communicating the first prompt to the user.
82. The method of Claim 81, further comprising generating the first prompt based on at least one call characteristic.
83. The method of Claim 81, further comprising identifying the user personality trait based on the user response to the first prompt.
84. The method of Claim 81, further comprising identifying the user personality trait from at least one speech characteristic associated with a user's verbal response to the first prompt.
85. The method of Claim 81, further comprising: generating, in accordance with one or more IVR system persona parameters, at least one additional prompt, the additional prompt associated with continuing a desired user transaction; and communicating the additional prompt to the user.
86. The method of Claim 85, further comprising: analyzing at least one aspect of a user response to the additional prompt; and determining from the at least one aspect of the user response, whether a user personality trait associated with the user response suggests a change for the selected persona .
87. The method of Claim 86, further comprising automatically selecting a new style for the selected persona in response to determining that the user personality trait suggests a change for the selected persona.
88. The method of Claim 86, further comprising automatically selecting a new IVR system persona in response to determining that the user personality trait suggests a change for the selected persona.
89. The method of Claim 86, further comprising repeating the analyzing and determining steps during the prompt and response dialog exchange such that a new IVR system persona may be selected in response to an identified change in a user personality trait.
90. The method of Claim 80, further comprising selecting the IVR system persona from a library of personas operably associated with the interactive voice response system.
91. An interactive voice response (IVR) system comprising: at least one processor; memory operably coupled to the processor; and a program of instructions storable in the memory and executable in the processor, the program of instructions operable to generate and exchange dialog with a user in accordance with an IVR system persona selected from a library of personas; wherein the program of instructions automatically selects the persona according to at least one personality characteristic of the user.
92. The system of Claim 91, further comprising: a communications interface; and the program of instructions further operable to select a first prompt based on at least one call characteristic, the first prompt operable to request a response from the user; and communicate the first prompt to the user via the communications interface.
93. The system of Claim 92, further comprising the program of instructions further operable to identify the at least one personality characteristic of the user from the user response to the first prompt.
94. The system of Claim 93, further comprising the program of instructions further operable to identify at least one speech parameter of a verbal user response to the first prompt and determine the at least one personality characteristic of the user from the identified speech parameter.
95. The system of Claim 91, further comprising the program of instructions further operable to identify a desired user transaction, generate at least one additional prompt directed to completion of the desired user transaction and communicate the at least one additional prompt to the user.
96. The system of Claim 95, further comprising the program of instructions further operable to interrogate a user response to the additional prompt and determine, based on the interrogation, whether the user response to the additional prompt suggests a need for an IVR system persona change.
97. The system of Claim 96, further comprising the program of instructions further operable to select a new IVR system persona in response to the suggestion of a need for a new IVR system persona and activate the new IVR system persona.
98. The system of Claim 96, further comprising the program of instructions further operable to automatically select a new style for the selected persona in response to determining that the user personality characteristic suggests a change for the selected persona.
99. The system of Claim 91, further comprising: a component systems interface operably coupled to the processor; at least one component storage system operably coupled to the component systems interface; and the library of personas stored on the at least one component storage system.
100. A dialog exchange system comprising: at least one processor; memory operably coupled to the processor; a communications interface operably coupled to the processor, the communications interface operable to communicate dialog between a user and the system via a communications link; a component systems interface operably coupled to the processor, the component systems interface operable to transmit information to and receive information from at least one storage device; a library of personas stored on the at least one storage device, each persona in the library corresponding to one or more user personality traits; and a program of instructions storable in the memory and executable in the processor, the program of instructions operable to identify at least one user personality trait and activate a dialog exchange system persona selected from the library of personas according to the at least one identified user personality trait.
101. The dialog exchange system of Claim 100, further comprising the program of instructions further operable to generate one or more prompts directed to completing a desired user transaction and analyze one or more selected user responses to the one or more prompts to determine whether the selected dialog exchange system persona should be changed.
102. The dialog exchange system of Claim 101, further comprising the program of instructions further operable to change the dialog exchange system persona in response to a determination that the selected dialog exchange system persona should be changed, and monitor user responses to prompts generated in accordance with the changed dialog exchange system persona such that a determination may be made as to the need for yet another change to the dialog exchange system persona.
103. A computer readable medium encoding a program of instructions, the program of instructions operable to: automatically select an interactive voice response (IVR) system persona based on at least one personality trait identified in a user response; and conduct a dialog exchange in accordance with the selected IVR system persona.
104. The computer readable medium of Claim 103, wherein the program of instructions is further operable to select the IVR system persona from a library of personas.
105. The computer readable medium of Claim 103, wherein the program of instructions is further operable to create an IVR system persona from a plurality of persona characteristics.
106. The computer readable medium of Claim 103, wherein the program of instructions is further operable to monitor user responses to subsequent IVR system prompts to determine whether a change in one or more user personality traits suggests a need for changing the selected IVR system persona.
107. The computer readable medium of Claim 106, wherein the program of instructions is further operable to: select a new IVR system persona in response to the suggestion of a need; and conduct a dialog exchange directed to completing a user desired transaction in accordance with the newly selected IVR system persona.
108. The computer readable medium of Claim 106, wherein the program of instructions is further operable to automatically select a new style for the selected persona in response to detecting the need for changing the selected IVR system persona.
109. The computer readable medium of Claim 103, wherein the program of instructions is further operable to select an IVR system persona from a plurality of user personality traits identified in a verbal user response.
110. A method for categorizing customer service opening statements, the method comprising: collecting a plurality of opening statements to be categorized; creating one or more rules for categorizing the opening statements; grouping the rules into one or more sets of rules; storing the sets of rules; selecting one of the sets of rules to apply to the opening statements; automatically applying the rules in accordance with a rule hierarchy to a list of the opening statements one opening statement at a time; searching each opening statement for one or more text string combinations; automatically determining a category label for each opening statement based upon the presence of one or more of the text string combinations; assigning a category label to each opening statement when each opening statement first satisfies one of the rules; and creating an output file including each opening statement and a corresponding category label.
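Claim 110 describes, in effect, a first-match text classifier driven by a rule hierarchy: rules are applied in order, each opening statement is searched for text string combinations, the first satisfied rule supplies the category label, and the results are written to an output file. A minimal sketch under an assumed tuple-based rule representation follows (the rule syntax, category names, and file name are illustrative only); it also covers the hierarchy, first-match, and catch-all variations recited in claims 111 through 124 below.

    import csv

    # Minimal sketch of the method of claim 110. Each rule pairs a category
    # label with one or more text string combinations; a statement satisfies
    # a rule if every string in some combination appears in it. List order
    # encodes the rule hierarchy, and the final catch-all rule (an empty
    # combination) matches any statement.
    RULES = [
        ("billing", [("bill",), ("charge", "account")]),
        ("repair", [("not working",), ("repair",)]),
        ("other", [()]),  # catch-all rule
    ]

    def categorize(statement):
        """Return the label of the first rule the statement satisfies."""
        text = statement.lower()
        for label, combinations in RULES:        # applied in hierarchy order
            for combo in combinations:           # text string combination search
                if all(term in text for term in combo):
                    return label                 # first satisfied rule wins
        return "uncategorized"                   # unreachable with a catch-all rule

    def categorize_file(statements, out_path="categories.csv"):
        """Apply the rules one statement at a time and create an output file."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["statement", "category"])
            for statement in statements:
                writer.writerow([statement, categorize(statement)])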
111. A method for the automated categorization of statements, the method comprising: creating one or more rules for categorizing the statements; selecting one or more of the rules to apply to the statements; automatically applying the rules to a list of the statements; and automatically determining a category label for each statement based upon the rules.
112. The method of Claim 111 wherein automatically applying the rules to a list of the statements comprises retrieving one or more of the rules to be applied to the statements.
113. The method of Claim 111 wherein creating one or more rules comprises grouping the rules into one or more sets of rules.
114. The method of Claim 111 further comprising creating an output file including each statement and a corresponding category label.
115. The method of Claim 111 wherein automatically applying the rules to a list of the statements comprises applying the rules to the statements one statement at a time.
116. The method of Claim 111 further comprising determining a rule hierarchy for applying the rules to the statements.
117. The method of Claim 116 wherein automatically applying the rules to a list of the statements comprises applying the rules to the statements in a particular rule order in accordance with the rule hierarchy.
118. The method of Claim 111 wherein automatically applying the rules to a list of the statements comprises searching each statement for one or more text string combinations.
119. The method of Claim 111 wherein creating one or more rules comprises editing one or more existing rules.
120. The method of Claim 111 wherein automatically determining a category label for each statement comprises assigning a category label for each statement when each statement first satisfies one of the rules.
121. The method of Claim 111 wherein the rules include a catch-all rule for categorizing statements that do not satisfy any of the other rules.
122. The method of Claim 111 wherein the statements comprise a plurality of opening statements.
123. The method of Claim 111 further comprising storing the rules.
124. The method of Claim 111 further comprising collecting a plurality of statements to be categorized.
125. Software for the automated categorization of statements, the software embodied in a computer-readable medium and operable to: create one or more rules for categorizing the statements; select one or more of the rules to apply to the statements; automatically apply the rules to a list of the statements; and determine a category label for each statement based upon the rules.
126. The software of Claim 125 wherein the statements comprise a plurality of opening statements.
127. The software of Claim 125 further operable to create an output file, the output file including each statement and a corresponding category label.
128. The software of Claim 127 wherein creating the output file comprises entering each statement and each corresponding category label into a spreadsheet.
129. The software of Claim 125 wherein creating one or more rules comprises grouping the rules into one or more sets of rules.
130. The software of Claim 125 further operable to display a graphical user interface.
131. The software of Claim 125 wherein applying the rules to a list of the statements comprises applying the rules to the statements in a particular rule order in accordance with a rule hierarchy.
132. The software of Claim 125 wherein applying the rules to a list of the statements comprises searching each statement for one or more text string combinations.
133. The software of Claim 125 further operable to store the rules.
134. The software of Claim 125 further operable to assign a category label to each statement when each statement first satisfies one of the rules.
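Claim 128 above specifies entering each statement and its corresponding category label into a spreadsheet. The application names no file format or library; as one illustrative possibility, the output file of claim 127 could be written as an .xlsx workbook with the openpyxl package:

    # Illustrative spreadsheet output for claim 128; openpyxl and the
    # .xlsx format are assumptions, not named in the application.
    from openpyxl import Workbook

    def write_spreadsheet(rows, path="categories.xlsx"):
        """Write (statement, category label) pairs into a spreadsheet file."""
        wb = Workbook()
        ws = wb.active
        ws.append(["statement", "category"])     # header row
        for statement, label in rows:            # one row per categorized statement
            ws.append([statement, label])
        wb.save(path)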
135. A system for the automated categorization of statements, the system comprising: a plurality of rules; a rule engine operable to create and store the rules used to categorize the statements; and a performance engine associated with the rule engine, the performance engine operable to automatically apply the rules to the statements and determine a category label for each statement.
136. The system of Claim 135 wherein the statements comprise a plurality of opening statements.
137. The system of Claim 135 further comprising a graphical user interface associated with the rule engine and the performance engine, the graphical user interface operable to display the rules and the category labels.
138. The system of Claim 135 wherein the performance engine is further operable to create an output file.
139. The system of Claim 138 wherein the output file includes each statement and a corresponding category label.
140. The system of Claim 135 wherein the rule engine is further operable to group the rules into one or more sets of rules.
141. The system of Claim 135 wherein the performance engine searches each statement for one or more text string combinations to determine a category label for each statement.
142. The system of Claim 135 wherein the rules include a catch-all rule for categorizing statements that do not satisfy any of the other rules.
143. The system of Claim 135 wherein the performance engine applies the rules to the statements in a particular rule order in accordance with a rule hierarchy.
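Claims 135 through 143 split the same categorization functionality into two cooperating components: a rule engine that creates, groups, and stores rule sets, and a performance engine that applies a selected set to the statements and produces the labeled output. One way that division of labor could look, with hypothetical class names, method names, and rule format:

    # Sketch of the two-component system of claims 135-143; all
    # identifiers and the rule representation are hypothetical.
    class RuleEngine:
        """Creates, groups, and stores rule sets (claims 135, 140)."""
        def __init__(self):
            self.rule_sets = {}

        def create_rule_set(self, name, rules):
            # rules: ordered (label, combinations) pairs; list order
            # encodes the rule hierarchy of claim 143.
            self.rule_sets[name] = rules

    class PerformanceEngine:
        """Applies a stored rule set and labels each statement (claim 135)."""
        def __init__(self, rule_engine):
            self.rule_engine = rule_engine

        def run(self, set_name, statements):
            rules = self.rule_engine.rule_sets[set_name]
            output = []                  # rows for the output file of claims 138-139
            for statement in statements:
                text = statement.lower()  # text string combination search (claim 141)
                label = next(
                    (lab for lab, combos in rules
                     if any(all(term in text for term in combo) for combo in combos)),
                    "uncategorized",
                )
                output.append((statement, label))
            return output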
PCT/US2003/019835 2002-07-02 2003-06-24 Method, system, and apparatus for automating the creation of customer-centric interface WO2004006092A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003253680A AU2003253680A1 (en) 2002-07-02 2003-06-24 Method, system, and apparatus for automating the creation of customer-centric interface

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US10/188,152 2002-07-02
US10/188,152 US20040006473A1 (en) 2002-07-02 2002-07-02 Method and system for automated categorization of statements
US10/217,863 2002-08-13
US10/217,863 US6842504B2 (en) 2002-07-02 2002-08-13 System and method for the automated analysis of performance data
US10/217,873 US7379537B2 (en) 2000-03-21 2002-08-13 Method and system for automating the creation of customer-centric interfaces
US10/217,873 2002-08-13
US10/230,708 2002-08-29
US10/230,708 US20040042592A1 (en) 2002-07-02 2002-08-29 Method, system and apparatus for providing an adaptive persona in speech-based interactive voice response systems

Publications (2)

Publication Number Publication Date
WO2004006092A2 true WO2004006092A2 (en) 2004-01-15
WO2004006092A8 WO2004006092A8 (en) 2004-09-02

Family

ID=30119294

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/019835 WO2004006092A2 (en) 2002-07-02 2003-06-24 Method, system, and apparatus for automating the creation of customer-centric interface

Country Status (3)

Country Link
US (6) US20040006473A1 (en)
AU (1) AU2003253680A1 (en)
WO (1) WO2004006092A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012129854A1 (en) * 2011-04-01 2012-10-04 中兴通讯股份有限公司 Method and device for summarizing after calling
CN103873707A (en) * 2012-12-10 2014-06-18 中国电信股份有限公司 Incoming call reason recording method and call center seat system
US9376479B2 (en) 2002-12-31 2016-06-28 Anjinomoto Althea, Inc. Human growth hormone crystals and methods for preparing them
US11550702B1 (en) 2021-11-04 2023-01-10 T-Mobile Usa, Inc. Ensuring that computer programs are accessible to users with disabilities, such as for use with mobile phones

Families Citing this family (277)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6694482B1 (en) * 1998-09-11 2004-02-17 Sbc Technology Resources, Inc. System and methods for an architectural framework for design of an adaptive, personalized, interactive content delivery system
US7086007B1 (en) * 1999-05-27 2006-08-01 Sbc Technology Resources, Inc. Method for integrating user models to interface design
US6778643B1 (en) * 2000-03-21 2004-08-17 Sbc Technology Resources, Inc. Interface and method of designing an interface
US7142662B2 (en) 2000-07-11 2006-11-28 Austin Logistics Incorporated Method and system for distributing outbound telephone calls
US7103173B2 (en) 2001-07-09 2006-09-05 Austin Logistics Incorporated System and method for preemptive goals based routing of contact records
US7054434B2 (en) 2001-07-09 2006-05-30 Austin Logistics Incorporated System and method for common account based routing of contact records
US7715546B2 (en) 2001-07-09 2010-05-11 Austin Logistics Incorporated System and method for updating contact records
US7065201B2 (en) * 2001-07-31 2006-06-20 Sbc Technology Resources, Inc. Telephone call processing in an interactive voice response call management system
US7305070B2 (en) 2002-01-30 2007-12-04 At&T Labs, Inc. Sequential presentation of long instructions in an interactive voice response system
US6914975B2 (en) * 2002-02-21 2005-07-05 Sbc Properties, L.P. Interactive dialog-based training method
US8068595B2 (en) 2002-03-15 2011-11-29 Intellisist, Inc. System and method for providing a multi-modal communications infrastructure for automated call center operation
US7292689B2 (en) * 2002-03-15 2007-11-06 Intellisist, Inc. System and method for providing a message-based communications infrastructure for automated call center operation
US20030204435A1 (en) * 2002-04-30 2003-10-30 Sbc Technology Resources, Inc. Direct collection of customer intentions for designing customer service center interface
US8661112B2 (en) * 2002-12-20 2014-02-25 Nuance Communications, Inc. Customized interactive voice response menus
WO2004104986A1 (en) * 2003-05-21 2004-12-02 Matsushita Electric Industrial Co., Ltd. Voice output device and voice output method
US7882434B2 (en) * 2003-06-27 2011-02-01 Benjamin Slotznick User prompting when potentially mistaken actions occur during user interaction with content on a display screen
US20050054381A1 (en) * 2003-09-05 2005-03-10 Samsung Electronics Co., Ltd. Proactive user interface
US7460652B2 (en) 2003-09-26 2008-12-02 At&T Intellectual Property I, L.P. VoiceXML and rule engine based switchboard for interactive voice response (IVR) services
US20050075894A1 (en) * 2003-10-03 2005-04-07 Sbc Knowledge Ventures, L.P. System, method & software for a user responsive call center customer service delivery solution
US7027586B2 (en) * 2003-12-18 2006-04-11 Sbc Knowledge Ventures, L.P. Intelligently routing customer communications
US7356475B2 (en) * 2004-01-05 2008-04-08 Sbc Knowledge Ventures, L.P. System and method for providing access to an interactive service offering
US7496500B2 (en) * 2004-03-01 2009-02-24 Microsoft Corporation Systems and methods that determine intent of data and respond to the data based on the intent
WO2005099255A2 (en) * 2004-04-01 2005-10-20 Techsmith Corporation Automated system and method for conducting usability testing
US8528086B1 (en) 2004-04-01 2013-09-03 Fireeye, Inc. System and method of detecting computer worms
US7587537B1 (en) 2007-11-30 2009-09-08 Altera Corporation Serializer-deserializer circuits formed from input-output circuit registers
US8881282B1 (en) 2004-04-01 2014-11-04 Fireeye, Inc. Systems and methods for malware attack detection and identification
US8793787B2 (en) 2004-04-01 2014-07-29 Fireeye, Inc. Detecting malicious network content using virtual environment components
US8566946B1 (en) 2006-04-20 2013-10-22 Fireeye, Inc. Malware containment on connection
US9106694B2 (en) 2004-04-01 2015-08-11 Fireeye, Inc. Electronic message analysis for malware detection
US8898788B1 (en) 2004-04-01 2014-11-25 Fireeye, Inc. Systems and methods for malware attack prevention
US8171553B2 (en) 2004-04-01 2012-05-01 Fireeye, Inc. Heuristic based capture with replay to virtual machine
US7460650B2 (en) * 2004-05-24 2008-12-02 At&T Intellectual Property I, L.P. Method for designing an automated speech recognition (ASR) interface for a customer call center
US20050289232A1 (en) * 2004-06-07 2005-12-29 Rudiger Ebert Method, apparatus, and system for monitoring performance remotely from a user
US7936861B2 (en) * 2004-07-23 2011-05-03 At&T Intellectual Property I, L.P. Announcement system and method of use
US20060026049A1 (en) * 2004-07-28 2006-02-02 Sbc Knowledge Ventures, L.P. Method for identifying and prioritizing customer care automation
US20060026210A1 (en) * 2004-07-28 2006-02-02 Vaszary Mark K Managing feedback data
US8165281B2 (en) * 2004-07-28 2012-04-24 At&T Intellectual Property I, L.P. Method and system for mapping caller information to call center agent transactions
US7580837B2 (en) * 2004-08-12 2009-08-25 At&T Intellectual Property I, L.P. System and method for targeted tuning module of a speech recognition system
US7602898B2 (en) * 2004-08-18 2009-10-13 At&T Intellectual Property I, L.P. System and method for providing computer assisted user support
US8086462B1 (en) 2004-09-09 2011-12-27 At&T Intellectual Property Ii, L.P. Automatic detection, summarization and reporting of business intelligence highlights from automated dialog systems
US7043435B2 * 2004-09-16 2006-05-09 Sbc Knowledge Ventures, L.P. System and method for optimizing prompts for speech-enabled applications
US20060062375A1 (en) * 2004-09-23 2006-03-23 Sbc Knowledge Ventures, L.P. System and method for providing product offers at a call center
US7197130B2 (en) 2004-10-05 2007-03-27 Sbc Knowledge Ventures, L.P. Dynamic load balancing between multiple locations with different telephony system
WO2006047595A2 (en) * 2004-10-25 2006-05-04 Whydata, Inc. Apparatus and method for measuring service performance
US7668889B2 (en) 2004-10-27 2010-02-23 At&T Intellectual Property I, Lp Method and system to combine keyword and natural language search results
US7657005B2 (en) * 2004-11-02 2010-02-02 At&T Intellectual Property I, L.P. System and method for identifying telephone callers
DK1666074T3 (en) 2004-11-26 2008-09-08 Bae Ro Gmbh & Co Kg sterilization lamp
US7724889B2 (en) * 2004-11-29 2010-05-25 At&T Intellectual Property I, L.P. System and method for utilizing confidence levels in automated call routing
US7864942B2 (en) * 2004-12-06 2011-01-04 At&T Intellectual Property I, L.P. System and method for routing calls
US7242751B2 (en) 2004-12-06 2007-07-10 Sbc Knowledge Ventures, L.P. System and method for speech recognition-enabled automatic call routing
US20060126808A1 (en) * 2004-12-13 2006-06-15 Sbc Knowledge Ventures, L.P. System and method for measurement of call deflection
US20060126811A1 (en) * 2004-12-13 2006-06-15 Sbc Knowledge Ventures, L.P. System and method for routing calls
US7471774B2 (en) * 2004-12-14 2008-12-30 Cisco Technology, Inc. Method and system of pausing an IVR session
US7751551B2 (en) 2005-01-10 2010-07-06 At&T Intellectual Property I, L.P. System and method for speech-enabled call routing
US7627096B2 (en) * 2005-01-14 2009-12-01 At&T Intellectual Property I, L.P. System and method for independently recognizing and selecting actions and objects in a speech recognition system
US7450698B2 (en) * 2005-01-14 2008-11-11 At&T Intellectual Property 1, L.P. System and method of utilizing a hybrid semantic model for speech recognition
US7627109B2 (en) 2005-02-04 2009-12-01 At&T Intellectual Property I, Lp Call center system for multiple transaction selections
US20060188087A1 (en) * 2005-02-18 2006-08-24 Sbc Knowledge Ventures, Lp System and method for caller-controlled music on-hold
US8130936B2 (en) * 2005-03-03 2012-03-06 At&T Intellectual Property I, L.P. System and method for on hold caller-controlled activities and entertainment
US8223954B2 (en) 2005-03-22 2012-07-17 At&T Intellectual Property I, L.P. System and method for automating customer relations in a communications environment
US7933399B2 (en) * 2005-03-22 2011-04-26 At&T Intellectual Property I, L.P. System and method for utilizing virtual agents in an interactive voice response application
US7636432B2 (en) * 2005-05-13 2009-12-22 At&T Intellectual Property I, L.P. System and method of determining call treatment of repeat calls
US8094803B2 (en) * 2005-05-18 2012-01-10 Mattersight Corporation Method and system for analyzing separated voice data of a telephonic communication between a customer and a contact center by applying a psychological behavioral model thereto
EP1729247A1 (en) * 2005-06-01 2006-12-06 InVision Software AG Resource planning for employees
US8005204B2 (en) * 2005-06-03 2011-08-23 At&T Intellectual Property I, L.P. Call routing system and method of using the same
US7657020B2 (en) * 2005-06-03 2010-02-02 At&T Intellectual Property I, Lp Call routing system and method of using the same
US8503641B2 (en) * 2005-07-01 2013-08-06 At&T Intellectual Property I, L.P. System and method of automated order status retrieval
US8175253B2 (en) 2005-07-07 2012-05-08 At&T Intellectual Property I, L.P. System and method for automated performance monitoring for a call servicing system
US20070165019A1 (en) * 2005-07-12 2007-07-19 Hale Kelly S Design Of systems For Improved Human Interaction
US7839521B2 (en) * 2005-08-09 2010-11-23 Global Print Systems, Inc. Methods and systems for print job management and printing
US7676563B2 (en) * 2005-08-12 2010-03-09 Microsoft Corporation Task-oriented management of server configuration settings
US8526577B2 (en) * 2005-08-25 2013-09-03 At&T Intellectual Property I, L.P. System and method to access content from a speech-enabled automated system
US8548157B2 (en) 2005-08-29 2013-10-01 At&T Intellectual Property I, L.P. System and method of managing incoming telephone calls at a call center
US20070067197A1 (en) * 2005-09-16 2007-03-22 Sbc Knowledge Ventures, L.P. Efficiently routing customer inquiries created with a self-service application
US7328199B2 (en) * 2005-10-07 2008-02-05 Microsoft Corporation Componentized slot-filling architecture
US20070106496A1 (en) * 2005-11-09 2007-05-10 Microsoft Corporation Adaptive task framework
US7822699B2 (en) * 2005-11-30 2010-10-26 Microsoft Corporation Adaptive semantic reasoning engine
US7606700B2 (en) * 2005-11-09 2009-10-20 Microsoft Corporation Adaptive task framework
US20070121873A1 (en) * 2005-11-18 2007-05-31 Medlin Jennifer P Methods, systems, and products for managing communications
US7831585B2 (en) * 2005-12-05 2010-11-09 Microsoft Corporation Employment of task framework for advertising
US7933914B2 (en) * 2005-12-05 2011-04-26 Microsoft Corporation Automatic task creation and execution using browser helper objects
US20070130134A1 (en) * 2005-12-05 2007-06-07 Microsoft Corporation Natural-language enabling arbitrary web forms
US7773731B2 (en) * 2005-12-14 2010-08-10 At&T Intellectual Property I, L. P. Methods, systems, and products for dynamically-changing IVR architectures
US7577664B2 (en) 2005-12-16 2009-08-18 At&T Intellectual Property I, L.P. Methods, systems, and products for searching interactive menu prompting system architectures
US20070203869A1 (en) * 2006-02-28 2007-08-30 Microsoft Corporation Adaptive semantic platform architecture
US7996783B2 (en) * 2006-03-02 2011-08-09 Microsoft Corporation Widget searching utilizing task framework
US20070213988A1 (en) * 2006-03-10 2007-09-13 International Business Machines Corporation Using speech processing technologies for verification sequence instances
DE102006011791B4 (en) * 2006-03-15 2007-10-25 Sartorius Ag Electronic scale
JP4987017B2 (en) * 2006-03-15 2012-07-25 ザトーリウス ウェイング テクノロジー ゲーエムベーハー Electronic scale and operating method thereof
US7961856B2 (en) * 2006-03-17 2011-06-14 At&T Intellectual Property I, L. P. Methods, systems, and products for processing responses in prompting systems
US8050392B2 (en) * 2006-03-17 2011-11-01 At&T Intellectual Property I, L.P. Methods systems, and products for processing responses in prompting systems
US8150692B2 (en) 2006-05-18 2012-04-03 Nuance Communications, Inc. Method and apparatus for recognizing a user personality trait based on a number of compound words used by the user
US8160209B2 (en) * 2006-12-19 2012-04-17 International Business Machines Corporation IVR call routing testing
US7933389B2 (en) * 2006-12-19 2011-04-26 International Business Machines Corporation System and method generating voice sites
CN100518072C (en) * 2006-12-27 2009-07-22 华为技术有限公司 A method and system for processing the client request
US20080250316A1 (en) * 2007-04-04 2008-10-09 Honeywell International Inc. Mechanism to improve a user's interaction with a computer system
US20090043583A1 (en) * 2007-08-08 2009-02-12 International Business Machines Corporation Dynamic modification of voice selection based on user specific factors
US9430660B2 (en) * 2008-01-31 2016-08-30 International Business Machines Corporation Managing access in one or more computing systems
US9635154B1 (en) * 2008-02-08 2017-04-25 West Corporation Real-time monitoring of caller experience for a group of steps in a call flow
US8401156B1 (en) * 2008-02-08 2013-03-19 West Corporation Real-time monitoring of caller experience in a call flow
JP2009252176A (en) * 2008-04-10 2009-10-29 Ntt Docomo Inc Information delivery device and method
US8290125B2 (en) 2008-09-02 2012-10-16 International Business Machines Corporation Voice response unit shortcutting
US9106745B2 (en) * 2008-09-16 2015-08-11 International Business Machines Corporation Voice response unit harvesting
US9003300B2 (en) * 2008-10-03 2015-04-07 International Business Machines Corporation Voice response unit proxy utilizing dynamic web interaction
US8997219B2 (en) 2008-11-03 2015-03-31 Fireeye, Inc. Systems and methods for detecting malicious PDF network content
US9459764B1 (en) * 2008-11-11 2016-10-04 Amdocs Software Systems Limited System, method, and computer program for selecting at least one predefined workflow based on an interaction with a user
US20100318400A1 (en) * 2009-06-16 2010-12-16 Geffen David Method and system for linking interactions
CN101944019B (en) * 2009-07-08 2014-03-12 华为技术有限公司 Method and device for customizing interfaces
US8553872B2 (en) * 2009-07-08 2013-10-08 Nice-Systems Ltd. Method and system for managing a quality process
US20110037611A1 (en) * 2009-08-13 2011-02-17 At&T Intellectual Property I, L.P. Programming a universal remote control using multimedia display
US8410970B2 (en) 2009-08-13 2013-04-02 At&T Intellectual Property I, L.P. Programming a universal remote control via direct interaction
US8832829B2 (en) 2009-09-30 2014-09-09 Fireeye, Inc. Network-based binary file extraction and analysis for malware detection
US9197736B2 (en) * 2009-12-31 2015-11-24 Digimarc Corporation Intuitive computing methods and systems
CN102231130B (en) * 2010-01-11 2015-06-17 国际商业机器公司 Method and device for analyzing computer system performances
US8917828B2 (en) 2010-04-21 2014-12-23 Angel.Com Incorporated Multi-channel delivery platform
US8582727B2 (en) 2010-04-21 2013-11-12 Angel.Com Communication of information during a call
US8699674B2 (en) * 2010-04-21 2014-04-15 Angel.Com Incorporated Dynamic speech resource allocation
US11562013B2 (en) 2010-05-26 2023-01-24 Userzoom Technologies, Inc. Systems and methods for improvements to user experience testing
US11544135B2 (en) 2010-05-26 2023-01-03 Userzoom Technologies, Inc. Systems and methods for the analysis of user experience testing with AI acceleration
US10691583B2 (en) * 2010-05-26 2020-06-23 Userzoom Technologies, Inc. System and method for unmoderated remote user testing and card sorting
US11934475B2 (en) 2010-05-26 2024-03-19 Userzoom Technologies, Inc. Advanced analysis of online user experience studies
US11348148B2 (en) 2010-05-26 2022-05-31 Userzoom Technologies, Inc. Systems and methods for an intelligent sourcing engine for study participants
US11068374B2 (en) 2010-05-26 2021-07-20 Userzoom Technologies, Inc. Generation, administration and analysis of user experience testing
US11494793B2 (en) 2010-05-26 2022-11-08 Userzoom Technologies, Inc. Systems and methods for the generation, administration and analysis of click testing
US8346597B2 (en) 2010-05-28 2013-01-01 Bank Of America Corporation Customer-level macro business performance monitoring
US8762939B1 (en) 2010-07-02 2014-06-24 Nuance Communications, Inc. System and method for displaying key performance indicators in an application design tool
US9378505B2 (en) * 2010-07-26 2016-06-28 Revguard, Llc Automated multivariate testing technique for optimized customer outcome
US8379833B2 (en) 2010-12-17 2013-02-19 Nuance Communications, Inc. System, method, and computer program product for detecting redundancies in information provided by customers in a customer service system
US8971499B1 (en) * 2011-01-06 2015-03-03 West Corporation Method and apparatus of analyzing customer call data to monitor customer call behavior
US8787553B2 (en) * 2011-09-22 2014-07-22 At&T Intellectual Property I, L.P. Implementing a network of intelligent virtual service agents to provide personalized automated responses
US8903712B1 (en) 2011-09-27 2014-12-02 Nuance Communications, Inc. Call steering data tagging interface with automatic semantic clustering
US8761373B1 (en) * 2011-10-03 2014-06-24 Nuance Communications, Inc. System and method for determining IVR application flow from customer-service call recordings
WO2013062589A1 (en) * 2011-10-28 2013-05-02 Intel Corporation Adapting language use in a device
US9477936B2 (en) 2012-02-09 2016-10-25 Rockwell Automation Technologies, Inc. Cloud-based operator interface for industrial automation
US9477749B2 (en) 2012-03-02 2016-10-25 Clarabridge, Inc. Apparatus for identifying root cause using unstructured data
US8825866B2 (en) 2012-05-02 2014-09-02 Nuance Communications, Inc. System and method for enabling demand-based pooling of endpoint resources in a multi-application environment
US9167093B2 (en) * 2012-11-28 2015-10-20 Nice-Systems Ltd. System and method for real-time process management
US8798256B2 (en) * 2012-12-12 2014-08-05 Hartford Fire Insurance Company System and method for telephone call routing using a relational routing matrix
US10572665B2 (en) 2012-12-28 2020-02-25 Fireeye, Inc. System and method to create a number of breakpoints in a virtual machine via virtual machine trapping events
US8976197B1 (en) * 2013-02-21 2015-03-10 Hurricane Electric LLC Solution generating devices and methods
US9881088B1 (en) 2013-02-21 2018-01-30 Hurricane Electric LLC Natural language solution generating devices and methods
US9195829B1 (en) 2013-02-23 2015-11-24 Fireeye, Inc. User interface with real-time visual playback along with synchronous textual analysis log display and event/time index for anomalous behavior detection in applications
US9626509B1 (en) 2013-03-13 2017-04-18 Fireeye, Inc. Malicious content analysis with multi-version application support within single operating environment
US9104867B1 (en) 2013-03-13 2015-08-11 Fireeye, Inc. Malicious content analysis using simulated user interaction without user involvement
US9311479B1 (en) 2013-03-14 2016-04-12 Fireeye, Inc. Correlation and consolidation of analytic data for holistic view of a malware attack
US9413781B2 (en) 2013-03-15 2016-08-09 Fireeye, Inc. System and method employing structured intelligence to verify and contain threats at endpoints
US10713358B2 (en) 2013-03-15 2020-07-14 Fireeye, Inc. System and method to extract and utilize disassembly features to classify software intent
US9989958B2 (en) 2013-05-09 2018-06-05 Rockwell Automation Technologies, Inc. Using cloud-based data for virtualization of an industrial automation environment
US9438648B2 (en) 2013-05-09 2016-09-06 Rockwell Automation Technologies, Inc. Industrial data analytics in a cloud platform
US9703902B2 (en) 2013-05-09 2017-07-11 Rockwell Automation Technologies, Inc. Using cloud-based data for industrial simulation
US9786197B2 (en) * 2013-05-09 2017-10-10 Rockwell Automation Technologies, Inc. Using cloud-based data to facilitate enhancing performance in connection with an industrial automation system
US9495180B2 (en) 2013-05-10 2016-11-15 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US9635039B1 (en) 2013-05-13 2017-04-25 Fireeye, Inc. Classifying sets of malicious indicators for detecting command and control communications associated with malware
US9300686B2 (en) 2013-06-28 2016-03-29 Fireeye, Inc. System and method for detecting malicious links in electronic messages
US9736179B2 (en) 2013-09-30 2017-08-15 Fireeye, Inc. System, apparatus and method for using malware analysis results to drive adaptive instrumentation of virtual machines to improve exploit detection
US9628507B2 (en) 2013-09-30 2017-04-18 Fireeye, Inc. Advanced persistent threat (APT) detection center
US9690936B1 (en) 2013-09-30 2017-06-27 Fireeye, Inc. Multistage system and method for analyzing obfuscated content for malware
US10515214B1 (en) 2013-09-30 2019-12-24 Fireeye, Inc. System and method for classifying malware within content created during analysis of a specimen
US9294501B2 (en) 2013-09-30 2016-03-22 Fireeye, Inc. Fuzzy hash of behavioral results
US9171160B2 (en) 2013-09-30 2015-10-27 Fireeye, Inc. Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US10831348B1 (en) * 2013-12-13 2020-11-10 Google Llc Ranking and selecting task components based on frequency of completions
US9747446B1 (en) 2013-12-26 2017-08-29 Fireeye, Inc. System and method for run-time object classification
US9756074B2 (en) 2013-12-26 2017-09-05 Fireeye, Inc. System and method for IPS and VM-based detection of suspicious objects
US9292686B2 (en) 2014-01-16 2016-03-22 Fireeye, Inc. Micro-virtualization architecture for threat-aware microvisor deployment in a node of a network environment
US9262635B2 (en) 2014-02-05 2016-02-16 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US9285974B2 (en) 2014-02-28 2016-03-15 Angel.Com Incorporated Application builder platform
US9241010B1 (en) 2014-03-20 2016-01-19 Fireeye, Inc. System and method for network behavior detection
US10242185B1 (en) 2014-03-21 2019-03-26 Fireeye, Inc. Dynamic guest image creation and rollback
US9591015B1 (en) 2014-03-28 2017-03-07 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US9223972B1 (en) 2014-03-31 2015-12-29 Fireeye, Inc. Dynamically remote tuning of a malware content detection system
US10084813B2 (en) 2014-06-24 2018-09-25 Fireeye, Inc. Intrusion prevention and remedy system
US9398028B1 (en) 2014-06-26 2016-07-19 Fireeye, Inc. System, device and method for detecting a malicious attack based on communications between remotely hosted virtual machines and malicious web servers
US10805340B1 (en) 2014-06-26 2020-10-13 Fireeye, Inc. Infection vector and malware tracking with an interactive user display
US10002252B2 (en) 2014-07-01 2018-06-19 Fireeye, Inc. Verification of trusted threat-aware microvisor
US10671726B1 (en) 2014-09-22 2020-06-02 Fireeye Inc. System and method for malware analysis using thread-level event monitoring
US10027689B1 (en) 2014-09-29 2018-07-17 Fireeye, Inc. Interactive infection visualization for improved exploit detection and signature generation for malware and malware families
US9690933B1 (en) 2014-12-22 2017-06-27 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US9934376B1 (en) 2014-12-29 2018-04-03 Fireeye, Inc. Malware detection appliance architecture
US9838417B1 (en) 2014-12-30 2017-12-05 Fireeye, Inc. Intelligent context aware user interaction for malware detection
US9571636B2 (en) 2014-12-31 2017-02-14 Genesys Telecommunications Laboratories, Inc. Call center builder platform
US11243505B2 (en) 2015-03-16 2022-02-08 Rockwell Automation Technologies, Inc. Cloud-based analytics for industrial automation
US11042131B2 (en) 2015-03-16 2021-06-22 Rockwell Automation Technologies, Inc. Backup of an industrial automation plant in the cloud
US10496061B2 (en) 2015-03-16 2019-12-03 Rockwell Automation Technologies, Inc. Modeling of an industrial automation environment in the cloud
US11513477B2 (en) 2015-03-16 2022-11-29 Rockwell Automation Technologies, Inc. Cloud-based industrial controller
US10148693B2 (en) 2015-03-25 2018-12-04 Fireeye, Inc. Exploit detection system
US9438613B1 (en) 2015-03-30 2016-09-06 Fireeye, Inc. Dynamic content activation for automated analysis of embedded objects
US10417031B2 (en) 2015-03-31 2019-09-17 Fireeye, Inc. Selective virtualization for security threat detection
US10474813B1 (en) 2015-03-31 2019-11-12 Fireeye, Inc. Code injection technique for remediation at an endpoint of a network
US9654485B1 (en) 2015-04-13 2017-05-16 Fireeye, Inc. Analytics-based security monitoring system and method
US20160307142A1 (en) * 2015-04-15 2016-10-20 Xerox Corporation Methods and systems for creating log of one or more events through crowdsourcing
US10726127B1 (en) 2015-06-30 2020-07-28 Fireeye, Inc. System and method for protecting a software component running in a virtual machine through virtual interrupts by the virtualization layer
US11113086B1 (en) 2015-06-30 2021-09-07 Fireeye, Inc. Virtual system and method for securing external network connectivity
US10454950B1 (en) 2015-06-30 2019-10-22 Fireeye, Inc. Centralized aggregation technique for detecting lateral movement of stealthy cyber-attacks
US10642753B1 (en) 2015-06-30 2020-05-05 Fireeye, Inc. System and method for protecting a software component running in virtual machine using a virtualization layer
US10715542B1 (en) 2015-08-14 2020-07-14 Fireeye, Inc. Mobile application risk analysis
US10671665B2 (en) * 2015-09-25 2020-06-02 Oath Inc. Personalized audio introduction and summary of result sets for users
US10033747B1 (en) 2015-09-29 2018-07-24 Fireeye, Inc. System and method for detecting interpreter-based exploit attacks
US10706149B1 (en) 2015-09-30 2020-07-07 Fireeye, Inc. Detecting delayed activation malware using a primary controller and plural time controllers
US9825989B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Cyber attack early warning system
US10210329B1 (en) 2015-09-30 2019-02-19 Fireeye, Inc. Method to detect application execution hijacking using memory protection
US10817606B1 (en) 2015-09-30 2020-10-27 Fireeye, Inc. Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic
US10601865B1 (en) 2015-09-30 2020-03-24 Fireeye, Inc. Detection of credential spearphishing attacks using email analysis
US9825976B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Detection and classification of exploit kits
US10284575B2 (en) 2015-11-10 2019-05-07 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US10447728B1 (en) 2015-12-10 2019-10-15 Fireeye, Inc. Technique for protecting guest processes using a layered virtualization architecture
US10846117B1 (en) 2015-12-10 2020-11-24 Fireeye, Inc. Technique for establishing secure communication between host and guest processes of a virtualization architecture
US10108446B1 (en) 2015-12-11 2018-10-23 Fireeye, Inc. Late load technique for deploying a virtualization layer underneath a running operating system
US10565378B1 (en) 2015-12-30 2020-02-18 Fireeye, Inc. Exploit of privilege detection framework
US10621338B1 (en) 2015-12-30 2020-04-14 Fireeye, Inc. Method to detect forgery and exploits using last branch recording registers
US10050998B1 (en) 2015-12-30 2018-08-14 Fireeye, Inc. Malicious message analysis system
US10133866B1 (en) 2015-12-30 2018-11-20 Fireeye, Inc. System and method for triggering analysis of an object for malware in response to modification of that object
US11552986B1 (en) 2015-12-31 2023-01-10 Fireeye Security Holdings Us Llc Cyber-security framework for application of virtual features
US9824216B1 (en) 2015-12-31 2017-11-21 Fireeye, Inc. Susceptible environment detection system
US10581874B1 (en) 2015-12-31 2020-03-03 Fireeye, Inc. Malware detection system with contextual analysis
US10338785B2 (en) 2016-02-18 2019-07-02 Hartford Fire Insurance Company Processing system for multivariate segmentation of electronic message content
US11847040B2 (en) 2016-03-16 2023-12-19 Asg Technologies Group, Inc. Systems and methods for detecting data alteration from source to target
US10671721B1 (en) 2016-03-25 2020-06-02 Fireeye, Inc. Timeout management services
US10476906B1 (en) 2016-03-25 2019-11-12 Fireeye, Inc. System and method for managing formation and modification of a cluster within a malware detection system
US10601863B1 (en) 2016-03-25 2020-03-24 Fireeye, Inc. System and method for managing sensor enrollment
US10785255B1 (en) 2016-03-25 2020-09-22 Fireeye, Inc. Cluster configuration within a scalable malware detection system
US10893059B1 (en) 2016-03-31 2021-01-12 Fireeye, Inc. Verification and enhancement using detection systems located at the network periphery and endpoint devices
US10169585B1 (en) 2016-06-22 2019-01-01 Fireeye, Inc. System and methods for advanced malware detection through placement of transition events
US10462173B1 (en) 2016-06-30 2019-10-29 Fireeye, Inc. Malware detection verification and enhancement by coordinating endpoint and malware detection systems
US20180052664A1 (en) * 2016-08-16 2018-02-22 Rulai, Inc. Method and system for developing, training, and deploying effective intelligent virtual agent
US10592678B1 (en) 2016-09-09 2020-03-17 Fireeye, Inc. Secure communications between peers using a verified virtual trusted platform module
US10491627B1 (en) 2016-09-29 2019-11-26 Fireeye, Inc. Advanced malware detection using similarity analysis
US10795991B1 (en) 2016-11-08 2020-10-06 Fireeye, Inc. Enterprise search
US10587647B1 (en) 2016-11-22 2020-03-10 Fireeye, Inc. Technique for malware detection capability comparison of network security devices
US10552610B1 (en) 2016-12-22 2020-02-04 Fireeye, Inc. Adaptive virtual machine snapshot update framework for malware behavioral analysis
US10581879B1 (en) 2016-12-22 2020-03-03 Fireeye, Inc. Enhanced malware detection for generated objects
US10523609B1 (en) 2016-12-27 2019-12-31 Fireeye, Inc. Multi-vector malware detection and analysis
US10904286B1 (en) 2017-03-24 2021-01-26 Fireeye, Inc. Detection of phishing attacks using similarity analysis
US10791138B1 (en) 2017-03-30 2020-09-29 Fireeye, Inc. Subscription-based malware detection
US10554507B1 (en) 2017-03-30 2020-02-04 Fireeye, Inc. Multi-level control for enhanced resource and object evaluation management of malware detection system
US10798112B2 (en) 2017-03-30 2020-10-06 Fireeye, Inc. Attribute-controlled malware detection
US10902119B1 (en) 2017-03-30 2021-01-26 Fireeye, Inc. Data extraction system for malware analysis
US10728265B2 (en) * 2017-06-15 2020-07-28 Bae Systems Information And Electronic Systems Integration Inc. Cyber warning receiver
US10601848B1 (en) 2017-06-29 2020-03-24 Fireeye, Inc. Cyber-security system and method for weak indicator detection and correlation to generate strong indicators
US10855700B1 (en) 2017-06-29 2020-12-01 Fireeye, Inc. Post-intrusion detection of cyber-attacks during lateral movement within networks
US10503904B1 (en) 2017-06-29 2019-12-10 Fireeye, Inc. Ransomware detection and mitigation
US10893068B1 (en) 2017-06-30 2021-01-12 Fireeye, Inc. Ransomware file modification prevention technique
US10747872B1 (en) 2017-09-27 2020-08-18 Fireeye, Inc. System and method for preventing malware evasion
US10805346B2 (en) 2017-10-01 2020-10-13 Fireeye, Inc. Phishing attack detection
US11108809B2 (en) 2017-10-27 2021-08-31 Fireeye, Inc. System and method for analyzing binary code for malware classification using artificial neural network techniques
US11057500B2 (en) 2017-11-20 2021-07-06 Asg Technologies Group, Inc. Publication of applications using server-side virtual screen change capture
US11271955B2 (en) 2017-12-28 2022-03-08 Fireeye Security Holdings Us Llc Platform and method for retroactive reclassification employing a cybersecurity-based global data store
US11240275B1 (en) 2017-12-28 2022-02-01 Fireeye Security Holdings Us Llc Platform and method for performing cybersecurity analyses employing an intelligence hub with a modular architecture
US11005860B1 (en) 2017-12-28 2021-05-11 Fireeye, Inc. Method and system for efficient cybersecurity analysis of endpoint events
US11611633B2 (en) 2017-12-29 2023-03-21 Asg Technologies Group, Inc. Systems and methods for platform-independent application publishing to a front-end interface
US10817667B2 (en) 2018-02-07 2020-10-27 Rulai, Inc. Method and system for a chat box eco-system in a federated architecture
US10826931B1 (en) 2018-03-29 2020-11-03 Fireeye, Inc. System and method for predicting and mitigating cybersecurity system misconfigurations
US10956477B1 (en) 2018-03-30 2021-03-23 Fireeye, Inc. System and method for detecting malicious scripts through natural language processing modeling
US11558401B1 (en) 2018-03-30 2023-01-17 Fireeye Security Holdings Us Llc Multi-vector malware detection data sharing system for improved detection
US11003773B1 (en) 2018-03-30 2021-05-11 Fireeye, Inc. System and method for automatically generating malware detection rule recommendations
US11314859B1 (en) 2018-06-27 2022-04-26 FireEye Security Holdings, Inc. Cyber-security system and method for detecting escalation of privileges within an access token
US11075930B1 (en) 2018-06-27 2021-07-27 Fireeye, Inc. System and method for detecting repetitive cybersecurity attacks constituting an email campaign
US11228491B1 (en) 2018-06-28 2022-01-18 Fireeye Security Holdings Us Llc System and method for distributed cluster configuration monitoring and management
US11316900B1 (en) 2018-06-29 2022-04-26 FireEye Security Holdings Inc. System and method for automatically prioritizing rules for cyber-threat detection and mitigation
US11182473B1 (en) 2018-09-13 2021-11-23 Fireeye Security Holdings Us Llc System and method for mitigating cyberattacks against processor operability by a guest process
US11763004B1 (en) 2018-09-27 2023-09-19 Fireeye Security Holdings Us Llc System and method for bootkit detection
US11368475B1 (en) 2018-12-21 2022-06-21 Fireeye Security Holdings Us Llc System and method for scanning remote services to locate stored objects with malware
US11909100B2 (en) 2019-01-31 2024-02-20 Userzoom Technologies, Inc. Systems and methods for the analysis of user experience testing with AI acceleration
US20200250625A1 (en) 2019-02-01 2020-08-06 Community System and method for grouping responses in a one-to-many messaging platform
DK180649B1 (en) * 2019-05-31 2021-11-11 Apple Inc Voice assistant discoverability through on-device targeting and personalization
US11258806B1 (en) 2019-06-24 2022-02-22 Mandiant, Inc. System and method for automatically associating cybersecurity intelligence to cyberthreat actors
US11556640B1 (en) 2019-06-27 2023-01-17 Mandiant, Inc. Systems and methods for automated cybersecurity analysis of extracted binary string sets
US11762634B2 (en) * 2019-06-28 2023-09-19 Asg Technologies Group, Inc. Systems and methods for seamlessly integrating multiple products by using a common visual modeler
US11392700B1 (en) 2019-06-28 2022-07-19 Fireeye Security Holdings Us Llc System and method for supporting cross-platform data verification
US11886585B1 (en) 2019-09-27 2024-01-30 Musarubra Us Llc System and method for identifying and mitigating cyberattacks through malicious position-independent code execution
US11637862B1 (en) 2019-09-30 2023-04-25 Mandiant, Inc. System and method for surfacing cyber-security threats with a self-learning recommendation engine
US11755760B2 (en) 2019-10-18 2023-09-12 Asg Technologies Group, Inc. Systems and methods for secure policies-based information governance
US11055067B2 (en) 2019-10-18 2021-07-06 Asg Technologies Group, Inc. Unified digital automation platform
US11886397B2 (en) 2019-10-18 2024-01-30 Asg Technologies Group, Inc. Multi-faceted trust system
US11941137B2 (en) 2019-10-18 2024-03-26 Asg Technologies Group, Inc. Use of multi-faceted trust scores for decision making, action triggering, and data analysis and interpretation
US11269660B2 (en) 2019-10-18 2022-03-08 Asg Technologies Group, Inc. Methods and systems for integrated development environment editor support with a single code base
US11228682B2 (en) * 2019-12-30 2022-01-18 Genesys Telecommunications Laboratories, Inc. Technologies for incorporating an augmented voice communication into a communication routing configuration
WO2022081476A1 (en) 2020-10-13 2022-04-21 ASG Technologies Group, Inc. dba ASG Technologies Geolocation-based policy rules

Family Cites Families (219)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US158655A (en) * 1875-01-12 Improvement in game apparatus
US518259A (en) * 1894-04-17 Fiber brake
US553074A (en) * 1896-01-14 Charles e
US617001A (en) * 1899-01-03 Fence-machine
US2400635A (en) * 1942-07-13 1946-05-21 Eitel Mccullough Inc Method of making tubes
US4310727A (en) 1980-02-04 1982-01-12 Bell Telephone Laboratories, Incorporated Method of processing special service telephone calls
JPS6134669A (en) 1984-07-27 1986-02-18 Hitachi Ltd Automatic transaction system
US4922519A (en) 1986-05-07 1990-05-01 American Telephone And Telegraph Company Automated operator assistance calls with voice processing
US4694483A (en) 1986-06-02 1987-09-15 Innings Telecom Inc. Computerized system for routing incoming telephone calls to a plurality of agent positions
US4930077A (en) * 1987-04-06 1990-05-29 Fan David P Information processing expert system for text analysis and predicting public opinion based information available to the public
US4964077A (en) 1987-10-06 1990-10-16 International Business Machines Corporation Method for automatically adjusting help information displayed in an online interactive system
US5115501A (en) 1988-11-04 1992-05-19 International Business Machines Corporation Procedure for automatically customizing the user interface of application programs
US5204968A (en) 1989-03-27 1993-04-20 Xerox Corporation Automatic determination of operator training level for displaying appropriate operator prompts
US5870308A (en) 1990-04-06 1999-02-09 Lsi Logic Corporation Method and system for creating and validating low-level description of electronic design
US5311422A (en) 1990-06-28 1994-05-10 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration General purpose architecture for intelligent computer-aided training
US5327529A (en) 1990-09-24 1994-07-05 Geoworks Process of designing user's interfaces for application programs
US5181259A (en) * 1990-09-25 1993-01-19 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration General method of pattern classification using the two domain theory
AU9063891A (en) 1990-11-20 1992-06-11 Unifi Communications Corporation Telephone call handling system
US5323452A (en) 1990-12-18 1994-06-21 Bell Communications Research, Inc. Visual programming of telephone network call processing logic
US5206903A (en) 1990-12-26 1993-04-27 At&T Bell Laboratories Automatic call distribution based on matching required skills with agents skills
US5535321A (en) 1991-02-14 1996-07-09 International Business Machines Corporation Method and apparatus for variable complexity user interface in a data processing system
WO1993009245A1 (en) 1991-10-31 1993-05-13 University Of Pittsburgh Reverse dot blot hybridization using tandem head-to-tail monomers containing probes synthesized by staggered complementary primers
US5263167A (en) 1991-11-22 1993-11-16 International Business Machines Corporation User interface for a relational database using a task object for defining search queries in response to a profile object which describes user proficiency
US5903454A * 1991-12-23 1999-05-11 Hoffberg; Linda Irene Human-factored interface incorporating adaptive pattern recognition based controller apparatus
US5734709A (en) 1992-01-27 1998-03-31 Sprint Communications Co. L.P. System for customer configuration of call routing in a telecommunications network
US5335269A (en) 1992-03-12 1994-08-02 Rockwell International Corporation Two dimensional routing apparatus in an automatic call director-type system
US5371807A (en) * 1992-03-20 1994-12-06 Digital Equipment Corporation Method and apparatus for text classification
US5388198A (en) 1992-04-16 1995-02-07 Symantec Corporation Proactive presentation of automating features to a computer user
US5729600A (en) 1992-06-25 1998-03-17 Rockwell International Corporation Automatic call distributor with automated voice responsive call servicing system and method
FR2694105B1 (en) * 1992-07-22 1994-11-25 Bull Sa Use of an on-board interpreter language for the creation of an interactive user interface definition tool.
EP0587290B1 (en) 1992-07-30 2000-01-26 Teknekron Infoswitch Corporation Method and system for monitoring and/or controlling the performance of an organization
US5999908A (en) * 1992-08-06 1999-12-07 Abelow; Daniel H. Customer-based product design module
US5335268A (en) 1992-10-22 1994-08-02 Mci Communications Corporation Intelligent routing of special service telephone traffic
US5432940A (en) * 1992-11-02 1995-07-11 Borland International, Inc. System and methods for improved computer-based training
US5353401A (en) 1992-11-06 1994-10-04 Ricoh Company, Ltd. Automatic interface layout generator for database systems
US5659724A (en) 1992-11-06 1997-08-19 Ncr Interactive data analysis apparatus employing a knowledge base
US5420975A (en) 1992-12-28 1995-05-30 International Business Machines Corporation Method and system for automatic alteration of display of menu options
US5864844A (en) 1993-02-18 1999-01-26 Apple Computer, Inc. System and method for enhancing a user interface with a computer based training tool
CA2091658A1 (en) 1993-03-15 1994-09-16 Matthew Lennig Method and apparatus for automation of directory assistance using speech recognition
US5586060A (en) 1993-06-25 1996-12-17 Sharp Kabushiki Kaisha Compact electronic equipment having a statistical function
AU677393B2 (en) 1993-07-08 1997-04-24 E-Talk Corporation Method and system for transferring calls and call-related data between a plurality of call centres
AU693462B2 (en) 1993-09-22 1998-07-02 E-Talk Corporation Method and system for automatically monitoring the performance quality of call center service representatives
WO1995017711A1 (en) 1993-12-23 1995-06-29 Diacom Technologies, Inc. Method and apparatus for implementing user feedback
US5519772A (en) 1994-01-31 1996-05-21 Bell Communications Research, Inc. Network-based telephone system having interactive capabilities
US5533107A (en) 1994-03-01 1996-07-02 Bellsouth Corporation Method for routing calls based on predetermined assignments of callers geographic locations
US5561711A (en) 1994-03-09 1996-10-01 Us West Technologies, Inc. Predictive calling scheduling system and method
JP3450411B2 (en) * 1994-03-22 2003-09-22 キヤノン株式会社 Voice information processing method and apparatus
AU2275495A (en) 1994-03-31 1995-10-23 Citibank, N.A. Interactive voice response system
US5537470A (en) 1994-04-06 1996-07-16 At&T Corp. Method and apparatus for handling in-bound telemarketing calls
US5724262A (en) 1994-05-31 1998-03-03 Paradyne Corporation Method for measuring the usability of a system and for task analysis and re-engineering
US5873068A (en) * 1994-06-14 1999-02-16 New North Media Inc. Display based marketing message control system and method
US5633909A (en) * 1994-06-17 1997-05-27 Centigram Communications Corporation Apparatus and method for generating calls and testing telephone equipment
US5586171A (en) 1994-07-07 1996-12-17 Bell Atlantic Network Services, Inc. Selection of a voice recognition data base responsive to video data
US5619621A (en) * 1994-07-15 1997-04-08 Storage Technology Corporation Diagnostic expert system for hierarchically decomposed knowledge domains
JP2866310B2 (en) 1994-08-05 1999-03-08 ケイディディ株式会社 International call termination control device
US5706334A (en) 1994-08-18 1998-01-06 Lucent Technologies Inc. Apparatus for providing a graphical control interface
US5819221A (en) 1994-08-31 1998-10-06 Texas Instruments Incorporated Speech recognition using clustered between word and/or phrase coarticulation
US5530744A (en) 1994-09-20 1996-06-25 At&T Corp. Method and system for dynamic customized call routing
US5600781A (en) 1994-09-30 1997-02-04 Intel Corporation Method and apparatus for creating a portable personalized operating environment
US5586219A (en) 1994-09-30 1996-12-17 Yufik; Yan M. Probabilistic resource allocation system with self-adaptive capability
US5594791A (en) 1994-10-05 1997-01-14 Inventions, Inc. Method and apparatus for providing result-oriented customer service
US5615323A (en) 1994-11-04 1997-03-25 Concord Communications, Inc. Displaying resource performance and utilization information
US5758257A (en) 1994-11-29 1998-05-26 Herz; Frederick System and method for scheduling broadcast of and access to video programs and other data using customer profiles
US5832430A (en) 1994-12-29 1998-11-03 Lucent Technologies, Inc. Devices and methods for speech recognition of vocabulary words with simultaneous detection and verification
US5872865A (en) * 1995-02-08 1999-02-16 Apple Computer, Inc. Method and system for automatic classification of video images
US5694559A (en) * 1995-03-07 1997-12-02 Microsoft Corporation On-line help method and system utilizing free text query
US5710884A (en) 1995-03-29 1998-01-20 Intel Corporation System for automatically updating personal profile server with updates to additional user information gathered from monitoring user's electronic consuming habits generated on computer during use
US5671351A (en) * 1995-04-13 1997-09-23 Texas Instruments Incorporated System and method for automated testing and monitoring of software applications
ATE330416T1 (en) 1995-04-24 2006-07-15 Ibm METHOD AND APPARATUS FOR SKILL-BASED ROUTING IN A CALL CENTER
JPH08328590A (en) * 1995-05-29 1996-12-13 Sanyo Electric Co Ltd Voice synthesizer
US5657383A (en) 1995-06-06 1997-08-12 Lucent Technologies Inc. Flexible customer controlled telecommunications handling
US5809282A (en) 1995-06-07 1998-09-15 Grc International, Inc. Automated network simulation and optimization system
US5740549A (en) 1995-06-12 1998-04-14 Pointcast, Inc. Information and advertising distribution system and method
JP3453456B2 (en) 1995-06-19 2003-10-06 キヤノン株式会社 State sharing model design method and apparatus, and speech recognition method and apparatus using the state sharing model
US5684872A (en) 1995-07-21 1997-11-04 Lucent Technologies Inc. Prediction of a caller's motivation as a basis for selecting treatment of an incoming call
US6088429A (en) * 1998-04-07 2000-07-11 Mumps Audiofax, Inc. Interactive telephony system
US5675707A (en) 1995-09-15 1997-10-07 At&T Automated call router system and method
US5832428A (en) 1995-10-04 1998-11-03 Apple Computer, Inc. Search engine for phrase recognition based on prefix/body/suffix architecture
US5771276A (en) 1995-10-10 1998-06-23 Ast Research, Inc. Voice templates for interactive voice mail and voice response system
US6061433A (en) 1995-10-19 2000-05-09 Intervoice Limited Partnership Dynamically changeable menus based on externally available data
US5948058A (en) * 1995-10-30 1999-09-07 Nec Corporation Method and apparatus for cataloging and displaying e-mail using a classification rule preparing means and providing cataloging a piece of e-mail into multiple categories or classification types based on e-mail object information
US5802526A (en) 1995-11-15 1998-09-01 Microsoft Corporation System and method for graphically displaying and navigating through an interactive voice response menu
US5821936A (en) 1995-11-20 1998-10-13 Siemens Business Communication Systems, Inc. Interface method and system for sequencing display menu items
US5848396A (en) 1996-04-26 1998-12-08 Freedom Of Information, Inc. Method and apparatus for determining behavioral profile of a computer user
CA2253867A1 (en) 1996-05-07 1997-11-13 Webline Communications Corporation Method and apparatus for coordinating internet multi-media content with telephone and audio communications
US5727950A (en) 1996-05-22 1998-03-17 Netsage Corporation Agent based instruction system and method
US6374260B1 (en) * 1996-05-24 2002-04-16 Magnifi, Inc. Method and apparatus for uploading, indexing, analyzing, and searching media content
US6014638A (en) 1996-05-29 2000-01-11 America Online, Inc. System for customizing computer displays in accordance with user preferences
US5901214A (en) 1996-06-10 1999-05-04 Murex Securities, Ltd. One number intelligent call processing system
US6052693A (en) * 1996-07-02 2000-04-18 Harlequin Group Plc System for assembling large databases through information extracted from text sources
US6092105A (en) * 1996-07-12 2000-07-18 Intraware, Inc. System and method for vending retail software and other sets of information to end users
US5822744A (en) 1996-07-15 1998-10-13 Kesel; Brad Consumer comment reporting apparatus and method
US6157808A (en) 1996-07-17 2000-12-05 Gpu, Inc. Computerized employee certification and training system
US5757644A (en) 1996-07-25 1998-05-26 Eis International, Inc. Voice interactive call center training method using actual screens and screen logic
US5864605A (en) * 1996-08-22 1999-01-26 AT&T Corp Voice menu optimization method and system
US5822397A (en) * 1996-09-20 1998-10-13 Mci Communications Corporation Audio interface for telecommunications test system
US6026381A (en) * 1996-11-05 2000-02-15 Itx Corporation Financial market classification system
US6243375B1 (en) * 1996-11-08 2001-06-05 Gregory J. Speicher Internet-audiotext electronic communications system with multimedia based matching
US5793368A (en) 1996-11-14 1998-08-11 Triteal Corporation Method for dynamically switching between visual styles
US5884029A (en) 1996-11-14 1999-03-16 International Business Machines Corporation User interaction with intelligent virtual objects, avatars, which interact with other avatars controlled by different users
US5999611A (en) * 1996-11-19 1999-12-07 Stentor Resource Centre Inc. Subscriber interface for accessing and operating personal communication services
US6148063A (en) 1996-11-29 2000-11-14 Nortel Networks Corporation Semi-interruptible messages for telephone systems making voice announcements
AU5133398A (en) * 1996-12-03 1998-06-29 Ergolight Ltd. Computerized apparatus and methods for identifying usability problems of a computerized system
US5903641A (en) 1997-01-28 1999-05-11 Lucent Technologies Inc. Automatic dynamic changing of agents' call-handling assignments
US6058435A (en) * 1997-02-04 2000-05-02 Siemens Information And Communications Networks, Inc. Apparatus and methods for responding to multimedia communications based on content analysis
US5899992A (en) 1997-02-14 1999-05-04 International Business Machines Corporation Scalable set oriented classifier
US5963965A (en) * 1997-02-18 1999-10-05 Semio Corporation Text processing and retrieval system and method
US5855565A (en) * 1997-02-21 1999-01-05 Bar-Cohen; Yaniv Cardiovascular mechanically expanding catheter
US5835565A (en) * 1997-02-28 1998-11-10 Hammer Technologies, Inc. Telecommunication system tester with integrated voice and data
US5923745A (en) 1997-02-28 1999-07-13 Teknekron Infoswitch Corporation Routing calls to call centers
US6094476A (en) * 1997-03-24 2000-07-25 Octel Communications Corporation Speech-responsive voice messaging system and method
US6182059B1 (en) * 1997-04-03 2001-01-30 Brightware, Inc. Automatic electronic message interpretation and routing system
US6336109B2 (en) * 1997-04-15 2002-01-01 Cerebrus Solutions Limited Method and apparatus for inducing rules from data classifiers
GB2325062B (en) * 1997-05-06 2002-06-26 IBM Data object management system
US5953406A (en) 1997-05-20 1999-09-14 Mci Communications Corporation Generalized customer profile editor for call center services
US6038560A (en) * 1997-05-21 2000-03-14 Oracle Corporation Concept knowledge base search and retrieval system
EP0883069A1 (en) * 1997-06-06 1998-12-09 Matsushita Electric Industrial Co., Ltd. A retrieval menu creation device and method, and a recording medium storing a retrieval menu creation program
US6044355A (en) 1997-07-09 2000-03-28 Iex Corporation Skills-based scheduling for telephone call centers
US6292909B1 (en) * 1997-07-14 2001-09-18 Duncan Hare Apparatus for testing communication equipment
US5865862A (en) * 1997-08-12 1999-02-02 Hassan; Shawky Match design with burn preventative safety stem construction and selectively impregnable scenting composition means
US6032129A (en) * 1997-09-06 2000-02-29 International Business Machines Corporation Customer centric virtual shopping experience with actors agents and persona
US6487277B2 (en) * 1997-09-19 2002-11-26 Siemens Information And Communication Networks, Inc. Apparatus and method for improving the user interface of integrated voice response systems
US6134315A (en) * 1997-09-30 2000-10-17 Genesys Telecommunications Laboratories, Inc. Metadata-based network routing
US6108711A (en) * 1998-09-11 2000-08-22 Genesys Telecommunications Laboratories, Inc. Operating system having external media layer, workflow layer, internal media layer, and knowledge base for routing media events between transactions
US6035283A (en) * 1997-10-10 2000-03-07 International Business Machines Corporation Virtual sales person for electronic catalog
US6035336A (en) 1997-10-17 2000-03-07 International Business Machines Corporation Audio ticker system and method for presenting push information including pre-recorded audio
US6801763B2 (en) 1997-10-29 2004-10-05 Metro One Telecommunications, Inc. Technique for effectively communicating travel directions
US6055542A (en) 1997-10-29 2000-04-25 International Business Machines Corporation System and method for displaying the contents of a web page based on a user's interests
GB9723813D0 (en) * 1997-11-11 1998-01-07 Mitel Corp Call routing based on caller's mood
US6016336A (en) 1997-11-18 2000-01-18 AT&T Corp Interactive voice response system with call trainable routing
US6353661B1 (en) 1997-12-18 2002-03-05 Bailey, Iii John Edson Network and communication access systems
US5943416A (en) 1998-02-17 1999-08-24 Genesys Telecommunications Laboratories, Inc. Automated survey control routine in a call center environment
US6381640B1 (en) * 1998-09-11 2002-04-30 Genesys Telecommunications Laboratories, Inc. Method and apparatus for automated personalization and presentation of workload assignments to agents within a multimedia communication center
US6332154B2 (en) 1998-09-11 2001-12-18 Genesys Telecommunications Laboratories, Inc. Method and apparatus for providing media-independent self-help modules within a multimedia communication-center customer interface
US6170011B1 (en) 1998-09-11 2001-01-02 Genesys Telecommunications Laboratories, Inc. Method and apparatus for determining and initiating interaction directionality within a multimedia communication center
US6166732A (en) 1998-02-24 2000-12-26 Microsoft Corporation Distributed object oriented multi-user domain with multimedia presentations
GB2334602A (en) * 1998-02-24 1999-08-25 IBM Developing and testing of a telephony application by simulation of telephony hardware
US6263052B1 (en) 1998-03-04 2001-07-17 The White Stone Group, L.L.C. Autointeraction communication system
US6185534B1 (en) 1998-03-23 2001-02-06 Microsoft Corporation Modeling emotion and personality in a computer user interface
US6330326B1 (en) 1998-03-27 2001-12-11 AT&T Corp. Dynamic staffing of service centers to provide substantially zero-delay service
US6173279B1 (en) * 1998-04-09 2001-01-09 AT&T Corp. Method of using a natural language interface to retrieve information from one or more data resources
US6173053B1 (en) 1998-04-09 2001-01-09 Avaya Technology Corp. Optimizing call-center performance by using predictive data to distribute calls among agents
US6134530A (en) 1998-04-17 2000-10-17 Andersen Consulting Llp Rule based routing system and method for a virtual sales and service center
US6145096A (en) * 1998-05-06 2000-11-07 Motive Communications, Inc. Method, system and computer program product for iterative distributed problem solving
US6483523B1 (en) * 1998-05-08 2002-11-19 Institute For Information Industry Personalized interface browser and its browsing method
US6249579B1 (en) * 1998-05-29 2001-06-19 Lucent Technologies Inc. Apparatus, method and system for personal telecommunication speed calling utilizing an affinity database
US6289084B1 (en) * 1998-05-29 2001-09-11 Lucent Technologies Inc. Apparatus, method and system for personal telecommunication call screening and alerting
US6405159B2 (en) * 1998-06-03 2002-06-11 Sbc Technology Resources, Inc. Method for categorizing, describing and modeling types of system users
US6161130A (en) * 1998-06-23 2000-12-12 Microsoft Corporation Technique which utilizes a probabilistic classifier to detect "junk" e-mail by automatically updating a training and re-training the classifier based on the updated training set
US6219643B1 (en) * 1998-06-26 2001-04-17 Nuance Communications, Inc. Method of analyzing dialogs in a natural language speech recognition system
US6349290B1 (en) * 1998-06-30 2002-02-19 Citibank, N.A. Automated system and method for customized and personalized presentation of products and services of a financial institution
US6099320A (en) 1998-07-06 2000-08-08 Papadopoulos; Anastasius Authoring system and method for computer-based training
US6269153B1 (en) 1998-07-29 2001-07-31 Lucent Technologies Inc. Methods and apparatus for automatic call routing including disambiguating routing decisions
WO2000007129A1 (en) 1998-07-31 2000-02-10 Summers Gary J Management training simulation method and system
JP3185977B2 (en) * 1998-08-12 2001-07-11 Stanley Electric Co Ltd LED lamp
US6226618B1 (en) 1998-08-13 2001-05-01 International Business Machines Corporation Electronic content delivery system
US6389403B1 (en) 1998-08-13 2002-05-14 International Business Machines Corporation Method and apparatus for uniquely identifying a customer purchase in an electronic distribution system
US6389400B1 (en) 1998-08-20 2002-05-14 Sbc Technology Resources, Inc. System and methods for intelligent routing of customer requests using customer and agent models
US6128380A (en) 1998-08-24 2000-10-03 Siemens Information And Communication Networks, Inc. Automatic call distribution and training system
US6694482B1 (en) * 1998-09-11 2004-02-17 Sbc Technology Resources, Inc. System and methods for an architectural framework for design of an adaptive, personalized, interactive content delivery system
US6606598B1 (en) * 1998-09-22 2003-08-12 Speechworks International, Inc. Statistical computing and reporting for interactive speech applications
US6405170B1 (en) * 1998-09-22 2002-06-11 Speechworks International, Inc. Method and system of reviewing the behavior of an interactive speech recognition application
GB2342528A (en) * 1998-10-05 2000-04-12 IBM Interactive voice response
US6448980B1 (en) 1998-10-09 2002-09-10 International Business Machines Corporation Personalizing rich media presentations based on user response to the presentation
US6741967B1 (en) * 1998-11-02 2004-05-25 Vividence Corporation Full service research bureau and test center method and apparatus
US7263489B2 (en) * 1998-12-01 2007-08-28 Nuance Communications, Inc. Detection of characteristics of human-machine interactions for dialog customization and analysis
US6067538A (en) 1998-12-22 2000-05-23 Ac Properties B.V. System, method and article of manufacture for a simulation enabled focused feedback tutorial system
US6965925B1 (en) * 1998-12-31 2005-11-15 Nortel Networks, Ltd Distributed open architecture for media and telephony services
US6104790A (en) 1999-01-29 2000-08-15 International Business Machines Corporation Graphical voice response system and method therefor
US6434714B1 (en) 1999-02-04 2002-08-13 Sun Microsystems, Inc. Methods, systems, and articles of manufacture for analyzing performance of application programs
US6278976B1 (en) * 1999-03-25 2001-08-21 Michael Charles Kochian System for the delivery of audio recordings
US6314402B1 (en) * 1999-04-23 2001-11-06 Nuance Communications Method and apparatus for creating modifiable and combinable speech objects for acquiring information from a speaker in an interactive voice response system
US6731744B1 (en) * 1999-04-27 2004-05-04 Sprint Communications Company, L.P. Call processing system and service control point for handling calls to a call center
US6564197B2 (en) 1999-05-03 2003-05-13 E.Piphany, Inc. Method and apparatus for scalable probabilistic clustering using decision trees
US7086007B1 (en) * 1999-05-27 2006-08-01 Sbc Technology Resources, Inc. Method for integrating user models to interface design
US6405149B1 (en) * 1999-06-23 2002-06-11 Louis K. Tsai System and method for testing a telecommunication system
US6178404B1 (en) * 1999-07-23 2001-01-23 Intervoice Limited Partnership System and method to facilitate speech enabled user interfaces by prompting with possible transaction phrases
US6353825B1 (en) * 1999-07-30 2002-03-05 Verizon Laboratories Inc. Method and device for classification using iterative information retrieval techniques
US6782412B2 (en) * 1999-08-24 2004-08-24 Verizon Laboratories Inc. Systems and methods for providing unified multimedia communication services
US6964012B1 (en) * 1999-09-13 2005-11-08 Microstrategy, Incorporated System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services, including deployment through personalized broadcasts
US6282404B1 (en) 1999-09-22 2001-08-28 Chet D. Linton Method and system for accessing multimedia data in an interactive format having reporting capabilities
KR100749016B1 (en) * 1999-10-19 2007-08-14 American Calcar Inc. Technique for effective navigation based on user preferences
US7065188B1 (en) * 1999-10-19 2006-06-20 International Business Machines Corporation System and method for personalizing dialogue menu for an interactive voice response system
US6807574B1 (en) * 1999-10-22 2004-10-19 Tellme Networks, Inc. Method and apparatus for content personalization over a telephone interface
GB9926134D0 (en) * 1999-11-05 2000-01-12 IBM Interactive voice response system
US6526382B1 (en) * 1999-12-07 2003-02-25 Comverse, Inc. Language-oriented user interfaces for voice activated services
GB9929284D0 (en) * 1999-12-11 2000-02-02 IBM Voice processing apparatus
US6748361B1 (en) * 1999-12-14 2004-06-08 International Business Machines Corporation Personal speech assistant supporting a dialog manager
US7099835B2 (en) * 2000-01-31 2006-08-29 Roadside Telematics Corporation Methods and systems for providing life management and enhancement applications and services for telematics and other electronic medium
US6778643B1 (en) * 2000-03-21 2004-08-17 Sbc Technology Resources, Inc. Interface and method of designing an interface
US6920425B1 (en) * 2000-05-16 2005-07-19 Nortel Networks Limited Visual interactive response system and method translated from interactive voice response for telephone utility
US20020055868A1 (en) 2000-05-24 2002-05-09 Dusevic Angela G. System and method for providing a task-centric online environment
GB0013180D0 (en) * 2000-06-01 2000-07-19 IBM Testing voice message applications
US6618715B1 (en) * 2000-06-08 2003-09-09 International Business Machines Corporation Categorization based text processing
US20020099613A1 (en) * 2000-06-14 2002-07-25 Garret Swart Method for forming and expressing reservables and engagements in a database for a transaction service
US20040085162A1 (en) * 2000-11-29 2004-05-06 Rajeev Agarwal Method and apparatus for providing a mixed-initiative dialog between a user and a machine
US20030161464A1 (en) * 2000-12-15 2003-08-28 International Business Machines Corporation On-hold information service with caller-controlled personalized menu
US7003079B1 (en) * 2001-03-05 2006-02-21 Bbnt Solutions Llc Apparatus and method for monitoring performance of an automated response system
US6823054B1 (en) * 2001-03-05 2004-11-23 Verizon Corporate Services Group Inc. Apparatus and method for analyzing an automated response system
US6810111B1 (en) * 2001-06-25 2004-10-26 Intervoice Limited Partnership System and method for measuring interactive voice response application efficiency
US7573986B2 (en) * 2001-07-18 2009-08-11 Enterprise Integration Group, Inc. Method and system for interjecting comments to improve information presentation in spoken user interfaces
US7065201B2 (en) * 2001-07-31 2006-06-20 Sbc Technology Resources, Inc. Telephone call processing in an interactive voice response call management system
US6868411B2 (en) * 2001-08-13 2005-03-15 Xerox Corporation Fuzzy text categorizer
US7920682B2 (en) * 2001-08-21 2011-04-05 Byrne William J Dynamic interactive voice interface
US6912272B2 (en) * 2001-09-21 2005-06-28 Talkflow Systems, Llc Method and apparatus for managing communications and for creating communication routing rules
US7092888B1 (en) * 2001-10-26 2006-08-15 Verizon Corporate Services Group Inc. Unsupervised training in natural language call routing
US6885733B2 (en) * 2001-12-03 2005-04-26 AT&T Corp. Method of providing a user interface for audio telecommunications systems
US7054817B2 (en) * 2002-01-25 2006-05-30 Canon Europa N.V. User interface for speech model generation and testing
US7305070B2 (en) * 2002-01-30 2007-12-04 AT&T Labs, Inc. Sequential presentation of long instructions in an interactive voice response system
US6914975B2 (en) * 2002-02-21 2005-07-05 Sbc Properties, L.P. Interactive dialog-based training method
US7103158B2 (en) * 2002-02-28 2006-09-05 Pacific Bell Information Services Dynamic interactive voice architecture
US7131117B2 (en) * 2002-09-04 2006-10-31 Sbc Properties, L.P. Method and system for automating the analysis of word frequencies
US7783475B2 (en) * 2003-01-31 2010-08-24 Comverse, Inc. Menu-based, speech actuated system with speak-ahead capability
US7280968B2 (en) * 2003-03-25 2007-10-09 International Business Machines Corporation Synthetically generated speech responses including prosodic characteristics of speech inputs
US7346151B2 (en) * 2003-06-24 2008-03-18 Avaya Technology Corp. Method and apparatus for validating agreement between textual and spoken representations of words
US7457395B2 (en) * 2003-12-15 2008-11-25 International Business Machines Corporation Dynamic allocation of voice ports and menu options in an interactive voice recognition system
US7317789B2 (en) * 2004-01-07 2008-01-08 International Business Machines Corporation Method and apparatus for automatic telephone menu navigation
US20060026049A1 (en) * 2004-07-28 2006-02-02 Sbc Knowledge Ventures, L.P. Method for identifying and prioritizing customer care automation
US8207936B2 (en) * 2006-06-30 2012-06-26 Sony Ericsson Mobile Communications Ab Voice remote control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
No Search *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9376479B2 (en) 2002-12-31 2016-06-28 Ajinomoto Althea, Inc. Human growth hormone crystals and methods for preparing them
WO2012129854A1 (en) * 2011-04-01 2012-10-04 中兴通讯股份有限公司 Method and device for summarizing after calling
CN102739881A (en) * 2011-04-01 2012-10-17 中兴通讯股份有限公司 Method for carrying out summarizing after conversation and apparatus thereof
CN103873707A (en) * 2012-12-10 2014-06-18 中国电信股份有限公司 Incoming call reason recording method and call center seat system
US11550702B1 (en) 2021-11-04 2023-01-10 T-Mobile Usa, Inc. Ensuring that computer programs are accessible to users with disabilities, such as for use with mobile phones
US11860767B2 (en) 2021-11-04 2024-01-02 T-Mobile Usa, Inc. Testing computer program accessibility for users with disabilities, such as for use with mobile phones

Also Published As

Publication number Publication date
AU2003253680A8 (en) 2004-01-23
US7379537B2 (en) 2008-05-27
US20080313571A1 (en) 2008-12-18
US20020196277A1 (en) 2002-12-26
US20040042592A1 (en) 2004-03-04
WO2004006092A8 (en) 2004-09-02
US20040006473A1 (en) 2004-01-08
AU2003253680A1 (en) 2004-01-23
US20050078805A1 (en) 2005-04-14
US6842504B2 (en) 2005-01-11
US7551723B2 (en) 2009-06-23
US20040032935A1 (en) 2004-02-19
US8131524B2 (en) 2012-03-06

Similar Documents

Publication Publication Date Title
WO2004006092A2 (en) Method, system, and apparatus for automating the creation of customer-centric interface
US10554817B1 (en) Automation of contact workflow and automated service agents in contact center system
US6922466B1 (en) System and method for assessing a call center
US6898277B1 (en) System and method for annotating recorded information from contacts to contact center
US6823054B1 (en) Apparatus and method for analyzing an automated response system
US7039166B1 (en) Apparatus and method for visually representing behavior of a user of an automated response system
US6904143B1 (en) Apparatus and method for logging events that occur when interacting with an automated call center system
US6970554B1 (en) System and method for observing calls to a call center
US6937705B1 (en) Apparatus and method for visually representing events in calls handled by an automated response system
US20040044950A1 (en) Method and system for automating the analysis of word frequencies
US7539656B2 (en) System and method for providing an intelligent multi-step dialog with a user
US9129215B2 (en) Operation and method for prediction and management of the validity of subject reported data
US7580850B2 (en) Apparatus and method for online advice customer relationship management
Musa, The operational profile in software reliability engineering: an overview
JP4806034B2 (en) Method for calculating user expert index by keyword and system for executing this method
US20050195966A1 (en) Method and apparatus for optimizing the results produced by a prediction model
AU2017415315B2 (en) Integrating virtual and human agents in a multi-channel support system for complex software applications
CN110232573A (en) Based on interactive intelligent response system
EP2944077B1 (en) Method and apparatus for analyzing leakage from chat to voice
US20130121580A1 (en) Analysis of service delivery processes based on interrogation of work assisted devices
WO2006050503A9 (en) System and method for identifying and approaching browsers most likely to transact business based upon real-time data mining
US9406075B1 (en) System and method for improving tuning using user provided satisfaction scores
EP2369481A2 (en) A system for detecting usability problems of users while using their mobile devices
KR20210118634A (en) Call center service efficiency improvement system based on data and method thereof
KR102073069B1 (en) Pc as management system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: the EPO has been informed by WIPO that EP was designated in this application
D17 Declaration under article 17(2)a
122 EP: PCT application non-entry in European phase
NENP Non-entry into the national phase

Ref country code: JP

WWW WIPO information: withdrawn in national office

Country of ref document: JP