WO2002054333A2 - A method and system for improved speech recognition - Google Patents
- Publication number
- WO2002054333A2 (PCT/IL2001/001221)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sentence
- input sentence
- sentences
- agent
- weight
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/10—Speech classification or search using distance or distortion measures between unknown speech and reference templates
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/085—Methods for reducing search complexity, pruning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/26—Devices for calling a subscriber
- H04M1/27—Devices whereby a plurality of signals may be stored simultaneously
- H04M1/271—Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
Definitions
- the present invention relates to the field of human-machine interfaces. More particularly, the present invention relates to a method and system for improving the accuracy, reliability and probability of voice recognition in human-machine interfaces.
- With ASR technology, the system is able to recognize the user's voice without a "training" process under lab conditions.
- ASR technology has a set of sentences generated, for example, from databases consisting of names, addresses and numbers that are associated with a specific subject. Selected sentences from this set are compared with the vocal input from a user, and if the level of match between one of the sentences and the input sentence reaches a predetermined value, that sentence is output from the ASR system.
- Directed dialogue applications employ ASR technology to guide the conversation with the user, and wait for specific answers from the user.
- An ASR system receives sentences in human language (voice) as an input, and selects the N-best sentences (selected from a plurality of sentences which are generated in advance, using words from the databases, and which may represent the sentence that was probably said by the user), according to the user's input sentence (where N can be any predefined positive integer).
- Each of the N-best sentences has a corresponding "weight" (i.e., the probability percentage that this sentence has actually been said by the user), which defines the level of match with the sentence said by the user.
- a threshold "weight" is predefined in the ASR as well, in order to decide, which sentence (from the N-best) will be the output of the ASR.
- the sentence, among the N-best sentences, that has the highest weight beyond the threshold, will be output from the ASR (i.e., the most matched sentence, among the N-best, to the sentence said by the user).
- the threshold level should be reduced, in order to output a sentence, but such reduction may result in inaccurate answers.
- If none of the N-best sentences passes the threshold, the ASR provides no output.
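The N-best selection and threshold test described above can be sketched as follows (a minimal illustration; the names `Hypothesis`, `n_best` and `select_output` are ours, not the patent's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    text: str
    weight: float  # probability that this sentence was actually said (0.0-1.0)

def n_best(hypotheses: list[Hypothesis], n: int) -> list[Hypothesis]:
    """Keep the N hypotheses with the highest weights."""
    return sorted(hypotheses, key=lambda h: h.weight, reverse=True)[:n]

def select_output(candidates: list[Hypothesis], threshold: float) -> Optional[Hypothesis]:
    """Return the best-weighted candidate if it passes the threshold, else None."""
    best = max(candidates, key=lambda h: h.weight, default=None)
    if best is not None and best.weight >= threshold:
        return best
    return None  # recognition failed; candidates may be re-weighted or escalated
```

Lowering `threshold` makes an output more likely but, as the text notes, risks inaccurate answers.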
- IVR Interactive Voice Response
- Another technology that has been developed is Interactive Voice Response (IVR), which is essentially a menu that allows the user to choose between two or more options at each stage of the conversation. This technology is also limited, because the conversation flow is restricted to the options offered by the menus and the way they are structured.
- STT Speech-To-Text
- TTS Text-To-Speech
- the present invention is directed to a method for improving speech recognition.
- a Speech Recognition (SR) system for outputting a sentence that matches an input sentence of a user, the SR system comprises a plurality of predetermined sentences that are associated with a specific subject, and a set of predetermined number of N sentences, selected from the plurality of predetermined sentences, having the highest level of match to the input sentence, the SR system having a predetermined threshold for the level of match, beyond which, the sentence, from the set, that has the highest level of match, is output as a recognized input sentence, is provided.
- a verbal input sentence is received from the user in the SR system, and a weight, reflecting the level of match, is assigned to each sentence from the plurality according to the content of the input sentence.
- N sentences having the highest weight are selected from the plurality. If the weight of at least one of the N sentences is higher than the threshold, the sentence having the highest weight is output as the recognized input sentence. If the weight of each selected sentence is lower than the threshold, the weight of each selected sentence is varied according to different predetermined matching criteria. If the varied weight of at least one of the N sentences is higher than the threshold, the sentence having the highest varied weight is output as the recognized input sentence; otherwise, an indication that the input sentence was not recognized is provided.
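The two-pass flow summarized above (plain threshold test first, then re-weighting by matching criteria) might be sketched as follows; the dictionary representation and all names are illustrative assumptions, not the patent's code:

```python
def recognize(hypotheses, threshold, criteria):
    """Two-pass recognition: threshold test, then criterion-based re-weighting.

    `hypotheses` is a list of {"text": str, "weight": float} dicts;
    `criteria` is a list of functions mapping a hypothesis to a new weight
    (e.g. common-sense, context and flow modules).
    """
    best = max(hypotheses, key=lambda h: h["weight"])
    if best["weight"] >= threshold:
        return best["text"]
    # Second pass: each matching criterion adjusts every hypothesis's weight.
    for criterion in criteria:
        for h in hypotheses:
            h["weight"] = criterion(h)
    best = max(hypotheses, key=lambda h: h["weight"])
    if best["weight"] >= threshold:
        return best["text"]
    return None  # unrecognized; escalate to human assistance
```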
- the input sentence may further be recognized with the assistance of a human agent that is connected to the SR system.
- the set of the predetermined number of N sentences is forwarded to be displayed to the agent, and the input sentence is played to the agent, so as to allow the agent to select a sentence from the set to be output as the recognized input sentence.
- the input sentence is played to the agent, and the agent is allowed to recognize the input sentence and to type at least a portion of one or more words from the recognized input sentence. If the complete input sentence is typed by the agent, the typed sentence is output as the recognized input sentence. Otherwise, one or more partially typed words are automatically completed and a sentence consisting of completed words is output as the recognized input sentence.
- the input sentence is played to the agent, and the agent is allowed to recognize the input sentence and to recite the recognized input sentence to a voice recognition unit adapted to recognize the voice of the agent according to specific parameters that can be accessed by the voice recognition unit.
- a sentence that corresponds to the recited input sentence is output by the voice recognition unit, as the recognized input sentence.
- the method further comprises recognizing the input sentence using the assistance of a human agent that is connected to the SR system, by performing the following steps: A plurality of input sentences said by a corresponding plurality of users is received and available human agents are allocated. A set of the predetermined number of N sentences that are associated with an input sentence is forwarded to be displayed to each available agent. A different input sentence is played to each available agent, and the available agent is allowed to select a sentence from the set to be output as the corresponding recognized input sentence.
- the method further comprises recognizing the input sentence using the assistance of a human agent that is connected to the SR system, by performing the following steps: A plurality of input sentences said by a corresponding plurality of users are received and available human agents are allocated. The input sentence is played to the agent and the agent is allowed to recognize the input sentence and to type at least a portion of one or more words from the recognized input sentence. If the complete input sentence is typed by the agent, the typed sentence is output as the recognized input sentence. Otherwise, one or more partially typed words are automatically completed and a sentence consisting of completed words is output as the recognized input sentence.
- the method further comprises recognizing the input sentence using the assistance of a human agent that is connected to the SR system, by performing the following steps: A plurality of input sentences said by a corresponding plurality of users is received and available human agents are allocated. The input sentence is played to the agent, and the agent is allowed to recognize the input sentence and to recite the recognized input sentence to a voice recognition unit adapted to recognize the voice of the agent according to specific parameters that can be accessed by the voice recognition unit. A sentence that corresponds to the recited input sentence is output by the voice recognition unit as the recognized input sentence.
- the weight of each selected sentence may be varied by evaluating the logic meaning of each selected sentence that consists of objects and the logic relation between them, according to comparisons of the objects and the logic relation between them, to different combinations of predetermined, and essentially similar, objects and the logic relation between them, based on human's common knowledge that is relevant to the subject of the selected sentences, and by assigning higher weights to one or more selected sentences, each of which having a logic meaning, according to the level of similarity of its logical meaning, to the logical meaning represented by the essentially similar objects and the logic relation between them. If the assigned weight of at least one of the selected sentences is higher than the threshold, the selected sentence having the highest weight is output as the recognized input sentence.
- the weight of each selected sentence may also be varied by evaluating each selected sentence, according to the context of the selected sentence with respect to previously recited sentences by the user, with respect to expected objects and/or indirect objects and/or subjects that are essentially related to the content of the previously recited sentences and by assigning higher weights to one or more selected sentences, having closer context relation to previously recited sentences. If the assigned weight of at least one of the selected sentences is higher than the threshold, the selected sentence having the highest weight is output as the recognized input sentence.
- the weight of each selected sentence may also be varied by evaluating each selected sentence, according to the context of the selected sentence with respect to expected subsequent state(s) of interaction between the user and the system to which the output of the SR system is input and assigning higher weights to one or more selected sentences, having closer context relation to an expected subsequent state. If the assigned weight of at least one of the selected sentences is higher than the threshold, the selected sentence having the highest weight is output as the recognized input sentence.
- the present invention is also directed to an improved speech recognition system that comprises: a) a Speech Recognition (SR) unit for receiving a verbal input sentence and outputting a sentence that matches an input sentence of a user, the SR system comprising a plurality of predetermined sentences that are associated with a specific subject, and a set of a predetermined number of N sentences, selected from the plurality of predetermined sentences, having the highest level of match to the input sentence, the SR system having a predetermined threshold for the level of match, beyond which the sentence from the set that has the highest level of match is output as a recognized input sentence; and b) processing means for assigning a weight, reflecting the level of match, to each sentence from the plurality according to the content of the input sentence and for selecting the N sentences having the highest weight from the plurality; for outputting the sentence having the highest weight as the recognized input sentence, if the weight of at least one of the N sentences is higher than the threshold; and for varying the weight of each selected sentence according to different predetermined matching criteria, if the weight of each selected sentence is lower than the threshold.
- the system may further comprise a call center that is connected to the SR system and linked to a human agent(s), for recognizing the input sentence using the assistance of the human agent(s).
- the system comprises: a) a control unit for receiving a plurality of input sentences said by a corresponding plurality of users; for allocating available human agents; and for forwarding a set of predetermined number of N sentences that are associated with an input sentence, to be displayed to each available agent; b) circuitry for playing a different input sentence to each available agent; and c) circuitry for outputting the corresponding recognized input sentence that is selected by the available agent from the set, to be output as the corresponding recognized input sentence.
- the system that comprises a call center that is connected to the SR system and linked to a human agent(s), for recognizing the input sentence using the assistance of the human agent(s), may further comprise: a) a control unit for receiving a plurality of input sentences said by a corresponding plurality of users and for allocating available human agents; b) circuitry for playing a different input sentence to each available agent; c) input means for typing at least a portion of one or more words from the sentence recognized by the available agent; and d) circuitry for outputting the typed sentence as the recognized input sentence, and/or computerized means for automatically completing one or more partially typed words before outputting the partially typed sentence.
- the system that comprises a call center that is connected to the SR system and linked to a human agent(s), for recognizing the input sentence using the assistance of the human agent(s), may further comprise: a) a control unit for receiving a plurality of input sentences said by a corresponding plurality of users and for allocating available human agents; b) circuitry for playing a different input sentence to each available agent; and c) a voice recognition unit for recognizing an input sentence recited by the available agent, according to specific parameters that can be accessed by the voice recognition unit, and for outputting a sentence corresponding to the recited input sentence as the recognized input sentence.
- FIG. 1 schematically illustrates a conventional voice recognition system
- FIG. 2 schematically illustrates an enhanced voice recognition system, according to a preferred embodiment of the invention.
- Fig. 1 schematically illustrates a voice recognition system 100 according to the prior art.
- ASR 102 receives input sentences from a user, which could be sent, for example, by phone 101.
- ASR 102 tries to guess the sentence of the user, according to predetermined sentences (phrased by using given words and grammar rules for making the sentences) and by the threshold limit.
- ASR 102 provides the N-best sentences with their weights. If there is a sentence among the N-best that has passed the threshold and has the highest weight, this sentence will be provided as the output of ASR 102, and will represent the sentence that was probably said by the user, for further processing. If there is no such sentence, then ASR 102 has failed to recognize the sentence said by the user.
- Fig. 2 schematically illustrates an enhanced voice recognition system 200.
- ASR 102 receives a sentence as an input from a user, which could be sent, for example, by phone 101. ASR 102 tries to guess the sentence received from the user, according to predetermined sentences (phrased by using given words and grammar rules for making the sentences) and according to the threshold limit. At the end of the recognition process, ASR 102 provides the N-best sentences with their corresponding weights. If there is a sentence among the N-best that has passed the threshold and has the highest weight, this sentence will be provided as the output of ASR 102 and will represent the sentence that was probably said by the user, for further processing. If there is no such sentence, then ASR 102 has failed to recognize the sentence actually said by the user.
- the N-best sentences that have not passed the threshold are transferred to a Common Sense module 201 for further processing.
- Common Sense module 201 increases the weight of each sentence among the N-best that has a logical meaning in reality, and/or decreases the weight of each sentence among the N-best that has less logical meaning in reality, even though it is grammatically correct. Take, for example, the sentence "the books are swimming in the air": even though it is grammatically correct, it has no logical meaning in reality.
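As an illustration of this idea, a toy plausibility table and a re-weighting function follow; the data, names and adjustment amounts here are entirely hypothetical, not taken from the patent:

```python
# Assumed ontology fragment: (subject, verb) pairs marked plausible or not.
PLAUSIBLE = {
    ("books", "swim"): False,   # "the books are swimming" has no logical meaning
    ("books", "lie"): True,
    ("fish", "swim"): True,
}

def common_sense_adjust(weight, subject, verb, boost=0.1, penalty=0.1):
    """Raise the weight of logically plausible sentences; lower implausible ones.

    Unknown pairs are treated as plausible by default, so the module only
    penalizes combinations the ontology explicitly rules out.
    """
    if PLAUSIBLE.get((subject, verb), True):
        return min(1.0, weight + boost)
    return max(0.0, weight - penalty)
```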
- Common Sense module 201 uses an ontology component (not shown in Fig. 2).
- the ontology component may be implemented as a predetermined database that comprises objects (usually nouns) and the logical relation between them.
- Context Handling module 202 attempts to increase the weight of each sentence among the N-best that contains words representing objects, indirect objects or subjects that exist in the context of the previous sentences said during the conversation and recognized confidently by the ASR. In addition, the weight of each sentence among the N-best that has no contextual relation to the previous sentences of the conversation may be decreased. Context Handling module 202 is used to track the conversation, in order to obtain the user's intention at any time.
- the Context Handling module 202 may store subjects, objects and indirect objects that were mentioned, directly or indirectly, during the interaction with the user, and may relate them to items stored in an accessible database. For example, during a conversation, a user may use the term 'it' in a sentence instead of a noun used in an earlier sentence. In another example, a user has mentioned the name of a movie in an earlier sentence, and the current sentence contains the name of an actor. After the Context Handling module 202 has finished checking all the N-best sentences and changed their weights accordingly (as described hereinabove), there may be one or more sentences that pass the threshold of the ASR 102.
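A minimal sketch of such context tracking, assuming a simple noun-overlap heuristic (the class name, methods and adjustment amounts are ours, not from the patent):

```python
class ContextTracker:
    """Remembers nouns from confidently recognized sentences and boosts
    hypotheses that reuse them, penalizing hypotheses with no overlap."""

    def __init__(self):
        self.entities = set()

    def remember(self, sentence_nouns):
        """Store nouns from a sentence the ASR recognized confidently."""
        self.entities.update(sentence_nouns)

    def adjust(self, weight, hypothesis_nouns, boost=0.05, penalty=0.05):
        """Re-weight a hypothesis by its noun overlap with the context."""
        overlap = self.entities & set(hypothesis_nouns)
        if overlap:
            return min(1.0, weight + boost * len(overlap))
        return max(0.0, weight - penalty)
```

A fuller implementation would also resolve pronouns like 'it' against the stored entities, as the example in the text suggests.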
- the sentence that has the highest weight and passes the threshold will represent the sentence said by the user, for further processing by the system (not shown) to which the ASR 102 is attached. At that point, the recognition process of system 200 is completed and paused, until a new sentence is entered into system 200.
- Flow Handling module 203 is based on the principle of a state machine (i.e., system 200 "knows" which possible next steps it can make, according to the previous status of the conversation). Flow Handling module 203 increases the weight of a sentence, among the N-best, that allows the system to move from the current state of the conversation to the next possible state. The weight of a sentence that does not fulfill this state condition will be decreased.
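The state-machine principle can be illustrated with a toy transition table; the states, intents and adjustment amounts below are invented for illustration only:

```python
# Hypothetical dialogue state machine: from each state, only certain
# user intents lead to a next state.
TRANSITIONS = {
    "ask_movie": {"give_title": "ask_showtime"},
    "ask_showtime": {"give_time": "confirm"},
}

def flow_adjust(weight, current_state, hypothesis_intent, boost=0.1, penalty=0.1):
    """Boost hypotheses that move the dialogue forward; penalize dead ends."""
    if hypothesis_intent in TRANSITIONS.get(current_state, {}):
        return min(1.0, weight + boost)   # a legal transition from this state
    return max(0.0, weight - penalty)     # no legal transition: unlikely reply
```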
- After the Flow Handling module 203 has finished checking all the N-best sentences and changed their weights as described hereinabove, there may be one or more sentences that pass the threshold of the ASR.
- the sentence that has the highest weight and passes the threshold will represent the sentence said by the user, for further processing by the system (not shown) to which the ASR 102 is attached. At that point, the recognition process of system 200 is completed and paused, until a new sentence is entered into system 200.
- each of the modules 201, 202 and 203 is independent and may change the weights of the N-best sentences according to its criteria.
- One of the modules may increase the weight of a specific sentence and the other module may decrease the weight of this specific sentence.
- HAS Manager Human Assistance Manager
- HAS manager 204 receives the unrecognized input sentence of the user and the N-best possibilities of recognition for this input sentence.
- the Agent of the Call Center 205 performs the recognition of the user's sentence in a very short time, since the Agent hears only the user's sentence that has not been successfully recognized, rather than the whole conversation.
- The Agent of the Call Center 205 hears only the unrecognized sentence of the user and recognizes it by choosing one of the following three alternatives: selecting, typing or speaking.
- the N-best sentences are introduced to the Agent of the Call Center 205 and, according to block 206, he selects the sentence with the closest match to the user's sentence that was heard.
- The Agent of the Call Center 205 recognizes, according to block 207, the sentence of the user that was heard, and directly types said sentence. In order to save human resources (i.e., to shorten the typing time), parts of the sentence typed by the Agent are completed by executing a Completing Application 209, which completes words that are partially typed by the Agent. The completion is carried out using complete words that are stored in a database and match the partially typed characters.
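A minimal sketch of such prefix completion against a stored vocabulary (the vocabulary contents and function names are illustrative assumptions):

```python
# Assumed word database; in practice this would come from the application's
# name/address/number databases mentioned earlier.
VOCAB = ["reservation", "restaurant", "tomorrow", "seven"]

def complete_word(prefix: str, vocab=VOCAB) -> str:
    """Return the first stored word matching the typed prefix."""
    for word in vocab:
        if word.startswith(prefix.lower()):
            return word
    return prefix  # no match: keep what the agent typed

def complete_sentence(partial: str) -> str:
    """Complete every partially typed word in the agent's input."""
    return " ".join(complete_word(tok) for tok in partial.split())
```

A real completing application would need to rank multiple matching candidates (e.g. "res" matches both "reservation" and "restaurant"); this sketch simply takes the first match.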
- Agent of a Call Center 205 hears the user sentence and according to block 208 he recites said sentence clearly (using his own voice) to a Voice Recognition (VR) unit 210.
- This VR unit 210 is "trained" for the task of recognizing the specific voice profile of the Agent of the Call Center 205 (the "training" is carried out by receiving the Agent's voice several times, with different variations in his voice, under lab conditions). In such a case, the recognition accuracy is substantially improved.
- Once the Agent of the Call Center 205 selects one of the three options 206, 207 or 208, the recognized sentence is delivered for further processing by the system (not shown) to which the ASR 102 is attached. At that point, the recognition process of system 200 is completed and paused, until a new sentence is entered into system 200.
- In order to reduce the recognition time, the Agent of the Call Center 205 receives only the unrecognized sentence from a complete session with the user, instead of listening to the complete conversation.
- a typical session with a user consists of the following time periods:
- the Entry/Exit time reflects the time required for greetings, such as "Hello"/"Bye", respectively.
- This section of the conversation is fixed and does not require HAS intervention.
- Small grammar talk reflects the confirmation words, such as "Tell me Yes or No".
- Such grammar has a high probability of being recognized by automated means, such as ASR, due to its small size (for example, "Yes" or "No"). This section of the conversation does not require HAS intervention.
- Question posed reflects the time required for a user to introduce a question/request (this normally determines the time period "X", used as the relative unit for estimating all sections of the conversation). This section of the conversation does not require HAS intervention.
- Computer response time reflects the time required to play the response to the user.
- The computer is "speaking" while the user listens. This section of the conversation does not require HAS intervention.
- HAS processing time reflects the time required for the HAS to convert an unrecognized sentence to a string, in one of the three ways described hereinabove.
- the average conversation segment length is:
- 7X: the total length of a conversation segment containing HAS intervention.
- 5.5X: the total length of a conversation segment that does not contain HAS intervention.
- HAS 220 Human Resource assistance
- Assuming the principle of "big numbers" applies, and a recognition rate of about 100% is required, we will need approximately one Human Resource (agent) for every 15 concurrent users.
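The staffing arithmetic implied by these figures can be sketched as follows; the helper name is ours, and the 15-users-per-agent ratio is the one quoted in the text (note that an agent is busy only during HAS processing, roughly 7X - 5.5X = 1.5X per assisted segment, which presumably underlies that ratio together with the fraction of segments that actually need assistance):

```python
import math

def agents_needed(concurrent_users: int, users_per_agent: int = 15) -> int:
    """Number of human agents required for a given concurrent-user load,
    using the one-agent-per-15-users figure quoted in the text."""
    return math.ceil(concurrent_users / users_per_agent)
```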
Abstract
Description
Claims
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2002217413A AU2002217413A1 (en) | 2001-01-01 | 2001-12-31 | A method and system for improved speech recognition |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IL14067301A IL140673A0 (en) | 2001-01-01 | 2001-01-01 | A method and system for improved speech recognition |
IL140673 | 2001-01-01 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2002054333A2 true WO2002054333A2 (en) | 2002-07-11 |
WO2002054333A3 WO2002054333A3 (en) | 2002-11-21 |
Family
ID=11074993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL2001/001221 WO2002054333A2 (en) | 2001-01-01 | 2001-12-31 | A method and system for improved speech recognition |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU2002217413A1 (en) |
IL (1) | IL140673A0 (en) |
WO (1) | WO2002054333A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102369568A (en) * | 2009-02-03 | 2012-03-07 | 索夫特赫斯公司 | Systems and methods for interactively accessing hosted services using voice communications |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4624008A (en) * | 1983-03-09 | 1986-11-18 | International Telephone And Telegraph Corporation | Apparatus for automatic speech recognition |
US5457768A (en) * | 1991-08-13 | 1995-10-10 | Kabushiki Kaisha Toshiba | Speech recognition apparatus using syntactic and semantic analysis |
US5754978A (en) * | 1995-10-27 | 1998-05-19 | Speech Systems Of Colorado, Inc. | Speech recognition system |
-
2001
- 2001-01-01 IL IL14067301A patent/IL140673A0/en unknown
- 2001-12-31 AU AU2002217413A patent/AU2002217413A1/en not_active Abandoned
- 2001-12-31 WO PCT/IL2001/001221 patent/WO2002054333A2/en not_active Application Discontinuation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4624008A (en) * | 1983-03-09 | 1986-11-18 | International Telephone And Telegraph Corporation | Apparatus for automatic speech recognition |
US5457768A (en) * | 1991-08-13 | 1995-10-10 | Kabushiki Kaisha Toshiba | Speech recognition apparatus using syntactic and semantic analysis |
US5754978A (en) * | 1995-10-27 | 1998-05-19 | Speech Systems Of Colorado, Inc. | Speech recognition system |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102369568A (en) * | 2009-02-03 | 2012-03-07 | 索夫特赫斯公司 | Systems and methods for interactively accessing hosted services using voice communications |
Also Published As
Publication number | Publication date |
---|---|
WO2002054333A3 (en) | 2002-11-21 |
IL140673A0 (en) | 2002-02-10 |
AU2002217413A1 (en) | 2002-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1380153B1 (en) | Voice response system | |
US7783475B2 (en) | Menu-based, speech actuated system with speak-ahead capability | |
US6173266B1 (en) | System and method for developing interactive speech applications | |
US7406413B2 (en) | Method and system for the processing of voice data and for the recognition of a language | |
US9576571B2 (en) | Method and apparatus for recognizing and reacting to user personality in accordance with speech recognition system | |
US6604075B1 (en) | Web-based voice dialog interface | |
EP1267326B1 (en) | Artificial language generation | |
US7228278B2 (en) | Multi-slot dialog systems and methods | |
EP1217609A2 (en) | Speech recognition | |
US20030130849A1 (en) | Interactive dialogues | |
US8457973B2 (en) | Menu hierarchy skipping dialog for directed dialog speech recognition | |
WO2002049253A2 (en) | Method and interface for intelligent user-machine interaction | |
US20050131684A1 (en) | Computer generated prompting | |
EP2028646A1 (en) | Device for modifying and improving the behaviour of speech recognition systems | |
US6591236B2 (en) | Method and system for determining available and alternative speech commands | |
USH2187H1 (en) | System and method for gender identification in a speech application environment | |
CN112131359A (en) | Intention identification method based on graphical arrangement intelligent strategy and electronic equipment | |
CN112685545A (en) | Intelligent voice interaction method and system based on multi-core word matching | |
JP4103085B2 (en) | Interlingual dialogue processing method and apparatus, program, and recording medium | |
WO2002054333A2 (en) | A method and system for improved speech recognition | |
EP1301921B1 (en) | Interactive dialogues | |
US20060069560A1 (en) | Method and apparatus for controlling recognition results for speech recognition applications | |
Williams | Dialogue Management in a mixed-initiative, cooperative, spoken language system | |
US7054813B2 (en) | Automatic generation of efficient grammar for heading selection | |
Goldman et al. | Voice Portals—Where Theory Meets Practice |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |