WO2003034217A1 - Event queue managing system - Google Patents

Event queue managing system Download PDF

Info

Publication number
WO2003034217A1
Authority
WO
WIPO (PCT)
Prior art keywords
events
event queue
application
event
applications
Prior art date
Application number
PCT/GB2002/004357
Other languages
French (fr)
Inventor
Steven Pope
David Riddoch
Original Assignee
AT&T Laboratories-Cambridge Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Laboratories-Cambridge Limited
Publication of WO2003034217A1 publication Critical patent/WO2003034217A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications

Definitions

  • US 4,779,194 discloses an arrangement for managing event allocation in a system which is capable of scheduling a plurality of processes in a large computer system, such as a multi-processor system.
  • the delivery means may be arranged to respond in accordance with system load.

Abstract

In a computer system (1), an operating system (5) in a kernel (4) has a common event queue (6) in memory shared with a network interface controller (NIC) (3). The NIC (3) receives events such as data packets from a network (2) for delivery to applications (71, 7N) which are run one at a time on the computer (1). Each application has its own event queue (81, 8N) and, when this becomes empty, the currently running application requests delivery of events (30). The operating system (5) distributes (33-36) events from the common event queue (6) to all of the applications (71, 7N) for which events in the common queue (6) are destined.

Description

EVENT QUEUE MANAGING SYSTEM
The present invention relates to an event queue managing system and to a computer system incorporating such a management system. The invention also relates to a method of managing a common event queue, a program for programming a computer to perform such a method, a storage medium containing such a program, and a computer programmed by such a program.
An event queue is a means by which the operating system of a computer can inform applications, running one at a time on the or each central processing unit of the computer, of interesting events. In its most generic form, it is a queue of events between an event producer and an event consumer. Events can be anything of interest to any of the applications and a typical example is the arrival of data, for example from a network, destined for one of the applications.
Figure 1 of the accompanying drawings illustrates a computer 1 connected to a network 2 such as an Ethernet or a collapsed local area network (CLAN), both of which are well known in this technical field. The computer 1 has a network interface card (NIC) 3 for interfacing with the network 2 and a kernel 4 containing an operating system 5 of the computer and a common event queue 6. The computer 1 contains N applications 71, ..., 7N which are run one at a time on the computer (or one at a time on each processor of a multiprocessor computer) in accordance with a schedule arranged by a scheduler 9 within the operating system 5. Each of the applications has its own application event queue 81, ..., 8N.
When, for example, a data packet destined for one of the applications 71, ..., 7N is received from the network 2, the NIC 3 enters the data in the common event queue 6 of the kernel 4 (or supplies the data directly to a user level buffer and enters a synchronisation event in the common event queue 6) and generates an interrupt. An interrupt handler within the operating system 5 distributes the event to the application event queue 81, ..., 8N of the application 71, ..., 7N for which the event is destined. In practice, events received from the network 2 are stored in the common event queue 6 and the NIC 3 does not generate an interrupt for each received packet. Instead, the NIC 3 generates an interrupt if the common event queue 6 remains non-empty for a predetermined time or after a predetermined time period following delivery of an event to the queue 6. These time periods must be carefully chosen as a compromise between short time periods, which reduce the latency of the system but require the handling of relatively large numbers of interrupts and system calls, and longer periods, which increase the latency but reduce the number of interrupts and system calls.
A currently running application which wishes to receive further events from the common event queue 6 must make a system call to the operating system 5 requesting delivery of an event or events which are in the common event queue and which are destined for that application. The operating system 5 checks the common event queue 6 for events destined for the calling application and delivers any such events only to the calling application as the return part of the system call.
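This conventional strategy — a system call that delivers events only to the calling application — can be sketched as follows. This is a minimal, hedged Python illustration of the behaviour described above, not code from the patent; the event representation (dicts with a `dest` field) and function name are assumptions made for the example.

```python
from collections import deque

def deliver_to_caller(common_queue, app_queues, caller):
    """Conventional strategy: a system call from `caller` delivers only
    the events destined for that application; events destined for other
    applications remain in the common event queue."""
    kept = deque()
    delivered = 0
    while common_queue:
        event = common_queue.popleft()
        if event["dest"] == caller:
            app_queues[caller].append(event)  # delivered as the return part of the call
            delivered += 1
        else:
            kept.append(event)  # not for the caller: left queued
    common_queue.extend(kept)
    return delivered

# Example: two events destined for application "A", one for "B".
common = deque([{"dest": "A", "data": 1},
                {"dest": "B", "data": 2},
                {"dest": "A", "data": 3}])
queues = {"A": deque(), "B": deque()}
n = deliver_to_caller(common, queues, "A")
print(n, len(queues["A"]), len(queues["B"]), len(common))  # 2 2 0 1
```

Note that application B's event stays in the common queue until B itself makes a system call or an interrupt fires — the cost the invention aims to reduce.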
US 4779 194 discloses an arrangement for managing event allocation in a system which is capable of scheduling a plurality of processes in a large computer system, such as a multi-processor system.
JP 60 008 945 discloses an arrangement for consolidating a plurality of event queues into a single queue for a plurality of processes.
According to a first aspect of the invention, there is provided an event queue managing system comprising: a common event queue for receiving events, each of which has a destination application to which the event is to be delivered; and delivery means responsive to an event delivery request from a currently running one of a plurality of applications for delivering at least some of the events in the common event queue to the destination applications thereof.

According to a second aspect of the invention, there is provided a method of managing a common event queue for events, each of which has a destination application to which the event is to be delivered, the method comprising delivering, in response to an event delivery request from a currently running one of a plurality of applications, at least some of the events in the common event queue to the destination applications thereof.
The system may comprise memory arranged to be shared with the applications. The delivery means may be arranged to deliver the at least some events to the shared memory. The delivery means may be arranged to deliver each of the at least some events to the application event queue in the shared memory of the destination application.
The delivery means may be arranged to deliver all of the events whose destination applications are capable of event reception.
The delivery means may be arranged to deliver all of the events in the common event queue.
The delivery means may be arranged to remove all of the events in the common event queue in response to the event delivery request.
At least one of the events may indicate the arrival of data.
At least one of the events may indicate a direct memory access transfer completion.
At least one of the events may indicate the arrival of an out of band message.
The common event queue may be arranged to receive the events from a network. The system may comprise a network interface controller for supplying the events to the common event queue. The common event queue may be in a kernel. The common event queue may be in memory which is shared by the kernel and the network interface controller.
The delivery means may be arranged to be responsive to an interrupt from the network interface controller.
The delivery means may be arranged to deliver at least some of the events when the content of the common event queue reaches a first predetermined level. The first predetermined level may correspond to the common event queue being nearly full.
The delivery means may be arranged to deliver at least some of the events when the common event queue has remained non-empty for a predetermined time period.
The delivery means may be arranged to respond in accordance with application requirements.
The delivery means may be arranged to respond in accordance with system load.
According to a third aspect of the invention, there is provided a computer system comprising a managing system according to the first aspect of the invention, at least one central processing unit, and a plurality of applications arranged to run one at a time on the or each central processing unit.
The system may comprise a server arranged to be connected to a network.
Each of the applications may have an application event queue. The application event queues may reside in memory shared with an operating system. Each application may be arranged, when running, to produce an event delivery request when the content of the application event queue falls to a second predetermined level. The second predetermined level may correspond to the application event queue being empty. Each of the applications whose application event queue is non-empty may be indicated as being runnable.
The delivery means may be arranged, when delivering an event to any of the application event queues which is full, to increase the capacity of the application event queue.
According to a fourth aspect of the invention, there is provided a program for performing a method according to the second aspect of the invention.
According to a fifth aspect of the invention, there is provided a storage medium containing a program according to the fourth aspect of the invention. According to a sixth aspect of the invention, there is provided a computer programmed by a program according to the fourth aspect of the invention.
It is thus possible to provide an arrangement which reduces the number of interrupts and system calls required to handle the delivery of events. In particular, because a request for event delivery results in events being delivered not just to the application which made the request but also to other applications, the common event queue can be managed for longer before an interrupt is necessary. This can be achieved without substantially affecting the latency.
Whenever an application which is currently being run requires more events, for example because it has processed all of the events in its own event queue, it issues a request for delivery of further events. This results in the delivery of events not only to the application which issued the request but to other applications for which events are present in the common event queue. The same application may then be rescheduled for running, for example if its event queue now contains events for processing.
Alternatively, the application event queues in the applications may be checked and, for example, any application whose event queue is full or nearly full may be scheduled to be run in preference to the application which issued the request. All of the events in the common event queue may be delivered in response to the request provided they are deliverable. For example, an event may be deliverable if the application event queue of its destination application is not full, or is full but can be expanded to accommodate one or more further events. Events which cannot be delivered, for example because the application event queue is full and cannot be expanded, may be discarded. The application issuing the request may specify a filter for its queue, for example to avoid delivery of duplicate events, or to avoid "receiver livelock" by setting a limited time period for event delivery so as to ensure that not all of the current time slot for that application is used to deliver the events (in which case there may not be time to deliver all of the deliverable events from the common event queue).
This management technique for event delivery may be used in combination with known techniques, for example as described hereinbefore. Such an arrangement allows event delivery to be based on the most appropriate or most efficient strategy for the prevailing conditions.
An embodiment of the invention will be described with reference to the accompanying drawings, in which:
Figure 1 is a schematic diagram of a known type of computer system for managing event distribution;
Figure 2 is a schematic diagram of a computer system constituting an embodiment of the invention; and
Figure 3 and 4 are flow diagrams illustrating part of the operation of the computer system of Figure 2.
Figure 2 illustrates a computer system which differs from that shown in Figure 1 in that the common event queue 6 is provided in memory within the operating system 5 shared with the NIC 3, and the application event queues 81, ..., 8N are in memory shared between the operating system 5 and the applications 71, ..., 7N, as indicated by the broken lines 101, ..., 10N. Events such as data packets received from the network 2 by the NIC 3 are thus written directly into the shared memory, and the NIC 3 informs the operating system 5 when such events have been placed in the common event queue 6.
Figure 3 illustrates the processing loop of each of the applications 71, ..., 7N while running on the computer 1. The application is scheduled for running at 20 and a step 21 determines whether the application event queue is empty. If not, a step 22 takes the next event from the application event queue and a step 23 processes this event. When all of the processing which can be accomplished before requiring the next event has been completed, control returns to the step 21.
The scheduler 9 within the operating system 5 schedules a time slot for each application to run on the computer 1. If the application follows the loop comprising the steps 21 to 23 and does not empty its event queue by the end of the time slot, the scheduler 9 deschedules the application and schedules another of the applications for running on the computer 1. The scheduler 9 can also deschedule the application at any time in response to other system requirements, such as another interrupt having made a higher priority application runnable. However, if the step 21 determines that the application event queue is empty before the end of the scheduled time slot for the application, a step 24 issues a request for the delivery of events destined for the application. This request is received by the operating system 5, which then executes privileged code on behalf of the application.
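The application-side loop of Figure 3 can be sketched as follows. This is an illustrative Python rendering of steps 21 to 24, not code from the patent; `request_delivery` stands in for the system call to the operating system, and the loop bound is an assumption added so the sketch terminates.

```python
from collections import deque

def run_application(event_queue, request_delivery, max_events=100):
    """Sketch of the Figure 3 loop: process events from the application's
    own queue (steps 21-23); when the queue is empty, issue an event
    delivery request (step 24)."""
    handled = []
    for _ in range(max_events):           # bound the loop for the sketch
        if not event_queue:               # step 21: is the queue empty?
            request_delivery()            # step 24: ask the OS for more events
            if not event_queue:           # still nothing: block/yield
                break
        handled.append(event_queue.popleft())  # steps 22-23: take and process
    return handled

# Example: the "operating system" refills the queue once, then has nothing left.
app_queue = deque(["e1", "e2"])
refills = [deque(["e3"])]
def fake_request():
    if refills:
        app_queue.extend(refills.pop())

result = run_application(app_queue, fake_request)
print(result)  # ['e1', 'e2', 'e3']
```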
Figure 4 illustrates how the operating system 5 handles an event delivery request. The request is received by the operating system at 30. A "timeout" period may be specified by the application which issued the request so as to allow time within that application's time slot for the application to be run again, but this is not shown in Figure 4 for simplicity. A step 31 determines whether the common event queue 6 is empty. If not, the next event is taken from the common event queue at 32 and a step 33 determines whether this event is deliverable. In particular, the step 33 determines whether the application event queue in the destination application is full. If not, a step 34 delivers the event to the event queue in the destination application in the shared memory. If the event cannot be delivered, the application event queue is expanded at 35 and the event is then delivered by the step 34. Alternatively, if the application event queue is full, the event may be discarded, for example if the queue cannot be expanded, and the queue marked as "overflown".
A step 36 determines whether the application event queue is empty and the application is blocked. If so, the application is woken up at 37 and control returns to the step 31. If not, control returns directly to the step 31.
When the step 31 determines that the common event queue is empty, a step 38 determines whether the application which made the event delivery request has an empty application event queue and wishes to block. If not, control returns to the application at 39. Otherwise, a step 40 marks the application as blocked and a step 41 calls the scheduler 9 so as to determine which application should then be scheduled to run on the computer 1.
Following the return step 39, the application which issued the event delivery request runs for the remainder of its time slot or until it issues a further event delivery request, is marked as blocked, or is descheduled.
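The delivery flow of Figure 4 can be sketched as follows. This is a hedged Python illustration assembled from the description above, not code from the patent; the per-application record (a dict with `queue`, `capacity`, `expandable`, `blocked` and `overflown` fields) is an assumed data structure chosen for the example.

```python
from collections import deque

def handle_delivery_request(common_queue, apps):
    """Sketch of the Figure 4 flow: drain the common event queue
    (steps 31-32), deliver each event or expand the destination queue
    (steps 33-35), discard undeliverable events and mark the queue
    "overflown", and wake blocked applications that receive events
    (steps 36-37). Returns the applications woken."""
    woken = []
    while common_queue:                           # step 31: common queue empty?
        event = common_queue.popleft()            # step 32: take next event
        app = apps[event["dest"]]
        if len(app["queue"]) >= app["capacity"]:  # step 33: destination full?
            if app["expandable"]:
                app["capacity"] *= 2              # step 35: expand the queue
            else:
                app["overflown"] = True           # discard; mark overflow
                continue
        app["queue"].append(event)                # step 34: deliver to shared memory
        if app["blocked"]:                        # steps 36-37: wake if blocked
            app["blocked"] = False
            woken.append(event["dest"])
    return woken

# Example: A is blocked with an expandable queue; B's queue cannot grow.
apps = {
    "A": {"queue": deque(), "capacity": 1, "expandable": True,
          "blocked": True, "overflown": False},
    "B": {"queue": deque(), "capacity": 1, "expandable": False,
          "blocked": False, "overflown": False},
}
common = deque([{"dest": "A"}, {"dest": "A"}, {"dest": "B"}, {"dest": "B"}])
woken = handle_delivery_request(common, apps)
print(woken)  # ['A']
```

In this run, A receives both of its events (its queue is expanded for the second), B's second event is discarded with B marked "overflown", and only A, being blocked, is woken.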
The actual delivery of events is performed at system level via the memory shared between the kernel 4 and the applications 71, ..., 7N and so is not limited to delivery of events only to the application which made the request. However, an event delivery request requires a system call from the currently running application to the operating system.
In addition to performing this strategy for delivering events to applications, the operating system 5 has the ability to deliver events in accordance with other strategies. In particular, the NIC 3 determines when the common event queue 6 has remained non-empty for a predetermined period of time and generates an interrupt request for the operating system 5. The interrupt handler of the operating system 5 suspends the application which was running when the interrupt was generated and causes the operating system to deliver events to the applications.
Alternatively or additionally, the NIC 3 may raise an interrupt each time an event has been waiting in the common event queue 6 for a predetermined period of time. In these cases, the predetermined time period may be selected in accordance with the particular requirements. For example, this may vary dynamically depending on the requirements of each application and the system load. In one example where the system is lightly loaded and there is a requirement for low latency, the relevant time period or periods may be relatively short or even zero.
A further possibility is to generate an interrupt when a "high priority" event arrives at the common event queue 6. The operating system interrupt handler suspends the application which was running when the interrupt was raised and delivers at least that event to its destination application event queue.
The NIC 3 also monitors the common event queue 6 to determine if the queue fills to a predetermined threshold such that it is nearly full. If this happens, then an interrupt is raised so that the operating system 5 delivers some or all of the events in the common event queue 6 so as to avoid an overflow of the common event queue.
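The NIC-side interrupt conditions described above — the non-empty timeout and the nearly-full threshold — can be sketched as a single decision function. This is an illustrative assumption-laden Python sketch; the 90% "nearly full" fraction and the parameter names are chosen for the example, since the patent leaves the predetermined levels and periods open.

```python
def nic_should_interrupt(queue_len, capacity, nonempty_since, now,
                         timeout, full_fraction=0.9):
    """Decide whether the NIC should raise an interrupt: either the common
    event queue is nearly full (first predetermined level, to avoid
    overflow) or it has remained non-empty for the predetermined period."""
    if queue_len >= capacity * full_fraction:   # nearly full: deliver now
        return True
    if nonempty_since is not None and now - nonempty_since >= timeout:
        return True                             # queue non-empty too long
    return False

print(nic_should_interrupt(9, 10, None, 0.0, 0.5))   # True  (nearly full)
print(nic_should_interrupt(1, 10, 0.0, 1.0, 0.5))    # True  (timeout elapsed)
print(nic_should_interrupt(1, 10, 0.9, 1.0, 0.5))    # False (neither condition)
```

As the description notes, `timeout` (and potentially `full_fraction`) could be varied dynamically with application requirements and system load, down to zero for a lightly loaded, latency-sensitive system.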
It is thus possible to improve the delivery of events to applications. Because all applications receive events when any application issues a request for event delivery, fewer interrupts of and system calls to the operating system are generated. This reduces the processing time required to service interrupt requests and system calls. Also, the latency for all applications is improved. Where applications are not consuming their events at such a rate that delivery requests are issued from the running applications, the system reverts to the conventional event delivery strategy so that performance is no worse than for a conventional type of system. Thus, advantages in operation are achieved with substantially no penalties compared with known systems.

Claims

CLAIMS:
1. An event queue managing system comprising: a common event queue for receiving events, each of which has a destination application to which the event is to be delivered; and delivery means responsive to an event delivery request from a currently running one of a plurality of applications for delivering at least some of the events in the common event queue to the destination applications thereof.
2. A system as claimed in claim 1, comprising memory arranged to be shared with the applications.
3. A system as claimed in claim 2, in which the delivery means is arranged to deliver the at least some events to the shared memory.
4. A system as claimed in claim 2 or 3, in which the shared memory is arranged to contain application event queues of the applications.
5. A system as claimed in claim 4 when dependent on claim 3, in which the delivery means is arranged to deliver each of the at least some events to the application event queue in the shared memory of the destination application.
6. A system as claimed in any one of the preceding claims, in which the delivery means is arranged to deliver all of the events whose destination applications are capable of event reception.
7. A system as claimed in any one of the preceding claims, in which the delivery means is arranged to deliver all of the events in the common event queue.
8. A system as claimed in any one of the preceding claims, in which the delivery means is arranged to remove all of the events in the common event queue in response to the event delivery request.
9. A system as claimed in any one of the preceding claims, in which at least one of the events indicates the arrival of data.
10. A system as claimed in any one of the preceding claims, in which at least one of the events indicates a direct memory access transfer completion.
11. A system as claimed in any one of the preceding claims, in which at least one of the events indicates the arrival of an out of band message.
12. A system as claimed in any one of the preceding claims, in which the common event queue is arranged to receive the events from a network.
13. A system as claimed in claim 12, comprising a network interface controller for supplying the events to the common event queue.
14. A system as claimed in any one of the preceding claims, in which the common event queue is in a kernel.
15. A system as claimed in claim 14 when dependent on claim 13, in which the common event queue is in memory which is shared by the kernel and the network interface controller.
16. A system as claimed in claim 13 or 15, in which the delivery means is arranged to be responsive to an interrupt from the network interface controller.
17. A system as claimed in any one of the preceding claims, in which the delivery means is arranged to deliver at least some of the events when the content of the common event queue reaches a first predetermined level.
18. A system as claimed in claim 17, in which the first predetermined level corresponds to the common event queue being nearly full.
19. A system as claimed in any one of the preceding claims, in which the delivery means is arranged to deliver at least some of the events when the common event queue has remained non-empty for a predetermined time period.
20. A system as claimed in any one of the preceding claims, in which the delivery means is arranged to respond in accordance with application requirements.
21. A system as claimed in any one of the preceding claims, in which the delivery means is arranged to respond in accordance with system load.
22. A computer system comprising a managing system as claimed in any one of the preceding claims, at least one central processing unit, and a plurality of applications arranged to run one at a time on the or each central processing unit.
23. A system as claimed in claim 22, comprising a server arranged to be connected to a network.
24. A system as claimed in claim 22 or 23, in which each of the applications has an application event queue.
25. A system as claimed in claim 24, in which the application event queues reside in memory shared with an operating system.
26. A system as claimed in claim 24 or 25, in which each application is arranged, when running, to produce an event delivery request when the content of the application event queue falls to a second predetermined level.
27. A system as claimed in claim 26, in which the second predetermined level corresponds to the application event queue being empty.
28. A system as claimed in any one of claims 24 to 27, in which each of the applications whose application event queue is non-empty is indicated as being runnable.
29. A system as claimed in any one of claims 24 to 28, in which the delivery means is arranged, when delivering an event to any of the application event queues which is full, to increase the capacity of the application event queue.
30. A method of managing a common event queue for events, each of which has a destination application to which the event is to be delivered, the method comprising delivering, in response to an event delivery request from a currently running one of a plurality of applications, at least some of the events in the common event queue to the destination applications thereof.
31. A method as claimed in claim 30, comprising delivering the at least some events to memory shared with the applications.
32. A method as claimed in claim 31, in which the shared memory contains application event queues of the applications.
33. A method as claimed in claim 32, in which each of the at least some events is delivered to the application event queue in the shared memory of the destination application.
34. A method as claimed in any one of claims 30 to 33, comprising delivering all of the events whose destination applications are capable of event reception.
35. A method as claimed in any one of claims 30 to 34, comprising delivering all of the events in the common event queue.
36. A method as claimed in any one of claims 30 to 35, comprising removing all of the events in the common event queue in response to the event delivery request.
37. A method as claimed in any one of claims 30 to 36, comprising delivering at least some of the events in response to an interrupt from a network interface controller.
38. A method as claimed in any one of claims 30 to 37, in which at least some of the events are delivered when the content of the common event queue reaches a first predetermined level.
39. A method as claimed in claim 38, in which the first predetermined level corresponds to the common event queue being nearly full.
40. A method as claimed in any one of claims 30 to 39, in which at least some of the events are delivered when the common event queue has remained non-empty for a predetermined time period.
41. A method as claimed in any one of claims 30 to 40, comprising indicating as runnable each application having a non-empty application event queue.
42. A program for programming a computer to perform a method as claimed in any one of claims 30 to 41.
43. A storage medium containing a program as claimed in claim 42.
44. A computer programmed by a program as claimed in claim 42.
PCT/GB2002/004357 2001-10-11 2002-09-26 Event queue managing system WO2003034217A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0124409.4 2001-10-11
GB0124409A GB2380822B (en) 2001-10-11 2001-10-11 Event queue managing system

Publications (1)

Publication Number Publication Date
WO2003034217A1 true WO2003034217A1 (en) 2003-04-24

Family

ID=9923631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2002/004357 WO2003034217A1 (en) 2001-10-11 2002-09-26 Event queue managing system

Country Status (2)

Country Link
GB (1) GB2380822B (en)
WO (1) WO2003034217A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010148837A1 (en) * 2009-12-25 2010-12-29 中兴通讯股份有限公司 Method and device for unified management of multiple applications on mobile terminal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5938708A (en) * 1997-07-03 1999-08-17 Trw Inc. Vehicle computer system having a non-interrupt cooperative multi-tasking kernel and a method of controlling a plurality of vehicle processes

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS608945A (en) * 1983-06-29 1985-01-17 Nippon Telegr & Teleph Corp <Ntt> Queue controlling circuit
US4779194A (en) * 1985-10-15 1988-10-18 Unisys Corporation Event allocation mechanism for a large data processing system
JPH0756752A (en) * 1993-08-13 1995-03-03 Nec Corp Itemized common queue management system
US6189047B1 (en) * 1997-03-20 2001-02-13 Sun Microsystems, Inc. Apparatus and method for monitoring event queue operations with pluggable event queues

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5938708A (en) * 1997-07-03 1999-08-17 Trw Inc. Vehicle computer system having a non-interrupt cooperative multi-tasking kernel and a method of controlling a plurality of vehicle processes

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MICHAEL BECK, HARALD BÖHME, MIRKO DZIADZKA, ULRICH KUNITZ, ROBERT MAGNUS, DIRK VERWORNER: "Linux-Kernel-Programmierung: Algorithmen und Struktur der Version 1", 1994, ADDISON-WESLEY, BONN, ISBN: 3-89319-803-2, XP002228013 *
WINFRIED KALFA: "Betriebssysteme", 1988, AKADEMIE-VERLAG, BERLIN, ISBN: 3-05-500477-9, XP002228012 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010148837A1 (en) * 2009-12-25 2010-12-29 中兴通讯股份有限公司 Method and device for unified management of multiple applications on mobile terminal
CN101772212B (en) * 2009-12-25 2013-06-12 中兴通讯股份有限公司 Method and device for carrying out unified management on multiple applications on mobile terminal

Also Published As

Publication number Publication date
GB2380822A (en) 2003-04-16
GB2380822B (en) 2005-03-30
GB0124409D0 (en) 2001-11-28

Similar Documents

Publication Publication Date Title
US5247671A (en) Scalable schedules for serial communications controller in data processing systems
EP0617361B1 (en) Scheduling method and apparatus for a communication network
CN101354664B (en) Method and apparatus for interrupting load equilibrium of multi-core processor
US7610413B2 (en) Queue depth management for communication between host and peripheral device
US6237058B1 (en) Interrupt load distribution system for shared bus type multiprocessor system and interrupt load distribution method
EP1856623B1 (en) Including descriptor queue empty events in completion events
US7562366B2 (en) Transmit completion event batching
US8612986B2 (en) Computer program product for scheduling ready threads in a multiprocessor computer based on an interrupt mask flag value associated with a thread and a current processor priority register value
US5875329A (en) Intelligent batching of distributed messages
US6763520B1 (en) Fair assignment of processing resources to queued requests
US20100229179A1 (en) System and method for scheduling thread execution
US20060037021A1 (en) System, apparatus and method of adaptively queueing processes for execution scheduling
US6993613B2 (en) Methods and apparatus for reducing receive interrupts via paced ingress indication
US6907606B1 (en) Method for implementing event transfer system of real time operating system
EP2171934B1 (en) Method and apparatus for data processing using queuing
EP2383659B1 (en) Queue depth management for communication between host and peripheral device
US8032658B2 (en) Computer architecture and process for implementing a virtual vertical perimeter framework for an overloaded CPU having multiple network interfaces
WO2003034217A1 (en) Event queue managing system
US8869171B2 (en) Low-latency communications
GB2348303A (en) Managing multiple task execution according to the task loading of a processor
CN114661415A (en) Scheduling method and computer system
WO2002023329A2 (en) Processor resource scheduler and method
CN116320031A (en) Server response method, device and medium
Hansen et al. Prioritizing network event handling in clusters of workstations
JPH04215160A (en) Information processor

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG US UZ VC VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: COMMUNICATION PURSUANT TO RULE 69(1) EPC

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP