US20040100982A1 - Distributed real-time operating system - Google Patents
- Publication number
- US20040100982A1 (application US10/729,478)
- Authority
- US
- United States
- Prior art keywords
- interrupt
- processing
- current
- current interrupt
- message
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
- G05B19/0421—Multiprocessor system
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/05—Programmable logic controllers, e.g. simulating logic interconnections of signals according to ladder diagrams or function charts
- G05B19/052—Linking several PLC's
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
- G05B19/41865—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by job scheduling, process planning, material flow
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4812—Task transfer initiation or dispatching by interrupt, e.g. masked
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/4887—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5044—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/6215—Individual queue per QOS, rate or priority
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/624—Altering the ordering of packets in an individual queue
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9063—Intermediate storage in different physical parts of a node or terminal
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/10—Plc systems
- G05B2219/13—Plc programming
- G05B2219/13001—Interrupt handling
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/10—Plc systems
- G05B2219/13—Plc programming
- G05B2219/13087—Separate interrupt controller for modules
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/25—Pc structure of the system
- G05B2219/25411—Priority interrupt
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/31—From computer integrated manufacturing till monitoring
- G05B2219/31218—Scheduling communication on bus
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/32—Operator till task planning
- G05B2219/32266—Priority orders
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Definitions
- The present invention relates to industrial controllers for controlling industrial processes and equipment, and more generally to an operating system suitable for a distributed industrial control system having multiple processing nodes spatially separated about a factory or the like.
- Industrial controllers are special purpose computers used for controlling industrial processes and manufacturing equipment. Under the direction of a stored control program, the industrial controller examines a series of inputs reflecting the status of the controlled process and, in response, adjusts a series of outputs controlling the industrial process.
- The inputs and outputs may be binary, that is, on or off, or analog, providing a value within a continuous range of values.
- Centralized industrial controllers may receive electrical inputs from the controlled process through remote input/output (I/O) modules communicating with the industrial controller over a high-speed communication network. Outputs generated by the industrial controller are likewise transmitted over the network to the I/O circuits to be communicated to the controlled equipment.
- The network provides a simplified means of communicating signals over a factory environment without multiple wires and the attendant cost of installation.
- Effective real-time control is provided by executing the control program repeatedly in high speed “scan” cycles. During each scan cycle each input is read and new outputs are computed. Together with the high-speed communications network, this ensures the response of the control program to changes in the inputs and its generation of outputs will be rapid. All information is dealt with centrally by a well-characterized processor and communicated over a known communication network to yield predictable delay times critical to deterministic control.
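The scan-cycle pattern described above can be sketched as a simple function; the names and the trivial echo-style control program here are illustrative only, not from the patent.

```python
def scan_cycle(read_inputs, control_program, write_outputs):
    """Run one scan: sample all inputs, evaluate the control program,
    then update the outputs."""
    inputs = read_inputs()
    outputs = control_program(inputs)
    write_outputs(outputs)
    return outputs

# Example: a trivial control program echoing input A to output D.
result = scan_cycle(lambda: {"A": True},
                    lambda ins: {"D": ins["A"]},
                    lambda outs: None)
assert result == {"D": True}
```

Running this body repeatedly at a fixed period is what bounds the controller's response time to input changes.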
- The centralized industrial controller architecture is not readily scalable, and with foreseeably large and complex control problems, unacceptable delays will result from the large amount of data that must be communicated to a central location and from the demands placed on the centralized processor. For this reason, it may be desirable to adopt a distributed control architecture in which multiple processors perform portions of the control program at spatially separate locations about the factory. By distributing the control, multiple processors may be brought to bear on the control problem, reducing the burden on any individual processor and the amount of input and output data that must be transmitted.
- The distributed control model is not as well characterized with respect to guaranteeing the performance required for real-time control. Delay in the execution of a portion of the control program by one processor can be fatal to successful real-time execution of the control program, and because the demand for individual processor resources fluctuates, a single processor may be unexpectedly overloaded. This is particularly true when a number of different and independent application programs are executed on the distributed controller and the application programs compete for the same set of physical hardware resources.
- One weak point in the distributed control model is the introduction of communication delays into the execution of control tasks. These communication delays result from the need for different portions of the control program, on different spatially separated hardware, to communicate with each other.
- In a typical first-in/first-out (FIFO) communication system, where outbound messages are queued according to their time of arrival at the communication circuit, a message with a high priority, as may be necessary for the prompt completion of a control task, will always be transmitted later than an earlier-arriving message of low priority. This can cause a form of unbounded priority inversion in which low-priority tasks block high-priority tasks, and this may upset the timing requirements of the real-time control program.
- A second problem with the distributed control model arises from operating distributed control devices in a multi-tasking mode, shared among different program tasks. Such multi-tasking is necessary for efficient use of hardware resources.
- Present real-time multitasking operating systems allow the assignment of a priority to a given task. The user selects the necessary priority levels for each task to ensure that the timing constraints implicit in the real-time control process are met.
- The present invention relates to an interrupt manager for use in a distributed control system.
- The interrupt manager includes circuitry that (i) receives interrupt signals including a current interrupt, (ii) determines whether the current interrupt can be processed without delaying processing of a non-interrupt task beyond a predetermined time, and (iii) inhibits, at least temporarily, processing of the current interrupt when it is determined that the processing of the current interrupt would delay processing of the non-interrupt task beyond the predetermined time.
- The present invention additionally relates to a method of handling interrupts for use with a processor in a distributed control system.
- The method includes receiving a current interrupt signal and determining whether processing of the current interrupt signal would delay processing of a non-interrupt task beyond a predetermined time.
- The method further includes inhibiting, at least temporarily, the processing of the current interrupt signal when it is determined that the processing would delay the processing of the non-interrupt task beyond the predetermined time.
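The admission test of step (ii) can be sketched as follows: the interrupt is processed only if its estimated cost, added to the non-interrupt task's remaining work, still lets the task finish by the predetermined time. The names and the simple time model are assumptions for illustration, not the patent's circuitry.

```python
def admit_interrupt(now, interrupt_cost, task_remaining, task_deadline):
    """Return True if handling the current interrupt immediately would
    still let the non-interrupt task finish by its predetermined time."""
    projected_task_finish = now + interrupt_cost + task_remaining
    return projected_task_finish <= task_deadline

def handle(now, interrupt, pending, interrupt_cost,
           task_remaining, task_deadline):
    """Process the interrupt now, or inhibit it (at least temporarily)
    by deferring it to a pending list for later service."""
    if admit_interrupt(now, interrupt_cost, task_remaining, task_deadline):
        return "processed"
    pending.append(interrupt)   # revisited once the task completes
    return "inhibited"
```

For example, an interrupt costing 5 time units is inhibited when the task still needs 4 units against a deadline of 8.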
- The present invention also relates to a method of scheduling messages being transmitted on a network among spatially-distributed control components of a distributed control system.
- The method includes receiving a message, receiving a relative timing constraint concerning the message, where the relative timing constraint is indicative of an amount of time, and inserting the message into a queue at a location that is a function of the relative timing constraint.
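The insertion step can be pictured with a deadline-ordered queue: each message's position is derived from its arrival time plus its relative timing constraint, rather than from arrival order alone. This is a hypothetical rendering, not the patent's implementation.

```python
import bisect

class TimedQueue:
    """Messages ordered by absolute deadline, i.e. arrival time plus
    the message's relative timing constraint."""
    def __init__(self):
        self._items = []                 # kept sorted by deadline

    def enqueue(self, message, arrival, relative_constraint):
        deadline = arrival + relative_constraint
        # The insertion point is a function of the timing constraint,
        # not of arrival order.
        bisect.insort(self._items, (deadline, message))

    def dequeue(self):
        return self._items.pop(0)[1]     # earliest deadline first

q = TimedQueue()
q.enqueue("low-priority", arrival=0, relative_constraint=100)
q.enqueue("urgent", arrival=5, relative_constraint=10)  # deadline 15
assert q.dequeue() == "urgent"   # overtakes the earlier arrival
```

A later-arriving message with a tight constraint is thus transmitted ahead of an earlier, looser one, avoiding the FIFO priority-inversion problem described above.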
- The present invention additionally relates to a method of coordinating a new control application program with other control application programs being performed on a distributed real-time operating system, where the distributed real-time operating system is for use with a control system having spatially separated control hardware resources.
- The method includes receiving the new control application program, and identifying control hardware resources from a resource list matching control hardware resources required by the new control application program.
- The method further includes allocating portions of a constraint associated with the new control application program to each identified control hardware resource, and determining whether the allocated portions of the constraint of the new control application program can be met while requirements of the other control application programs also are met.
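One way to picture this check, as an illustrative model only: divide the application's constraint equally among the identified resources and ask whether each resource can commit to its share given its existing obligations. Equal division is just one simple policy.

```python
def admit_application(constraint_ms, resource_latency_ms):
    """resource_latency_ms maps each identified hardware resource to
    the worst-case time it can promise for its portion of the work.
    The application is admitted only if every resource commits."""
    share = constraint_ms / len(resource_latency_ms)
    commitments = {name: latency <= share
                   for name, latency in resource_latency_ms.items()}
    return all(commitments.values()), commitments

# Three nodes, 9 ms overall constraint -> 3 ms share each.
ok, _ = admit_application(9.0, {"node_12a": 2.0,
                                "node_12b": 3.0,
                                "node_12c": 2.5})
assert ok
```

If any single resource cannot meet its allocated portion, the whole enrollment fails, which is the "determining" step above.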
- The present invention further relates to a method of operating an application program on a distributed control system having a plurality of hardware resources.
- The method includes receiving high-level requirements concerning the application program, and determining low-level requirements based upon the high-level requirements.
- The method further includes allocating at least one of the high-level requirements and the low-level requirements among at least some of the plurality of hardware resources, and operating the application program in accordance with the allocated requirements.
- FIG. 1 is a simplified diagram of a distributed control system employing two end nodes and an intervening communication node and showing the processor, memory and communication resources for each node;
- FIG. 2 is a block diagram showing the memory resources of each node of FIG. 1 as allocated to a distributed real-time operating system and different application programs;
- FIG. 3 is an expanded block diagram of the distributed operating system of FIG. 2, which includes an application list listing application programs to be executed by the distributed control system, a topology map showing the topology of the connection of the hardware resources of the nodes of FIG. 1, a resource list detailing the allocation of the hardware resources to the application programs and the statistics of their use by each of the application programs, and the executable distributed real-time operating system code;
- FIG. 4 is a pictorial representation of a simplified application program attached to its high-level requirements;
- FIG. 5 is a flow chart of the operation of the distributed real-time operating system code of FIG. 3 showing steps upon accepting a new application program to determine the low-level hardware resource requirements and to seek commitments from those hardware resources for the requirements of the new application program;
- FIG. 6 is a detailed version of the flow chart of FIG. 5 showing the process of allocating low-level requirements to hardware resources;
- FIG. 7 is a block diagram detailing the step of the flow chart of FIG. 5 of responding to requests for commitment of hardware resources;
- FIG. 8 a is a detailed view of the communication circuit of FIG. 1 showing a messaging queue together with a scheduler and a history table as may be implemented via an operating system, and showing a message received by the communication circuit over the bus of FIG. 1;
- FIG. 8 b is a figure similar to FIG. 8 a showing the scheduler of FIG. 8 a as implemented for multi-tasking of the processors of FIG. 1;
- FIG. 9 is a flow chart showing the steps of enrolling the message of FIG. 8 a or the tasks of FIG. 8 b into a queue;
- FIG. 10 is a schematic representation of the interrupt handling system provided by the operating system and processor of FIGS. 1 and 2; and
- FIG. 11 is a flow chart showing the steps of operation of the interrupt handling system of FIG. 10.
- A distributed control system 10 includes multiple nodes 12 a , 12 b and 12 c for executing a control program comprised of multiple applications.
- Control end nodes 12 a and 12 c include signal lines 14 communicating between the end nodes 12 a and 12 c and portions 16 a and 16 b of a controlled process.
- Controlled process portions 16 a and 16 b may communicate by a physical process flow or other paths of communication, indicated generally as dotted line 18 .
- End node 12 a may receive signals A and B from process 16 a .
- End node 12 c may receive signal C from process 16 b and provide an output signal D to process 16 b as part of a generalized control strategy.
- End nodes 12 a and 12 c include interface circuitry 20 a and 20 c , respectively, communicating signals on signal lines 14 to internal buses 22 a and 22 c , respectively.
- The internal buses 22 a and 22 c may communicate with the hardware resources of memory 24 a , processor 26 a and communication card 28 a (for end node 12 a ) and memory 24 c , processor 26 c and network communication card 28 c (for end node 12 c ).
- Communication card 28 a may communicate via network media 30 a to a communication card 28 b on node 12 b , which may communicate via internal bus 22 b to memory 24 b and processor 26 b and to a second network communication card 28 b ′ connected to media 30 b , which in turn communicates with communication card 28 c .
- A portion of the application program executed by processor 26 c and residing in memory 24 c would detect the state of input C and compare it with the states of signals A and B in the received message to produce output signal D.
- The distributed real-time operating system 32 of the present invention may be centrally located in one node 12 or, in keeping with the distributed nature of the control system, distributed among the nodes 12 a , 12 b and 12 c .
- The portions of the operating system 32 are stored in each of the memories 24 a , 24 b and 24 c and intercommunicate to operate as a single system.
- A portion of the operating system 32 that provides a modeling of the hardware resources is located in the particular node 12 a , 12 b or 12 c associated with those hardware resources.
- For example, the hardware resource of memory 24 a in node 12 a would be modeled by a portion of the operating system 32 held in memory 24 a .
- Memories 24 a , 24 b and 24 c include various application programs 34 , or portions of those application programs 34 , as may be allocated to their respective nodes.
- An application list 36 lists the application programs 34 that have been accepted for execution by the distributed control system 10 . Contained in the application list 36 are application identifiers 38 and high-level requirements 40 of the application programs, as will be described below.
- A hardware resource list 44 provides (as depicted in a first column) a comprehensive listing of each hardware resource of the distributed control system 10 , indicating a quantitative measure of that resource.
- Quantitative measurements may be provided in terms of millions of instructions per second (MIPS) for processors 26 , numbers of megabytes for memories 24 and megabaud of bandwidth for networks. While these are the principal hardware resources and their measures, it will be understood that other hardware resources may also be enrolled in this first column and other units of measure may be used.
- Generally, the measures are of “bandwidth”, a term encompassing both an indication of the amount of data and the frequency of occurrence of the data that must be processed.
- A second column of the hardware resource list 44 provides an allocation of the quantitative measure of the resource of a particular row to one or more application programs from the application list 36 , identified by an application name.
- The application name may match the application identifier 38 of the application list 36 , and the indicated allocated quantitative measure will typically be a portion of the quantitative measure of the first column.
- A third column of the hardware resource list 44 provides the actual usage of the hardware resource by the application program, as may be obtained by collecting statistics during running of the application programs. This measure will be statistical in nature and may be given in the units of the quantitative measure for the hardware resource provided in the first column.
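The three columns just described might be rendered in memory roughly as follows; the resource names and figures are invented for illustration.

```python
# Hypothetical in-memory form of the hardware resource list 44:
# column 1 = total quantitative measure, column 2 = per-application
# allocation, column 3 = statistically observed usage.
resource_list = {
    "processor_26a": {"capacity": 100.0,          # MIPS
                      "allocated": {"app_1": 40.0},
                      "used": {"app_1": 31.5}},
    "memory_24a":    {"capacity": 16.0,           # megabytes
                      "allocated": {"app_1": 4.0},
                      "used": {"app_1": 3.2}},
}

def remaining_capacity(entry):
    """Unallocated share of a resource's quantitative measure."""
    return entry["capacity"] - sum(entry["allocated"].values())

assert remaining_capacity(resource_list["processor_26a"]) == 60.0
```

The gap between the allocated and used columns is what lets the operating system compare conservative commitments against actual running statistics.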
- The operating system 32 also includes a topology map 42 indicating the connection of the nodes 12 a , 12 b and 12 c through the network 31 and the location of the hardware resources of the hardware resource list 44 in that topology.
- The operating system also includes operating system code 48 , which may read the application list 36 , the topology map 42 and the hardware resource list 44 to ensure proper operation of the distributed control system 10 .
- Each application program enrolled in the application list 36 is associated with high-level requirements 40 , which will be used by the operating system code 48 .
- Generally, these high-level requirements 40 will be determined by the programmer based on the programmer's knowledge of the controlled process 16 and its requirements.
- The application program 34 may include a single ladder rung 50 (shown in FIG. 4) providing for the logical ANDing of inputs A, B and C to produce an output D.
- The high-level requirements 40 would include hardware requirements for inputs and outputs A, B, C and D.
- The high-level requirements 40 may further include a “completion-timing constraint” t 1 indicating a constraint on the execution time of the application program 34 needed for real-time control.
- The completion-timing constraint is a maximum period of time that may elapse between the occurrence of the last of inputs A, B and C to become logically true and the occurrence of the output signal D.
- The high-level requirements 40 may also include a message size, in this case the size of a message AB which must be sent over the network 31 , or this may be deduced automatically through use of the topology map 42 and an implicit allocation of the hardware.
- The high-level requirements 40 include an “inter-arrival period” t 2 reflecting an assumption about the statistics of the controlled process 16 a in demanding execution of the application program 34 .
- Generally, the inter-arrival period t 2 need be no greater than the scanning period of the input circuitry 20 a and 20 c , which may be less than the possible bandwidth of the signals A, B and C but which will provide acceptable real-time response.
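The rung of FIG. 4 and its attached high-level requirements can be written out directly; the numeric values of t 1 , t 2 and the message size below are invented for illustration.

```python
def rung_50(a, b, c):
    """Logical ANDing of inputs A, B and C to produce output D."""
    return a and b and c

high_level_requirements_40 = {
    "hardware": ["A", "B", "C", "D"],  # required inputs and outputs
    "completion_timing_t1_ms": 9,      # max time from last true input to D
    "message_size_bytes": 8,           # message AB sent over the network
    "inter_arrival_t2_ms": 50,         # assumed demand statistics
}

assert rung_50(True, True, True)       # D becomes true
assert not rung_50(True, False, True)  # any false input holds D false
```

These four entries correspond to the four principal high-level requirements discussed below: hardware, completion timing, message size and inter-arrival period.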
- The operating system code 48 ensures proper operation of the distributed control system 10 by checking that each newly enrolled application program 34 will operate acceptably with the available hardware resources. Before any new application program 34 is added to the application list 36 , the operating system code 48 intervenes to ensure that the necessary hardware resources are available and that time guarantees may be provided for execution of the application program.
- The operating system code 48 checks that the high-level requirements 40 have been identified for the application program. This identification may read a prepared file of the high-level requirements 40 , may solicit the programmer to input the necessary information about the high-level requirements 40 through a menu structure or the like, or may be semiautomatic, involving a review of the application program 34 for its use of hardware resources and the like. As shown and described above with respect to FIG. 4, principally four high-level requirements are anticipated: hardware requirements, completion-timing constraints, message sizes and the inter-arrival period. Other high-level requirements are possible, including the need for remote system services, the type of priority of the application, etc.
- The high-level requirements 40 are used to determine low-level requirements 60 .
- These low-level requirements may generally be “bandwidths” of particular hardware components, such as are listed in the first column of the hardware resource list 44 .
- Generally, the low-level requirements will be a simple function of the high-level requirements 40 and the objective characteristics of the application program 34 , the function depending on a priori knowledge about the hardware resource.
- For example, the amount of memory will be a function of the application program size, whereas the network bandwidth will be a function of the message size and the inter-arrival period t 2 , and the processor bandwidth will be a function of the application program size and the inter-arrival period t 2 , as will be evident to those of ordinary skill in the art.
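Those functions can be sketched as simple conservative formulas. The scaling constant relating program size to instruction count is an assumption for illustration; any a priori hardware knowledge could replace it.

```python
def low_level_requirements_60(program_kb, message_bytes, inter_arrival_s,
                              instructions_per_kb=50_000):
    """Conservative low-level 'bandwidth' estimates derived from the
    high-level requirements (illustrative scaling only)."""
    return {
        "memory_kb": program_kb,                             # storage
        "network_bps": message_bytes * 8 / inter_arrival_s,  # size / t2
        "processor_ips": program_kb * instructions_per_kb
                         / inter_arrival_s,                  # work / t2
    }

reqs = low_level_requirements_60(program_kb=10, message_bytes=100,
                                 inter_arrival_s=0.1)
assert reqs["network_bps"] == 8000.0   # 100 bytes every 100 ms
```

As the text notes next, the estimates need not be precise, only conservative.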
- It is not necessary that the computation of the low-level requirements 60 be precise, so long as it is a conservative estimate of the low-level resources required.
- The distinctions between high-level requirements 40 and low-level requirements 60 are not fixed, and in fact some high-level requirements, for example message size, may be treated as low-level requirements deduced from the topology map 42 , as has been described.
- The process block 62 includes sub-process block 63 , where the low-level requirements abstracted at process block 58 are received.
- End nodes 12 a and 12 c are identified based on their hardware links to inputs A, B and C and output D, and a tentative allocation of the application program 34 to those nodes and an allocation of necessary processor bandwidth are made to these principal nodes 12 a and 12 c .
- The intermediary node 12 b is identified, together with the necessary network 31 , and an allocation of network space is made based on the message size and the inter-arrival period.
- The burden of storing and executing the application program is then divided at process block 70 , allocating to each of memories 24 a and 24 c (and possibly 24 b ) a certain amount of space for the application program 34 , and to processors 26 a and 26 c (and possibly 26 b ) a certain amount of their bandwidth for the execution of the portions of the application program 34 , based on the size of the application program 34 and the inter-arrival period t 2 .
- Network cards 28 a , 28 b ′, 28 b and 28 c also receive allocations based on the message size and the inter-arrival period t 2 .
- The allocation of the application program 34 can include intermediate nodes 12 b serving as bridges and routers where no computation will take place. For this reason, instances or portions of the operating system code 48 will also be associated with each of these implicit hardware resources.
- The completion-timing constraint t 1 for the application program 34 is divided among the primary hardware to which the application program 34 is allocated and the implicit hardware used to provide for communication between the possibly separated portions of the application program 34 .
- For example, if the completion-timing constraint t 1 is nine milliseconds, a guarantee of time to produce an output after the necessary input signals are received, then each node 12 a - c will receive three milliseconds of that allocation as a time obligation.
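The equal division in the nine-millisecond example amounts to the sketch below; equal split is one simple policy, and unequal splits weighted by expected per-node latency would work the same way.

```python
def divide_constraint(t1_ms, node_ids):
    """Split the completion-timing constraint t1 into per-node time
    obligations (equal division, as in the 9 ms example above)."""
    share = t1_ms / len(node_ids)
    return {node: share for node in node_ids}

obligations = divide_constraint(9, ["12a", "12b", "12c"])
assert obligations == {"12a": 3.0, "12b": 3.0, "12c": 3.0}
```

Each node's share becomes the time obligation quoted in the commitment request described next.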
- A request for a commitment based on this allocation, including the allocated time obligations and other low-level requirements 60 , is made to the portions of the operating system code 48 associated with each hardware element.
- The portions of the operating system code 48 associated with each node 12 a - c and their hardware resources review the resources requested of them in processor, network and memory bandwidth, together with the allocated time obligations, and report back as to whether those commitments may be made while keeping within the allocated time obligation. If not, an error is reported at process block 66 .
- The code portions responsible for this determination reside with the hardware resources which they allocate and thus may be provided with the necessary models of the hardware resources by the manufacturers.
- This commitment process is generally represented by decision block 64 and is shown in more detail in FIG. 7 having a first process block 74 where a commitment request is received designating particular hardware resources and required bandwidths.
- the portion of the operating system code 48 associated with the hardware element allocates the necessary hardware portion from the hardware resource list 44, possibly modeling it, as shown in process block 78, together with the other allocated resources of the resource list representing previously enrolled application programs 34, to see if the allocation can be made.
- the allocation may simply be a checking of the hardware resource list 44 to see if sufficient memory is available.
- the modeling may determine whether scheduling may be performed that will satisfy the necessary completion-timing constraints t 1 given the inter-arrival period t 2 of the particular application and of other applications.
- a master hardware resource list 44 is updated and the application program is enrolled in the application list 36 to run.
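As described above, the commitment of decision block 64 reduces to an admission check against the capacities and prior allocations held in the hardware resource list 44. The following sketch illustrates one such check under simplifying assumptions; the function name, dictionary layout and numeric figures are hypothetical and do not appear in the disclosure.

```python
# Illustrative admission check: each resource tracks its total capacity
# and the bandwidth already committed to previously enrolled programs.

def try_commit(resource_list, request):
    """Grant a commitment only if every requested resource has enough
    uncommitted bandwidth; otherwise report failure (process block 66)."""
    # First pass: verify that all requests can be honored.
    for name, needed in request.items():
        res = resource_list[name]
        if res["capacity"] - res["allocated"] < needed:
            return False
    # All requests fit: record the allocations in the master resource list.
    for name, needed in request.items():
        resource_list[name]["allocated"] += needed
    return True

resources = {
    "processor_26a": {"capacity": 100.0, "allocated": 60.0},  # MIPs
    "memory_24a":    {"capacity": 32.0,  "allocated": 20.0},  # megabytes
    "network_30a":   {"capacity": 10.0,  "allocated": 4.0},   # megabaud
}
print(try_commit(resources, {"processor_26a": 30.0, "memory_24a": 8.0}))  # True
print(try_commit(resources, {"network_30a": 7.0}))                        # False
```

A failed request leaves the resource list untouched, so the requesting node may retry with a smaller allocation or report an error as at process block 66.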
- the communication card 28 will typically include a message queue 90 into which messages 91 are placed prior to being transmitted via a receiver/transmitter 92 onto the network 31 .
- a typical network queuing strategy of First-In-First-Out (FIFO) will introduce a variable delay in the transmission of messages caused by the amount of message traffic at any given time.
- messages which require completion on a timely basis and which therefore have a high priority may nevertheless be queued behind lower level messages without time criticality.
- because priority and time constraints are disregarded, even if ample network bandwidth is available and suitable priority is attached to messages 91 associated with control tasks, the completion timing constraints t 1 cannot be guaranteed.
- the communication card 28 of the present invention includes a queue-level scheduler 94 which may receive messages 91 and place them in the queue 90 in a desired order of execution that is independent of the arrival time of the message 91 .
- the scheduler 94 receives the messages 91 and places them in the queue 90 and includes memory 98 holding a history of execution of messages identified to their tasks as will be described below.
- the blocks of the queue 90, the scheduler 94 and the memory 98 are realized as a portion of the operating system 32; however, they may alternatively be realized as an application specific integrated circuit (ASIC) as will be understood in the art.
- Each message 91 associated with an application program for which a time constraint exists (guaranteed tasks) to be transmitted by the communication card 28 will contain conventional message data 99 such as may include substantive data of the message and the routing information of the message necessary for transmission on the network 31 .
- the message 91 will also include scheduling data 100 which may be physically attached to the message data 99 or associated with the message data 99 by the operating system 32 .
- the scheduling data 100 includes a user-assigned priority 96 generally indicating a high priority for messages associated with time critical tasks.
- the priority 96 is taken from the priority of the application program 34 of which the message 91 forms a part and is determined, prior to running the application program, based on the importance of its control task as determined by the user.
- the scheduling data 100 may also include an execution period (EP) indicating the length of time anticipated to be necessary to execute the message for transmission on the network 31 and a deadline period (DP) being in this case the portion of the completion timing constraint t 1 allocated to the particular communication card 28 for transmission of the message 91 .
- the scheduling data 100 also includes a task identification (TID) identifying the particular message 91 to an application program 34 so that the high level requirements of the application program 34, imputed to the message 91 as will be described, may be determined from the application list 36 described above, and so that the resources and bandwidths allocated to the application program and its portion, held in the resource list 44, can be accessed by the communication card 28 and the scheduler 94.
- the scheduling data 100 may be attached by the operating system 32 and in the simplest case is derived from data entered by the control system programmer.
- the execution period after entry may be tracked by the operating system during run-time and modified based on that tracking to provide for accurate estimations of the execution period over time.
- the scheduling data 100 and the message data 99 are provided to the scheduler 94 .
- the scheduler 94 notes the arrival time based on a system clock (not shown) and calculates a LATEST STARTING TIME for the message (LST) as equal to a deadline time minus the execution period.
- the deadline time is calculated as the message arrival time plus the deadline period provided in the message.
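The two calculations above combine into a single formula, LST = arrival time + deadline period - execution period, as the short sketch below illustrates (the function and parameter names are ours, not the disclosure's):

```python
def latest_starting_time(arrival_time, deadline_period, execution_period):
    """LATEST STARTING TIME: the deadline time (arrival time plus the
    deadline period DP) minus the execution period EP."""
    deadline_time = arrival_time + deadline_period
    return deadline_time - execution_period

# A message arriving at t = 100 with a 9 ms deadline period and a
# 2 ms execution period must start transmission no later than t = 107.
print(latest_starting_time(100.0, 9.0, 2.0))  # 107.0
```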
- arrival of the message at the communication card 28 is indicated generally at process block 101 and is represented generally as a task, reflecting the fact that the same scheduling system may be used for other than messages as will be described below.
- decision block 102 determines whether the bandwidth limits for the task have been violated.
- the determination of bandwidth limits at block 102 considers, for example, the inter-arrival period t 2 for the messages 91 .
- a message 91 will not be scheduled for transmission until the specified inter-arrival period t 2 expires for the previous transmission of the message 91 .
- the expiration time of the inter-arrival period t 2 is stored in the history memory 98 identified to the TID of the message. This ensures that all guarantees for message execution can be honored.
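The bandwidth check of decision block 102 can be sketched as follows, assuming a simple dictionary standing in for the history memory 98 (all names are illustrative):

```python
# history memory 98: maps a task identification (TID) to the time at
# which the inter-arrival period t2 of its previous transmission expires.
history = {}

def bandwidth_ok(tid, inter_arrival_period, now):
    """Decision block 102: refuse to schedule a message until the
    inter-arrival period t2 of its previous transmission has expired,
    so that guarantees for already-enrolled messages can be honored."""
    if now < history.get(tid, 0.0):
        return False                      # previous t2 not yet expired
    history[tid] = now + inter_arrival_period
    return True

print(bandwidth_ok("task_A", 5.0, now=10.0))  # True  (next slot at t = 15)
print(bandwidth_ok("task_A", 5.0, now=12.0))  # False (arrived too soon)
print(bandwidth_ok("task_A", 5.0, now=15.0))  # True
```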
- the bandwidth limits may include processor time or memory allocations.
- the message is placed in the queue 90 according to its user priority 96 .
- high priority messages always precede low priority messages in the queue 90 .
- the locking out of low priority messages is prevented by the fact that the high priority messages must have guaranteed bandwidths and that a portion of the total bandwidth for each resource (the communication card 28, for example) is reserved for low priority tasks.
- at decision block 106 it is determined whether there is a priority tie, meaning that there is another message 91 in the queue 90 with the same priority as the current message 91. If not, the current message 91 is enrolled in the queue 90 and its position need not be recalculated, although its relative location in the queue 90 may change as additional messages are enrolled.
- if there is a priority tie, the scheduler 94 proceeds to process block 108 and the messages with identical priorities are examined to determine which has the earliest LATEST STARTING TIME.
- the LATEST STARTING TIME, as described above, is an absolute time value indicating when the task must be started. Because it need only be computed once, it does not cause unbounded numbers of context switches. The current message is placed in order among the messages of similar priority according to the LATEST STARTING TIME, with the earliest LATEST STARTING TIME first.
- if at succeeding process block 110 there is no tie between the LATEST STARTING TIMES, then the enrollment process is complete. Otherwise, the scheduler 94 proceeds to process block 112 and the messages are examined to determine their deadline periods DP as contained in the scheduling data 100. A task with a shorter deadline period is accorded the higher priority in the queue 90 on the rationale that shorter deadline periods indicate relative urgency.
- as each message 91 rises to the top of the queue 90 for transmission, its LATEST STARTING TIME is examined to see if it has been satisfied. Failure of the task to execute in a timely fashion may thus be readily determined and reported.
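The three-level ordering described above (user priority 96 first, then earliest LATEST STARTING TIME, then shorter deadline period DP) can be captured in a single sort key computed once per message, as the text requires. A hypothetical sketch; the field names are ours:

```python
import bisect

def sort_key(msg):
    """Queue 90 ordering: higher user priority first, then earliest
    LATEST STARTING TIME, then the shorter deadline period on a full tie."""
    return (-msg["priority"], msg["lst"], msg["dp"])

def enroll(queue, msg):
    # Insert while keeping the queue sorted; the key is computed once per
    # message, so enrollment never reshuffles already-queued messages.
    keys = [sort_key(m) for m in queue]
    queue.insert(bisect.bisect_right(keys, sort_key(msg)), msg)

queue = []
enroll(queue, {"tid": "A", "priority": 1, "lst": 50, "dp": 9})
enroll(queue, {"tid": "B", "priority": 2, "lst": 60, "dp": 9})  # higher priority
enroll(queue, {"tid": "C", "priority": 2, "lst": 60, "dp": 4})  # same LST, tighter DP
print([m["tid"] for m in queue])  # ['C', 'B', 'A']
```

Because Python compares tuples lexicographically, the lower-level criteria are consulted only on a tie at the level above, mirroring decision blocks 106 and 110.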
- each processor 26 may be associated with a task queue 119 being substantially identical to the message queue 90 except that each slot in the task queue 119 may represent a particular bandwidth or time slice of processor usage. In this way, enrolling a task in the task list not only determines the order of execution but allocates a particular amount of processor resources to that task.
- New tasks are received again by a scheduler 94 retaining a history of the execution of the task according to task identification (TID) in memory 98 and enrolling the tasks in one of the time slots of the task queue 119 to be forwarded to the processor 26 at the appropriate moment.
- the tasks include similar task scheduling data as shown in FIG. 8 a but need not include message data 99 and may rely on the TID to identify the task implicitly, without the need for copying the task into a message for actual transmission.
- the operation of the scheduler 94, as with the case of messages above, allocates to the task only the number of time slots in the task queue 119 that was reserved in its bandwidth allocation in the resource list 44. In this way, it can be assured that time guarantees may be enforced by the operating system.
- interrupts normally act directly on the processor 26 to cause the processor 26 to interrupt execution of a current task and to jump to an interrupt subroutine and execute that subroutine to completion before returning to the task that was interrupted.
- the interrupt process involves changing the value of the program counter to the interrupt vector and saving the necessary stack and registers to allow resumption of the interrupt routine upon completion.
- interrupt signals may be masked by software instructions such as may be utilized by the operating system in realizing the mechanism to be described now.
- a similar problem to that described above, of lower priority messages blocking the execution of higher priority messages in the message queue 90 may occur with interrupts.
- a system may be executing a time critical user task when a low priority interrupt, such as that which may occur upon receipt of low priority messages, occurs. Since interrupts are serviced implicitly at a high priority level, the interrupt effects a priority inversion, with the high priority task waiting for the low priority task. If many interrupts occur, the high priority task may miss its time guarantee.
- circuitry can be employed that receives interrupts and, upon receiving an interrupt, determines whether responding to the current interrupt would delay the execution of other tasks, particularly non-interrupt tasks, in a manner that would be excessive in terms of delaying the execution of the other tasks beyond a predetermined time.
- Various measures and techniques can be utilized to determine whether responding to the current interrupt would excessively delay the execution of other tasks.
- the circuitry can determine whether the number of interrupts that have been processed recently, or are in queue to be processed (e.g., an interrupt that was just received, interrupts that have been received since a particular time, or interrupts that have been received but have not yet been processed), exceeds a certain maximum number. That maximum number can be, but need not be, associated with a particular period of time. For example, the maximum number can represent a maximum number of interrupts that can be performed within a given amount of time.
- alternatively, the determination can be based upon a particular characteristic of the current interrupt, such as a priority characteristic.
- the task generator 120 which receives the interrupt generates a proxy task forwarded to the scheduler 94 .
- the proxy task assumes the scheduling data 100 of the message causing the interrupt and is subject to the same mixed processing as the tasks described above via the scheduler 94 .
- the proxy task may preempt the current task or might wait its turn. This procedure guarantees deterministic packet reception without affecting tasks on the receiving node adversely.
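A minimal sketch of this proxy-task generation, assuming the scheduling data 100 fields described earlier (all names are illustrative, not from the disclosure):

```python
def make_proxy_task(message):
    """Task generator 120: wrap an incoming message's interrupt in a
    proxy task that inherits the message's scheduling data 100, so the
    scheduler 94 treats it like any other task; it may preempt the
    current task or wait its turn in the queue."""
    return {
        "tid": message["tid"],
        "priority": message["priority"],
        # LATEST STARTING TIME inherited from the scheduling data:
        "lst": message["arrival"] + message["dp"] - message["ep"],
        "payload": message["data"],
    }

msg = {"tid": "recv_C", "priority": 2, "arrival": 100.0,
       "dp": 3.0, "ep": 0.5, "data": b"input C"}
task = make_proxy_task(msg)
print(task["lst"])  # 102.5
```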
- interrupts 118 from general sources such as communication ports and other external devices are received by an interrupt manager 122 prior to invoking the interrupt hardware on the processor 26 .
- One exception to this is the timer interrupt 118′ which provides a regular timer “tick” for the system clock which, as described above, is used by the scheduler 94.
- the interrupt manager 122 provides a masking line 124 to an interrupt storage register 123, the masking line allowing the interrupt manager 122 to mask or block other interrupts (while storing them for later acceptance), and communicates with an interrupt window timer 126 which is periodically reset by a clock 127.
- the interrupt manager 122, its masking line 124, the interrupt storage register 123, the interrupt window timer 126 and the clock 127 are realized by the operating system 32 but, as will be understood in the art, may also be implemented by discrete circuitry such as an application specific integrated circuit (ASIC).
- the interrupt manager 122 operates so that upon the occurrence of an interrupt as indicated by process block 129 , all further interrupts are masked as indicated by process block 128 .
- the interrupt window timer 126 is then checked to see if a pre-allocated window of time for processing interrupts (the interrupt window) has been exhausted.
- the interrupt window is a percentage of processing time or bandwidth of processor 26 reserved for interrupts and its exact value will depend on a number of variables such as processor speed, the number of external interrupts expected and how long interrupts take to be serviced and is selected by the control system programmer. In the allocation of processor resources described above, the interrupt period is subtracted out prior to allocation to the various application programs.
- the interrupt window timer 126 is reset to its full value on a periodic basis by the clock 127 so as to implement the appropriate percentage of processing time.
- the interrupt window timer 126 is checked to see if the amount of remaining interrupt window is sufficient to allow processing of the current interrupt based on its expected execution period.
- the execution periods may be entered by the control system programmer and keyed to the interrupt type and number. If, as determined by decision block 132, sufficient time remains in the interrupt window, the execution period is subtracted from the interrupt window and the interrupt manager 122 proceeds to process block 134.
- the interrupts 118 are re-enabled via masking line 124 and at process block 136 , the current interrupt is processed.
- nested interrupts may occur which may also be subject to the processing described with respect to process block 129 . If at decision block 132 , there is inadequate time left in the interrupt window, then the interrupt manager 122 proceeds to decision block 138 where it remains until the interrupt window is reset by the clock 127 . At that time, process blocks 134 and 136 may be executed. As mentioned, the interrupt window is subtracted from the bandwidth of the processor 26 that may be allocated to user tasks and therefore the allocation of bandwidth for guaranteeing the execution of user tasks is done under the assumption that the full interrupt window will be used by interrupts taking the highest priority. In this way, interrupts may be executed within the interrupt window without affecting guarantees for task execution.
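The window accounting of process blocks 128 through 138 can be summarized in a short sketch; the class below is a hypothetical stand-in for the interrupt manager 122, the interrupt window timer 126 and the clock 127:

```python
class InterruptWindowManager:
    """Interrupts are admitted only while a pre-allocated budget of
    processor time (the interrupt window) remains; the budget is
    refilled periodically, implementing a fixed percentage of
    processing time reserved for interrupts."""

    def __init__(self, window):
        self.window = window          # full interrupt window per period
        self.remaining = window

    def reset(self):
        """Called periodically by the clock: refill the window."""
        self.remaining = self.window

    def admit(self, execution_period):
        """Decision block 132: process the interrupt only if its
        expected execution period fits in the remaining window;
        otherwise defer it (decision block 138) until the reset."""
        if execution_period <= self.remaining:
            self.remaining -= execution_period
            return True               # proceed to process blocks 134/136
        return False                  # wait for the window to be reset

mgr = InterruptWindowManager(window=2.0)   # e.g. 2 ms of every period
print(mgr.admit(1.5))  # True  (0.5 remains)
print(mgr.admit(1.0))  # False (deferred until reset)
mgr.reset()
print(mgr.admit(1.0))  # True
```

Because user-task bandwidth is allocated with the full window already subtracted, interrupts admitted this way cannot invalidate task-execution guarantees.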
- while in the embodiment above a determination is made whether processing of the current interrupt can be completed within a time window, in other embodiments a decision as to whether to process the current interrupt can be based upon whether processing of the current interrupt will satisfy other time constraints. For example, in one embodiment, a current interrupt would be processed so long as processing could begin within a set time window. In another embodiment, a current interrupt would be processed so long as processing of the current interrupt did not result in the violation of one or more completion timing constraints or other high-level or low-level requirements.
Abstract
A distributed control system and methods of operating such a control system are disclosed. In one embodiment, the distributed control system is operated in a manner in which interrupts are at least temporarily inhibited from being processed to avoid excessive delays in the processing of non-interrupt tasks. In another embodiment, the distributed control system is operated in a manner in which tasks are queued based upon relative timing constraints that they have been assigned. In a further embodiment, application programs that are executed on the distributed control system are operated in accordance with high-level and/or low-level requirements allocated to resources of the distributed control system.
Description
- The present Application is a continuation-in-part of U.S. patent application Ser. No. 09/408,696 filed on Sep. 30, 1999, and claims the benefit thereof.
- The present invention relates to industrial controllers for controlling industrial processes and equipment and more generally to an operating system suitable for a distributed industrial control system having multiple processing nodes spatially separated about a factory or the like.
- Industrial controllers are special purpose computers used for controlling industrial processes and manufacturing equipment. Under the direction of a stored control program the industrial controller examines a series of inputs reflecting the status of the controlled process and in response, adjusts a series of outputs controlling the industrial process. The inputs and outputs may be binary, that is on or off, or analog providing a value within a continuous range of values.
- Centralized industrial controllers may receive electrical inputs from the controlled process through remote input/output (I/O) modules communicating with the industrial controller over a high-speed communication network. Outputs generated by the industrial controller are likewise transmitted over the network to the I/O circuits to be communicated to the controlled equipment. The network provides a simplified means of communicating signals over a factory environment without multiple wires and the attendant cost of installation.
- Effective real-time control is provided by executing the control program repeatedly in high speed “scan” cycles. During each scan cycle each input is read and new outputs are computed. Together with the high-speed communications network, this ensures the response of the control program to changes in the inputs and its generation of outputs will be rapid. All information is dealt with centrally by a well-characterized processor and communicated over a known communication network to yield predictable delay times critical to deterministic control.
- The centralized industrial controller architecture, however, is not readily scalable, and with foreseeably large and complex control problems, unacceptable delays will result from the large amount of data that must be communicated to a central location and from the demands placed on the centralized processor. For this reason, it may be desirable to adopt a distributed control architecture in which multiple processors perform portions of the control program at spatially separate locations about the factory. By distributing the control, multiple processors may be brought to bear on the control problem reducing the burden on any individual processor and the amount of input and output data that must be transmitted.
- Unfortunately, the distributed control model is not as well characterized as far as guaranteeing performance as is required for real-time control. Delay in the execution of a portion of the control program by one processor can be fatal to successful real-time execution of the control program, and because the demand for individual processor resources fluctuates, the potential for an unexpected overloading of a single processor is possible. This is particularly true when a number of different and independent application programs are executed on the distributed controller and where the application programs compete for the same set of physical hardware resources.
- One weak point in the distributed control model is the introduction of communication delays in the execution of control tasks. These communication delays result from the need for different portions of the control program on different spatially separated hardware to communicate with each other. In a typical first-in/first-out (FIFO) communication system, where outbound messages are queued according to their time of arrival at the communication circuit, a message with a high priority, as may be necessary for the prompt completion of a control task, will always be transmitted later than an earlier arriving message of low priority. This can cause a form of unbounded priority inversion in which low priority tasks block high priority tasks, and this may upset the timing requirements of the real-time control program.
- A second problem with the distributed control model arises from operating distributed control devices in a multi-tasking mode to be shared among different program tasks. Such multi-tasking is necessary for efficient use of hardware resources. Present real-time multitasking operating systems allow the assignment of a priority to a given task. The user selects the necessary priority levels for each task to ensure that the timing constraints implicit in the real-time control process are realized.
- One problem with this approach is that it is necessarily conservative, because the priorities must be set before the fact, resulting in poor utilization of the scheduled resource. Further, because the timing constraints are not explicit but only indirectly reflected in the priorities set by the user, the operating system is unable to detect a failure to meet the timing constraints during run time.
- On the other hand, some dynamic scheduling systems (which adapt to the circumstances at run-time) exist, but they do not accept user-assigned priorities and thus provide no guarantee as to which tasks will fail under transient overload conditions. There are also scheduling systems for multi-tasking that allow for both setting of priorities and that have a dynamic component to allow for greater processor utilization, for example, those that use the Maximum Urgency First algorithm. See generally D. B. Stewart and P. K. Khosla, “Real Time Scheduling of Dynamically Reconfigurable Systems,” Proceedings of the 1991 International Conference on Systems Engineering, Dayton, August 1991, pp. 139-142.
- Unfortunately, such algorithms require rescheduling of all tasks as a new task becomes ready for execution. This results in greater overhead and produces a potential for an unbounded number of context switches (in which the scheduled resource switches its task), which can be detrimental to guaranteeing a completion time for a particular task as required by real-time control. Further, current scheduling systems do not provide any guarantee for execution time of the tasks and potentially allow low priority tasks to fail.
- In particular, the present invention relates to an interrupt manager for use in a distributed control system. The interrupt manager includes circuitry that (i) receives interrupt signals including a current interrupt, (ii) determines whether the current interrupt can be processed without delaying processing of a non-interrupt task beyond a predetermined time, and (iii) inhibits, at least temporarily, processing of the current interrupt when it is determined that the processing of the current interrupt would delay processing of the non-interrupt task beyond the predetermined time.
- The present invention additionally relates to a method of handling interrupts for use with a processor in a distributed control system. The method includes receiving a current interrupt signal, determining whether processing of the current interrupt signal would delay processing of a non-interrupt task beyond a predetermined time. The method further includes inhibiting, at least temporarily, the processing of the current interrupt signal when it is determined that the processing would delay the processing of the non-interrupt task beyond the predetermined time.
- The present invention also relates to a method of scheduling messages being transmitted on a network among spatially-distributed control components of a distributed control system. The method includes receiving a message, receiving a relative timing constraint concerning the message, where the relative timing constraint is indicative of an amount of time, and inserting the message into a queue at a location that is a function of the relative timing constraint.
- The present invention additionally relates to a method of coordinating a new control application program with other control application programs being performed on a distributed real-time operating system, where the distributed real-time operating system is for use with a control system having spatially separated control hardware resources. The method includes receiving the new control application program, and identifying control hardware resources from a resource list matching control hardware resources required by the new control application program. The method further includes allocating portions of a constraint associated with the new control application program to each identified control hardware resource, and determining whether the allocated portions of the constraint of the new control application program can be met while requirements of the other control application programs also are met.
- The present invention further relates to a method of operating an application program on a distributed control system having a plurality of hardware resources. The method includes receiving high-level requirements concerning the application program, and determining low-level requirements based upon the high-level requirements. The method further includes allocating at least one of the high-level requirements and the low-level requirements among at least some of the plurality of hardware resources, and operating the application program in accordance with the allocated requirements.
- FIG. 1 is a simplified diagram of a distributed control system employing two end nodes and an intervening communication node and showing the processor, memory and communication resources for each node;
- FIG. 2 is a block diagram showing the memory resources of each node of FIG. 1 as allocated to a distributed real-time operating system and different application programs;
- FIG. 3 is an expanded block diagram of the distributed operating system of FIG. 2 such as includes an application list listing application programs to be executed by the distributed control system, a topology showing the topology of the connection of the hardware resources of the nodes of FIG. 1, a resource list detailing the allocation of the hardware resources to the application program and the statistics of their use by each of the application programs, and the executable distributed real-time operating system code;
- FIG. 4 is a pictorial representation of a simplified application program attached to its high-level requirements;
- FIG. 5 is a flow chart of the operation of the distributed real-time operating system code of FIG. 3 showing steps upon accepting a new application program to determine the low-level hardware resource requirements and to seek commitments from those hardware resources for the requirements of the new application program;
- FIG. 6 is a detailed version of the flow chart of FIG. 5 showing the process of allocating low-level requirements to hardware resources;
- FIG. 7 is a block diagram detailing the step of the flow chart of FIG. 5 of responding to requests for commitment of hardware resources;
- FIG. 8a is a detailed view of the communication circuit of FIG. 1 showing a messaging queue together with a scheduler and a history table as may be implemented via an operating system and showing a message received by the communication circuit over the bus of FIG. 1;
- FIG. 8b is a figure similar to that of FIG. 8a showing the scheduler of FIG. 8a as implemented for multi-tasking of the processors of FIG. 1;
- FIG. 9 is a flow chart showing the steps of operation of enrolling the message of FIG. 8a or tasks of FIG. 8b into a queue;
- FIG. 10 is a schematic representation of the interrupt handling system provided by the operating system and processor of FIGS. 1 and 2; and
- FIG. 11 is a flow chart showing the steps of operation of the interrupt handling system of FIG. 10.
- Referring now to FIG. 1, a distributed control system 10 includes multiple nodes 12 a, 12 b and 12 c. Control end nodes 12 a and 12 c have signal lines 14 communicating between the end nodes and portions of a controlled process 16 a and 16 b. Controlled process portions 16 a and 16 b may communicate by a physical process flow or other paths of communication indicated generally as dotted line 18.
- In the present example, end node 12 a may receive signals A and B from process 16 a, and end node 12 c may receive signal C from process 16 b and provide as an output signal D to process 16 b as part of a generalized control strategy.
- End nodes 12 a and 12 c include interface circuitry connecting the signal lines 14 to internal buses 22 a and 22 c, respectively. The internal buses 22 a and 22 c may communicate with the hardware resources of memory 24 a, processor 26 a and communication card 28 a (for end node 12 a) and memory 24 c, processor 26 c, and network communication card 28 c (for end node 12 c). Communication card 28 a may communicate via network media 30 a to a communication card 28 b on node 12 b, which may communicate via internal bus 22 b to memory 24 b and processor 26 b and to a second network communication card 28 b′ connected to media 30 b, which in turn communicates with communication card 28 c.
- Generally, during operation of the distributed control system 10, application programs are allocated between the memories of the respective nodes communicating over the network links. For example, a portion of an application program held in memory 24 a would monitor signals A and B and send a message indicating both were true, or in this example send a message indicating the state of signals A and B, to node 12 c via a path through the intervening communication cards.
- A portion of the application program executed by processor 26 c residing in memory 24 c would detect the state of input C and compare it with the state of signals A and B in the received message to produce output signal D.
- The proper execution of this simple distributed application program requires not only the allocation of the application program portions to the necessary nodes but also the timely communication of messages over the intervening communication networks.
- Referring now to FIG. 2, for this latter purpose the distributed real-time operating system 32 of the present invention may be used, such as may be centrally located in one node 12 or, in keeping with the distributed nature of the control system, distributed among the nodes 12 a, 12 b and 12 c. In the latter case, copies or portions of the operating system 32 are stored in each of the memories 24 a, 24 b and 24 c. Preferably, the portion of the operating system 32 that provides a modeling of the hardware resources (as will be described) is located in the particular node whose resources it models; thus memory 24 a in node 12 a would be modeled by a portion of the operating system 32 held in memory 24 a.
- In addition to portions of the
operating system 32,memory various application programs 34 or portions of thoseapplication programs 34 as may be allocated to their respective nodes. - Referring now to FIG. 3, the
operating system 32 collectively provides a number of resources for ensuring proper operation of the distributedcontrol system 10. First, anapplication list 36 lists theapplication programs 34 that have been accepted for execution by the distributedcontrol system 10. Contained in theapplication list 36 areapplication identifiers 38 and high-level requirements 40 of the application programs as will be described below. - A
hardware resource list 44 provides (as depicted in a first column) a comprehensive listing of each hardware resource of the distributed control system 10, indicating a quantitative measure of that resource. For example, for the principal hardware resources of processors 26, networks 31 and memories 24, quantitative measurements may be provided in terms of millions of instructions per second (MIPS) for processors 26, numbers of megabytes for memories 24, and megabaud bandwidth for networks. While these are the principal hardware resources and their measures, it will be understood that other hardware resources may also be enrolled in this first column and other units of measure may be used. Generally, the measures are of “bandwidth”, a term encompassing both an indication of the amount of data and the frequency of occurrence of the data that must be processed.
- A second column of the hardware resource list 44 provides an allocation of the quantitative measure of the resource of a particular row to one or more application programs from the application list 36 identified by an application name. The application name may match the application identifier 38 of the application list 36, and the indicated allocated quantitative measure will typically be a portion of the quantitative measure of the first column.
- A third column of the hardware resource list 44 provides an actual usage of the hardware resource by the application program, as may be obtained by collecting statistics during running of the application programs. This measure will be statistical in nature and may be given in the units of the quantitative measure for the hardware resource provided in the first column.
- The operating system 32 also includes a topology map 42 indicating the connection of the nodes 12 through the network 31 and the location of the hardware resources of the hardware resource list 44 in that topology.
- Finally, the operating system also includes an
operating system code 48, such as may read the application list 36, the topology map 42, and the hardware resource list 44 to ensure proper operation of the distributed control system 10.
- Referring now to FIG. 4, each application program enrolled in the application list 36 is associated with high-level requirements 40 which will be used by the operating system code 48. Generally, these high-level requirements 40 will be determined by the programmer based on the programmer's knowledge of the controlled process 16 and its requirements.
- Thus, for the application described above with respect to FIG. 1, the application program 34 may include a single ladder rung 50 (shown in FIG. 4) providing for the logical ANDing of inputs A, B and C to produce an output D. The high-level requirements 40 would include hardware requirements for inputs and outputs A, B, C and D. The high-level requirements 40 may further include a “completion-timing constraint” t1 indicating a constraint on the execution time of the application program 34 needed for real-time control. Generally, the completion-timing constraint is a maximum period of time that may elapse between the occurrence of the last of inputs A, B and C to become logically true and the occurrence of the output signal D.
- The high-level requirements 40 may also include a message size, in this case the size of a message AB which must be sent over the network 31, or this may be deduced automatically through use of the topology map 42 and an implicit allocation of the hardware.
- Finally, the high-level requirements 40 include an “inter-arrival period” t2 reflecting an assumption about the statistics of the controlled process 16a in demanding execution of the application program 34. As a practical matter, the inter-arrival period t2 need be no greater than the scanning period of the input circuitry.
- Referring now to FIG. 5, the
operating system code 48 ensures proper operation of the distributed control system 10 by checking that each newly enrolled application program 34 will operate acceptably with the available hardware resources. Before any new application program 34 is added to the application list 36, the operating system code 48 intervenes to ensure that the necessary hardware resources are available and that time guarantees may be provided for execution of the application program.
- At process block 56, the operating system code 48 checks that the high-level requirements 40 have been identified for the application program. This identification may read a prepared file of the high-level requirements 40, may solicit the programmer to input the necessary information about the high-level requirements 40 through a menu structure or the like, or may be semiautomatic, involving a review of the application program 34 for its use of hardware resources and the like. As shown and described above with respect to FIG. 4, four high-level requirements are principally anticipated: hardware requirements, completion-timing constraints, message sizes, and the inter-arrival period. Other high-level requirements are possible, including the need for remote system services, the type or priority of the application, etc.
- Referring still to FIG. 5, as indicated by
process block 58, the high-level requirements 40 are used to determine low-level requirements 60. These low-level requirements may generally be “bandwidths” of particular hardware components, such as are listed in the first column of the hardware resource list 44. Generally, the low-level requirements will be a simple function of the high-level requirements 40 and the objective characteristics of the application program 34, the function depending on a priori knowledge about the hardware resource. For example, the amount of memory will be a function of the application program size, whereas the network bandwidth will be a function of the message size and the inter-arrival period t2, and the processor bandwidth will be a function of the application program size and the inter-arrival period t2, as will be evident to those of ordinary skill in the art. As will be seen, it is not necessary that the computation of the low-level requirements 60 be precise so long as it is a conservative estimate of the low-level resources required.
- The distinction between high-level requirements 40 and low-level requirements 60 is not fixed, and in fact some high-level requirements, for example message size, may be treated as low-level requirements deduced from the topology map 42, as has been described.
- Once the low-level requirements 60 have been determined, at process block 62, they are allocated to particular hardware elements distributed in the control system 10. Referring also to FIG. 6, the process block 62 includes sub-process block 63 where the low-level requirements abstracted at process block 58 are received. At process block 66, the end nodes are identified from the hardware requirements, portions of the application program 34 are allocated to those nodes, and an allocation of necessary processor bandwidth is made to these principal nodes. At process block 68, with reference to the topology map 42, the intermediary node 12b is identified together with the necessary network 31, and an allocation is made of network space based on message size and the inter-arrival period.
- The burden of storing and executing the application program is then divided at process block 70, allocating to each of the memories 24 storage of portions of the application program 34 and to the processors 26 execution of the application program 34, based on the size of the application program 34 and the inter-arrival period t2. Network cards 28 are allocated the necessary network bandwidth, and the path of the application program 34 can include intermediate nodes 12b serving as bridges and routers where no computation will take place. For this reason, instances or portions of the operating system code 48 will also be associated with each of these implicit hardware resources.
- There are a large number of different allocative mechanisms; however, in the preferred embodiment the application program is divided according to the nodes associated with its inputs per U.S. Pat. No. 5,896,289 to Struger, issued Apr. 20, 1999, entitled “Output Weighted Partitioning Method for a Control Program in a Highly Distributed Control System,” assigned to the same assignee as the present invention and hereby incorporated by reference.
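The derivation of low-level bandwidths from the high-level requirements described above can be sketched in code. The function name, the simple linear formulas, and the units below are illustrative assumptions for this sketch, not the patent's exact method; the patent requires only that the estimate be conservative.

```python
# Hypothetical sketch of deriving low-level "bandwidth" requirements (process
# block 58) from the high-level requirements 40.  The formulas follow the
# text: memory is a function of program size, network bandwidth a function of
# message size and inter-arrival period t2, and processor bandwidth a function
# of program size and t2.  Names and units are assumptions for illustration.

def low_level_requirements(program_size_kb, program_instructions,
                           message_size_bytes, inter_arrival_period_s):
    """Conservatively estimate the low-level resource needs of one program."""
    return {
        # Memory need tracks the application program size.
        "memory_kb": program_size_kb,
        # Network bandwidth: message bits sent per inter-arrival period.
        "network_bps": (message_size_bytes * 8) / inter_arrival_period_s,
        # Processor bandwidth: instructions executed per inter-arrival period.
        "processor_ips": program_instructions / inter_arrival_period_s,
    }

# Example: a 12 KB program of ~5,000 instructions sending a 64-byte message
# every 10 ms (inter-arrival period t2 = 0.01 s).
req = low_level_requirements(12, 5000, 64, 0.01)
print(req["network_bps"])    # 51200.0 bits/s
print(req["processor_ips"])  # 500000.0 instructions/s
```

Each result would then be compared against the unallocated remainder in the corresponding row of the hardware resource list before a commitment is granted.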
- During this allocation of the
application program 34, the completion-timing constraint t1 for the application program 34 is divided among the primary hardware to which the application program 34 is allocated and the implicit hardware used to provide for communication between the possibly separated portions of the application program 34. Thus, if the completion-timing constraint t1 is nine milliseconds, a guarantee of time to produce an output after the necessary input signals are received, then each node 12a-c will receive three milliseconds of that allocation as a time obligation.
- At
process block 72, a request for a commitment based on this allocation, including the allocated time obligations and other low-level requirements 60, is made to the portions of the operating system code 48 associated with each hardware element.
- At decision block 64, the portions of the operating system code 48 associated with each node 12a-c and their hardware resources review the resources requested of them in processor, network, and memory bandwidth, together with the allocated time obligations, and report back as to whether those commitments may be made while keeping within the allocated time obligations. If not, an error is reported at process block 66. Generally, it is contemplated that the code portions responsible for this determination will reside with the hardware resources which they allocate and thus may be provided with the necessary models of the hardware resources by the manufacturers.
- This commitment process is generally represented by decision block 64 and is shown in more detail in FIG. 7, having a first process block 74 where a commitment request is received designating particular hardware resources and required bandwidths. At process block 76, the portion of the operating system code 48 associated with the hardware element allocates the necessary hardware portion from the hardware resource list 44, possibly modeling it, as shown in process block 78, with the other allocated resources of the resource list representing previously enrolled application programs 34, to see if the allocation can be made. In the case of static resources such as memory, the allocation may simply be a checking of the hardware resource list 44 to see if sufficient memory is available. For dynamic resources such as the processors and the network, the modeling may determine whether scheduling may be performed such as will allow the necessary completion-timing constraints t1 given the inter-arrival period t2 of the particular application and of other applications.
- At the conclusion of the modeling and resource allocation, including any adjustments necessary from the modeling, at process block 80 a report is made back to the other components of the operating system code 48. If that report is that a commitment may be had for all hardware resources of the high-level requirements 40, then the program proceeds to process block 82 instead of process block 66 representing the error condition, as has been described.
- At process block 82, the master hardware resource list 44 is updated and the application program is enrolled in the application list 36 to run.
- During execution of the
application program 34, and as indicated by process block 84, statistics are collected on its actual bandwidth usage for the particular hardware resources to which it is assigned. These are stored in the third column of the hardware resource list 44 shown in FIG. 3 and in the block 45 associated with FIG. 5, and may be used to change the amount of allocation to particular application programs 34, as indicated by arrow 86, so as to improve hardware resource utilization.
- Referring now to FIG. 8a, the communication card 28 will typically include a message queue 90 into which messages 91 are placed prior to being transmitted via a receiver/transmitter 92 onto the network 31. A typical network queuing strategy of first-in-first-out (FIFO) will introduce a variable delay in the transmission of messages caused by the amount of message traffic at any given time. Of particular importance, messages which require completion on a timely basis, and which therefore have a high priority, may nevertheless be queued behind lower priority messages without time criticality. In such a queue 90, priority and time constraints are disregarded; therefore, even if ample network bandwidth is available and suitable priority is attached to messages 91 associated with control tasks, the completion-timing constraints t1 cannot be guaranteed.
- To overcome this limitation, the communication card 28 of the present invention includes a queue-level scheduler 94 which may receive messages 91 and place them in the queue 90 in a desired order of execution that is independent of the arrival time of the messages 91. The scheduler 94 receives the messages 91 and places them in the queue 90, and includes memory 98 holding a history of execution of messages identified to their tasks, as will be described below. Generally, the blocks of the queue 90, the scheduler 94 and the memory 98 are realized as a portion of the operating system 32; however, they may alternatively be realized as an application specific integrated circuit (ASIC), as will be understood in the art.
- Each message 91 associated with an application program for which a time constraint exists (a guaranteed task), to be transmitted by the communication card 28, will contain conventional message data 99, such as the substantive data of the message and the routing information necessary for transmission on the network 31. In addition, the message 91 will also include scheduling data 100, which may be physically attached to the message data 99 or associated with the message data 99 by the operating system 32.
- The
scheduling data 100 includes a user-assigned priority 96, generally indicating a high priority for messages associated with time-critical tasks. The priority 96 is taken from the priority of the application program 34 of which the message 91 forms a part and is determined, before the application program runs, based on the importance of its control task as determined by the user.
- The scheduling data 100 may also include an execution period (EP), indicating the length of time anticipated to be necessary to execute the message for transmission on the network 31, and a deadline period (DP), being in this case the portion of the completion-timing constraint t1 allocated to the particular communication card 28 for transmission of the message 91. The scheduling data 100 also includes a task identification (TID) identifying the particular message 91 to an application program 34, so that the high-level requirements of the application program 34, imputed to the message 91 as will be described, may be determined from the application list 36 described above, and so that the resources and bandwidths allocated to the application program and its portions, held in the resource list 44, can be accessed by the communication card 28 and the scheduler 94.
- The scheduling data 100 may be attached by the operating system 32 and in the simplest case is derived from data entered by the control system programmer. The execution period may be tracked by the operating system during run-time after entry and modified based on that tracking to provide for accurate estimates of the execution period over time.
- Upon arrival of a message at the communication card 28, the scheduling data 100 and the message data 99 are provided to the scheduler 94. The scheduler 94 notes the arrival time based on a system clock (not shown) and calculates a LATEST STARTING TIME (LST) for the message, equal to a deadline time minus the execution period. The deadline time is calculated as the message arrival time plus the deadline period provided in the message.
- Referring now to FIG. 9, arrival of the message at the
communication card 28 is indicated generally at process block 101 and is represented generally as a task, reflecting the fact that the same scheduling system may be used for other than messages, as will be described below.
- Following process block 101 is decision block 102, which determines whether the bandwidth limits for the task have been violated. The determination of bandwidth limits at block 102 considers, for example, the inter-arrival period t2 for the messages 91. A message 91 will not be scheduled for transmission until the specified inter-arrival period t2 has expired for the previous transmission of the message 91. The expiration time of the inter-arrival period t2 is stored in the history memory 98 identified to the TID of the message. This ensures that all guarantees for message execution can be honored. More generally, for a task other than a message, the bandwidth limits may include processor time or memory allocations.
- If at decision block 102 there is no remaining allocation of network bandwidth for the particular task and the task is guaranteed, it is not executed until the bandwidth again becomes available.
- At succeeding block 104, if the bandwidth limits have not been violated, the message is placed in the queue 90 according to its user priority 96. Thus, high priority messages always precede low priority messages in the queue 90. The locking out of low priority messages is prevented by the fact that the high priority messages must have guaranteed bandwidths, and a portion of the total bandwidth for each resource, the communication card 28 for example, is reserved for low priority tasks.
- At decision block 106, it is determined whether there is a priority tie, meaning that there is another message 91 in the queue 90 with the same priority as the current message 91. If not, the current message 91 is enrolled in the queue 90 and its position need not be recalculated, although its relative location in the queue 90 may change as additional messages are enrolled.
- If at decision block 106 there is a priority tie, the scheduler 94 proceeds to process block 108 and the messages with identical priorities are examined to determine which has the earliest LATEST STARTING TIME. The LATEST STARTING TIME, as described above, is an absolute time value indicating when the task must be started. Because the LATEST STARTING TIME need only be computed once, it does not cause unbounded numbers of context switches. The current message is placed in order among the messages of similar priority according to the LATEST STARTING TIME, with the earliest LATEST STARTING TIME first.
- If at succeeding process block 110 there is no tie between the LATEST STARTING TIMES, then the enrollment process is complete. Otherwise, the scheduler 94 proceeds to process block 112 and the messages are examined to determine their deadline periods DP as contained in the scheduling data 100. A task with a shorter deadline period is accorded the higher priority in the queue 90, on the rationale that shorter deadline periods indicate relative urgency.
- If at succeeding process block 114 there remains a tie according to the above criteria between messages 91, then at process block 116 the tie is broken according to the execution period, EP, of the messages 91. Here the rationale is that in the case of transient overload, executing the task with the shortest execution period will ensure execution of the greatest number of tasks.
- A system clock with sufficient resolution will prevent a tie beyond this point by ensuring that the LATEST STARTING TIMES are highly distinct.
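The cascade of tie-breakers just described, user priority first, then earliest LATEST STARTING TIME, then shortest deadline period, then shortest execution period, amounts to ordering the queue by a composite sort key. The sketch below assumes illustrative field names (not the patent's identifiers) and a plain sorted list in place of the hardware queue 90:

```python
# Sketch of the queue-ordering rules of blocks 104-116: higher user priority
# first, then earliest LATEST STARTING TIME (LST), then shortest deadline
# period (DP), then shortest execution period (EP).  Field names are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Message:
    tid: str          # task identification (TID)
    priority: int     # user-assigned priority 96 (higher = more urgent)
    arrival: float    # arrival time from the system clock
    dp: float         # deadline period
    ep: float         # execution period

    @property
    def lst(self):
        # LATEST STARTING TIME = (arrival + deadline period) - execution period
        return (self.arrival + self.dp) - self.ep

def enqueue(queue, msg):
    """Insert msg so the queue stays sorted by the tie-break cascade."""
    queue.append(msg)
    # Negate priority so a plain ascending sort puts high priority first.
    queue.sort(key=lambda m: (-m.priority, m.lst, m.dp, m.ep))

q = []
enqueue(q, Message("A", priority=1, arrival=0.0, dp=9.0, ep=2.0))  # LST 7.0
enqueue(q, Message("B", priority=2, arrival=0.0, dp=9.0, ep=4.0))  # LST 5.0
enqueue(q, Message("C", priority=2, arrival=0.0, dp=6.0, ep=1.0))  # LST 5.0
print([m.tid for m in q])  # ['C', 'B', 'A']
```

Here B and C tie on priority and on LST, so the shorter deadline period of C breaks the tie, and the lower-priority A sorts last regardless of its timing data.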
- These steps of determining priority may be simplified by concatenating the
relevant scheduling data 100 into a single binary value of sufficient length. The user priority forms the most significant bits of this value and the execution period the least significant bits. This binary value may then be examined to place the messages (or tasks) in the queue 90.
- As each message 91 rises to the top of the queue 90 for transmission, its LATEST STARTING TIME is examined to see if it has been satisfied. Failure of the task to execute in a timely fashion may thus be readily determined and reported.
- As mentioned, the scheduling system used for the communication card 28 described above is equally applicable to scheduling other resources within the distributed operating system, for example the processors 26. Referring to FIG. 8b, each processor 26 may be associated with a task queue 119 substantially identical to the message queue 90, except that each slot in the task queue 119 may represent a particular bandwidth or time slice of processor usage. In this way, enrolling a task in the task queue not only determines the order of execution but also allocates a particular amount of processor resources to that task. New tasks are received again by a scheduler 94 retaining a history of the execution of the task according to task identification (TID) in memory 98 and enrolling the tasks in one of the time slots of the task queue 119 to be forwarded to the processor 26 at the appropriate moment. The tasks include scheduling data similar to that shown in FIG. 8a but need not include message data 99 and may rely on the TID to identify the task implicitly, without the need for copying the task into a message for actual transmission.
- Referring to FIG. 9, the scheduler 94, as in the case of messages above, allocates to a task only the number of time slots in the queue that was reserved in its bandwidth allocation in the resource list 44. In this way, it can be assured that time guarantees may be enforced by the operating system.
- As is understood in the art, interrupts normally act directly on the
processor 26 to cause the processor 26 to interrupt execution of a current task, jump to an interrupt subroutine, and execute that subroutine to completion before returning to the task that was interrupted. The interrupt process involves changing the value of the program counter to the interrupt vector and saving the necessary stack and registers to allow resumption of the interrupted task upon completion of the interrupt routine. Typically, interrupt signals may be masked by software instructions, such as may be utilized by the operating system in realizing the mechanism to be described now.
- Referring now to FIGS. 8a and 8b, a problem similar to that described above, of lower priority messages blocking the execution of higher priority messages in the message queue 90, may occur with interrupts. For example, a system may be executing a time-critical user task when a low priority interrupt, such as may occur upon receipt of a low priority message, occurs. Since interrupts are serviced implicitly at a high priority level, the interrupt effects a priority inversion, with the high priority task waiting for the low priority task. If many interrupts occur, the high priority task may miss its time guarantee.
- This priority-inversion problem can be solved in a number of ways. Generally speaking, circuitry can be employed that receives interrupts and, upon receiving an interrupt, determines whether responding to the current interrupt would delay the execution of other tasks, particularly non-interrupt tasks, beyond a predetermined time. Various measures and techniques can be utilized to make this determination. For example, the circuitry can determine whether the number of interrupts that have been processed recently, or are in queue to be processed (e.g., an interrupt that was just received, interrupts that have been received since a particular time, or interrupts that have been received but have not yet been processed), exceeds a certain maximum number. That maximum number can be, but need not be, associated with a particular period of time. For example, the maximum number can represent a maximum number of interrupts that can be performed within a given amount of time.
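One way the count-based measure above could be realized is as a sliding-window limiter: record when each interrupt is accepted and inhibit a new one once the count inside the window reaches the maximum. The class and parameter names below are illustrative assumptions, not the patent's implementation:

```python
# Minimal sketch of the count-based check: accept the current interrupt only
# if fewer than `max_interrupts` interrupts have been accepted within the last
# `window` seconds.  All names here are illustrative assumptions.
from collections import deque

class InterruptLimiter:
    def __init__(self, max_interrupts, window):
        self.max_interrupts = max_interrupts
        self.window = window
        self.accepted = deque()  # timestamps of recently accepted interrupts

    def admit(self, now):
        # Drop timestamps that have aged out of the window.
        while self.accepted and now - self.accepted[0] >= self.window:
            self.accepted.popleft()
        if len(self.accepted) < self.max_interrupts:
            self.accepted.append(now)
            return True      # process the current interrupt
        return False         # inhibit it, at least temporarily

lim = InterruptLimiter(max_interrupts=2, window=1.0)
print(lim.admit(0.0), lim.admit(0.1), lim.admit(0.2))  # third is inhibited
print(lim.admit(1.5))  # earlier timestamps aged out; accepted again
```

An inhibited interrupt need not be lost; as described below for FIG. 10, it can be stored and accepted once capacity returns.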
- Alternatively, a determination can be made whether the current interrupt satisfies a particular characteristic, such as a priority characteristic. For example, referring to FIG. 8a, upon a receipt of a message from
network 31, an interrupt 118 may be generated and passed to a task generator 120 shown in FIG. 8b. The task generator 120, which receives the interrupt, generates a proxy task forwarded to the scheduler 94. The proxy task assumes the scheduling data 100 of the message causing the interrupt and is subject to the same processing as the tasks described above via the scheduler 94. Depending on its priority and other scheduling data 100, the proxy task may preempt the current task or may wait its turn. This procedure guarantees deterministic packet reception without adversely affecting tasks on the receiving node.
- Alternatively, a determination can be made whether processing of the current interrupt will be accomplished in a manner satisfying a particular time constraint. For example, referring now to FIG. 10, in an alternate form of interrupt management, interrupts 118 from general sources such as communication ports and other external devices are received by an interrupt manager 122 prior to invoking the interrupt hardware on the processor 26. One exception is the timer interrupt 118′, which provides a regular timer “tick” for the system clock which, as described above, is used by the scheduler 94. The interrupt manager 122 provides a masking line 124 to an interrupt storage register 123, the masking line allowing the interrupt manager 122 to mask or block other interrupts (while storing them for later acceptance), and communicates with an interrupt window timer 126 which is periodically reset by a clock 127. Generally, the interrupt manager 122, its masking line 124, the interrupt storage register 123, and the interrupt window timer 126 are realized by the operating system 32, but as will be understood in the art, they may also be implemented by discrete circuitry such as an application specific integrated circuit (ASIC).
- Referring to FIG. 11, the interrupt manager 122 operates so that upon the occurrence of an interrupt, as indicated by process block 129, all further interrupts are masked, as indicated by process block 128. The interrupt window timer 126 is then checked to see if a pre-allocated window of time for processing interrupts (the interrupt window) has been exhausted. The interrupt window is a percentage of the processing time or bandwidth of processor 26 reserved for interrupts; its exact value will depend on a number of variables, such as processor speed, the number of external interrupts expected, and how long interrupts take to be serviced, and is selected by the control system programmer. In the allocation of processor resources described above, the interrupt window is subtracted out prior to allocation to the various application programs. The interrupt window timer 126 is reset to its full value on a periodic basis by the clock 127 so as to implement the appropriate percentage of processing time.
- At process block 130, after the masking of the interrupts at process block 128, the interrupt window timer 126 is checked to see if the amount of remaining interrupt window is sufficient to allow processing of the current interrupt based on its expected execution period. The execution periods may be entered by the control system programmer and keyed to the interrupt type and number. If sufficient time remains in the interrupt window, as determined at decision block 132, the execution period is subtracted from the interrupt window and the interrupt manager 122 proceeds to process block 134. At process block 134, the interrupts 118 are re-enabled via masking line 124, and at process block 136 the current interrupt is processed. By re-enabling the interrupts at process block 134, nested interrupts may occur, which are also subject to the processing described with respect to process block 129. If at decision block 132 there is inadequate time left in the interrupt window, the interrupt manager 122 proceeds to decision block 138, where it remains until the interrupt window is reset by the clock 127. At that time, process blocks 134 and 136 may be executed. As mentioned, the interrupt window is subtracted from the bandwidth of the processor 26 that may be allocated to user tasks, and therefore the allocation of bandwidth for guaranteeing the execution of user tasks is done under the assumption that the full interrupt window will be used by interrupts taking the highest priority. In this way, interrupts may be executed within the interrupt window without affecting guarantees for task execution.
- Although, in the above-described embodiment, a determination is made whether processing of the current interrupt can be completed within a time window, in other embodiments a decision as to whether to process the current interrupt can be based upon whether processing of the current interrupt will satisfy other time constraints.
For example, in one embodiment, a current interrupt would be processed so long as processing could begin within a set time window. In another embodiment, a current interrupt would be processed so long as processing of the current interrupt did not result in the violation of one or more completion timing constraints or other high-level or low-level requirements.
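The window check of FIG. 11 can be sketched as a small time budget that each accepted interrupt draws down by its expected execution period and that the periodic clock refills. The class, method names, and millisecond units below are illustrative assumptions, not the patent's implementation:

```python
# Sketch of the FIG. 11 interrupt-window logic: a budget of processor time
# reserved for interrupts is drawn down by each accepted interrupt's expected
# execution period (EP) and restored when the clock 127 resets the window.
# Names and units (milliseconds) are illustrative assumptions.

class InterruptWindow:
    def __init__(self, window_ms):
        self.window_ms = window_ms      # full interrupt window per period
        self.remaining_ms = window_ms   # what is left in the current period

    def try_accept(self, ep_ms):
        """Blocks 130/132: admit the interrupt only if the remaining
        window covers its expected execution period."""
        if ep_ms <= self.remaining_ms:
            self.remaining_ms -= ep_ms
            return True    # re-enable interrupts and process (blocks 134, 136)
        return False       # wait at decision block 138 for the window reset

    def reset(self):
        """Periodic reset by clock 127 restores the full window."""
        self.remaining_ms = self.window_ms

w = InterruptWindow(window_ms=2.0)
print(w.try_accept(1.5))  # fits within the window
print(w.try_accept(1.0))  # only 0.5 ms left: deferred
w.reset()
print(w.try_accept(1.0))  # accepted after the reset
```

Because user-task guarantees are computed assuming the full window is consumed by interrupts, deferring an interrupt at the budget boundary never invalidates a previously granted completion-timing commitment.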
- The above description has been that of a preferred embodiment of the present invention. It will occur to those that practice the art that many modifications may be made without departing from the spirit and scope of the invention. In order to apprise the public of the various embodiments that may fall within the scope of the invention, the following claims are made.
Claims (30)
1. An interrupt manager for use in a distributed control system, the interrupt manager comprising:
circuitry that:
(i) receives interrupt signals including a current interrupt;
(ii) determines whether the current interrupt can be processed without delaying processing of a non-interrupt task beyond a predetermined time; and
(iii) inhibits, at least temporarily, processing of the current interrupt when it is determined that the processing of the current interrupt would delay the processing of the non-interrupt task beyond the predetermined time.
2. The interrupt manager of claim 1 , wherein the circuitry determines whether the current interrupt can be processed without delaying the processing of the non-interrupt task beyond the predetermined time by determining whether a total number of interrupts including at least one of the current interrupt, a recently-performed interrupt and a pending interrupt would exceed a maximum number of interrupts.
3. The interrupt manager of claim 2 , wherein the maximum number is associated with a time interval and represents a maximum number of interrupts that can be performed within the time interval.
4. The interrupt manager of claim 3 , wherein the total number of interrupts includes, in addition to the current interrupt, any interrupts that have been received and not yet processed.
5. The interrupt manager of claim 3 , wherein the total number of interrupts includes, in addition to the current interrupt, all interrupts that have been received since a first time.
6. The interrupt manager of claim 1 , wherein the circuitry determines whether the current interrupt can be processed without delaying the processing of the non-interrupt task beyond the predetermined time by determining whether the processing of the current interrupt could be completed within an interrupt window.
7. The interrupt manager of claim 6 , wherein the interrupt window is refreshed upon expirations of window periods.
8. The interrupt manager of claim 1 , wherein the circuitry determines whether the current interrupt can be processed without delaying the processing of the non-interrupt task beyond the predetermined time by determining whether the processing of the current interrupt could begin within an interrupt window.
9. The interrupt manager of claim 1 , wherein the circuitry inhibits the processing of the current interrupt by masking the current interrupt.
10. The interrupt manager of claim 1 , wherein the circuitry determines whether the current interrupt can be processed without delaying the processing of the non-interrupt task beyond the predetermined time by comparing a first priority associated with the current interrupt with a second priority of the non-interrupt task.
11. The interrupt manager of claim 10 , wherein the first priority is that of a proxy task generated in response to the receiving of the current interrupt based upon scheduling data of a message causing the current interrupt.
12. The interrupt manager of claim 1 , wherein the circuitry temporarily inhibits the processing of the current interrupt by placing information relating to the current interrupt in a later position in a queue.
13. The interrupt manager of claim 1 , wherein the circuitry determines whether the current interrupt can be processed without delaying the processing of the non-interrupt task beyond the predetermined time by determining whether at least one completion timing constraint associated with the non-interrupt task would be violated if the processing of the current interrupt occurred.
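The admission test recited in claims 2-5 — counting the current, recently-performed, and pending interrupts against a maximum permitted within a time interval — can be sketched as follows. This is an illustrative Python sketch; the class and method names and the sliding-window bookkeeping are assumptions, not part of the claims:

```python
from collections import deque

class InterruptBudget:
    """Admit an interrupt only if the total number serviced within the
    current time interval would not exceed a fixed maximum (claims 2-5)."""

    def __init__(self, max_interrupts, window_seconds):
        self.max_interrupts = max_interrupts
        self.window = window_seconds
        self.history = deque()  # timestamps of recently serviced interrupts

    def admit(self, now):
        # Forget interrupts that have fallen out of the time interval.
        while self.history and now - self.history[0] >= self.window:
            self.history.popleft()
        # The "total number of interrupts" includes the current one.
        if len(self.history) + 1 > self.max_interrupts:
            return False  # inhibit: servicing it would starve non-interrupt tasks
        self.history.append(now)
        return True
```

With `InterruptBudget(3, 1.0)`, three interrupts arriving within one second are admitted, a fourth in the same interval is inhibited, and admission resumes once the interval has rolled past the earlier arrivals.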
14. A method of handling interrupts for use with a processor in a distributed control system, the method comprising:
receiving a current interrupt signal;
determining whether processing of the current interrupt signal would delay processing of a non-interrupt task beyond a predetermined time; and
inhibiting, at least temporarily, the processing of the current interrupt signal when it is determined that the processing would delay the processing of the non-interrupt task beyond the predetermined time.
15. The method of claim 14 , further comprising delaying the processing of the current interrupt signal to a later time if it is determined that the processing would delay the processing of the non-interrupt task beyond the predetermined time.
16. The method of claim 14 , wherein the determining includes:
comparing a total number of interrupt signals including at least the current interrupt signal with a maximum number of interrupt signals.
17. The method of claim 14 , wherein the determining includes determining whether at least one of: the processing of the current interrupt signal can be begun within a current time window; and the processing of the current interrupt signal can be completed within the current time window.
18. The method of claim 14 , wherein the inhibiting occurs by at least one of: (i) masking the current interrupt signal; and (ii) placing a task related to the current interrupt signal in a queue for later processing.
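The decision recited in claims 17-18 — servicing an interrupt only if its processing fits within the current time window, and otherwise deferring it by queueing a related task — can be sketched as follows. The `Irq` descriptor, the cost field, and the deferral queue are illustrative assumptions:

```python
import heapq
from collections import namedtuple

# Hypothetical interrupt descriptor: priority, name, and the worst-case
# time its handler needs to run.
Irq = namedtuple("Irq", ["priority", "name", "handler_cost"])

def handle_interrupt(irq, now, window_end, deferred_queue):
    """Sketch of claims 14, 17, and 18: process the interrupt now only if
    its handler can complete inside the current time window (claim 17);
    otherwise inhibit it temporarily by queueing a related task for
    later processing (claim 18)."""
    if now + irq.handler_cost <= window_end:
        return "run_now"
    heapq.heappush(deferred_queue, (irq.priority, irq.name))
    return "deferred"
```

A short handler arriving near the end of the window is still serviced immediately; a long one is pushed onto the priority-ordered queue for the next window.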
19. A method of scheduling messages being transmitted on a network among spatially-distributed control components of a distributed control system, the method comprising:
receiving a message;
receiving a relative timing constraint concerning the message, wherein the relative timing constraint is indicative of an amount of time; and
inserting the message into a queue at a location that is a function of the relative timing constraint.
20. The method of claim 19 , wherein the relative timing constraint is at least one of a completion timing constraint, a deadline period, and an execution period.
21. The method of claim 19 , wherein the location is also a function of a priority associated with the message and of an absolute timing constraint concerning the message.
22. The method of claim 21 , wherein the absolute timing constraint is a particular time.
23. The method of claim 19 , wherein the inserting of the message into the queue is governed by a message scheduler implemented by a processor executing a portion of a distributed operating system providing respective portions of an overall completion timing constraint of a communication circuit to each of a plurality of application programs, the respective portions setting respective deadlines for the application programs.
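The queue insertion recited in claims 19-22 — placing a message at a location that is a function of its relative timing constraint (and optionally a priority) — can be sketched with an earliest-deadline-first ordering. Converting the relative constraint to an absolute deadline is one natural reading; the function names here are illustrative:

```python
import heapq

def enqueue_message(queue, message, now, relative_deadline, priority=0):
    """Sketch of claims 19-22: the relative timing constraint (an amount
    of time) is converted to an absolute deadline, and the heap keeps
    the queue in earliest-deadline-first order, with priority as a
    tiebreaker."""
    absolute_deadline = now + relative_deadline
    heapq.heappush(queue, (absolute_deadline, priority, message))

def next_message(queue):
    # Pop the message whose deadline is nearest.
    return heapq.heappop(queue)[2]
```

Messages enqueued with relative deadlines of 5, 1, and 3 time units are then transmitted in the order of their deadlines rather than their arrival order.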
24. A method of coordinating a new control application program with other control application programs being performed on a distributed real-time operating system, wherein the distributed real-time operating system is for use with a control system having spatially separated control hardware resources, the method comprising:
(a) receiving the new control application program;
(b) identifying control hardware resources from a resource list matching control hardware resources required by the new control application program;
(c) allocating portions of a constraint associated with the new control application program to each identified control hardware resource; and
(d) determining whether the allocated portions of the constraint of the new control application program can be met while requirements of the other control application programs also are met.
25. The method of claim 24 , wherein the constraint is a completion timing constraint.
26. The method of claim 24 , further comprising:
collecting statistics regarding a usage of the control hardware resources as the new control application program and other control application programs are being performed; and
optimizing the usage of the control hardware resources based at least in part upon the collected statistics.
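Steps (c) and (d) of claim 24 — allocating portions of a completion timing constraint to each identified hardware resource and checking that those portions can be met alongside other programs — can be sketched as an admission test. The even split, the capacity model, and all names are illustrative assumptions; the claims do not prescribe a particular allocation policy:

```python
def admit_program(completion_constraint, required, capacity, committed):
    """Sketch of claims 24-25: split the new program's completion timing
    constraint evenly among the hardware resources it requires, then
    admit it only if every resource can absorb its portion on top of
    the time already committed to other control application programs."""
    portion = completion_constraint / len(required)
    # Step (d): feasibility check against existing commitments.
    if any(committed.get(r, 0.0) + portion > capacity[r] for r in required):
        return None  # constraint cannot be met; reject or renegotiate
    # Step (c): record the allocated portions.
    for r in required:
        committed[r] = committed.get(r, 0.0) + portion
    return {r: portion for r in required}
```

A first program needing two resources with 10 units of capacity each is admitted with a 6-unit portion per resource; a second identical program is rejected because 6 + 6 exceeds either resource's capacity.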
27. A method of operating an application program on a distributed control system having a plurality of hardware resources, the method comprising:
receiving high-level requirements concerning the application program;
determining low-level requirements based upon the high-level requirements;
allocating at least one of the high-level requirements and the low-level requirements among at least some of the plurality of hardware resources; and
operating the application program in accordance with the allocated requirements.
28. The method of claim 27 ,
wherein the high-level requirements include at least one of a hardware requirement, a completion-timing constraint, a message size, an inter-arrival period, a need for remote system services, and a type of priority, and
wherein the low-level requirements include at least one of an amount of memory, a network bandwidth, and a processor bandwidth.
29. The method of claim 27 , wherein the allocating of the low-level requirements includes allocating the low-level requirements to both a primary hardware resource and an implicit hardware resource.
30. The method of claim 27 , further comprising:
determining whether the allocated requirements are consistent with other allocated requirements associated with other application programs, prior to operating the application program.
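The translation in claims 27-28 — deriving low-level requirements (memory, network bandwidth, processor bandwidth) from high-level requirements (message size, inter-arrival period, and so on) — can be sketched as follows. The conversion formulas are illustrative assumptions, not taken from the claims:

```python
def derive_low_level(high):
    """Sketch of claims 27-28: map high-level application requirements to
    low-level resource requirements. Assumed formulas: bandwidth from
    message size over inter-arrival period, memory for double-buffering,
    and processor bandwidth as a utilization fraction."""
    bytes_per_second = high["message_size"] / high["inter_arrival_period"]
    return {
        "network_bandwidth": bytes_per_second,      # bytes/s on the wire
        "memory": 2 * high["message_size"],         # double-buffered messages
        "processor_bandwidth": high["cpu_time"] / high["inter_arrival_period"],
    }
```

For a 1000-byte message every 0.5 s needing 0.1 s of CPU time, this yields 2000 bytes/s of network bandwidth, 2000 bytes of memory, and 20% processor utilization, which would then be allocated among the hardware resources as in claim 27 and checked for consistency as in claim 30.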
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/729,478 US20040100982A1 (en) | 1999-09-30 | 2003-12-05 | Distributed real-time operating system |
EP04028839A EP1538497B1 (en) | 2003-12-05 | 2004-12-06 | Distributed real time operating system |
DE602004022375T DE602004022375D1 (en) | 2003-12-05 | 2004-12-06 | Distributed real-time operating system |
US12/367,012 US7809876B2 (en) | 1999-09-30 | 2009-02-06 | Distributed real-time operating system |
US12/877,532 US8843652B2 (en) | 1999-09-30 | 2010-09-08 | Distributed real-time operating system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/408,696 US6687257B1 (en) | 1999-08-12 | 1999-09-30 | Distributed real-time operating system providing dynamic guaranteed mixed priority scheduling for communications and processing |
US10/729,478 US20040100982A1 (en) | 1999-09-30 | 2003-12-05 | Distributed real-time operating system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/408,696 Continuation-In-Part US6687257B1 (en) | 1999-08-12 | 1999-09-30 | Distributed real-time operating system providing dynamic guaranteed mixed priority scheduling for communications and processing |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/367,012 Division US7809876B2 (en) | 1999-09-30 | 2009-02-06 | Distributed real-time operating system |
US12/877,532 Division US8843652B2 (en) | 1999-09-30 | 2010-09-08 | Distributed real-time operating system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040100982A1 true US20040100982A1 (en) | 2004-05-27 |
Family ID=34465791
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/729,478 Abandoned US20040100982A1 (en) | 1999-09-30 | 2003-12-05 | Distributed real-time operating system |
US12/367,012 Expired - Fee Related US7809876B2 (en) | 1999-09-30 | 2009-02-06 | Distributed real-time operating system |
US12/877,532 Expired - Lifetime US8843652B2 (en) | 1999-09-30 | 2010-09-08 | Distributed real-time operating system |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/367,012 Expired - Fee Related US7809876B2 (en) | 1999-09-30 | 2009-02-06 | Distributed real-time operating system |
US12/877,532 Expired - Lifetime US8843652B2 (en) | 1999-09-30 | 2010-09-08 | Distributed real-time operating system |
Country Status (3)
Country | Link |
---|---|
US (3) | US20040100982A1 (en) |
EP (1) | EP1538497B1 (en) |
DE (1) | DE602004022375D1 (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050249133A1 (en) * | 2004-05-07 | 2005-11-10 | Interdigital Technology Corporation | Medium access control layer architecture for supporting enhanced uplink |
US20060064532A1 (en) * | 2004-09-23 | 2006-03-23 | International Business Machines Corp. | Method and system for creating and dynamically selecting an arbiter design in a data processing system |
US20070013362A1 (en) * | 2005-07-18 | 2007-01-18 | Loh Aik K | Framework that maximizes the usage of testhead resources in in-circuit test system |
US20080229327A1 (en) * | 2007-03-16 | 2008-09-18 | Ricoh Company, Limited | Information processing apparatus, information processing method and computer program product |
US20090013170A1 (en) * | 2005-03-04 | 2009-01-08 | Daimlerchrysler Ag | Control Device With Configurable Hardware Modules |
US20090172230A1 (en) * | 1999-09-30 | 2009-07-02 | Sivaram Balasubramanian | Distributed real-time operating system |
US20090234514A1 (en) * | 2005-03-04 | 2009-09-17 | Daimlerchrysler Ag | Method and Device for Executing Prioritized Control Processes |
US20090328039A1 (en) * | 2008-06-26 | 2009-12-31 | International Business Machines Corporation | Deterministic Real Time Business Application Processing In A Service-Oriented Architecture |
US20100082117A1 (en) * | 2008-09-29 | 2010-04-01 | Korsberg Edward C | Industrial controller with coordination of network transmissions using global clock |
CN102144093A (en) * | 2008-08-23 | 2011-08-03 | 德风公司 | Method for controlling a wind farm |
US20120106385A1 (en) * | 2008-10-31 | 2012-05-03 | Kanapathipillai Ketheesan | Channel bandwidth estimation on hybrid technology wireless links |
US20140135950A1 (en) * | 2011-07-12 | 2014-05-15 | Phoenix Contact Gmbh & Co. Kg | Method and system for the dynamic allocation of program functions in distributed control systems |
US20160077883A1 (en) * | 2014-01-31 | 2016-03-17 | Google Inc. | Efficient Resource Utilization in Data Centers |
US9367211B1 (en) * | 2012-11-08 | 2016-06-14 | Amazon Technologies, Inc. | Interface tab generation |
US10171370B1 (en) * | 2014-05-30 | 2019-01-01 | Amazon Technologies, Inc. | Distribution operating system |
US10503549B2 (en) * | 2013-04-09 | 2019-12-10 | National Instruments Corporation | Time critical tasks scheduling |
US20220244697A1 (en) * | 2019-07-03 | 2022-08-04 | Omron Corporation | Control system, setting device, and computer-readable storage medium |
US20230344782A1 (en) * | 2022-04-22 | 2023-10-26 | Robert Bosch Gmbh | Method for a configuration in a network |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8201205B2 (en) | 2005-03-16 | 2012-06-12 | Tvworks, Llc | Upstream bandwidth management methods and apparatus |
US11120406B2 (en) * | 2006-11-16 | 2021-09-14 | Comcast Cable Communications, Llc | Process for abuse mitigation |
JP5010314B2 (en) * | 2007-03-16 | 2012-08-29 | 日本電気株式会社 | Information processing apparatus, information processing method, and program |
WO2009095812A1 (en) * | 2008-01-28 | 2009-08-06 | Nxp B.V. | Dual operating systems on a single processor |
US8151008B2 (en) * | 2008-07-02 | 2012-04-03 | Cradle Ip, Llc | Method and system for performing DMA in a multi-core system-on-chip using deadline-based scheduling |
JP2010198600A (en) * | 2009-02-02 | 2010-09-09 | Omron Corp | Industrial controller |
US7996595B2 (en) * | 2009-04-14 | 2011-08-09 | Lstar Technologies Llc | Interrupt arbitration for multiprocessors |
US8321614B2 (en) * | 2009-04-24 | 2012-11-27 | Empire Technology Development Llc | Dynamic scheduling interrupt controller for multiprocessors |
US8260996B2 (en) * | 2009-04-24 | 2012-09-04 | Empire Technology Development Llc | Interrupt optimization for multiprocessors |
EP2435885B1 (en) | 2009-05-25 | 2018-04-18 | Vestas Wind Systems A/S | One global precise time and one maximum transmission time |
US8234431B2 (en) * | 2009-10-13 | 2012-07-31 | Empire Technology Development Llc | Interrupt masking for multi-core processors |
US10185594B2 (en) * | 2009-10-29 | 2019-01-22 | International Business Machines Corporation | System and method for resource identification |
JP5308383B2 (en) * | 2010-03-18 | 2013-10-09 | パナソニック株式会社 | Virtual multiprocessor system |
KR101717494B1 (en) * | 2010-10-08 | 2017-03-28 | 삼성전자주식회사 | Apparatus and Method for processing interrupt |
US11323337B2 (en) | 2011-09-27 | 2022-05-03 | Comcast Cable Communications, Llc | Resource measurement and management |
US8983630B2 (en) | 2011-12-01 | 2015-03-17 | Honeywell International Inc. | Real time event viewing across distributed control system servers |
US20130275108A1 (en) * | 2012-04-13 | 2013-10-17 | Jiri Sofka | Performance simulation of services |
US9106557B2 (en) | 2013-03-13 | 2015-08-11 | Comcast Cable Communications, Llc | Scheduled transmission of data |
US9810345B2 (en) * | 2013-12-19 | 2017-11-07 | Dresser, Inc. | Methods to improve online diagnostics of valve assemblies on a process line and implementation thereof |
FR3021108B1 (en) * | 2014-05-16 | 2016-05-06 | Thales Sa | METHOD FOR REAL-TIME SERVICE EXECUTION, IN PARTICULAR FLIGHT MANAGEMENT, AND REAL-TIME SYSTEM USING SUCH A METHOD |
CN105740072B (en) * | 2014-12-10 | 2020-12-04 | 中兴通讯股份有限公司 | Method and device for displaying system resources |
EP3032363A1 (en) * | 2014-12-12 | 2016-06-15 | Siemens Aktiengesellschaft | Method for operating an automation device |
EP3045994A1 (en) * | 2015-01-14 | 2016-07-20 | Siemens Aktiengesellschaft | Method and device for controlling the data communication flow in an industrial communication network taking into account real time data traffic |
CN106406246B (en) * | 2015-07-31 | 2019-09-20 | 中国联合网络通信集团有限公司 | The method and device of scheduling message transmission |
CN106445675B (en) * | 2016-10-20 | 2019-12-31 | 焦点科技股份有限公司 | B2B platform distributed application scheduling and resource allocation method |
CN107908483B (en) * | 2017-10-16 | 2020-06-16 | 福建天泉教育科技有限公司 | Message management method and terminal |
CN108762899B (en) * | 2018-05-16 | 2020-05-15 | 武汉轻工大学 | Cloud task rescheduling method and device |
CN108762142B (en) * | 2018-05-24 | 2020-12-29 | 新华三技术有限公司 | Communication equipment and processing method thereof |
CN109358953B (en) * | 2018-09-20 | 2020-09-08 | 中南大学 | Multitask application unloading method in micro cloud |
EP3690563A1 (en) * | 2019-01-29 | 2020-08-05 | Siemens Aktiengesellschaft | Distribution of a control task between multiple processor cores |
US11449339B2 (en) * | 2019-09-27 | 2022-09-20 | Red Hat, Inc. | Memory barrier elision for multi-threaded workloads |
CN111088645A (en) * | 2019-11-02 | 2020-05-01 | 珠海格力电器股份有限公司 | Household appliance combination structure |
CN111930490B (en) * | 2020-09-25 | 2021-06-15 | 武汉中科通达高新技术股份有限公司 | Streaming media task management method and device |
CN113419437B (en) * | 2021-06-30 | 2022-04-19 | 四川虹美智能科技有限公司 | Intelligent home data synchronization method and device based on MVVM (multifunction vehicle management model) framework and MQTT (message queuing time) protocol |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4562539A (en) * | 1982-04-28 | 1985-12-31 | International Computers Limited | Data processing system |
US4625308A (en) * | 1982-11-30 | 1986-11-25 | American Satellite Company | All digital IDMA dynamic channel allocated satellite communications system and method |
US5136718A (en) * | 1987-09-04 | 1992-08-04 | Digital Equipment Corporation | Communications arrangement for digital data processing system employing heterogeneous multiple processing nodes |
US5452201A (en) * | 1993-08-24 | 1995-09-19 | Allen-Bradley Company, Inc. | Industrial controller with highly distributed processing |
US5530643A (en) * | 1993-08-24 | 1996-06-25 | Allen-Bradley Company, Inc. | Method of programming industrial controllers with highly distributed processing |
US5542076A (en) * | 1991-06-14 | 1996-07-30 | Digital Equipment Corporation | Method and apparatus for adaptive interrupt servicing in data processing system |
US5619409A (en) * | 1995-06-12 | 1997-04-08 | Allen-Bradley Company, Inc. | Program analysis circuitry for multi-tasking industrial controller |
US5675791A (en) * | 1994-10-31 | 1997-10-07 | International Business Machines Corporation | Method and system for database load balancing |
US5765000A (en) * | 1994-12-29 | 1998-06-09 | Siemens Energy & Automation, Inc. | Dynamic user interrupt scheme in a programmable logic controller |
US5845149A (en) * | 1996-04-10 | 1998-12-01 | Allen Bradley Company, Llc | Industrial controller with I/O mapping table for linking software addresses to physical network addresses |
US5884046A (en) * | 1996-10-23 | 1999-03-16 | Pluris, Inc. | Apparatus and method for sharing data and routing messages between a plurality of workstations in a local area network |
US5896289A (en) * | 1996-09-05 | 1999-04-20 | Allen-Bradley Company, Llc | Output weighted partitioning method for a control program in a highly distributed control system |
US5937199A (en) * | 1997-06-03 | 1999-08-10 | International Business Machines Corporation | User programmable interrupt mask with timeout for enhanced resource locking efficiency |
US5949673A (en) * | 1997-06-13 | 1999-09-07 | Allen-Bradley Company, Llc | Hybrid centralized and distributed industrial controller |
US5949674A (en) * | 1997-11-04 | 1999-09-07 | Allen-Bradley Company, Llc | Reconstruction tool for editing distributed industrial controller programs |
US6182120B1 (en) * | 1997-09-30 | 2001-01-30 | International Business Machines Corporation | Method and system for scheduling queued messages based on queue delay and queue priority |
US6216109B1 (en) * | 1994-10-11 | 2001-04-10 | Peoplesoft, Inc. | Iterative repair optimization with particular application to scheduling for integrated capacity and inventory planning |
US6370572B1 (en) * | 1998-09-04 | 2002-04-09 | Telefonaktiebolaget L M Ericsson (Publ) | Performance management and control system for a distributed communications network |
US6381652B1 (en) * | 1999-06-15 | 2002-04-30 | Raytheon Company | High bandwidth processing and communication node architectures for processing real-time control messages |
US6418459B1 (en) * | 1998-06-09 | 2002-07-09 | Advanced Micro Devices, Inc. | Isochronous task scheduling structure for a non-real-time operating system |
US6560018B1 (en) * | 1994-10-27 | 2003-05-06 | Massachusetts Institute Of Technology | Illumination system for transmissive light valve displays |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NL8100930A (en) * | 1981-02-26 | 1982-09-16 | Philips Nv | DATA COMMUNICATION SYSTEM. |
JPH04306735A (en) * | 1991-04-04 | 1992-10-29 | Toshiba Corp | Asynchronous interruption inhibition mechanism |
JPH05197573A (en) * | 1991-08-26 | 1993-08-06 | Hewlett Packard Co <Hp> | Task controlling system with task oriented paradigm |
FR2711831B1 (en) * | 1993-10-26 | 1997-09-26 | Intel Corp | Method and circuit for storing and prioritizing erasure orders in a memory device. |
US5457735A (en) * | 1994-02-01 | 1995-10-10 | Motorola, Inc. | Method and apparatus for queuing radio telephone service requests |
US5918219A (en) * | 1994-12-14 | 1999-06-29 | Isherwood; John Philip | System and method for estimating construction project costs and schedules based on historical data |
US5560018A (en) * | 1994-12-16 | 1996-09-24 | International Business Machines Corporation | Providing external interrupt serialization compatibility in a multiprocessing environment for software written to run in a uniprocessor environment |
US5768599A (en) * | 1995-02-28 | 1998-06-16 | Nec Corporation | Interrupt managing system for real-time operating system |
US5636124A (en) * | 1995-03-08 | 1997-06-03 | Allen-Bradley Company, Inc. | Multitasking industrial controller |
US5627745A (en) * | 1995-05-03 | 1997-05-06 | Allen-Bradley Company, Inc. | Parallel processing in a multitasking industrial controller |
US5729540A (en) * | 1995-10-19 | 1998-03-17 | Qualcomm Incorporated | System and method for scheduling messages on a common channel |
JP3663710B2 (en) * | 1996-01-17 | 2005-06-22 | ヤマハ株式会社 | Program generation method and processor interrupt control method |
US6097961A (en) * | 1996-11-06 | 2000-08-01 | Nokia Mobile Phones Limited | Mobile station originated SMS using digital traffic channel |
US6104962A (en) * | 1998-03-26 | 2000-08-15 | Rockwell Technologies, Llc | System for and method of allocating processing tasks of a control program configured to control a distributed control system |
US7937364B1 (en) * | 1999-03-09 | 2011-05-03 | Oracle International Corporation | Method and system for reliable access of messages by multiple consumers |
US6801943B1 (en) * | 1999-04-30 | 2004-10-05 | Honeywell International Inc. | Network scheduler for real time applications |
US6567840B1 (en) * | 1999-05-14 | 2003-05-20 | Honeywell Inc. | Task scheduling and message passing |
US6389500B1 (en) * | 1999-05-28 | 2002-05-14 | Agere Systems Guardian Corporation | Flash memory |
US6378022B1 (en) * | 1999-06-17 | 2002-04-23 | Motorola, Inc. | Method and apparatus for processing interruptible, multi-cycle instructions |
US6633942B1 (en) * | 1999-08-12 | 2003-10-14 | Rockwell Automation Technologies, Inc. | Distributed real-time operating system providing integrated interrupt management |
US6687257B1 (en) * | 1999-08-12 | 2004-02-03 | Rockwell Automation Technologies, Inc. | Distributed real-time operating system providing dynamic guaranteed mixed priority scheduling for communications and processing |
US6487455B1 (en) * | 1999-09-30 | 2002-11-26 | Rockwell Automation Technologies, Inc. | Distributed real time operating system |
US20040100982A1 (en) * | 1999-09-30 | 2004-05-27 | Sivaram Balasubramanian | Distributed real-time operating system |
JP2003519831A (en) * | 1999-12-30 | 2003-06-24 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Multitasking software architecture |
FR2818769B1 (en) * | 2000-12-21 | 2004-06-18 | Eads Airbus Sa | MULTI-TASK REAL-TIME OPERATION METHOD AND SYSTEM |
US8094804B2 (en) * | 2003-09-26 | 2012-01-10 | Avaya Inc. | Method and apparatus for assessing the status of work waiting for service |
US7930700B1 (en) * | 2005-05-23 | 2011-04-19 | Hewlett-Packard Development Company, L.P. | Method of ordering operations |
FR2908576A1 (en) * | 2006-11-14 | 2008-05-16 | Canon Kk | METHOD, DEVICE AND SOFTWARE APPLICATION FOR SCHEDULING A PACKET TRANSMISSION OF A DATA STREAM |
- 2003-12-05 US US10/729,478 patent/US20040100982A1/en not_active Abandoned
- 2004-12-06 EP EP04028839A patent/EP1538497B1/en active Active
- 2004-12-06 DE DE602004022375T patent/DE602004022375D1/en active Active
- 2009-02-06 US US12/367,012 patent/US7809876B2/en not_active Expired - Fee Related
- 2010-09-08 US US12/877,532 patent/US8843652B2/en not_active Expired - Lifetime
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8843652B2 (en) * | 1999-09-30 | 2014-09-23 | Rockwell Automation Technologies, Inc. | Distributed real-time operating system |
US20100333102A1 (en) * | 1999-09-30 | 2010-12-30 | Sivaram Balasubramanian | Distributed Real-Time Operating System |
US20090172230A1 (en) * | 1999-09-30 | 2009-07-02 | Sivaram Balasubramanian | Distributed real-time operating system |
US7809876B2 (en) | 1999-09-30 | 2010-10-05 | Rockwell Automation Technologies, Inc. | Distributed real-time operating system |
US20050249133A1 (en) * | 2004-05-07 | 2005-11-10 | Interdigital Technology Corporation | Medium access control layer architecture for supporting enhanced uplink |
US10225825B2 (en) | 2004-05-07 | 2019-03-05 | Interdigital Technology Corporation | Medium access control layer architecture for supporting enhanced uplink |
US9467983B2 (en) | 2004-05-07 | 2016-10-11 | Interdigital Technology Corporation | Medium access control layer architecture for supporting enhanced uplink |
US20120320866A1 (en) * | 2004-05-07 | 2012-12-20 | Interdigital Technology Corporation | Medium access control layer architecture for supporting enhanced uplink |
US8259752B2 (en) * | 2004-05-07 | 2012-09-04 | Interdigital Technology Corporation | Medium access control layer architecture for supporting enhanced uplink |
US8805354B2 (en) * | 2004-05-07 | 2014-08-12 | Interdigital Technology Corporation | Medium access control layer architecture for supporting enhanced uplink |
US7287111B2 (en) * | 2004-09-23 | 2007-10-23 | International Business Machines Corporation | Method and system for creating and dynamically selecting an arbiter design in a data processing system |
US20060064532A1 (en) * | 2004-09-23 | 2006-03-23 | International Business Machines Corp. | Method and system for creating and dynamically selecting an arbiter design in a data processing system |
US20090234514A1 (en) * | 2005-03-04 | 2009-09-17 | Daimlerchrysler Ag | Method and Device for Executing Prioritized Control Processes |
US20090013170A1 (en) * | 2005-03-04 | 2009-01-08 | Daimlerchrysler Ag | Control Device With Configurable Hardware Modules |
US7253606B2 (en) * | 2005-07-18 | 2007-08-07 | Agilent Technologies, Inc. | Framework that maximizes the usage of testhead resources in in-circuit test system |
US20070013362A1 (en) * | 2005-07-18 | 2007-01-18 | Loh Aik K | Framework that maximizes the usage of testhead resources in in-circuit test system |
US20080229327A1 (en) * | 2007-03-16 | 2008-09-18 | Ricoh Company, Limited | Information processing apparatus, information processing method and computer program product |
US20160335126A1 (en) * | 2008-06-26 | 2016-11-17 | International Business Machines Corporation | Deterministic real time business application processing in a service-oriented architecture |
US9430293B2 (en) * | 2008-06-26 | 2016-08-30 | International Business Machines Corporation | Deterministic real time business application processing in a service-oriented architecture |
US20090328039A1 (en) * | 2008-06-26 | 2009-12-31 | International Business Machines Corporation | Deterministic Real Time Business Application Processing In A Service-Oriented Architecture |
US20150301867A1 (en) * | 2008-06-26 | 2015-10-22 | International Business Machines Corporation | Deterministic real time business application processing in a service-oriented architecture |
US10908963B2 (en) * | 2008-06-26 | 2021-02-02 | International Business Machines Corporation | Deterministic real time business application processing in a service-oriented architecture |
US9047125B2 (en) * | 2008-06-26 | 2015-06-02 | International Business Machines Corporation | Deterministic real time business application processing in a service-oriented architecture |
CN102144093A (en) * | 2008-08-23 | 2011-08-03 | 德风公司 | Method for controlling a wind farm |
US7930041B2 (en) * | 2008-09-29 | 2011-04-19 | Rockwell Automation Technologies, Inc. | Industrial controller with coordination of network transmissions using global clock |
US20100082117A1 (en) * | 2008-09-29 | 2010-04-01 | Korsberg Edward C | Industrial controller with coordination of network transmissions using global clock |
US8937877B2 (en) * | 2008-10-31 | 2015-01-20 | Venturi Ip Llc | Channel bandwidth estimation on hybrid technology wireless links |
US20120106385A1 (en) * | 2008-10-31 | 2012-05-03 | Kanapathipillai Ketheesan | Channel bandwidth estimation on hybrid technology wireless links |
US9674729B2 (en) | 2008-10-31 | 2017-06-06 | Venturi Wireless, Inc. | Channel bandwidth estimation on hybrid technology wireless links |
US20140135950A1 (en) * | 2011-07-12 | 2014-05-15 | Phoenix Contact Gmbh & Co. Kg | Method and system for the dynamic allocation of program functions in distributed control systems |
US9389604B2 (en) * | 2011-07-12 | 2016-07-12 | Phoenix Contact Gmbh & Co. Kg | Method and system for the dynamic allocation of program functions in distributed control systems |
US9367211B1 (en) * | 2012-11-08 | 2016-06-14 | Amazon Technologies, Inc. | Interface tab generation |
US10503549B2 (en) * | 2013-04-09 | 2019-12-10 | National Instruments Corporation | Time critical tasks scheduling |
CN105849715A (en) * | 2014-01-31 | 2016-08-10 | Google Inc | Efficient resource utilization in data centers |
US9823948B2 (en) * | 2014-01-31 | 2017-11-21 | Google Inc. | Efficient resource utilization in data centers |
US20160077883A1 (en) * | 2014-01-31 | 2016-03-17 | Google Inc. | Efficient Resource Utilization in Data Centers |
US10171370B1 (en) * | 2014-05-30 | 2019-01-01 | Amazon Technologies, Inc. | Distribution operating system |
US20220244697A1 (en) * | 2019-07-03 | 2022-08-04 | Omron Corporation | Control system, setting device, and computer-readable storage medium |
US20230344782A1 (en) * | 2022-04-22 | 2023-10-26 | Robert Bosch Gmbh | Method for a configuration in a network |
Also Published As
Publication number | Publication date |
---|---|
US20100333102A1 (en) | 2010-12-30 |
US20090172230A1 (en) | 2009-07-02 |
US8843652B2 (en) | 2014-09-23 |
DE602004022375D1 (en) | 2009-09-17 |
EP1538497A3 (en) | 2008-02-20 |
EP1538497A2 (en) | 2005-06-08 |
US7809876B2 (en) | 2010-10-05 |
EP1538497B1 (en) | 2009-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7809876B2 (en) | Distributed real-time operating system | |
US6687257B1 (en) | Distributed real-time operating system providing dynamic guaranteed mixed priority scheduling for communications and processing | |
US6633942B1 (en) | Distributed real-time operating system providing integrated interrupt management | |
US9424093B2 (en) | Process scheduler employing adaptive partitioning of process threads | |
EP1703388B1 (en) | Process scheduler employing adaptive partitioning of process threads | |
Sprunt et al. | Aperiodic task scheduling for hard-real-time systems | |
Tindell | Using offset information to analyse static priority pre-emptively scheduled task sets | |
Stankovic et al. | Evaluation of a flexible task scheduling algorithm for distributed hard real-time systems | |
US6757897B1 (en) | Apparatus and methods for scheduling and performing tasks | |
US8631409B2 (en) | Adaptive partitioning scheduler for multiprocessing system | |
US5613129A (en) | Adaptive mechanism for efficient interrupt processing | |
US7076781B2 (en) | Resource reservation for large-scale job scheduling | |
US6487455B1 (en) | Distributed real time operating system | |
US5838957A (en) | Multi-stage timer implementation for telecommunications transmission | |
US6473780B1 (en) | Scheduling of direct memory access | |
US5768572A (en) | Timer state control optimized for frequent cancel and reset operations | |
Kalogeraki et al. | Dynamic scheduling for soft real-time distributed object systems | |
García et al. | Minimizing the effects of jitter in distributed hard real-time systems | |
Racu et al. | Improved response time analysis of tasks scheduled under preemptive round-robin | |
JP5480322B2 (en) | Performance control method, system and program thereof | |
EP1076275B1 (en) | Distributed real-time operating system | |
US20070189179A1 (en) | Response time prediction method for frames on a serial bus | |
KR940003846B1 (en) | Real-time processing scheduling method in electronic exchange | |
Fokkink et al. | Real-Time Embedded Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ROCKWELL AUTOMATION TECHNOLOGIES, INC., OHIO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BALASUBRAMANIAN, SIVARAM;REEL/FRAME:014793/0926 Effective date: 20031203 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |