WO1989001203A1 - MIMD computer system - Google Patents

MIMD computer system

Info

Publication number
WO1989001203A1
Authority
WO
WIPO (PCT)
Prior art keywords
control
control graph
token
instructions
graph
Prior art date
Application number
PCT/GB1988/000594
Other languages
French (fr)
Inventor
Andrew Julian Beer
William Jeffrey Christmas
Stephen Tudor Davies
John Norman Harrison
Derek Owen Morris
Christopher John Mottershead
Keith Duncan Roberts
Original Assignee
The British Petroleum Company Plc
Priority date
Filing date
Publication date
Application filed by The British Petroleum Company Plc filed Critical The British Petroleum Company Plc
Publication of WO1989001203A1 publication Critical patent/WO1989001203A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/80Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8007Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors single instruction multiple data [SIMD] multiprocessors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/45Exploiting coarse grain parallelism in compilation, i.e. parallelism between groups of instructions

Definitions

  • the present invention relates to multiprocessor computers and in particular to the coordination and synchronisation of processors in a MIMD computer.
  • Computers consisting of multiple processors are sometimes designed to operate in a mode known in the art as Multiple Instruction Stream, Multiple Data Stream, or MIMD.
  • each of the processors executes an independent sequence of instructions and operates on an independent set of data.
  • a processor must not execute an instruction which is dependent on data from another processor until the other processor has generated the data.
  • a means for such synchronisation forms an important part of the present invention.
  • the synchronisation is often provided for by means of a global, or shared, memory, which may be accessed by any of the processors.
  • the processors can then synchronise with each other by modifying and monitoring data values in the global memory. This is often aided by the provision of indivisible (atomic) read-modify-write operations on the global memory.
  • a disadvantage with global memory architectures is that access to the global memory can limit the speed of the computer as a whole or limit the number of processors that can be effectively used in such a system. It would be desirable to provide a means of synchronising MIMD processors which is not unduly expensive and yet does not limit the speed of the computer or the number of processors that can be effectively used.
  • the paths of control flow and data dependencies in a program can be represented by means of a directed graph.
  • a directed graph can be automatically extracted from a given computer program and represented by a set of linked nodes or control flow instructions.
  • the execution of a control graph can be thought of in terms of tokens flowing through the graph.
  • Each token represents a thread of control, enabling operations to take place in accordance with its location in the graph.
  • Each of the nodes in the directed graph may be considered to be a control graph instruction.
  • the purpose of such an instruction is to take one or more tokens from its input and to place one or more tokens on its output, i.e. to transfer a token from the arc preceding the node and place it on the succeeding (i.e. following) arc.
  • the most primitive form of control graph instruction takes an input token and creates an output token. It also identifies to a plurality of data processors one or more data manipulation instructions which are to be executed.
  • a MIMD computer comprises: a plurality of data processors, each capable of executing a sequential stream of instructions having a single entry point and a single exit point; at least one control graph processor for (a) issuing streams of instructions to said data processors, and (b) causing synchronisation between some or all of said data processors; in accordance with control graph instructions executed by the control graph processor.
  • the streams of instructions may be issued directly by the control graph processor, i.e. the instructions may be issued from the control graph processor.
  • alternatively, the instructions may be issued indirectly and may, for example, be held in an area of memory and passed directly to a data processor, with the control graph processor identifying which instruction is to be issued to which data processor.
  • a MIMD computer in accordance with the present invention may comprise:
  • control graph memory for storing control flow and synchronisation instructions
  • data memory for storing information for the data processors.
  • This invention combines the advantages of separate control flow execution with the effective synchronisation of MIMD processors without limiting the speed of operation of the computer or the number of processors that can be effectively used in the computer.
  • the control graph processor performs the functions of control flow execution and process synchronisation by executing, or interpreting, a control graph represented by means of control flow and synchronisation instructions.
  • a program executed by a computer comprises a control graph and a set of data-processing operations.
  • the control graph is executed by the control graph processor whilst the data-processing operations are executed by the MIMD processors under the control of the control graph processor.
  • Both the control graph and the set of data-processing operations are normally generated by a compiler from a single program expressed in a high-level language. To allow for subprograms, the control graph may be hierarchical.
  • the present invention further comprises a method of operating a MIMD computer which comprises:
  • the plurality of data processors may be identical or varied and, in general, any processors capable of executing a stream of instructions may be used.
  • the processors may be conventional microprocessors.
  • the nature of the control graph processor depends on the representation chosen for the control graph.
  • the set of control graph instructions will normally include some form of selection (or branching), to allow data-dependent control flow, and some form of forking, to allow a single thread of control to become two or more threads of control.
  • a subgraph call instruction can be provided which causes the execution of a sub-control graph in a hierarchical fashion.
  • the synchronisation instructions will usually include a join operation, to allow two or more threads of control to synchronise and become a single thread of control.
  • a selection instruction causes a single token to travel down one of a plurality of branches in the control graph, the choice of branch being data-dependent.
  • a fork instruction converts a single token into two or more tokens, one following each of the succeeding branches.
  • a join instruction causes a token to be held up until there is a token available on each other branch meeting at the join.
  • a subgraph call instruction causes a token to be passed to the associated sub-control graph and returned upon completion of that sub-control graph.
  • such a control graph can be automatically generated from a program.
  • This control graph generation forms part of the compilation process whereby a program is translated into a form suitable for execution by a particular computer.
  • the methods of construction and operation of compilers and the advantages of using a compiler are fully discussed in the prior art.
  • the use of a compiler does not form part of, and is not essential to the operation of, the present invention.
  • the use of such a compiler is one way in which a programmer can conveniently use a computer according to the present invention.
  • Figure 1 is a diagrammatic representation of the types of nodes in the control graph
  • Figure 2 is a section of program and a diagrammatic representation of its associated control graph
  • Figure 3 is a diagrammatic representation of a form of a MIMD computer comprising a control graph processor.
  • a control graph comprises nodes of six distinct types. These are represented graphically in Figure 1 and are referred to as Predicate, Either, Fork, Join, Subgraph and Basic Operation nodes.
  • a control graph is constructed from these types of nodes and has a unique entry point and a unique exit point.
  • a control graph is executed by placing a token at its entry point and following the rules given below for progressing tokens in accordance with the nodes of the control graph. The execution of the control graph is complete when a token arrives at the exit point.
  • a "Predicate node" as used in this specification is a node which performs the function of selection between alternative branches in the flow of control of the program. That is to say, when a token arrives at a Predicate node, an associated boolean datum is interrogated and the token progresses along one of the two branches in the control graph in accordance with the value of the datum.
  • An "Either node" is used to recombine the two branches from a Predicate node. When a token arrives at either branch entering an Either node, it passes on. It can be seen that the Either node does not perform an active control or synchronisation function but it is included to make control graphs well structured.
  • a "Fork node" as used in this specification is a node which performs the function of forking the flow of control. That is to say, when a token arrives at a Fork node, a token moves down each of the following branches in the control graph. Fork nodes allow for multiple threads of control and hence MIMD operation.
  • a "Join node" as used in this specification is a node which is used for the purpose of synchronising separate threads of control. A token does not pass out of a Join node until a token has arrived on each of the two branches entering the node. In this way, two separate threads of control flow are synchronised and recombined into a single thread.
  • a "Subgraph node" as used in this specification is a node which is used to control the execution of subprograms. When a token arrives at a Subgraph node, it passes to the entry point of the associated sub-control graph. When a token arrives at the exit point of the sub-control graph, it passes back to the Subgraph node and continues on its way through the control graph.
  • a "Basic Operation node" as used in this specification is a node which causes the activation of an operation that is not related to control flow.
  • such an operation is referred to as a data manipulation operation and takes the form of a sequence of one or more instructions, with a single entry point and a single exit point, which is executed by one of the MIMD processors.
  • when a token arrives at a Basic Operation node, the associated data manipulation operation is started.
  • when the operation is complete, the token passes out of the Basic Operation node. It is by this means that a control graph exerts control over a computation.
  • the graphical representation of a Basic Operation node is the same as that of a Subgraph node. This is convenient as the effect of both types of node is to bring about an operation that is defined elsewhere, and wait for that operation to finish.
  • Figure 2 shows a section of a program written in Occam, together with its equivalent control graph.
  • the PAR construct is represented by means of Fork and Join nodes whilst the IF construct is represented by means of Predicate and Either nodes.
  • the assignments are treated as basic operations, to be carried out by the MIMD processors. For the sake of clarity, this example has been kept simple. It will be appreciated that this set of node types allows the representation of much more complex programs.
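Figure 2 itself is not reproduced here, but the mapping it describes can be sketched as a data structure. The node and field names below are illustrative assumptions of this sketch, not the patent's encoding; the fragment models a PAR of two assignments, bracketed by Fork and Join nodes:

```python
# Hypothetical node records for the control graph of an Occam-style
# PAR of two assignments:
#   PAR
#     a := 1
#     b := 2
# All names ("Fork", "Join", "BasicOp", "succ") are illustrative only.
graph = {
    "entry":  {"type": "Fork", "succ": ["op_a", "op_b"]},
    "op_a":   {"type": "BasicOp", "op": "a := 1", "succ": ["join_l"]},
    "op_b":   {"type": "BasicOp", "op": "b := 2", "succ": ["join_r"]},
    "join_l": {"type": "Join", "side": "left",  "succ": ["exit"]},
    "join_r": {"type": "Join", "side": "right", "succ": ["exit"]},
    "exit":   {"type": "Exit", "succ": []},
}
```

Note how the Fork node has one successor per parallel branch, and both branches converge on the two inputs of the same Join node, mirroring the PAR construct's fork/join bracketing.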
  • the set of six node types described above constitutes the preferred composition of a control graph.
  • This set of node types is both simple enough to allow a control graph processor to be constructed and powerful enough to allow the control flow and synchronisation requirements of most programming languages to be represented.
  • the present invention does not rely on this particular set of node types and alternatives may be used in other embodiments of the invention.
  • in order to build a control graph processor, it is desirable to have a representation of control graphs suitable for storage in memory components. For this reason, each of the node types described above has an equivalent control graph instruction. These types of instruction constitute the instruction set recognised by the control graph processor.
  • a graphical representation of a control graph (as in figure 2, for example) has the same meaning as, and is isomorphic to, the equivalent representation in terms of control graph instructions. The reason for introducing the two representations is that the former is easy for humans to comprehend and the latter is more conveniently stored and processed by a machine.
  • a MIMD computer preferably has a control graph processor which comprises: at least one sequencer; a token heap, where tokens marking points of execution temporarily reside; a control graph dictionary, where control graph instructions are stored; a tree store, where the dynamic state of control graph calls is maintained.
  • control graph dictionary stores not only a top level control graph but also sub-control graphs and the tree stores not only the dynamic state of calls to the top level control graph but also the dynamic state of calls to sub-control graphs.
  • a sub-control graph has the same form as a top level control graph.
  • the sequencer is the unit which is directly involved in the execution of the control flow instructions obtained from the control graph dictionary.
  • the sequencer fetches instructions from an area of memory holding control graph instructions and interprets the instructions so as to move a token through a control graph. It is preferred to provide a plurality of sequencers.
  • a preferred method of operating a MIMD computer having a plurality of sequencers for executing control graph instructions comprises each sequencer: fetching a token pointer from an area of memory acting as a token heap, fetching control graph instructions from an area of memory acting as a control graph dictionary, pointed to by the token pointer, altering the token pointer in accordance with the execution of the control graph instructions, creating a new token pointer and storing it on the token heap whenever there is a fork in the control flow, and fetching a new token pointer from the token heap whenever the token pointer no longer points to a further instruction to be executed.
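The sequencer loop just described can be sketched as a single-threaded simulation. The tuple instruction encodings, addresses and names below are assumptions made for illustration, not the patent's instruction formats:

```python
# Minimal simulation of one sequencer working a token heap.
# "dictionary" maps addresses to control graph instructions; the tuple
# encodings are illustrative assumptions, not the patent's formats.
dictionary = {
    0: ("fork", 1, 3),      # create a second token pointer at address 3
    1: ("basic", "A"),      # basic operation, then fall through
    2: ("terminal",),       # this thread of control ends
    3: ("basic", "B"),
    4: ("terminal",),
}

def run(dictionary, entry):
    token_heap = [entry]            # token pointers awaiting a sequencer
    trace = []
    while token_heap:
        tp = token_heap.pop()       # fetch a token pointer from the heap
        while tp is not None:
            op = dictionary[tp]     # fetch the instruction it points to
            if op[0] == "fork":
                token_heap.append(op[2])  # new token pointer to the heap
                tp = op[1]                # keep executing the other one
            elif op[0] == "basic":
                trace.append(op[1])       # stand-in for a data processor
                tp += 1
            elif op[0] == "terminal":
                tp = None           # no further instruction: go idle and
                                    # fetch another token from the heap
    return trace

print(run(dictionary, 0))  # ['A', 'B']
```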
  • a preferred method of operating a MIMD computer which is capable of executing sub-programs containing control graph instructions comprises:
  • the MIMD computer comprises a control graph processor (1) and a plurality of data processing units (2).
  • the control graph processor comprises a plurality of control graph sequencers (3) linked to the data processing units (2).
  • the control graph sequencers are also linked to a control graph dictionary (4), a token heap (5) and a subgraph tree store (6).
  • the control graph dictionary (4) is an area of memory used to store the control graph, and its subgraphs, for the program being executed. These graphs are represented by collections of control graph instructions mentioned above.
  • the graphs are, in general, stored as a hierarchical set of sub-control graphs, each of which may contain references to other sub-control graphs. Any individual sub-control graph may be referenced simultaneously from more than one point in a calling sub-control graph and by more than one sub-control graph.
  • the control graph sequencers (3) are responsible for exerting control over the data processing units (2). This is achieved by moving tokens through the control graph stored in the control graph dictionary (4) and invoking the data processing units (2) when tokens reach Basic Operation nodes.
  • a token pointer is used to identify the location of the token in the control graph.
  • a token pointer can be thought of as being equivalent to an instruction pointer in a conventional computer. However, there may be many token pointers in existence at any given moment.
  • a control graph sequencer (3) (hereafter referred to as a 'sequencer') fetches instructions from the control graph dictionary (4) and interprets them in order to move a token through a graph.
  • when a sequencer generates a new token (e.g. in response to a Fork instruction) it puts the new token pointer on the token heap (5).
  • the token heap (5) is another area of memory, where token pointers reside when they are not being progressed through the graph by a sequencer.
  • when a sequencer falls idle, because its token has been consumed (e.g. as a result of a Join instruction) or because it has been reset, it fetches another token pointer from the token heap.
  • the tree store (6) is a further area of memory which is used to store the current state of execution of the control graph processor.
  • each token pointer has an associated tree-frame pointer that is equivalent to a stack frame pointer in a conventional computer.
  • the tree is also used to note the arrival of the first token at a Join node.
  • the first token (of a pair) to arrive at a Join node is not allowed to proceed any further in the graph.
  • when the second token arrives, it can proceed to the next node in the control graph.
  • the tree is a convenient place to note the arrival of the first token because it is easily accessed by the sequencer that is processing the second token.
  • a tree is composed of a set of linked frames, each of which has the following structure:
  • "return-address” is the location in the control graph dictionary of the control graph instruction following the Subgraph instruction in the calling control graph;
  • "parent-address" is the location in the tree of the frame associated with the calling subgraph;
  • "join-bits" is a pair of bits that indicate the arrival of a token at one or other of the input branches to a Join instruction; there are as many pairs of bits as there are Join nodes in the associated sub-control graph.
  • in a root frame, the return-address and parent-address have null values, which is sufficient to indicate that the frame is in fact a root frame.
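A tree frame of this shape could be modelled as follows. The field names follow the patent's terms, while the class itself and the use of None for the null values are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TreeFrame:
    # Location, in the control graph dictionary, of the instruction
    # following the Subgraph instruction in the calling graph.
    return_address: Optional[int] = None
    # Location, in the tree, of the frame of the calling subgraph.
    parent_address: Optional[int] = None
    # One (left, right) pair of join-bits per Join node in this subgraph.
    join_bits: List[List[bool]] = field(default_factory=list)

    def is_root(self) -> bool:
        # Null return- and parent-addresses mark the root frame.
        return self.return_address is None and self.parent_address is None

root = TreeFrame(join_bits=[[False, False]])
print(root.is_root())  # True
```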
  • Tokens are now considered in more detail.
  • new tokens are created from time to time.
  • the token pointers are placed on the token heap while they are waiting for a sequencer to become available.
  • the execution of the graph can be started by placing, on the token heap, a token pointer which locates a token at the entry point of the graph.
  • the token records each have the following structure, both while being processed by a sequencer and when residing on the token heap:
  • as far as the sequencer is concerned, the only action required by an Either instruction is an unconditional modification of the token location. For this reason, the Either node need not be explicitly represented in other embodiments.
  • the branches that merge at an Either node could instead be merged at the succeeding node in the control graph.
  • since the Fork node creates two tokens where formerly there was one, and a sequencer can only process one token at a time, the sequencer executes the fork by putting one of the token pointers on the token heap and continuing to execute the other.
  • the Fork node is represented by a Fork instruction with the format:
  • the first token that arrives at a Join node needs to be stopped and its arrival noted. This arrival is recorded in the tree by setting a join-bit, as mentioned above.
  • when the second token arrives, the join-bit for the first is cleared and a single token is progressed.
  • the sequencer checks that the two tokens arrive from the two different routes meeting at the Join node. For this reason, the Join node is represented by a pair of Join instructions having the form:
  • the Left-join instruction operates as:

    if right-join-bit is set then
        reset right-join-bit
        token location := address
    else if left-join-bit is set then
        error
    else
        set left-join-bit
        fetch another token from the token heap

    and vice versa for the Right-join instruction. If no check is made of the fact that the two tokens should arrive from two different routes, then the two join-bits can be replaced with a single bit and the pair of Join instructions can be replaced with a single Join instruction.
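The Left-join/Right-join behaviour quoted above can be rendered as a small sketch. The join-bit pair is simplified to a mutable two-element list, and the LEFT/RIGHT indices and function name are assumptions of this sketch:

```python
LEFT, RIGHT = 0, 1

def join(join_bits, side, token):
    """Apply one half of a Join-instruction pair to an arriving token.

    Returns the token if it may proceed past the Join node, or None if
    it is the first arrival and must be held (the sequencer would then
    fetch another token from the token heap). Raises on a second arrival
    from the same route, mirroring the 'error' case quoted above.
    """
    other = RIGHT if side == LEFT else LEFT
    if join_bits[other]:          # partner already arrived: proceed
        join_bits[other] = False
        return token
    if join_bits[side]:           # two tokens from the same route
        raise RuntimeError("duplicate token at Join input")
    join_bits[side] = True        # note the first arrival in the tree
    return None

bits = [False, False]
print(join(bits, LEFT, "t1"))    # None - the first token is held up
print(join(bits, RIGHT, "t2"))   # t2   - the second token proceeds
```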
  • the Subgraph node is represented by a Subgraph instruction which causes a new frame to be created on the tree and the token to be passed into the sub-control graph.
  • the format of the instruction is :
  • preferably, only one copy of the sub-control graph for each subroutine is stored in the control graph dictionary.
  • This graph may be executed several times, possibly simultaneously, with each execution having an associated tree-frame.
  • a copy of the sub-control graph could be substituted for the Subgraph node at the time of compilation and thus avoid the need for Subgraph instructions.
  • such a scheme would be wasteful of control graph dictionary space and would prohibit arbitrarily recursive subroutines.
  • the Terminal instruction has the form: Terminal
  • the Basic Operation node is the means whereby data processing operations are invoked.
  • the format is :
  • the sequencer waits for the operation to complete and then progresses the same token.
  • the sequencer could fetch another token from the token heap and process it while the data processing unit is performing the basic operation for the first token.
  • each data processing unit (2) comprises instruction moving means and a data processor, together with an instruction memory and a data memory.
  • the instruction moving means accepts the basic operation command from the sequencer and sends a stream of instructions (obtained from the instruction memory) to the data processor in accordance with the parameter-address information.
  • the stream of instructions constitutes a single-entry single-exit (SESE) block which has been generated by the compiler.
  • the control graph processor may be constructed from a conventional microprocessor, acting as a control graph sequencer, and a single physical memory, holding the control graph dictionary, token heap and tree store.
  • the sequencer is a special purpose processor which directly executes control graphs (rather than interpreting them) and the three storage areas are provided by separate physical memory units.
  • the MIMD processors may be microprocessors, each of which executes an instruction stream generated by an associated instruction moving means.
  • Each instruction moving means may be constructed from a DMA (direct memory access) device. The use of an instruction moving means of this nature is not an important feature and other embodiments may use other means to generate the instruction streams.
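The division of labour between the instruction moving means and its data processor can be sketched as follows; the function-based hand-off and all names are assumptions of this sketch, not the patent's design:

```python
# A data processing unit simulated as: an instruction memory holding
# SESE (single-entry single-exit) blocks, an instruction moving means
# that streams a named block, and a trivial data processor.
instruction_memory = {
    "blk_a": ["load r0, x", "add r0, 1", "store r0, x"],
}

def move_instructions(block_name):
    # Instruction moving means: on a basic operation command from the
    # sequencer, stream the block's instructions, in order.
    return list(instruction_memory[block_name])

def data_processor(stream):
    # The processor consumes the stream from entry to exit and reports
    # how many instructions ran, so the sequencer can progress the token.
    executed = [instr for instr in stream]
    return len(executed)

print(data_processor(move_instructions("blk_a")))  # 3
```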

Abstract

A multiprocessor MIMD computer has a plurality of data processors and a control graph processor for issuing streams of instructions to the data processors and for causing synchronisation between data processors in accordance with control graph instructions.

Description

MIMD computer system
The present invention relates to multiprocessor computers and in particular to the coordination and synchronisation of processors in a MIMD computer.
Computers consisting of multiple processors are sometimes designed to operate in a mode known in the art as Multiple
Instruction Stream, Multiple Data Stream - or MIMD. In the MIMD mode, each of the processors executes an independent sequence of instructions and operates on an independent set of data. In order for the plurality of processors to collaborate on an overall task, it is necessary for them to synchronise at various points in order that instructions are executed in the correct order. Thus a processor must not execute an instruction which is dependent on data from another processor until the other processor has generated the data. A means for such synchronisation forms an important part of the present invention.
In existing MIMD computers, the synchronisation is often provided for by means of a global, or shared, memory, which may be accessed by any of the processors. The processors can then synchronise with each other by modifying and monitoring data values in the global memory. This is often aided by the provision of indivisible (atomic) read-modify-write operations on the global memory. A disadvantage with global memory architectures is that access to the global memory can limit the speed of the computer as a whole or limit the number of processors that can be effectively used in such a system. It would be desirable to provide a means of synchronising MIMD processors which is not unduly expensive and yet does not limit the speed of the computer or the number of processors that can be effectively used.
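The prior-art shared-memory scheme described above can be sketched with two threads synchronising through a flag in "global memory"; a lock stands in for the hardware's indivisible read-modify-write operation (the code and names are an illustrative sketch, not taken from the patent):

```python
import threading

# Hypothetical shared "global memory" word used as a synchronisation flag.
# The Lock stands in for the indivisible (atomic) read-modify-write
# operation that a global-memory MIMD machine would provide in hardware.
flag = 0
flag_lock = threading.Lock()
result = []

def producer():
    global flag
    result.append(42)          # generate the datum the consumer depends on
    with flag_lock:            # atomic read-modify-write of the flag
        flag = 1

def consumer():
    while True:                # monitor the shared value (busy-wait)
        with flag_lock:
            if flag == 1:
                break
    result.append(result[0] + 1)  # safe: the producer's datum now exists

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start()
t1.join(); t2.join()
print(result)  # [42, 43]
```

Every synchronisation point costs a round of global-memory traffic, which is the contention the patent's control graph processor is designed to avoid.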
The paths of control flow and data dependencies in a program can be represented by means of a directed graph. Such a graph can be automatically extracted from a given computer program and represented by a set of linked nodes or control flow instructions. The execution of a control graph can be thought of in terms of tokens flowing through the graph. Each token represents a thread of control, enabling operations to take place in accordance with its location in the graph. Thus, in a conventional serial program, there is only a single thread of control and hence a single token in the control graph. However, in a program designed to exploit a MIMD computer, there may be many threads of control and each of these would have an associated token in the control graph.
Each of the nodes in the directed graph may be considered to be a control graph instruction. The purpose of such an instruction is to take one or more tokens from its input and to place one or more tokens on its output, i.e. to transfer a token from the arc preceding the node and place it on the succeeding (i.e. following) arc. The most primitive form of control graph instruction takes an input token and creates an output token. It also identifies to a plurality of data processors one or more data manipulation instructions which are to be executed. According to the present invention, a MIMD computer comprises: a plurality of data processors, each capable of executing a sequential stream of instructions having a single entry point and a single exit point; at least one control graph processor for (a) issuing streams of instructions to said data processors, and (b) causing synchronisation between some or all of said data processors; in accordance with control graph instructions executed by the control graph processor.
The streams of instructions may be issued directly by the control graph processor, i.e. the instructions may be issued from the control graph processor. Alternatively, the instructions may be issued indirectly and may, for example, be held in an area of memory and passed directly to a data processor, with the control graph processor identifying which instruction is to be issued to which data processor.
A MIMD computer in accordance with the present invention may comprise:
(a) a control graph memory for storing control flow and synchronisation instructions, and (b) a data memory for storing information for the data processors. This invention combines the advantages of separate control flow execution with the effective synchronisation of MIMD processors without limiting the speed of operation of the computer or the number of processors that can be effectively used in the computer. The control graph processor performs the functions of control flow execution and process synchronisation by executing, or interpreting, a control graph represented by means of control flow and synchronisation instructions. Thus a program executed by a computer according to the present invention comprises a control graph and a set of data-processing operations. The control graph is executed by the control graph processor whilst the data-processing operations are executed by the MIMD processors under the control of the control graph processor. Both the control graph and the set of data-processing operations are normally generated by a compiler from a single program expressed in a high-level language. To allow for subprograms, the control graph may be hierarchical.
The present invention further comprises a method of operating a MIMD computer which comprises:
(a) executing independent streams of instructions on a plurality of first processors, each stream consisting of blocks, each of the said blocks consisting of a sequence of instructions, each block having a single entry point and a single exit point;
(b) issuing the blocks of instructions to the first processors in accordance with control graph instructions executed by a second processor and (c) synchronising two or more of the first processors in accordance with control graph instructions executed by the second processor. The plurality of data processors may be identical or varied and, in general, any processors capable of executing a stream of instructions may be used. Thus, for example, the processors may be conventional microprocessors.
The nature of the control graph processor depends on the representation chosen for the control graph.
The set of control graph instructions will normally include some form of selection (or branching), to allow data-dependent control flow, and some form of forking, to allow a single thread of control to become two or more threads of control. To cater for subprogram calls, a subgraph call instruction can be provided which causes the execution of a sub-control graph in a hierarchical fashion.
The synchronisation instructions will usually include a join operation, to allow two or more threads of control to synchronise and become a single thread of control. In terms of tokens, a selection instruction causes a single token to travel down one of a plurality of branches in the control graph, the choice of branch being data-dependent. A fork instruction converts a single token into two or more tokens, one following each of the succeeding branches. A join instruction causes a token to be held up until there is a token available on each other branch meeting at the join. A subgraph call instruction causes a token to be passed to the associated sub-control graph and returned upon completion of that sub-control graph.
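In token terms, selection and forking reduce to small transitions from one token to a set of successor tokens; a minimal sketch (branch addresses and function names are illustrative assumptions):

```python
def select(datum, true_branch, false_branch):
    # Selection: one token in, one token out, the branch being
    # chosen by the associated boolean datum.
    return [true_branch if datum else false_branch]

def fork(branches):
    # Fork: one token in, one token out on each succeeding branch.
    return list(branches)

print(select(True, 10, 20))   # [10]
print(fork([10, 20]))         # [10, 20]
```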
As indicated above, such a control graph can be automatically generated from a program. This control graph generation forms part of the compilation process whereby a program is translated into a form suitable for execution by a particular computer. The methods of construction and operation of compilers and the advantages of using a compiler are fully discussed in the prior art. The use of a compiler does not form part of, and is not essential to the operation of, the present invention. However, the use of such a compiler is one way in which a programmer can conveniently use a computer according to the present invention.
The invention is now further illustrated with reference to the drawings in which Figure 1 is a diagrammatic representation of the types of nodes in the control graph;
Figure 2 is a section of program and a diagrammatic representation of its associated control graph;
Figure 3 is a diagrammatic representation of a form of a MIMD computer comprising a control graph processor.
Preferably, a control graph comprises nodes of six distinct types. These are represented graphically in Figure 1 and are referred to as Predicate, Either, Fork, Join, Subgraph and Basic Operation nodes. A control graph is constructed from these types of nodes and has a unique entry point and a unique exit point. A control graph is executed by placing a token at its entry point and following the rules given below for progressing tokens in accordance with the nodes of the control graph. The execution of the control graph is complete when a token arrives at the exit point. A "Predicate node" as used in this specification is a node which performs the function of selection between alternative branches in the flow of control of the program. That is to say, when a token arrives at a Predicate node, an associated boolean datum is interrogated and the token progresses along one of the two branches in the control graph in accordance with the value of the datum.
An "Either node" is used to recombine the two branches from a Predicate node. When a token arrives along either of the branches entering an Either node, it passes straight on. It can be seen that the Either node does not perform an active control or synchronisation function, but it is included to make control graphs well structured.
A "Fork node" as used in this specification is a node which performs the function of forking the flow of control. That is to say, when a token arrives at a Fork node, a token moves down each of the following branches in the control graph. Fork nodes allow for multiple threads of control and hence MIMD operation.
A "Join node" as used in this specification is a node which is used for the purpose of synchronising separate threads of control. A token does not pass out of a Join node until a token has arrived on each of the two branches entering the node. In this way, two separate threads of control flow are synchronised and recombined into a single thread.
A "Subgraph node" as used in this specification is a node which is used to control the execution of subprograms. When a token arrives at a Subgraph node, it passes to the entry point of the associated sub-control graph. When a token arrives at the exit point of the sub-control graph, it passes back to the Subgraph node and continues on its way through the control graph.
A "Basic Operation node" as used in this specification is a node which causes the activation of an operation that is not related to control flow. In this embodiment, such an operation is referred to as a data manipulation operation and takes the form of a sequence of one or more instructions, with a single entry point and a single exit point, which is executed by one of the MIMD processors. When a token arrives at a Basic Operation node, the associated data manipulation operation is started. When the operation is complete, the token passes out of the Basic Operation node. It is by this means that a control graph exerts control over a computation. As indicated in figure 1, the graphical representation of a Basic Operation node is the same as that of a Subgraph node. This is convenient as the effect of both types of node is to bring about an operation that is defined elsewhere, and wait for that operation to finish.
The function of the node types described above is clarified by means of an example. Figure 2 shows a section of a program written in Occam, together with its equivalent control graph. Note that the PAR construct is represented by means of Fork and Join nodes whilst the IF construct is represented by means of Predicate and Either nodes. The assignments are treated as basic operations, to be carried out by the MIMD processors. For the sake of clarity, this example has been kept simple. It will be appreciated that this set of node types allows the representation of much more complex programs.
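The mapping from language constructs to node types can be illustrated with a toy translator: a PAR construct becomes a Fork/Join pair and an IF construct becomes a Predicate/Either pair, as in Figure 2. The nested-list encoding of programs and graphs, and the name `translate`, are assumptions of this sketch only.

```python
# Toy illustration of the mapping described in the text: an
# Occam-style PAR becomes Fork ... Join, an IF becomes
# Predicate ... Either, and an assignment becomes a Basic Operation.

def translate(construct):
    kind, *parts = construct
    if kind == "PAR":
        return ["Fork"] + [translate(p) for p in parts] + ["Join"]
    if kind == "IF":
        return ["Predicate"] + [translate(p) for p in parts] + ["Either"]
    if kind == "ASSIGN":
        return ["BasicOp", parts[0]]    # carried out by a MIMD processor
    raise ValueError(kind)

graph = translate(("PAR", ("ASSIGN", "a := 1"), ("ASSIGN", "b := 2")))
assert graph[0] == "Fork" and graph[-1] == "Join"
```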
The set of six node types described above constitutes the preferred composition of a control graph. This set of node types is both simple enough to allow a control graph processor to be constructed and powerful enough to allow the control flow and synchronisation requirements of most programming languages to be represented. However, the present invention does not rely on this particular set of node types and alternatives may be used in other embodiments of the invention.
In order to build a control graph processor, it is desirable to have a representation of control graphs suitable for storage in memory components. For this reason, each of the node types described above has an equivalent control graph instruction. These types of instruction constitute the instruction set recognised by the control graph processor. A graphical representation of a control graph (as in figure 2, for example) has the same meaning as, and is isomorphic to, the equivalent representation in terms of control graph instructions. The reason for introducing the two representations is that the former is easy for humans to comprehend and the latter is more conveniently stored and processed by a machine.
A MIMD computer according to the invention preferably has a control graph processor which comprises: at least one sequencer; a token heap, where tokens marking points of execution temporarily reside; a control graph dictionary, where control graph instructions are stored; and a tree store, where the dynamic state of control graph calls is maintained.
As indicated above, the present invention may be applied to MIMD computers executing subprograms whose control flow may be represented by sub-control graphs. Preferably, therefore, the control graph dictionary stores not only a top level control graph but also sub-control graphs, and the tree store holds not only the dynamic state of calls to the top level control graph but also the dynamic state of calls to sub-control graphs.
A sub-control graph has the same form as a top level control graph.
The sequencer is the unit which is directly involved in the execution of the control flow instructions obtained from the control graph dictionary. The sequencer fetches instructions from an area of memory holding control graph instructions and interprets the instructions so as to move a token through a control graph. It is preferred to provide a plurality of sequencers. A preferred method of operating a MIMD computer having a plurality of sequencers for executing control graph instructions comprises each sequencer: fetching a token pointer from an area of memory acting as a token heap, fetching control graph instructions from an area of memory acting as a control graph dictionary, pointed to by the token pointer, altering the token pointer in accordance with the execution of the control graph instructions, creating a new token pointer and storing it on the token heap whenever there is a fork in the control flow, and fetching a new token pointer from the token heap whenever the token pointer no longer points to a further instruction to be executed. A preferred method of operating a MIMD computer which is capable of executing sub-programs containing control graph instructions comprises:
1) storing information about the current state of execution of the control graph in an area of memory acting as a tree store, in the form of frames linked together as a tree structure corresponding to levels of control in the program, wherein (a) the information stored in a top level frame corresponding to the highest level of control indicates whether or not all the tokens which relate to different threads of control have reached the same Join instruction in the control graph, and (b) the information stored in a lower level frame corresponding to a lower level of control also comprises (i) information on the location in the control graph dictionary of the control flow or synchronisation instruction following the instruction calling the subprogram corresponding to that frame and
(ii) information on the location in the tree store of a frame associated with the program calling the subprogram corresponding to the said lower level frame, and 2) storing information, which information points to the frame which corresponds to the level of control currently active, in an area of memory acting as a tree frame pointer.
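The sequencer cycle described above (fetch a token pointer from the heap, advance it through control graph instructions, park any forked tokens back on the heap, and fetch again when the token is consumed) can be sketched as a simple loop. The dictionary encoding and all names below are illustrative assumptions, not part of the embodiment.

```python
from collections import deque

# Illustrative sequencer loop: fetch a token pointer, advance it
# through control graph instructions, park new tokens on the heap.

token_heap = deque()               # token pointers awaiting a sequencer
dictionary = {                     # control graph: location -> instruction
    "A": ("fork", "B", "C"),
    "B": ("halt",),
    "C": ("halt",),
}

def run_sequencer():
    """Advance tokens until the heap is empty; return consumed locations."""
    done = []
    while token_heap:
        loc = token_heap.popleft()         # fetch a token pointer
        while True:
            op = dictionary[loc]
            if op[0] == "fork":            # new token goes on the heap
                token_heap.append(op[1])
                loc = op[2]
            elif op[0] == "halt":          # token consumed; go idle
                done.append(loc)
                break
    return done

token_heap.append("A")                     # token at the entry point
assert sorted(run_sequencer()) == ["B", "C"]
```

With several sequencers, each would run this loop against the same shared heap, which is how the parallelism arises.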
The function of the units mentioned above can be best seen from the following description of one form of the invention. One design for a control graph processor is shown schematically in figure 3. The operation of this control graph processor and the function of its various components are now described in some detail. The MIMD computer comprises a control graph processor (1) and a plurality of data processing units (2). The control graph processor comprises a plurality of control graph sequencers (3) linked to the data processing units (2). The control graph sequencers are also linked to a control graph dictionary (4), a token heap (5) and a subgraph tree store (6).
The control graph dictionary (4) is an area of memory used to store the control graph, and its subgraphs, for the program being executed. These graphs are represented by collections of control graph instructions mentioned above. The graphs are, in general, stored as a hierarchical set of sub-control graphs, each of which may contain references to other sub-control graphs. Any individual sub-control graph may be referenced simultaneously from more than one point in a calling sub-control graph and by more than one sub-control graph.
The control graph sequencers (3) are responsible for exerting control over the data processing units (2). This is achieved by moving tokens through the control graph stored in the control graph dictionary (4) and invoking the data processing units (2) when tokens reach Basic Operation nodes. As the significant aspect of a token is its location within the control graph, it is not necessary to implement the token directly by means of a datum that moves within the dictionary - instead a token pointer is used to identify the location of the token in the control graph. A token pointer can be thought of as being equivalent to an instruction pointer in a conventional computer. However, there may be many token pointers in existence at any given moment. A control graph sequencer (3) (hereafter referred to as a 'sequencer') fetches instructions from the control graph dictionary (4) and interprets them in order to move a token through a graph. When a sequencer generates a new token (e.g. in response to a Fork instruction) it puts the new token pointer on the token heap (5). The token heap (5) is another area of memory, where token pointers reside when they are not being progressed through the graph by a sequencer. When a sequencer falls idle because its token has been consumed (e.g. as a result of a Join instruction) or because it has been reset, it fetches another token pointer from the token heap.
The tree store (6) is a further area of memory which is used to store the current state of execution of the control graph processor. As mentioned earlier, it may happen that many tokens are traversing the same sub-control graph at any given moment. When a sub-control graph is being processed, some record must be kept of the point in the calling graph that the token must return to upon completion of the sub-control graph. In a conventional single-threaded environment, this is achieved by storing a return address on a stack. However, to allow for multiple threads of control the required data structure is in the form of a tree, and it is this tree structure that is stored in the tree store. Each token pointer has an associated tree-frame pointer that is equivalent to a stack frame pointer in a conventional computer.
The tree is also used to note the arrival of the first token at a Join node. The first token (of a pair) to arrive at a Join node is not allowed to proceed any further in the graph. When the second token arrives, it can proceed to the next node in the control graph. The tree is a convenient place to note the arrival of the first token because it is easily accessed by the sequencer that is processing the second token.
A tree is composed of a set of linked frames, each of which has the following structure:
[ return-address | parent-address | join-bits ]
where:
"return-address" is the location in the control graph dictionary of the control graph instruction following the Subgraph instruction in the calling control graph;
"parent-address" is the location in the tree of the frame associated with the calling subgraph, and
"join-bits" is a pair of bits that indicate the arrival of a token at one or other of the input branches to a Join instruction - there are as many pairs of bits as there are Join nodes in the associated sub-control graph.
At the outer level of the control graph, return-address and parent-address have null values, which is sufficient to indicate that the frame is in fact a root frame.
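Under these definitions, a tree frame can be sketched as a small record. The Python class below is an illustrative model only; the class, field and method names are assumptions of the sketch.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TreeFrame:
    # location in the control graph dictionary to resume at on return
    return_address: Optional[str] = None
    # location in the tree of the calling subgraph's frame
    parent_address: Optional["TreeFrame"] = None
    # one (left, right) pair of bits per Join node in this subgraph
    join_bits: list = field(default_factory=list)

    def is_root(self) -> bool:
        # at the outer level, return-address and parent-address are null
        return self.return_address is None and self.parent_address is None

root = TreeFrame(join_bits=[[False, False]])
assert root.is_root()
child = TreeFrame(return_address="after_call", parent_address=root)
assert not child.is_root()
```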
Tokens are now considered in more detail. As a result of Fork instructions, new tokens are created from time to time. As extra tokens are created, the token pointers are placed on the token heap while they are waiting for a sequencer to become available. There can be many sequencers that are simultaneously processing different tokens in the graph; each of these can deposit token pointers on the heap and each can remove a token pointer from the heap to process it. Thus the execution of the graph can be started by placing, on the token heap, a token pointer which locates a token at the entry point of the graph.
The token records each have the following structure, both while being processed by a sequencer and when residing on the token heap: a token location, identifying the point that the token has reached in the control graph, and a tree-frame-address, identifying the associated frame in the tree store. As far as control flow is concerned, the only action required by an Either instruction is an unconditional modification of the token location. For this reason, the Either node need not be explicitly represented in other embodiments. The branches that merge at an Either node could instead be merged at the succeeding node in the control graph.
Since the Fork node creates two tokens where formerly there was one, and a sequencer can only process one token at a time, the sequencer executes the fork by putting one of the token pointers on the token heap and continuing to execute the other. The Fork node is represented by a Fork instruction with the format:
[ Fork | left-address | right-address ]
which the sequencer processes as:
build a new token on the token heap with the left-address and the current tree-frame-address
token location := right-address
The choice of putting the left-address token (as opposed to the right-address token) on the heap is arbitrary. Note that another sequencer may be available to process the token that has been put on the heap.
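This fork step can be expressed as a short function. The sketch below is illustrative only; the function and parameter names are assumptions, and token pointers are modelled as (location, tree-frame-address) pairs.

```python
# Sketch of a sequencer executing a Fork instruction: one of the two
# new token pointers is parked on the heap, the other is kept and
# progressed by the same sequencer.

def execute_fork(left_address, right_address, tree_frame_address,
                 token_heap):
    # build a new token on the heap with the left-address and the
    # current tree-frame-address ...
    token_heap.append((left_address, tree_frame_address))
    # ... and continue executing the right-address token ourselves
    return (right_address, tree_frame_address)

heap = []
current = execute_fork("L1", "R1", "frame0", heap)
assert current == ("R1", "frame0")
assert heap == [("L1", "frame0")]
```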
The first token that arrives at a Join node needs to be stopped and its arrival noted. This arrival is recorded in the tree by setting a join-bit, as mentioned above. When the second token arrives at the join, the join-bit for the first is cleared and a single token is progressed. In this embodiment, the sequencer checks that the two tokens arrive from the two different routes meeting at the Join node. For this reason, the Join node is represented by a pair of Join instructions having the form:
[ Left-join | address ]
[ Right-join | address ]
The Left-join instruction operates as:
if right-join-bit is set then
    reset right-join-bit
    token location := address
else if left-join-bit is set then
    error
else
    set left-join-bit
    fetch another token from the token heap
and vice-versa for the Right-join instruction. If no check is made of the fact that the two tokens should arrive from two routes, then the two join bits can be replaced with a single bit and the pair of instructions can be replaced with a single Join instruction.
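The Left-join logic can be sketched as follows, with the pair of join bits modelled as a two-element list held in the current tree frame. All names are assumptions of this illustrative sketch; the Right-join case is symmetric.

```python
# Sketch of the Left-join instruction: join state is a pair of bits
# [left_bit, right_bit] held in the current tree frame.

def left_join(bits, address):
    """Returns the next token location, or None when this (first)
    token must be held up and the sequencer fetches another token."""
    left, right = bits
    if right:                 # partner already arrived: proceed
        bits[1] = False       # clear its join-bit
        return address
    if left:                  # two arrivals on the same branch
        raise RuntimeError("token arrived twice on left branch")
    bits[0] = True            # note first arrival; token is consumed
    return None

bits = [False, False]
assert left_join(bits, "next") is None      # first token is held
bits = [False, True]                        # right token already noted
assert left_join(bits, "next") == "next"    # second token proceeds
assert bits == [False, False]
```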
The Subgraph node is represented by a Subgraph instruction which causes a new frame to be created on the tree and the token to be passed into the sub-control graph. The format of the instruction is :
[ Subgraph | start-address ]
and operates as:
create a new frame on the tree, storing the return-address
token location := start-address
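The effect of the Subgraph instruction can be sketched as a function that builds a new tree frame recording the return address and redirects the token to the subgraph's entry point. The dictionary-based frame representation and all names here are assumptions of the sketch.

```python
# Sketch of a Subgraph instruction: push a new tree frame recording
# the return address, then move the token to the subgraph's entry.

def execute_subgraph(start_address, return_address, current_frame):
    new_frame = {
        "return_address": return_address,   # resume point in the caller
        "parent_address": current_frame,    # caller's frame in the tree
        "join_bits": [],                    # join state for this call
    }
    return start_address, new_frame         # new token location and frame

root = {"return_address": None, "parent_address": None, "join_bits": []}
loc, frame = execute_subgraph("sub_entry", "after_call", root)
assert loc == "sub_entry"
assert frame["return_address"] == "after_call"
assert frame["parent_address"] is root
```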
Preferably, only one copy of the sub-control graph for each subroutine is stored in the control graph dictionary. This graph may be executed several times, possibly simultaneously, with each execution having an associated tree-frame. Alternatively, a copy of the sub-control graph could be substituted for the Subgraph node at the time of compilation, thus avoiding the need for Subgraph instructions. However, such a scheme would be wasteful of control graph dictionary space and would prohibit arbitrarily recursive subroutines.
A special instruction is used to mark the exit point of a control graph, whether a top level control graph or a sub-control graph. The Terminal instruction has the form:
Terminal
and it operates as:
if the tree frame is the root frame (not a subgraph frame) then
    the program has finished
else
    token location := return-address from tree frame
    current tree frame := parent-address from tree frame
    dispose of tree frame
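The Terminal logic can be sketched correspondingly; frames are again modelled as dictionaries, which is an assumption of this sketch rather than part of the embodiment.

```python
# Sketch of the Terminal instruction: at the root frame the program
# ends; otherwise the token returns to the calling graph and the
# subgraph's tree frame is discarded.

def execute_terminal(frame):
    if frame["parent_address"] is None:     # root frame: finished
        return None, None
    # return to the calling graph, restoring the parent frame
    return frame["return_address"], frame["parent_address"]

root = {"return_address": None, "parent_address": None}
sub = {"return_address": "after_call", "parent_address": root}
loc, frame = execute_terminal(sub)
assert (loc, frame) == ("after_call", root)
assert execute_terminal(root) == (None, None)
```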
The Basic Operation node is the means whereby data processing operations are invoked. The format is :
[ Basic Operation | type | parameter-address | address ]
and operates as:
pass type and parameter-address to the data processing unit and activate it
wait for the unit to finish
token location := address
In this embodiment, the sequencer waits for the operation to complete and then progresses the same token. In other embodiments, the sequencer could fetch another token from the token heap and process it while the data processing unit is performing the basic operation for the first token.
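The Basic Operation step in this embodiment (activate the unit, wait for it to finish, progress the same token) can be sketched as follows, with a stand-in function taking the place of a real data processing unit. All names are assumptions of the sketch.

```python
# Sketch of a Basic Operation: the sequencer hands the operation to a
# data processing unit, waits for completion, then moves the token on.

def execute_basic_operation(op_type, parameter_address, next_address,
                            data_processing_unit):
    # activate the unit and (in this embodiment) wait for it to finish
    data_processing_unit(op_type, parameter_address)
    return next_address                    # token location := address

log = []
def fake_unit(op_type, parameter_address):   # stand-in for a real DPU
    log.append((op_type, parameter_address))

assert execute_basic_operation("sese", 0x40, "next", fake_unit) == "next"
assert log == [("sese", 0x40)]
```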
In this embodiment, each data processing unit (2) comprises an instruction moving means and a data processor, together with an instruction memory and a data memory. The instruction moving means accepts the basic operation command from the sequencer and sends a stream of instructions (obtained from the instruction memory) to the data processor in accordance with the parameter-address information. The stream of instructions constitutes a single-entry single-exit (SESE) block which has been generated by the compiler. When the SESE block has been executed by the processor, the instruction moving means signals back to the sequencer to indicate that the basic operation has finished.
The control graph processor may be constructed from a conventional microprocessor, acting as a control graph sequencer, and a single physical memory, holding the control graph dictionary, token heap and tree store. However, preferably, the sequencer is a special purpose processor which directly executes control graphs (rather than interpreting them) and the three storage areas are provided by separate physical memory units. The MIMD processors may be microprocessors, each of which executes an instruction stream generated by an associated instruction moving means. Each instruction moving means may be constructed from a DMA (direct memory access) device. The use of an instruction moving means of this nature is not an important feature and other embodiments may use other means to generate the instruction streams.

Claims

1. A MIMD computer comprising: a plurality of data processors, each capable of executing a sequential stream of at least one instruction, each stream having a single entry point and a single exit point; at least one control graph processor for
(a) issuing streams of instructions to said data processors, and
(b) causing synchronisation between some or all of said data processors; in accordance with control graph instructions executed by the control graph processor.
2. A MIMD computer according to claim 1 comprising
(a) a control graph memory for storing control graph instructions and
(b) a data memory for storing information for the data processors.
3. A MIMD computer according to either of the preceding claims wherein the control graph processor recognises Predicate, Either,
Fork, Join, Subgraph and Basic Operation instructions.
4. A MIMD computer according to any of the preceding claims wherein the control graph processor consists of: at least one sequencer; a token heap, for temporarily storing tokens marking points of execution; a control graph dictionary, for storing control graph instructions; a tree store, for storing the dynamic state of subgraph calls.
5. A method of operating a MIMD computer for executing a program with a plurality of threads of control which comprises:
(a) executing independent streams of instructions on a plurality of first processors, each stream consisting of blocks, each of said blocks consisting of a sequence of instructions, and each block having a single entry point and a single exit point;
(b) issuing the blocks of instructions to the first processors in accordance with control graph instructions executed by a second processor.
6. A method of operating a MIMD computer according to claim 5 wherein the second processor comprises a plurality of sequencers for executing control graph instructions wherein each sequencer fetches instructions from an area of memory holding control graph instructions, and interprets the instructions so as to move a token through a control graph.
7. A method of operating a MIMD computer according to claim 6 having a plurality of sequencers for executing control graph instructions which comprises each sequencer: fetching a token pointer from an area of memory acting as a token heap, fetching control graph instructions from an area of memory acting as a control graph dictionary, pointed to by the token pointer, altering the token pointer in accordance with the execution of the control graph instructions, creating a new token pointer and storing it on the token heap whenever there is a fork in the control flow, and fetching a new token pointer from the token heap whenever the token pointer no longer points to a further instruction to be executed.
8. A method of operating a MIMD computer according to either of claims 6 or 7 which computer is capable of executing subprograms, which subprograms have control graph instructions to be executed by the second processor comprising 1) storing information about the current state of execution of the control graph in an area of memory acting as a tree store in the form of frames linked together as a tree structure, corresponding to levels of control in the control flow program and subprogram wherein (a) the information stored in a top level frame corresponding to the highest level of control indicates whether or not all the tokens which relate to different threads of control have reached the same Join instruction in the control graph, and (b) the information stored in a lower level frame corresponding to a lower level of control also comprises
(i) information on the location in the control graph dictionary of the control flow or synchronisation instruction following the instruction calling the subprogram corresponding to that frame and (ii) information on the location in the tree of a frame associated with the program calling the subprogram corresponding to the said lower level frame, and
2) storing information, which information points to the frame which corresponds to the level of control currently active, in an area of memory acting as a tree frame pointer.
PCT/GB1988/000594 1987-07-25 1988-07-21 Mimd computer system WO1989001203A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB8717689 1987-07-25
GB878717689A GB8717689D0 (en) 1987-07-25 1987-07-25 Computers

Publications (1)

Publication Number Publication Date
WO1989001203A1 true WO1989001203A1 (en) 1989-02-09

Family

ID=10621327

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1988/000594 WO1989001203A1 (en) 1987-07-25 1988-07-21 Mimd computer system

Country Status (2)

Country Link
GB (1) GB8717689D0 (en)
WO (1) WO1989001203A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190303153A1 (en) * 2018-04-03 2019-10-03 Intel Corporation Apparatus, methods, and systems for unstructured data flow in a configurable spatial accelerator
US10558575B2 (en) 2016-12-30 2020-02-11 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10565134B2 (en) 2017-12-30 2020-02-18 Intel Corporation Apparatus, methods, and systems for multicast in a configurable spatial accelerator
US10564980B2 (en) 2018-04-03 2020-02-18 Intel Corporation Apparatus, methods, and systems for conditional queues in a configurable spatial accelerator
US10572376B2 (en) 2016-12-30 2020-02-25 Intel Corporation Memory ordering in acceleration hardware
US10678724B1 (en) 2018-12-29 2020-06-09 Intel Corporation Apparatuses, methods, and systems for in-network storage in a configurable spatial accelerator
US10817291B2 (en) 2019-03-30 2020-10-27 Intel Corporation Apparatuses, methods, and systems for swizzle operations in a configurable spatial accelerator
US10853276B2 (en) 2013-09-26 2020-12-01 Intel Corporation Executing distributed memory operations using processing elements connected by distributed channels
US10891240B2 (en) 2018-06-30 2021-01-12 Intel Corporation Apparatus, methods, and systems for low latency communication in a configurable spatial accelerator
US10915471B2 (en) 2019-03-30 2021-02-09 Intel Corporation Apparatuses, methods, and systems for memory interface circuit allocation in a configurable spatial accelerator
US10942737B2 (en) 2011-12-29 2021-03-09 Intel Corporation Method, device and system for control signalling in a data path module of a data stream processing engine
US10965536B2 (en) 2019-03-30 2021-03-30 Intel Corporation Methods and apparatus to insert buffers in a dataflow graph
US11029927B2 (en) 2019-03-30 2021-06-08 Intel Corporation Methods and apparatus to detect and annotate backedges in a dataflow graph
US11037050B2 (en) 2019-06-29 2021-06-15 Intel Corporation Apparatuses, methods, and systems for memory interface circuit arbitration in a configurable spatial accelerator
US11086816B2 (en) 2017-09-28 2021-08-10 Intel Corporation Processors, methods, and systems for debugging a configurable spatial accelerator
US11200186B2 (en) 2018-06-30 2021-12-14 Intel Corporation Apparatuses, methods, and systems for operations in a configurable spatial accelerator
US11907713B2 (en) 2019-12-28 2024-02-20 Intel Corporation Apparatuses, methods, and systems for fused operations using sign modification in a processing element of a configurable spatial accelerator

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3614745A (en) * 1969-09-15 1971-10-19 Ibm Apparatus and method in a multiple operand stream computing system for identifying the specification of multitasks situations and controlling the execution thereof
US4229790A (en) * 1978-10-16 1980-10-21 Denelcor, Inc. Concurrent task and instruction processor and method
EP0118781A2 (en) * 1983-02-10 1984-09-19 Masahiro Sowa Control flow parallel computer system
US4514807A (en) * 1980-05-21 1985-04-30 Tatsuo Nogi Parallel computer
EP0144779A2 (en) * 1983-11-07 1985-06-19 Masahiro Sowa Parallel processing computer
US4636948A (en) * 1985-01-30 1987-01-13 International Business Machines Corporation Method for controlling execution of application programs written in high level program language
EP0231594A2 (en) * 1986-01-22 1987-08-12 Mts Systems Corporation Interactive multilevel hierarchical data flow programming system
EP0244928A1 (en) * 1986-05-01 1987-11-11 The British Petroleum Company p.l.c. Improvements relating to control flow in computers

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3614745A (en) * 1969-09-15 1971-10-19 Ibm Apparatus and method in a multiple operand stream computing system for identifying the specification of multitasks situations and controlling the execution thereof
US4229790A (en) * 1978-10-16 1980-10-21 Denelcor, Inc. Concurrent task and instruction processor and method
US4514807A (en) * 1980-05-21 1985-04-30 Tatsuo Nogi Parallel computer
EP0118781A2 (en) * 1983-02-10 1984-09-19 Masahiro Sowa Control flow parallel computer system
EP0144779A2 (en) * 1983-11-07 1985-06-19 Masahiro Sowa Parallel processing computer
US4636948A (en) * 1985-01-30 1987-01-13 International Business Machines Corporation Method for controlling execution of application programs written in high level program language
EP0231594A2 (en) * 1986-01-22 1987-08-12 Mts Systems Corporation Interactive multilevel hierarchical data flow programming system
EP0244928A1 (en) * 1986-05-01 1987-11-11 The British Petroleum Company p.l.c. Improvements relating to control flow in computers

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PROCEEDINGS OF THE IEEE, Vol. 72, No. 1, January 1984, New York (US), HARRY F. JORDAN, "Experience with Pipelined Multiple Instruction Streams", p. 113-123. *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10942737B2 (en) 2011-12-29 2021-03-09 Intel Corporation Method, device and system for control signalling in a data path module of a data stream processing engine
US10853276B2 (en) 2013-09-26 2020-12-01 Intel Corporation Executing distributed memory operations using processing elements connected by distributed channels
US10558575B2 (en) 2016-12-30 2020-02-11 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10572376B2 (en) 2016-12-30 2020-02-25 Intel Corporation Memory ordering in acceleration hardware
US11086816B2 (en) 2017-09-28 2021-08-10 Intel Corporation Processors, methods, and systems for debugging a configurable spatial accelerator
US10565134B2 (en) 2017-12-30 2020-02-18 Intel Corporation Apparatus, methods, and systems for multicast in a configurable spatial accelerator
US10564980B2 (en) 2018-04-03 2020-02-18 Intel Corporation Apparatus, methods, and systems for conditional queues in a configurable spatial accelerator
US11307873B2 (en) * 2018-04-03 2022-04-19 Intel Corporation Apparatus, methods, and systems for unstructured data flow in a configurable spatial accelerator with predicate propagation and merging
US20190303153A1 (en) * 2018-04-03 2019-10-03 Intel Corporation Apparatus, methods, and systems for unstructured data flow in a configurable spatial accelerator
US11200186B2 (en) 2018-06-30 2021-12-14 Intel Corporation Apparatuses, methods, and systems for operations in a configurable spatial accelerator
US10891240B2 (en) 2018-06-30 2021-01-12 Intel Corporation Apparatus, methods, and systems for low latency communication in a configurable spatial accelerator
US11593295B2 (en) 2018-06-30 2023-02-28 Intel Corporation Apparatuses, methods, and systems for operations in a configurable spatial accelerator
US10678724B1 (en) 2018-12-29 2020-06-09 Intel Corporation Apparatuses, methods, and systems for in-network storage in a configurable spatial accelerator
US10915471B2 (en) 2019-03-30 2021-02-09 Intel Corporation Apparatuses, methods, and systems for memory interface circuit allocation in a configurable spatial accelerator
US10965536B2 (en) 2019-03-30 2021-03-30 Intel Corporation Methods and apparatus to insert buffers in a dataflow graph
US11029927B2 (en) 2019-03-30 2021-06-08 Intel Corporation Methods and apparatus to detect and annotate backedges in a dataflow graph
US10817291B2 (en) 2019-03-30 2020-10-27 Intel Corporation Apparatuses, methods, and systems for swizzle operations in a configurable spatial accelerator
US11693633B2 (en) 2019-03-30 2023-07-04 Intel Corporation Methods and apparatus to detect and annotate backedges in a dataflow graph
US11037050B2 (en) 2019-06-29 2021-06-15 Intel Corporation Apparatuses, methods, and systems for memory interface circuit arbitration in a configurable spatial accelerator
US11907713B2 (en) 2019-12-28 2024-02-20 Intel Corporation Apparatuses, methods, and systems for fused operations using sign modification in a processing element of a configurable spatial accelerator

Also Published As

Publication number Publication date
GB8717689D0 (en) 1987-09-03

Similar Documents

Publication Publication Date Title
Rumbaugh A data flow multiprocessor
WO1989001203A1 (en) Mimd computer system
CA1159151A (en) Cellular network processors
Aiken et al. A development environment for horizontal microcode
JP3461704B2 (en) Instruction processing system and computer using condition codes
Espasa et al. Decoupled vector architectures
Schauser et al. Compiler-controlled multithreading for lenient parallel languages
WO1990014629A2 (en) Parallel multithreaded data processing system
JPH04505818A (en) Parallel multi-thread data processing system
Theobald et al. Overview of the Threaded-C language
Bic A process-oriented model for efficient execution of dataflow programs
Halstead Jr An assessment of Multilisp: Lessons from experience
Anantharaman et al. A hardware accelerator for speech recognition algorithms
Dietz Common Subexpression Induction
Romein Multigame: an environment for distributed game-tree search
Burnley Architecture for realtime VME systems
Tremblay et al. Threaded-C language reference manual (release 2.0)
Gilad et al. O-structures: Semantics for versioned memory
Nicolau et al. ROPE: a statically scheduled supercomputer architecture
Dai et al. A basic architecture supporting LGDG computation
EP0244928A1 (en) Improvements relating to control flow in computers
JPH0784797A (en) Method and device for registering source code row number to load module
JPH04287121A (en) Tuple space system
Maurer Mapping the Data Flow Model of Computation into an Enhanced Von Neumann Processor.
JP3240647B2 (en) Computer language structured processing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE FR GB IT LU NL SE