WO1994010638A1 - Scalable dimensionless array - Google Patents


Info

Publication number
WO1994010638A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
array
processing element
matrix
operations
Prior art date
Application number
PCT/AU1993/000573
Other languages
French (fr)
Inventor
Warren Marwood
Allen Patrick Clarke
Robert John Clarke
Original Assignee
The Commonwealth Of Australia
Priority date
Filing date
Publication date
Application filed by The Commonwealth Of Australia
Priority to AU54126/94A
Priority to CA2148719A1
Publication of WO1994010638A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/76 Architectures of general purpose stored program computers
    • G06F 15/80 Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F 15/8046 Systolic arrays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/30003 Arrangements for executing specific machine instructions
    • G06F 9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F 9/3001 Arithmetic instructions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/30003 Arrangements for executing specific machine instructions
    • G06F 9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F 9/30036 Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F 9/3877 Concurrent instruction execution, e.g. pipeline, look ahead using a slave processor, e.g. coprocessor

Definitions

  • This invention relates to the general field of digital computing and in particular to a scalable array of globally clocked multiply/accumulate floating point processing elements.
  • a processor may:
  • the processing elements are connected only to their nearest neighbours, and so the problems of routing, fan-out and clock skew are minimised. Data and results move synchronously through the array of elements.
  • the name applied to this approach to computation with arrays of identical processing elements is systolic.
  • A, B and C are matrices of a size equal to the order of the array.
  • Conformal matrices can be multiplied with this array; under certain restrictions the results can be re-circulated through the array, and larger order matrix products can be computed if the task is partitioned.
  • Whitehouse and Bromley ['Signal Processing Applications for Systolic Arrays', Record of the 14th Asilomar Conference on Circuits, Systems and Computers, IEEE No. 80CH 1625-3, 1980] subsequently demonstrated the use of inner-product-accumulate processing elements for the same matrix product algorithm. In this case, the results are formed in-place, and do not move between processing elements.
  • the primary advantage of systolic processing over conventional linear processing is speed.
  • the systolic architecture uses the fact that for matrix multiplication, the same operand data may be reused many times in the computation of cross-product terms, thereby making better use of the available data bandwidth.
  • the improved performance comes at the cost of flexibility.
  • Prior art devices have been designed for very specific applications such as Fast Fourier Transform computations or video signal processing.
  • An advantage of the present invention is the ability of the same device to be used for a wide variety of matrix computations without the need for hardware reconfiguration.
  • the device is particularly useful when implemented as an architectural enhancement to a computer in which case the processing power of the computer is considerably enhanced.
  • a processing element suitable for use in a scalable array processor comprising : at least one input register means adapted to receive and process serial operands in the form of ⁇ instruction, data ⁇ 2-tuples; a memory means adapted to store temporary results and constants; a computing means adapted to perform logical operations; an output register means adapted to output results from the processing element; a control and sequencing means adapted to control the operation of the processing element; a plurality of data buses adapted to provide communication between the plurality of means.
  • the computing logical means consists of a shifter/normaliser means adapted to shift/normalise data and an arithmetic means adapted to perform logical operations such as but not limited to addition, subtraction and partial multiplication operations.
  • processing element is adapted to perform floating point multiply, floating point add and floating point multiply-accumulate which is used for inner product accumulate operations.
  • the input register is adapted to output a copy of the input operand bit with a one clock period delay.
  • In preference there are N input registers and the processing element is suitable for use in an N-dimensional scalable array processor.
  • N can be any positive integer.
  • the input registers convert the input serial data to an internal representation comprising separate sign, fraction and exponent.
  • the memory means consists of read only memory for storage of constants and a read/write memory for storage of temporary results.
  • the shifter/normalizer means is adapted to perform binary weighted barrel shifting wherein the shifter function is determined by a control input to the shifter/normalizer and the normalizer function effects a data dependent shift of up to 15 bits within a single clock cycle.
  • the arithmetic means implements logical operations such as but not limited to floating point addition, multiplication and multiply-accumulate algorithms using a parallel microcoded data path.
  • the arithmetic means comprises a logical unit such as but not limited to an input-multiplexer, an adder, an output shifter, flags unit and a control unit.
  • the output register can be loaded in parts to enable the conversion from the internal representation to IEEE 754 floating point format.
  • the output register can be parallel loaded from the arithmetic means or can be serially loaded from a serial source. The register is unloaded serially.
  • control and sequencing means includes timing and control logic, a microcode ROM, address decoders, branch control logic, flags logic, instruction register, instruction decoder and a program counter.
  • processing element has an accumulator comparison means.
  • the invention consists of a scalable array processor chip comprising an array of processing elements each said element including : at least one input register means adapted to receive and process serial operands in the form of ⁇ instruction, data ⁇ 2-tuples; a memory means adapted to store temporary results and constants; a shifter/normalizer means adapted to shift or normalize data; an arithmetic means adapted to perform logical operations such as but not limited to addition, subtraction and partial multiplication operations; an output register means adapted to output results from the processing element; a control and sequencing means adapted to control the operation of the processing element; a plurality of data buses adapted to provide communication between the plurality of means; and wherein each processing element has means for communication only with adjacent elements.
  • the array of elements comprise an interconnected lattice of at least one dimension.
  • the array of elements comprise an interconnected lattice of at least two dimensions.
  • the scalable array processor chip is adapted to perform at least the functions of computing the product of two or more matrices, computing the element-wise product of two or more matrices, computing the sum of two or more matrices, permuting the rows and columns of a matrix and transposing a matrix.
  • a computing apparatus comprising a host processor, at least one scalable array processor chip and a plurality of data formatters wherein the scalable array processor chip(s) and plurality of data formatters are adapted to perform matrix operations otherwise performed by the host processor.
  • the apparatus includes a memory cache adapted to store operand data and temporary or intermediate results.
  • a method of performing matrix operations comprising the steps of : (a) providing a plurality of processing elements in the form of an array adapted to perform systolic processing operations; (b) receiving operand matrix data for processing from a host or data source; (c) formatting the operand matrix data in a data formatter by adding an instruction to form an ⁇ instruction, data ⁇ 2-tuple;
  • the data formatter is of the type the subject of co-pending patent application number PL5696 entitled "DATA FORMATTER".
  • the sets of 2-tuples are known as operand wavefronts.
  • an indication is provided to the data formatter that the unload operation is occurring, allowing synchronization of data transfers to and from the processing array.
  • In step (g) the results are transmitted in sets containing one result from each of the processing elements at the left edge of the array.
  • Such a set is known as a result wavefront.
  • FIG. 1 is a schematic diagram of one processing element;
  • FIG. 2 is a schematic diagram of one embodiment of a systolic array processing element chip utilising the elements of FIG. 1;
  • FIG. 3 is a schematic diagram of a first embodiment of a processing apparatus utilizing the chip of FIG. 2;
  • FIG. 4 is a schematic diagram of an inner-product-step processor;
  • FIG. 5 is a schematic diagram of an inner-product-accumulate processor;
  • FIG. 6 is a schematic example of the entry of operand wavefronts to a processor array;
  • FIG. 7 is a schematic example of the unloading of result wavefronts from a processor array;
  • FIG. 8 is a schematic example of the entry of element-wise operand wavefronts to a processor array;
  • FIG. 9 is a schematic example of the unloading of element-wise result wavefronts from a processor array; and
  • FIG. 10 is a schematic diagram of a second embodiment of a processing apparatus.
  • each processing element consists of a number of input registers, a memory consisting of a register file and a constant ROM, a shifter/normalizer, an arithmetic unit, output registers and a control and sequencing unit.
  • the datapath elements (input registers, memory, shifter/normalizer, arithmetic unit and output registers) are interconnected by three parallel data buses.
  • serial interfaces are provided to and from each of the input registers and the output register to allow communication between processing elements and to facilitate construction of arbitrarily large arrays of processing elements.
  • An array computes 2N² floating point operations (1 multiply and 1 accumulate for each processing element in the array) in the time taken to fetch 2N operands. To ensure that the computation is bandwidth limited, each processing element needs only compute at a rate of one floating point operation every N data fetches. This fact leads to the conclusion that very cheap processing elements can be used in the array.
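The arithmetic of the bandwidth argument above can be sketched in a few lines. The function name and the 10 MHz fetch rate in the test are illustrative assumptions; only the 2N² flops per 2N fetches relationship comes from the text.

```python
def required_pe_rate(n, fetch_rate_hz):
    """For an n x n array: 2*n*n flops are performed per 2*n operand
    fetches, so each PE only needs fetch_rate/n flops per second for
    the computation to stay bandwidth limited."""
    flops_per_wavefront = 2 * n * n      # 1 multiply + 1 accumulate per PE
    fetches_per_wavefront = 2 * n        # one row of A and one column of B
    array_rate = fetch_rate_hz * flops_per_wavefront / fetches_per_wavefront
    per_pe_rate = array_rate / (n * n)   # equals fetch_rate_hz / n
    return array_rate, per_pe_rate
```

For example, a hypothetical 20 x 20 array fed at 10 M operands/s sustains 200 Mflop/s while each PE computes at only 0.5 Mflop/s, which is why slow, cheap PEs suffice.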
  • A schematic of the processing element is shown in FIG. 1.
  • the choice of a simple microcoded datapath and sequential algorithms to perform the floating point operations means that the size of the processing element can be kept small. Many such processing elements can therefore be placed on a single chip.
  • the fact that processing elements implemented in this manner are slower than those built using fully parallel algorithms and architectures becomes insignificant as the size of the array is increased. This is because the processing performance achieved is limited by the data bandwidth (and array size), not by the computation rate for a single processing element.
  • the input registers receive serial operands in the form of {instruction, data} 2-tuples from adjacent processing elements to the left or top or, in the case of processing elements at the top or left boundary of the array, from operand data formatters. They then separate the instruction and reform the data to an internal representation consisting of separate sign, exponent and fraction words. This data is available to the processing element via the X and Y internal data buses.
  • the input registers also compute the sign of the product of the two inputs, check for zero operand data and implement the Booth encoder used during multiplication operations.
  • the memory consists of a Register File and a Constant ROM:
  • the register file is a 5 word memory used to hold the product (both fraction and exponent), accumulator (both fraction and exponent) and temporary results.
  • the product and accumulator registers can be swapped under the control of microcode to facilitate efficient implementation of the pre-alignment operation i the floating point addition and accumulation algorithms.
  • the registers can be loaded from the R bus, and their contents can be read from either the X or Y buses.
  • the Constant ROM stores a number of constants that are used during the implementation of the floating point algorithms. These can be read via the X and Y operand data buses.
  • Shifter/Normalizer: Under microcode control, the shifter can either operate as a shifter (for pre-alignment of fractions before addition) or as a normalizer. When acting as a shifter, it performs right-shift operations on one operand datum (the X operand). The amount by which the datum is shifted is determined by a previously computed shift that is applied to the second input to the shifter (the Y operand). The shifter can shift 0 to 15 bits right within one clock cycle. When acting as a normalizer, the Shifter/Normalizer performs either a right shift by one bit, or a left shift by 0 to 15 bits within one cycle. In this case the shift is applied to the X operand input to the shifter and is independent of the Y operand input.
  • the value of the shift is data dependent.
  • a right-shift is performed if the value on the X input is the result of a computation which had overflowed (such as in the case of addition of two normalized numbers having the same exponent). Otherwise, a left-shift is performed.
  • When acting as a normalizer, the Shifter/Normalizer at the same time computes the offset (exponent offset) that must be applied to the exponent of the number being normalized in order to compensate for the shift that is applied. Shifting and normalization operations that require shifts of greater than 15 bits can be implemented by multiple passes through the shifter/normalizer.
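The normalizer behaviour described above can be sketched as follows. This is an illustrative software model, not the hardware: it assumes a positive mantissa word with the normalized position at bit 29 and a guard position at bit 30 (consistent with the internal format described later), and truncates the bit lost on a right shift.

```python
def normalize_step(mant, exp):
    """One normalizer pass: right-shift by 1 if the value overflowed past
    the normalized position (e.g. after adding two same-exponent normals),
    otherwise left-shift by up to 15 bits toward bit 29. The exponent
    offset compensates for the shift applied."""
    if mant >> 30:                       # overflow above the normalized bit
        return mant >> 1, exp + 1        # (lost LSB is truncated)
    shift = 0
    while shift < 15 and not (mant << shift) >> 29:
        shift += 1                       # data-dependent shift, <= 15/cycle
    return mant << shift, exp - shift

def normalized(mant):
    return mant >> 29 == 1               # bit 29 set, nothing above it

def normalize(mant, exp):
    """Shifts larger than 15 bits take multiple passes, as in the PE."""
    while mant and not normalized(mant):
        mant, exp = normalize_step(mant, exp)
    return mant, exp
```

Normalizing a mantissa of 1 takes two passes (15 bits, then 14 bits), illustrating the multi-pass case.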
  • the arithmetic unit consists of an input multiplexer, an adder, a result shifter and a flags unit. There are two parallel data inputs (X and Y) to the arithmetic unit and a single parallel data output (R).
  • the input multiplexer can be used to complement and/or left-shift the X operand under control of the Booth encoding logic contained in the input registers. This feature is used in the implementation of multiplication using a modified Booth algorithm.
  • the multiplexer can also be controlled directly by the processing element's microcode to facilitate the implementation of addition, subtraction and data- move operations.
  • the adder performs conventional two's complement addition.
  • the carry input to the adder can be controlled by either the Booth encoder logic or the processing element's microcode. Both addition and subtraction can be performed by the combination of input-multiplexer and adder.
  • the output shifter latches either the result of the computation or the result divided by 4. This feature is used during partial multiplication operations.
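The radix-4 (modified Booth) scheme behind these features can be sketched in software: the encoder recodes the multiplier into digits in {-2, -1, 0, 1, 2}, the multiplexer supplies 0, ±X or ±2X per digit, and the divide-by-4 in the output shifter corresponds to the 4^k weighting between successive digits. This is a behavioural sketch under those assumptions, not the PE's microcode.

```python
def booth_digits(y, n_bits=32):
    """Radix-4 modified Booth recoding: y -> digits d_k in {-2,-1,0,1,2}
    such that y == sum(d_k * 4**k). n_bits must be even."""
    table = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
             0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    u = y & ((1 << n_bits) - 1)              # two's-complement bit pattern
    bit = lambda i: (u >> i) & 1 if i >= 0 else 0
    return [table[(bit(2*k + 1) << 2) | (bit(2*k) << 1) | bit(2*k - 1)]
            for k in range(n_bits // 2)]

def booth_multiply(x, y, n_bits=32):
    """Iterative multiply: one partial product (0, +/-x or +/-2x, as the
    input multiplexer would select) is accumulated per digit."""
    acc = 0
    for k, d in enumerate(booth_digits(y, n_bits)):
        acc += (d * x) << (2 * k)            # partial product, weight 4**k
    return acc
```

Only 16 add/shift steps are needed per 32-bit multiply, which is why no dedicated hardware multiplier is required.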
  • the latched result remains valid until the next time the arithmetic unit is used.
  • the latched result can be written onto the result bus R.
  • a number of flags are set or cleared depending upon the result latched by the arithmetic unit's output shifter. These flags include the sign of the result, whether or not the result is zero, and whether or not the result is less than or equal to 15 (used to support multi-pass shifting during addition pre-alignment).
  • the output register module is used to communicate the results of computations back from the processing element toward the left boundary of an array of processing elements.
  • the output register can be parallel loaded from the arithmetic unit or can be serially loaded from a serial source (often the serial source is from another processing element's output register).
  • the output register is unloaded serially.
  • the output register is parallel loadable by the arithmetic unit in three parts: sign, exponent and fraction. This facilitates conversion from the internal data representation to IEEE 754 floating point format.
  • a flag is set to indicate that a register unload is in progress (UIP).
  • This module includes a microcode ROM, a program counter, branch control logic, flags logic, an instruction register, an instruction decoder, address decoders and timing and control logic.
  • This circuitry is used to sequence the processing element through its operations. Each clock cycle, the microcode ROM issues a microinstruction to the processing element's datapath units, and thereby controls the function and timing of the data operations being performed. Data and control flags fed to the branch control logic enable the processing element to perform data dependent operations required for implementation of the floating point algorithms. Fields of the instruction transmitted serially to the processing element as part of the {instruction, data} 2-tuple are also fed to the branch control logic and flags logic of the Control and Sequencing Unit.
  • the instructions specified in the ⁇ instruction, data ⁇ 2-tuple are distinct from the set of microinstructions implemented by the processing element.
  • the instructions specified in the ⁇ instruction, data ⁇ 2-tuple control the flow of execution of the processing element's microcode.
  • the internal data representation used by the processing element uses two 32-bit data words to represent each IEEE single precision number. One of the two words represents the mantissa in 2's complement form, normalized to bit 29. The second word represents the exponent using an exponent bias of 2^29. This format provides better resolution in the mantissa than IEEE single precision format, and the use of a large exponent field virtually guarantees that exponent overflow cannot occur.
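A conversion into this 2-word form might look as follows. The exact bit layout is an assumption (only "2's complement mantissa normalized to bit 29" and "exponent bias 2^29" come from the text), and the sketch handles normalized, nonzero inputs only.

```python
import struct

BIAS = 2 ** 29        # internal exponent bias stated in the text

def to_internal(x):
    """Sketch: unpack a normalized, nonzero IEEE single into a
    two's-complement mantissa word (normalized to bit 29) and a wide
    biased exponent word."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign, exp, frac = bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF
    mant = (frac | (1 << 23)) << (29 - 23)   # implicit 1 moved to bit 29
    if sign:
        mant = -mant & 0xFFFFFFFF            # two's-complement mantissa
    return mant, (exp - 127) + BIAS          # rebias into the wide field

def from_internal(mant, exp_word):
    """Inverse sketch, for checking the round trip."""
    signed = mant - (1 << 32) if mant >> 31 else mant
    return signed * 2.0 ** (exp_word - BIAS - 29)
```

The wide exponent word makes intermediate overflow effectively impossible, matching the claim above.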
  • in each processing element multiplication is facilitated by the inclusion of a modified Booth encoder and multiplexer.
  • the denormalisation and normalisation operations required by the floating point accumulation or addition algorithms are facilitated by the repeated application of the shifter circuit, which can shift up to 15 bits in a single cycle.
  • FIG. 2 shows a scalable array processing chip composed of a 5 x 4 rectangular array of single precision floating point processing elements which accept serial dataflow operands, and which perform a set of operations on those operands.
  • Each operand consists of a 5-bit instruction followed by an IEEE standard single precision number.
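The framing of such an operand can be sketched as a 37-bit word. The 5-bit-instruction-plus-IEEE-single structure comes from the text; the field ordering within the serial stream and the function names are illustrative assumptions.

```python
import struct

def pack_operand(instruction, value):
    """Sketch: form the {instruction, data} 2-tuple as a 37-bit integer,
    the 5-bit instruction ahead of the 32-bit IEEE 754 single."""
    assert 0 <= instruction < 32             # must fit in 5 bits
    data = struct.unpack('>I', struct.pack('>f', value))[0]
    return (instruction << 32) | data

def unpack_operand(tuple_bits):
    """Split the 2-tuple back into instruction and float, as the PE's
    input registers do on receipt."""
    instruction = tuple_bits >> 32
    data = tuple_bits & 0xFFFFFFFF
    return instruction, struct.unpack('>f', struct.pack('>I', data))[0]
```

A data formatter would serialise such words bit by bit into the boundary PEs, with the variable-length inter-operand gap the text mentions used for synchronisation.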
  • Each processing element is a microcoded ALU with 32-bit parallel datapath that includes dedicated hardware support for floating point multiplication and addition algorithms.
  • the array of processing elements is clocked synchronously.
  • the three bit- serial links provide communication between processing elements.
  • One link is provided for each of the two input X and Y operands and one for the output, or result operand R.
  • input data is transferred from left to right across the array, and output results are transmitted from right to left.
  • Chips can be cascaded arbitrarily in both X and Y directions.
  • the operation of the scalable array processing chip is described with reference to the system block diagram shown in FIG. 3.
  • the data interface provides communication between the scalable array chip and the host system.
  • the data formatter elements are described separately in a co-pending application number PL5696 entitled DATA FORMATTER.
  • each processing element consists of two orthogonal data transmission paths for X and Y operands, each consisting of a single one bit delay cell and a 32-bit data storage register.
  • the X operand path also includes a 5-bit instruction register. Data is input to the array as a sequence of {instruction, data} 2-tuples. These are split into separate instruction and data words on receipt by the input registers.
  • Each X data operand consists of a 5-bit instruction followed by a single 32-bit IEEE 754 standard floating point number. A variable length gap of several clock periods may be present between operands for I/O synchronisation.
  • the operand is transmitted in bit serial form into the processing element. When the entire ⁇ instruction, data ⁇ 2-tuple is held within the processing element, it is cross-loaded into parallel holding registers.
  • the instruction is decoded and used to control the execution of the floating point algorithms.
  • the data is converted by hardware into the internal extended format.
  • the internal format has both extended precision and extended dynamic range when compared wit the IEEE standard.
  • bit-serial data is bit-skewed on entry to adjacent processing elements on the array boundary. This skew is preserved between adjacent elements within the array by passing the data through the single-bit delay stage in each processing element before re-transmitting it to the next processing element.
  • serial data both minimises the I/O pin count at the array boundary and allows adjacent processing elements to both commence and conclude the computations with a time differential of only one bit period.
  • the advantage of the bit-skewing approach over a broadcast architecture is that there is no need to drive long buses with large buffers and thereby provides the capability for arbitrary expansion of the array.
  • Bit skewing has the advantage over word-skewing in that fewer wavefronts are required to complete a processing task.
  • the bit-skewed approach therefore results in the minimisation of job time.
  • the computation time is minimised for both a single job and a job stream.
  • an operand wavefront is issued to the array which causes the unloading of the results into the output registers of the processing elements.
  • Clocking of the scalable array processing chips is performed by a single phase 50% duty cycle clock from which all internal timing signals are generated.
  • the clock is buffered on entry to the chip and is distributed to each processing element. It is re-buffered within the processing element where it is used as a locally synchronous clock.
  • each processing element generates a second, synchronous clock of the same frequency but with a duty cycle determined by a self-timed circuit.
  • the secondary clock is used to provide timing information for bus precharging, data transfers and evaluation of execution units.
  • FIG. 4 shows schematically the inner-product-step process described by Kung and Leiserson. Data is clocked into each processing cell from the left and top edges while the results are clocked out from right to left.
  • an inner product accumulate algorithm is used in preference to the inner product step process common in much of the prior art.
  • the inner-product-accumulate process is depicted schematically in FIG. 5. Data is again clocked into the element from the left and top but in this case the result is formed in place.
  • An explicit unload phase is implemented to obtain the result after the computation is complete.
  • An advantage of the inner product accumulate algorithm over the inner product step approach is illustrated when matrix products are computed for matrix operands which are rectangular.
  • the inner product step process requires the recirculation of the result partial product matrix.
  • the inner product accumulate algorithm computes the result in-place, and incurs no hardware penalties, irrespective of the length of the inner products.
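The in-place formation described above can be modelled at word level. This is an illustrative sketch that abstracts away the bit-serial skew and timing; it shows why rectangular operands need no recirculation — each PE's accumulator simply sees one more wavefront per inner-product term.

```python
def systolic_matmul(A, B):
    """Word-level model of the inner-product-accumulate array (FIG. 5):
    rows of A enter from the left, columns of B from the top, and each
    C[i][j] is formed in place in the accumulator of PE(i, j). An
    explicit unload phase then reads the results out."""
    n, m, p = len(A), len(B), len(B[0])
    acc = [[0.0] * p for _ in range(n)]      # one accumulator per PE
    for k in range(m):                       # k-th operand wavefront
        for i in range(n):
            for j in range(p):
                # PE(i, j) sees A[i][k] moving right and B[k][j] moving down
                acc[i][j] += A[i][k] * B[k][j]
    return acc                               # unload phase: read in place
```

The inner dimension m (the inner-product length) only adds wavefronts, never hardware, matching the claim above.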
  • the sequence of operations performed by the processing elements is determined by the 5 bit instruction transmitted as part of the X operand.
  • the five instruction fields and their function are listed in the table below.
  • the default operation performed by the PE, i.e. when none of the fields of the instruction is asserted, is an inner-product operation implemented as a floating point multiply-accumulate, the input X and Y operands being multiplied and accumulated with the contents of the accumulator.
  • the accumulator contents is cleared before the computation commences. This generally occurs for the first wavefront of a matrix multiplication, and also when executing element-wise operations.
  • the accumulator is cleared before the computation is commenced, but after the ACTIVE flag is set (if the SDE field is set), and after the accumulator has been unloaded into the result register (if the LDR field is set).
  • an internal flag, ACTIVE (one per processing element), is set to indicate that this processing element is an active element. Only active elements are permitted to unload results during element-wise operations.
  • If the HAD field is asserted, the operation being performed is deemed to be an element-wise (Hadamard) operation. If this field is set, only those processing elements flagged as active elements (as determined by their ACTIVE flags) can unload their accumulator contents into their output register R.
  • the accumulator contents from the previous computation are converted back to IEEE format and are unloaded into the processing element's output register.
  • the processing element issues a flag (UIP) to indicate that the unload is in progress.
  • [Instruction-field table; recoverable entries: UIP — unload-in-progress flag; C = A • B — the default inner-product operation; SDE — set active element.]
  • the processing elements accept operand data and return results in IEEE standard format. Internally, an extended precision format is used for both the mantissa and exponent of the partial results.
  • s is the sign bit of the mantissa. +/− is represented as 0/1 respectively.
  • g is a guard bit used to avoid mantissa overflow during accumulation.
  • f is a fraction (mantissa) bit.
  • the mantissa is normalized: the most significant fraction bit is 1 (explicit). The '.' marks the position of the binary point (showing that the mantissa is normalized).
  • e is a bit of the exponent, which is held in biased form. The exponent bias is 2^29.
  • the result of the operation is the transpose of the matrix A. If an arbitrary orthogonal set of elements have their flags set, a permutation of the input matrix will be performed by this element-wise product.
  • the accumulator contents are converted from the internal format to an IEEE standard form.
  • Numbers outside the range that can be represented by the IEEE single precision format are truncated to zero (in the case of results with large negative exponents, including IEEE denormalized numbers) or limited to infinity (in the case of numbers with large positive exponents). In both cases, the sign of the zeros or infinities is retained (unless the result is a true zero, in which case positive zero is always returned).
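These unload conversion rules can be sketched as follows. The threshold values are the standard IEEE single precision limits; applying them with Python doubles is an illustrative assumption standing in for the hardware's internal-format comparison.

```python
import math

def clamp_to_ieee_single(value):
    """Sketch of the unload rules: results too small for IEEE single
    precision (including would-be denormals) truncate to zero, results
    too large limit to infinity, signs retained; a true zero is always
    returned as positive zero."""
    if value == 0.0:
        return 0.0                       # true zero -> positive zero
    sign = math.copysign(1.0, value)
    mag = abs(value)
    if mag >= 2.0 ** 128:                # beyond the largest single exponent
        return sign * math.inf
    if mag < 2.0 ** -126:                # denormal range -> flush to zero
        return sign * 0.0
    return value
```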
  • the IEEE representation of the result is loaded into a separate output register which is concatenated with other output registers in adjacent processing elements to form an output register chain.
  • the result is output in a serial form through this register chain.
  • Matrix algorithms which are elements of the set of primitive operators {multiplication, addition, element-wise (or Hadamard) multiplication, permutation} are performed directly by the processing array. Implementation of these operations for operands whose dimension exceeds the size of the array is possible by mathematically partitioning the operations into a set of operations which can be computed separately using the available array size.
  • recursive algorithms can be implemented which recirculate the output of the array back to its input. This can be a useful method to minimise memory bandwidth requirements in particular applications.
  • FIG. 6 shows the way in which conformal matrix operands are entered into the systolic array. Bit-skewing is indicated by the small offset between adjacent rows of A and columns of B.
  • the elements are obtained in the order shown in FIG. 7.
  • FIG. 8 shows the entry of conformal matrices to a 4 x 4 subarray of the chip for the purposes of element-wise addition or multiplication. Only those elements shown as • are used.
  • FIG. 9 shows the relationship between rows of data which are output from the array after an element-wise operation. Due to the word-length registers present in the output register chain, the data is skewed by one word-time plus one bit-time. The additional bit-time delay is caused by the bit skewing of the input operands.
  • the invention has been implemented in a system hosted by a Sun SPARCstation.
  • the matrix processor is interfaced to the Sun SPARCstation via the SBus.
  • This arrangement is convenient since it allows the SCAP hardware to operate using virtual addressing, with virtual to physical translation being performed by the SBus controller in the SPARCstation.
  • the host processor and the matrix processor therefore share the same data space, so both can interact with the matrix data directly.
  • This approach does however have its own disadvantages, the most critical being the fact that the data transfer rate across the SBus tends to be quite low due to the overheads of address translation.
  • the matrix processor also includes a cache memory subsystem.
  • the cache supports burst mode data transfers across the SBus on cache misses and can also be used to hold frequently used operand matrices (such as coefficient matrices in transform applications) and to store temporary or intermediate results.
  • a novel cache partitioning scheme has been implemented.
  • the technique allows the cache to be dynamically divided into a number of regions that are guaranteed not to interact, thereby ensuring that fetches for one matrix operand do not interfere with fetches for the other.
  • the data controllers determine how the cache is partitioned on a per-operand/result basis (it is also possible to assign a cache partition to the instruction streams) by issuing an 8-bit space address along with each address generated. Each bit of the space address can be set or cleared, or can take on the value of one of the generated address bits. In our system implementation, three bits of this space address are used to control non-cached accesses, temporary matrix accesses and temporary matrix initialization. Four bits are used to partition the cache into up to 16 independent regions.
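A software sketch of the space-address mechanism, under assumed bit assignments (the actual bit layout is implementation specific and not given here): each of the eight bits is either forced to a fixed value or copies one bit of the generated address, and four low bits select one of up to 16 cache regions.

```python
def space_address(addr, fixed_bits, addr_bit_map):
    """Form an 8-bit space address for a generated address `addr`.
    fixed_bits:   {space_bit: 0 or 1} bits forced by the data controller.
    addr_bit_map: {space_bit: addr_bit} bits copied from the address.
    Unassigned bits default to 0. (Illustrative names, not the real API.)"""
    space = 0
    for i in range(8):
        if i in fixed_bits:
            bit = fixed_bits[i]
        elif i in addr_bit_map:
            bit = (addr >> addr_bit_map[i]) & 1
        else:
            bit = 0
        space |= bit << i
    return space

def cache_region(space):
    """Four bits of the space address partition the cache into
    up to 16 independent regions (here assumed to be the low four)."""
    return space & 0xF
```

Because two operands assigned distinct region bits can never map to the same cache lines, their fetch streams are guaranteed not to interact.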
  • the two custom chips implemented during the development of this system are a processing element array chip and a data controller chip. Both chips were designed using a generic 1.2 micron double layer metal CMOS process rule-set and were retargeted for fabrication on a 1.0 micron process using a gate shrink.
  • the processing element array chips are full custom integrated circuits each containing an array of 4 rows by 5 columns of floating point processing elements. Because the overall computation rate is limited by the available data bandwidth, the speed of computation of the processing elements is not overly important. Therefore, the architecture has been designed to yield processing elements (PEs) that are physically small rather than particularly fast. Each complete floating point unit occupies only 2.7 sq mm.
  • the processing element does not include a dedicated hardware multiplier, but is implemented as a simple microprogrammed 32-bit datapath with hardware support to aid the floating point computations, as illustrated in FIG. 5.
  • the PE hardware incorporates a Booth encoder and multiplexer to facilitate multiplication using an iterative modified Booth algorithm, and also a shifter/normalizer that can be used for pre-addition alignment as well as post-addition normalization.
  • When used as a normalizer, the shifter has the ability to compute the amount by which the exponent must be adjusted during the same time that the normalization occurs. Computation of the floating point arithmetic operations (multiply/accumulate, multiply or add/subtract) is completed within 40 clock cycles.
  • the processing element array chip accepts IEEE single precision floating point numbers as inputs and feeds results back through the data controllers in the same format. Internally, a proprietary number representation is used, including a 31 bit exponent that virtually eliminates the possibility of exponent overflow.
  • the chips operate at 20MHz clock speed, achieving around 20 MFLOPS peak performance per chip.
  • Processing arrays of arbitrary size can be built with no external components simply by stacking the chips to form a two dimensional array.
  • the pin-out of the chip is such that 1-to-1 connection of inputs and outputs of adjacent chips can be made. All communication to and from the array is via the edge elements of the array.
  • Operand data enters the array on left and top edges. This data is known as the X and Y operand data respectively.
  • the result data (R) emerges from the left edge of the array and can be extracted independently from the application of operand wavefronts (that is, the operand and result streams operate in parallel).
  • the only global signals in the array are clock and reset. Because all communication is local (nearest neighbour only), the system is insensitive to clock skew from one side of the array to the other. The only requirement is that the skew between adjacent PEs is kept under control. This can be readily achieved by orderly layout of clock routing and/or insertion of clock buffering.
  • the processing elements are low power devices due to their architecture.
  • the entire chip containing 20 processing elements dissipates less than half a watt. This corresponds to less than 5mA per processing element at 20MHz operation, or 5mA per MFLOP.

Abstract

A processing element for use in a scalable array processor chip which can perform a number of floating point matrix operations for conformable matrices of arbitrary order on an array of fixed size. The processing element includes a number of input and output registers, storage registers, a shifter/normaliser, an arithmetic unit (datapath elements) and a control sequencing unit. The datapath elements are connected by a number of parallel data buses, with the input and output registers connected by serial interfaces.

Description

SCALABLE DIMENSIONLESS ARRAY
TECHNICAL FIELD
This invention relates to the general field of digital computing and in particular to a scalable array of globally clocked multiply/accumulate floating point processing elements.
BACKGROUND ART
Kung and Leiserson ['Systolic Arrays (for VLSI)' in Sparse Matrix Proceedings 1978, Soc. for Industrial and Applied Mathematics, 1979] presented the concept of performing matrix operations using arrays of simple processing elements. Each processing element implements a simple primitive operation. As an example, at a given time, a processor may:
read the input data vector {a(in),b(in),c(in)}, perform an arithmetic operation such as c(out) = a(in)b(in) + c(in), write the output data vector {a(out),b(out),c(out)}.
The processing elements are connected only to their nearest neighbours, and so the problems of routing, fan-out and clock skew are minimised. Data and results move synchronously through the array of elements. The name applied to this approach to computation with arrays of identical processing elements is systolic.
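The local inner-product-step operation described above can be sketched as a single cell function. This is a behavioural sketch only: it ignores the bit-serial timing and skew of a real systolic array, but shows how a partial result accumulates as it moves through a chain of cells.

```python
def inner_product_step(a_in, b_in, c_in):
    """One time-step of an inner-product-step cell: the result term c
    is updated and all three values move on to neighbouring cells."""
    return a_in, b_in, a_in * b_in + c_in

def dot_via_chain(a, b):
    """Pass a partial result through a chain of inner-product-step
    cells, one per operand pair; c accumulates a[i]*b[i] as it moves."""
    c = 0.0
    for ai, bi in zip(a, b):
        _, _, c = inner_product_step(ai, bi, c)
    return c
```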
An example algorithm quoted by Kung and Leiserson was the matrix product. Using a systolic array in which the processing elements executed the local algorithm presented above, known as an inner-product-step algorithm, they showed that the system level algorithm which this implemented was a matrix product of computational order O(N), rather than the computational order O(N³) for the matrix product implemented on a conventional scalar architecture. The matrix product is represented simply as
C = AB
where A, B and C are matrices of a size equal to the order of the array. Conformal matrices can be multiplied with this array under certain restrictions if the results are re-circulated through the array, and larger order matrix products can be computed if the task is partitioned. Speiser, Whitehouse and Bromley ['Signal Processing Applications for Systolic Arrays', Record of the 14th Asilomar Conference on Circuits, Systems and Computers, IEEE No. 80CH 1625-3, 1980] subsequently demonstrated the use of inner-product-accumulate processing elements for the same matrix product algorithm. In this case, the results are formed in-place, and do not move between processing elements. The only difference between the description of this algorithm and the algorithm described above is that the input and output phases of the algorithm do not include the reading and writing of c(in) and c(out) respectively, and that an explicit unload phase must be added at the end of the algorithm to return the results.
The primary advantage of systolic processing over conventional linear processing is speed. The systolic architecture uses the fact that for matrix multiplication, the same operand data may be reused many times in the computation of cross-product terms, thereby making better use of the available data bandwidth. The improved performance, however, comes at the cost of flexibility. Prior art devices have been designed for very specific applications such as Fast Fourier Transform computations or video signal processing. An advantage of the present invention is the ability of the same device to be useful for a wide variety of matrix computations without the need for hardware reconfiguration. The device is particularly useful when implemented as an architectural enhancement to a computer in which case the processing power of the computer is considerably enhanced.
DISCLOSURE OF THE INVENTION
It is an object of this invention to provide a processing element for use in a scalable array processor which is able to implement a set of primitive floating point matrix operations for conformable matrices of arbitrary order on an array of fixed size.
It is a further object to provide a scalable array processor chip which is able to perform one or more of the following functions : compute the product of two matrices, compute the element-wise (Hadamard) product of two matrices, compute the sum of two matrices, permute the rows and columns of a matrix, and transpose a matrix.
It is a still further object of this invention to at least provide the public with a useful alternative to existing systolic devices.
Therefore, according to perhaps one form of this invention, although this need not be the only or indeed the broadest form, there is proposed a processing element suitable for use in a scalable array processor comprising : at least one input register means adapted to receive and process serial operands in the form of {instruction, data} 2-tuples; a memory means adapted to store temporary results and constants; a computing means adapted to perform logical operations; an output register means adapted to output results from the processing element; a control and sequencing means adapted to control the operation of the processing element; a plurality of data buses adapted to provide communication between the plurality of means.
In preference the computing logical means consists of a shifter/normaliser means adapted to shift/normalise data and an arithmetic means adapted to perform logical operations such as but not limited to addition, subtraction and partial multiplication operations.
In preference the processing element is adapted to perform floating point multiply, floating point add and floating point multiply-accumulate which is used for inner product accumulate operations.
In preference the input register is adapted to output a copy of the input operand bit with a one clock period delay.
In preference there are N input registers and the processing element is suitable for use in an N-dimensional scalable array processor.
In preference N can be any positive integer.
In preference the input registers convert the input serial data to an internal representation comprising separate sign, fraction and exponent.
In preference the memory means consists of read only memory for storage of constants and a read/write memory for storage of temporary results.
In preference the shifter/normalizer means is adapted to perform binary weighted barrel shifting wherein the shifter function is determined by a control input to the shifter/normalizer and the normalizer function effects a data dependent shift of up to 15 bits within a single clock cycle.
In preference the arithmetic means implements logical operations such as but not limited to floating point addition, multiplication and multiply-accumulate algorithms using a parallel microcoded data path.
In preference the arithmetic means comprises a logical unit such as but not limited to an input-multiplexer, an adder, an output shifter, flags unit and a control unit.
In preference the output register can be loaded in parts to enable the conversion from the internal representation to IEEE 754 floating point format. The output register can be parallel loaded from the arithmetic means or can be serially loaded from a serial source. The register is unloaded serially.
In preference the control and sequencing means includes timing and control logic, a microcode ROM, address decoders, branch control logic, flags logic, instruction register, instruction decoder and a program counter.
In preference there are three data buses, an X bus, a Y bus and an R bus. The X and Y buses are called operand buses and the R bus is called the result bus.
In preference the processing element has an accumulator comparison means.
In another form the invention consists of a scalable array processor chip comprising an array of processing elements each said element including : at least one input register means adapted to receive and process serial operands in the form of {instruction, data} 2-tuples; a memory means adapted to store temporary results and constants; a shifter/normalizer means adapted to shift or normalize data; an arithmetic means adapted to perform logical operations such as but not limited to addition, subtraction and partial multiplication operations; an output register means adapted to output results from the processing element; a control and sequencing means adapted to control the operation of the processing element; a plurality of data buses adapted to provide communication between the plurality of means; and wherein each processing element has means for communication only with adjacent elements.
In preference the array of elements comprise an interconnected lattice of at least one dimension.
In preference the array of elements comprise an interconnected lattice of at least two dimensions.
In preference the scalable array processor chip is adapted to perform at least the functions of computing the product of two or more matrices, computing the element-wise product of two or more matrices, computing the sum of two or more matrices, permuting the rows and columns of a matrix and transposing a matrix.
In a yet further form of the invention there is proposed a computing apparatus comprising a host processor, at least one scalable array processor chip and a plurality of data formatters wherein the scalable array processor chip(s) and plurality of data formatters are adapted to perform matrix operations otherwise performed by the host processor.
In preference the apparatus includes a memory cache adapted to store operand data and temporary or intermediate results.
In a still further form of the invention there is proposed a method of performing matrix operations comprising the steps of : (a) providing a plurality of processing elements in the form of an array adapted to perform systolic processing operations; (b) receiving operand matrix data for processing from a host or data source; (c) formatting the operand matrix data in a data formatter by adding an instruction to form an {instruction, data} 2-tuple;
(d) transferring sets of 2-tuples to the processing element array to cause the processing elements to process the data in accordance with the instruction; (e) repeating the steps (b) to (d) a number of times, said number of times being dictated by the matrix operation being performed;
(f) unloading the results of the matrix operations into an output result register of the processing elements (under control of an instruction specified by the operand 2-tuple); (g) transferring the contents of the output result registers held within the plurality of processing elements back to data formatters as result wavefronts;
(h) storing the result wavefront data back to a host or data sink; and
(i) repeating the steps (f) to (h) a number of times, said number of times being dictated by the matrix operation being performed.
In preference, the data formatter is of the type the subject of co-pending patent application number PL5696 entitled "DATA FORMATTER".
The sets of 2-tuples are known as operand wavefronts. During the unload step an indication is provided to the data formatter that the unload operation is occurring, allowing synchronization of data transfers to and from the processing array.
During step (g), the results are transmitted in sets containing one result from each of the processing elements at the left edge of the array. Such a set is known as a result wavefront.
There may be as many result wavefronts held within the array as there are columns of processing elements.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of this invention a preferred embodiment will now be described with reference to the attached drawings in which :
FIG. 1 is a schematic diagram of one processing element;
FIG. 2 is a schematic diagram of one embodiment of a systolic array processing element chip utilising the elements of FIG. 1 ;
FIG. 3 is a schematic diagram of a first embodiment of a processing apparatus utilizing the chip of FIG. 2;
FIG. 4 is a schematic diagram of an inner-product-step processor;
FIG. 5 is a schematic diagram of an inner-product-accumulate processor;
FIG. 6 is a schematic example of the entry of operand wavefronts to a processor array;
FIG. 7 is a schematic example of the unloading of result wavefronts from a processor array;
FIG. 8 is a schematic example of the entry of element-wise operand wavefronts to a processor array;
FIG. 9 is a schematic example of the unloading of element-wise result wavefronts from a processor array; and
FIG. 10 is a schematic diagram of a second embodiment of a processing apparatus.
BEST MODE FOR CARRYING OUT THE INVENTION
Referring now to the drawings in detail, each processing element consists of a number of input registers, a memory consisting of a register file and a constant ROM, a shifter/normalizer, an arithmetic unit, output registers and a control and sequencing unit.
The datapath elements (input registers, memory, shifter/normalizer, arithmetic unit and output registers) are interconnected by three parallel data buses. In addition serial interfaces are provided to and from each of the input registers and the output register to allow communication between processing elements and to facilitate construction of arbitrarily large arrays of processing elements.
An N × N array computes 2N² floating point operations (1 multiply and 1 accumulate for each processing element in the array) in the time taken to fetch 2N operands. To ensure that the computation is bandwidth limited, each processing element needs only compute at a rate of one floating point operation every N data fetches. This fact leads to the conclusion that very cheap processing elements can be used in the array.
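This bandwidth argument can be made concrete with a short calculation (the figures in the assertion below are illustrative, chosen to match the 20 MHz / 4-row chip described later in this document):

```python
def systolic_rates(n, fetch_rate):
    """For an n x n array: 2*n*n flops complete in the time taken to
    fetch 2*n operands, so the sustained array rate is n * fetch_rate
    and each PE needs only fetch_rate / n flops per second."""
    flops_per_wavefront = 2 * n * n   # one multiply + one accumulate per PE
    fetches_per_wavefront = 2 * n     # one row of X and one column of Y
    array_rate = fetch_rate * flops_per_wavefront / fetches_per_wavefront
    per_pe_rate = array_rate / (n * n)
    return array_rate, per_pe_rate
```

As the array grows, the required per-PE rate falls, which is why slow, physically small processing elements suffice.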
A schematic of the processing element is shown in FIG. 1. The choice of a simple microcoded datapath and sequential algorithms to perform the floating point operations means that the size of the processing element can be kept small. Many such processing elements can therefore be placed on a single chip. The fact that processing elements implemented in this manner are slower than those built using fully parallel algorithms and architectures becomes insignificant as the size of the array is increased. This is because the processing performance achieved is limited by the data bandwidth (and array size), not by the computation rate for a single processing element.
The functions performed by each module in the processing element are described below.
Input Registers: The input registers receive serial operands in the form of {instruction, data} 2-tuples from adjacent processing elements to the left or top, or in the case of processing elements at the top or left boundary of the array, from operand data formatters. They then separate the instruction and reform the data to an internal representation consisting of separate sign, exponent and fraction words. This data is available to the processing element via the X and Y internal data buses. The input registers also compute the sign of the product of the two inputs, check for zero operand data and implement the Booth encoder used during multiplication operations.
Memory: The memory consists of a Register File and a Constant ROM. The register file is a 5 word memory used to hold the product (both fraction and exponent), accumulator (both fraction and exponent) and temporary results. The product and accumulator registers can be swapped under the control of microcode to facilitate efficient implementation of the pre-alignment operation in the floating point addition and accumulation algorithms. The registers can be loaded from the R bus, and their contents can be read from either the X or Y buses. The Constant ROM stores a number of constants that are used during the implementation of the floating point algorithms. These can be read via the X and Y operand data buses.
Shifter/Normalizer: Under microcode control, the shifter can either operate as a shifter (for pre-alignment of fractions before addition) or a normalizer. When acting as a shifter, it performs right-shift operations on one operand datum (the X operand). The amount by which the datum is shifted is determined by a previously computed shift that is applied to the second input to the shifter (the Y operand). The shifter can shift 0 to 15 bits right within one clock cycle. When acting as a normalizer, the Shifter/Normalizer performs either a right shift by one bit, or a left shift by 0 to 15 bits within one cycle. In this case the shift is applied to the X operand input to the shifter and is independent of the Y operand input. The value of the shift is data dependent. A right-shift is performed if the value on the X input is the result of a computation which had overflowed (such as in the case of addition of two normalized numbers having the same exponent). Otherwise, a left-shift is performed. When acting as a normalizer, the Shifter/Normalizer at the same time computes the offset (exponent offset) that must be applied to the exponent of the number being normalized in order to compensate for the shift that is applied. Shifting and normalization operations that require shifts of greater than 15 bits can be implemented by multiple passes through the shifter/normalizer.
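One pass of the normalizer behaviour can be sketched as follows. The bit positions are assumptions taken from the internal format described later in this document (a 2's complement mantissa normalized to bit 29); the real circuit computes the shift combinationally rather than by a loop.

```python
def normalize_step(mant, exp):
    """One normalizer pass: right-shift by one on overflow, otherwise
    left-shift by up to 15 bits, computing the exponent offset in the
    same step. Shifts greater than 15 need further passes."""
    mag = abs(mant)
    if mag >= 1 << 30:                  # overflowed result, e.g. the sum of
        return mant >> 1, exp + 1       # two aligned normalized numbers
    shift = 0
    while shift < 15 and mag and (mag << shift) < (1 << 29):
        shift += 1                      # find the leading significant bit
    return mant << shift, exp - shift   # adjust exponent by the same amount
```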
Arithmetic Unit: The arithmetic unit consists of an input multiplexer, an adder, a result shifter and a flags unit. There are two parallel data inputs (X and Y) to the arithmetic unit and a single parallel data output (R). The input multiplexer can be used to complement and/or left-shift the X operand under control of the Booth encoding logic contained in the input registers. This feature is used in the implementation of multiplication using a modified Booth algorithm. The multiplexer can also be controlled directly by the processing element's microcode to facilitate the implementation of addition, subtraction and data-move operations.
The adder performs conventional two's complement addition. The carry input to the adder can be controlled by either the Booth encoder logic or the processing element's microcode. Both addition and subtraction can be performed by the combination of input-multiplexer and adder. Under control of the processing element's microcode, the output shifter latches either the result of the computation or the result divided by 4. This feature is used during partial multiplication operations. The latched result remains valid until the next time the arithmetic unit is used. The latched result can be written onto the result bus R. A number of flags are set or cleared depending upon the result latched by the arithmetic unit's output shifter. These flags include the sign of the result, whether or not the result is zero, and whether or not the result is less than or equal to 15 (used to support multi-pass shifting during addition pre-alignment).
Output Registers: The output register module is used to communicate the results of computations back from the processing element toward the left boundary of an array of processing elements. The output register can be parallel loaded from the arithmetic unit or can be serially loaded from a serial source (often the serial source is from another processing element's output register). The output register is unloaded serially. The output register is parallel loadable by the arithmetic unit in three parts: sign, exponent and fraction. This facilitates conversion from the internal data representation to IEEE 754 floating point format.
During the time when the arithmetic unit is converting the accumulator contents into IEEE floating point format, a flag is set to indicate that a register unload is in progress (UIP).
Control and Sequencing: This module includes a microcode ROM, a program counter, branch control logic, flags logic, an instruction register, an instruction decoder, address decoders and timing and control logic. This circuitry is used to sequence the processing element through its operations. Each clock cycle, the microcode ROM issues a microinstruction to the processing element's datapath units, and thereby controls the function and timing of the data operations being performed. Data and control flags fed to the branch control logic enable the processing element to perform data dependent operations required for implementation of the floating point algorithms. Fields of the instruction transmitted serially to the processing element as part of the {instruction, data} 2-tuple are also fed to the branch control logic and flags logic of the Control and Sequencing Unit. These also determine the sequence of microinstructions executed by the processing element. The instructions specified in the {instruction, data} 2-tuple are distinct from the set of microinstructions implemented by the processing element. The instructions specified in the {instruction, data} 2-tuple control the flow of execution of the processing element's microcode.

The internal data representation used by the processing element uses two 32-bit data words to represent each IEEE single precision number. One of the two words represents the mantissa in 2's complement form, normalized to bit 29. The second word represents the exponent using an exponent bias of 2²⁹. This format provides better resolution in the mantissa than IEEE single precision format, and the use of a large exponent field virtually guarantees that exponent overflow cannot occur.
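A rough software sketch of the conversion from an IEEE single precision value to the two-word internal representation described above (2's complement mantissa normalized to bit 29, exponent biased by 2²⁹). The exact field layout and rounding behaviour of the hardware are not specified here, so this is an assumed, simplified model that ignores denormals and special values.

```python
import struct

def ieee_to_internal(x):
    """Convert an IEEE single precision number to an assumed
    (mantissa, exponent) internal pair: 2's complement mantissa in a
    32-bit word with the leading 1 at bit 29, exponent biased by 2**29."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exp_field = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    if exp_field == 0 and frac == 0:
        return 0, 0                      # zero (denormals not modelled)
    mant = (1 << 23) | frac              # restore the implicit leading 1
    mant <<= 6                           # move the leading 1 up to bit 29
    if sign:
        mant = -mant & 0xFFFFFFFF        # 2's complement in a 32-bit word
    exp = (exp_field - 127) + 2**29      # rebias from 127 to 2**29
    return mant, exp
```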
Within each processing element, multiplication is facilitated by the inclusion of a modified Booth encoder and multiplexer. The denormalisation and normalisation operations required by the floating point accumulation or addition algorithms are facilitated by the repeated application of the shifter circuit which can shift up to 15 bits in a single cycle.
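The iterative modified Booth algorithm can be sketched in software. This models the arithmetic only, not the PE's microcode: each radix-4 step examines three bits of the multiplier and adds 0, ±x or ±2x to the accumulator, halving the number of iterations relative to bit-at-a-time multiplication.

```python
def booth_multiply(x, y, bits=32):
    """Iterative modified (radix-4) Booth multiplication. y is taken as
    a 2's complement value representable in `bits` bits."""
    acc = 0
    # append an implicit 0 below bit 0 of the multiplier
    y_ext = (y << 1) & ((1 << (bits + 1)) - 1)
    for i in range(0, bits, 2):
        group = (y_ext >> i) & 0b111     # bits y[i+1], y[i], y[i-1]
        partial = {0b000: 0,   0b111: 0,
                   0b001: x,   0b010: x,
                   0b011: 2*x, 0b100: -2*x,
                   0b101: -x,  0b110: -x}[group]
        acc += partial << i              # weight the partial product by 4**(i/2)
    return acc
```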
FIG. 2 shows a scalable array processing chip composed of a 5 x 4 rectangular array of single precision floating point processing elements which accept serial dataflow operands, and which perform a set of operations on those operands. Each operand consists of a 5-bit instruction followed by an IEEE standard single precision number. Each processing element is a microcoded ALU with a 32-bit parallel datapath that includes dedicated hardware support for floating point multiplication and addition algorithms.
The array of processing elements is clocked synchronously. The three bit-serial links provide communication between processing elements. One link is provided for each of the two input X and Y operands and one for the output, or result operand R. As shown in FIG. 2, input data is transferred from left to right across the array, and output results are transmitted from right to left. Chips can be cascaded arbitrarily in both X and Y directions.
The operation of the scalable array processing chip is described with reference to the system block diagram shown in FIG. 3. The data interface provides communication between the scalable array chip and the host system. The data formatter elements are described separately in a co-pending application number PL5696 entitled DATA FORMATTER.
The I/O architecture of each processing element consists of two orthogonal data transmission paths for X and Y operands, each consisting of a single one-bit delay cell and a 32-bit data storage register. The X operand path also includes a 5 bit instruction register. Data is input to the array as a sequence of {instruction, data} 2-tuples. These are split into separate instruction and data words on receipt by the input registers.
Each X data operand consists of a 5-bit instruction followed by a single 32-bit IEEE 754 standard floating point number. A variable length gap of several clock periods may be present between operands for I/O synchronisation. The operand is transmitted in bit serial form into the processing element. When the entire {instruction, data} 2-tuple is held within the processing element, it is cross-loaded into parallel holding registers. The instruction is decoded and used to control the execution of the floating point algorithms. The data is converted by hardware into the internal extended format. The internal format has both extended precision and extended dynamic range when compared with the IEEE standard.
The bit-serial data is bit-skewed on entry to adjacent processing elements on the array boundary. This skew is preserved between adjacent elements within the array by passing the data through the single-bit delay stage in each processing element before re-transmitting it to the next processing element. The use of serial data both minimises the I/O pin count at the array boundary and allows adjacent processing elements to both commence and conclude the computations with a time differential of only one bit period. The advantage of the bit-skewing approach over a broadcast architecture is that there is no need to drive long buses with large buffers, which provides the capability for arbitrary expansion of the array.
Bit skewing has the advantage over word-skewing in that fewer wavefronts are required to complete a processing task. The bit-skewed approach therefore results in the minimisation of job time. The computation time is minimised for both a single job and a job stream.
At the completion of a set of computations, an operand wavefront is issued to the array which causes the unloading of the results into the output registers of the processing elements.
Clocking of the scalable array processing chips is performed by a single phase 50% duty cycle clock from which all internal timing signals are generated. The clock is buffered on entry to the chip and is distributed to each processing element. It is re-buffered within the processing element where it is used as a locally synchronous clock. In addition, each processing element generates a second, synchronous clock of the same frequency but with a duty cycle determined by a self-timed circuit. The secondary clock is used to provide timing information for bus precharging, data transfers and evaluation of execution units.
FIG. 4 shows schematically the inner-product-step process described by Kung and Leiserson. Data is clocked into each processing cell from the left and top edges while the results are clocked out from right to left. For a matrix product algorithm, an inner-product-accumulate algorithm is used in preference to the inner-product-step process common in much of the prior art. The inner-product-accumulate process is depicted schematically in FIG. 5. Data is again clocked into the element from the left and top but in this case the result is formed in place. An explicit unload phase is implemented to obtain the result after the computation is complete. An advantage of the inner-product-accumulate algorithm over the inner-product-step approach is illustrated when matrix products are computed for matrix operands which are rectangular. The inner-product-step process requires the recirculation of the result partial product matrix. In contrast, the inner-product-accumulate algorithm computes the result in-place, and incurs no hardware penalties, irrespective of the length of the inner products.
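The inner-product-accumulate scheme, including its explicit unload phase, can be modelled behaviourally. This sketch ignores the wavefront skew and bit-serial timing of the real array; it shows only that results form in place and are returned by a separate unload step.

```python
class AccumulatePE:
    """Inner-product-accumulate cell: the result stays in place (cf. the
    inner-product-step cell, where c travels with the operands)."""
    def __init__(self):
        self.acc = 0.0

    def step(self, a_in, b_in):
        self.acc += a_in * b_in
        return a_in, b_in            # operands pass through to neighbours

    def unload(self):
        r, self.acc = self.acc, 0.0  # explicit unload phase returns the result
        return r

def matmul_in_place(A, B):
    """Apply one wavefront per inner-product term, then unload the grid."""
    n, k, m = len(A), len(B), len(B[0])
    grid = [[AccumulatePE() for _ in range(m)] for _ in range(n)]
    for p in range(k):
        for i in range(n):
            for j in range(m):
                grid[i][j].step(A[i][p], B[p][j])
    return [[grid[i][j].unload() for j in range(m)] for i in range(n)]
```

Note that rectangular operands (k larger or smaller than n, m) cost nothing extra here: only the number of wavefronts changes, not the storage per cell.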
The sequence of operations performed by the processing elements is determined by the 5 bit instruction transmitted as part of the X operand. The five instruction fields and their function are listed in the table below.
TABLE 1

CLR — clear the accumulator before the computation commences
SDE — set the ACTIVE flag if the accumulator contents are non-zero
HAD — the operation is an element-wise (Hadamard) operation
LDR — unload the accumulator into the output register
ADD — add the X and Y operands rather than multiplying them

The default operation performed by the PE (i.e., when none of the fields of the instruction are asserted) is an inner-product operation implemented as a floating point multiply-accumulate, the input X and Y operands being multiplied and accumulated with the contents of the accumulator.
If the CLR field is asserted, the accumulator contents are cleared before the computation commences. This generally occurs for the first wavefront of a matrix multiplication, and also when executing element-wise operations. The accumulator is cleared before the computation is commenced but after the ACTIVE flag is set (if the SDE field is set), and after the accumulator has been unloaded into the result register (if the LDR field is set).
If the SDE field is set AND the value held in the accumulator (from the previous operation) is non-zero, an internal flag, ACTIVE (one per processing element), is set to indicate that this processing element is an active element. Only active elements are permitted to unload results during element-wise operations.
If the HAD field is asserted, the operation being performed is deemed to be an element-wise (Hadamard) operation. If this field is set, only those processing elements flagged as active elements (as determined by their ACTIVE flags) can unload their accumulator contents into their output register R.
If the LDR field is asserted, the accumulator contents from the previous computation are converted back to IEEE format and are unloaded into the processing element's output register.
During the unloading process, the processing element issues a flag (UIP) to indicate that the unload is in progress.
If the ADD field is asserted, the X and Y operands are added rather than multiplied prior to the result being stored in the accumulator. The HAD and CLR fields must also be asserted for matrix addition instructions.
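The field semantics above can be summarised in a small behavioural model of one processing element (a sketch only: the class, method, and dictionary-based instruction encoding are ours, not the microcoded hardware; the ordering of SDE, LDR and CLR follows the text):

```python
# Behavioural model of one PE handling the CLR/SDE/HAD/LDR/ADD fields.

class PE:
    def __init__(self):
        self.acc = 0.0       # internal accumulator
        self.active = False  # ACTIVE flag (one per element)
        self.r = None        # output register R

    def step(self, instr, x, y):
        # SDE: mark this element active if the *previous* accumulator value
        # is non-zero (sampled before any clear).
        if instr.get("SDE") and self.acc != 0.0:
            self.active = True
        # LDR: unload the previous accumulator into the output register.
        # During a HAD (element-wise) operation only active elements unload.
        if instr.get("LDR") and (not instr.get("HAD") or self.active):
            self.r = self.acc
        # CLR: clear only after SDE and LDR have sampled the old value.
        if instr.get("CLR"):
            self.acc = 0.0
        # Default operation is multiply-accumulate; ADD selects X + Y instead.
        self.acc += (x + y) if instr.get("ADD") else (x * y)
```

For example, two default steps accumulate x*y terms, and a subsequent {CLR, LDR} instruction captures that total in R before starting a fresh accumulation.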
The element-wise operations of addition and multiplication, defined by

C = A + B where cij = aij + bij
C = A • B where cij = aij bij

are performed by first setting the active-element (ACTIVE) flag in each desired processing element. This procedure is typically done once during system initialization. It is achieved by issuing an instruction with the SDE (set active element) field asserted. When this occurs, processing elements that contain non-zero results in their accumulators set the value of their ACTIVE flag to TRUE.
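The initialisation-then-operate sequence can be sketched as follows (illustrative Python only; function names are ours, and the data movement of FIG. 8 is abstracted away):

```python
# Sketch: SDE marks elements with non-zero accumulators as active; the active
# (e.g. main-diagonal) elements then perform element-wise operations.

def set_active_from_sde(acc):
    """Model of the SDE instruction: elements holding a non-zero accumulator
    value when SDE arrives have their ACTIVE flag set."""
    return [[v != 0.0 for v in row] for row in acc]

def elementwise(A, B, add=False):
    """With CLR and HAD asserted (plus ADD for addition), each active element
    consumes one pair of conformal operand elements per step and unloads the
    result immediately, since the accumulator is cleared every step."""
    return [[(a + b) if add else (a * b) for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]
```

Loading an identity-patterned wavefront before issuing SDE, for instance, leaves exactly the main-diagonal elements active.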
The processing elements accept operand data and return results in IEEE standard format. Internally, an extended precision format is used for both the mantissa and exponent of the partial results.
The internal formats used for the representation of the mantissa and exponent are as follows:

2's complement mantissa    sgf.ffffffffffffffffffffffffffff
Exponent                   0eeeeeeeeeeeeeeeeeeeeeeeeeeeeee
Exponent bias              01000000000000000000000000000000

where:
s is the sign bit of the mantissa; +/- is represented as 0/1 respectively.
g is a guard bit used to avoid mantissa overflow during accumulation.
f is a fraction (mantissa) bit. The mantissa is normalized: the most significant fraction bit is 1 (explicit).
. is the position of the binary point (showing that the mantissa is normalized).
e is a bit of the exponent, which is held in biased form. The exponent bias is 2^29.
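An illustrative conversion between an ordinary float and such a (signed mantissa, biased exponent) pair is sketched below. This is a model only: the fraction width used here is an assumption (the OCR'd format strings do not fix it unambiguously), and only the 2^29 bias is taken from the text.

```python
# Model of a signed-mantissa / biased-exponent internal representation.

import math

FRAC_BITS = 28          # assumed fraction width; the guard bit is implicit here
EXP_BIAS = 1 << 29      # exponent bias of 2**29, as stated in the text

def to_internal(x):
    """Split a float into a signed mantissa integer and a biased exponent."""
    if x == 0.0:
        return 0, 0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    return int(m * (1 << FRAC_BITS)), e + EXP_BIAS

def from_internal(mant, exp):
    """Inverse conversion back to a float."""
    if mant == 0:
        return 0.0
    return math.ldexp(mant / (1 << FRAC_BITS), exp - EXP_BIAS)
```

The very wide biased exponent is what lets partial results grow during long accumulations without overflowing the exponent range.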
If the flags in the anti-diagonal processing elements have been set by a prior SDE instruction, and an element-wise multiplication of a matrix A with the unit matrix is executed, the result of the operation is the transpose of the matrix A. If an arbitrary orthogonal set of elements (one per row and one per column) have their flags set, a permutation of the input matrix will be performed by this element-wise product.
When an unload (LDR) instruction is received, the accumulator contents are converted from the internal format to an IEEE standard form. Numbers outside the range that can be represented by the IEEE single precision format are truncated to zero (in the case of results with large negative exponents, including IEEE denormalized numbers) or limited to infinity (in the case of numbers with large positive exponents). In both cases, the sign of the zeros or infinities is retained (unless the result is a true zero, in which case positive zero is always returned).
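The described range handling on unload can be sketched as follows (a model, not the hardware's conversion logic; the thresholds used are the standard IEEE single-precision limits, since the internal values described in the text can exceed that range):

```python
# Sketch of the unload conversion: flush-to-signed-zero below the IEEE single
# normal range, clamp-to-signed-infinity above it, +0.0 for a true zero.

import math
import struct

F32_MAX = (2 - 2**-23) * 2**127   # largest finite single-precision value
F32_MIN_NORMAL = 2**-126          # smallest normal single-precision value

def unload(x):
    if x == 0.0:
        return 0.0                          # true zero -> positive zero
    if abs(x) < F32_MIN_NORMAL:
        return math.copysign(0.0, x)        # denormal range flushes to signed zero
    if abs(x) > F32_MAX:
        return math.copysign(math.inf, x)   # overflow clamps to signed infinity
    # otherwise round to the nearest representable single-precision value
    return struct.unpack("<f", struct.pack("<f", x))[0]
```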
The IEEE representation of the result is loaded into a separate output register which is concatenated with other output registers in adjacent processing elements to form an output register chain. The result is output in a serial form through this register chain.
Matrix algorithms which are elements of the set of primitive operators {multiplication, addition, element-wise (or Hadamard) multiplication, permutation} are performed directly by the processing array. Implementation of these operations for operands whose dimensions exceed the size of the array is possible by mathematically partitioning the operations into a set of operations which can be computed separately using the available array size.
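For multiplication, such a partitioning amounts to ordinary block decomposition: each array-sized block of the result is accumulated from a sequence of array-sized block products. A minimal sketch (plain Python, with the physical p x p array abstracted to a loop bound):

```python
# Block-partitioned C = A @ B for operands larger than a p x p physical array.

def blocked_matmul(A, B, p):
    m, k, n = len(A), len(A[0]), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i0 in range(0, m, p):
        for j0 in range(0, n, p):
            for k0 in range(0, k, p):     # one array-sized pass per block term
                for i in range(i0, min(i0 + p, m)):
                    for j in range(j0, min(j0 + p, n)):
                        for t in range(k0, min(k0 + p, k)):
                            C[i][j] += A[i][t] * B[t][j]
    return C
```

Each innermost (i0, j0, k0) pass corresponds to one multiply-accumulate pass of the physical array; the accumulate-in-place behaviour of the PEs is what lets successive k0 passes sum into the same result block.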
For the particular case where the problem size does not exceed the size of the array, recursive algorithms can be implemented which recirculate the output of the array back to its input. This can be a useful method to minimise memory bandwidth requirements in particular applications.
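A simple instance of such a recursive, recirculating computation is forming a matrix power: the array's output is fed back as one operand of the next pass, so the operand matrix is read from memory only once. A sketch (illustrative Python; the recirculation path itself is modelled by reusing R):

```python
# Matrix power by recirculating the previous result back to the array input.

def matpow(A, k):
    n = len(A)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        # one array pass: R <- R @ A, with R taken from the previous output
        R = [[sum(R[i][t] * A[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
    return R
```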
If a matrix multiplication is commenced with an instruction which does not clear the accumulator, the result of the multiplication will be summed with the prior result. This gives a matrix multiplication/accumulation capability which has direct application to the evaluation of complex matrix operations.
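One such application is a complex matrix product evaluated as chained real products: the first product of each pair is issued with CLR, the second without, so it accumulates onto the first. A sketch (plain Python; function names are ours):

```python
# Complex C = A @ B via real passes: Re = Ar@Br - Ai@Bi, Im = Ar@Bi + Ai@Br.

def matmul(A, B, C=None):
    """One array pass: C (the prior accumulator contents, or zero) += A @ B."""
    m, k, n = len(A), len(A[0]), len(B[0])
    if C is None:
        C = [[0.0] * n for _ in range(m)]  # first pass issued with CLR
    for i in range(m):
        for j in range(n):
            for t in range(k):
                C[i][j] += A[i][t] * B[t][j]
    return C

def complex_matmul(Ar, Ai, Br, Bi):
    neg = lambda M: [[-x for x in row] for row in M]
    Cr = matmul(Ai, neg(Bi), matmul(Ar, Br))  # second pass accumulates, no CLR
    Ci = matmul(Ai, Br, matmul(Ar, Bi))
    return Cr, Ci
```

The subtraction in the real part is realised by negating one operand stream, so both passes use the same multiply-accumulate primitive.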
FIG. 6 shows the way in which conformal matrix operands are entered into the systolic array. Bit-skewing is indicated by the small offset between adjacent rows of A and columns of B. Each element of the processor array computes an element cij of the result matrix C by evaluating the inner product cij = Σk aik bkj. When the last wavefront has been input to the array, the result matrix may be read from the array. The elements are obtained in the order shown in FIG. 7.
If the processing elements on the main diagonal in the array have their active-element flag set by an arbitrary prior operation as shown in FIG. 8, the processor array can be used for the element-wise operations of addition and Hadamard multiplication. FIG. 8 shows the entry of conformal matrices to a 4 × 4 subarray of the chip for the purposes of element-wise addition or multiplication. Only those elements shown as • are used.
FIG. 9 shows the relationship between rows of data which are output from the array after an element-wise operation. Due to the word-length registers present in the output register chain, the data is skewed by one word-time plus one bit-time. The additional bit-time delay is caused by the bit-skewing of the input operands.
In a second embodiment the invention has been implemented in a system hosted by a Sun SPARCstation. The matrix processor is interfaced to the Sun SPARCstation via the SBus. This arrangement is convenient since it allows the SCAP hardware to operate using virtual addressing, with virtual-to-physical translation being performed by the SBus controller in the SPARCstation. The host processor and the matrix processor therefore share the same data space, so both can interact with the matrix data directly. This approach does however have its own disadvantages, the most critical being the fact that the data transfer rate across the SBus tends to be quite low due to the overheads of address translation.
To compensate for this low data rate, the matrix processor also includes a cache memory subsystem. The cache supports burst-mode data transfers across the SBus on cache misses and can also be used to hold frequently used operand matrices (such as coefficient matrices in transform applications) and to store temporary or intermediate results.
A novel cache partitioning scheme has been implemented. The technique allows the cache to be dynamically divided into a number of regions that are guaranteed not to interact, thereby ensuring that fetches for one matrix operand do not interfere with fetches for the other. The data controllers determine how the cache is partitioned on a per-operand/result basis (it is also possible to assign a cache partition to the instruction streams) by issuing an 8-bit space address along with each address generated. Each bit of the space address can be set or cleared, or can take on the value of one of the generated address bits. In our system implementation, three bits of this space address are used to control non-cached accesses, temporary matrix accesses and temporary matrix initialization. Four bits are used to partition the cache into up to 16 independent regions. Use of the temporary matrix control bits of the space address allows temporary result matrices to be stored entirely within the cache without being written out to the host. In fact, such matrices are entirely invisible to the host processor. The maximum data throughput obtainable using the cache is 12.5 Mwords/second.
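The non-interaction guarantee can be illustrated with a toy model of the region selection (illustrative only: the bit layout, region size, and function names here are assumptions, not the data controllers' actual mapping):

```python
# Toy model: four space-address bits select one of 16 cache regions, so
# accesses issued with different region bits can never share a cache line.

REGION_BITS = 4          # four space-address bits pick one of 16 regions
LINES_PER_REGION = 64    # assumed region size (illustrative)

def cache_line(space_addr, mem_addr):
    """Map (space address, generated address) to a cache line index."""
    region = space_addr & ((1 << REGION_BITS) - 1)
    return region * LINES_PER_REGION + (mem_addr % LINES_PER_REGION)

# Two operands assigned different regions occupy disjoint sets of lines:
lines_a = {cache_line(0b0001, a) for a in range(1000)}
lines_b = {cache_line(0b0010, a) for a in range(1000)}
```

Because the region index is part of the line index, a burst of fetches for one operand can never evict lines belonging to the other, whatever the access pattern.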
The two custom chips implemented during the development of this system are a processing element array chip and a data controller chip. Both chips were designed using generic 1.2 micron double-layer-metal CMOS process rules and were retargeted for fabrication on a 1.0 micron process using a gate shrink.
The processing element array chips are full custom integrated circuits, each containing an array of 4 rows by 5 columns of floating point processing elements. Because the overall computation rate is limited by the available data bandwidth, the speed of computation of the processing elements is not overly important. Therefore, the architecture has been designed to yield processing elements (PEs) that are physically small rather than particularly fast. Each complete floating point unit occupies only 2.7 sq mm.
The processing element does not include a dedicated hardware multiplier, but is implemented as a simple microprogrammed 32-bit datapath with hardware support to aid the floating point computations, as illustrated in FIG. 5.
The PE hardware incorporates a Booth encoder and multiplexer to facilitate multiplication using an iterative modified Booth algorithm, and also a shifter/normalizer that can be used for pre-addition alignment as well as post-addition normalization. When used as a normalizer, the shifter has the ability to compute the amount by which the exponent must be adjusted during the same time that the normalization occurs. Computation of the floating point arithmetic operations (multiply/accumulate, multiply or add/subtract) is completed within 40 clock cycles.
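The iterative modified Booth approach can be sketched for plain integers (the PE applies it to mantissas inside a floating-point multiply; this Python model only shows the radix-4 recoding itself, not the datapath):

```python
# Radix-4 (modified) Booth multiplication: the multiplier is recoded into
# signed digits in {-2, -1, 0, 1, 2}, halving the number of add iterations.

def booth_multiply(a, b, bits=32):
    """Multiply a by a non-negative b using radix-4 Booth recoding:
    digit_i = -2*b[2i+1] + b[2i] + b[2i-1], with b[-1] = 0."""
    b_ext = b << 1                      # append the implicit b[-1] = 0
    product = 0
    for k in range(0, bits, 2):
        window = (b_ext >> k) & 0b111   # three-bit overlapping window
        digit = -2 * ((window >> 2) & 1) + ((window >> 1) & 1) + (window & 1)
        product += digit * a * (1 << k) # digit weight is 4**(k/2) == 2**k
    return product
```

Each iteration needs only an add/subtract of a or 2a (a shift), which is why a Booth encoder plus multiplexer suffices in place of a dedicated hardware multiplier.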
The processing element array chip accepts IEEE single precision floating point numbers as inputs and feeds results back through the data controllers in the same format. Internally, a proprietary number representation is used, including a 31-bit exponent that virtually eliminates the possibility of exponent overflow.
The chips operate at 20MHz clock speed, achieving around 20 MFLOPS peak performance per chip. Processing arrays of arbitrary size can be built with no external components simply by stacking the chips to form a two dimensional array. The pin-out of the chip is such that 1-to-1 connection of inputs and outputs of adjacent chips can be made. All communication to and from the array is via the edge elements of the array. Operand data enters the array on left and top edges. This data is known as the X and Y operand data respectively. The result data (R) emerges from the left edge of the array and can be extracted independently from the application of operand wavefronts (that is, the operand and result streams operate in parallel).
The only global signals in the array are clock and reset. Because all communication is local (nearest neighbour only), the system is insensitive to clock skew from one side of the array to the other. The only requirement is that the skew between adjacent PEs is kept under control. This can be readily achieved by orderly layout of clock routing and/or insertion of clock buffering.
The processing elements are low power devices due to their architecture. The entire chip containing 20 processing elements dissipates less than half a watt. This corresponds to less than 5mA per processing element at 20MHz operation, or 5mA per MFLOP.
TABLE 2 (reproduced as an image in the original publication)
The performance attained by the apparatus of the second embodiment for a range of applications is shown in Table 3.
TABLE 3 (reproduced as an image in the original publication)

Claims

1. A processing element suitable for use in a scalable array processor, comprising at least one input register means adapted to receive and process serial operands in the form of {instruction, data} 2-tuples; a memory means adapted to store temporary results and constants; a computing means adapted to perform logical operations; an output register means adapted to output results from the processing element; a control and sequencing means adapted to control the operation of the processing element; and a plurality of data buses adapted to provide communication between the plurality of means.
2. An apparatus as in claim 1 wherein the computing logical means consists of a shifter/normaliser means adapted to shift/normalise data and an arithmetic means adapted to perform logical operations such as but not limited to addition, subtraction and partial multiplication operations.
3. An apparatus as in claim 1 wherein the processing element is adapted to perform floating point multiply, floating point add and floating point multiply- accumulate which can be used for inner product accumulate operations.
4. An apparatus as in claim 1 wherein the input register is adapted to output a copy of an input operand bit with a one clock period delay.
5. An apparatus as in claim 1 wherein there are N input registers and the processing element is suitable for use in a N-dimensional scalable array processor.
6. An apparatus as in claim 5 wherein N can be any positive integer.
7. An apparatus as in claim 1 wherein the input registers are adapted to convert the input serial data to an internal representation comprising separate sign, fraction and exponent.
8. An apparatus as in claim 1 wherein the memory means consists of read only memory for storage of constants and a read/write memory for storage of temporary results.
9. An apparatus as in claim 2 wherein the shifter/normalizer means is adapted to perform binary weighted barrel shifting wherein the shifter function is determined by a control input to the shifter/normalizer and the normalizer function effects a data dependent shift of up to 15 bits within a single clock cycle.
10. An apparatus as in claim 2 wherein the arithmetic means implements logical operations such as but not limited to floating point addition, multiplication and multiply-accumulate algorithms using a parallel microcoded data path.
11. An apparatus as in claim 2 wherein the arithmetic means comprises a logical unit such as but not limited to an input-multiplexer, an adder, an output shifter, a flags unit and a control unit.
12. An apparatus as in claim 1 wherein the output register is adapted to be loaded in parts to enable the conversion from the internal representation to IEEE 754 floating point format and the output register is adapted to be parallel loaded from the arithmetic means or serially loaded from a serial source and the register is unloaded serially.
13. An apparatus as in claim 1 wherein the control and sequencing means includes timing and control logic, a microcode ROM, address decoders, branch control logic, flags logic, instruction register, instruction decoder and a program counter.
14. An apparatus as in claim 1 wherein there are three data buses, an X bus, a Y bus and an R bus, and where the X and Y buses are called operand buses and the R bus is called the result bus.
15. An apparatus as in claim 1 wherein there is an accumulator comparison means.
16. A scalable array processor chip comprising an array of processing elements, each said element including at least one input register means adapted to receive and process serial operands in the form of {instruction, data} 2-tuples, a memory means adapted to store temporary results and constants, a shifter/normalizer means adapted to shift or normalize data, an arithmetic means adapted to perform logical operations such as but not limited to addition, subtraction and partial multiplication operations, an output register means adapted to output results from the processing element, a control and sequencing means adapted to control the operation of the processing element, a plurality of data buses adapted to provide communication between the plurality of means, and wherein each processing element has means for communication only with adjacent elements.
17. An apparatus as in claim 16 wherein the array of elements comprises an interconnected lattice of at least one dimension.
18. An apparatus as in claim 16 wherein the array of elements comprises an interconnected lattice of at least two dimensions.
19. An apparatus as in claim 16 wherein the scalable array processor chip is adapted to perform at least the functions of computing the product of two or more matrices, computing the element-wise product of two or more matrices, computing the sum of two or more matrices, permuting the rows and columns of a matrix and transposing a matrix.
20. A computing apparatus comprising a host processor, at least one scalable array processor chip and a plurality of data formatters wherein the scalable array processor chip(s) and plurality of data formatters are adapted to perform matrix operations otherwise performed by the host processor.
21. An apparatus as in claim 20 wherein the apparatus includes a memory cache adapted to store operand data and temporary or intermediate results.
22. A method of performing matrix operations comprising the steps of:
(a) providing a plurality of processing elements in the form of an array adapted to perform systolic processing operations;
(b) receiving operand matrix data for processing from a host or data source;
(c) formatting the operand matrix data in a data formatter by adding an instruction to form an {instruction, data} 2-tuple;
(d) transferring sets of 2-tuples to the processing element array to cause the processing elements to process the data in accordance with the instruction;
(e) repeating the steps (b) to (d) a number of times, said number of times being dictated by the matrix operation being performed;
(f) unloading the results of the matrix operations into an output result register of the processing elements (under control of an instruction specified by the operand 2-tuple);
(g) transferring the contents of the output result registers held within the plurality of processing elements back to the data formatters as result wavefronts;
(h) storing the result wavefront data back to a host or data sink; and
(i) repeating the steps (f) to (h) a number of times, said number of times being dictated by the matrix operation being performed.
PCT/AU1993/000573 1992-11-05 1993-11-05 Scalable dimensionless array WO1994010638A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU54126/94A AU5412694A (en) 1992-11-05 1993-11-05 Scalable dimensionless array
CA002148719A CA2148719A1 (en) 1992-11-05 1993-11-05 Scalable dimensionless array

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPL5697 1992-11-05
AUPL569792 1992-11-05

Publications (1)

Publication Number Publication Date
WO1994010638A1 true WO1994010638A1 (en) 1994-05-11

Family

ID=3776520

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU1993/000573 WO1994010638A1 (en) 1992-11-05 1993-11-05 Scalable dimensionless array

Country Status (3)

Country Link
AU (1) AU5412694A (en)
CA (1) CA2148719A1 (en)
WO (1) WO1994010638A1 (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2393278A (en) * 2002-09-17 2004-03-24 Micron Europe Ltd Transposing data in an array of processing elements by shifting data diagonally
GB2393280A (en) * 2002-09-17 2004-03-24 Micron Europe Ltd Transposing data in a plurality of processing elements using a memory stack.
US7263543B2 (en) 2003-04-23 2007-08-28 Micron Technology, Inc. Method for manipulating data in a group of processing elements to transpose the data using a memory stack
US7581080B2 (en) 2003-04-23 2009-08-25 Micron Technology, Inc. Method for manipulating data in a group of processing elements according to locally maintained counts
US7596678B2 (en) 2003-04-23 2009-09-29 Micron Technology, Inc. Method of shifting data along diagonals in a group of processing elements to transpose the data
US7676648B2 (en) 2003-04-23 2010-03-09 Micron Technology, Inc. Method for manipulating data in a group of processing elements to perform a reflection of the data
US7913062B2 (en) 2003-04-23 2011-03-22 Micron Technology, Inc. Method of rotating data in a plurality of processing elements
EP2302510A1 (en) * 1998-08-24 2011-03-30 MicroUnity Systems Engineering, Inc. System for matrix multipy operation with wide operand architecture and method
WO2018228703A1 (en) * 2017-06-16 2018-12-20 Huawei Technologies Co., Ltd. Multiply accumulator array and processor device
WO2019023046A1 (en) * 2017-07-24 2019-01-31 Tesla, Inc. Accelerated mathematical engine
CN110609804A (en) * 2018-06-15 2019-12-24 瑞萨电子株式会社 Semiconductor device and method of controlling semiconductor device
CN111291320A (en) * 2020-01-16 2020-06-16 西安电子科技大学 Double-precision floating-point complex matrix operation optimization method based on HXDDSP chip
WO2021108660A1 (en) * 2019-11-27 2021-06-03 Amazon Technologies, Inc. Systolic array including fused multiply accumulate with efficient prenormalization and extended dynamic range
US11113233B1 (en) 2020-06-29 2021-09-07 Amazon Technologies, Inc. Multiple busses in a grouped systolic array
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11157287B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system with variable latency memory access
US11232062B1 (en) 2020-06-29 2022-01-25 Amazon Technologies, Inc. Parallelism within a systolic array using multiple accumulate busses
US11308027B1 (en) 2020-06-29 2022-04-19 Amazon Technologies, Inc. Multiple accumulate busses in a systolic array
US11308026B1 (en) 2020-06-29 2022-04-19 Amazon Technologies, Inc. Multiple busses interleaved in a systolic array
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11422773B1 (en) 2020-06-29 2022-08-23 Amazon Technologies, Inc. Multiple busses within a systolic array processing element
EP4064040A1 (en) * 2021-03-25 2022-09-28 Intel Corporation Supporting 8-bit floating point format operands in a computing architecture
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11816446B2 (en) 2019-11-27 2023-11-14 Amazon Technologies, Inc. Systolic array component combining multiple integer and floating-point data types
US11842169B1 (en) 2019-09-25 2023-12-12 Amazon Technologies, Inc. Systolic multiply delayed accumulate processor architecture
US11880682B2 (en) 2021-06-30 2024-01-23 Amazon Technologies, Inc. Systolic array with efficient input reduction and extended array performance
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4543642A (en) * 1982-01-26 1985-09-24 Hughes Aircraft Company Data Exchange Subsystem for use in a modular array processor
US4686645A (en) * 1983-07-28 1987-08-11 National Research Development Corporation Pipelined systolic array for matrix-matrix multiplication
US4933895A (en) * 1987-07-10 1990-06-12 Hughes Aircraft Company Cellular array having data dependent processing capabilities
US5095527A (en) * 1988-08-18 1992-03-10 Mitsubishi Denki Kabushiki Kaisha Array processor


Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2302510A1 (en) * 1998-08-24 2011-03-30 MicroUnity Systems Engineering, Inc. System for matrix multipy operation with wide operand architecture and method
GB2393278B (en) * 2002-09-17 2006-08-09 Micron Europe Ltd Method for manipulating data in a group of processing elements to transpose the data
GB2393280B (en) * 2002-09-17 2006-01-18 Micron Europe Ltd Method for manipulating data in a group of processing elements to transpose the data using a memory stack
GB2393278A (en) * 2002-09-17 2004-03-24 Micron Europe Ltd Transposing data in an array of processing elements by shifting data diagonally
GB2393280A (en) * 2002-09-17 2004-03-24 Micron Europe Ltd Transposing data in a plurality of processing elements using a memory stack.
US7263543B2 (en) 2003-04-23 2007-08-28 Micron Technology, Inc. Method for manipulating data in a group of processing elements to transpose the data using a memory stack
US7581080B2 (en) 2003-04-23 2009-08-25 Micron Technology, Inc. Method for manipulating data in a group of processing elements according to locally maintained counts
US7596678B2 (en) 2003-04-23 2009-09-29 Micron Technology, Inc. Method of shifting data along diagonals in a group of processing elements to transpose the data
US7676648B2 (en) 2003-04-23 2010-03-09 Micron Technology, Inc. Method for manipulating data in a group of processing elements to perform a reflection of the data
US7913062B2 (en) 2003-04-23 2011-03-22 Micron Technology, Inc. Method of rotating data in a plurality of processing elements
US7930518B2 (en) 2003-04-23 2011-04-19 Micron Technology, Inc. Method for manipulating data in a group of processing elements to perform a reflection of the data
US8135940B2 (en) 2003-04-23 2012-03-13 Micron Technologies, Inc. Method of rotating data in a plurality of processing elements
US8856493B2 (en) 2003-04-23 2014-10-07 Micron Technology, Inc. System of rotating data in a plurality of processing elements
WO2018228703A1 (en) * 2017-06-16 2018-12-20 Huawei Technologies Co., Ltd. Multiply accumulator array and processor device
US11157287B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system with variable latency memory access
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
KR20200027011A (en) * 2017-07-24 2020-03-11 테슬라, 인크. Accelerated math engine
CN111095241A (en) * 2017-07-24 2020-05-01 特斯拉公司 Accelerated math engine
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
JP2020528621A (en) * 2017-07-24 2020-09-24 テスラ,インコーポレイテッド Accelerated math engine
EP3659051A4 (en) * 2017-07-24 2021-04-14 Tesla, Inc. Accelerated mathematical engine
EP4258182A3 (en) * 2017-07-24 2024-01-03 Tesla, Inc. Accelerated mathematical engine
CN111095241B (en) * 2017-07-24 2023-09-12 特斯拉公司 Accelerating math engine
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
WO2019023046A1 (en) * 2017-07-24 2019-01-31 Tesla, Inc. Accelerated mathematical engine
KR20220007709A (en) * 2017-07-24 2022-01-18 테슬라, 인크. Accelerated mathematical engine
KR102353241B1 (en) * 2017-07-24 2022-01-19 테슬라, 인크. Accelerated Math Engine
KR102557589B1 (en) 2017-07-24 2023-07-20 테슬라, 인크. Accelerated mathematical engine
US11698773B2 (en) 2017-07-24 2023-07-11 Tesla, Inc. Accelerated mathematical engine
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11403069B2 (en) * 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
KR20220140028A (en) * 2017-07-24 2022-10-17 테슬라, 인크. Accelerated mathematical engine
KR102452757B1 (en) * 2017-07-24 2022-10-11 테슬라, 인크. Accelerated mathematical engine
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
CN110609804A (en) * 2018-06-15 2019-12-24 瑞萨电子株式会社 Semiconductor device and method of controlling semiconductor device
US11842169B1 (en) 2019-09-25 2023-12-12 Amazon Technologies, Inc. Systolic multiply delayed accumulate processor architecture
US11816446B2 (en) 2019-11-27 2023-11-14 Amazon Technologies, Inc. Systolic array component combining multiple integer and floating-point data types
WO2021108660A1 (en) * 2019-11-27 2021-06-03 Amazon Technologies, Inc. Systolic array including fused multiply accumulate with efficient prenormalization and extended dynamic range
US11467806B2 (en) 2019-11-27 2022-10-11 Amazon Technologies, Inc. Systolic array including fused multiply accumulate with efficient prenormalization and extended dynamic range
CN111291320A (en) * 2020-01-16 2020-06-16 西安电子科技大学 Double-precision floating-point complex matrix operation optimization method based on HXDDSP chip
CN111291320B (en) * 2020-01-16 2023-12-15 西安电子科技大学 Double-precision floating point complex matrix operation optimization method based on HXPS chip
US11232062B1 (en) 2020-06-29 2022-01-25 Amazon Technologies, Inc. Parallelism within a systolic array using multiple accumulate busses
US11422773B1 (en) 2020-06-29 2022-08-23 Amazon Technologies, Inc. Multiple busses within a systolic array processing element
US11762803B2 (en) 2020-06-29 2023-09-19 Amazon Technologies, Inc. Multiple accumulate busses in a systolic array
US11113233B1 (en) 2020-06-29 2021-09-07 Amazon Technologies, Inc. Multiple busses in a grouped systolic array
US11308027B1 (en) 2020-06-29 2022-04-19 Amazon Technologies, Inc. Multiple accumulate busses in a systolic array
US11308026B1 (en) 2020-06-29 2022-04-19 Amazon Technologies, Inc. Multiple busses interleaved in a systolic array
EP4064040A1 (en) * 2021-03-25 2022-09-28 Intel Corporation Supporting 8-bit floating point format operands in a computing architecture
US11880682B2 (en) 2021-06-30 2024-01-23 Amazon Technologies, Inc. Systolic array with efficient input reduction and extended array performance

Also Published As

Publication number Publication date
CA2148719A1 (en) 1994-05-11
AU5412694A (en) 1994-05-24

Similar Documents

Publication Publication Date Title
WO1994010638A1 (en) Scalable dimensionless array
US11456856B2 (en) Method of operation for a configurable number theoretic transform (NTT) butterfly circuit for homomorphic encryption
US10120649B2 (en) Processor and method for outer product accumulate operations
US5473554A (en) CMOS multiplexor
US5596763A (en) Three input arithmetic logic unit forming mixed arithmetic and boolean combinations
US5606677A (en) Packed word pair multiply operation forming output including most significant bits of product and other bits of one input
US5805913A (en) Arithmetic logic unit with conditional register source selection
USRE44190E1 (en) Long instruction word controlling plural independent processor operations
US6058473A (en) Memory store from a register pair conditional upon a selected status bit
US6016538A (en) Method, apparatus and system forming the sum of data in plural equal sections of a single data word
EP0267729B1 (en) An orthogonal transform processor
EP0186958A2 (en) Digital data processor for matrix-vector multiplication
US6026484A (en) Data processing apparatus, system and method for if, then, else operation using write priority
US4853887A (en) Binary adder having a fixed operand and parallel-serial binary multiplier incorporating such an adder
Lim et al. A serial-parallel architecture for two-dimensional discrete cosine and inverse discrete cosine transforms
US5363322A (en) Data processor with an integer multiplication function on a fractional multiplier
EP4206996A1 (en) Neural network accelerator with configurable pooling processing unit
US5689695A (en) Conditional processor operation based upon result of two consecutive prior processor operations
Lei et al. FPGA implementation of an exact dot product and its application in variable-precision floating-point arithmetic
Ruiz et al. Parallel-pipeline 8/spl times/8 forward 2-D ICT processor chip for image coding
de Lange et al. Design and implementation of a floating-point quasi-systolic general purpose CORDIC rotator for high-rate parallel data and signal processing.
US9223743B1 (en) Multiplier operable to perform a variety of operations
US20080126468A1 (en) Decoding apparatus for vector booth multiplication
Schmidt et al. KPROC—an instruction systolic architecture for parallel prefix applications
McKinney et al. VLSI design of an FFT processor network

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR BY CA CH CZ DE DK ES FI GB HU JP KP KR KZ LK LU LV MG MN MW NL NO NZ PL PT RO RU SD SE SK UA US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2148719

Country of ref document: CA

122 Ep: pct application non-entry in european phase