US20050049984A1 - Neural networks and neural memory - Google Patents

Neural networks and neural memory

Info

Publication number
US20050049984A1
Authority
US
United States
Prior art keywords
threshold
neural
sum
bit
weightless
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/960,032
Inventor
Douglas King
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BAE Systems PLC
Original Assignee
BAE Systems PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB Application 9726752.0 (GB9726752D0)
Priority claimed from GB Application 9823361.2 (GB9823361D0)
Application filed by BAE Systems PLC
Priority to US10/960,032
Publication of US20050049984A1
Legal status: Abandoned (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/02 Comparing digital values
    • G06F 7/026 Magnitude comparison, i.e. determining the relative order of operands based on their numerical value, e.g. window comparator
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M 7/14 Conversion to or from non-weighted codes
    • H03M 7/16 Conversion to or from unit-distance codes, e.g. Gray code, reflected binary code
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M 7/14 Conversion to or from non-weighted codes
    • H03M 7/16 Conversion to or from unit-distance codes, e.g. Gray code, reflected binary code
    • H03M 7/165 Conversion to or from thermometric code


Abstract

A neural pattern matcher is made up of an array of first sum and threshold SAT1 devices 18, each of which receives a number of inputs and a threshold value and fires a 1 output if the number of inputs set exceeds the threshold value. The outputs of the array of SAT1 devices may be considered as a 2D image or generic template against which new data supplied into the registers 26 making up a data plane 24 are correlated at a correlation plane 20 of EX-NOR gates 22. The outputs of the EX-NOR gates may themselves be summed and thresholded by a second sum and threshold device 28 to provide a neural output '1' or '0' indicating match or no match. The matcher may therefore behave as a neural auto-associative memory which continually adapts to the input data to recognize data of a particular specified class.

Description

  • This invention relates to neural networks incorporating sum and threshold devices and in particular, but not exclusively to such networks capable of functioning as a neural pattern matcher. The invention also extends to sum and threshold devices for receiving weightless synaptic inputs and a weighted threshold value.
  • The apparatus and methods described herein may usefully incorporate, utilise, be used with or incorporated into any of the apparatus or methods described in our co-pending U.K. Patent Application No. 9726752.0 or our co-pending PCT Patent Applications Nos. PCT/GB98/______, PCT/GB98/______, PCT/GB98/______, (Our references 03-7127, XA1154, XA1156 and XA1000), the entire contents of which are incorporated herein by reference.
  • Terminology
  • The term “Hamming value” is used to define the number of bits set in 1-dimensional arrays such as a binary number, tuple, vector or 2 or higher dimensional arrays, that is the number of 1's set. The Hamming value relationship of two binary numbers or arrays indicates which has the greater Hamming value or whether the Hamming values are the same.
  • The term “weighted binary” is used in the conventional sense to indicate that successive bit positions are weighted, particularly . . . 16, 8, 4, 2, 1 although other weighted representations are possible. “Weightless binary” is a set of binary digits 1 and 0, each representing just “1” and “0” respectively. There is no least significant bit (LSB) or most significant bit (MSB). The set of bits may be ordered or without order. If all the 1's are grouped together e.g. [111000] then the code is referred to as a thermometer code, thermocode or bar graph code, all collectively referred to herein as “thermometer codes”. Equally, the term thermometer code is used broadly to cover 1 or higher dimensional arrays in which the set bits have been aggregated around a pre-set focal bit, which may be anywhere in the array.
  • A set of weightless bits is referred to herein as a “weightless tuple” or “weightless vector” and these terms are not intended to be restricted to ordered sets.
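  • By way of illustration only (a minimal Python sketch, not part of the patent disclosure; the function names are ours), the Hamming value of a weightless tuple, and its rendering as a thermometer code, may be modelled as follows:

        # Hamming value: the number of 1's set in a weightless tuple.
        def hamming_value(bits):
            return sum(bits)

        # A thermometer code groups all the set bits together, e.g. a tuple
        # of Hamming value 3 over six bits becomes [1, 1, 1, 0, 0, 0].
        def to_thermometer(bits):
            h = hamming_value(bits)
            return [1] * h + [0] * (len(bits) - h)

        assert hamming_value([1, 0, 1, 1, 0, 0]) == 3
        assert to_thermometer([1, 0, 1, 1, 0, 0]) == [1, 1, 1, 0, 0, 0]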
  • In traditional neural networks, a real-valued synaptic value is multiplied by a synaptic connection strength or weight value, and summed with other similarly treated synapses before they are all summed and thresholded to form a neural output. The weight value is a real-valued synaptic connection strength and hence the common usage of the term “weighted neural network”. However, it is also possible to have binary RAM-based neural networks that do not employ real-valued connection weights but instead rely on the values of the binary bits being either 0 or 1. Accordingly, there are two contexts of weightlessness: without synaptic connection strength, and without binary code weighting. The arrangements described herein employ weightless binary manipulation mechanisms and may be used to engineer weightless artificial neural networks, otherwise referred to as weightless-weightless artificial neural networks.
  • In one context, this invention is concerned with the comparison of two weightless vectors in terms of their Hamming values. This process is broadly equivalent to the function of a binary neuron. If the neuron receives a vector, A, of weightless synaptic values (e.g. [10110010]), and a vector, T, of weightless neural threshold values (e.g. [00101000]), the neuron may be required to fire because the Hamming value of A is greater than the Hamming value of T. In this example, the threshold, T, can be thought of as a set of inhibitory synaptic values which must be exceeded if the neuron is to be fired. The neural networks, devices, and techniques disclosed herein may be used in flight control systems, voting systems with redundancy, safety critical systems, telecommunications systems, decision making systems, and artificial intelligence systems, such as neural networks.
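  • The binary-neuron comparison just described can likewise be sketched in software (an illustrative model only, using the example vectors above; the hardware realisations are the subject of the description that follows):

        # A weightless binary neuron fires when the Hamming value of its
        # synaptic input vector A exceeds that of its inhibitory threshold
        # vector T.
        def binary_neuron(A, T):
            return 1 if sum(A) > sum(T) else 0

        A = [1, 0, 1, 1, 0, 0, 1, 0]     # Hamming value 4
        T = [0, 0, 1, 0, 1, 0, 0, 0]     # Hamming value 2
        assert binary_neuron(A, T) == 1  # the neuron fires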
  • According to one aspect, this invention provides a neural network comprising:—
      • an array of bit memory means defining a neural memory for storing binary bits representing a plurality of exemplars,
      • an array of sum and threshold devices each for receiving as inputs respective bits from said bit memory means and for providing a preset output if the sum of said inputs exceeds a preset threshold, thereby to obtain a generic template representing said exemplars, and
      • means for comparing or correlating a set of input data with said generic template and providing an output representative of extent of matching between said set of input data and said generic template.
  • Preferably, said neural memory comprises means for storing a plurality of 2-dimensional data arrays. The neural network preferably includes means for presenting said input data in parallel to a data plane for correlation with said generic template. The means for correlating preferably comprises an array of logic elements. In a preferred embodiment the network includes further sum and threshold means for comparing the sum of said correlation results with a threshold and for providing an output representative of a match, if said sum exceeds said threshold.
  • The sum and threshold devices may take many forms but the, or at least one of the, sum and threshold devices preferably comprises a Hamming value comparator made up of a plurality of interconnected bit manipulation cells, each bit manipulation cell being operable to effect at least one of a bit shift and a bit elimination operation.
  • In another aspect, this invention provides a device for providing an output representative of a sum and threshold function performed on a weightless input and a threshold value, which comprises means for converting said weightless input into thermometer code (as herein defined), and means for monitoring the bit at a bit position corresponding to said threshold.
  • In a further aspect, this invention provides a neural network comprising:—
      • an array of bit memory means defining a neural network for storing binary bits representing a plurality of exemplars, and
      • an array of sum and threshold devices each for receiving as inputs respective bits from said bit memory means and for providing a preset output if the sum of said inputs exceeds a preset threshold, thereby to obtain a generic template representing said exemplars.
  • Whilst the invention has been described above, it extends to any inventive combination of the features set out above or in the following description.
  • The invention may be performed in various ways, and, by way of example only, various embodiments thereof will now be described in detail, reference being made to the accompanying drawings which utilise the conventional symbols for logic gates and in which:—
  • FIG. 1 is a schematic diagram of a sum and threshold (SAT) device;
  • FIG. 2 is a diagram of a neural network for processing data in accordance with this invention;
  • FIG. 3 is a diagram of a sum and threshold element with weightless synaptic inputs and a weighted binary threshold;
  • FIG. 4 is a diagram of a sum and threshold element of the type in FIG. 3, employing a thermometer code converter; and
  • FIG. 5 is a diagram of a bit manipulator cell used in the thermometer code converter of FIG. 4.
  • The embodiments described herein make use of sum and threshold devices, which may take many forms. Examples of novel Hamming value comparators which may serve as sum and threshold devices are described in our copending UK Patent Application No. 9726752.0 and our copending International Patent Application No. PCT/GB98/______ (our reference XA1154). Alternatively, the sum and threshold devices may take the form of a binary thermometer code converter of the type described in our co-pending International Patent Application No. PCT/GB98/______ (our reference 03-7127) with a suitable output selector, as will be described below. The Hamming value comparators and binary code converters described in these documents have the advantage that they can be implemented asynchronously, and are thus robust, fault tolerant and highly immune to RFI/EMI effects. Conventional sum and threshold devices may, of course, also be used in carrying out this invention.
  • Referring now to FIG. 1, neural data and a neural threshold are supplied to a Hamming value comparator 10 of one of the types discussed above, whether of one, two or of greater dimension. The input and threshold tuples are weightless. The output of the comparator 10 indicates whether the neural data has exceeded the neural threshold. The output is then viewed as the single output of the comparator or neuron which is taken to have “fired” if the bit has set. In this sense, the Hamming value comparator 10 acts as a binary neuron.
  • FIG. 2 shows an embodiment of a neural pattern matcher 12 which uses an array of sum and threshold devices, which may be those of the type just described, or any other suitable SAT element. For ease of visualisation, this is illustrated as a 3-dimensional array, comprising a neural memory block 14 of dimensions w×d×m, where ‘m’ is the number of bits in the digital input word, ‘w’ is the number of bits in the width of the input pattern and ‘d’ is the number of exemplars in the neural memory. The arrangement of neural memory is referred to elsewhere herein as neuroram.
  • The 3-dimensional array further includes a first sum and threshold region 16 (otherwise referred to as SAT columns) made up of (w×m) SAT devices 18, each marked SAT1 and having d inputs (only one of these is shown for clarity). Beneath the sum and threshold region 16 there is a correlation plane 20 made up of (w×m) 2-input EX-NOR gates 22, each of which receives an input from an associated SAT1 device 18 and an input from an associated data plane 24 which, optionally, may be held in separate bit memories 26. The outputs of the EX-NOR gates 22 in the correlation plane 20 are passed horizontally to a second sum and threshold region (referred to as a SAT plane), which consists of a single SAT2 device 28 with w×m inputs.
  • Respective thresholds T1 of up to d bits are supplied to the SAT1 devices 18, and a threshold T2 of up to w×m bits is supplied to the SAT2 device 28.
  • In use, incoming digital data is encoded using a unit Hamming distance code such as thermometer code or Gray code, or is in the form of a half-tone bit map, etc. Data for training (or for recognition after learning) is presented in parallel to the input of the data plane 24, or sequentially using a planar shift register (not shown). Optionally, the coded data may also be scrambled with a binary key string using EX-OR gates. The neural memory is programmed with 'd' exemplars each of w×m bits, which can be prestored, learnt in a separate training routine, or adaptively altered during use.
  • The thresholds T1 and T2 determine the learning rate and quality factor of the neural pattern matcher/neural filter and are set according to fixed values, e.g. 66%. Alternatively the threshold may be adaptively set to determine the irritability (firing rate) of the system. Thresholds T1 and T2 may be thought of as a control over the degree of confidence attached to a match.
  • In considering operation of the device it is helpful to consider the action of one of the SAT1 devices. If the threshold T1 for that particular device is set at 66% of the number of exemplars (e.g. T1 is 8 if the neural memory is 12 bits deep), then the SAT1 device will provide a set bit or “fire” if there are more than 8 bits set in the column of neural memory above it. If T1 is raised, then the SAT1 device will not fire until a greater proportion of the bits in the memory above it are set; in other words the degree of similarity of the bits in that column must be higher for the SAT1 to fire.
  • The outputs of all the (w×m) SAT1 devices may be considered as a generic template against which new data in the data plane is compared.
  • The threshold T2 is an indicator of the degree of confidence insofar as the greater T2, the greater the number of correlations there have to be between the input data plane and the generic template plane for the pattern to be accepted.
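  • The behaviour of the matcher described above may be summarised in a short software model (a sketch only, on the assumption that the neural memory is held as d rows of w×m bits each; the invention itself is directed to the hardware realisation):

        # Software model of the neural pattern matcher (neuroram) of FIG. 2.
        # memory holds d exemplars, each a list of w*m bits.
        def generic_template(memory, T1):
            columns = zip(*memory)  # one column of d bits above each SAT1
            # Each SAT1 device fires if more than T1 of the d bits in its
            # column of neural memory are set.
            return [1 if sum(col) > T1 else 0 for col in columns]

        def matches(data_plane, memory, T1, T2):
            template = generic_template(memory, T1)
            # Correlation plane: one 2-input EX-NOR gate per bit position.
            correlations = [1 if a == b else 0
                            for a, b in zip(data_plane, template)]
            # SAT2 fires (match) if the correlation count exceeds T2.
            return 1 if sum(correlations) > T2 else 0

        # Example: three exemplars of four bits each; T1 = 1 (more than one
        # of the three bits in a column must be set) and T2 = 2 (more than
        # two of the four bit positions must correlate).
        memory = [[1, 0, 1, 1],
                  [1, 0, 0, 1],
                  [1, 1, 1, 1]]
        assert generic_template(memory, 1) == [1, 0, 1, 1]
        assert matches([1, 0, 1, 1], memory, 1, 2) == 1  # accepted
        assert matches([0, 1, 0, 0], memory, 1, 2) == 0  # rejected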
  • The update of the neural memory can be “supervised” during a training phase in which “d” exemplars or patterns of the required class are presented to the system and the neural memory then frozen. Alternatively, the update of neural memory can be unsupervised, so that the system continuously learns or adapts to incoming patterns after some initial patterns have been fed in. Initially it may be wise to set the neural memory to random values when used in an unsupervised mode of training. To prevent drifting, a portion of the memory may be made “read only”.
  • In this arrangement, the array may be regarded as a pattern matcher, or a category or class correlator. It is analogous in some respects to a cortical column in physiological neural systems, and a group of arrays or neurorams is analogous to a cortical map. A group of neurorams can be ordered linearly or in planes. Planar arrangements can be formed from sub-patterns of neurorams, such as triangles, squares, hexagons, etc. The array provides a hardware embodiment of a neural auto-associative memory.
  • As a modification of the sum and threshold technique described above, a Sum and Threshold element has been designed that accepts a weightless binary input and a weighted binary threshold. This Sum and Threshold element utilises a thermometer code converter, for example based on the thermometer code converter array described in our copending UK Patent Application 9726752.0 or International Patent Application PCT/GB98/______ (03-7127) and the appropriate output or outputs of the array are monitored or processed in accordance with the value of the threshold.
  • FIG. 3 shows a general arrangement of a sum and threshold element with weightless synaptic inputs and a weighted binary threshold. N weightless bits are supplied to a thermometer code converter 30 to obtain thermometer code Nt, which is passed to one or more selectors 32, each of which also receives a respective weighted threshold value T1, T2, etc. The selector 32 decodes the weighted threshold value and looks at the appropriate bit position in the thermometer code Nt, and if set (indicating that the thermometer code has a Hamming value greater than the specified threshold), the selector sets its output bit to indicate that N>T.
  • FIG. 4 is a circuit diagram of an example of a sum and threshold device having eight synaptic inputs and a choice of four thresholds. The device comprises a thermometer code converter section 34, made up of 2-bit manipulator cells 36 of the type shown in FIG. 5. The 2-bit manipulator cells each comprise an OR gate 38 and an AND gate 40, interconnected such that inputs a,b map to outputs Ya,Yb as follows:—
      • Ya=A OR B
      • Yb=A AND B
  • It should be noted that in the device of FIG. 4, there are eight inputs, thus requiring odd layers nominally four 2-bit manipulator cells 36 wide and even layers nominally three 2-bit manipulator cells 36 wide, making up eight layers in all, although the fifth to eighth layers have been truncated in this case.
  • The thresholds in this example are I>6, I>5, I>4 and I>3, meaning that only the fourth to seventh outputs (from the bottom of the array as viewed) are required. Because of the truncation, the fifth to seventh layers include AND gates 42 at the lower truncation boundary, and the seventh layer includes an OR gate 44.
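  • Functionally, the converter section behaves as a bit-sorting network: each cell passes the OR of its two inputs towards one rail and the AND towards the other, so that successive layers aggregate the set bits to one end of the array. A software sketch of a full, untruncated eight-input converter of this structure follows (an illustrative model under that interpretation; the truncated layers and the boundary gates 42, 44 of FIG. 4 are omitted):

        # Thermometer code converter modelled as eight alternating layers of
        # 2-bit manipulator cells (Ya = A OR B, Yb = A AND B).
        def cell(a, b):
            return a | b, a & b              # OR output, AND output

        def thermometer_convert(bits):
            bits = list(bits)                # eight weightless input bits
            for layer in range(len(bits)):   # odd layers: 4 cells; even: 3
                start = 0 if layer % 2 == 0 else 1
                for i in range(start, len(bits) - 1, 2):
                    bits[i], bits[i + 1] = cell(bits[i], bits[i + 1])
            return bits                      # set bits grouped at one end

        assert thermometer_convert([0, 1, 0, 1, 1, 0, 1, 0]) == \
               [1, 1, 1, 1, 0, 0, 0, 0]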
  • The output of the thermometer code conversion section 34 passes to a weighted binary selector 46 which acts as a threshold decoder. Such devices are already known for use as binary multiplexers or logical selectors. In this example, which allows selection of one of four threshold values 3, 4, 5, 6, the selector 46 comprises two weighted inputs 48, 50 which are each connected to the inputs of 3-input AND gates 52, the other input of each AND gate being a respective output from the thermometer code conversion section 34. Selected terminals of the lower three AND gates are inverted, and the outputs of the AND gates pass to an OR gate 54. Different permutations of 0's and 1's applied to the weighted inputs select different bit positions at the output of the thermometer code conversion section 34.
  • The selector 46 has the following mapping:—

        T1   T0   Thermometer code bit position
        0    0    4
        0    1    5
        1    0    6
        1    1    7
  • Thus if the weighted input is (1,0) the device will fire only if the Hamming value of the weightless input is greater than 5.
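  • Combining the converter with this selector mapping, the complete sum and threshold element of FIG. 4 can be modelled as follows (a sketch under the convention that thermometer code bit positions are counted from 1 at the set end, so that bit position p is set exactly when the Hamming value exceeds p-1):

        # Behaviourally equivalent to the gate-level converter network above.
        def thermometer_convert(bits):
            h = sum(bits)                    # Hamming value of the input
            return [1] * h + [0] * (len(bits) - h)

        # Weighted binary selector: threshold inputs (T1, T0) select
        # thermometer code bit positions 4 to 7, i.e. thresholds 3 to 6.
        def sum_and_threshold(inputs, t1, t0):
            thermo = thermometer_convert(inputs)
            position = 4 + ((t1 << 1) | t0)  # (0,0)->4 ... (1,1)->7
            return thermo[position - 1]      # set iff Hamming value > position - 1

        # Weighted input (1, 0) selects bit position 6: the element fires
        # only if the Hamming value of the weightless input exceeds 5.
        assert sum_and_threshold([1, 1, 1, 1, 1, 1, 0, 0], 1, 0) == 1  # HV 6
        assert sum_and_threshold([1, 1, 1, 1, 1, 0, 0, 0], 1, 0) == 0  # HV 5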
  • If two or more thresholds are to be determined, then further weighted binary selectors could be connected to the output of the thermometer code converter, as shown in FIG. 3.
  • It will be appreciated also that the circuit could be simplified to respond to a given specific threshold; in this instance a binary selector as such would not be required, and instead the output of the thermometer code converter corresponding to Tfixed+1, where Tfixed is the fixed threshold, would be used as the output.
  • The devices described above provide a robust hardware implementation which is fault tolerant by virtue of the weightless techniques employed.
  • In general, the implementation of the above arrangements is technology independent; electronic, ionic, magnetic or electromagnetic (e.g. optical) implementations are all suitable.

Claims (3)

1-7. (Cancelled)
8. A device for providing an output representative of a sum and threshold function performed on a weightless input and a threshold value, which comprises means for converting said weightless input into thermometer code as herein defined, and means for monitoring the bit at a bit position corresponding to said threshold.
9. A device for providing an output representative of a sum and threshold function performed on a weightless input and a threshold value, which comprises a converter for converting said weightless input into thermometer code as herein defined, and a monitor for monitoring the bit at a bit position corresponding to said threshold.
US10/960,032 1997-12-19 2004-10-08 Neural networks and neural memory Abandoned US20050049984A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/960,032 US20050049984A1 (en) 1997-12-19 2004-10-08 Neural networks and neural memory

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
GBGB9726752.0A GB9726752D0 (en) 1997-12-19 1997-12-19 Binary code converters and comparators
GB9823361.2 1998-10-27
GB9726752.0 1998-10-27
GBGB9823361.2A GB9823361D0 (en) 1998-10-27 1998-10-27 Neural networks and neural memory
PCT/GB1998/003832 WO1999033019A1 (en) 1997-12-19 1998-12-18 Neural networks and neural memory
US36858499A 1999-08-05 1999-08-05
US10/960,032 US20050049984A1 (en) 1997-12-19 2004-10-08 Neural networks and neural memory

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US36858499A Division 1997-12-19 1999-08-05

Publications (1)

Publication Number Publication Date
US20050049984A1    2005-03-03

Family

ID=26312796

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/368,585 Expired - Fee Related US6262676B1 (en) 1997-12-19 1999-08-05 Binary code converters and comparators
US10/960,032 Abandoned US20050049984A1 (en) 1997-12-19 2004-10-08 Neural networks and neural memory

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/368,585 Expired - Fee Related US6262676B1 (en) 1997-12-19 1999-08-05 Binary code converters and comparators

Country Status (7)

Country Link
US (2) US6262676B1 (en)
EP (2) EP1040582B1 (en)
JP (2) JP2001502834A (en)
AU (2) AU1678799A (en)
DE (2) DE69818863T2 (en)
ES (2) ES2209233T3 (en)
WO (2) WO1999033184A2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050076071A1 (en) * 2002-02-25 2005-04-07 King Douglas B Ordering by hamming value
US20070147568A1 (en) * 2005-02-12 2007-06-28 Kennen Technologies, Llc. General purpose set theoretic processor
US20070299797A1 (en) * 2006-06-26 2007-12-27 Saffron Technology, Inc. Nonlinear Associative Memories Using Linear Arrays of Associative Memory Cells, and Methods of Operating Same
US7774286B1 (en) 2006-10-24 2010-08-10 Harris Curtis L GPSTP with multiple thread functionality
US8667230B1 (en) * 2010-10-19 2014-03-04 Curtis L. Harris Recognition and recall memory
US10089577B2 (en) 2016-08-05 2018-10-02 Xilinx, Inc. Binary neural networks on progammable integrated circuits
US20190108280A1 (en) * 2017-10-10 2019-04-11 Alibaba Group Holding Limited Image search and index building
TWI708249B (en) * 2018-05-01 2020-10-21 美商超捷公司 Method and apparatus for high voltage generation for analog neural memory in deep learning artificial neural network
US10839286B2 (en) 2017-09-14 2020-11-17 Xilinx, Inc. System and method for implementing neural networks in integrated circuits
US11615300B1 (en) 2018-06-13 2023-03-28 Xilinx, Inc. System and method for implementing neural networks in integrated circuits

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0204410D0 (en) * 2002-02-25 2002-04-10 Bae Systems Plc Weighgtless thermocoder
EP1610465A4 (en) * 2003-03-25 2006-04-05 Fujitsu Ltd Encoder circuit and a/d converter circuit
JP4842989B2 (en) * 2008-03-28 2011-12-21 株式会社アドバンテスト Priority encoder, time digital converter and test device using the same
FR2933514B1 (en) * 2008-07-02 2012-10-19 Canon Kk SIMILARITY ENCODING AND DECODING METHODS AND DEVICES FOR XML TYPE DOCUMENTS
CN103365814B (en) * 2013-06-27 2016-08-17 深圳市汇顶科技股份有限公司 A kind of serial data transmission method and system thereof
US10819791B2 (en) * 2013-10-11 2020-10-27 Ge Aviation Systems Llc Data communications network for an aircraft

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4942517A (en) * 1987-10-08 1990-07-17 Eastman Kodak Company Enhanced input/output architecture for toroidally-connected distributed-memory parallel computers
US5029305A (en) * 1988-12-21 1991-07-02 Texas Instruments Incorporated Method and apparatus for error correction in thermometer code arrays
US5063521A (en) * 1989-11-03 1991-11-05 Motorola, Inc. Neuram: neural network with ram
US5072130A (en) * 1986-08-08 1991-12-10 Dobson Vernon G Associative network and signal handling element therefor for processing data
US5113507A (en) * 1988-10-20 1992-05-12 Universities Space Research Association Method and apparatus for a sparse distributed memory system
US5156009A (en) * 1988-11-11 1992-10-20 Transphere Systems Limited Method for storing produce
US5218562A (en) * 1991-09-30 1993-06-08 American Neuralogix, Inc. Hamming data correlator having selectable word-length
US5357597A (en) * 1991-06-24 1994-10-18 International Business Machines Corporation Convolutional expert neural system (ConExNS)
US5382955A (en) * 1993-11-04 1995-01-17 Tektronix, Inc. Error tolerant thermometer-to-binary encoder
US5426757A (en) * 1990-01-24 1995-06-20 Hitachi, Ltd. Data processing circuits in a neural network for processing first data stored in local register simultaneous with second data from a memory
US5454064A (en) * 1991-11-22 1995-09-26 Hughes Aircraft Company System for correlating object reports utilizing connectionist architecture
US5459466A (en) * 1995-02-23 1995-10-17 Tektronix, Inc. Method and apparatus for converting a thermometer code to a gray code
US5487133A (en) * 1993-07-01 1996-01-23 Intel Corporation Distance calculating neural network classifier chip and system
US5509106A (en) * 1990-05-22 1996-04-16 International Business Machines Corporation Triangular scalable neural array processor
US5630021A (en) * 1994-09-30 1997-05-13 United Microelectronics Corp. Hamming neural network circuit
US5892962A (en) * 1996-11-12 1999-04-06 Lucent Technologies Inc. FPGA-based processor
US5951711A (en) * 1993-12-30 1999-09-14 Texas Instruments Incorporated Method and device for determining hamming distance between two multi-bit digital words
US5974521A (en) * 1993-12-12 1999-10-26 Neomagic Israel Ltd. Apparatus and method for signal processing
US6035057A (en) * 1997-03-10 2000-03-07 Hoffman; Efrem H. Hierarchical data matrix pattern recognition and identification system
US6622134B1 (en) * 1999-01-05 2003-09-16 International Business Machines Corporation Method of constructing data classifiers and classifiers constructed according to the method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4586025A (en) 1985-10-04 1986-04-29 Tektronix, Inc. Error tolerant thermometer-to-binary encoder
EP0217009A3 (en) 1985-10-04 1989-05-03 Tektronix, Inc. Thermometer-to-adjacent binary encoder
GB2223369B (en) 1988-08-18 1992-11-18 Plessey Co Plc Analogue-to-digital converters
DE69212093T2 (en) * 1991-09-20 1997-01-16 Philips Electronics Nv Data recoding method for thermometric code, decoder and recoding device for using this method
JP2917095B2 (en) 1993-07-08 1999-07-12 テクトロニクス・インコーポレイテッド Thermometer code processing method and apparatus
DE69430528T2 (en) * 1994-07-28 2003-01-02 Ibm Search / sort circuit for neural networks

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5072130A (en) * 1986-08-08 1991-12-10 Dobson Vernon G Associative network and signal handling element therefor for processing data
US4942517A (en) * 1987-10-08 1990-07-17 Eastman Kodak Company Enhanced input/output architecture for toroidally-connected distributed-memory parallel computers
US5113507A (en) * 1988-10-20 1992-05-12 Universities Space Research Association Method and apparatus for a sparse distributed memory system
US5156009A (en) * 1988-11-11 1992-10-20 Transphere Systems Limited Method for storing produce
US5029305A (en) * 1988-12-21 1991-07-02 Texas Instruments Incorporated Method and apparatus for error correction in thermometer code arrays
US5063521A (en) * 1989-11-03 1991-11-05 Motorola, Inc. Neuram: neural network with ram
US5875347A (en) * 1990-01-24 1999-02-23 Hitachi, Ltd. Neural network processing system using semiconductor memories
US5426757A (en) * 1990-01-24 1995-06-20 Hitachi, Ltd. Data processing circuits in a neural network for processing first data stored in local register simultaneous with second data from a memory
US5509106A (en) * 1990-05-22 1996-04-16 International Business Machines Corporation Triangular scalable neural array processor
US5617512A (en) * 1990-05-22 1997-04-01 International Business Machines Corporation Triangular scalable neural array processor
US5357597A (en) * 1991-06-24 1994-10-18 International Business Machines Corporation Convolutional expert neural system (ConExNS)
US5218562A (en) * 1991-09-30 1993-06-08 American Neuralogix, Inc. Hamming data correlator having selectable word-length
US5454064A (en) * 1991-11-22 1995-09-26 Hughes Aircraft Company System for correlating object reports utilizing connectionist architecture
US5487133A (en) * 1993-07-01 1996-01-23 Intel Corporation Distance calculating neural network classifier chip and system
US5382955A (en) * 1993-11-04 1995-01-17 Tektronix, Inc. Error tolerant thermometer-to-binary encoder
US5974521A (en) * 1993-12-12 1999-10-26 Neomagic Israel Ltd. Apparatus and method for signal processing
US5951711A (en) * 1993-12-30 1999-09-14 Texas Instruments Incorporated Method and device for determining hamming distance between two multi-bit digital words
US5630021A (en) * 1994-09-30 1997-05-13 United Microelectronics Corp. Hamming neural network circuit
US5459466A (en) * 1995-02-23 1995-10-17 Tektronix, Inc. Method and apparatus for converting a thermometer code to a gray code
US5892962A (en) * 1996-11-12 1999-04-06 Lucent Technologies Inc. FPGA-based processor
US6035057A (en) * 1997-03-10 2000-03-07 Hoffman; Efrem H. Hierarchical data matrix pattern recognition and identification system
US6622134B1 (en) * 1999-01-05 2003-09-16 International Business Machines Corporation Method of constructing data classifiers and classifiers constructed according to the method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050076071A1 (en) * 2002-02-25 2005-04-07 King Douglas B Ordering by hamming value
US7392229B2 (en) 2005-02-12 2008-06-24 Curtis L. Harris General purpose set theoretic processor
US20070147568A1 (en) * 2005-02-12 2007-06-28 Kennen Technologies, Llc. General purpose set theoretic processor
US7657496B2 (en) 2006-06-26 2010-02-02 Saffron Technology, Inc. Nonlinear associative memories using linear arrays of associative memory cells, and methods of operating same
WO2008002510A3 (en) * 2006-06-26 2008-05-22 Saffron Technology Inc Nonlinear associative memories using linear arrays of associative memory cells, and methods of operating same
JP2009541914A (en) * 2006-06-26 2009-11-26 サフロン・テクノロジー,インコーポレイテッド Nonlinear associative memory using a linear array of associative memory cells and method of operation thereof
US20070299797A1 (en) * 2006-06-26 2007-12-27 Saffron Technology, Inc. Nonlinear Associative Memories Using Linear Arrays of Associative Memory Cells, and Methods of Operating Same
US7774286B1 (en) 2006-10-24 2010-08-10 Harris Curtis L GPSTP with multiple thread functionality
US8667230B1 (en) * 2010-10-19 2014-03-04 Curtis L. Harris Recognition and recall memory
US10089577B2 (en) 2016-08-05 2018-10-02 Xilinx, Inc. Binary neural networks on progammable integrated circuits
US10839286B2 (en) 2017-09-14 2020-11-17 Xilinx, Inc. System and method for implementing neural networks in integrated circuits
US20190108280A1 (en) * 2017-10-10 2019-04-11 Alibaba Group Holding Limited Image search and index building
WO2019075117A1 (en) * 2017-10-10 2019-04-18 Alibaba Group Holding Limited Image search and index building
TWI708249B (en) * 2018-05-01 2020-10-21 美商超捷公司 Method and apparatus for high voltage generation for analog neural memory in deep learning artificial neural network
US11615300B1 (en) 2018-06-13 2023-03-28 Xilinx, Inc. System and method for implementing neural networks in integrated circuits

Also Published As

Publication number Publication date
US6262676B1 (en) 2001-07-17
WO1999033019A1 (en) 1999-07-01
EP1040582A2 (en) 2000-10-04
ES2209233T3 (en) 2004-06-16
WO1999033184A2 (en) 1999-07-01
DE69815390T2 (en) 2004-05-13
DE69818863D1 (en) 2003-11-13
JP2001502834A (en) 2001-02-27
EP1038260B1 (en) 2003-06-04
AU1678799A (en) 1999-07-12
DE69818863T2 (en) 2004-09-09
ES2197520T3 (en) 2004-01-01
JP3413213B2 (en) 2003-06-03
WO1999033184A3 (en) 1999-09-02
AU1769699A (en) 1999-07-12
EP1040582B1 (en) 2003-10-08
EP1038260A1 (en) 2000-09-27
JP2001502879A (en) 2001-02-27
DE69815390D1 (en) 2003-07-10

Similar Documents

Publication Publication Date Title
US20050049984A1 (en) Neural networks and neural memory
Hong Parallel, self-organizing hierarchical neural networks
US5151969A (en) Self-repairing trellis networks
Blum Necessary conditions for optimum distributed sensor detectors under the Neyman-Pearson criterion
US5216750A (en) Computation system and method using hamming distance
Yan Prototype optimization for nearest neighbor classifiers using a two-layer perceptron
EP1038215B9 (en) Hamming value comparison for unweighted bit arrays
Zollner et al. Fast generating algorithm for a general three-layer perceptron
US6347309B1 (en) Circuits and method for shaping the influence field of neurons and neural networks resulting therefrom
US5870728A (en) Learning procedure for multi-level neural network
Amarnath et al. Addressing soft error and security threats in dnns using learning driven algorithmic checks
US5712959A (en) Neural network architecture for non-Gaussian components of a mixture density function
Wang et al. Binary neural network training algorithms based on linear sequential learning
Wan et al. Efficient error-correcting output codes for adversarial learning robustness
Ersoy et al. Parallel, self-organizing, hierarchical neural networks. II
Liou et al. Optimally Spaced Autoencoder
Brodsky et al. Binary backpropagation in content addressable memory
JP4696529B2 (en) Multi-layer neural network device and its software
Hussain et al. Decoding a class of nonbinary codes using neural networks
Drucker Implementation of minimum error expert system
Pedroni et al. Learning in the hypercube
Tseng et al. An optimal dimension expansion procedure for obtaining linearly separable subsets
SU785867A1 (en) Device for determining maximum number from a group of numbers
LEVARY Rule-based artificial neural networks
Kang et al. Large scale pattern recognition system using hierarchical neural network and false-alarming nodes

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE