WO1991014235A1 - Recognition of patterns in images - Google Patents

Recognition of patterns in images

Info

Publication number
WO1991014235A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
image
panel
index
size
Application number
PCT/US1991/001534
Other languages
French (fr)
Inventor
Robert L. Harvey
Paul N. Dicaprio
Karl G. Heinemann
Original Assignee
Massachusetts Institute Of Technology
Application filed by Massachusetts Institute Of Technology
Publication of WO1991014235A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G01N15/1433
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00 Investigating characteristics of particles; Investigating permeability, pore-volume, or surface-area of porous materials
    • G01N15/10 Investigating individual particles
    • G01N15/14 Electro-optical investigation, e.g. flow cytometers
    • G01N15/1468 Electro-optical investigation, e.g. flow cytometers with spatial resolution of the texture or inner structure of the particle

Definitions

  • The four orientation signals generated by visarea 1 for each of the 625 7 by 7 pixel subwindows yield a total of 2500 orientation values for the entire window.
  • the 2500 orientation signal values 71 generated by visarea 1 can be arrayed as lines on a spectrum 72 in which the length of each horizontal line represents the magnitude of the signal.
  • the positions along the spectrum may be thought of as corresponding to different "frequencies".
  • The orientation signal lines for each subwindow are arranged in order as shown, and the successive subwindows in the spiral order are arranged in order along the spectrum so that the first subwindow's lines appear first.
  • the outer subwindows of the windowed image are nearer the top of the spectrum (lower frequency) and the inner subwindows are nearer the bottom.
  • The second feature-generating module in the classification channel is the visarea 2 module 32.
  • the function of this module is to detect edges near the perimeter of the 175 by 175 pixel window. Since only the outside edges of the pattern are of interest in this step, the window image is first defocused by a 25 by 25 average unit 49.
  • this averaging smears details of the pattern (the detail is captured by visarea 1), but retains the necessary outside edge information.
  • the averaging produces a single smeared 7 by 7 pixel image 230, 230' of the pattern in the 175 by 175 window 232, 232'. As shown, the averaging simplifies the pattern edges to enable them to be easily detected.
  • visarea 2 includes four neural networks, 234, 236, 238, 240, each of which detects the presence or absence of an edge.
  • Two 3 by 7 pixel detectors 234, 236 detect the presence of nearly horizontal edges respectively at the top and bottom of the window image.
  • Two 7 by 3 pixel detectors 238, 240 detect the presence of nearly vertical edges respectively at the left and right sides of the window image.
  • edge detectors are like the ones in visarea 1 except the input images are now 7 by 3 or 3 by 7 instead of 7 by 7.
  • Each detector uses 25 neurons with fixed interconnection weights.
  • A set of actual interconnection weights for these four neural networks is set forth in Appendix B. Only one set of four matrices is provided; these may be used in all four of the different detectors simply by appropriately rotating or flipping the input image.
  • the output of the visarea 2 unit is four spectrum lines 45 which measure the north, south, east, and west edge strengths. These four lines also comprise part of the spectrum 72 used by the classifier.
  • For a pattern that fills the window, the four network outputs of visarea 2 are all high, while for a pattern in the lower right corner, the north and west outputs are low while the south and east outputs are high.
  • The third feature-generating module is a sum module 54. This module sums the pixel values in the 175 by 175 pixel window. The computed sum is a measure of the gross size of the pattern in the window and it is used as one of the input spectrum values to the classifier.
  • classification is achieved by interpreting a combination of the visual feature measures discussed above.
  • These feature measures include some values which have been only slightly processed (the output of the sum module), some moderately processed (the output of the visarea 2 module), and some highly processed (the output of the visarea 1 module). Because the spectrum includes lines from visarea 1, from visarea 2, and from sum, the classifier receives a mixture of coarse and fine information.
  • the visarea 1 module outputs are adjusted by subtracting the minimum (usually negative) of all of the visarea 1 outputs from each of the visarea 1 outputs to ensure that the visarea 1 portion of the spectrum is entirely positive with a minimum value of zero.
  • The visarea 2 and sum outputs are multiplied by scale factors which depend on the window size used in LGN 30 (Fig. 2). For a window size of 175 by 175, the scale factors are 0.1 for the visarea 2 outputs and 0.01 for the sum module output. For a window size of 42 by 42, the factors are 1.5 and 0.3 respectively. This weighting ensures that the classifier gives equal significance to information about size, edges, and detail structure.
  • Classification is done, using the spectrum 72 of information, by an unsupervised classifier (the ITC1 module) followed by a supervised classifier (the ITC2 module).
  • The unsupervised classifier ITC1 module uses the ART 2 classifier technique of G. Carpenter and S. Grossberg.
  • the input spectrum is "impressed" on the bottom layer of the ART2.
  • This classifier automatically selects characteristics of the input spectrum (or pattern) to define a category. Subsequent patterns are compared to patterns stored in the long-term memory (LTM) trace 59.
  • ART2 is a two-slab neural network. One slab is called F1 and consists of 3 interacting layers which perform noise filtering and signal enhancement. The second slab is called F2 and consists of a single interacting layer. The F2 neurons are used to indicate by their activity the category of the input pattern.
  • The input patterns, after processing by F1, are judged to be close to or far from the LTM traces. If a new input spectrum is different from previous spectra, then a new category is defined for the input. If a new input spectrum is similar to a previous category, then the existing category is updated with an additional example.
  • the classifier is 'trained' by presenting to it a sequence of example patterns which are then categorized by ITC1. In principle, if the examples are sufficiently different, a distinct category will be defined for each example. If some of the examples are similar to one another, then a smaller number of categories are defined.
  • The orient unit 250 determines the closeness of the match between the input and a stored pattern based on a positive number (the vigilance parameter).
  • The confidence unit 252 associates the closeness measure with a confidence level for the classification.
  • For example, the first ten ITC1 output nodes are in a category (say category 1) that is trucks.
  • For each identified pattern the system stores the class name 73, the confidence level 75, and the location 77 in the FOV. If the confidence level is not high enough, then the system tries to identify the pattern by evaluating the input image again, as explained below.
  • the function of the location channel is to isolate an individual pattern in the FOV so that the classification channel processing can be applied to that pattern.
  • The location channel includes a Superior Colliculus (superc) module 18 and also makes use of the LGN, visarea 2, and Posterior Parietal Cortex (PPC) modules.
  • the location channel supports both feedforward and feedback flows of signals.
  • Selection of the active window involves a two-stage process consisting of coarse location followed by fine location and pull-in.
  • the superc module performs the coarse location procedure.
  • a modified ART2 neural network is used to grossly locate objects of interest within the FOV.
  • the F2 slab of the ART2 is used to impress a stored LTM trace on the top layer of the F1 slab. LTM traces for the general shapes of interest are computed off-line and stored in the superc.
  • the system is 'primed' to locate a particular class of objects.
  • a 175 by 175 pixel window is extracted from the input image and impressed on the bottom layer of the ART2.
  • the pattern specified by the LTM trace 19 is compared to the windowed image.
  • the LTM trace is designed so that an object of the correct general size will cause a match, even if off-center, to indicate its presence.
  • A row map unit 24 is used to map the windowed input to the ART2 input. Because the input window is 175 by 175, there are 30,625 input pixels delivered to the ART2. If no match is found, then another non-overlapping window in the image is input as the active window and evaluated for the presence of an object.
  • the degree of match between the image pattern and the LTM traces is used as an enable signal 23 to the LGN module.
  • the selection of the coarse window position from among the nine possible windows is done by a fovea move unit 20.
  • the coarse position 22 is sent to the row map unit, and to the PPC module for further adjustment.
  • the second stage of the location process is the fine adjustment and pull-in stage.
  • This pull-in stage is done by a feedback path which includes the LGN, visarea 2, and PPC modules.
  • the function of the LGN and visarea 2 modules was described above.
  • The center of attention, or fovea (i.e., the location of the center of the active window), is adjusted to center the window on the pattern of interest.
  • the object 12 is not centered in any of the nine original windows of the image.
  • the object pattern is made to lie in the center of the window as shown by reference numeral 50.
  • the centering function evaluates the outputs of visarea 2, i.e., the strength of the four edges of the window, which are sent to PPC on lines 81.
  • If the object is centered in the window, the strengths of the edge measurements will be about equal. If the object is only partially in the window, then one or more of the edges will be missing and the corresponding edge measurements will be low.
  • the window is moved in a direction that will tend to equalize the edge strengths.
  • the fovea delta 1 unit 46 in the PPC implements the control law for moving the window.
  • One possible control law is a standard bang-bang rule with a dead-zone threshold.
  • If the difference between the north and south edge strengths exceeds the threshold, the window is moved a fixed amount vertically, up or down depending on the sign of the difference. For example, if north minus south is positive and larger than the positive threshold, then the window is moved toward the weaker south edge; the east minus west difference is treated the same way for horizontal adjustments.
  • the output of the fovea delta 1 box is the magnitude of adjustment for the location in the vertical and horizontal directions, and is fed to the fovea adjust unit 83.
  • the fovea adjust unit adjusts the value provided by the fovea move unit 20 and delivers the current location values in the horizontal and vertical directions on line 21. Adjustments may be made one pixel at a time in either direction.
  • A second pull-in path includes the LGN, visarea 2, ITC1, ITC2, and PPC modules. This path is used to take additional looks at an object when the confidence in pattern identification is low. If the confidence level is judged to be insufficient, then an enable signal 99 from ITC2 activates a fovea delta 2 unit 68 in PPC. This unit generates a random adjustment of the window in the vertical and horizontal directions. This random adjustment gives the system a second chance to achieve a better pattern classification.
  • a counter in ITC2 (not shown) is used to limit the number of retries. After some preset number of retries, the system stores the object's conjectured identity together with the confidence level and location, and then goes on to search for other objects.
  • a slew enable signal 101 is used to
  • the system functions are executed in a sequential manner.
  • The location channel finds and centers in a window an object of interest. When an object straddles evenly between two windows, the choice of which window will be used for the analysis depends on numerical round-off errors and appears random to the user.
  • the classification channel identifies the object.
  • In a parallel hardware implementation, the modules would run simultaneously.
  • the sequencing of functions would be controlled by enable signals, as described above, and by properly selecting the neural network interconnection time constants.
  • Time constants associated with the location channel's LTMs are short so that the channel will converge quickly to the location which is to be analyzed.
  • the classification channel's LTM time constants are longer and the identification process is comparatively slow. This difference in the time constants ensures that classification is done on a centered object.
  • Possible time constants would be such that the ratio of location time to classification time would be from 1:3 up to 1:10 or more. The exact time would depend on the nature of the application including the size of the input images, and grayness.
  • Pap (cervical exfoliative) smear analysis
  • a glass slide 300 is smeared with a sample of cervical cells 302 (only a small representative sample of cells is shown).
  • the number of cells on the slide may be on the order of 20,000 - 100,000.
  • the cytologist's task is to scan the cells on the slide using a microscope and to identify and analyze the condition of non-normal cells.
  • each cell can be categorized as lying at some position along a continuum 305 from a normal cell 304 to a malignant cell 306.
  • The cells have generally the same size (bounded by a cell wall 308), regardless of their location along the continuum, but there are differences, among other things, in the size, configuration, and appearance of the cell nucleus 310 and in the roughness or smoothness of the outer cell boundaries, as well as possibly other cytoplasmic features. In a normal cell, the nucleus 310 is small, has smooth, curved boundaries, and a uniform dark appearance. In a malignant cell 306, the nucleus 312 is much larger, has irregular boundaries, and is blotchy in appearance.
  • the cytologist is expected to be able to detect as few as two or three non-normal cells on the slide for purposes of diagnosing cervical cancer. Even highly accomplished cytologists cannot achieve a false negative analysis rate much lower than about 10% (i.e., 10% of the smears which contain abnormal cells are incorrectly found to be normal). It is expected that the use of the object recognition system can improve this rate
  • To use the object recognition system for Pap smear analysis, one first trains the system by presenting it with some selection of known cells; then the system is used for analysis by presenting it with a Pap smear and allowing the system to scan the smear to detect cells and their normal or abnormal conditions.
  • In order to acquire a digitized version of an image of cells in the smear, the slide 300 is mounted on a stage 314 which can be driven by motors (not shown) along two dimensions 316 under the control of signals 318 delivered from a controller 320.
  • a microscope 322 focuses the image on a video camera 324 which feeds an analog signal to an image processor 326.
  • the image processor forms a 525 by 525 pixel digitized image and delivers it to the LGN 30 ( Figure 2).
  • the operator uses the microscope and the image processor to select a single cell 330 and enlarge the cell to a scale that fills an entire 175 by 175 pixel window 332 within the image.
  • This image is presented to the system and results in a spectrum which is classified by classifier 58 as one node of a first category 61 ( Figure 2).
  • the spectrum is based on the array of 625 subwindows 334, each 7 by 7 pixels, which tile the window.
  • The 2500 output lines of block 63 in Figure 2 are then arrayed along the spectrum such that the lines pertaining to the cell nucleus are at the higher "frequency" end and the lines pertaining to the cell boundary are at lower "frequencies."
  • the operator then indicates to classifier 66 the name to be associated with that category.
  • The first cell presented to the system for training may be a normal cell and becomes the first node of a NORMAL category 61 (Figure 2). Additional normal cells could be presented and would form other nodes in that category.
  • the operator may load a slide of a smear to be analyzed onto the stage.
  • The controller will move the stage to a starting position, say at the upper left corner of the slide, and the camera will deliver an image of that portion of the slide to the system via processor 326.
  • the scale of the image will be such that the cells are each about the size of a 175 by 175 pixel window. Of course, the cells will not generally be found in the centers of the windows.
  • the system has a location channel and a classification channel which operate in parallel so that the system can locate a cell within the window and then adjust the field of view to center the cell. Then the cell can be classified automatically based on the prior training.
  • the results are stored in the store 68.
  • The store will hold an indication of whether the cell is NORMAL or ABNORMAL, a confidence level of that determination, and the location of the cell in the image. In operation, the SUM module analyzes the gross size of the cell, the V2 module analyzes the edges of the cell wall to determine its shape, and the V1 module analyzes the detailed internal structure of the cell, in particular the nucleus.
  • The controller 320 can keep track of the positions of the slide so that a particular cell can be automatically relocated based on the stage position.
  • a normal cell produces an output of the edge detector which has sharp spikes representing the edges of the nucleus (note that the graph labeled Edge Detector Results represents only a small portion - about 25% of the highest "frequencies" - of the full spectrum of 2500 lines).
  • The center subwindow of the image is represented by lines on the far right-hand side of the graph.
  • a comparable figure for an abnormal cell is shown in Figure 13B.
  • The V2 components are given relatively greater strength than the V1 components and the SUM component is given relatively greater strength than the V2 components.
  • The precise relative weighting of the different components is achieved by applying to the raw SUM component and the raw V2 components the scale factors described above.
  • Training is done by exposing the system to a variety of known normal and abnormal cells.
  • The classifier stores the pattern associated with each sample cell. When an unknown test cell is then shown to the system, its generated spectrum is passed to the classifier. The classifier will find the closest match to the known stored samples. The cell is then labeled to be of the same type as the closest stored sample.
  • The system was shown a series of normal cells only (in this case 28 normal cells). Then the system was tested by showing it a sample of thirty-three test cells (17 normal and 16 abnormal). The system compared each test cell with known standards and made a yes/no decision based on a threshold of closeness to the normal standard cells. The chart illustrates that a tradeoff can be obtained between the rate of false positives and the rate of false negatives, by adjusting the threshold from high to low.
  • Figure 16 demonstrates that the false negative rate can be reduced by increasing the number of training cells.
  • In another training regime, the system starts with a training set of both normal and abnormal cells. In a set of cells which are thereafter used for testing, those which produce false results are added to the training set in their proper categories.
  • Curve 362 suggests that in this training regime, the addition of greater numbers of test cells causes a more rapid drop in the false negative rates. It is believed that using a set of test cells numbering say 1000 will be sufficient with a high level of confidence to reduce the false negative rate to an extremely small value.
  • OBJS = image_util.o vfilter.o median.o IRP_histogram.o \
  • LIBS = -lm -lsuntool -lsunwindow -lpixrect -lstd
  • ALL_LIBS = $(LIBS)
  • cellview.o: cellview.c cellview.h netparam.h activation.h image_io.h
  • image_util.o: image_util.c image_io.h
  • verrtool.o: verrtool.c cellview.h
  • vfilter.o: vfilter.c cellview.h netparam.h image_io.h activation.h
  • IRP_histogram.o: IRP_histogram.c cellview.h image_io.h
  • IRP_edge_detector.o: IRP_edge_detector.c cellview.h netparam.h activation.h image_io.h
  • IRP_visar2.o: IRP_visar2.c cellview.h netparam.h activation.h
  • IRP_LGN.o: IRP_LGN.c image_io.h activation.h
  • ART2.o: ART2.c activation.h LTM.h
  • #define HST_HEIGHT 64 /* Plot amplitude for largest peak */
  • #define PLOT_BORDER_WIDTH 16 /* width of blank border around plot */
  • #define HST_WIN_WIDTH (VLT_SIZE + (2 * PLOT_BORDER_WIDTH))
  • #define EDF_WIN_WIDTH (EDF_DISPLAY_WIDTH + (2 * PLOT_BORDER_WIDTH))
  • #define EDF_PLOT_WIDTH (EDF_SPECTRUM_SIZE + (2 * PLOT_BORDER_WIDTH))
  • extern int box_flg; /* flag for defined image box */
  • extern int size_x, size_y; /* input image width and height */
  • extern int zoom_x, zoom_y; /* zoom magnification factors */
  • struct BOX_STRUCT { int size_x, size_y, x0, y0, x1, y1, x, y; };
  • act_delta = 0.0; /* Initialize the Difference Measure */
  • /* Sigmoid functions to convert neuron activations into transmitted signals */
  • #define step_sigmoid(X) ((X) > 0 ? 1.0 : 0)
  • #define ramp_sigmoid(X) ((X) > 0 ? (X) : 0)
  • setup_windows (argc, argv, "IRP Interactive Image Analysis Software Testbed"); setup_img_menu();
  • batch_fil_item = panel_create_item(control_panel, PANEL_TEXT,
  • window_create (base_frame, CANVAS,
  • window_set (img_canvas, WIN_BELOW, vlt_canvas, 0);
  • window_set (hst_canvas, WIN_Y, top_y, 0);
  • window_set (edf_canvas, WIN_Y, top_y + HST_WIN_HEIGHT + 48, 0);
  • window_set (edf_hdr_canvas, WIN_Y, top_y + HST_WIN_HEIGHT + 23, 0);
  • hst_pw = canvas_pixwin(hst_canvas);
  • edf_pw = canvas_pixwin(edf_canvas);
  • edf_hdr_pw = canvas_pixwin(edf_hdr_canvas);
  • V2_wt = 0.100;
  • dispfont = pf_open("/usr/lib/fonts/fixedwidthfonts/cour.b.16");
  • window_create (base_frame, PANEL,
  • ART2_hdr_item = panel_create_item(ART2_panel, PANEL_TEXT,
  • LGN_mult_item = panel_create_item(ART2_panel, PANEL_TEXT,
  • V2_mult_item = panel_create_item(ART2_panel, PANEL_TEXT,
  • LTM_input_item = panel_create_item(ART2_panel, PANEL_TEXT,
  • LTM_output_item = panel_create_item(ART2_panel, PANEL_TEXT,
  • pnl_x += 16; window_set(ART2_panel, WIN_X, pnl_x, 0);
  • strncpy (LGN_mstr, (char *)panel_get_value(LGN_mult_item), NUM_STR_LEN); sscanf (LGN_mstr, "%f", &LGN_wt);
  • strncpy (V2_mstr, (char *)panel_get_value(V2_mult_item), NUM_STR_LEN); sscanf (V2_mstr, "%f", &V2_wt);
  • strncpy (old_LTM_file, (char *)panel_get_value(LTM_input_item), FNL);
  • LTM_source_file = fopen(old_LTM_file, "r");
  • err_str = (char *)calloc(num_char, sizeof(char));
  • LTM_output_file = fopen(new_LTM_file, "w");
  • err_str = (char *)calloc(num_char, sizeof(char));
  • err_str = (char *)calloc(num_char, sizeof(char));
  • num_vals = 2 * TOT_SPECTRUM_SIZE * nF2;
  • err_str = (char *)calloc(num_char, sizeof(char));
  • strcpy (err_str, "Problem reading LTM trace values from file \""); strcat (err_str, old_LTM_file);
  • err_str = (char *)calloc(num_char, sizeof(char));
  • num_vals = 2 * TOT_SPECTRUM_SIZE * nF2;
  • err_str = (char *)calloc(num_char, sizeof(char)); strcpy (err_str, "Problem writing LTM trace values to file \""); strcat (err_str, new_LTM_file);
  • color_fetch_index = base_cms.cms_size;
  • V2_hidden_layer = NULL;
  • window_set (base_frame, FRAME_NO_CONFIRM, TRUE, 0);
  • strncpy (seq_cwd, (char *)panel_get_value(batch_cwd_item), FNL);
  • void batch_fil_proc()
  • strncpy (seq_fname, (char *)panel_get_value(batch_fil_item), FNL);
  • panel_set (batch_fil_item, PANEL_VALUE, seq_fname, 0);

Abstract

A pattern (e.g., the normal or abnormal characteristics of a biological cell) within an image is recognized based on visual characteristics of the pattern, the image being represented by signals whose values correspond to the visual characteristics, using a location channel (9) which determines the location of the pattern within the image, and a classification channel (11) which categorizes the pattern, the location channel (9) and the classification channel (11) operating in parallel and cooperatively to recognize the pattern. In other aspects, the orientations of edges of the pattern within subwindows of the image are analyzed, as are the strengths of edges of the pattern near the periphery of portions of the image; an unsupervised classifier (58) defines internal representation classes of objects, and a supervised classifier (66) maps the classes to user-specified categories; and there is a feedback path from the classification channel (11) to the location channel (9).

Description

Recognition of Patterns in Images
Background of the Invention
This is a continuation-in-part of United States patent application serial number 07/468,681, filed
January 23, 1990.
This invention relates to recognition by machines of patterns in images.
The mechanisms by which patterns representing objects are recognized by animals have been studied extensively. A summary of studies of the human visual system is given in D.H. Hubel, "Eye, Brain, and Vision," New York, New York: W.H. Freeman and Company, 1988.
Machine based visual recognition schemes typically use combinations of opto-electronic devices and computer data processing techniques to recognize objects.
In general, recognizing an object requires determining whether a certain pattern (corresponding to the object) appears within a field-of-view (FOV) of an input image. The pattern generally is defined by
spatial gradients and discontinuities in luminance across the input image. Other types of gradients and discontinuities may also produce perceivable patterns. Perceivable patterns may occur in the presence of:
statistical differences in textural qualities (such as orientation, shape, density, or color), binocular
matching of elements of differing disparities, accretion and deletion of textural elements in moving displays, and classical 'subjective contours'. An input image is here meant to include any two-dimensional, spatially ordered array of signal intensities. The signals may be of any frequency within the entire electromagnetic spectrum, such as infrared radiation signals and radar ranging signals. Thus visual recognition here denotes recognition of an object based on electromagnetic
radiation received from the object. Humans easily recognize spatial gray-scale object patterns regardless of the patterns' location or
rotational orientation within a FOV. In perceiving these patterns, the human visual recognition system operates in two stages, first locating patterns of interest within the FOV, and then classifying the
patterns according to known categories of objects.
Biological vision systems can rapidly segment an input image in a manner described as "preattentive." It has been found experimentally that segmentation is context-sensitive, i.e., what is perceived as a pattern at a given location can depend on patterns at nearby locations.
Contemporary image-processing techniques based on artificial intelligence (AI) systems use geometric concepts such as surface normal, curvature, and the
Laplacian. These approaches were originally developed to analyze the local properties of physical processes.
Summary of the Invention
In general, in one aspect, the invention features apparatus for recognizing a pattern within an input image based on visual characteristics of the pattern, the image being represented by signals whose values correspond to the visual characteristics. The apparatus includes a location channel which determines the
location of the pattern within the image based on the signal values, and a classification channel which
categorizes the object based on the signal values, the location channel and the classification channel
operating in parallel and cooperatively to recognize the pattern.
Preferred embodiments of the invention include the following features. The location channel includes a coarse locator which makes a coarse determination of the existence and location of the pattern within the image, and a fine locator, responsive to the coarse locator, which makes a fine determination of the location of the pattern within the image. The coarse locator includes a neural network which compares the image with traces corresponding to general shapes of interest. The coarse locator operates with respect to a field of view within the image and a feedback path from the classification channel to the locator channel controls the position of the field of view within the image. The fine locator includes circuitry for responding to feedback from the classification channel in order to adjust the position of a field of view within the image in order to
determine the fine location of the pattern within the image. The coarse locator provides a feedforward signal to the fine locator which also affects the fine position of the field of view.
The classification channel includes a signal processor for preprocessing the signal values, a signal analyzer responsive to the signal processor for
generating measures of the visual characteristics, and a classifier for classifying the pattern in accordance with the measures. The signal analyzer includes edge detectors for detecting information about edges of the pattern. Some edge detectors are adapted to generate measures of the strengths of edges in predetermined orientations within portions of the image. The
predetermined orientations include vertical, horizontal, and 45 degrees. Other edge detectors are adapted to generate measures of the existence of edges at the periphery of a portion of the image. The edges are detected at the top, bottom, and each side of the
portion of the image. The signal analyzer also includes a gross size detector for detecting the gross size of a pattern within a portion of the image. The measures of the visual characteristics are arrayed as a spectrum for delivery to the classifier. Measures which correspond to coarser features appear in the lower end of the spectrum and measures which
correspond to finer features appear in the upper end of the spectrum. The signal analyzer includes a feedback path for providing the measures of the visual
characteristics to the location channel.
In general, in another aspect, the invention features apparatus including an orientation analyzer adapted to analyze the orientations of edges of the pattern within subwindows of the image, and a strength analyzer adapted to analyze the strengths of edges of the pattern near the periphery of a portion of a window of the image.
Preferred embodiments include the following features. The orientation analyzer includes detectors for detecting the strengths of orientation of edges in four different possible orientations: 0, 45, 90, and 135 degrees, respectively. The apparatus also includes a classifier for processing the outputs of the
orientation and strength analyzers as part of a
spectrum. A mapper causes outputs corresponding to subwindows of the image to be treated in the spectrum in an order such that outputs of subwindows nearer to the center of the image are treated as appearing lower on the spectrum than outputs of subwindows nearer the periphery of the image. Each analyzer includes neural networks. The strength analyzer includes an averaging module for averaging elements of the window to derive an averaged window, and four neural networks for processing the averaged window to determine the strength of edges at the north, south, east, and west peripheries of the window. In general, in another aspect, the invention features apparatus for categorizing, among a set of user-specified categories, a pattern which appears in an image based on visual characteristics of the pattern, the image being represented by signals whose values correspond to the visual characteristics. The apparatus includes an unsupervised classifier adapted to define classes of patterns and to categorize the patterns based on the visual features and the classes, and a supervised classifier adapted to map the classes to the set of user-specified categories. In preferred embodiments, the unsupervised classifier is an ART2 classifier.
In general, in another aspect, the invention features apparatus including a location channel which determines the location of the pattern within the image based on the signal values, a classification channel which categorizes the pattern based on the signal values, and a feedback path from the classification channel to the location channel to cause the location channel to adapt to classification results generated by the classification channel.
In general, in other aspects, the abnormal or normal state of a biological cell within an image is determined based on visual characteristics of the cell, and the cell is categorized, among a set of
user-specified categories, based on visual
characteristics of the cell.
The invention provides a highly effective, efficient scheme for recognizing patterns. Computer processing power is devoted more heavily to portions of the image which contain possible patterns. The spectrum is arranged to place relatively gross features at the lower end and relatively detailed features at the upper end which aids analysis of the relationship between features and the resulting classification. Biological cells, in particular cervical cells in a Pap smear, can be quickly and automatically analyzed to determine their normal or abnormal state.
Other advantages and features will become
apparent from the following description of the preferred embodiment and from the claims.
Description of the Preferred Embodiment
We first briefly describe the drawings.
Fig. 1 is a diagram of an image pattern and windows and a subwindow of the image.
Fig. 2 is a functional block diagram of an object recognition system.
Fig. 3 is a diagram of a spectrum of pattern information.
Fig. 4 is a diagram of edge recognition networks.
Fig. 5 is a table of possible outputs for example input edge patterns.
Fig. 6 is a diagram of the effect of window averaging.
Fig. 7 is a diagram of edge recognition
functions.
Fig. 8 is a table of edge recognition network outputs.
Fig. 9 is a top view of a slide of a Pap smear.
Fig. 10 is a chart of cervical cells.
Fig. 11 is a schematic view of a microscope and stage and related electronics for examining the Pap smear.
Fig. 12 is a diagram of a cell within a window.
Figs. 13A and B are photographs of NORMAL and ABNORMAL cells and related spectra.
Fig. 14 is a spectrum for a NORMAL cell.
Fig. 15 is a chart of results in a simple
threshold classification scheme. Figs. 16 and 17 are graphs of error rate versus training size.
Structure
Referring to Fig. 1, consider, by way of example, an image 10 consisting of a 525 by 525 array of 8-bit pixel values. The pixels are arrayed along the x and y axes and the z axis represents an 8-bit luminance value of each pixel. A pattern 12 representing an object to be recognized within the image is defined by a
collection of 8-bit pixels. The goal is to be able to recognize quickly and accurately the existence,
location, and category of pattern 12 within image 10.
Referring to Fig. 2, the recognition task is performed by a visual recognition system 8 which
includes a collection of modules which roughly achieve the functions of their biological counterparts in
recognizing, in a selected FOV within the image,
gray-scale patterns having arbitrary shifts and
rotations.
System 8 includes a location channel 9 which locates patterns of interest in the selected FOV and a classification channel 11 which classifies patterns
(i.e., associates a name with each pattern) located in the FOV according to known classes of objects. For example, the location channel may detect the existence of a pattern in the lower left corner of the FOV and the classifier may identify the pattern as that of the class of objects known as an automobile.
The classification channel
The classification channel consists of a Lateral Geniculate Nucleus (LGN) module 30 which receives the input image pixel values and performs initial processing of the image. Module 30 feeds three other modules: a visual area 1 (V1) module 56, a visual area 2 (V2) module 32, and a sum module 54. These three modules perform further detailed processing and generate pattern size, orientation, and location information about the image which is conceptually arrayed along a "frequency" spectrum 72. The information in the spectrum is passed to an Inferior Temporal Cortex 1 (ITC1) module 58 and then to an Inferior Temporal Cortex 2 (ITC2) module 66 which classify the pattern and provide the
classification results to a store 68. The modules of the classification channel are also assigned numbers on Fig. 2 (such as A17, A18) which correspond to well-known Brodmann areas of the human brain with similar
functions. The classification channel uses a
feedforward architecture so that the signal flows in a forward direction from the input image to the
classification module 66.
LGN module
Referring again to Fig. 1, for purposes of identifying the location of a pattern within the image, the array of image pixels is organized into 9 windows 14a ... 14i, each containing a 175 by 175 array of pixels. Processing proceeds window by window and each window represents a FOV within the image. The location channel operates on one window at a time.
Returning to Fig. 2, by a mechanism to be described below, the location channel 9 determines the location within the window presently being processed (the active window) at which any pattern lies and
conveys this location to the classification channel by a 10-bit value 21. The 10-bit value includes 5 bits which provide a row index and 5 bits which provide a column index for positioning the 175 by 175 pixel window within the 525 by 525 input image; that is, the 10-bit value specifies the center of the 175 by 175 window, with five bits giving the row coordinate and five bits giving the column coordinate within the 525 by 525 image. The center is given to an accuracy of about + or - 9 pixels.
When a pattern has been located, the 10-bit location value and a 1-bit window enable signal 23 cause a row and column select unit 25 to indicate to LGN 30 that a pattern has been found and is located in a window whose position is specified by the 10-bit value. The active window 27 (i.e., the FOV) is then shifted to a revised location within the image (note, for example, the shifted window 17 in Fig. 1). The pixels within the shifted window are then processed by a calibrate unit 34 and a normalize unit 36 to distribute their intensities across a gray-scale. The resulting preprocessed window 37 is then sent to the later modules.
The calibrate unit calculates a histogram of the pixel values of the pattern within the selected (active) window. For the 8-bit pixels in the example, the histogram is typically concentrated in a sub-band within the total possible range of 0 to 255. The calibration unit spreads the histogram over the entire 0 to 255 range by linearly mapping the histogram values in the sub-band to the values 0 to 255, with the lowest value in the sub-band being mapped to 0 and the highest value in the sub-band being mapped to 255. The lowest value of the histogram sub-band is defined as the value where the number of pixels falls to 1% of the cumulative number. The highest value of the histogram is defined as the value where the number of pixels first exceeds 99.25% of the cumulative number. The normalize unit then rescales the pixel values by dividing each of them by 255 so that all pixel values leaving the LGN module are in the range from 0 to 1. In Fig. 2, the [0,1] indicates that the values lie between 0 and 1.
V1 module
Referring again to Fig. 1, in the V1 module, the active window is further subdivided into 625 subwindows 42 each having an array of 7 by 7 pixels (the subwindow 42 in Fig. 1 is shown at a much larger scale than the window 38 from which it came, for clarity). Returning to Fig. 2, in the V1 module, the window is first fed to a spiral map module 62 which performs a spiral mapping of the 625 subwindows, taking the upper left hand subwindow first (i.e., subwindow 40 of Fig. 1), then the other subwindows in the top row from left to right, then the subwindows in the right column from top to bottom, then the bottom row, left column, second row, and so on, finally ending with the center subwindow. The
subwindows are then delivered one by one in the spiral order to the visarea 1 unit 63.
In visarea 1 each 7 by 7 pixel subwindow is processed to generate measures of the visual strengths of the edges of the patterns in the horizontal,
vertical, and two 45 degree diagonal directions. For gray-scale images visarea 1 generates measures of the magnitude of the luminance gradient in the four
directions. For binary (1-bit pixel) images measures of the edge orientation in the four directions are
generated.
Referring to Fig. 4, each edge measurement is performed for each 7 by 7 subwindow by a
cooperative-competitive neural network which has 25 hidden neurons and one output neuron. Visarea 1 thus includes four neural networks 202, 204, 206, 208, each of which receives the pixels of each subwindow 57 and generates one of the outputs 210. As in biological systems, each neuron can be either an excitatory-type or an inhibitory-type, but not both simultaneously. There are 1924 fixed interconnection weights for each network. A set of actual interconnection weights useful for the four networks for the example are set forth in Appendix A. Each of the detectors is a three layer neural network having an input layer, a hidden layer, and a single output neuron. Appendix A includes two sets of four matrices each. One set of four matrices (marked horizontal) is used for the horizontal and vertical detectors; the other set of four matrices
(marked diagonal) is used for the 45 and 135 degree detectors. In each set, the four matrices A, B, C, and D contain interconnection weight values respectively for interconnections within the hidden layer,
interconnections from the input layer to the hidden layer, interconnections from the hidden layer to the output neuron, and interconnections from the input layer to the output neuron. Each row in a matrix represents all of the interconnections from a given neuron, and each column represents all of the interconnections to a given neuron. The diagonal of the A matrix thus
represents all of the interconnections of hidden layer neurons with themselves. The matrices labelled
horizontal may be used as the vertical edge detector simply by flipping the input 7 by 7 subwindow about its diagonal axis. The matrices labeled 45 degrees
similarly may be used to detect 135 degree edges simply by flipping the input 7 by 7 subwindow about its
horizontal axis.
For the general case of detecting gradients of luminance (instead of simple binary edges), detectors are designed using the genetic algorithm in the manner described in the copending patent application cited below, for a particular orientation and gradient
direction. The responses to orientations of 90 degrees or larger and/or gradients in the opposite sense can use the same detector weights if the input 7 by 7 subwindow is properly rotated first. The rotations are performed in visarea 1.
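The following sketch shows how a 7 by 7 subwindow might be pushed through one of these detectors using the A, B, C, and D matrices described above. The relaxation loop loosely follows the structure of the macros reproduced in Appendix C, but the iteration cap, the use of C and D at the output, and the function names are assumptions of this sketch rather than a definitive implementation.

#include <math.h>

#define N_IN   49      /* 7 x 7 input pixels           */
#define N_HID  25      /* hidden (cooperative) neurons */

static float step_sigmoid(float x) { return x > 0.0f ? 1.0f : 0.0f; }

/* A: hidden-to-hidden, B: input-to-hidden, C: hidden-to-output, D: input-to-output. */
float edge_detector(const float A[N_HID][N_HID], const float B[N_HID][N_IN],
                    const float C[N_HID], const float D[N_IN],
                    const float input[N_IN])
{
    float direct[N_HID], h[N_HID], prev[N_HID];

    for (int i = 0; i < N_HID; i++) {            /* direct input -> hidden drive */
        direct[i] = 0.0f;
        for (int j = 0; j < N_IN; j++) direct[i] += B[i][j] * input[j];
        h[i] = direct[i];
    }

    /* Relax the hidden layer: h <- A * sigmoid(h) + direct, until stable. */
    float delta = 1.0f;
    for (int iter = 0; iter < 10 && delta > 0.0f; iter++) {
        float s[N_HID];
        delta = 0.0f;
        for (int i = 0; i < N_HID; i++) { prev[i] = h[i]; s[i] = step_sigmoid(h[i]); }
        for (int i = 0; i < N_HID; i++) {
            h[i] = direct[i];
            for (int j = 0; j < N_HID; j++) h[i] += A[i][j] * s[j];
            delta += fabsf(h[i] - prev[i]);
        }
    }

    /* Output neuron: hidden -> output (C) plus direct input -> output (D). */
    float out = 0.0f;
    for (int i = 0; i < N_HID; i++) out += C[i] * step_sigmoid(h[i]);
    for (int j = 0; j < N_IN;  j++) out += D[j] * input[j];
    return out;
}

Note that the weight count implied by these matrix shapes (625 + 1225 + 25 + 49) matches the 1924 fixed interconnection weights quoted above.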
The interconnection weights between neurons remain fixed. The orientation measurements of
luminance lines and gradient magnitudes model similar processing that occurs in biological visual systems. A technique for determining the interconnection weights for the neural network is set forth in copending United States patent application, serial number 468,857, filed on the same day as the parent of this application, and incorporated by reference.
Referring to Fig. 5, binary edge patterns of the kinds shown in column 220 and gray-scale patterns of the kinds shown in column 222 would produce visarea 1
outputs as shown. In the gray-scale patterns each line represents pixels of constant value. The indicated gradient in the pattern can be reversed without
affecting the visarea 1 outputs.
As explained below, object classification is done in part on the basis of these orientation strengths over a set of subwindows. In the preferred embodiment, there are no 'corner,' 'circle,' 'face,' or 'matched' filter detectors of the kind commonly used in other machine vision approaches to recognize features of a pattern.
In the example, the four orientation signals generated by visarea 1 for each of the 625 7 by 7 pixel subwindows yields a total of 2500 orientation values for the entire window.
Referring again to Fig. 3, the 2500 orientation signal values 71 generated by visarea 1 can be arrayed as lines on a spectrum 72 in which the length of each horizontal line represents the magnitude of the signal. The positions along the spectrum may be thought of as corresponding to different "frequencies". The
orientation signal lines for each window are arranged in order as shown, and the successive subwindows in the spiral order are arranged in order along the spectrum so that the first subwindow's lines appear first. Thus, the outer subwindows of the windowed image are nearer the top of the spectrum (lower frequency) and the inner subwindows are nearer the bottom. Hence, information about the general shape of the pattern occurs at the top or low frequency part of the output spectrum, and
information about the interior of the pattern occurs at the bottom or high frequency part of the spectrum.
Visarea 2 module
Referring again to Fig. 2, the second
feature-generating module in the classification channel is visarea 2 module 32. The function of this module is to detect edges near the perimeter of the 175 by 175 pixel window. Since only the outside edges of the pattern are of interest in this step, the window image is first defocused by a 25 by 25 average unit 49.
Referring to Fig. 6, this averaging smears details of the pattern (the detail is captured by visarea 1), but retains the necessary outside edge information. The averaging produces a single smeared 7 by 7 pixel image 230, 230' of the pattern in the 175 by 175 window 232, 232'. As shown, the averaging simplifies the pattern edges to enable them to be easily detected.
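A sketch of the defocusing step is given below: each 25 by 25 block of the 175 by 175 window is averaged down to one pixel of a 7 by 7 smeared image. The function and array names are assumptions of this sketch.

#define WIN   175
#define BLOCK 25     /* 175 / 25 = 7 output pixels per side */

/* Block-average a 175 x 175 window into a 7 x 7 smeared image. */
void defocus_25x25(const float window[WIN][WIN],
                   float smeared[WIN / BLOCK][WIN / BLOCK])
{
    for (int br = 0; br < WIN / BLOCK; br++)
        for (int bc = 0; bc < WIN / BLOCK; bc++) {
            float sum = 0.0f;
            for (int r = 0; r < BLOCK; r++)
                for (int c = 0; c < BLOCK; c++)
                    sum += window[br * BLOCK + r][bc * BLOCK + c];
            smeared[br][bc] = sum / (BLOCK * BLOCK);
        }
}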
Referring to Fig. 7, visarea 2 includes four neural networks, 234, 236, 238, 240, each of which detects the presence or absence of an edge. Two 3 by 7 pixel detectors 234, 236 detect the presence of nearly horizontal edges respectively at the top and bottom of the window image. Two 7 by 3 pixel detectors 238, 240 detect the presence of nearly vertical edges
respectively at the left and right of the window image. These edge detectors are like the ones in visarea 1 except the input images are now 7 by 3 or 3 by 7 instead of 7 by 7. Each detector uses 25 neurons with fixed interconnection weights.
A set of actual interconnection weights for these four neural networks is set forth in Appendix B. Only one set of four matrices is provided; these may be used in all of the four different detectors simply by
rotating the input 7 by 7 subwindow by 45, 90, or 135 degrees as the case may be.
For most objects of interest that fit in the 175 by 175 window, there will be edges on the top and bottom and on the right and left sides. The output of the visarea 2 unit is four spectrum lines 45 which measure the north, south, east, and west edge strengths. These four lines also comprise part of the spectrum 72 used by the classifier.
Referring to Fig. 8, for a pattern in the center of the output of the average module, the four network outputs of visarea 2 are all high, while for a pattern in the lower right corner, the north and west outputs are low while the south and east outputs are high.
Sum module
The third feature-generating module is a sum module 54. This module sums the pixel values in the 175 by 175 pixel window. The computed sum is a measure of the gross size of the pattern in the window and it is used as one of the input spectrum values to the
classifier (note reference numeral 47 on Fig. 3).
Classification spectrum
Referring again to Fig. 3, classification is achieved by interpreting a combination of the visual feature measures discussed above. Note that these feature measures include some values which have been only slightly processed (the output of the sum module), some moderately processed (the output of the visarea 2 module), and some highly processed (the output of the visarea 1 module). Because the spectrum includes lines from visarea 1, from visarea 2, and from sum, the
magnitudes of the lines are adjusted by each module to ensure appropriate comparative weighting of each
module's output. In one example, the visarea 1 module outputs are adjusted by subtracting the minimum (usually negative) of all of the visarea 1 outputs from each of the visarea 1 outputs to ensure that the visarea 1 portion of the spectrum is entirely positive with a minimum value of zero. The visarea 2 and sum outputs are multiplied by scale factors which depend on the window size used in LGN 30 (Fig. 2). For a window size of 175 by 175, the scale factors are 0.1 for the visarea 2 outputs and 0.01 for the sum module output. For a window size of 42 by 42, the factors are 1.5 and 0.3 respectively. This weighting ensures that the
classifier gives equal significance to information about size, edges, and detail structure.
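One way the full spectrum might be assembled from the three modules is sketched below, using the scale factors quoted above for the 175 by 175 window (0.1 for the visarea 2 lines, 0.01 for the sum line) and shifting the visarea 1 lines so their minimum is zero. The exact array layout and offsets are assumptions of this sketch.

#define N_V1 2500
#define N_V2 4

/* Assemble the classification spectrum: one sum line, four V2 edge lines,
   and 2500 V1 orientation lines shifted to be non-negative. */
void build_spectrum(const float v1[N_V1], const float v2[N_V2], float sum,
                    float spectrum[1 + N_V2 + N_V1])
{
    float v1_min = v1[0];
    for (int i = 1; i < N_V1; i++)               /* find (usually negative) minimum */
        if (v1[i] < v1_min) v1_min = v1[i];

    spectrum[0] = 0.01f * sum;                   /* sum module line                  */
    for (int i = 0; i < N_V2; i++)
        spectrum[1 + i] = 0.1f * v2[i];          /* north/south/east/west edge lines */
    for (int i = 0; i < N_V1; i++)
        spectrum[1 + N_V2 + i] = v1[i] - v1_min; /* orientation lines, minimum -> 0  */
}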
ITC1 module
Referring again to Fig. 1, classification is done, using the spectrum 72 of information, by an
unsupervised classifier 58 followed by a supervised classifier 66. The unsupervised classifier ITC1 module uses the ART 2 classifier technique discussed in G.
Carpenter and S. Grossberg, "ART 2: Self-Organization of Stable Category Recognition Codes for Analog Input Patterns," Applied Optics, Special Issue on Neural
Networks, (1987), incorporated herein by reference.
In neural network theory terminology, the input spectrum is "impressed" on the bottom layer of the ART2. This classifier automatically selects characteristics of the input spectrum (or pattern) to define a category. Subsequent patterns are compared to patterns stored in the long-term memory (LTM) trace 59. ART2 is a two-slab neural network. One slab is called F1 and consists of 3 interacting layers which perform noise filtering and signal enhancement. The second slab is called F2 and consists of a single interacting layer. The F2 neurons are used to indicate by their activity the category of the input pattern. The input patterns, after processing by F1 are judged to be close or far from the LTM traces. If a new input spectrum is different from previous spectra, then a new category is defined for the input. If a new input spectrum is similar to a previous
category class, then the existing category is updated with an additional example. The classifier is 'trained' by presenting to it a sequence of example patterns which are then categorized by ITC1. In principle, if the examples are sufficiently different, a distinct category will be defined for each example. If some of the examples are similar to one another, then a smaller number of categories are defined.
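The following is a highly simplified stand-in for the behavior described above, not the full ART 2 dynamics: an input spectrum is compared with stored long-term-memory traces; if the best match exceeds a vigilance-like threshold the matching category is updated, otherwise a new category is created. The cosine-similarity measure, the update rule, and all names are assumptions of this sketch.

#include <math.h>

#define MAX_CAT  64
#define SPEC_LEN 2505

static float ltm[MAX_CAT][SPEC_LEN];   /* stored long-term-memory traces */
static int   n_categories = 0;

static float cosine(const float *a, const float *b, int n)
{
    float dot = 0, na = 0, nb = 0;
    for (int i = 0; i < n; i++) { dot += a[i]*b[i]; na += a[i]*a[i]; nb += b[i]*b[i]; }
    return (na > 0 && nb > 0) ? dot / sqrtf(na * nb) : 0.0f;
}

/* Returns the category index assigned to this spectrum. */
int classify_spectrum(const float spectrum[SPEC_LEN], float vigilance)
{
    int best = -1; float best_match = -1.0f;
    for (int c = 0; c < n_categories; c++) {
        float m = cosine(spectrum, ltm[c], SPEC_LEN);
        if (m > best_match) { best_match = m; best = c; }
    }
    if (best >= 0 && best_match >= vigilance) {          /* close: update category */
        for (int i = 0; i < SPEC_LEN; i++)
            ltm[best][i] = 0.9f * ltm[best][i] + 0.1f * spectrum[i];
        return best;
    }
    if (n_categories < MAX_CAT) {                        /* far: define new category */
        for (int i = 0; i < SPEC_LEN; i++) ltm[n_categories][i] = spectrum[i];
        return n_categories++;
    }
    return best;                                         /* memory full: best effort */
}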
The definition of ART2 and its operating
characteristics are well-known. It is selected over other classifiers such as Hopfield nets and perceptrons because of its feature enhancement, noise reduction, and stability properties.
Within ITC1, the orient unit 250 determines the closeness of the match between the input and a stored pattern based on a positive number ||R|| generated by F1. If the match is not close then it causes a search of the F2 categories for a closer match. The confidence unit 252 associates the closeness measure ||R|| with a
confidence level as defined by the user. For example, if ||R|| = 1.0, then the confidence level is 100%, and if ||R|| = 0.7, then the confidence level is 50%, with a linear interpolation for ||R|| greater than 0.7 and less than 1.0.
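The mapping quoted above can be written as a small function; clamping values of ||R|| below 0.7 to 50% is an assumption of this sketch, since the text does not specify that range.

/* ||R|| = 0.7 -> 50%, ||R|| = 1.0 -> 100%, linear in between. */
float confidence_from_R(float R)
{
    if (R >= 1.0f) return 100.0f;
    if (R <= 0.7f) return 50.0f;
    return 50.0f + (R - 0.7f) * (100.0f - 50.0f) / (1.0f - 0.7f);
}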
ITC2 module
After training the ITC1 module, its output nodes 61 correspond to examples of input patterns from
particular categories or classes. For example, if the first ten examples are trucks, the first ten ITC1 output nodes are in a category (say category 1) that
corresponds to trucks. The ITC2 module 66 then
associates the activation of any of the first ten nodes with the name 'truck'. This is implemented by a simple logical OR operation. In similar fashion, other
categories of objects are learned by ITC2 and associated with other names.
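The logical OR association performed by ITC2 might be sketched as follows; the node-to-category table, the category names, and the node count are illustrative assumptions.

#define N_NODES 100

/* Assert a category name when any ITC1 output node assigned to that
   category is active (a logical OR over the category's nodes). */
const char *itc2_name(const int node_active[N_NODES],
                      const int node_category[N_NODES],
                      const char *category_name[], int n_categories)
{
    for (int c = 0; c < n_categories; c++) {
        int any = 0;
        for (int n = 0; n < N_NODES; n++)
            if (node_category[n] == c && node_active[n]) { any = 1; break; }
        if (any) return category_name[c];   /* e.g. "truck" for category 1 nodes */
    }
    return "unknown";
}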
In practice, it is desirable to store the identification and locations of patterns found in the FOV for future reference. The decision to store a pattern is made by using the matching parameter 109 of ITC1 as a measure of confidence in the pattern
identification. By setting the confidence level 67 equal to 50% when the match just passes a predetermined threshold for a category match and to 100% when the match with a LTM trace is perfect, a confidence measure is generated. ITC2 decides 69 whether the
identification is accurate enough for a given
application. If the confidence level is high enough 71, then the results are stored in store 68. The
information stored is the class name 73, the confidence level 75, and the location 77 in the FOV. If the confidence level is not high enough, then the system tries to identify the pattern by evaluating the input image again, as explained below.
Location channel
The function of the location channel is to isolate an individual pattern in the FOV so that the classification channel processing can be applied to that pattern. The location channel includes a Superior
Colliculus (superc) module 18, and also includes the LGN, visarea 2, and Posterior Parietal Cortex (PPC) modules. The location channel supports both feedforward and feedback flows of signals.
Superc module
Locating individual patterns within the FOV
(active window) involves a two-stage process consisting of coarse location followed by fine location and
pull-in. The superc module performs the coarse location procedure. In this module a modified ART2 neural network is used to grossly locate objects of interest within the FOV. The F2 slab of the ART2 is used to impress a stored LTM trace on the top layer of the F1 slab. LTM traces for the general shapes of interest are computed off-line and stored in the superc. By this F2-to-F1 projection, the system is 'primed' to locate a particular class of objects.
A 175 by 175 pixel window is extracted from the input image and impressed on the bottom layer of the ART2. The pattern specified by the LTM trace 19 is compared to the windowed image. The LTM trace is designed so that an object of the correct general size will cause a match, even if off-center, to indicate its presence. A row map unit 24 is used to map the windowed input to the ART2 input. Because the input window is 175 by 175, there are 30,625 input pixels delivered to the ART2. If no match is found, then another
non-overlapping window in the image is input as the active window and evaluated for the presence of an object. Thus, in the example, there are nine coarse location positions, each represented by one of the nine non-overlapping windows in the image. The degree of match between the image pattern and the LTM traces is used as an enable signal 23 to the LGN module. The selection of the coarse window position from among the nine possible windows is done by a fovea move unit 20. The coarse position 22 is sent to the row map unit, and to the PPC module for further adjustment.
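The coarse scan over the nine non-overlapping windows can be sketched as follows. The match_window() scoring hook stands in for the superc ART 2 comparison against the stored LTM trace and is a hypothetical name, as is the threshold parameter.

#define IMG_SIZE 525
#define WIN_SIZE 175

/* Evaluate each of the nine 175 x 175 windows in turn; return 1 and the
   window origin when the match score passes the threshold, 0 otherwise. */
int coarse_locate(float (*match_window)(int row0, int col0), float threshold,
                  int *out_row, int *out_col)
{
    for (int wr = 0; wr < IMG_SIZE / WIN_SIZE; wr++)
        for (int wc = 0; wc < IMG_SIZE / WIN_SIZE; wc++) {
            int row0 = wr * WIN_SIZE, col0 = wc * WIN_SIZE;
            if (match_window(row0, col0) >= threshold) {
                *out_row = row0;
                *out_col = col0;
                return 1;                /* pattern coarsely located */
            }
        }
    return 0;                            /* no window matched */
}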
PPC module
The second stage of the location process is the fine adjustment and pull-in stage. This pull-in stage is done by a feedback path which includes the LGN, visarea 2, and PPC modules. The function of the LGN and visarea 2 modules was described above. In the PPC module 28, the center of attention, or fovea (i.e., the location of the center of the active window) is adjusted to center the window on the pattern of interest.
Referring again to Fig. 1, for example, the object 12 is not centered in any of the nine original windows of the image. By shifting window 14e to location 17, the object pattern is made to lie in the center of the window as shown by reference numeral 50. The centering function evaluates the outputs of visarea 2, i.e., the strength of the four edges of the window, which are sent to PPC on lines 81.
When an object is centered, the strength of the edge measurements will be about equal. If the object is only partially in the window, then one or more of the edges will be missing and the corresponding edge
strength will be small. The window is moved in a direction that will tend to equalize the edge strengths.
The fovea delta 1 unit 46 in the PPC implements the control law for moving the window. One possible control law is a standard bang-bang rule with a
dead-zone for the vertical and horizontal directions.
Under the bang-bang rule, for vertical movements, the difference between the north and south outputs from visarea 2 is computed. If the difference is larger than a
positive threshold or smaller than a negative threshold, then the window is moved a fixed amount vertically, up or down depending on the sign of the difference. For example, if north - south is positive and larger than the positive threshold, then the window is moved
vertically down a fixed amount; if the difference is negative and smaller than the negative threshold, then the window is moved vertically up the same fixed amount. The magnitude of the movement is constant regardless of the magnitude of the north - south difference, i.e., when movement occurs the maximum amount is used (bang-bang). When the difference lies between the positive and negative thresholds (dead zone), no vertical movement of the window is made. For horizontal
movements a similar rule is implemented using the east and west visarea 2 outputs.
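A sketch of this bang-bang rule with dead zone is given below. The sign convention for vertical movement follows the text (north - south above the positive threshold moves the window down); the horizontal sign convention, the fixed step size, and the threshold value are assumptions of this sketch.

/* Compute the fixed-size window adjustment from the four edge strengths. */
void fovea_delta(float north, float south, float east, float west,
                 float threshold, int step,
                 int *d_row, int *d_col)
{
    float ns = north - south;
    float ew = east - west;

    if (ns > threshold)       *d_row = +step;   /* move window down              */
    else if (ns < -threshold) *d_row = -step;   /* move window up                */
    else                      *d_row = 0;       /* dead zone: no vertical move   */

    if (ew > threshold)       *d_col = +step;   /* move window right (assumed)   */
    else if (ew < -threshold) *d_col = -step;   /* move window left (assumed)    */
    else                      *d_col = 0;       /* dead zone: no horizontal move */
}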
The output of the fovea delta 1 box is the magnitude of adjustment for the location in the vertical and horizontal directions, and is fed to the fovea adjust unit 83. The fovea adjust unit adjusts the value provided by the fovea move unit 20 and delivers the current location values in the horizontal and vertical directions on line 21. Adjustments may be made one pixel at a time in either direction.
A second pull-in path includes the LGN, visarea 2, ITC1, ITC2, and PPC modules. This path is used to take additional looks at an object when the confidence in pattern identification is low. If the confidence level is judged to be insufficient, then an enable signal 99 from ITC2 activates a fovea delta2 unit 68 in PPC. This unit generates a random adjustment of the window in the vertical and horizontal directions. This random adjustment gives the system a second chance to achieve a better pattern classification. A counter in ITC2 (not shown) is used to limit the number of retries. After some preset number of retries, the system stores the object's conjectured identity together with the confidence level and location, and then goes on to search for other objects.
After processing the windowed image and storing the results, a slew enable signal 101 is used to
activate the fovea move unit 20 to move to the next coarse position, i.e., to the next one of the nine windows in the original image.
The system has been implemented in a computer simulation written in the C language, and compiled and run on a combination SUN 4/110 and CONVEX 220 computing system (using SUN's version 4.03 C compiler or CONVEX's version 3.0 C compiler). Copies of the source code are attached as Appendix C. Appendix C is subject to copyright protection. The copyright owner has no objection to the reproduction of Appendix C as it appears in the United States Patent and Trademark Office, but otherwise reserves all copyright rights whatsoever.
System dynamics
In a computer simulation of the object
recognition system, the system functions are executed in a sequential manner. First, the location channel finds and centers in a window an object of interest. When an object straddles two windows evenly, the choice of which window will be used for the analysis depends on numerical roundoff errors and appears random to the user. Then the classification channel identifies the object.
In a parallel implementation with custom hardware, the modules would run simultaneously. The sequencing of functions would be controlled by enable signals, as described above, and by properly selecting the neural network interconnection time constants. Time constants associated with the location channel's LTMs are short so that the channel will converge quickly to the location which is to be analyzed. The classification channel's LTM time constants are longer and the identification process is comparatively slow. This difference in the time constants ensures that classification is done on a centered object. Possible time constants would be such that the ratio of location time to classification time would be from 1:3 up to 1:10 or more. The exact times would depend on the nature of the application, including the size of the input images and their gray-scale content.
Pap smear application
The screening and interpretation of cervical exfoliative (Pap) smears is one application of the object recognition system. Manual analysis of such smears by a cytologist is time consuming. By applying the object recognition system to Pap smear analysis, automatic prescreening of smears should be possible, saving time and money.
Referring to Figure 9, in a typical Pap smear, a glass slide 300 is smeared with a sample of cervical cells 302 (only a small representative sample of cells is shown). The number of cells on the slide may be on the order of 20,000 - 100,000. The cytologist's task is to scan the cells on the slide using a microscope and to identify and analyze the condition of non-normal cells.
Referring to Figure 10, each cell can be categorized as lying at some position along a continuum 305 from a normal cell 304 to a malignant cell 306. In general, the cells have the same size (bounded by a cell wall 308), regardless of their location along the continuum, but there are differences, among other things, in the size, configuration, and appearance of the cell nucleus 310 and in the roughness or smoothness of the outer cell boundaries, as well as possibly other cytoplasmic features. In a normal cell, the nucleus 310 is small, has smooth, curved boundaries, and a uniform dark appearance. In a malignant cell 306, the nucleus 312 is much larger, has irregular boundaries, and is blotchy in appearance.
The cytologist is expected to be able to detect as few as two or three non-normal cells on the slide for purposes of diagnosing cervical cancer. Even highly accomplished cytologists cannot achieve a false negative analysis rate much lower than about 10% (i.e., 10% of the smears which contain abnormal cells are incorrectly found to be normal). It is expected that the use of the object recognition system can improve this rate
significantly.
In general, to use the object recognition system for Pap smear analysis, one first trains the system by presenting it with some selection of known cells; then the system is used for analysis by presenting it with a Pap smear and allowing the system to scan the smear to detect cells and their normal or abnormal conditions.
Referring to Figure 11, in order to acquire a digitized version of an image of cells in the smear, the slide 300 is mounted on a stage 314 which can be driven by motors (not shown) along two dimensions 316 under the control of signals 318 delivered from a controller 320. A microscope 322 focuses the image on a video camera 324 which feeds an analog signal to an image processor 326. The image processor forms a 525 by 525 pixel digitized image and delivers it to the LGN 30 (Figure 2).
Referring to Figure 12, for training, the operator uses the microscope and the image processor to select a single cell 330 and enlarge the cell to a scale that fills an entire 175 by 175 pixel window 332 within the image. This image is presented to the system and results in a spectrum which is classified by classifier 58 as one node of a first category 61 (Figure 2). The spectrum is based on the array of 625 subwindows 334, each 7 by 7 pixels, which tile the window. The 2500 output lines of block 63 in Figure 2 are then arrayed along the spectrum such that the lines pertaining to the cell nucleus are at the higher "frequency" end and the lines pertaining to the cell boundary are at lower
"frequencies".
The operator then indicates to classifier 66 the name to be associated with that category. For example, the first cell presented to the system for training may be a normal cell and becomes the first node of a NORMAL category 61 (Figure 2). Additional normal cells could be presented and would form other nodes in that
category. In a simple scheme, there would be only two categories, NORMAL and ABNORMAL, although several other intermediate categories could also be used.
Once the system is trained, the operator may load a slide of a smear to be analyzed onto the stage. The controller will move the stage to a starting position, say at the upper left corner of the slide, and the camera will deliver an image of that portion of the slide to the system via processor 326. The scale of the image will be such that the cells are each about the size of a 175 by 175 pixel window. Of course, the cells will not generally be found in the centers of the windows. As previously explained, the system has a location channel and a classification channel which operate in parallel so that the system can locate a cell within the window and then adjust the field of view to center the cell. Then the cell can be classified automatically based on the prior training. The results are stored in the store 68. For a given cell, the store will hold an indication of whether the cell is NORMAL or ABNORMAL, a confidence level of that determination, and the location of the cell in the image. In operation, the SUM module analyzes the gross size of the cell, the V2 module analyzes the edges of the cell wall to determine its shape, and the V1 module analyzes the detailed
configuration of the parts of the cell and their
appearance, especially of the nucleus.
Next the stage is moved to a new location and the process is repeated. The controller 320 can keep track of the positions of the slide so that a particular cell can be automatically relocated based on the stage
position and the location of the cell within the image taken at that position (stored in store 68). Thus the cytologist can quickly find and analyze the cells which the system has indicated are abnormal. The system thus saves the cytologist a great deal of time by
preprocessing the slide to identify abnormal cells.
Referring to Figure 13A, a normal cell produces an output of the edge detector which has sharp spikes representing the edges of the nucleus (note that the graph labeled Edge Detector Results represents only a small portion - about 25% of the highest "frequencies" - of the full spectrum of 2500 lines). The center
subwindow of the image is represented by lines on the far right-hand side of the graph. A comparable figure for an abnormal cell is shown in Figure 13B.
Referring to Figure 14, in the full spectrum 350, the V2 components are given relatively greater strength than the V1 components and the SUM component is given relatively greater strength then the V2 components. The precise relative weights of the different components is achieved by applying to the raw SUM component a
weighting factor of 10⁻¹ and to the V2 components a weighting factor of 10. This weighting gives
approximately equal value to the three types of
information (gross size of the cell, general shape of the cell, and detailed features of the interior and exterior of the cell).
Training is done by exposing the system to a variety of known normal and abnormal cells. The
classifier stores the pattern associated with each sample cell. When an unknown test cell is then shown to the system, its generated spectrum is passed to the classifier. The classifier will find the closest match to the known stored samples. The cell is then labeled to be of the same type as the closest stored sample.
Referring to Figure 15, in one training
technique, the system was shown a series of normal cells only (in this case 28 normal cells). Then the system was tested by showing it a sample of thirty-three test cells (17 normal and 16 abnormal). The system compared each test cell with known standards and made a yes/no decision based on a threshold of closeness to the normal standard cells. The chart illustrates that a tradeoff can be obtained between the rate of false positives and the rate of false negatives, by adjusting the threshold from high to low.
Figure 16 demonstrates that the false negative rate can be reduced by increasing the number of training cells.
Referring to Figure 17, in yet another training technique, called selected training, one begins with a training set of both normal and abnormal cells. In a set of cells which are thereafter used for testing, those which produce false results are added to the training set in their proper categories. Curve 362 suggests that in this training regime, the addition of greater numbers of test cells causes a more rapid drop in the false negative rates. It is believed that using a set of test cells numbering, say, 1000 will be sufficient with a high level of confidence to reduce the false negative rate to an extremely small value.
Other embodiments are within the claims that follow the appendices.
[Appendix A and Appendix B, the interconnection weight matrices referred to above, appear as image pages in the original publication.]
CFLAGS = -g
OBJS = image_util.o vfilter.o median.o IRP_histogram.o \
	IRP_edge_detector.o IRP_visar2.o IRP_LGN.o ART2.o verrtool.o
ALL_OBJS = $(OBJS)
LIBS = -lm -lsuntool -lsunwindow -lpixrect -lstd
ALL_LIBS = $(LIBS)

cellview: cellview.o $(ALL_OBJS)
	cc -o cellview cellview.o $(ALL_OBJS) $(ALL_LIBS)

cellview.o: cellview.c cellview.h netparam.h \
	activation.h image_io.h LTM.h
image_util.o: image_util.c image_io.h
verrtool.o: verrtool.c cellview.h
vfilter.o: vfilter.c cellview.h netparam.h \
	image_io.h activation.h
median.o: median.c
IRP_histogram.o: IRP_histogram.c cellview.h image_io.h
IRP_edge_detector.o: IRP_edge_detector.c cellview.h netparam.h \
	activation.h image_io.h
IRP_visar2.o: IRP_visar2.c cellview.h netparam.h \
	activation.h
IRP_LGN.o: IRP_LGN.c image_io.h activation.h
ART2.o: ART2.c activation.h LTM.h
APPENDIX C

/* cellview.h    Printed on 18-December-1989 */
/*
   Header file for RL Harvey's IRP Software Testbed
   Adapted from viewtool.h with additional coding by
   Paul Dicaprio and KG Heinemann
*/

/* Defines */

/* Parameters for plot of image histogram */
#define HST_HEIGHT 64              /* Plot amplitude for largest peak */
#define PLOT_BORDER_WIDTH 16       /* Width of blank border around plot */
#define HST_WIN_WIDTH (VLT_SIZE + (2 * PLOT_BORDER_WIDTH))
#define HST_PLOT_HEIGHT (HST_HEIGHT + (2 * PLOT_BORDER_WIDTH))
#define N_HST_DISPLAY_PIXELS HST_WIN_WIDTH * HST_PLOT_HEIGHT
#define HST_WIN_HEIGHT 120

/* Parameters for plot of edge detector spectrum */
#define EDF_SPECTRUM_SIZE 2500     /* Value for cytology discrimination */
/* #define EDF_SPECTRUM_SIZE 144      Value for ATR application */
#define EDF_DISPLAY_WIDTH (2 * VLT_SIZE)
#define EDF_WIN_WIDTH (EDF_DISPLAY_WIDTH + (2 * PLOT_BORDER_WIDTH))
#if (EDF_SPECTRUM_SIZE < EDF_DISPLAY_WIDTH)
#define EDF_PLOT_WIDTH EDF_WIN_WIDTH
#else
#define EDF_PLOT_WIDTH (EDF_SPECTRUM_SIZE + (2 * PLOT_BORDER_WIDTH))
#endif
#define N_EDF_PLOT_PIXELS (EDF_PLOT_WIDTH * HST_PLOT_HEIGHT)

/* Number of signals generated by the offset detection module (V2) */
#define V2_SPECTRUM_SIZE 4
/* First location for offset (V2) signals in the spectrum array */
#define V2_SPECTRUM_OFFSET 1
/* First location for edge detector (V1) signals in the spectrum array */
#define EDF_SPECTRUM_OFFSET (V2_SPECTRUM_OFFSET + V2_SPECTRUM_SIZE)
/* Total number of input signals for ART2 in the classification channel */
#define TOT_SPECTRUM_SIZE (EDF_SPECTRUM_OFFSET + EDF_SPECTRUM_SIZE)
/* -- SunView -- */
extern Frame message_frame;
extern Canvas hst_canvas, edf_canvas;
extern Panel message_panel, img_proc_panel;
extern Panel_item msg_item, out_item;
extern Pixwin *hst_pw, *edf_pw, *edf_hdr_pw;

/* -- External procedures -- */
extern void slider_proc();
extern void roll_vlt_proc();
extern void mean_filter_proc();
extern void median_filter_proc();
extern void binary_filter_proc();
extern void set_binary_filter_proc();
extern void reset_mess_proc();
extern void batch_cwd_proc();
extern void batch_fil_proc();
extern void LGN_mult_proc();
extern void V2_mult_proc();
extern void old_LTM_proc();
extern void new_LTM_proc();
extern void read_LTM();
extern void write_LTM();

/* -- More external routines -- */
extern int read_test();
extern void make_IRP_histogram();
extern void hst_equalize();
extern void spiral_map();
extern void detect_edges();
extern void display_edf_results();
extern void detect_offset();
extern void ITC1();
extern int IRP_auto_proc();
extern void IRP_batch_proc();

/* External variables */

/* Width and height of histogram display image */
extern int hist_width, hist_height;
/* Width of display window for the edge detector spectrum */
extern int edf_width;
extern int IRP_hist_data[];   /* Image histogram array */
extern int hstmax;            /* Maximum count in histogram */
extern int hmax_loc;          /* Intensity val corresponding to max count */
/* Flag to indicate whether Histogram Array contains
   results that are valid for the current image */
extern u_char validhst_flag;
/* Flag to indicate whether Edge Detector Spectrum
   (V1) values are valid for the current image */
extern u_char valid_V1_flag;
/* Flag to indicate whether Centering Signal (V2)
   values are valid for the current image */
extern u_char valid_V2_flag;
/* Flag to indicate whether the program is operating in a batch mode */
extern u_char batch_flg;
/* Parameter blocks for manipulation of "STD" files */
extern int std_ioblk[32], std_pblk[32];
extern u_char *hst_image;
extern u_char *edf_image;
extern int binary_thresh;
extern FILE *fp_vlt, *fp_img, *fp_out;
extern int vlt_format, img_format, out_format;
/* Use information in SunView include files to define
   a structure for accessing the base frame's color map */
extern struct colormapseg bas_cms;
/* String to hold name of base frame color map segment */
extern char bas_cmsname[CMS_NAMESIZE];
/* Current Working Directory and File Name for Batch Sequence Information */
extern char seq_cwd[ ], seq_fname[ ];
/* Structure to hold sequence control information for batch operations */
extern struct sequence *batch_seq;
/* Structure to hold pointers for ART 2 result information */
extern struct ART2_res_ptrs log_class_info;
/* image_io.h    Printed on 18-December-1989 */
/* Global variables for display and manipulation of images in programs
   developed for Bob Harvey's IRP by KG Heinemann and P.N. DiCaprio */

#define FNL 49           /* No. of characters allowed in file name */
#define VLT_SIZE 256     /* Total number of color map entries */
/* Number of color map entries to use for BW display of input image */
#define IMG_VLT_SIZE 128
/* Height of color bar display for Video Look-up Table */
#define VLT_HEIGHT 10
/* Horizontal and vertical dimensions of canvas for image display */
/* Size of SUN Monitor Screen */
#define IMG_CANVAS_X_SIZE 1184
#define IMG_CANVAS_Y_SIZE 900
/* Define Image Parameters */
#define IMAGE_HEIGHT 512
#define IMAGE_WIDTH 512
/* Code to indicate successful "std" file I/O */
#define IMG_FILE_OK 1
/* Code to indicate error in reading data from a file */
#define DATA_RD_ERROR -1
/* Code to indicate attempt to read from a file that has not been opened */
#define FIL_NOT_OPN_ERR -10
/* Code to indicate that requested "std" pattern number is out of range */
#define BAD_STD_PAT_NUM -15

extern u_char red[ ], green[ ], blue[ ];
extern u_char *image;
extern char cwd[ ];
extern char img_fname[ ];
/* ASCII representation of pattern index number within the image input file */
extern char imgnum_str[ ];
extern char vlt_fname[ ];
extern char header[ ];
/* Pointer to error message string generated by image I/O routines */
extern char *err_str;
/* Number of rows already specified in a given panel */
extern int npnl_rows;
/* Command parameters for "std" file I/O */
extern int std_ioblk[ ];
extern int ok_img_file;
extern int box_flg;                  /* flag for defined image box */
extern int size_x, size_y;           /* input image width and height */
extern int zoom_x, zoom_y;           /* zoom magnification factors */
/* Horizontal and Vertical Offsets of Displayed Image within Image Canvas */
extern int img_x_offs, img_y_offs;
/* Thickness of a Default Scroll Bar for Proper Sizing of the Image Canvas */
extern int scroll_bar_thickness;
extern FILE *fp_vlt;

struct BOX_STRUCT {int size_x, size_y, x0, y0, x1, y1, x, y;};

/* -- Objects for SunView -- */
extern Frame base_frame;
extern Panel control_panel;
extern Panel_item cwd_item, file_item, num_item, hdr_item, vlt_item;
extern Panel_item csr_item, zoom_item, img_box_item;
extern Canvas img_canvas, vlt_canvas;
extern Menu img_menu;
extern Cursor img_cursor;
extern Pixwin *bas_pw, *img_pw, *vlt_pw;
extern struct BOX_STRUCT img_box;

/* -- External procedures -- */
extern void cwd_proc();
extern void img_open_proc();
extern void std_pnum_proc();
extern void vlt_open_proc();
extern int display_proc();
extern void clear_canvas_proc();
extern void zoom_proc();
extern void unzoom_proc();
extern void quit_proc();
extern void xy_proc();
/* netparam.h    Printed on 18-December-1989 */
/*---------------------------------------------------------------------------
   Header File for Neural Net Feature Extraction Algorithms
   in RL Harvey's IRP Software Testbed
   Created by KG Heinemann on 31-July-1989
-----------------------------------------------------------------------------*/
/* Number of pixels spanned by one dimension of square input window */
#define INPUT_WINDOW_SIZE 7
#define N_HIDDEN_NEURONS 25   /* No. of units in hidden layer */
#define N_OUTPUTS 1           /* No. of edge detector outputs */
/* Calculate proper number of locations needed for temporary storage
   of direct contributions to activation from input signals */
#if (N_HIDDEN_NEURONS > N_OUTPUTS)
#define MAX_NEURONS N_HIDDEN_NEURONS
#else
#define MAX_NEURONS N_OUTPUTS
#endif
/* Specify "C" data type for representation of activation levels
   in neural network edge detection algorithm */
#include "activation.h"
/*---------------------------------------------------------------------------*/
/* Define macro to perform multiplication of vectors by a matrix */
int irow,                  /* index for rows of matrix and output vector elements   */
    icol;                  /* index for columns of matrix and input vector elements */
int matrix_element_ptr;    /* Index for accessing individual matrix elements */
/* Pointer for accessing specific elements of the output vector */
ACTIVATION_DATA_TYPE *output_vector_ptr;

/* Vector-matrix product in direct orientation */
#define matrix_vector_product(nc, nr, matrix, input_vector, output_vector) \
    matrix_element_ptr = 0; \
    for (irow = 0; irow < nr; ++irow) \
    { \
        *(output_vector_ptr = output_vector + irow) = 0; \
        for (icol = 0; icol < nc; ++icol) \
            *output_vector_ptr += \
                *(input_vector + icol) * \
                (ACTIVATION_DATA_TYPE)*(matrix + matrix_element_ptr++); \
    }

/*---------------------------------------------------------------------------*/
/* Define procedure to compute activation of the hidden units as a macro */

/* Sum of differences between subsequent activation levels */
ACTIVATION_DATA_TYPE act_delta;
/* Iteration Counter for Procedure to Compute Hidden Unit Activations */
int iter_count;
int input_index;

#define compute_hidden_unit_activations(num_inp, num_hid, A_matrix, B_matrix) \
    /* compute direct contributions to hidden layer activations \
       from input signals and store them in designated array */ \
    matrix_vector_product \
        (num_inp, num_hid, B_matrix, input_neuron_signal, B_or_D_fi) \
    /* store these direct contributions as neuron activation levels \
       for first step of the iterative calculation procedure */ \
    for (input_index = 0; input_index < num_hid; input_index++) \
        hidden_neuron_signal[input_index] = B_or_D_fi[input_index]; \
    iter_count = 0;   /* Initialize iteration counter */ \
    /* Initialize difference measure so as to guarantee at least one iteration */ \
    act_delta = pow(2.0, 32); \
    /* Iterative calculation of activation levels for hidden units */ \
    while (act_delta > 0 && iter_count < 10) \
    { \
        for (input_index = 0; input_index < num_hid; input_index++) \
        { \
            /* Save current neuron activations for comparison \
               with results of next iteration */ \
            previous_activation[input_index] = \
                hidden_neuron_signal[input_index]; \
            /* Pass previous activation levels through the sigmoid rectification */ \
            sigrect[input_index] = step_sigmoid(previous_activation[input_index]); \
        } \
        /* Mediate interactions among hidden units by matrix-vector multiplication */ \
        matrix_vector_product \
            (num_hid, num_hid, A_matrix, sigrect, hidden_neuron_signal) \
        act_delta = 0.0;   /* Initialize the Difference Measure */ \
        for (input_index = 0; input_index < num_hid; input_index++) \
        { \
            /* Add in Direct Contributions from the Input Signals */ \
            hidden_neuron_signal[input_index] += B_or_D_fi[input_index]; \
            /* Compute Difference Measure and then Store New Activation Levels */ \
            act_delta += fabs(hidden_neuron_signal[input_index] - \
                              previous_activation[input_index]); \
        } \
        ++iter_count;   /* Increment iteration counter */ \
    }
/*---------------------------------------------------------------------------*/
/* Sigmoid functions to convert neuron activations into transmitted signals */
#define step_sigmoid(X) ((X) > 0 ? 1.0 : 0)
#define ramp_sigmoid(X) ((X) > 0 ? X : 0)

/* Error code to indicate problems during V2 offset calculations */
#define V2_CALC_ERR 2

/* --- Define Externals --- */
/* Array to Store Collected Edge Detector Results from Multiple Windows */
extern ACTIVATION_DATA_TYPE frwd_feature[];
/* Index to Indicate Next Unused Location in the "Edge Feature" Array
   and Cumulative Entry Counter for that Array */
extern int ftr_counter;
/* Pointer to memory region for compressed image in V2 module */
extern ACTIVATION_DATA_TYPE *V2_hidden_layer;
/* Strings for error messages in routines to read matrix coefficients */
extern char *A_str, *B_str, *C_str, *D_str;
/* Rectified signals transmitted by the input neurons */
extern ACTIVATION_DATA_TYPE input_neuron_signal[];
/* Activation levels for the hidden neurons */
extern ACTIVATION_DATA_TYPE hidden_neuron_signal[];
/* Activation levels for the output neurons */
extern ACTIVATION_DATA_TYPE output_neuron_signal[];
/* Array to store component of neuron activation caused by input stimulus */
extern ACTIVATION_DATA_TYPE B_or_D_fi[];
/* Array to store previous activation levels for iterative computation */
extern ACTIVATION_DATA_TYPE previous_activation[];
/* Array to store rectified input signals */
extern ACTIVATION_DATA_TYPE sigrect[];
/* Number of image pixels to skip when moving from the end of one row
   in an input window to the beginning of the next one */
extern int row_skip_increment;
/* Weighting factor for the LGN Sum signal in input to the ART 2 classifier */
extern float LGN_wt;
/* Weighting factor for the V2 Signals in input to the ART 2 classifier */
extern float V2_wt;
/* activation.h Printed on 18-December-1989 */
#define ACTIVATION_DATA_TYPE float
/* LTM.h    Printed on 18-December-1989 */
/* Header file to make Long Term Memory trace information
   available to programs outside the "ART2.c" file */

/* Number of output categories (F2 nodes) for the ART2 classifier */
extern int nF2;
/* Number of output category (F2) nodes
   which have been associated with particular input patterns */
extern int Nactv;
extern float **z;   /* Pointers to actual LTM values */

/* Structure for returning pointers to ART 2 result information */
struct ART2_res_ptrs
{
    int cat_node;
    int num_pass;
    float R_value;
};
/* cellview.c    Printed on 18-December-1989 */
/*
   Tool for viewing 512 X 512 test images for IRP
   Uses SUN-View system

   NOTE: this is just a cheap knockoff of JT's viewtool so I can just get
   the job done.  Better things are coming.

   Added test code to put boxes around sections of images. -- this will be
   useful when we have some fancy applications to run later.

   14-Feb-1989 KGH added code to compute and display image histogram
   05-06 October 1989 KGH added code to implement panel I/O
   for information pertaining to the ART 2 classifier.
*/

#define NUM_STR_LEN 11

/* -- includes -- */
#include <stdio.h>
#include <math.h>
#include <suntool/sunview.h>
#include <suntool/canvas.h>
#include <suntool/panel.h>
#include <suntool/scrollbar.h>
#include <local/STD.h>
#include "image_io.h"
#include "cellview.h"
#include "netparam.h"
#include "LTM.h"   /* ART 2 Long Term Memory Trace Information */

/* -- External variables -- */
int hist_width  = HST_WIN_WIDTH,    /* histogram display image width  */
    hist_height = HST_PLOT_HEIGHT;  /* histogram display image height */
/* Width of window for displaying edge detector spectrum */
int edf_width = EDF_WIN_WIDTH;
u_char *hst_image = NULL;   /* dynamically allocated hstgrm plot array */
/* Dynamically Allocated Edge Detector Spectrum Array */
u_char *edf_image = NULL;
/* Current Working Directory and File Name for Batch Sequence Information */
char seq_cwd[FNL], seq_fname[FNL];
/* Strings for ART 2 Control Panel Items */
char LGN_mstr[10], V2_mstr[10], old_LTM_file[FNL], new_LTM_file[FNL];
/* -- SunView -- */
Frame message_frame;
Canvas hst_canvas;
Canvas edf_canvas;
Canvas edf_hdr_canvas;
Panel message_panel, ART2_panel, img_proc_panel;
Panel_item batch_cwd_item, batch_fil_item, msg_item, out_item;
Panel_item ART2_hdr_item, LGN_mult_item, V2_mult_item,
           LTM_input_item, LTM_output_item;
Pixwin *hst_pw, *edf_pw, *edf_hdr_pw;
Pixfont *dispfont;

/* Use information in SunView include files to define
   a structure for accessing the base frame's color map */
struct colormapseg bas_cms;
/* String to hold name of base frame color map segment */
char bas_cmsname[CMS_NAMESIZE];
/* Text Header for Histogram Display Canvas */
char *hst_header = "Image Histogram";
/* Text Header for Display of Edge Detector Results */
char *edf_header = "Edge Detector Results";
/* Horizontal and Vertical Offsets of Displayed Image within Image Canvas */
int i, img_x_offs, img_y_offs;
/* Weighting factor for the LGN Sum signal in input to the ART 2 classifier */
float LGN_wt;
/* Weighting factor for the V2 Signals in input to the ART 2 classifier */
float V2_wt;
/* Files for storing ART 2 Long Term Memory Traces */
FILE *LTM_source_file = NULL, *LTM_output_file = NULL;
/* Structure to hold sequence control information for batch operations */
struct sequence *batch_seq = NULL;
/* Structure to hold pointers for ART 2 result information */
struct ART2_res_ptrs log_class_info;
/* -- main(): set up the window environment -- */
main(argc, argv)
int argc;
char **argv;
{
    initialize(argc, argv);
    setup_windows(argc, argv, "IRP Interactive Image Analysis Software Testbed");
    setup_img_menu();
    post_initialize();
    window_main_loop(base_frame);
    exit(0);
}

int batch_file_info()
{
    batch_cwd_item =
        panel_create_item(control_panel, PANEL_TEXT,
                          PANEL_LABEL_STRING, "Batch Directory:",
                          PANEL_VALUE, seq_cwd,
                          PANEL_VALUE_DISPLAY_LENGTH, FNL,
                          PANEL_NOTIFY_PROC, batch_cwd_proc, 0);
    batch_fil_item =
        panel_create_item(control_panel, PANEL_TEXT,
                          PANEL_LABEL_STRING, "Batch SEQ File:",
                          PANEL_VALUE, seq_fname,
                          PANEL_VALUE_DISPLAY_LENGTH, FNL,
                          PANEL_NOTIFY_PROC, batch_fil_proc, 0);
    return(2);
}

int aux_file_info()
{
    /* NULL ROUTINE - No other file information used in this application */
    return(0);   /* Tell calling program that no new lines have been added */
}

void aux_buttons()
{
    /* NULL ROUTINE - No other control panel buttons used in this application */
}
void aux_panels()
{
void mk_ART2_panel();
    mk_ART2_panel();
}

aux_windows()
{
/* Upper Y-coordinate of highest display canvas in the base frame */ int top_y;
/* Create canvas area for displaying the image histogram */
hst_canvas =
window_create (base_frame, CANVAS,
WIN_RlGHT_OF, img_canvas,
WIN_X, size_x+scroll_bar_ thickness+8,
CANVAS_WIDTH, hist_width,
CANVAS_HEIGHT, HST_WIN_ HEIGHT,
WIN_WIDTH, hist_width,
WIN_HEIGHT, HST_WIN_HEIGHT, 0);
/* Create canvas area for displaying the spectrum of edge detector results */ edf_ canvas =
window_create(base_frame, CANVAS ,
WIN_X, size_ x+scroll_bar_thickness+28,
WIN_WIDTH, edf_width,
WIN_ HEIGHT, HST_WIN_HEIGHT,
0);
window_ set (edf_canvas,
CANVAS_AUTO_SHRINK, FALSE,
CANVAS_WIDTH, EDF_PLOT_WIDTH,
CANVAS_HEIGHT, HST_WIN_HEIGHT,
WIN_HORIZONTAL_SCROLLBAR, scrollbar_ create ( 0),
0);
/* Create canvas area to display a text header for the edf canvas */
edf_hdr_canvas =
window_create(base_frame, CANVAS,
WIN_X, size_x+scroll_bar_thlckness+200, WIN_WIDTH, 212,
WIN_HEIGHT, 20,
0);
/* Create control panel for selection of image processing algorithms.
Do this step after laying out all the other canvases and computing final width of the enclosing frame, so that this particular panel
can strectch across that entire width */ mk_img_proc_panel ();
/* Update those positional parameters of the display canvases which
depend on the exact placement of the assorted control panels */ wlndow_set (vlt_canvas, WIN_BELOW, img_proc_panel, 0);
window_set (img_canvas, WIN_BELOW, vlt_canvas, 0);
/* Set proper positions for canvases which display processing results */ top_y = (int) window_get (img_canvas, WlN_Y);
window_set (hst_canvas, WIN_Y, top_ y, 0);
window_ set (edf_canvas, WIN_Y, top_y+HST_ WIN_HEIGHT+48, 0);
window_set (edf_hdr_canvas, WIN_Y, top_y+HST_WlN_HEICHT+23, 0);
hst_pw = canvas_pixwin(hst_canvas);
edf_pw = canvas_pixwin(edf_canvas);
edf_hdr_pw = canvas_pixwin(edf_hdr_canvas);
}
initialize(argc,argv)
int argc;
char **argv;
{
u_char *reset_image_block();
/* Store name of Current Working Directory in "cwd" */
getcwd(cwd,FNL);
/* ... and in the Current Working Directory for Batch Sequence Information */ strcpy (seq_cwd, cwd);
/* Initialize names of image file and video look-up table file */
strcpy (img_fname,"");
strcpy (vit_fname, "grayscale");
/* Initialize multipliers for the LGN Sum signal and the V2 signals
and format strings for display in the ART 2 control panel */
LGN_wt = 0.001;
sprintf (LGN_mstr, "%.3f", LGN_wt);
V2_wt = 0.100;
sprintf (V2_mstr, "%.1f, V2_wt);
/* Open auxiliary font for text display */
dispfont = pf_open("/usr/lib/fonts/fixedwidthfonts/cour.b.16");
if (dispfont==NULL)
errmess ("Initialize error opening auxiliary font file");
/* Determine thickness of a default scroll bar */
scroll_bar_thlckness = (int) scrollbar_get (SCROLLBAR, SCROLL_THICKNESS); hst_image =
reset_image_block (hst_image, hist_width, hist_height, "histogram");
edf_ image =
reset_image_block (edf_image, EDF_PLOT_WIDTH, hist_height, "edf spectrum*);
}
post_Initialize()
vlt_open(); /* Set up video lookup tables */
/* Display appropriate heading text in histogram display canvas */
pw_text(hst_pw, 64, 18, PIX_SRC | PIX_COLOR( 2 ), dispfont, hst_header);
/* Display appropriate heading text for edge detector results canvas*/ pw_text(edf_hdr_pw, 1, 12, PIX_SRC|PIX_COLOR( 2) , dispfont, edf_header);
reset_all_panels(); /* Display default control panel strings */
/* Read in matrix coefficients for neural network edge detectors (V1) */ get_edf_matrix_elements();
/* Read in matrix coefficients for neural network offset detectors (V2) */ get_V2_matrix_elements();
/* Allocate memory for information used by the ART2 classifier */
ART_start (TOT_SPECTRUM_SIZE);
if (ok _img_ file) display_proc(); /* Activate display and interaction */ }
reset()
{
u_char *reset_image_block();
image = reset_image_block(image, size_x, size_y, "input");
hst_image =
reset_image_block(hst_image, hlst_wldth, hist_height, "histogram");
edf_ image =
reset_image_ block(edf_image, EDF_PLOT_WIDTH, hist_ height, "edf spectrum"); reset_all_panels(); /* ---------------------* /
/* Procedures */
/* ---------------------* /
/*------------------------------------------------------------------------------------------------------------------------*/
void mk_ART2_ panel( )
{
int pnl_x;
/* Create new frame to hold the panel */
ART2_panel =
window_create (bast_frame, PANEL,
WIN_RlGHT_OF, control_panel, 0);
/* Display appropriate heading text in ART2 control panel canvas */
ART2_hdr_ item = panel_create_ item(ART2_panel, PANEL_TEXT,
PANEL_LABEL_STRING, "Controls for ART 2 Classifier",
PANEL_ LABEL_FONT, dispfont,
PANEL_LABEL_X, ATTR COL(15),
PANEL_ LABEL_ Y, 10,
PANEL_VALUE_DISPLAY_LENGTH, 0, 0);
LGN_ mult_ item = panel_create_item(ART2_panel, PANEL_TEXT,
PANEL_LABEL_STRING,
"Multiplier for LGN Sum:",
PANEL_LABEL_Y, ATTR_ ROW(2),
PANEL_ LABEL"_ X, ATTR_COL(0),
PANEL_VALUE, LGN_mstr,
PANEL_ VALUE_Y, ATTR_ROW( 2),
PANEL_VALUE_X, ATTR_COL(27),
PANEL_ VALUE_DISPLAY_ LENGTH, NUM_STR_LEN, PANEL_NOTIFY_PROC, EGN_mult_proc , 0);
V2_ mult_item = panel_create_ item(ART2_panel, PANEL_ TEXT,
PANEL_LABEL_STRING,
"Multiplier for V2 signals:",
PANEL_ VALUE, V2_mstr,
PANEL_VALUE_DISPLAY_ LENGTH, NUM_STR_LEN, PANEL_NOTIFY_PROC, V2_mult_proc, 0);
LTM_input_ item = panel_create_itera(ART2_ panel, PANEL_TEXT,
PANEL_LABEL_STRING, "LTM input file:",
PANEL_VALUE, old LTM_file,
PANEL_ VALUE_DISPLAY_ LENGTH, FNL,
PANEL_NOTIFY_PROC, old_LTM_proc , 0);
LTM_output_ item = panel_create_item(ART2_panel, PANEL_TEXT,
PANEL_ LABEL_STRING, "LTM save file:", PANEL_VALUE, new LTM_file,
PANEL_VALUE_DISPLAY_LENGTH, FNL,
PANEL_NOTIFY_PROC, new_LTM_proe, 0);
/* Set Active Window Item to Field for the "LTM Input File" */
wlndow_set(ART2_panel, PANEL_CARET_ITEM, LTM_input_itern, 0);
/* Create "button" items for interactive recall and storage of LTM values */ panel_create_item(ART2 panel, PANEL_BUTTON,
PANEL_NOTIFY_PROC, read_LTM,
PANEL_LABEL_ lMAGE,
panel_button_ image(control_ panel, "F etch LTM Values" ,17, 0),
0);
panel_create_item(ART2 panel, PANEL_ BUTTON,
PANEL_ NOTIFY_ PROC, write_ LTM,
PANEL_ LABEL_lMAGE,
panel_button_image (control_panel, "Store LTM Values", 17,0),
PANEL_ ITEM_ Y, (ATTR_ROW( 6)-4),
PANEL_ ITEM_X, (ATTR_COL(20)-2),
0);
/* Scale contents of ART 2 control panel to fit in available space */ window_fit(ART2_panel);
    pnl_x = (int)window_get(ART2_panel, WIN_X);
    pnl_x += 16;
    window_set(ART2_panel, WIN_X, pnl_x, 0);
}

void LGN_mult_proc()
{
    strncpy(LGN_mstr, (char *)panel_get_value(LGN_mult_item), NUM_STR_LEN);
    sscanf(LGN_mstr, "%f", &LGN_wt);
}

void V2_mult_proc()
{
    strncpy(V2_mstr, (char *)panel_get_value(V2_mult_item), NUM_STR_LEN);
    sscanf(V2_mstr, "%f", &V2_wt);
}

void old_LTM_proc()
{
    char *err_str;
    int num_char;

    strncpy(old_LTM_file, (char *)panel_get_value(LTM_input_item), FNL);
    /* Close any LTM input file which may have been opened previously */
    if (LTM_source_file != NULL)
        fclose(LTM_source_file);
    LTM_source_file = fopen(old_LTM_file, "r");
    if (LTM_source_file == NULL)
    {
        num_char = 49 + strlen(old_LTM_file);
        err_str = (char *)calloc(num_char, sizeof(char));
        strcpy(err_str, "Problem opening file \"");
        strcat(err_str, old_LTM_file);
        strcat(err_str, "\" as source for LTM traces.");
        message(err_str);
        free((char *)err_str);
    }
}

void new_LTM_proc()
{
    char *err_str;
    int num_char;

    strncpy(new_LTM_file, (char *)panel_get_value(LTM_output_item), FNL);
    /* Close any LTM output file which may have been opened previously */
    if (LTM_output_file != NULL)
        fclose(LTM_output_file);
    LTM_output_file = fopen(new_LTM_file, "w");
    if (LTM_output_file == NULL)
    {
        num_char = 54 + strlen(new_LTM_file);
        err_str = (char *)calloc(num_char, sizeof(char));
        strcpy(err_str, "Problem opening file \"");
        strcat(err_str, new_LTM_file);
        strcat(err_str, "\" for storage of new LTM traces.");
        message(err_str);
        free((char *)err_str);
    }
}
void read_LTM()
{
char *err_str;
int num_vals, num_writ, num_char;
if (LTM_source_file == NULL)
message ("Source file for LTM trace values has not been opened.");
else
}
/* Read in number of category (F2) nodes that were associated
with particular input patterns */ num_writ = fread (&Nactv, sizeof(int), 1, LTM_ source_file);
if (num_writ != 1)
{
num_char = 54 + strlen(old_LTM_ file);
err_str = (char *)calloc(num_char, sizeof (char));
strcpy
(err str, "Problem reading number of assigned nodes from file \""); strcat(err_str, old_ LTM_ file);
strcat(err_str, "\".");
message (err_str);
free ((char *)err_str);
}
else
/* Read in actual LTM memory trace values */
num_ vale = 2 * TOT_ SPECTRUM_ SIZE * nF2;
num_writ = fread (z(0), sizeof(float), num_vale, LTM_ source_file); if (num_writ != num_vals)
{
num_char = 46 + strlen(old_LTM_ file);
err_str = (char *)calloc(num_ char, sizeof (char));
strcpy(err_ str, "Problem reading LTM trace values from file \"") strcat(err_ str, old_LTM_file);
strcat(err"_ str, "\".");
message (err_ str);
free ((char *)err_ str);
}
}
}
/* Reset all information about the LTM output file
to prevent inadvertent reuse. */ fclose(LTM_source_ file);
LTM_source_flle = NULL;
strcpy(old_LTM_file, "");
panel_set(LTM_Tnput_item, PANEL_VALUE, old_LTM_ file, 0);
}
void write_LTM()
{
char *err_ str;
int num_vals, num_writ, num_char;
if (LTM_output_file == NULL) message ("Flle to receive LTM trace values has not been opened.");
else
{
/* Save number of category (F2) nodes that have been associated
with particular input patterns */ num writ = fwrite (&Nactv, sizeof(int), 1, LTM_output_ file);
if Tnum writ != 1)
{
num char = 52 + strlen(new_LTM_ file);
err_ str = (char * )calloc(num_char, sizeof (char));
strcpy
(err_ str, "Problem writing number of assigned nodes to file \""); strcat(err_str, new_LTM_file);
strcat(err_ str, "\".");
message (err_str);
free ( (char *)err_ str);
}
else
/* Save actual LTM memory trace values */
nυm_vals = 2 * TOT SPECTRUH_SIZE * nF2;
num writ = fwrite (z[0], sizeof (float), num_vals, LTM_output_file); if (num_writ != num_vals)
{
num_char = 44 + strlen(new_LTM_file);
err_str = (char *)calloc(num_char, sizeof (char)); strcpy(err_str, "Problem writing LTM trace values to file \""); street (err_str, new_LTM_file);
strcat(err_ str, "\".");
message (err_str);
free ( (char *)err_str);
}
}
}
/* Reset all information about the LTM output file
to prevent inadvertent reuse. */ fclose(LTM_output_file);
LTM_output_file = NULL;
strcpy(new_LTM_ file, "");
panel_set(LTM_output_item, PANEL_VALUE, new_LTM_file, 0);
}
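/* Illustrative sketch added in editing (not part of the original listing):
   read_LTM() and write_LTM() above imply a simple binary layout for an LTM
   trace file: a single int holding the number of assigned F2 category nodes,
   followed by 2 * TOT_SPECTRUM_SIZE * nF2 float trace values.  The helper
   below, with the hypothetical name load_ltm_traces(), shows how a file in
   that layout could be read back into a caller-supplied buffer. */
#if 0 /* sketch only; assumes <stdio.h> has been included */
static int load_ltm_traces(const char *path, int *n_active,
                           float *traces, int n_traces)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;                              /* could not open the file */
    if (fread(n_active, sizeof(int), 1, fp) != 1 ||
        fread(traces, sizeof(float), n_traces, fp) != n_traces)
    {
        fclose(fp);
        return -2;                              /* short or corrupt file */
    }
    fclose(fp);
    return 0;                                   /* success */
}
#endif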
vlt_open() /* Activate video lookup tables for image display window */
/* Modified by KG Heinemann on 06-MARCH-1989 and 07-MARCH-1989
   to include colormap information from the image VLT, the base frame,
   and all pre-existing windows in the VLT ramp display */
{
int i, color_fetch_index;
/* Set up video lookup table for the histogram plot display */
set_hist_vlt();
/* Set up video lookup table for the display of edge detector results */
set_edf_vlt();
/* Close any video lookup table file that was already opened */
fclose(fp_vlt);
fp_vlt = fopen(vlt_fname, "r"); /* Open new video lookup table file */
/* Set video lookup table for actual image */
set_colormap(img_pw, fp_vlt, vlt_fname, 0);
/* Set video lookup table for VLT encoding display */
/* Extract colormap values set by pre-existing windows and
   store them in corresponding locations of the colormap array
   (This is accomplished without upsetting the window manager
   by fetching the colormap entries one at a time) */
/* Start colormap retrieval at the next element after
   the end of the base frame's colormap segment */
color_fetch_index = bas_cms.cms_size;
for(i=(bas_cms.cms_addr+bas_cms.cms_size); i<VLT_SIZE; ++i)
{
pw_getcolormap(bas_pw, color_fetch_index, 1, red+i, green+i, blue+i);
++color_fetch_index;
}
pw_setcmsname(vlt_pw, "ramp_vlt");
pw_putcolormap(vlt_pw, 0, VLT_SIZE, red, green, blue);
/* Transfer colormap information from pixrect to the pixwin
   which has been allocated for the VLT encoding display */
draw_rainbow(vlt_pw);
}
int display_proc()
{
int retval;
/* Allocate memory to store image */
image = reset_image_block(image, size_x, size_y, "input");
/* Release any memory assigned to the V2 image */
if (V2_hidden_layer != NULL )
{
free((ACTIVATION_DATA_TYPE *)V2_hidden_layer);
V2_hidden_layer = NULL;
}
retval = show_image(batch_flg);
return (retval);
}
void clear_canvas_proc()
{
pw_writebackground(img_pw, img_x_offs, img_y_offs,
IMG_CANVAS_X_SIZE, IMG_CANVAS_Y_SIZE, PIX_SRC);
/* Clear any histogram displayed for the old image */
pw_writebackground(hst_pw, 0, 24, hist_width, HST_PLOT_HEIGHT, PIX_SRC);
/* Set flag to indicate that histogram information is now invalid */
validhst_flag = 0;
/* Clear any edge detector spectrum displayed for the old image */
pw_writebackground(edf_pw, 0, 0, EDF_PLOT_WIDTH, HST_WIN_HEIGHT, PIX_SRC);
/* Set flags to indicate that edge detector spectrum (V1)
   and centering signal (V2) information is now invalid */
valid_V1_flag = valid_V2_flag = 0;
}
void quit_proc( )
{
window_set(base_ frame, FRAME_NO_CONFIRM, TRUE, 0);
window_destroy(base_frame);
}
#include <errno.h>
void batch_cwd_proc()
{
strncpy (seq_cwd, (char *)panel_get_value(batch_cwd_item), FNL);
}
void batch_fil_proc()
{
int stat, read_seq();
void seqfil_reset();
/* Attempt to switch current working directory to
   directory path specified for the batch sequence file */
switch( chdir(seq_cwd) )
{
case 0: /* Successful Switch */
strncpy (seq_fname, (char *)panel_get_value(batch_fil_item), FNL);
if (batch_seq==NULL)
batch_seq = (struct sequence *)calloc(1, sizeof (struct sequence));
if (batch_seq==NULL)
{
message ("Could not allocate sequence structure.\n");
seqfil_reset();
}
else
{
stat = read_seq(batch_seq, seq_fname);
if (stat<0)
{
message ("Could not open sequence file.\n");
seqfil_reset();
}
}
break;
case ENOTDIR:
message("Non-directory component in path to batch sequence file!");
break;
default:
message ("Cannot access directory specified for batch sequence file!");
strcpy(seq_cwd, cwd);
panel_set(batch_cwd_item, PANEL_VALUE, seq_cwd, 0);
}
}
void seqfil_reset()
{
strcpy(seq_fname, "");
panel_set(batch_fil_item, PANEL_VALUE, seq_fname, 0);
}
/* image_util.c Printed on 18-December-1989 */
/*
Image I/O and Display Manipulation Routines for the Software Testbed Developed by KG Heinemann and P.N. DiCaprio to Test R.L. Harvey's
Neural Network Architecture for General Object Recognition
"vio.c" Modified by KG Heinemann on 18-MAy-1989 to facilitate
use of M.M. Menon's "STD" file format for input images
"imgbox_io.c" adapted from "vio.c" by KG Heinemann on 08-AUGUST-1989
"vio.c" and "imgbox_io.c" merged into one set of routines
on 27-September-1989 and 28-September-1989.
*/
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <suntool/sunview.h>
#include <sυntool/canvas.h>
#include <suntool/panel.h>
#include <suntool/scrollbar.h>
#include <local/STD.h>
#include "image_io.h"
/* Look up table for displaying an 8 bit image with a compressed colormap */ int img_display_value[VLT_SIZE];
int npnl_rows; /* Number of rows already specified in a given panel */
/*---------------------------------------------------------------------------------------------------------------------------------* /
/* Variables for Image File I/O */ char cwd[FNL]; /* Current working directory */
char img_fname[FNL]; /* Image file name */
/* ASCII representation of pattern index number within the image input file */ char imgnum_str[4];
Panel control_panel;
Panel_item
cwd_ item, file_item, num_item, hdr_item, vlt_item, csr_item, zoom_item;
int ok_img_file=0;
FILE *fp_img; /* stream pointer for ASCII input files */
/* Parameter blocks for manipulation of "STD" files */
int std_ioblk[32], std_pblk[32];
/* Flag to indicate whether input image is coming from a "non-STD" file */ int nonstd_flag;
/* Flag to indicate whether an "STD" input image file has been opened */ u_char std_open;
char header[FNL]; /* Publically Available Portion of "STD" file header */ /*---------------------------------------------------------------------------------------------------------------------------------* / /* Variables for Dynamically Allocated Image Memory Blocks */ /* Error Message String for the "reset_image_block" Subroutine */
char *rib_err_string =
"reset_image_block: insufficient memory available for ";
char *image_str = " image. \n ";
u_char *image=NULL; /* dynamically allocated image array */
/*---------------------------------------------------------------------------------------------------------------------------------* /
/* Image Display Parameters */
Frame base_frame; /* Structure to describe top level window In SunView */
Canvas img_canvas; /* Structure to describe image display area in SunView */
Pixwin *bas_pw, *img_pw;
Menu img_menu;
Cursor img_cursor;
int size_x=IMAGE_WIDTH, /* input image width */
size_y=IMAGE_HEIGHT; /* input image height */
int zoom_x=1, zoom_y=1; /* zoom magnification */
/* Horizontal and Vertical Offsets of Displayed Image within Image Canvas */ int i, img_x_offs, img_y_offs;
/* Thickness of a Default Scroll Bar for Proper Sizing of the Image Canvas */ int scroll_bar_ thickness;
/*---------------------------------------------------------------------------------------------------------------------------------* /
/* Information Pertaining to Global Video Look-Up Table Values and Handles for Color Bar Display Area */ char vlt_ fname[FNL]; /* VLT file name */
FILE *fp_vlt; /* Stream pointer for VLT file name */
Canvas vlt_canvas;
Pixwin *vlt_pw;
u_char red[VLT_SIZE], green[VLT_SIZE], blue[VLT_SIZE];
/*---------------------------------------------------------------------------*/
int box_flg = -1; /* flag for defined image box */
char box_str[FNL];
Panel_ item img_box_ item;
struct BOX_STRUCT img_box;
/*---------------------------------------------------------------------------------------------------------------------------------* /
/* Variables for Manipulating Error Message Strings */
char *err_ str;
char *oper_ prefix = "Problem while opening ";
char *oper_suffix = " input image file!";
char *FilNOpen_str = "read_image: file not open!";
char *ASC_rderr_str = "read_image: Error reading data from ASCII image file!";
char *STD_rderr_str = "read_image: Error reading data from STD image file!";
char *wrong_num_str =
"Image number     is out of range; the input file contains only     patterns! ";
/*---------------------------------------------------------------------------------------------------------------------------------* / setup_windows(argc, argv, frlabstr)
int argc;
char **argv;
char *frlabstr;
{
struct {u_char rval, gval, bval;} fgnd_color;
fgnd_color.rval=162; fgnd_color.gval=0; fgnd_color.bval=223;
/* Create handle for the overall display */
base_frame = window_create(NULL, FRAME, FRAME_LABEL, frlabstr,
FRAME_EMBOLDEN_LABEL, TRUE,
FRAME_FOREGROUND_COLOR, fgnd_color,
WIN_X, 4, WIN_Y, 0, FRAME_ARGS, argc, argv, 0);
/* Create panel for input image selection and display manipulation */
control_panel = window_create(base_frame, PANEL, 0);
/* At start of control panel construction,
set number of preceding lines to zero */
npnl_rows = 0;
/* Create lines for communication about the batch control file, if any */ npnl_rows += batch_file_info();
/* Create text lines to communicate display status information */
cwd_ltem =
panel_create_item(control_panel, PANEL_TEXT,
PANEL_ LKBEL_STRING, "Image Directory:",
PANEL_VALUE, cwd,
PANEL_ VALUE_ DISPLAY_LENGTH, FNL,
PANEL_NOTIFY_FROC, cwd_proc, 0);
++npnl_rows;
file_item =
panel_ create_ltem(control_ panel, PANEL_ TEXT,
PANEL_ LXBEL_ STRING, "Image File Name:",
PANEL_VALUE, img_fname,
PANEL_VALUE_ DISPLAY_LENGTH, FNL,
PANEL_NOTIFY_PROC, img_open_proe, 0);
++npnl_rows;
num_item = panel_create_ item(control_panel, PANEL_TEXT,
PANEL_ LXBEL_STRING, "image Number:",
PANEL_VALUE, imgnum_str,
PANEL_VALUE_DISPLAY_ LENGTH, FNL,
PANEL_NOTlFY_PROC, std_pnum_proc. 0);
++npnl_rows;
hdr_ item = panel_create_itemlcontrol_ panel, PANEL_TEXT,
PANEL_LXBEL_STRING, "File Header:",
PANEL_VALUE, header,
PΛNEL_VALUE_DISPLAY_LENGTH, FNL, 0);
++npnl_ rows; vlt_ item =
panel_create_ item(control_panel, PANEL_ TEXT,
PANEL_ LXBEL_STRING, " VLT File Name:",
PANEL_VALUE, vlt fname,
PANEL_ VALUE_ DISPLAY_ LENGTH, FNL,
PANEL_NOTiFY_PROC, vlt_open_proc, 0);
++npnl_rows;
npnl_ rows += aux_flle_lnfo();
csr_item = panel_create_ item(control_panel, PANEL_ TEXT,
PANEL_LXBEL_STRING, "Cursor position:", PANEL_ VALUE_ DISPLAY_ LENGTH, FNL,
0);
++npnl_rows;
zoom_ item = panel_create_item(control_panel, PANEL_TEXT,
PANEL_LABEL_STRING, "Current Zoom Factors:", PANEL_ VALUE_ DISPLAY_LENGTH, FNL,
0);
++npnl_rows;
img_box_ item = panel_create_item(control_panel, PANEL_TEXT,
PANEL_LABEL_STRING, " Image box:",
PANEL_VALUE, box_str,
PANEL_VALUE_DISPLAY_LENGTH, TNL, 0);
++npnl_rows;
/* Set Active Window Item to the "Image File" field */
window_set(control_panel, PANEL_CARET_ITEM, file_item, 0);
/* Create "button" items to receive interactive user Instructions */ panel_create_item(control_panel, PANEL BUTTON,
PANEL_NOTIFY_ PROC, display_proc,
PANEL_LABEL_lMAGE,
panel_button_ image(control_panel, "Display", 9,0),
PANEL_ITEM_Y, (ATTR_ROW(npnl_rows) - 10),
PANEL_ ITEM_X, ATTR_COL(0),
0);
panel_ create_ item(control_ panel, PANEL_BUTTON,
PANEL_NOTIFY_PROC, clear_ canvas_ proc,
PANEL_LABEL_lMAGE,
panel_button_ iaage(control_ panel, "Clear", 7,0),
0);
panel_ create_item(control_panel, PANEL_BUTTON,
PANEL_NOTIFY_PROC, zoom_proc,
PANEL_LABEL_lMAGE,
panel_button_image( control_panel, "Zoom", 6,0),
0);
panel_create_ item(control_ panel, PANEL_ BUTTON,
PANEL_ NOTIFY_ PROC, unzoom_proc,
PANEL_LABEL_TMAGE,
panel_ button_image(centrol_panel, "UnZoom",8,0),
0); panel_ create_ item(control_panel, PANEL_BUTTON,
PANEL_NOTIFY_PROC, quit_ proc,
PANEL_ LABEL_lMAGE,
panel_button_ image(control_panel, "Quit", 6,0),
0);
aux_buttons();
mk_messwin(); /* Create window for displaying error messages */
/* Scale contents of control panel to fit in available space */
window_fit(control_panel);
aux_panels();
/* Set Up SunView Canvas for Display of Input Image */
/* Create canvas area for displaying the video lookup table */
vlt_canvas =
window_create (base_frame, CANVAS,
WIN_X, 0,
CANVAS_AUTO_SHRINK, FALSE,
CANVAS_HEIGHT, VLT_HEIGHT,
WIN_WIDTH, VLT_SIZE,
WIN_HEIGHT, VLT_HEIGHT, 0);
/* Create crosshair image cursor */
img_cursor = cursor_create(CURSOR_SHOW_CROSSHAIRS, TRUE,
CURSOR_CROSSHAIR_LENGTH, 15 ,
CURSOR_CROSSHAIR_GAP, 5 , 0);
/* Determine thickness of a default scroll bar */
scroll_bar_thickness = (int) scrollbar_get(SCROLLBAR, SCROLL_THICKNESS); /* Creat canvas area for displaying the actual image */
img_canvas =
window_create(base_ frame, CANVAS,
WIN_X, 0,
CANVAS_AUTO_SHRINK, FALSE,
CANVAS_WIDTH, IMG_CANVAS_X_SIZE,
CANVAS_HEIGHT, IMG_CANVAS_Y_SlZE,
WIN_WlDTH, IMAGE_WIDTH + scroll_ bar_thickness,
WIN_ HEIGHT, IMAGE_HEIGHT + scroll_bar_ thickness,
WIN_VERTICAL_SCROLLBAR, scrollbar_ create(0),
WIN_ HORIZONTAL_SCROLLBAR, scrol lbar_create (0),
WIN_EVENT_PROC, xy_proc,
WIN_CURSOR, img_cursor,
0);
aux_windows(); /* Create any other windows used in this application */ /* Horizontal and vertical scaling for overall window */
window_fit(base_frame);
bas_pw = (Pixwin *)window_get(base_frame, WIN_PIXWIN);
vlt_pw = canvas_ pixwin(vlt_canvas);
img_pw = canvas_pixwin(img_canvas);
}
set_colormap(pw, fp, name, first_loc) /* Modified on 02-MAR-1989 by KGH so that the size and location of the colormap segment are specified by arguments */
Pixwin *pw;
FILE *fp;
char *name;
int first_loc; /* Index of first colormap location to set */
/* Set up video look up table for image display */
{
int i, r, g, b, color_val, color_dec;
color_dec = (VLT_SIZE + IMG_VLT_SIZE - 1) / IMG_VLT_SIZE;
if(fp==NULL) /* No VLT file - default is gray scale */
{
color_val = VLT_SIZE - 1;
for(i=first_loc; i<IMG_VLT_SIZE; ++i)
{
red[i] = green[i] = blue[i] = color_val;
color_val-=color_dec;
}
pw_setcmsname (pw, name);
}
else
/* Read color value triplets from file and store in arrays */
{
for(i=first_loc; i<IMG_VLT_SIZE; ++i)
{
fscanf (fp, "%d %d %d", &r, &g, &b);
red[i]=r; green[i]=g; blue[i]=b;
}
rewind(fp);
}
/* Assign video look up table to the color map */
pw_putcolormap(pw, first_loc, IMG_VLT_SIZE,
red + first_loc,
green + first_loc,
blue + first_loc);
/* Set up look up table for compressing the image before display */
color_val = (VLT_SIZE - 1) / color_dec;
for(i=0; i<VLT_SIZE; ++i)
img_display_value[i] = color_val - (i / color_dec);
}
draw_rainbow(pw)
Pixwin *pw;
/* Write out sequence of vertical bars to activate actual VLT display */
{
int i;
for(i=0; i<VLT_SIZE; ++i)
pw_rop(pw, i, 0, 1, VLT_HEIGHT, PIX_SRC | PIX_COLOR(i), (Pixrect *)0, 0, 0);
}
reset all_panels()
/* Refresh default entries on display of master control panel */ {
panel_set(file_item, PANEL_VALUE, img_fname, 0);
panel_set (num_item, PANEL_VALUE, imgnum_str, 0);
panel_set(hdr_item, PANEL_VALUE, header, 0);
panel_ set(vlt_ item, PANEL_VALUE, vlt_fname, 0);
}
int img_open(batch_flg)
/* flag to indicate whether calling program is operating in a batch mode */ u_char batch_flg;
{
void imgfil_reset();
int str_index, stat, retval, emes_len;
char ftype[9], class_name[10];
char *suf_char_ptr, *std_hdbf;
retval = 1; /* Set default return value to indicate success */
/* Extract file type suffix from name of image file */
str_index=0;
if ((suf_char_ptr = strchr(img_fname, '.')) != NULL)
while(*suf_char_ptr != '\0')
if(str_index<8)
ftype[str_index++] = *(++suf_char_ptr);
else
{
ftype[str_index] = '\0';
break;
}
if (nonstd_flag=strcmp(ftype, "std"))
{
fp_img = fopen(img_fname, "r");
if(fp_img==NULL)
{
/* Allocate memory and construct an actual error message string */
emes_len = strlen(oper_prefix) + strlen(oper_suffix) + 5;
err_str = (char *)calloc(emes_len, sizeof (char));
strcpy (err_str, oper_prefix);
strcat (err_str, "ASCII");
strcat (err_str, oper_suffix);
/* Display error message in pop-up window if operating in interactive mode */
if (!batch_flg)
{
message(err_str);
free((char *)err_str);
}
retval = 0; /* Set return value to indicate a problem */
imgfil_reset();
}
else
{
size_x = 512;
size_y = 512;
}
}
else
{
std_open = FALSE;
std_hdbf = (char *)calloc(FNAMLEN, sizeof (char));
stat = init_ rd_std(std_ ioblk, std_ pblk, std_hdbf, img_ fname);
if (stat<0)
{
/* Allocate memory and construct an actual error message string */
emes_len = strlen(oper_prefix) + strlen(oper_suffix) + 3;
err_str = (char *)calloc(emes_len, sizeof (char));
strcpy (err_str, oper_prefix);
strcat (err_str, "STD");
strcat (err_str, oper_suffix);
/* Display error message in pop-up window if operating in interactive mode */
if (!batch_flg)
{
message (err_str);
free((char *)err_str);
}
retval = 0; /* Set return value to indicate a problem */
imgfil_reset();
}
else
{
std_open = TRUE;
size_ x = std_pblk [2];
size_y = std_pblk[ 1];
for(str_index=0; str_index<(FNL-1); ++str_index)
header[str_index] = std_ hdbf[str_index];
header[FNL-1] = '\0';
}
free((char *)std_hdbf);
}
return(retval);
}
void imgfil_reset ()
{
strcpy(img_fname, "");
panel_set(file_item, PANEL_VALUE, img_fname, 0);
}
/*---------------------------------------------------------------------------*/
/* Routine to perform dynamic allocation of image arrays */
u_char *reset_image_block(img_ptr, x_dim, y_dim, type_string)
u_char *img_ptr; /* Pointer to memory region designated for image */
int x_dim, y_dim; /* Dimensions of specified image */
char *type_string; /* Image type information for use in error message */
{
int aux_index, err_index;
/* Release any memory which has already been assigned for the image */
if (img_ptr != NULL )
free((char *)img_ptr);
/* Allocate new memory block for image storage */
img_ptr = (u_char *)calloc(x_dim*y_dim, sizeof (u_char));
if (img_ptr==NULL )
{
err_index=53;
for(aux_index=0; type_string[aux_index] !='\0'; aux_index++)
rib_err_string[err_index++] = type_string[aux_index];
for(aux_index=0; image_str[aux_index]!='\0'; aux_index++)
rib_err_string[err_index++] = image_str[aux_index];
rib_err_string[err_index] = '\0';
errmess(rib_err_string);
}
return(img_ptr);
}
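/* Illustrative sketch added in editing (not part of the original listing):
   reset_image_block() frees any buffer previously attached to the pointer,
   calloc's a new x_dim * y_dim block, and reports an allocation failure
   through errmess().  A caller therefore simply reassigns the same pointer,
   as display_proc() does; the dimensions below are hypothetical. */
#if 0 /* sketch only */
    image = reset_image_block(image, 512, 512, "input"); /* reuse pointer */
#endif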
/*---------------------------------------------------------------------------------------------------------------------------------* /
/* Read in image data from file */
int read_image(batch_flg)
/* flag to indicate whether calling program is operating in a batch mode */ u_char batch_flg;
{
int stat;
char class_name[10]; /* STD Pattern Class */
char reqnum_str [4], filnum_str[4];
if (nonstd_flag)
{
if (fp_img) /* Check whether image file is open */
{
/* Check for error in reading image */
if (fread(image, 512*512, 1, fp_img)!=1) /* Error occurred */
{
/* Issue message if operating in interactive mode */
if (!batch_flg) message(ASC_rderr_str);
/* Otherwise set error message pointer to appropriate string and
   set return status value to indicate a "data read" error */
else
{
err_str = ASC_rderr_str;
stat = DATA_RD_ERROR;
}
ok_img_file = FALSE;
}
else { /* Image data acquired successfully */ size_x = 512;
size_y = 512;
ok_ img_file = TRUE;
stat = lMG_FILE_OK;
} }
else /* Image file not open */
{
/* Issue message if operating in interactive node */
if (!batch_flg) message (FilNOpen_str);
/* Otherwise set error message pointer to appropriate string
and set return status value to indicate that file not open */ else
{
err_str = FilNOpen_ str;
stat = FIL_NOT_OPN_ERR;
}
ok_ img_ file = FALSE;
}
}
else
{
if (std_open) /* Check whether "STD" image file is open */
{
/* Check whether "std" file contains the requested pattern number */
if ( (std_ioblk[1] > std_pblk[12]) || std_ioblk[1]<1)
{ /* NO */
/* format error message string */
sprintf (reqnum_str, "%3d", std_ioblk[1]);
strncpy(wrong_num_str+13, reqnum_str, 3);
sprintf(filnum_str, "%3d", std_pblk[12]);
strncpy(wrong_num_str+63, filnum_str, 3);
/* Display error message in pop-up if working interactively */
if (!batch_flg) message (wrong_num_str);
/* Otherwise set error message pointer to string for bad pattern no. and
   set return status value to indicate pattern number out of range */
else
{
err_str = wrong_num_str;
stat = BAD_STD_PAT_NUM;
}
ok_img_file = FALSE;
}
else
{
stat = read_std(std_ioblk, std_pblk, class_name, image);
if (stat != IMG_FILE_OK)
{
/* Issue message if operating in interactive mode */
if (!batch_flg) message (STD_rderr_str);
/* Otherwise set error message pointer to appropriate string */
else err_str = STD_rderr_str;
ok_img_file = FALSE;
}
else {
size_x = std_pblk[2];
size_y = std_pblk[1];
ok_img_file = TRUE;
}
}
}
else
{
/* Issue message if operating in interactive mode */
if ( !batch_flg) message(FilNOpen_str);
/* Otherwise set error message pointer to appropriate string
and set return status value to indicate that file not open */ else
{
err_str = FilNOpen_str;
stat = FIL_NOT_OPN_ERR;
}
ok_ img_ file = FALSE;
}
}
return (stat);
}
int show_image (batch_flg)
/* Flag to indicate whether calling program is operating in a batch mode */
u_char batch_flg;
{
int retval;
int autozoom;
char zoom_str[13];
retval = read_image(batch_flg); /* Attempt to read image data from file */
if (retval == IMG_FILE_OK) /* If image successfully acquired from file */
{
clear_canvas_proc();
autozoom = min ((IMAGE_WIDTH/size_x), (IMAGE_HEIGHT/size_y));
zoom_x = zoom_y = autozoom;
put_image(); /* Send it to the appropriate pixwin */
sprintf(zoom_str, "X: %2d Y: %2d", zoom_x, zoom_y);
panel_set(zoom_item, PANEL_VALUE, zoom_str, 0);
}
reset_all_panels(); /* Display control panel defaults again */
return(retval);
}
put_image()
{
int img_indx;
/* Dynamically allocated array for compressed display image */
u_char *dsp_image=NULL;
/* Allocate memory block to store compressed display image */
dsp_image = (u_char *)calloc(size_x*size_y, sizeof(u_char));
if(dsp_image==NULL )
errmess("put_image: insufficient memory for compressed image.");
/* Use pre-established look up table to compress the image for display */
for (img_indx=0; img_indx<(size_x * size_y); ++img_indx)
dsp_image[img_indx] = (u_char) img_display_value[image[img_indx]];
/* Compute offsets to center image on visible portion of canvas */
img_x_offs = max ((IMAGE_WIDTH - (zoom_x * size_x)) / 2, 0);
img_y_offs = max ((IMAGE_HEIGHT - (zoom_y * size_y)) / 2, 0);
pw_put_image(img_pw, img_x_offs, img_y_offs, size_x, size_y, dsp_image);
/* Release memory block which holds the compressed image */
if (dsp_image != NULL ) free((char *)dsp_image);
}
pw_put_image(pw,dx,dy,ew,ns, image)
Pixwin *pw;
int dx,dy;
u_char *image;
int ns,ew;
{
struct pixrect *mem_point(), *im_pix;
int i,j,k=0;
pw_batch_on(pw);
if (zoom_x == 1 && zoom_y == 1)
{
im_pix = mem_point(ew, ns, 8, image);
pw_write(pw, dx, dy, ew, ns, PIX_SRC, im_pix, 0, 0);
}
else
{
for(j=0; j<ns; ++j)
for(i=0; i<ew; ++i, ++k)
{
pw_rop(pw, dx+i*zoom_x, dy+j*zoom_y, zoom_x, zoom_y,
PIX_SRC | PIX_COLOR(image[k]), (Pixrect *)0, 0, 0);
}
}
pw_batch_off(pw);
/* set image box */
img_box.x0 = dx;
img_box.y0 = dy;
img_box.x1 = ew;
img_box.y1 = ns;
sprintf (box_str, "x0 = %d, y0 = %d, x1 = %d, y1 = %d",
img_box.x0, img_box.y0, (img_box.x1-1), (img_box.y1-1));
panel_set(img_box_item, PANEL_VALUE, box_str, 0);
}
setup_img_menu() /* Create menu for image zoom functions */
{
img_menu = menu_create (MENU_STRINGS , "Zoom", "UnZoom", "Clear", 0, 0); }
void zoom_proc( )
{ clear_canvas_proc();
zoom_x *= 2;
zoom_y *= 2;
pυt_image();
}
void unzoom_proc()
{
clear_canvas_proc();
zoom_x /= 2;
zoom_y /= 2;
if(zoom_x<1) zoom_x=1;
if(zoom_y<1) zoom_y=1;
put_ image();
}
void img_open_proc()
{
strncpy (img_fname, (char *)panel_get_value(file_item), FNL);
img_open(0);
reset();
clear_canvas_proc();
}
void std_pnum_proc()
{
strncpy (imgnum_str, (char *)panel_get_value(num_item), 3);
sscanf (imgnum_str, "%d", std_ioblk+1);
clear_ canvas_ proc();
}
void vlt_ open_ proc()
{
strncpy(vlt_fname,(char * )panel_get_value(vlt_item), FNL); vlt_open();
}
void xy_proc (cv, event)
Canvas cv;
Event *event;
{
char string[50];
int ix, iy;
int value;
int item_number;
static int crosshair_toggle=TRUE;
switch(event_ id(event))
{
case MS_ RIGHT:
/* ------------------ */
item_number = (int)menu_show(img_menu, img_canvas, event, 0);
switch(item_number-1)
{
case 0:
zoom_proc();
break;
case 1:
unzoom_proc();
break;
case 2:
clear_canvas_proc();
break;
}
break;
case MS_MIDDLE:
/* ------------------ */
crosshair_toggle =
(int)cursor_get(img_cursor, CURSOR_SHOW_CROSSHAIRS);
if(event_is_down(event))
{
if(crosshair_toggle==TRUE)
crosshair_toggle=FALSE;
else
crosshair_toggle=TRUE;
cursor_set(img_cursor,
CURSOR_SHOW_CROSSHAIRS, crosshair_toggle, 0);
window_set(img_canvas, WIN_CURSOR, img_cursor, 0);
}
break;
case MS_LEFT:
/* ------------------ */
if (event_is_down(event))
{
if (box_flg > -1) draw_box();
img_box.x = (event_x(event) - img_x_offs) / zoom_x;
img_box.y = (event_y(event) - img_y_offs) / zoom_y;
change_box();
box_flg = (box_flg == 0) ? 1 : 0;
}
break;
default:
/* ------------------ */
img_box.x = (event_x(event) - img_x_offs) / zoom_x;
img_box.y = (event_y(event) - img_y_offs) / zoom_y;
if (box_flg == 0)
{
draw_box();
change_box();
}
value = pw_get(img_pw,
((img_box.x*zoom_x) + img_x_offs),
((img_box.y*zoom_y) + img_y_offs) );
ix = img_box.x;
iy = img_box.y;
sprintf(string,"x=%4d y=%4d value=%3d", ix, iy, value);
panel_set(csr_item, PANEL_VALUE, string, 0);
break;
} }
change_box()
{
img_box.x0 = img_box.x - ((img_box.size_x - 1) /2) - 1;
img_box.y0 = img_box.y - ((img_box.size_y - 1) /2) - 1;
img_box.x1 = img_box.x0 + img_box.size_x;
img_box.y1 = img_box.y0 + img_box.size_y;
draw_box();
sprintf(box_str, "x0 = %d, y0 = %d, x1 = %d, y1 = %d",
img_box.x0, img_box.y0, (img_box.x1-1), (img_box.y1-1));
panel_set(img_box_item, PANEL_VALUE, box_str, 0);
}
draw_box( )
{
int scr_x0, scr_y0, scr_x1, scr_y1;
scr_x0 = (zoom_x * img_box.x0) + img_x_offs;
scr_y0 = (zoom_y * img_box.y0) + img_y_offs;
scr_x1 = (zoom_x * img_box.x1) + img_x_offs;
scr_y1 = (zoom_y * img_box.y1) + img_y_offs;
pw_vector(img_pw, scr_x0, scr_y0, scr_x1, scr_y0, PIX_SRC ^ PIX_DST, 255);
pw_vector(img_pw, scr_x1, scr_y0, scr_x1, scr_y1, PIX_SRC ^ PIX_DST, 255);
pw_vector(img_pw, scr_x0, scr_y1, scr_x1, scr_y1, PIX_SRC ^ PIX_DST, 255);
pw_vector(img_pw, scr_x0, scr_y0, scr_x0, scr_y1, PIX_SRC ^ PIX_DST, 255);
}
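/* Note added in editing (illustrative, not part of the original listing):
   draw_box() paints the selection rectangle with the raster op
   PIX_SRC ^ PIX_DST.  Because XOR is its own inverse, drawing the identical
   rectangle a second time erases it and restores the underlying pixels,
   which is why xy_proc() calls draw_box() to remove the old box before
   change_box() computes and paints the new one.  A minimal sketch of the
   idiom, assuming a Pixwin *pw and endpoints x0, y0, x1 in scope: */
#if 0 /* sketch only */
    pw_vector(pw, x0, y0, x1, y0, PIX_SRC ^ PIX_DST, 255); /* draw edge */
    pw_vector(pw, x0, y0, x1, y0, PIX_SRC ^ PIX_DST, 255); /* erase same edge */
#endif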
#include <errno.h>
void cwd_proc()
{
switch( chdir((char *)panel_get_value(cwd_item)) )
{
case 0:
strncpy(cwd, (char *)panel_get_value(cwd_item), FNL);
return;
case ENOTDIR:
errmess("cwd_proe: component of path not directory");
break;
default:
panel_set(cwd_item,PANEL_VALUE,cwd,0);
break;
}
}
/* verrtool.c Printed on 18-December-1989 */
/*
error message for image viewer viewtool
*/
/* -- Includes -- */
#include <stdio.h>
#include <suntool/sunview.h>
#include <suntool/canvas.h>
#include <suntool/panel.h>
#include "viewtool.h"
/* ------------------ */
/* Error Message */
/* ------------------ */
mk_messwin()
{
/* Create new frame window to display error message */
message_frame =
window_create(base_ frame, FRAME,
FRAME_SHOW_ LABEL, TRUE, /* Show label in frame border */
FRAME_LABEL, "Error:",
WIN_X, 20,
WIN_ Y, 20,
WIN_SHOW, FALSE, 0);
/* Create a panel within the new window */
message_panel =
window_ create(message_ frame, PANEL,
PANEL_LAYOUT, PANEL_HORIZONTAL, 0);
/* Create a reply button for the user */
panel_create_ item(message_panel, PANEL_ BUTTON,
PANEL_ NOTIFY_ PROC, reset_mess_ proc,
PANEL_ LABEL_lMAGE,
panel_button_image (control_panel, "Reset", 5,0),
0);
/* Allocate screen space for message string and get its handle */
msg_ item = panel_create_item(message_ panel, PANEL_MESSAGE, 0);
}
message(s)
char *s;
{
Frame mess_frame;
panel_ set(msg_ item, PANEL_LABEL_STRING, s, 0);
window_ fit(message_panel);
window_fit(message_ frame);
window_set(message_frame,WIN_SHOW,TRUE, 0);
window_set(base_frame,WIN_ SHOW, FALSE, 0);
window_looρ(message_ frame);
}
void reset_mess_proc()
{
window_set(base_frame, WIN_SHOW, TRUE, 0);
window_set(message_frame, WIN_SHOW, FALSE, 0);
window_return(0);
}
errmess(s)
char *s;
{
printf("%s", s);
quit_ proc();
exit(0);
}
/* filter.c Printed on 18-December-1989 */
/*
filter routines for viewtool
*/
/* -- Includes -- */
#include <stdio.h>
#include <suntool/sunview.h>
#include <suntool/canvas.h>
#include <suntool/panel.h>
#include <local/STD.h>
#include "image_io.h"
#include "cellview.h"
#include "netparam.h"
/* Header file to make Long Term Memory trace information
   and ART 2 result descriptors available to outside programs */
#include "LTM.h"
/*
Subroutine "mk_img_proc_panel" modified by KG Heinemann
on 17-February-1989 to add button for histogram generator
further modifications by KG Heinemann on 25-April-1989 to add
button for RL Harvey's Neural Network Edge Detection Algorithms
*/
Panel_item EDF_item, thresh_item, roll_item;
/* Flag to indicate whether the program is operating in a batch mode */ u_char batch_flg=0;
FILE *log_chan=NULL, *fopen();
mk_img_proc_panel()
{
int item_X; /* Horizontal Coordinate of Specific Panel Items */ int item_Y; /* Vertical Coordinate of Specific Panel Items */
/* First create panel to specify Image Processing Options;
Set Default Width Equal to Width of the Control Panel */
/* Create new frame to hold the panel */
img_proc_panel =
window_create(base_frame, PANEL,
WIN_BELOW, control_ panel,
WIN_WIDTH, IMG_CANVAS_ X_ SIZE,
WIN_X, 0, 0);
/* Then install filter selection buttons */
panel_create_item(img_proc_panel, PANEL_BUTTON,
PANEL_NOTlFY_PROC, mean_filter_ proc,
PANEL_LABEL_lMAGE,
panel_button_image (control_panel, "Mean filter", 11,0), 0); panel_ create_ item(img_proc_ panel, PANEL_BUTTON,
PANEL_NOTIFY_PROC, median_ filter_proc,
PANEL_LABEL_IMAGE,
panel_button_image(control_ panel, "Median fliter" ,13,0), 0);
panel_create_item(img_proc_panel, PANEL_ BUTTON,
PANEL_NOTlFY_ PROC, set_ binary_ filter_proc,
PANEL_LABEL_lMAGE,
panel_ button_ image(control_panel, "Threshold" ,9,0), 0);
panel_create_item(img_proc_panel, PANEL_BUTTON,
PANEL_NOTIFY_PROC, make_IRP_histogram,
PANEL_LABEL_ IMAGE,
panel_button_image
(control_panel, "Generate Histogram", 18, 0),
0);
panel_ create_item(img_ proc_panel, PANEL_ BUTTON,
PANEL_NOTIFY_PROC, hst_equalize,
PANEL_LABEL_ IMAGE,
panel_button_image
(control_ panel, "Equalize Histogram", 18, 0),
0);
EDF_item =
panel_create_ item(img_proc_panel, PANEL_BUTTON,
PANEL_NOTIFY_PROC, spiral_map,
PANEL_LABEL_ IMAGE,
panel_ button_image
(control_ panel, "NN Edge Detectors", 17, 0), 0);
panel_ create_ item(img_ proc_ panel, PANEL_ BUTTON,
PANEL_ NOTIFY_ PROC, detect_ offset,
PANEL_LABEL_IMAGE,
panel_button_ image
(control_panel, "NN Offset Detectors", 19, 0),
0);
panel_ create_ item(img_proc_panel, PANEL_BUTTON,
PANEL_NOTlFY_ PROC, ITCl,
PANEL_LABEL_lMAGE,
panel_button_image
(control_panel, "ART2 Classifier", 15, 0),
0);
/* Create sliding scale lever for user input of threshold value */
thresh_ item =
panel_create_item(img_proc_panel, PANEL_SLIDER,
PANEL_LABEL_STRING, " Binary Threshold:",
PANEL_MIN_VALUE, 0,
PANEL_MAX_VALUE, VLT_SIZE-1,
PANEL_VALUE, binary_ thresh,
PANEL_NOTIFY_PROC, binary_filter_ proc,
0);
/* Create sliding scale lever for user to roll the VLT */
/* Extract Vertical Coordinate of "thresh item" */
item_Y = (int)panel_get(thresh_item, PANEL_ITEM_Y);
/* Compute Desired Vertical Coordinate for "roll_ item" */
item_ Y += 20; roll_item =
panel_create_itere(img_proc_panel, PANEL_SLIDER,
PANEL_LABEL_STRING, " Roll VLT :",
PANEL_ MIN_VALUE, 0,
PANEL_MAX_ VALUE, VLT_SIZE-1,
PANEL_ITEM_ X, 3,
PANEL_ ITEM_Y, item_ Y,
PANEL_VALUE, 0,
PANEL_NOTIFY_PROC, roll_ vlt_proc,
0);
/* Extract Horizontal Coordinate of the "NN Edge Detectors" Button */ item_X = (int)panel_get(EDF_item, PANEL_ITEM_X) - 96;
item_Y -= 10;
panel_ create_item(img_proc_panel, PANEL_ BUTTON,
PANEL_NOTIFY_PROC, IRP_ auto_proc,
PANEL_LABEL_lMAGE,
panel_bυtton_image
(control_ panel, "Full IRP Sequence", 19, 0), PANEL_ITEM_Y, item_Y,
PANEL_ITEM_X, item_X, 0);
panel_create_ item(img_proc_panel, PANEL_ BUTTON,
PANEL_NOTIFY_PROC, IRP_ batch_proc,
PANEL_LABEL_lMAGE,
panel_button_image
(control_panel, "Run from Batch File", 21, 0), PANEL_ITEM_Y, item_Y, 0);
/* Scale contents of algorithm control panel to fit in available space */ window_fit(img_ proc_panel);
}
int binary_thresh=128;
void mean_filter_proc()
{
int i,j,k;
u_char *old_image;
old_image = (u_char *)calloc(size_x*size_y, sizeof(u_char));
if (old_image==NULL)
errmess("mean_filter_proc: no room for old_image");
for(k=0; k<size_x*size_y; ++k)
old_image[k]=image[k];
pw_put_image (img_pw, 0, 0, size_x, size_y, old_image);
for(i=1; i<size_x-1; ++i)
for(j=1; j<size_y-1; ++j)
image[size_x*i+j] = 0.2*(old_image[size_x*i+j]+
old_image[size_x*i+j+1]+
old_image[size_x*i+j-1]+
old_image[size_x*(i+1)+j]+
old_image[size_x*(i-1)+j]);
free((char *)old_image);
put_image();
}
void median_filter_proc()
{
int i,j,k;
u_char *old_image;
old_image = (u_char *)calloc(size_x*size_y, sizeof(u_char));
if (old_image==NULL)
errmess("median_filter_proc: no room for old_image");
for(k=0; k<size_x*size_y; ++k)
old_image[k]=image[k];
pw_put_image (img_pw,0,0,size_x,size_y,old_image);
for(i=1; i<size_x-1; ++i)
for(j=1; j<size_y-1; ++j)
image[size_x*i+j] = median(5,
(int)old_image[size_x*i+j], (int)old_image[size_x*i+j+1],
(int)old_image[size_x*i+j-1], (int)old_image[size_x*(i+1)+j],
(int)old_image[size_x*(i-1)+j]);
free((char *)old_image);
put_image();
}
void binary_filter_proc()
{
int k;
u_char *t_image;
binary_thresh = (int)panel_get_value (thresh_item);
t_image = (u_char *)calloc(size_x*size_y, sizeof (u_char));
for(k=0; k<size_x*size_y; ++k)
{
if(image[k]>binary_thresh)
t_image[k]=VLT_SIZE-1;
else
t_image[k]=0;
}
pw_put_image(img_pw,0,0, size_x, size_y, t_image);
free((char *)t_image);
}
void set_binary_filter_proc()
{
int k;
binary_thresh = (int)panel_get_value(thresh_item);
for (loop_index=V2_SPECTRUM_OFFSET; loop_index<TOT_SPECTRUM_SIZE;
++loop_index)
if (frwd_feature[loop_index] < bias_val)
bias_val = frwd_feature[loop_index];
for (loop_index=V2_SPECTRUM_OFFSET; loop_index<TOT_SPECTRUM_SIZE;
++loop_index)
frwd_feature[loop_index] -= bias_val;
/* Apply designated weighting factor to the V2 signals */
for (loop_index=V2_SPECTRUM_OFFSET; loop_index<EDF_SPECTRUM_OFFSET;
++loop_index)
frwd_feature[loop_index] *= V2_wt;
run_ART(TOT_SPECTRUM_SIZE, frwd_feature, &log_class_info, log_chan);
}
/* Routine to run full sequence of IRP algorithms without user intervention */
int IRP_auto_proc()
{
int retval;
if ((retval = display_proc()) == IMG_FILE_OK)
{
make_IRP_histogram();
hst_equalize();
make_IRP_histogram();
spiral_map();
detect_offset();
if (!valid_V2_flag) retval = V2_CALC_ERR;
else ITCl();
}
return (retval);
}
/* Routine to run full sequence of IRP_algorithms on
a series of images specified in a "sequence" control file */
#include <errno.h>
void IRP_batch_proc()
{
int stat, init_seq_pat(), loop_ index, last_delim_ptr, fn_str_len;
char *fil_path, *cwd_sav_str, *log_path, *batch_err_str;
/* Check whether sequence control information has been acquired successfully */ if (seq_fname[0] == '\0') /* NO! */
message("Sequence control file not specified properly.\n");
else /* Sequence control information OK */
/* Translate sequence control information into a series of image requests and check for errors */
stat = init_seq_pat(batch_ seq);
if (stat<0) message ("Error interpreting sequence information.\n");
else /* Successful translation of sequence control information */ { /* Construct name of file to receive log of processing results */
/* Allocate memory to hold log file path name */
log_path = (char * )calloc(2*FNL, sizeof (char));
/* Set "directory" component of the log file path name
to store results information in the same place as
the batch sequence information */
strcpy(l og_path, seq_cwd);
strcat(log_path, "/" ); /* Add final slash delimiter */
/* Parse name of batch sequence source file
into "file name" and "file type" components */
/* Determine whether a file type is specified by searching
the batch sequence file string for its last period character. If the period character is found, append all the preceding characters (the file name portion) onto the log file path name. Otherwise treat the entire sequence file string as a
file name and append that onto the log file path name. */ if ( (last_delim_ptr = strrchr(seq_fname, '.')) != NULL)
/* Compute relative position of the last period character by
   subtracting the absolute address of the batch sequence file string */
fn_str_len = last_delim_ptr - (int)seq_fname;
else fn_str_len = strlen(seq_fname);
strncat (log_path, seq_fname, fn_str_len); /* Copy file name */
/* Add period delimiter and "log" file type suffix */
strcat (log_path, ".log");
log_chan = fopen(log_ path, "w"); /* open the actual log file */ if (log_chan == NULL)
{
fn_str_len = 56 + strlen(log_path);
batch_err_str = (char *)calloc(fn_str_len, sizeof (char));
strcpy(batch_err_str, "Problem opening file \"");
strcat(batch_err_str, log_path);
strcat(batch_err_str, "\" for storage of batch result log.");
message (batch_err_str);
free ((char *)batch_ err_ str);
}
else
{
/* Save current path name for the image working directory */
cwd_sav_str = (char *)calloc(FNL, sizeof (char));
strcpy(cwd_sav_str, cwd);
/* Allocate memory to image file path name */
fil_path = (char *)calloc(FNAMLEN, sizeof (char));
/* Set flag to indicate that the program
is operating in a batch mode */
batch_flg = 1; /* Loop over the specified number of images */
for (loop_ index=0; loop_index<batch_ seq->num_ entries;
++loop index)
{
/* Clear image display canvas to avoid confusion between visible scene and current pattern specification */ clear_canvas_proc();
/* Write sequence entry number to the batch results log file */
fprintf (log_chan, "#d - ", (loop_index+1));
/* Extract image file path name and pattern number
from the sequence control information */
get_seq_info(batch_seq, loop_index, fil_path, std_ioblk+1);
/* Parse image file path name into "directory" and "file name" components */
/* Determine whether a directory is specified by searching
the image file path name for its last forward slash character.
If the forward slash character is found, copy the directory
specification into the image "current working directory" string.
Otherwise, leave the "cwd" string unchanged and use the same
directory that was specified for the last image. */ if ((last_delim_ ptr = strrchr(fil_ path, '/')) != NULL)
{
/* Compute relative position of last forward slash character
by subtracting the absolute address of the file path name */
fn_str_len = last_delim_ptr - (int)fil_path;
strncpy (cwd, fil_path, fn_str_len); /* Copy */
/* Increment "last_delim_ptr" so that it points to beginning
of the file name component of the image file path naat */
++last_delim_ptr;
}
else /* No explicit directory specification */
/* In order to facilitate proper handling of the image file name,
set "last_delim_ptr" so that it points to beginning
of the image file path name. */ last_delim_ptr = (int)fil_path;
/* Change "current working directory" to current specifications,
check for success or failure, and take appropriate action. */ switch(chdir(cwd))
{
/* Successful change of the "cwd " - verify existence and format of the image file */
case 0:
panel_set(cwd_item, PANEL_VALUE, cwd, 0);
/* Write current image file directory
   to the batch results log file */
fprintf(log_chan, "%s", cwd);
/* Extract file name component from image file path name */
strcpy(img_fname, (char *)last_delim_ptr);
/* Write current image file name to the batch results log file */
fprintf (log_chan, "/%s", img_fname);
/* Verify that image file exists and has correct format */
stat = img_open(1);
if (stat>0) /* Image file is OK */
{
/* Show file name on control panel */
panel_set(file_item, PANEL_VALUE, img_fname, 0);
/* Format pattern number for display on control panel */
sprintf (imgnum_str, "%3d", std_ioblk[1]);
panel_set(num_item, PANEL_VALUE, imgnum_str, 0);
/* Write current "std" pattern number
to the batch results log file */
fprintf (log_chan, " image no. %d", std_ioblk[1]);
/* Execute image processing algorithm suite */
stat = IRP_auto_ proc();
if (stat == IMG_FILE_OK)
{
fprintf (log_chan,
"\n Pattern mapped to node %d ", log_class_info. cat_node);
fρrintf(log_chan, "and learned in %d passes;", log_ class_ info.nυm_ pass);
fprintf (log_chan, " R = %6.4f.",
log_class_info.R_value);
}
else
fprintf (log_chan, "\n %s" , err_str);
}
else /* Problem accessing current image file */
{
fprintf (log_chan, "\n %s", err_str);
free ((char *)err_str);
}
break;
case ENOTDIR:
fprintf(log_ chan,
" Some component of path %s is not a directory.", cwd);
break;
default:
fprintf(log_ chan,
" Unable to access directory %s.", cwd);
break;
}
/* Advance to next line in the batch results log file */
fprintf (log_chan, "\n");
/* Flush all batch result log information into file */
fflush(log_chan);
} /* End of loop over images */
/* Release memory used to hold image file path name */
free((char *)fil_path);
/* Restore image working directory path name that was active
   before processing files from the sequence control file */
strcpy(cwd, cwd_sav_str);
free((char *)cwd_sav_str);
switch(chdir(cwd))
{
case 0:
panel_set(cwd_item, PANEL_VALUE, cwd, 0);
break;
default:
batch_err_str = (char *) calloc(48+strlen(cwd), sizeof(char)); strcpy(batch_err_str,
"Problem restoring original working directory \""); strcat(batch_err_str, cwd);
strcat(batch_err_str, "\".");
message (batch_err_str);
free ((char *)batch_err_str);
break;
}
}
fclose(log_chan); /* Close batch results log file */
}
log_chan = NULL; /* Indicate batch results log file unspecified */
}
}
/* median.c Printed on 18-December-1989 */
#include <stdio.h>
#include <varargs.h>
int
median(va_alist)
va_dcl
{
va_ list ap;
int i,n,med;
int *pix;
int compar();
va_start(ap);
n = va_arg(ap,int);
pix = (int * )calloc(n, sizeof (int));
if (pix==NULL)
errmess("median: cant allocate pix[]");
for(i=0; i<n; ++i)
pix[i] = va_arg(ap,int);
va_end(ap);
qsort( (char *)pix,n,sizeof (int),compar);
if(n%2==0)
med = (pix[n/2]+pix[n/2+1])/2;
else
med = pix[n/2];
return(med);
}
compar (px,py)
int *px,*py;
{
return(*px - *py);
}
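The varargs helper above is called by median_filter_proc with a count
followed by that many integer samples (the five pixels of the "plus shaped"
neighborhood). A minimal usage sketch, with illustrative literal values
that are not taken from the original program:

    /* sketch only: median of five samples; the sorted middle value is 7 */
    int m = median(5, 9, 3, 7, 12, 5);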
/* lRP_histogram.c Printed on 18-December 1989 */
#include <stdio.h>
#include <suntool/sunview.h>
# include <suntool/canvas.h>
#include <suntool/panel.h>
#include "image_ io.h"
#include "cellviev.h"
/* Define external image histogram array */
int IRP_hist_data[VLT_SIZE];
int hstmax; /* Maximum count in the intensity histogram */
int hmax_loc; /* Intensity value corresponding to maximum count */
/* Flag to Indicate whether Histogram Array Contents
are Valid for the Current Image */
u_char validhst_flag = 0;
set_hist_vlt()
/* Set up video look up table for display of histogram plot */
{
int i;
/* Get ID name of base frame's color map segment */
pw_getcmsname(bas_pw, bas_cmsname);
/* Get description of base frame's color map segment using its ID name */
pw_getcmsdata(bas_pw, &bas_cms, &i);
/* Extract base frame color map entries and store them in color map arrays
   at positions corresponding to the segment's absolute location
   in the global color map table */
pw_getcolormap(bas_pw, 0, bas_cms.cms_size,
red + bas_cms.cms_addr,
green + bas_cms.cms_addr,
blue + bas_cms.cms_addr);
/* Expand Base Frame's Color Map by Two Entries
   in Order to Accommodate More Colors */
/* Slide existing background color specification
   to beginning of the enlarged colormap segment */
i = bas_cms.cms_addr - bas_cms.cms_size;
red[i] = red[bas_cms.cms_addr];
green[i] = green[bas_cms.cms_addr];
blue[i] = blue[bas_cms.cms_addr];
red[bas_cms.cms_addr] = green[bas_cms.cms_addr] = blue[bas_cms.cms_addr] = 0;
/* Destroy previous base frame colormap segment information */
pw_setcmsname (bas_ pw, bas_cmsname);
pw_putcolormap(bas_pw, 0, 0, red, green, blue);
/* Reload Enlarged Base Frame Color Map Segment */
pw_putcolormap(bas_pw, 0, 2*bas_cms.cms_size, red+i, green+i, blue+i);
/* Acquire new descriptors for modified base frame color map segment */
pw_getcmsdata(bas_pw, &bas_cms, &i);
/* Assign base frame color map segment to the histogram display canvas */
pw_setcmsname(hst_pw, bas_cmsname);
/* Reload original base frame color map values
into the histogram display canvas segment */
pw_putcolormap(hst_pw, 0, bas_cms.cms_size,
red + bas_cms.cms_addr,
green + bas_cms.cms_addr,
blue + bas_cms.cms_addr);
}
void make_IRP_histogram()
{
int i, j, hst_img_base_ptr, hst_img_offset, new_pix, last_y, nfill,
fill_start, fill_end;
float scale_fac;
/* Generate and Plot new Histogram Only if Current Information is Not Valid */
if (!validhst_flag)
{
/* Clear array for storing image histogram data */
for(i=0; i<VLT_SIZE; ++i)
IRP_hist_data[i]=0;
/* Compile histogram of image */
for(i=0; i<(size_x*size_y); ++i)
IRP_hist_data[image[i]]++;
/* Set Flag to Indicate that Histogram Information is Now Valid */
validhst_flag = 1;
/* Find maximum histogram count */
hstmax = 0;
for(i=0; i<VLT_SIZE; i++)
if(lRP_hist_data[i]>hstmax)
{
hstmax=IRP_hist_data[i];
hmax_ loc = i;
}
scale_fac = (float)HST_HEIGHT / (float)hstmax;
/* Clear histogram plot image */
for(i=0; i<N_HST_DISPLAY_PIXELS; i++)
hst_image[i]=0;
/* Generate new histogram plot image */
hst_ img base_ptr = PLOT_BORDER_ WlDTH*(hist_height+1);
last_ y =0;
for (i=0; i<VLT_SIZE; i+ +)
{
/* Compute current display image position and fill it in */
hst_img_offset =
HST_HEIGHT - (int)(scale_fac * (float)IRP_hist_data[i]);
hst_image[(new_pix=hst_img_base_ptr+hst_img_offset)]=3;
/* Fill in current column from current y-value to the preceding one */
nfill = (int)(scale_fac * (float)(last_y - IRP_hist_data[i]));
if(nfill)
{
fill_start = min(new_pix, (new_pix - nfill));
fill_end = max(new_pix, (new_pix - nfill));
for(j=fill_start; j<fill_end; j++)
hst_ image[j]=3;
}
last_ y = IRP_hist_data[i];
hst_img_base_ptr+=hist_height;
}
hst_ img_offset=0;
pw_batch_on(hst_pw);
for(i=0; i<hist_ width; i++)
{
for (hst_img_base_ptr=24; hst_ img_base_ptr <HST_WIN_HEIGHT;
++hst_img_base_ptr)
{
pw_rop(hst_pw, i, hst_img_base_ptr, 1, 1,
PIX_SRC | PIX_COLOR(hst_image[hst_img_offset]),
(Pixrect *)0, 0, 0);
++hst_ img_offset;
}
}
pw_batch_off(hst_pw);
}
}
void hst_equalize()
/* Routine to Enhance Image Contrast by Altering Pixel Values
to Produce a Linear Stretching of the Global Histogram
so that it Fills the Entire Dynamic Range of Intensity Values (0 - 255) */
/* Coded by KG Heinemann between 14-July-1989 and 19-July-1989 */
{
int clip_thresh, cum_sum, lowerlim, upperlim, win_row, win_col,
first_row, first_col, last_row, last_col, icol, irow, index;
float scale_fac, denom;
float *new_image=NULL;
/* Determine whether a valid histogram is available by checking the value
   of "validhst_flag"; if not, issue an error message and exit from routine */
if (validhst_flag)
{
/* Define lower limit of the intensity interval to be stretched
   by finding the closest point left of the histogram peak where
   the histogram count falls to 1% of the peak value */
clip_thresh = (int)(0.01 * (float)hstmax);
for (index=hmax_loc; (index>-1) && (IRP_hist_data[index]>clip_thresh); index--)
lowerlim = index;
/* Define upper limit of the intensity interval to be stretched
   by finding the point where the cumulative histogram count
   first exceeds 99.25% of the total image pixels */
clip_thresh = (int)(0.0075 * (float)(size_x * size_y));
for (upperlim=(VLT_SIZE-1), cum_sum=0;
(upperlim>-1) && (cum_sum<clip_thresh);
upperlim--)
cum_sum += IRP_hist_data[upperlim];
++upperlim;
/* Compute scale factor for linear stretching of intensity values */
scale_fac = (float)VLT_SIZE / (float)(upperlim - lowerlim + 1);
if (scale_fac>1.0)
{
/* Allocate temporary memory block for storing modified image */ new_image = (float * )calloc(size_x * size_y, sizeof (float));
/* Compute sum of intensity values in a 3 x 3 window
centered on the pixel which is to be modified,
clipping regions window which fall outside image borders */ for (win_ row=first_ row=0; win_ row<size_ y; win_row++)
{
last_ row = min (win_row+2, size_y);
for (win_col=first_col=0; win_ col<size_ x; win_col++)
{
last_col = min (win_col+2, size_x);
cum_sum = 0.0;
denom = 0.0;
for (irow=first_row; irow<last_row; irow++)
{
index = (irow * size_x) + first_col;
for (icol=first_col; icol<last_col; icol++)
{
cum_sum += image[index++];
denom += 1.0;
}
}
/* Array index of pixel which is to be modified */
index = (win_row * size_x) + win_col;
/* Compute mean intensity for the 3 x 3 window and combine
   results with the original intensity value in a 1:3 ratio */
new_image[index] = (float)cum_sum / denom;
new_image[index] += (float)(3 * image[index]);
new_image[index] = new_image[index] / 4.0;
first_ col = win_col;
}
first_row = win_ row;
}
/* Calculate values for new image by applying histogram based
   stretching to the "modified" intensity values */
for (index=0; index<(size_x*size_y); index++)
if (new_image[index]<lowerlim) image[index] = 0;
else if (new_image[index]>upperlim) image[index] = (VLT_SIZE-1);
else image[index] =
(int)((scale_fac * (new_image[index] - (float)lowerlim)) + 0.5);
free ((float *)new_image); /* Release temporary image memory */
new_image = NULL;
}
/* Clear away the old image and display the equalized one */
clear_canvas_proc();
put_image(); /* Send it to the appropriate pixwin */
}
else
message ("Histogram for current image has not been created.");
}
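The contrast-enhancement routine above maps each smoothed pixel value
through a linear stretch of the interval [lowerlim, upperlim] onto the full
range of display intensities. A minimal sketch of that mapping, using
hypothetical variable names rather than the program's own:

    /* sketch only: linear histogram stretch as applied by hst_equalize */
    float scale = (float)VLT_SIZE / (float)(upper - lower + 1);
    int   out;
    if      (v < lower) out = 0;
    else if (v > upper) out = VLT_SIZE - 1;
    else                out = (int)(scale * (v - lower) + 0.5);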
/* IRP_ edge_detector.c Printed on 18-December-1989 */
/* Code to implement neural network oriented edge detectors (V1) for IRP */
#include <stdio.h>
#include <math.h>
#include <suntool/sunview.h>
#include <suntool/canvas.h>
#include <suntool/panel.h>
#include "image_ io.h"
#include "cellview.h"
#include "netparam.h"
/*--------------------------------------------------------------------------------------------------------------------*/
/* Code to set up video look up table for display of edge detector results */
set_edf_vlt()
/*
Created on 25-April-1989 by KG Heinemann
This subroutine uses the same colormap segment that was assigned
to the base frame and the histogram display canvas in subroutine
"set_hist_vlt". It assumes that actions in that part of the code
already have been performed, so it will not work properly if that
subroutine has not been called previously.
*/
{
/* Assign base frame color map segment to the canvas which
has been designated for display of edge detector results */
pw_setcmsname(edf_pw, bas_cmsname);
/* Reload original base frame color map values
into the histogram display canvas segment */
pw_putcolormap(edf_pw, 0, bas_cms.cms_size,
red + bas_cms.cms_addr,
green + bas_cms.cms_addr,
blue + bas_cms.cms_addr);
}
/*--------------------------------------------------------------------------------------------------------------------*/
/* Flag to Indicate whether Edge Detector Spectrum
   (V1) Values are Valid for the Current Image */
u_char valid_V1_flag = 0;
/*--------------------------------------------------------------------------------------------------------------------*/
/* */
/* Connection weights for detection of hor i zontal and vertical edges */
/* */
/*------------------------------------------------------------------------------------------------------ -------*/
/* Strengths for connections between the hidden units */
float horzvert_A_wts [N_HIDDEN_NEURONS * N_HIDDEN_NEURONS];
/* Strengths for connections between input units and hidden units */
float
horzvert_B_wts [INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_HIDDEN_NEURONS];
/* Strengths for connections between hidden units and output units */ float horzvert_C_wts [N_HIDDEN_NEURONS * N_OUTPUTS];
/* Strengths for connections between input units and output units */ float horzvert_D_wts [INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_OUTPUTS];
/*---------------------------------------------------------------------------------------*/
/* */
/* Connection weights for detection of diagonal edges */
/*---------------------------------------------------------------------------------------*/
/* Strengths for connections between the hidden units */
float diagonal_A_wts [N_HIDDEN_NEURONS * N_HIDDEN_NEURONS];
/* Strengths for connections between input units and hidden units */ float
diagonal_B_wts [INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_HIDDEN_NEURONS];
/* Strengths for connections between hidden units and output units */ float diagonal_C_wts [N_HIDDEN_NEURONS * N_OUTPUTS];
/* Strengths for connections between input unite and output units */ float diagonal_D_wts [INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_OUTPUTS];
/* Rectified signals transmitted by the input neurons */
ACTIVATION_DATA_TYPE
input_neuron_signal[INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE];
/* Activation levels for the hidden neurons */
ACTlVATION_DATA_TYPE hidden_neuron_signal [N_HIDDEN_NEURONS];
/* Activation levels for the output neurons */
ACTIVATION_DATA_TYPE output_neuron_signal [N_OUTPUTS];
/* Array to store component of neuron activation caused by input stimulus */ ACTIVATlON_DATA_TYPE B_or_D_fi [MAX_NEURONS];
/* Array to store previous activation levels for iterative computation */ ACTIVATION_DATA_TYPE previous_activation[MAX_NEURONS];
/* Array to store rectified input signals */
ACTIVATION_DATA_TYPE sigrect [MAX_NEURONS];
/* Array to Store Collected Input Signals for the ART2 classifier */
ACTIVATION_DATA_TYPE frwd_feature [TOT_SPECTRUM_SIZE];
/* Temporary Storage for Use While Re-Ordering the Input Window Array */ ACTIVATlON_DATA_TYPE scratch_pixel;
/* Number of image pixels to skip when aovlng from the end of one row
in the input window to the beginning of the next one */ int row_skip_increaent;
/* Offset from start of input window to begin calculation
of input signals for the orthogonal orientations */ int offset_for_orthogonal_orientations;
/* Index to Indicate Next Unused Location in the 'Edge Featυre" Array
and Cumulative Entry Counter for that Array */
int ftr_counter;
/*--------------------------------------------------------------------------------------------------------------------*/
/* */
/* Utility routine to read in pre-established connection strength valuess */
/* from file on disk */
/* */
/*--------------------------------------------------------------------------------------------------------------------*/
/* Created by KG Heinemann on 08-May-1989 */
char *edf_matrix_err_str =
"\nError reading coefficients for ",
*emestr_suffix = "matrix!\n\n",
*upright_str = "upright",
*diagonal_str = "diagonal",
*space_str = " ",
*A_str="A ", *B_str="B ", *C_str="C", *D_str="D";
void get_edf_matrlx_elements()
{
FlLE *matrix_data_ file, *fopen();
int num_read, err_index, aux_index, type_index;
/* Attempt to open matrix coefficient data file */
matrix_data_file = fopen("gray_edf_matrices.bin", "r");
if(matrix_data_file== NULL)
errmess("\nError opening edge detector matrix coefficient filel\n\n");
/* Prepare error message string for the horizontal and vertical orientations */
err_ index=31;
for(aux_index=0; upright_str[aux_index] 1='\0' ; aux_ index++)
edf_matrix_err_ str[err_index++] = upright_str[aux_ index];
edf_ matrix_err_str[err_index++] = space_ str[0];
type_index = err_index;
for(aux_index=0; aux_ index<2; aux_ index++)
edf_matrix_err_ str[err_index++] = space_ str[0];
for(aux_index=0; eaestr_suffix[aux_index]1=*\0'; aux_ index++)
edf_ matrix_err_str[err_ index++] = emestr_ sufflx[aux_ index];
edf_matrix_err_str[err_index] = '\0';
num_ read = fread(horzvert_A_wts, sizeof(float),
N_HIDDEN_NEURONS * N_HIDDEN_NEURONS, matrix_data_file);
if(num_read != N_HlDDEN_NEURONS * N_HlDDEN_NEURONS)
{
edf_matrix_ err_str[type_index] = A_str[0];
errmess(edf_matrix_err_str);
}
num_ read = fread(horzvert_B_wts, sizeof(float),
INPUT_WlHDOW_SIZE * INPUT_WINDOW_ SIZE * N_HIDDEN_NEURONS,
matrix_data_file);
if(num_read != INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_HIDDEN_NEURONS)
{
edf_matrix_err_str[type_index] = B_str[0];
errmess(edf_matrix_err_str);
}
num_read = fread (horzvert_C_wts, sizeof(float),
N_HIDDEN_NEURONS * N_OUTPUTS, matrix_data_file);
if (num_read != N_HIDDEN_NEURONS * N_OUTPUTS)
{
edf_matrix_err_str[type_index] = C_str[0];
errmess(edf_matrix_err_str);
}
num_read = fread(horzvert_D_wts, sizeof (float),
INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_OUTPUTS, matrix_data_file);
if(num_read != INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_OUTPUTS)
{
edf_matrix_err_str[type_index] = D_str[0];
errmess(edf_matrix_err_str);
}
/* Prepare error message string for the diagonal orientations */ err_index=31;
for(aux_index=0; diagonal_str[aux_index]!='\0'; aux_index++)
edf_matrix_err_str[err_index++) = diagonal_str[aux_index);
edf_ matrix_err_ str[err _index++] = space_str [0];
type_index = err_ index;
for(aux_index=0; aux_index<2; aux_index++)
edf_matrix_err_str[err_ index++] = space_ str[0];
for(aux_index=0; emestr_suffix[aux_index]1='\0'; aux_index++)
edf_matrix_err_ str[err_index++] = emestr_ suffix[aux_index];
edf_matrix_err_str[err_index] = '\0';
num_ read = fread(diagonal_A_wts, sizeof (float),
N_HIDDEN_NEURONS * N_HIDDEN_NEURONS, matrix_data_ file); lf(num_read != N_HTDDEN_NEURONS * N_HlDDEN_NEURONS)
{
edf_ matrix_ err_str[type_index) = A_ str[0];
errmess(edf_ malrix_err_str);
}
num_ read = fread(diagonal_B_ wts, sizeof(float),
INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_HIDDEN_NEURONS, matrix_data_file);
if(num_read != INPUT_WINDOW_SlZE * INPUT_WINDOW_SIZE * N_HIDDEN_NEURONS) {
edf_matrix_err_str[type_ index] = B_str[0];
errmess(edf_matrix_err_str);
}
num_read = fread(diagonal_C_wts, sizeof(float),
N_HIDDE_ NEURONS * N_OUTPUTS, matrix_data_ file);
if(num_read != N_HTDDEN_NEURONS * N_OUTPUTS)
{
edf_aatrix_err_str[type_index] = C_ str[0];
errmess(edf_matrix_err_str);
}
num_read = fread(diagonal_D_wts, sizeof (float),
INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_OUTPUTS, matrix_ data_file); if(num_read != INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE * N_OUTPUTS)
{
edf_matrix_err_ str[type_index] = D_str[0];
errmess(edf_ matrix_ err_str);
}
fclose(matrix_data_ file);
}
/*--------------------------------------------------------------------------------------------------------------------*/
/* */
/* Function to run neural network edge detectors and store resulting signals */
/* */
/* Modified by KG Heinemann between 29-June and 05-July-1989 */
/* To implement revisions needed for use with gray scale imagery: */
/* */
/* The previous neural network structure and calculation procedure */
/* are retained, but the individual detectors have been made sensi- */
/* tive to the actual DIRECTION OF THE INTENSITY GRADIENT, and not */
/* just the "unsigned" orientation. As a result of this change, */
/* each filter generates a strong response only when the edge has */
/* a particular orientation AND ONE SPECIFIC SIDE IS BRIGHTER THAN */
/* THE OTHER. For example, a horizontal edge will produce a large */
/* signal when the upper region is brighter than the lower one, */
/* but NOT in the reverse situation. However, the original design */
/* called for algorithms which would detect edges at the specified */
/* orientation without regard to the "direction" of the intensity */
/* change. The desired behavior is recovered by applying two */
/* complementary filters for each orientation: one which responds */
/* strongly when intensity increases across the edge, and another */
/* that is tuned to that same edge with decreasing intensity. The */
/* stronger of these two signals then serves to represent the actual */
/* edge strength. The complementary filters differ from one another */
/* only by a 180 degree rotation, because this operation is equiv- */
/* alent to reversing the gradient direction. Hence, if one filter */
/* is specified by a given set of connection matrices, we can obtain */
/* the complementary filter by rotating those matrices through 180 */
/* degrees. In the present implementation, we accomplish the same */
/* effect by rotating the input window through 180 degrees. This */
/* operation is accomplished quite easily by simply reversing the */
/* order of pixels in the input vector. */
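/* Illustrative sketch (not part of the original listing): the complementary-
   filter idea described above can be exercised with any single "signed"
   edge filter.  The helper names below (toy_filter, edge_strength) are
   hypothetical stand-ins for one trained detector; the point of the sketch
   is only the reversal of the pixel order followed by keeping the larger
   of the two responses. */
static float toy_filter(w, n)
float *w;
int n;
{
    /* Crude stand-in for one trained filter: positive response when the
       first half of the window is brighter than the second half. */
    float s = 0.0;
    int i;
    for (i = 0; i < n; i++)
        s += (i < n/2) ? w[i] : -w[i];
    return s;
}
static float edge_strength(w, n)
float *w;
int n;
{
    float buf[64];            /* sketch assumes n <= 64 */
    float fwd, rev;
    int i;
    for (i = 0; i < n; i++)
        buf[i] = w[n - 1 - i];    /* 180 degree rotation of the window */
    fwd = toy_filter(w, n);
    rev = toy_filter(buf, n);
    return (fwd > rev) ? fwd : rev;
}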
/*
/*----------------------------------------------------------------------------------------------------------------------*/
void detect_edges (ix, iy)
/* Image coordinates for upper left corner of input window */
int ix, iy;
{
    int win_row, win_col, aux_index;
    u_char *first_img_pixel, *current_img_pixel;
/* DEBUG */
printf("Detect edges called at column %d, row %d.\n", ix, iy);
/*---------------------------------------------------------------------------------------*/
/* Edge Detection Calculations for the Direct Orientations */
/*---------------------------------------------------------------------------------------*/
/* Initialization */
/* Calculate signals transmitted from input neurons for direct orientations
by normalizing the raw image pixels and applying the sigmoid function */
/* Compute actual address of first input pixel
and initialize pixel pointing index */
current_img_pixel = first_img_pixel = image + ((iy*size_x) + ix);
/* Initialize Index for accessing the array of input neuron signals */
input_index = 0;
for(win_row=0; win_row<INPUT_WINDOW_SIZE; win_row++)
{
    for(win_col=0; win_col<INPUT_WINDOW_SIZE; win_col++)
        /* Use simple normalization to emulate rectification
           by the saturated ramp sigmoid function */
        input_neuron_signal[input_index++] =
            (float)*current_img_pixel++ / (float)(VLT_SIZE-1);
    /* Skip to beginning of input window's next row */
    current_img_pixel += row_skip_increment;
}
/*---------------------------------------------------------------------------------------*/
/* Horizontal/Vertical Edge Detection with Original Filter */
/*---------------------------------------------------------------------------------------*/
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_HIDDEN_NEURONS,
     horzvert_A_wts, horzvert_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product(INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_OUTPUTS,
                      horzvert_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS; input_index++)
    sigrect[input_index] = step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, horzvert_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] += B_or_D_fi[input_index];
    frwd_feature[ftr_counter++] = output_neuron_signal[input_index];
}
/*-------------------------------------------------------------------------------------------------*/
/* Diagonal Edge Detection with Original Filter */
/*-------------------------------------------------------------------------------------------------*/
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_HIDDEN_NEURONS,
     diagonal_A_wts, diagonal_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product(INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_OUTPUTS,
                      diagonal_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS; input_index++)
    sigrect[input_index] = step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, diagonal_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] += B_or_D_fi[input_index];
    frwd_feature[ftr_counter++] = output_neuron_signal[input_index];
}
/*---------------------------------------------------------------------------------------*/
/* Reverse Order of Input Pixels for the Complementary Filters */
/*---------------------------------------------------------------------------------------*/
for(input_index=0, aux_index=(INPUT_WINDOW_SIZE*INPUT_WINDOW_SIZE)-1;
    input_index < ((INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE)+1)/2;
    input_index++, aux_index--)
{
    scratch_pixel = input_neuron_signal[input_index];
    input_neuron_signal[input_index] = input_neuron_signal[aux_index];
    input_neuron_signal[aux_index] = scratch_pixel;
}
/* Set pointer for Edge Detector Feature Spectrum Back to
   First Location for the Present Set of Direct Orientations */
ftr_counter -= (2 * N_OUTPUTS);
/*---------------------------------------------------------------------------------------*/
/* Horizontal/Vertical Edge Detection with Complementary Filter */
/*---------------------------------------------------------------------------------------*/
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_HIDDEN_NEURONS,
     horzvert_A_wts, horzvert_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product(INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_OUTPUTS,
                      horzvert_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS; input_index++)
    sigrect[input_index] = step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, horzvert_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] += B_or_D_fi[input_index];
    frwd_feature[ftr_counter] =
        max(frwd_feature[ftr_counter], output_neuron_signal[input_index]);
    ++ftr_counter;
}
/*---------------------------------------------------------------------------------------*/
/* Diagonal Edge Detection with Complementary Filter */
/*---------------------------------------------------------------------------------------*/
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_HIDDEN_NEURONS,
     diagonal_A_wts, diagonal_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product(INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE, N_OUTPUTS,
                      diagonal_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS; input_index++)
    sigrect[input_index] = step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, diagonal_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] += B_or_D_fi[input_index];
    frwd_feature[ftr_counter] =
        max(frwd_feature[ftr_counter], output_neuron_signal[input_index]);
    ++ftr_counter;
}
}
void display_edf_results()
{
    /* Image Coordinates Corresponding to
       First Column and Row of Input Windows
       for a Given Cycle of the Scan Sequence */
    int first_x, first_y,
    /* Image Coordinate Corresponding to
       Last Column of Input Windows
       for a Given Cycle of the Scan Sequence */
        last_x,
    /* Image Coordinate Corresponding to
       Last Row of Input Windows
       for a Given Cycle of the Scan Sequence */
        last_y,
    /* Horizontal and Vertical Image Coordinates
       Corresponding to Active Position in Scan */
        ix_index, iy_index;
FILE *specdata_file, *fopen();
    /* Generate and Plot new Edge Detector Spectrum
       Only if the Current Information is Not Valid */
    if (!valid_V1_flag)
    {
        /* Initialize count of individual edge detector features */
        ftr_counter = EDF_SPECTRUM_OFFSET;
        /* Initialize parameters for transferring input data from
           the actual image to an input window array */
        row_skip_increment = max(0, (size_x - INPUT_WINDOW_SIZE));
        offset_for_orthogonal_orientations = size_x * (INPUT_WINDOW_SIZE-1);
        /* Set Limits for Initial Cycle of the Scan Sequence */
        first_x = first_y = 0;
        last_x = INPUT_WINDOW_SIZE * ((size_x / INPUT_WINDOW_SIZE) - 1);
        last_y = INPUT_WINDOW_SIZE * ((size_y / INPUT_WINDOW_SIZE) - 1);
/* Loop over separate cycles of the spiral scan pattern */
        while (first_x <= last_x && first_y <= last_y)
        {
            for (ix_index=first_x; ix_index<=last_x;
                 ix_index += INPUT_WINDOW_SIZE)
                detect_edges (ix_index, first_y);
            first_y += INPUT_WINDOW_SIZE;
            for (iy_index=first_y; iy_index<=last_y;
                 iy_index += INPUT_WINDOW_SIZE)
                detect_edges (last_x, iy_index);
            last_x -= INPUT_WINDOW_SIZE;
            for (ix_index=last_x; ix_index>first_x;
                 ix_index -= INPUT_WINDOW_SIZE)
                detect_edges (ix_index, last_y);
            for (iy_index=last_y; iy_index>=first_y;
                 iy_index -= INPUT_WINDOW_SIZE)
                detect_edges (first_x, iy_index);
            first_x += INPUT_WINDOW_SIZE;
            last_y -= INPUT_WINDOW_SIZE;
}
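        /* Illustrative note (not part of the original listing): for a
           hypothetical 4 x 4 grid of input windows the loop above visits
           the window positions, in window units (column, row), in this
           order:

               outer cycle:  (0,0) (1,0) (2,0) (3,0)   top row, left to right
                             (3,1) (3,2) (3,3)         right column, downward
                             (2,3) (1,3)               bottom row, right to left
                             (0,3) (0,2) (0,1)         left column, upward
               inner cycle:  (1,1) (2,1) (2,2) (1,2)

           so every window is processed exactly once, spiralling inward from
           the image border toward the centre. */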
/* DEBUG */
        printf("\nSpiral map has generated %d points for the spectrum.\n",
               (ftr_counter-EDF_SPECTRUM_OFFSET));
        /* Set Flag to Indicate that Edge Detector Spectrum is Now Valid */
        valid_V1_flag = 1;
/* DEBUG */
/*
        specdata_file = fopen("spectrum.dat", "w");
        if (specdata_file==NULL)
            errmess ("Error opening spectrum data file!");
        else
        {
            printf
                ("Writing edge feature spectrum data to file \"spectrum.dat\".");
            for(ix_index=EDF_SPECTRUM_OFFSET; ix_index<ftr_counter; ++ix_index)
                fprintf (specdata_file, "%9.3f\n", frwd_feature[ix_index]);
        }
        fclose(specdata_file);
*/
display_edf_results();
}
}
/* IRP_visar2.c Printed on 18-December-1989 */
/*
Code to implement neural network algorithm for assessing positional offsets within a field of view (V2)
for IRP
*/
/* Originally coded by KG Heinemann from 16-June-1989 to 07-August-1989 */
/* Modified by KG Heinemann on 03 - 04 October 1989 to add mechanise
for communicating validity of V2 results to other "cellview" modules
(the "valid_V2_flag") */
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <suntool/sunview.h>
#include <suntool/canvas.h>
#include <suntool/panel.h>
#include "image_ io.h"
#include "cellview.h"
#include "netparam.h"
/* Second Dimension of Input Window for Reduce Network in V2 */
#define V2AUX_WINDOW_SIZE 3
#if (V2AUX_WINDOW_SIZE > INPUT_WINDOW_SIZE)
#define RSKP_ INC 0
#else
#define RSKP_INC (INPUT_WINDOW_SIZE-V2AUX_WINDOW_SIZE)
#endif
/* Flag to Indicate Whether Centering Signal (V2)
   Values are Valid for the Current Image */
u_char valid_V2_flag;
/* Pointer to memory region for compressed image */
ACTIVATION_DATA_TYPE *V2_hidden_layer=NULL;
/* Image compression factors (linear dimensions of the averaging window) */
int x_compression_factor, y_compression_factor;
char too_small_img_str[92];
char *horz_str = "Horizontal";
char *vert_str = "Vertical";
char *lstr1 = " size of the ";
char *inpt_str = "input";
char *lstr2 = " image (";
char *lstr3 = ") is too small for the ";
char *offs_str = "offset detector's";
char *lstr4 = " input window.";
char *insufmem_str = "V2_average: insufficient memory available for V2 image.";
/* Strengths for connections between the hidden units */
float V2_A_wts [N_HIDDEN_NEURONS * N_HIDDEN_NEURONS];
/* Strengths for connections between input units and hidden units */
float V2_B_wts [INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE * N_HIDDEN_NEURONS];
/* Strengths for connections between hidden units and output units */
float V2_C_wts [N_HIDDEN_NEURONS * N_OUTPUTS];
/* Strengths for connections between input units and output units */
float V2_D_wts [INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE * N_OUTPUTS];
/*--------------------------------------------------------------------------------------------------------------*/
/* */
/* Utility routine to read in pre-established V2 connection strength values */
/* from file on disk */
/* */
/*--------------------------------------------------------------------------------------------------------------*/
/* Adapted from the "get_edf_matrix_elements" routine
   by KG Heinemann on 31-July - 01-August 1989 */
char *V2_matrix_err_str =
    "\nError reading coefficients for V2's matrix!\n\n";
void get_V2_matrix_elements()
{
FILE *matrix_data_file, *fopen();
    int num_read, err_index, aux_index, type_index=37;
    /* Attempt to open matrix coefficient data file */
    matrix_data_file = fopen("V2_matrices.bin", "r");
    if(matrix_data_file==NULL)
        errmess("\nError opening V2's matrix coefficient file!\n\n");
    num_read = fread(V2_A_wts, sizeof(float),
                     N_HIDDEN_NEURONS * N_HIDDEN_NEURONS, matrix_data_file);
    if(num_read != N_HIDDEN_NEURONS * N_HIDDEN_NEURONS)
    {
        V2_matrix_err_str[type_index] = A_str[0];
        errmess(V2_matrix_err_str);
    }
    num_read = fread(V2_B_wts, sizeof(float),
                     INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE * N_HIDDEN_NEURONS, matrix_data_file);
    if(num_read != INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE * N_HIDDEN_NEURONS)
    {
        V2_matrix_err_str[type_index] = B_str[0];
        errmess(V2_matrix_err_str);
    }
    num_read = fread
        (V2_C_wts, sizeof(float), N_HIDDEN_NEURONS * N_OUTPUTS, matrix_data_file);
    if(num_read != N_HIDDEN_NEURONS * N_OUTPUTS)
    {
        V2_matrix_err_str[type_index] = C_str[0];
        errmess(V2_matrix_err_str);
    }
    num_read = fread(V2_D_wts, sizeof(float),
                     INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE * N_OUTPUTS, matrix_data_file);
    if(num_read != INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE * N_OUTPUTS)
    {
        V2_matrix_err_str[type_index] = D_str[0];
        errmess(V2_matrix_err_str);
}
fclose (matrix_data_file);
}
/*--------------------------------------------------------------------------------------------------------------*/
void V2_average()
{
    int scan_row, scan_col, irow, icol, V2_ptr; /* Indices */
    int pixsum; /* Sum of integer pixel values within a given window */
    /* Divisor to compute window average from "pixsum"
       and normalize intensities at the same time */
    ACTIVATION_DATA_TYPE normal_factor;
    /* Pixel position corresponding to beginning of current row in loop */
    int row_begn;
    /* Number of image pixels to skip when the averaging window moves
       between subsequent row positions of its scanning pattern */
    int srb_skip_increment;
    /* Number of image pixels to skip when moving from the end of one row
       in the averaging window to the beginning of the next one */
    int row_skip_increment;
    /* Compute dimensions of the averaging window (image compression factors)
       and use the results to determine whether the input window
       is too small for the offset detector */
    x_compression_factor = size_x / INPUT_WINDOW_SIZE;
    y_compression_factor = size_y / INPUT_WINDOW_SIZE;
    if (x_compression_factor>0 && y_compression_factor>0)
    {
        if (V2_hidden_layer == NULL) /* Allocate memory for compressed image */
        {
            V2_hidden_layer = (ACTIVATION_DATA_TYPE *)calloc
                (INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE,
                 sizeof(ACTIVATION_DATA_TYPE));
            if (V2_hidden_layer == NULL)
                if (!batch_flg) message(insufmem_str);
                else err_str = insufmem_str;
        }
        /* Compute compressed image only if storage has been allocated */
        if (V2_hidden_layer != NULL)
        {
            /* Set divisor for averaging to product of the window size
               and the normalizing factor for conversion between
               integer and real pixel values */
            normal_factor = (ACTIVATION_DATA_TYPE)
                ((VLT_SIZE-1) * x_compression_factor * y_compression_factor);
            /* Compute pixel increment for moving between rows within a window */
            row_skip_increment = size_x - x_compression_factor;
            /* Compute pixel increment for moving the averaging window between
               subsequent row positions of its scanning pattern */
            srb_skip_increment = size_x * y_compression_factor;
            /* Loops to scan the averaging process over the entire image */
            V2_ptr=0;
            for (scan_row=0, row_begn=0; scan_row<size_y;
                 scan_row+=y_compression_factor, row_begn+=srb_skip_increment)
                for (scan_col=0; scan_col<size_x; scan_col+=x_compression_factor)
                {
                    input_index = row_begn + scan_col;
                    pixsum = 0;
                    for (irow=0; irow<y_compression_factor; irow++)
                    {
                        for (icol=0; icol<x_compression_factor; icol++)
                            pixsum += image[input_index++];
                        input_index += row_skip_increment;
                    }
                    V2_hidden_layer[V2_ptr++] = (float)pixsum / normal_factor;
                }
        }
    }
}
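/* Illustrative sketch (not part of the original listing): V2_average above
   shrinks the image by averaging non-overlapping blocks.  The standalone
   routine below shows the same idea for an 8-bit image held in a flat
   array; the names, the fixed 0..255 pixel range, and the interface are
   assumptions made only for this example. */
static void block_average(img, width, height, bx, by, out)
u_char *img;
int width, height, bx, by;
float *out;
{
    /* "out" must hold (width/bx) * (height/by) floats; each entry is the
       mean of one bx * by block scaled to the range 0..1. */
    int ox, oy, x, y, out_w, out_h;
    long sum;
    out_w = width / bx;
    out_h = height / by;
    for (oy = 0; oy < out_h; oy++)
        for (ox = 0; ox < out_w; ox++)
        {
            sum = 0;
            for (y = 0; y < by; y++)
                for (x = 0; x < bx; x++)
                    sum += img[(oy*by + y)*width + (ox*bx + x)];
            out[oy*out_w + ox] = (float)sum / (float)(255 * bx * by);
        }
}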
/* Routine to generate positional offset signals from the compressed image */
void detect_offset()
{
    int scan_row, scan_col, V2_ptr, V2_base_ptr;
    if (!valid_V2_flag)
    {
        V2_average(); /* Generate the compressed image */
        /* Skip remaining computations if memory allocation error
           occurred while generating the compressed image */
        if (V2_hidden_layer != NULL)
        {
            if (x_compression_factor < 1)
            {
                too_small_img_str[0] = '\0';
                strncat (too_small_img_str, horz_str, 10 );
                strncat (too_small_img_str, lstr1, 13 );
                strncat (too_small_img_str, inpt_str, 10 );
                strncat (too_small_img_str, lstr2, 8 );
                sprintf (too_small_img_str+41, "%2d", size_x );
                strncat (too_small_img_str, lstr3, 24 );
                strncat (too_small_img_str, offs_str, 16 );
                strncat (too_small_img_str, lstr4, 14 );
                if (!batch_flg) message (too_small_img_str);
                else err_str = too_small_img_str;
}
else
{
if (y_compression_factor < 1)
{
                    too_small_img_str[0] = '\0';
                    strncat (too_small_img_str, vert_str, 8 );
                    strncat (too_small_img_str, lstr1, 13 );
                    strncat (too_small_img_str, inpt_str, 10 );
                    strncat (too_small_img_str, lstr2, 8 );
                    sprintf (too_small_img_str+39, "%2d", size_y );
                    strncat (too_small_img_str, lstr3, 24 );
                    strncat (too_small_img_str, offs_str, 16 );
                    strncat (too_small_img_str, lstr4, 14 );
                    if (!batch_flg) message (too_small_img_str);
                    else err_str = too_small_img_str;
}
else
{
ftr_counter = V2_SPECTRUM_OFFSET;
/*--------------------------------------------------------------------------------------------------------------*/
/* */
/* Offset Signal Calculations for the Northerly Direction */
/* */
/*--------------------------------------------------------------------------------------------------------------*/
/* Initialization - Pack Data from Compressed Image
   into the 7 x 3 Neural Network Input Array */
input_index=0;
for (scan_col=INPUT_WINDOW_SIZE; scan_col>0; --scan_col)
{
    V2_ptr = scan_col - 1;
    for (scan_row=0; scan_row<V2AUX_WINDOW_SIZE; ++scan_row)
    {
        input_neuron_signal[input_index++] =
            V2_hidden_layer[V2_ptr];
        V2_ptr += INPUT_WINDOW_SIZE;
    }
}
/* Actual Calculation for Raw Sub-Image */
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_HIDDEN_NEURONS,
     V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE,
     N_OUTPUTS, V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS; input_index++)
    sigrect[input_index] =
        step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, V2_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] +=
        B_or_D_fi[input_index];
    frwd_feature[ftr_counter] =
        output_neuron_signal[input_index];
}
/* Compute Complement of the Input Sub-Image */
for (input_index=0;
     input_index < (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE); ++input_index)
    input_neuron_signal[input_index] =
        1.0 - input_neuron_signal[input_index];
/* Repeat Offset Signal Calculation Using
   the Complemented Sub-Image as Input */
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_HIDDEN_NEURONS, V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_OUTPUTS,
     V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS;
    input_index++)
    sigrect[input_index] =
        step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, V2_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] +=
        B_or_D_fi[input_index];
    frwd_feature[ftr_counter] =
        max (frwd_feature[ftr_counter],
             output_neuron_signal[input_index]);
    ++ftr_counter;
}
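/* Illustrative note (not part of the original listing): the pair of passes
   above implements "best of the two polarities".  If, say, the raw packed
   sub-image gives an output activation of 0.2 for this direction while the
   complemented sub-image (1.0 - pixel) gives 0.7, the spectrum entry kept
   by the max() above is 0.7, so a bright object on a dark background and a
   dark object on a bright background produce offset signals of the same
   strength. */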
/*--------------------------------------------------------------------------------------------------------------*/
/* */
/* Offset Signal Calculations for the Southerly Direction */
/* */
/*--------------------------------------------------------------------------------------------------------------*/
/* Initialization - Pack Data from Compressed Image
   into the 7 x 3 Neural Network Input Array */
input_index=0;
V2_base_ptr = (INPUT_WINDOW_SIZE-1) * INPUT_WINDOW_SIZE;
for (scan_col=0; scan_col<INPUT_WINDOW_SIZE; ++scan_col)
{
    V2_ptr = V2_base_ptr + scan_col;
    for (scan_row=0; scan_row<V2AUX_WINDOW_SIZE; ++scan_row)
    {
        input_neuron_signal[input_index++] =
            V2_hidden_layer[V2_ptr];
        V2_ptr -= INPUT_WINDOW_SIZE;
}
}
/* Actual Calculation for Raw Sub-Image */
compute_hidden_unit_activations
(INPUT_WINDOW_ SIZE * V2AUX_WINDOW_ SIZE, N_HIDDEN_NEURONS,
V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
from Input Signals and Store them in Designated Array */
matrix_vector_product
(INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_OUTPUTS,
V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigaoid rectification */
for(input_index=0; input_index<N_HlDDEN_NEURONS;
input_index++)
sigrect[input_ index] =
step_sigmoid(hidden_neuron_signal [input_index]);
/* Compute Contributions to Output Layer Activations froa Hidden Unite */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, V2_C_wts,
sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
Contributions from the Input and Hidden Layers
and Store Results in Edge Detection Feature Array */
for(input_index=0; input_ index<N_OUTPUTS; input_index++)
{
output_neuron_signal [input_ index] +=
B_or_D_fi[input_index];
frwd_feature[ftr_counter] =
output_neuron_signal [input_index];
}
/* Compute Complement of the Input Sub-Image */
for (input_index=0;
     input_index < (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE);
     ++input_index)
    input_neuron_signal[input_index] =
        1.0 - input_neuron_signal[input_index];
/* Repeat Offset Signal Calculation Using
   the Complemented Sub-Image as Input */
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_HIDDEN_NEURONS,
     V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
from Input Signals and Store them in Designated Array */
matrix_vector_product
(INPUT_ WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_OUTPUTS,
V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(i nput_ index=0; input_index<N_HIDDEN_ NEURONS;
input_index++)
sigrect[input_ index] =
step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, V2_C_wts,
sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
Contributions from the Input and Hidden Layers
and store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
output_neuron_signal[input_index] +=
    B_or_D_fi[input_index];
frwd_feature[ftr_counter] =
    max (frwd_feature[ftr_counter],
         output_neuron_signal[input_index]);
++ftr_counter;
}
/*--------------------------------------------------------------------------------------------------------------*/
/* Offset Signal Calculations for the Easterly Direction */
/* */
/*--------------------------------------------------------------------------------------------------------------*/
/* Initialization - Pack Data from Compressed Image
   into the 7 x 3 Neural Network Input Array */
input_index=0;
V2_ptr = (INPUT_WINDOW_SIZE * INPUT_WINDOW_SIZE) - 1;
for (scan_row=0; scan_row<INPUT_WINDOW_SIZE; ++scan_row)
{
    for (scan_col=0; scan_col<V2AUX_WINDOW_SIZE; ++scan_col)
        input_neuron_signal[input_index++] =
            V2_hidden_layer[V2_ptr--];
    V2_ptr -= RSKP_INC;
}
/* Actual Calculation for Raw Sub-Image */
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_HIDDEN_NEURONS, V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_OUTPUTS,
     V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS;
    input_index++)
    sigrect[input_index] =
        step_sigmoid(hidden_neuron_signal[input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, V2_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] +=
        B_or_D_fi[input_index];
    frwd_feature[ftr_counter] =
        output_neuron_signal[input_index];
}
/* Compute Complement of the Input Sub-Image */
for (input_index=0;
     input_index < (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE); ++input_index)
    input_neuron_signal[input_index] =
        1.0 - input_neuron_signal[input_index];
/* Repeat Offset Signal Calculation Using
   the Complemented Sub-Image as Input */
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_HIDDEN_NEURONS, V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
   from Input Signals and Store them in Designated Array */
matrix_vector_product
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_OUTPUTS,
     V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS;
    input_index++)
    sigrect[input_index] =
step_sigmoid(hidden_neuron_signal [input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS , N_OUTPUTS, V2_C_wts,
sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
Contributions from the Input and Hidden Layers
and Store Results in Edge Detection Feature Array */
for (input_index=0; input_ index<N_OUTPUTS; input_index++)
{
output_ neuron_signal [input_index] +=
B_or_D_ fi [input_index];
frwd_feature[ftr_counter] =
max (f rwd_feature [ftr_counter],
output_neuron_signal [input_index]);
++ftr_ counter;
}
/*--------------------------------------------------------------------------------------------------------------*/
/* */
/* Offset Signal Calculations for the Westerly Direction */
/*--------------------------------------------------------------------------------------------------------------*/
/* Initialization - Pack Data from Compressed Image
   into the 7 x 3 Neural Network Input Array */
input_index = V2_ptr = 0;
for (scan_row=0; scan_row<INPUT_WINDOW_SIZE; ++scan_row)
{
    for (scan_col=0; scan_col<V2AUX_WINDOW_SIZE; ++scan_col)
        input_neuron_signal[input_index++] =
            V2_hidden_layer[V2_ptr++];
    V2_ptr += RSKP_INC;
}
/* Actual Calculation for Raw Sub-Image */
compute_hidden_unit_activations
(INPUT_ WINDOW_ SIZE * V2AUX_WINDOW_SIZE, N_HIDDEN_ NEURONS,
V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
from Input Signals and Store them in Designated Array */
matrix_ vector_ product
(INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_OUTPUTS, V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS;
input_index++)
sigrect[input_index] =
step_sigmoid(hidden_neuron_signal [input_index ]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, V2_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for(input_index=0; input_index<N_OUTPUTS; input_index++)
{
    output_neuron_signal[input_index] +=
        B_or_D_fi[input_index];
    frwd_feature[ftr_counter] =
        output_neuron_signal[input_index];
}
/* Compute Complement of the Input Sub-Image */
for (input_index=0;
     input_index < (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE); ++input_index)
    input_neuron_signal[input_index] =
        1.0 - input_neuron_signal[input_index];
/* Repeat Offset Signal Calculation Using
   the Complemented Sub-Image as Input */
compute_hidden_unit_activations
    (INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_HIDDEN_NEURONS, V2_A_wts, V2_B_wts)
/* Compute Direct Contributions to Output Layer Activations
from lnput Signals and Store them in Designated Array */ matrix_vector_ product
(INPUT_WINDOW_SIZE * V2AUX_WINDOW_SIZE, N_OUTPUTS,
V2_D_wts, input_neuron_signal, B_or_D_fi)
/* Pass hidden unit activations through the sigmoid rectification */
for(input_index=0; input_index<N_HIDDEN_NEURONS;
input_index++)
sigrect[input_index] =
step_sigmoid(hidden_neuron_signal [input_index]);
/* Compute Contributions to Output Layer Activations from Hidden Units */
matrix_vector_product(N_HIDDEN_NEURONS, N_OUTPUTS, V2_C_wts,
                      sigrect, output_neuron_signal)
/* Compute Final Output Activations by Combining
   Contributions from the Input and Hidden Layers
   and Store Results in Edge Detection Feature Array */
for (input_index=0; input_index<N_OUTPUTS; input_index++)
{
output_neuron_signal [input_ index] +=
B_ or_D_ fi [input_ index];
frwd_feature [ftr_counter] =
max (frwd_feature[ftr_counter],
output_neuron_signal [input_ index]);
++ftr_counter;
                    }
                }
            }
        }
        /* Set Flag to Indicate that Offset Detection Signals Are Now Valid */
        valid_V2_flag = 1;
    }
}
/* ART2.C     Printed on 18-December-1989 */
#include <stdio.h>
#include <math.h>
#define MXITR 3
#define MXVAL 255
#define NONE -1
#define NDCHG -2
#define RESET -3
/* Specify "C" data type for representation of activation levels
in neural network edge detection algorithm */
#include "activation. h"
/* Header file to make Long Term Memory trace information
and ART 2 result descriptors available to outside programs */
#include "LTN.h"
#define max(a,b) (((a)>(b))?(a):(b))
#define min(a,b) (((a)<(b))?(a):(b))
#define fth(a) (((a)>theta)?(a):0.0)
#define sqr(a) ((a)*(a))
#define calloc2D(ny,nx,type) \
(type **)kalloc2D(ny,nx,sizeof (type), sizeof(type *))
#define calloc1D(nx, type) (type *)calloc(nx,sizeof(type))
/* Number of output categories (F2 nodes) for the ART2 classifier */
int nF2;
int MXPAS;
float MXERR, step;
float a, b, c, d, rho, theta, alpha; /* ART2 control parameters */
float **ztd, **zbu,
      *p, *q, *r, *u, *v, *w, *x, *y, **z;
float P, R, V, W;
int *Npatlst;
int Nactv, Jactv, Jpntr, *Jnext;
int ARTreset, ndchg;
/* Error messages for "Ordinary Differential Equation Integration" Routines */
char *odeerr_1 = "ART2: Step size too small in ODEINT.";
char *odeerr_2 = "ART2: Too many steps in routine ODEINT.";
char *odeerr_3 = "ART2: Step size too small in routine RKQC";
/*--------------------------------------------------------------------------------------------------------------*/
void ART_start (nF1)
/* Allocate memory for ART2 computation information
   and read in parameters from file on disk */
int nF1; /* Number of nodes in an input pattern */
{
FILE *ART2_parameter_file, *fopen();
void ode_alloc();
char *calloc();
char **kalloc2D();
int i. j;
float const;
char dumchar;
    /* Attempt to open data file containing operational parameters for ART2 */
    ART2_parameter_file = fopen("ART2.par", "r");
    if(ART2_parameter_file==NULL)
        errmess("\nUnable to open ART 2 parameter file.\n\n");
    fscanf(ART2_parameter_file, "%d%*[^\n]*%c", &nF2);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &a);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &b);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &c);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &d);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &rho);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &theta);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &alpha);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &step);
    fscanf(ART2_parameter_file, "%f%*[^\n]*%c", &MXERR);
    fscanf(ART2_parameter_file, "%d%*[^\n]*%c", &MXPAS);
ode_alloc (2*nF1);
    z = calloc2D (nF2, 2*nF1, float);
    ztd = calloc1D (nF2, float *);
    zbu = calloc1D (nF2, float *);
    p = calloc1D (nF1, float);
    q = calloc1D (nF1, float);
    r = calloc1D (nF1, float);
    u = calloc1D (nF1, float);
    v = calloc1D (nF1, float);
    w = calloc1D (nF1, float);
    x = calloc1D (nF1, float);
    y = calloc1D (nF2, float);
    Jnext = calloc1D (nF2, int);
    Npatlst = calloc1D (nF2, int);
Nactv=0;
const = alpha / ( (1.0-d) * sqrt ((float)nF1) );
for (j=0; j<nF2; j++)
{
ztd[j]=(&z[j][0]);
zbu[j]=(&z[j][nF1]);
for(i= 0;i<nF1;i++)
{
ztd[j][i]=0.0;
zbu[j][i]=const;
}
} }
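/* Illustrative sketch (not part of the original listing): ART_start above
   reads one value per line from "ART2.par", skipping whatever follows the
   value on that line, so the file can carry inline comments.  A file laid
   out as below would satisfy the reads in order; the numbers shown are
   placeholders, not the parameters actually used by the system.

       20      number of F2 (category) nodes, nF2
       10.0    a
       10.0    b
       0.1     c
       0.9     d
       0.95    rho   (vigilance)
       0.1     theta (sigmoid threshold)
       0.6     alpha
       0.5     step
       0.001   MXERR
       100     MXPAS
*/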
void run_ART(nF1, inpat, res_info, err_chan)
int nF1; /* Number of nodes in an input pattern */
ACTIVATION_DATA_TYPE inpat[ ]; /* Array of input pattern values */
/* Locations designated to store ART 2 results for the calling program. */
struct ART2_res_ptrs *res_info;
FILE *err_chan; /* Stream to receive ART 2 error messages */
/* Main routine for applying ART2 classifier to an extracted edge spectrum */
{
    /* Use "stderr" to log ART 2 errors, if no particular stream specified */
    if (err_chan == NULL) err_chan = stderr;
    Jactv = Jpntr = NONE; /* Set flag to indicate no active category nodes */
    /* Compute initial response of feature representation (F1) nodes */
    flrelax(nF1, inpat, res_info);
    busignl (nF1);
    do
    {
        f2choos();
        flrelax(nF1, inpat, res_info);
    } while (ARTreset==RESET);
    /* Store selected category (F2) node for use by the calling program */
    res_info->cat_node = Jactv;
learn(nF1, inpat, res_info, err_chan);
}
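/* Illustrative sketch (not part of the original listing): a calling module
   might drive the classifier as below, passing it a feature spectrum such
   as the one assembled by the edge-detection code earlier in this listing,
   after a single prior call to ART_start() with the same pattern length.
   The wrapper name and the use of stderr/stdout are assumptions made only
   for this example; struct ART2_res_ptrs comes from the header included
   above. */
static void classify_spectrum(spectrum, n_values)
ACTIVATION_DATA_TYPE spectrum[];
int n_values;
{
    struct ART2_res_ptrs results;
    run_ART(n_values, spectrum, &results, stderr);
    fprintf(stdout, "category node %d, match R = %f, %d learning passes\n",
            results.cat_node, results.R_value, results.num_pass);
}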
flrelax(nF1, inpat, res_info)
/* Compute response of F1 (feature representation) nodes
to signals from input pattern and top down signals
   from an active category representation (F2) node */
int nF1; /* Total number of input pattern values */
ACTIVATION_DATA_TYPE inpat[ ]; /* Array of input pattern values */
/* Locations designated to store ART 2 results for the calling program. */
struct ART2_res_ptrs *res_info;
{
    int act_cat;
    float g;
    act_cat = (Jactv != NONE) ? Jactv : 0;
    /* Nullify contribution from F2 if no category nodes are active */
    g = (Jactv != NONE) ? 1.0 : 0.0;
    stmlax(nF1, inpat, g, ztd[act_cat]);
    /* Store value of the "match quality" metric for use by the calling program */
    res_info->R_value = R;
}
stmlax(nF1, inpat, zmult, zz)
int nF1; /* Total number of input pattern values */
ACTIVATION_DATA_TYPE inpat[ ]; /* Array of input pattern values */
float zmult, *zz;
{
int pat_indx, itr;
float NORM( );
    for (pat_indx=0; pat_indx<nF1; ++pat_indx)
        p[pat_indx] = q[pat_indx] = r[pat_indx] = u[pat_indx] = v[pat_indx] =
            w[pat_indx] = x[pat_indx] = 0.0;
    itr=0; ARTreset=0;
    while (itr<MXITR && ARTreset!=RESET)
{
itr++;
for (pat_ indx=0; pat_indx<nF1; ++pat_ indx)
w[pat_ indx] = inpat[pat_ indx] + a * u[pat_indx];
W = NORM(nF1, w);
for (pat_ indx=0; pat_indx<nF1; ++pat_ indx)
x[pat_indx] = w[pat_indx] / W;
for (pat_indx=0; pat_ indx<nF1; ++pat_ indx)
            v[pat_indx] = fth(x[pat_indx]) + b * fth(q[pat_indx]);
        V = NORM(nF1, v);
        for (pat_indx=0; pat_indx<nF1; ++pat_indx)
            u[pat_indx] = v[pat_indx] / V;
/* Add in top down signal from an F2 node */
for (pat_indx=0; pat_ indx<nF1; ++pat_indx)
p[pat_indx] = u[pat_indx] + d * zmult * zz[pat_ indx];
P=NORM(nF1, p);
for (pat_ indx=0; pat_ indx<nF1; ++pat_indx)
            r[pat_indx] = (u[pat_indx] + c * p[pat_indx]) / (1.0 + c*P);
        R = NORM(nF1, r);
        for (pat_indx=0; pat_indx<nF1; ++pat_indx)
            q[pat_indx] = p[pat_indx]/P;
        ARTreset = (R<rho) ? RESET : 0;
}
}
f2choos()
/* Code to select next category representation node for vigilance testing */
{
    ++Jpntr;
    Jpntr = min(Jpntr, nF2-1);
    Jactv = Jnext[Jpntr];
}
busignl (nF1)
int nF1; /* Number of nodes in an input pattern */
{
    int aux_indx, cat_indx, N2, JJ;
    float YY;
    /* Consider all previously assigned nodes plus the next free one */
    N2 = min (nF2, max (Nactv+1, 1));
    /* Compute responses of category representation (F2) nodes to
       bottom up signals from the feature representation (F1) nodes */
    for (cat_indx=0; cat_indx<N2; ++cat_indx)
{
Jnext [cat_ indx] = cat_ indx;
for(aux_indx=0, y[cat_indx]=0.0; aux_indx<nF1; aux_indx++)
y[cat_indx] += p[aux_ indx] * zbu[cat_ indx][aux_ indx];
}
/* Sort eligible F2 nodes by decreasing activation and store
the sort results in the "Jnext" singly linked list */
for (cat_ indx=1; cat_ indx<N2; ++cat_ indx)
{
YY = y [cat_ indx];
JJ = Jnext[cat_indx];
for (aux_indx=cat_indx-1; aux_indx>=0 && y[ Jnext[aux_indx ]]<YY;
--aux_indx)
Jnext[aux_indx+1]=Jnext[aux_indx];
Jnext[aux_indx+1]=JJ;
}
ndchg=(Jnext[0] !=Jactv)?NDCHG:0;
}
learn(nF1, inpat, res_info, err_chan)
int nF1; /* Number of nodes in an input pattern */
ACTIVATION_DATA_TYPE inpat[ ]; /* Array of input pattern values */
/* Locations designated to store ART 2 results for the calling program. */
struct ART2_res_ptrs *res_info;
FlLE *err_chan; /* Stream to receive ART 2 error messages */
{
int pas,nok,nbad,j;
float ans,err,strt,fnsh,tol,
h=1.0e-02,hmin=1.0e-04,
*new,*zz;
char *calloc();
void ode();
zz=z[Jactv];
tol=10.*MXERR;
    new = calloc1D(2*nF1, float);
    if (Npatlst[Jactv]==0) Nactv++;
Npatlst[Jactv]++;
fprintf (stdout, "\nLEARNING CURRENT PATTERN ON NODE %d ", Jactv);
for ( j=0;j<2*nF1;j++) new[j]=zz[j];
strt=0.0;
pas= -1; err=1.e10; ARTreset=0;
while (pas<MXPAS && err>MXERR && ARTreset !=RESET)
{
pas++; err=0.0;
fnsh=strt+step;
        ode(new, 2*nF1, strt, fnsh, tol, h, hmin, &nok, &nbad, inpat, err_chan);
        for (j=0;j<2*nF1;j++)
        {
            ans = fabs(new[j]-zz[j]);
            if (ans>err) err=ans;
            zz[j]=new[j];
}
busignl (nF1);
}
fprintf (stdout, " - pattern learned after %d passes. \n", pas);
/* Store actual number of learning passes for use by the calling program */ res_info->num_pass = pas;
if (ARTreset==RESET) fprintf(err_chan,"\nF2 RESET : R=%4f\n",R);
    if (ndchg==NDCHG) fprintf(err_chan, "\nF2 CHANGE: %d->%d\n",Jactv,Jnext[0]);
}
/*--------------------------------------------------------------------------------------------------------------*/
/* Routines to perform explicit solution of differential equations for F2 */
#define MAXSTP 10000
#define TINY 1.0e-30
float *yscal, *yy, *dydx;
float *dysav, *ysav, *yteap;
float *dym, *dyt, *yt;
void ode_ alloc(nvar)
int nvar;
{
    yy    = calloc1D (nvar, float);
    dydx  = calloc1D (nvar, float);
    yscal = calloc1D (nvar, float);
    ysav  = calloc1D (nvar, float);
    dysav = calloc1D (nvar, float);
    ytemp = calloc1D (nvar, float);
    dym   = calloc1D (nvar, float);
    dyt   = calloc1D (nvar, float);
    yt    = calloc1D (nvar, float);
}
void ode_free()
{
free ( (char *)yy );
free ( (char *)dydx );
free ( (char *)yscal);
free ( (char *)ytemp);
free ( (char *)dysav);
free ( (char *)ysav );
free ( (char *)yt );
free ( (char *)dyt );
free ( (char *)dym ) ;
}
void ode(ystart, nvar, x1, x2, eps, h1, hmin, nok, nbad, inpat, err_chan)
float ystart[ ], x1, x2, eps, h1, hmin;
int nvar, *nok, *nbad;
ACTIVATION_DATA_TYPE inpat[ ]; /* Array of input pattern values */
FILE *err_chan; /* Stream to receive ART 2 error messages */
{
int nstp,i;
float xx, hnext, hdid, h;
void rkqc(),derivs();
xx=x1;
h=(x2 > x1) ? fabs(h1) : -fabs(h1);
*nok = (*nbad) = 0;
for (i=0; i<nvar;i++) yy[i]=ystart[i];
for (nstp=0;nstp<MAXSTP;nstp++) {
stmlax((nvar/2), inpat, 1.0, yy);
derivs(nvar, yy, dydx);
for (i=0;i<nvar;i++)
yscal[i]=fabs(yy[i])+fabs(dydx[i]*h)+TINY;
if ((xx+h-x2)*(xx+h-x1) > 0.0) h=x2-xx;
rkqc(nvar, &xx,h, eps, &hdid, &hnext, err_chan);
if (hdid == h) ++(*nok); else ++(*nbad);
if ((xx-x2)*(x2-x1) >= 0.0) {
for (i=0;i<nvar;i++) ystart[i]=yy[i];
return;
}
if (fabs(hnext)<=hmin)
    if (err_chan == stderr) message(odeerr_1);
    else fprintf (err_chan, "\n%s", odeerr_1);
h=hnext;
}
if (err_chan == stderr) message(odeerr_2);
else fprintf (err_chan, "\n%s", odeerr_2);
}
#undef MAXSTP
#undef TINY
#define PGROW -0.20
#define PSHRNK -0.25
#define FCOR 0.06666666 /* 1/15 */
#define SAFETY 0.9
#define ERRCON 6.0e-4
void rkqc(n, x, htry, eps, hdid, hnext, err_chan)
float *x, htry, eps, *hdid, *hnext;
int n;
FILE *err_chan; /* Stream to receive ART 2 error messages */
{
int i;
float xsav,hh,h, temp, errmax;
void derivs(),rk4();
xsav=(*x);
for (i=0;i<n;i++) {
    ysav[i]=yy[i];
    dysav[i]=dydx[i];
}
h=htry;
for (;;) {
hh=0.5*h;
rk4(ysav,dysav,n,hh,ytemp);
*x=xsav+hh;
derivs (n, ytemp, dydx);
rk4 (ytemp,dydx,n,hh,yy);
*x=xsav+h;
if (*x == xsav)
    if (err_chan == stderr) message(odeerr_3);
    else fprintf(err_chan, "\n%s", odeerr_3);
rk4(ysav,dysav,n,h,ytemp);
errmax=0.0;
for (i=0;i<n;i++) {
ytemp[i]=yy[i]-ytemp[i];
temp=fabs(ytemp[i]/yscal[i]);
if (temp > errmax) errmax=temp;
}
errmax /= eps;
if (errmax <= 1.0) {
*hdid=h;
*hnext=(errmax > ERRCON ?
    SAFETY*h*exp(PGROW*log(errmax)) : 4.0*h);
break ;
}
h=SAFETY*h*exp(PSHRNK*log(errmax));
}
for (i=0;i<n;i++) yy[i] += ytemp[i]*FCOR;
}
#undef PGROW
#undef PSHRNK
#undef FCOR
#undef SAFETY
#undef ERRCON
void rk4 (y,dydx,n,h,yout)
float y[ ],dydx[ ],h,yout[ ];
int n;
{
int i;
float hh,h6;
void derivs ();
hh=h*0.5;
h6=h/6.0;
for (i=0;i<n;i++) yt[i]=y[i]+hh*dydx[i];
derivs(n, yt, dyt);
for (i=0;i<n;i++) yt[i]=y[i]+hh*dyt[i];
derivs (n, yt, dym);
for (i=0;i<n;i++) {
yt[i]=y[i]+h*dym[i];
dym[i] += dyt[i];
}
derivs(n, yt, dyt);
for (i=0;i<n;i++)
    yout[i]=y[i]+h6*(dydx[i]+dyt[i]+2.0*dym[i]);
}
char **kalloc2D (NY, NX, SIZE, SIZESTAR)
int NY, NX, SIZE, SIZESTAR;
{
char **K;
char *calloc();
int J;
K = (char **) calloc ( NY, SIZESTAR);
K[0] = (char *) calloc (NX*NY, SIZE );
for (J=1; J<NY; J++)
K[J] = K[0] + SIZE*NX*J;
return(K);
}
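/* Illustrative usage sketch (not part of the original listing): through the
   calloc2D macro defined near the top of this file, kalloc2D returns an
   array of row pointers backed by a single contiguous, zero-filled block,
   so a caller can index it with two subscripts and release it with two
   calls to free:

       float **m = calloc2D(3, 5, float);
       m[2][4] = 1.0;
       free((char *)m[0]);
       free((char *)m);
*/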
void derivs (nz, zz, dz)
/* Number of points where derivative calculation is to be performed */ int nz;
float *zz, *dz;
{
int i;
for (i=0; i<nz; i++) dz[i] = d * (p[i%(nz/2)] - zz[i]);
}
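/* Illustrative note (not part of the original listing): written out, the
   derivative computed above is dz_i/dt = d * ( p_(i mod nF1) - z_i ), where
   the first nF1 entries of z[Jactv] hold the top-down traces and the last
   nF1 entries hold the bottom-up traces, so during learning both halves of
   the long-term-memory vector relax exponentially toward the current F1
   pattern p at rate d. */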
float NORM(nelem, vec)
int nelem; /* Number of elements in vector to be normalized */
float *vec;
{
int i;
float norm;
for (i=0, norm=0.0; i<nelem; i++) norm += sqr(vec[i]);
return (sqrt(norm));
}
/* IRP_LGN.c     Printed on 18-December-1989 */
/*
Code to compute coarse, global image features (LGN)
to be used In object classification for IRP
*/
/* Originally coded by KG Heinemann on 08-August-1989 */
/* Coding resumed on 04 October 1989 */
#include <stdio.h>
#include <math.h>
#include <suntool/sunview.h>
#include <suntool/canvas.h>
#include <suntool/panel.h>
#include "image_io.h"
#include "activation.h"
ACTIVATION_DATA_TYPE LGN_sum()
{
int loop_index;
long cumulint;
ACTIVATION_DATA_TYPE norm_sum;
    for(loop_index=0, cumulint=0; loop_index<(size_x*size_y); ++loop_index)
        cumulint += image[loop_index];
    norm_sum =
        (ACTIVATION_DATA_TYPE)cumulint / (ACTIVATION_DATA_TYPE)(VLT_SIZE - 1);
    return(norm_sum);
}

Claims
1. Apparatus for determining the abnormal or normal state of a biological cell within an image based on visual characteristics of the cell, said image being represented by signals whose values correspond to said visual characteristics, comprising
a location channel which determines the location of the cell within the image based on the signal values, and
a classification channel which categorizes the cell based on the signal values,
said location channel and said classification channel operating in parallel and cooperatively to recognize said pattern.
2. The apparatus of claim 1 wherein said classification channel comprises storage for information about the visual characteristics of cells for use in categorizing said cell as normal or abnormal.
3. Apparatus for determining the abnormal or normal state of a biological cell within an image based on visual characteristics of said cell, said cell having visible edges, said image being represented by signals whose values correspond to said visual characteristics, comprising
an orientation analyzer adapted to analyze the orientations of edges of the cell within subwindows of said image, and
a strength analyzer adapted to analyze the strengths of edges of the cell near the periphery of a portion of said image.
4. Apparatus for categorizing, among a set of user-specified categories, a biological cell that
appears in an image based on visual characteristics of the cell, said image being represented by signals whose values correspond to said visual characteristics,
comprising
an unsupervised classifier adapted to define classes of cells and to categorize said cell based on said visual features and said classes, and
a supervised classifier adapted to map said classes to said set of user-specified categories.
5. Apparatus for determining the abnormal or normal state of a biological cell within an image based on visual characteristics of said cell, said image being represented by signals whose values correspond to visual characteristics of said cell, comprising
a location channel which determines the location of the cell within the image based on the signal values, a classification channel which categorizes the cell based on the signal values, and
a feedback path from said classification channel to said location channel to cause said location channel to adapt to classification results generated by said classification channel.
PCT/US1991/001534 1990-03-06 1991-03-06 Recognition of patterns in images WO1991014235A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US48947090A 1990-03-06 1990-03-06
US48944790A 1990-03-06 1990-03-06
US489,470 1990-03-06
US489,447 1990-03-06

Publications (1)

Publication Number Publication Date
WO1991014235A1 true WO1991014235A1 (en) 1991-09-19

Family

ID=27049716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/001534 WO1991014235A1 (en) 1990-03-06 1991-03-06 Recognition of patterns in images

Country Status (1)

Country Link
WO (1) WO1991014235A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3643215A (en) * 1967-11-15 1972-02-15 Emi Ltd A pattern recognition device in which allowance is made for pattern errors
US4242662A (en) * 1978-10-16 1980-12-30 Nippon Telegraph And Telephone Public Corporation Method and apparatus for pattern examination
US4523278A (en) * 1979-02-01 1985-06-11 Prof. Dr.-Ing. Werner H. Bloss Method of automatic detection of cells and determination of cell features from cytological smear preparations
US4685143A (en) * 1985-03-21 1987-08-04 Texas Instruments Incorporated Method and apparatus for detecting edge spectral features
US4965725A (en) * 1988-04-08 1990-10-23 Nueromedical Systems, Inc. Neural network based automated cytological specimen classification system and method
US4965725B1 (en) * 1988-04-08 1996-05-07 Neuromedical Systems Inc Neural network based automated cytological specimen classification system and method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10540535B2 (en) 2017-03-13 2020-01-21 Carl Zeiss Microscopy Gmbh Automatically identifying regions of interest on images of biological cells

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

NENP Non-entry into the national phase

Ref country code: CA