US4731860A - Method for identifying three-dimensional objects using two-dimensional images

Info

Publication number: US4731860A
Application number: US06/874,313
Inventor: Friedrich M. Wahl
Assignee: International Business Machines Corporation
Legal status: Expired - Fee Related

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/48 - Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation

Abstract

For recognizing a three-dimensional object from its two-dimensional image, which was produced e.g. by a TV camera, a Hough transform representation of the image is generated, and specific configurations or structures of the cluster points which constitute the Hough transform representation are determined. The information about these specific configurations is compared to similar information stored for the Hough representation of known object models. By thus relating portions of the image to portions of one or several object models, vertices of the image, which are present at line or edge intersections, are related to vertices of the known object model(s). This knowledge about the correspondence of model and object vertex points allows the exact fitting of vertices and thus recognition of the unknown object and its relative orientation. The models may be either primitive objects, in which case the procedure determines of which primitives the unknown object is composed, or wire frame models, each of which completely describes one more complicated object, in which case the procedure determines which of the models the entire unknown object best fits.

Description

FIELD OF INVENTION
The present invention relates to a method for recognizing or identifying three-dimensional objects. More particularly, it is concerned with a recognition or identification method for a three-dimensional object which uses a two-dimensional image of the object and a Hough transform of the available image. The invention finds application e.g. in robotics systems or automatic part handling machines in which the objects or parts are seen e.g. by a TV camera, and in which the kind and the orientation of the viewed object must be determined on the basis of the two-dimensional TV image.
BACKGROUND
In many industrial applications versatile vision systems play an essential role. In the area of quality control, the completeness of assemblies has to be tested; in automated manufacturing, machine parts have to be recognized and their position or orientation has to be determined in order to support flexible manipulator control, etc. Usually, only a two-dimensional picture such as a television camera output is available of the scene which comprises the assembly or the machine part to be handled.
An article by L. G. Roberts "Machine perception of three-dimensional solids", published in "Optical and Electrooptical Information Processing" (Eds: J. T. Tippett et al.), MIT Press, Cambridge, Mass., 1968, pp. 159-197, disclosed procedures for the recognition of three-dimensional objects on the basis of two-dimensional photographs. Polygons which are required for the identification have to be found in the picture. A difficulty is that in many pictures, the lines which represent the object and which form polygons are not complete due to noise or spots which cause local interruptions, so that finding the polygons will not be possible.
In U.S. Pat. No. 3,069,654 (Hough), a method and means are disclosed for recognizing complex patterns. It describes the basic procedure for generating a Hough transform of lines in an image which transform is then used for determining location and orientation of the lines in the image; these lines may be traces in a bubble chamber or a handwritten signature. No recognition of objects is considered.
Some suggestions were made for using Hough transforms to recognize objects or mechanical parts. However, these methods require either higher-dimensional Hough transforms (more than 2-D) or additional knowledge about spatial parameters of the objects (e.g. surface normals or depth information obtained by structured light or range procedures), and they require considerable computational effort.
OBJECT OF THE INVENTION
It is an object of the invention to devise a method for recognizing or identifying a three-dimensional object on the basis of a two-dimensional image of the object. A more particular object of the invention is such a recognition method which uses a Hough transform of the available two-dimensional image of the object.
DESCRIPTION OF THE INVENTION
For recognizing an object from the Hough transform of a two-dimensional object image, the invention determines, in the Hough transform domain (Hough space representation), whether particular structures or configurations of clusters are present. Such clusters are the center points at which the lines constituting the Hough space representation intersect. The configurations found are compared to cluster configurations which are available for known models; the result of such comparison is the selection of one or a few candidate models, and a relation between vertices (corner points) of such a model and vertices of the object image. Using Hough transform techniques, an exact matching between vertices can then be made to select one of several candidate models, if necessary, and to determine the location or orientation of the object with respect to the basic orientation of the model.
The models may be either object primitives such as parallelepipeds, tetrahedra, etc., in which case one or several such object primitives are found to constitute the unknown object, or wire frame models, each of which is a complete description of one object in 3-D representation, so that only a finite set of different objects can be recognized.
For the models, a characterization of the specific cluster configurations which occur in their 2-D Hough-space representation is prepared and stored and is thus available for comparison. The basic structure used for recognition is the colinearity or straight line arrangement of more than two clusters, and specific configurations are e.g. vertical colinearities, colinearities intersecting exactly in a cluster, arrangements of several such intersecting colinearities, etc.
The invention enables efficient recognition of objects and a determination of their location or orientation by merely using a Hough transform of a 2-D image, and by extracting information on cluster arrangements of specific structure and comparing it to stored data on cluster arrangements of known models. Even if the available image is not perfect because it contains noise, line interruptions, or partially occluded edges, the recognition method is reliable.
An embodiment of the invention is described in the following with reference to the drawings.
LIST OF DRAWINGS
FIG. 1 schematically illustrates the basic technique for using a 2-D image and a 3-D model for recognition;
FIG. 2 is an illustration of the Hough transform;
FIGS. 3a-d show the development from a grey-scale image of an object to a Hough space cluster representation of the object;
FIGS. 4a-d show the development from an image to a cluster representation using a twin Hough space;
FIG. 5 shows, for three object primitives, the two-dimensional line image representation and the particular cluster configurations which appear in the Hough space representation of these object primitives;
FIG. 6 shows, for a composed object, the line image representation and its Hough space cluster representation indicating all cluster colinearities;
FIG. 7 shows, for the composed object of FIG. 6, a complete wire frame model including all edges and vertices, and the respective Hough space cluster representation indicating all cluster colinearities;
FIG. 8 is a boundary graph representation of the interconnections of all edges and vertices of the wire frame model of FIG. 7, indicating common slopes of edges;
FIG. 9 is a boundary graph representation of the interconnections of edges and vertices of the object of FIG. 6, as they can be derived from the Hough space cluster representation of the object, which graph is a subgraph of the graph shown in FIG. 8; and
FIG. 10 is a block diagram of a robot control system implementing the invention.
DETAILED DESCRIPTION Machine Vision Principle
FIG. 1 illustrates the principle of machine vision in which the present invention finds application. A 3-D physical scene containing a cube might be mapped by means of a sensor, e.g., a TV camera, into a 2-D digital image. The captured image can now be processed by a computer to extract image features such as object edges, vertices, etc. (cf. upper part of FIG. 1). On the other hand, abstract object models can be represented in the computer in the form of, e.g., CAD data structures. A cube, e.g., can be modeled as a unit cube in 3-D coordinate space by the coordinate triples of its vertices and their associated spatial relations with each other. Applying 3-D coordinate transforms such as translation, rotation and scaling, a computer image can be generated by a subsequent perspective transform (cf. lower part of FIG. 1). This 2-D representation can be used to fit the object `cube` with the image captured from the real scene. With this in mind one can state the principle of identifying objects as follows: Given a real image and given some object models, find the model which under some coordinate transforms, with parameters to be determined, best fits the real image. For polyhedral objects, e.g., it is advantageous with respect to the computational expense to use vertex points extracted from the real image and to relate them to the model vertices. The present invention utilizes the power of robust feature detection (straight lines) by means of the Hough transform and a subsequent analysis of the Hough space representation to extract object vertex points for object recognition.
Principles of Hough Transform Representation
The Hough transform is a noise-insensitive method of detecting colinear image points, e.g. straight lines, in images. Reference is made to FIG. 2 to explain its principle. A line can be characterized by the equation
y=ax+b
with the two parameters a, b. Resolving this equation with respect to b yields
b=-ax+y.
As can be seen, an image point with coordinate values x, y is mapped into a line in a,b-space, denoted as Hough space. The Hough transform accumulates lines whose intercept and slope are determined by the x,y-coordinate values of their corresponding image points. A very useful property is, as can be seen from FIG. 2, that colinear image points (P1, P2, P4, P5) correspond to lines in Hough space intersecting at exactly one distinct location, called a cluster. Thus, straight lines can easily be detected by extracting clusters in Hough space. The position and orientation of a line is fully determined by its corresponding cluster location in Hough space. Several different straight lines result in several clusters in Hough space at corresponding locations.
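To make the accumulation concrete, the following short Python sketch (not part of the patent; the quantization ranges, bin count, and function name are illustrative assumptions) votes each image point into a discretized a,b-accumulator, so that colinear points pile up in a single cell:

    import numpy as np

    def hough_transform(points, a_range=(-2.0, 2.0), b_range=(-10.0, 10.0), bins=200):
        # Accumulator over the (a, b) parameter space of y = a*x + b.
        acc = np.zeros((bins, bins))
        a_vals = np.linspace(a_range[0], a_range[1], bins)
        for (x, y) in points:
            # Each image point (x, y) maps to the line b = -a*x + y in Hough space.
            b_vals = -a_vals * x + y
            b_idx = np.round((b_vals - b_range[0]) / (b_range[1] - b_range[0]) * (bins - 1)).astype(int)
            ok = (b_idx >= 0) & (b_idx < bins)
            acc[b_idx[ok], np.arange(bins)[ok]] += 1  # one vote per (a, b) cell
        return acc, a_vals

    # Colinear points on y = 0.5*x + 1 vote for one common cell near (a, b) = (0.5, 1).
    acc, a_vals = hough_transform([(0, 1), (2, 2), (4, 3), (6, 4)])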
FIG. 3 shows a processing example with the Hough transform. A simple edge detector and a subsequent thresholding operation have been applied to a digital image containing a cube (FIG. 3a). The resulting binary gradient image (FIG. 3b) is the input to the Hough transform. The result of the Hough transform is shown in FIG. 3c. Although the input of the Hough transform is noisy and some lines include gaps, six distinct clusters can clearly be recognized and easily detected (the result is shown in FIG. 3d). As a cube in general has nine edges, one might expect nine rather than six clusters. Obviously the clusters of vertical edges (edges with infinite slope) are mapped to infinity in Hough space. This can easily be avoided by applying the Hough transform twice to the image (cf. FIG. 4). In a first pass it is applied directly to the binary gradient image (FIG. 4a), and subsequently in a second pass its version rotated by 90 degrees is transformed. (Note that both passes can be calculated simultaneously by appropriate Hough space array indexing.) The resulting twin Hough space shown in FIG. 4b contains the complete set of clusters corresponding to the cube's edges. Transforming the extracted clusters (cf. FIG. 4c) back into image space (the backward Hough transform is identical with the forward Hough transform) results in a Hough space representation of the original image, which is shown in FIG. 4d. It can be seen that the Hough space representation on its own does not deliver the object vertices which are required for vertex matching, i.e. object recognition. How vertex matching for object recognition can be achieved by the invention with the Hough transform is explained in the following.
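A minimal sketch of the twin-pass idea, reusing the hough_transform sketch above (the rotation convention is an illustrative assumption): rotating the point set by 90 degrees turns vertical edges into horizontal ones, so their clusters land at finite coordinates in the second accumulator.

    def twin_hough(points, **kw):
        # Pass 1: the original point set; near-vertical lines (a -> infinity) are lost here.
        acc1, a_vals = hough_transform(points, **kw)
        # Pass 2: the point set rotated by 90 degrees, (x, y) -> (y, -x), so
        # formerly vertical edges now have slope near zero and cluster normally.
        acc2, _ = hough_transform([(y, -x) for (x, y) in points], **kw)
        return acc1, acc2  # together these form the twin Hough space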
New Method for 3-D Object Recognition
It has been found that in the Hough space representation of polyhedra images, the clusters form specific structures or configurations which are representative of the respective polyhedra, or at least of some portions or features of them. The following is a listing of such structures or configurations:
(1) Clusters corresponding to lines (or edges) with the same intercept are aligned horizontally in Hough space.
(2) Clusters corresponding to lines with the same slope (parallel lines) are aligned vertically in Hough space.
(3) The number of visible non-colinear straight edges of an object is equal to the number of clusters corresponding to this object.
(4) n lines intersecting each other at one distinct point in image space (vertices) correspond to a colinear arrangement of n clusters in Hough space (n-cluster-colinearity).
It should be noted that vertical alignments of clusters in Hough space are also cluster colinearities and thus may be considered as corresponding to parallel lines which intersect each other at infinity.
(5) Two or more cluster colinearities intersecting each other at one distinct cluster location correspond to vertices which are colinear.
These properties can be employed to characterize simple object primitives which are useful for decomposition and representation of more complex polyhedral objects. FIG. 5 shows some examples.
Tetrahedron:
Usually two (from special view angles one or three) faces of a tetrahedron are visible. As there are no parallel edges, no vertical alignments of cluster points occur in Hough space. There are two vertices (A, B) with three intersecting edges (A: 1, 3, 5 and B: 1, 2, 4); these share one common edge (1). Thus, in Hough space tetrahedra are represented by two 3-cluster-colinearities sharing exactly one cluster.
Prism:
Usually two (from special view angles one or three) faces of a prism are visible. As FIG. 5 shows, there are two pairs of parallel edges which appear as two pairs of vertically aligned cluster points in Hough space (dashed lines). As there are two vertices (A, B) where three edges intersect (A: 1, 3, 4 and B: 2, 4, 6) with edge (4) common to (A) and (B), we expect two 3-cluster-colinearities in Hough space intersecting each other exactly at one cluster location and an additional isolated cluster (5).
Parallelepiped:
Usually three (from special views one or two) faces of a parallelepiped are visible. As there are three triples of parallel edges (1, 2, 3; 4, 5, 6; 7, 8, 9) we expect three vertical cluster alignments with three clusters each (dashed lines). There are four vertices where three edges intersect (A: 1, 4, 8; B: 2, 5, 8; C: 3, 5, 7; D: 2, 6, 9) which correspond to four 3-cluster-colinearities in Hough space. As the central vertex point B shares its edges with A, C, D, the corresponding 3-cluster-colinearities of A, C, D intersect the 3-cluster-colinearity of B at exactly three different cluster locations. The resulting structure may be considered as a z-shape with an additional diagonal crossing it (z-structure).
The examples demonstrate how distinct vertex points of polyhedral objects can be identified in Hough space and thus can be related to model vertices. Using this information on the relation between vertex points in the object image and in a model, object matching is possible by means of geometric vertex fitting as described e.g. in the above-mentioned article by L. G. Roberts.
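Property (4) above can be made operational with a direct colinearity test on the extracted cluster centers. The following is a minimal sketch under the assumption that the centers are available as exact (a, b) coordinate pairs; the tolerance and function name are illustrative:

    from itertools import combinations

    def cluster_colinearities(centers, tol=1e-6, min_size=3):
        # Find groups of cluster centers lying on a common straight line in
        # Hough space; an n-cluster-colinearity corresponds to n image lines
        # meeting at one vertex point in image space.
        found = []
        for i, j in combinations(range(len(centers)), 2):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            group = {i, j}
            for k in range(len(centers)):
                if k not in group:
                    x3, y3 = centers[k]
                    # Zero cross-product area means the three points are colinear.
                    if abs((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)) < tol:
                        group.add(k)
            if len(group) >= min_size and group not in found:
                found.append(group)
        return found

Two colinearities sharing exactly one cluster (the tetrahedron signature) or a z-structure (the parallelepiped signature) can then be read off from the intersections of the returned groups.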
Complex polyhedral objects can be analysed and identified in two ways: (A) by successive decomposition of the Hough space into substructures corresponding to object primitives and (B) by comparison of the Hough space with the Hough space representation of wire frame models.
(A) Decomposition Into Object Primitives
In order to demonstrate the decomposition principle by an illustrative example which is simple enough to follow without a computer, the t-shaped object shown in FIG. 6 is chosen. The vertices are designated as Vi (V1 . . . V16), and the edges are designated as Ei (E1 . . . E24). However, only a portion of all edges and vertices of the t-shape are visible in the image of FIG. 6. Under the shown view direction the polyhedron has nine visible vertices where three edges meet. These correspond to nine cluster colinearities in Hough space (which can be extracted by a Hough transform as well). It should be noted that in FIG. 6 colinear line segments in image space correspond to one cluster point in Hough space; thus the distinction between edges E5/E19 and E6/E20 is a mere artefact of the Hough simulator used to produce the Hough space representations throughout this description. Listing the colinearities of FIG. 6 yields Table 1. Each row represents one colinearity. The left column contains the running index of each colinearity, the next three columns contain the indices of all clusters constituting a colinearity, and the rightmost column is a count indicating how many cluster points of the colinearity are shared with other colinearities.
Decomposition of the Hough space leads to the problem of recognising a priori known substructures in Hough space which are Hough cluster representations of object primitives. Using the object primitive set shown in FIG. 5, the decomposition is started with the most complex object, i.e. in this case with the object primitive "parallelepiped". As the corresponding Hough cluster structure is a z-structure, one searches for a decomposition of the cluster pattern in Hough space shown in FIG. 6 such that some or possibly all clusters corresponding to the t-shaped object of the example are covered by one or more z-structures. In order to answer this question a so-called `colinearity cluster coverage table` (Table 2) is calculated which results from a slight rearrangement of Table 1; the 1's indicate which clusters belong to which colinearity. As a z-structure consists of one `central` colinearity sharing all three clusters with other colinearities, one takes the four colinearities with a "3" in the rightmost column of Table 1 as potential candidates to be the central colinearities of such a z-structure. If Table 2 is then restricted to colinearities S2, S3, S6, and S8, each of which shares all its clusters with other colinearities, and if the clusters covered by those other colinearities are marked by 0's, one obtains the `reduced colinearity cluster coverage table` shown in Table 3. As can be seen, there are two alternatives for colinearities S2 and S3 to be the central colinearity of a z-structure; e.g. colinearity S2 might be considered to share cluster C5 with colinearity S1 or alternatively with colinearity S3. This results in two lines (a) and (b) for each of the two colinearities S2 and S3 in Table 3. The respective z-structures consist of the following colinearities: S1-S2(a)-S6-S8/S3-S2(b)-S6-S8/S1-S3(a)-S4-S9/S2-S3(b)-S4-S9. As the two alternatives cannot coexist in any solution, Table 3 can be decomposed into a set of four tables (Table 4), each representing several z-structures that can coexist. Thus the decomposition of the Hough space pattern of FIG. 6 into z-structures leads to the four alternative `reduced colinearity cluster coverage tables` as shown in Table 4. In case (A) a minimum of three z-structures is necessary to cover all clusters (central colinearities S2(a), S3(a), S6). In case (B) there is no combination of z-structures to cover all clusters in Hough space. In cases (C) and (D) two z-structures with colinearities S3(b) and S6 as central colinearities are sufficient to cover all clusters in Hough space. Thus a minimal decomposition of the t-shaped object into two parallelepipeds results, with edges E5, E21, E24 and E13, E15, E16 as inner edges, and no further decomposition into other object primitives is necessary. The x,y-coordinates of the corresponding vertex points in image space are fully determined by the slope and intercept of the four colinearities of the corresponding z-structures in Hough space. In addition, the x,y-coordinates of the remaining three vertex points of the parallelepiped where two edges intersect each other are implicitly given by the z-structures. Thus all vertex points of the primitives which constitute the object in image space can be related to the vertices of a unit cube in 3-D space, and e.g. geometrical transform parameters can be calculated according to known approaches, such as described in the above-cited paper by L. G. Roberts.
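The candidate selection behind Tables 1-3 can be mechanized. Below is a minimal sketch, assuming each colinearity is given as a set of cluster indices (the actual tables are only reproduced as images in this document); it computes the share count of Table 1 and returns the colinearities that could serve as central colinearities of z-structures:

    def central_colinearity_candidates(colinearities):
        # colinearities: list of sets of cluster indices, one set per row of Table 1.
        candidates = []
        for i, col in enumerate(colinearities):
            # Count how many clusters of this colinearity also occur in other
            # colinearities (the rightmost column of Table 1).
            shared = sum(
                1 for c in col
                if any(c in other for j, other in enumerate(colinearities) if j != i)
            )
            # A central colinearity of a z-structure shares all of its clusters.
            if shared == len(col):
                candidates.append(i)
        return candidates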
Below the right portion of FIG. 6 there are shown three small circles each with an inclined line. All the clusters which are located on a vertical line above a circle represent edges in the image which run parallel (e.g. C4, C9, C15, C17, C22, C24 and E4, E9, E15, E17, E22, E24). The direction of the inclined line of the respective circle below the Hough space indicates the slope of the associated edges in the image space.
It should be noted that in general composite objects seen under arbitrary view angles do not expose the complete set of object primitive edges (in general several edges will be hidden). The technique described above for decomposing an object into object primitives can be summarized as follows:
(1) Capture image of scene and digitize it.
(2) Calculate gradient image and apply threshold to it.
(3) Transform binary gradient image into Hough space.
(4) Detect clusters in Hough space.
(5) Extract properties (configurations) of clusters in Hough space.
(6) Relate vertices of object primitives to vertices of object models (using known cluster properties).
(7) Find the model which best fits the vertices extracted from the scene.
(B) Comparison With Wire Frame Hough Space Models
As opposed to the former approach, full a priori knowledge about the objects to be recognised is employed in this case. The objects are represented by wire frame models, generated e.g. by a CAD design system. Mapping a 3-D object wire frame representation into a 2-D image plane by means of a perspective transform with the focal distance point at infinity, and transforming the resulting image into Hough space, yields a representation which can be called the wire frame Hough space model. For the t-shaped object an example of such a representation is shown in FIG. 7. The edges (E1 . . . E24) and the vertices (V1 . . . V16) carry, for ease of understanding, the same designations as in FIG. 6, except that in the wire frame model image there are no hidden or invisible lines or vertices. Hough space wire frame representations implicitly comprise information about edge/vertex neighborhoods as well as about parallel edges. For further processing this information can be made explicit by precalculating the cluster colinearities; edges of common slope can be listed by looking for vertically aligned clusters in Hough space. (Note that these quantities could also have been derived from the 3-D wire frame models directly.) Tables 5 and 6 show the colinearity table and an edge slope list, respectively, for the wire frame representation in FIG. 7.
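A minimal sketch of generating such a wire frame Hough space model is given below, assuming the model is supplied as 3-D vertex coordinates plus vertex-index pairs for the edges; the orthographic projection stands in for the perspective transform with the focal distance point at infinity, and all names are illustrative:

    import numpy as np

    def wire_frame_hough_model(vertices3d, edges, rotation):
        # Rotate the model into the desired view and project by dropping z.
        pts2d = (np.asarray(vertices3d) @ np.asarray(rotation).T)[:, :2]
        clusters = {}   # edge index -> (slope a, intercept b), i.e. its cluster
        for e, (i, j) in enumerate(edges):
            (x1, y1), (x2, y2) = pts2d[i], pts2d[j]
            if abs(x2 - x1) < 1e-9:
                continue  # vertical edge; it belongs in the rotated twin pass
            a = (y2 - y1) / (x2 - x1)
            clusters[e] = (a, y1 - a * x1)
        # Edge slope table: parallel edges share a slope and hence appear as
        # vertically aligned clusters in Hough space (cf. Table 6).
        slope_table = {}
        for e, (a, b) in clusters.items():
            slope_table.setdefault(round(a, 6), []).append(e)
        return clusters, slope_table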
The important point of the wire frame Hough space comparison is that the edge images of physical objects which are transformed into Hough space exhibit cluster structures which are to be considered as subgraphs of graphs implicitly given by the wire frame Hough models. In order to demonstrate this, the colinearity table (Table 5) and edge slope list (Table 6) of FIG. 7 have been sketched in FIG. 8 (also known as a labeled boundary representation). The edges in FIG. 8 are labeled according to the edges in FIG. 7. Assuming an edge image of a physical object like the image in FIG. 6 and transforming it into Hough space, the corresponding colinearity table would be equivalent to Table 1. The corresponding slope list is given in Table 7. Tables 1 and 7 immediately yield the boundary representation shown in FIG. 9. It now has to be checked whether this graph (whose edges are marked to indicate the image edge slope) matches some part of the graph derived from the wire frame Hough space models; this matching is done by schematic comparison of the contents of the respective colinearity and edge slope tables in a computer. It can easily be verified that there are 32 different possibilities for matching the graph in FIG. 8 with the subgraph in FIG. 9.
Such procedures for matching a subgraph (which is the graph of FIG. 9, representing the object) to a complete graph (which is the graph of FIG. 8, representing the wire frame model) are well known, and are described e.g. in an article by J. R. Ullmann "An algorithm for subgraph isomorphism", Journal ACM, Vol. 23, No. 1, January 1976, pp. 31-42, and in the references cited in this article.
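The matching can be sketched as a small backtracking search in the spirit of Ullmann's algorithm. The encoding below, adjacency sets plus a slope label per vertex pair, is an illustrative assumption rather than the patent's data structure:

    def match_subgraph(sub_adj, full_adj, sub_lab, full_lab):
        # sub_adj/full_adj: dict vertex -> set of neighbour vertices.
        # sub_lab/full_lab: dict (u, w) -> slope class, stored for both orientations.
        order = list(sub_adj)

        def extend(assign):
            if len(assign) == len(order):
                return dict(assign)  # complete object-vertex-to-model-vertex relation
            u = order[len(assign)]
            for v in full_adj:
                if v in assign.values():
                    continue
                # Every already-assigned neighbour of u must map to a neighbour
                # of v connected by an edge of the same slope class.
                if all(assign[w] in full_adj[v] and sub_lab[(u, w)] == full_lab[(v, assign[w])]
                       for w in sub_adj[u] if w in assign):
                    assign[u] = v
                    result = extend(assign)
                    if result:
                        return result
                    del assign[u]
            return None

        return extend({})

Collecting all solutions instead of stopping at the first one would enumerate the possible matches, which are then disambiguated by geometric vertex fitting as described below.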
If the subgraph matching results in several possible solutions (of which only one should be correct), a final selection is made by exact vertex point fitting on the basis of given geometric relations between the known vertex points (distance etc.).
Although in terms of the computational expense the matching is not too burdensome, it might be desirable to further cut down this search. This can be achieved by additional heuristic and object dependent rules. E.g., the upper half of the graph in FIG. 8 corresponds to the peripheral vertices V1, V2, V3, V4, V13, V14, V15, V16 and the lower half stems from the nonperipheral vertex points V5, V6, V7, V8, V9, V10, V11. It is easy to verify that if there is a parallelogram defined by the image's colinearity table with two vertices colinear with a third vertex point, this parallelogram must connect the nonperipheral and the peripheral vertex points. This reduces the number of possible subgraph matches to 16. The additional property of the t-shaped object that a second parallelogram has one corner colinear with two other vertex points, and thus must be a face defined by nonperipheral edges, further reduces the possible subgraph matches to a total of eight, etc. In general, as the discriminability of the objects' surfaces increases, the possibilities of subgraph matches decrease. The proposed method would consist of two phases: a computationally expensive off-line model generation phase and a fast on-line recognition phase. The two phases comprise the following steps:
(I) Model Generation Phase
(1) Use wire frame models and calculate colinearity/edge slope tables.
(2) Compute features of colinearity/edge slope tables for object and view dependent matching.
(II) Recognition Phase
(1) Capture image of scene and digitize it.
(2) Calculate gradient image and apply threshold to it.
(3) Transform binary gradient image into Hough space.
(4) Detect clusters in Hough space.
(5) Extract properties of clusters in Hough space.
(6) Perform subgraph matching which yields an object vertex to model vertex relation.
(7) Perform geometric vertex point fitting.
In order to speed up the subgraph matching, colinearity and edge slope tables corresponding to different views of an object, and simple features for them, like the total number of colinearities, the number of clusters with given slope, etc., may be precomputed and stored in an object model library in which several models belong to the same class of object. If necessary, remaining ambiguities may be resolved subsequently by geometric vertex fitting. The steps of this modified method are as follows:
(I) Model Generation Phase
(1) Use wire frame models and calculate colinearity/edge slope tables for different view directions.
(2) Compute features of colinearity/edge slope tables to support matching.
(II) Recognition Phase
(1) Capture image of scene and digitize it.
(2) Calculate gradient image and apply threshold to it.
(3) Transform binary gradient image into Hough space.
(4) Detect clusters in Hough space.
(5) Extract properties of clusters in Hough space.
(6) Perform subgraph matching which yields an object vertex to model vertex relation.
(7) Verify graph match by geometric vertex point fitting.
Implementation in a Robot System
In FIG. 10, a robot work cell with vision control is shown in which the invention is used. The system comprises a robot control arm 11 with robot control unit 13 for handling work pieces 15 which are placed on a conveyor belt 17. A sensor 19 such as a TV camera views a workpiece 15 and sends its output signals to an image capture and preprocessing unit 21. In this unit, the TV scan image such as the one shown in FIG. 3(a) is intermediately stored and in a preprocessing procedure a gradient image (as shown in FIG. 3(b)) is generated which essentially consists of lines representing edges of the workpiece. A central processing unit 23 is provided for controlling the evaluation operations and for performing the necessary arithmetic and logic operations. The gradient image is sent to memory 25 and to the Hough processor 27.
Hough processor 27 generates a Hough space representation of the workpiece from the gradient image it receives. The Hough space representation comprises intersecting lines forming clusters as shown in FIG. 3(c). Each cluster represents a straight line, i.e. an edge of the workpiece.
The Hough space representation is sent to cluster extractor 29 which extracts the coordinate values of the center point of each cluster so that a clear and precise cluster representation is available which also can be stored in memory 25. Several procedures can be used for cluster extraction; one possible simple cluster extraction method is briefly described in the following.
First, a horizontal projection vector is calculated by adding up the weighted intensity values of each column of the Hough space representation. If the intensity value of each point in the Hough space is designated as H(i, j), the weighted value is then [H(i, j)]^e, with e = 1.5 . . . 4 suitably selected. The horizontal projection vector thus comprises an accumulated intensity value for each column, so that the columns or x-coordinates of clusters can be determined in a second step by finding the maxima in the horizontal projection vector. In a third step, a vertical projection vector is calculated for each of the maxima found in step two. This is done by adding up, for a small vertical strip of width 2Δ around each maximum, the weighted intensity values [H(i, j)]^e of each row. Thus, each vertical projection vector comprises an accumulated intensity value for a small row section of length 2Δ. The row or y-coordinate of each cluster in the respective strip can then be determined in a fourth step by finding the maxima in the vertical projection vectors. The coordinate values of points where horizontal and vertical maxima coincide are stored as cluster centers. Of course, the sequence can be reversed by starting with a vertical projection vector and then calculating a number of horizontal projection vectors.
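A minimal sketch of this projection-based cluster extraction, assuming the Hough space is available as a 2-D intensity array H; the exponent e and half-width delta correspond to the parameters e and Δ above, and the simple peak test is an illustrative choice:

    import numpy as np

    def extract_cluster_centers(H, e=2.0, delta=3):
        W = H.astype(float) ** e                 # weighted values [H(i, j)]^e
        # Steps 1-2: horizontal projection vector (one sum per column);
        # its maxima give the x-coordinates (columns) of clusters.
        col_proj = W.sum(axis=0)
        cols = [j for j in range(1, W.shape[1] - 1)
                if col_proj[j] >= col_proj[j - 1]
                and col_proj[j] >= col_proj[j + 1]
                and col_proj[j] > col_proj.mean()]
        # Steps 3-4: for each column maximum, project a strip of width 2*delta
        # onto the rows; the row maxima give the y-coordinates of clusters.
        centers = []
        for j in cols:
            strip = W[:, max(0, j - delta):j + delta + 1]
            i = int(np.argmax(strip.sum(axis=1)))
            centers.append((i, j))               # (row, column) of a cluster center
        return centers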
The Hough space representation comprising only clusters represented by their center points (as shown in FIG. 3(d)) is stored in memory 25. For further processing, the necessary tables representing object models and the required programs for the respective application are stored in memory.
From the Hough space cluster table, specific configurations are determined as was explained in the previous description. Colinearities of clusters can be found by Hough processor 27 and cluster extractor 29 which transform each cluster point into a line, and the intersection of several lines (i.e. a cluster in the secondary Hough representation) indicates a colinearity in the original Hough space representation of object 15. Colinearities of clusters and their structures and configurations are listed in tables as explained above and compared to those of existing models that were entered into memory for the application. As was described above, the type of object and its orientation are determined by the inventive procedure and the respective data transferred to robot control 13 before workpiece 15 reaches the position below robot arm 11 so that the latter can be controlled to handle the workpiece correctly.
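Because the backward and forward transforms coincide, the secondary Hough transform mentioned here can be sketched by simply feeding the extracted cluster centers back into the earlier hough_transform sketch (again an illustrative reuse of the earlier sketches, not the interface of Hough processor 27):

    # Cluster centers (row, column) from extract_cluster_centers are treated as
    # points; peaks in the secondary accumulator mark colinear cluster
    # arrangements, i.e. vertices of workpiece 15 in image space.
    centers = extract_cluster_centers(H)
    acc2, a_vals = hough_transform([(float(j), float(i)) for (i, j) in centers])
    colinearity_peak = np.unravel_index(np.argmax(acc2), acc2.shape)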
CONCLUSION
A new approach to 3-D object recognition has been proposed. The technique employs the noise-insensitive Hough transform, and thus the overall robustness of the technique with respect to imperfect image capture is good. As the Hough space clusters of an image deliver a representation with substantially decreased data volume, the analysis and decomposition as well as the model matching can be performed fast enough with a low-cost general purpose processor. Thus the method improves the efficiency e.g. of vision feedback for robot control, where strong real-time requirements are to be met.
              TABLE 1                                                     
______________________________________                                    
 ##STR1##                                                                 
______________________________________                                    
              TABLE 2                                                     
______________________________________                                    
 ##STR2##                                                                 
______________________________________                                    
              TABLE 3                                                     
______________________________________                                    
 ##STR3##                                                                 
______________________________________                                    
                                  TABLE 4                                 
__________________________________________________________________________
REDUCED COLINEARITY CLUSTER COVERAGE TABLES (EXPLICIT)                    
__________________________________________________________________________
C1  C4  C5  C8  C9  C10  C13  C14  C15  C16  C17  C18  C20  C21  C22  C23  C24
__________________________________________________________________________
CASE A: S2(a)/S3(a)                                                       
S2(a)   0 0 1        0     1  0  0  1  0
S3(a)   1 0            0        0  0  1  0  0  1                          
S6      0   0 0  1  0  1  1  0  0                                         
S8      0           0  0  0  1  1  1  0  0                                
CASE B: S2(b)/S3(a)                                                       
S2(b)   1        0     1  0  0  1  0  0        0                          
S3(a)   1 0            0        0  0  1  0  0  1                          
S6      0   0 0  1  0  1  1  0  0                                         
S8      0           0  0  0  1  1  1  0  0                                
CASE C: S2(a)/S3(b)                                                       
S2(a)   0 0 1        0     1  0  0  1  0
S3(b)   0 0 1 0                        0  1  0  0  1
S6      0   0 0  1  0  1  1  0  0                                         
S8      0           0  0  0  1  1  1  0  0                                
CASE D: S2(b)/S3(b)                                                       
S2(b)   1        0     1  0  0  1  0  0        0                          
S3(b)   0 0 1 0                        0  1  0  0  1
S6      0   0 0  1  0  1  1  0  0                                         
S8      0           0  0  0  1  1  1  0  0                                
__________________________________________________________________________
              TABLE 5                                                     
______________________________________                                    
(Table 5 is reproduced only as an image [##STR4##] in the source text.)   
______________________________________                                    
              TABLE 6                                                     
______________________________________                                    
EDGE SLOPE TABLE (FOR FIG. 7)                                             
______________________________________                                    
SLOPE 1  E2     E4     E9   E11  E15  E17  E22  E24                       
SLOPE 2  E1     E3     E10  E12  E16  E18  E21  E23                       
SLOPE 3  E5     E6     E7   E8   E13  E14  E19  E20                       
______________________________________                                    
              TABLE 7                                                     
______________________________________                                    
EDGE SLOPE TABLE (FOR FIG. 6)                                             
______________________________________                                    
SLOPE 1  E4     E9      E15  E17   E22  E24                               
SLOPE 2  E1     E10     E16  E18   E21  E23                               
SLOPE 3  E5     E6      E8   E13   E14  E19  E20                          
______________________________________                                    
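
The edge slope tables above group edges that share a common slope, i.e. Hough-space clusters that share their slope coordinate. A minimal sketch of how such a table could be derived from the clusters (function and variable names are illustrative assumptions, not from the patent):

from collections import defaultdict

def edge_slope_table(edges, tol=1e-6):
    """Group edges by (approximately) equal slope, as in Tables 6 and 7.

    edges: dict mapping edge name -> slope (the slope coordinate of the
    edge's Hough cluster).  Returns one list of edge names per slope."""
    groups = defaultdict(list)
    for name, m in sorted(edges.items()):
        # snap each slope to a grid so nearly parallel edges share a bucket
        groups[round(m / tol) * tol].append(name)
    return [grp for _, grp in sorted(groups.items())]

For example, edge_slope_table({'E2': 0.5, 'E4': 0.5, 'E1': -1.0}) returns [['E1'], ['E2', 'E4']], the same grouping by slope that the SLOPE 1 to SLOPE 3 rows express for FIGS. 6 and 7.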

Claims (18)

Having thus described my invention, what I claim as new and desire to secure by Letters Patent is:
1. A machine method for recognizing or identifying a three-dimensional object from a two-dimensional image of the object, using a Hough space representation of said image comprising clusters each represented by its center point, comprising the steps of:
(a) determining in the Hough space representation of said object image, cluster configurations having specific properties;
(b) providing predetermined cluster configurations of Hough space representations which correspond to known object models identifying vertex points in three-dimensional space;
(c) relating cluster configurations determined in step (a) to said predetermined cluster configurations of Hough space representations, in order to identify said object by comparison of the vertex points of said object image represented by cluster configurations determined in step (a) to corresponding vertex points of said object model representations; and
(d) if further information is required for identification, fitting vertex points of the object image, the locations of which are given by the slope and intercept of respective cluster point colinearities in Hough space representation, to vertex points of at least one object model, on the basis of the relations determined in step (b).
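By way of illustration: in a slope-intercept Hough space every image edge maps to one cluster point (m, c), and all edges through a common image vertex (x0, y0) satisfy c = y0 - m*x0, so their cluster points lie on one Hough-space line with slope -x0 and intercept y0. The sketch below shows steps (a) and (d) under that parameterization; it is a minimal illustration, and all names are assumptions rather than the patent's own procedure:

from itertools import combinations

def find_cluster_colinearities(clusters, tol=1e-3, min_size=3):
    """Group cluster centers (m, c) that lie on a common Hough-space line.

    Any two points are trivially colinear, so a colinearity is only kept
    when at least min_size cluster points fall on the candidate line."""
    groups = []
    for (i, (m1, c1)), (j, (m2, c2)) in combinations(enumerate(clusters), 2):
        if abs(m2 - m1) < tol:            # parallel edges: no common vertex
            continue
        s = (c2 - c1) / (m2 - m1)         # slope of the candidate colinearity
        t = c1 - s * m1                   # intercept of the candidate line
        members = frozenset(k for k, (m, c) in enumerate(clusters)
                            if abs(c - (s * m + t)) < tol)
        if len(members) >= min_size and members not in groups:
            groups.append(members)
    # keep only maximal groups (drop subsets of larger colinearities)
    return [g for g in groups if not any(g < h for h in groups)]

def vertex_from_colinearity(clusters, group):
    """Step (d): the colinearity's slope and intercept give the image vertex,
    since edges through (x0, y0) satisfy c = y0 - m*x0."""
    a, b = sorted(group)[:2]
    (m1, c1), (m2, c2) = clusters[a], clusters[b]
    s = (c2 - c1) / (m2 - m1)
    return (-s, c1 - s * m1)              # (x0, y0) = (-slope, intercept)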
2. Method according to claim 1, wherein said known object models comprise object primitive models representing basic geometrical objects, and for each such model a Hough space representation of its image as seen from a given view angle, with data characterizing existing specific cluster configurations of the respective model, is stored for comparison.
3. Method according to claim 1, wherein said known object models comprise wire frame models each representing the respective object modeled completely by providing geometric information about all its edges and vertices, and for each such wire frame model a Hough space representation with data characterizing existing specific cluster configurations is stored for comparison.
4. Method according to claim 3, wherein a Hough space representation with data characterizing specific cluster configurations is stored for each of a plurality of images of each such wire frame model as seen from different view angles, said images not comprising the edges and vertices which are hidden for the respective view angle.
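Claims 2 to 4 describe the stored side of the comparison: for each model, primitive or wire frame, one Hough-space description per view angle, with hidden edges and vertices omitted. A sketch of such a view-indexed model library as a plain mapping; the layout and all names are assumptions for illustration:

from typing import Dict, FrozenSet, Tuple

View = Tuple[str, Tuple[int, int]]            # (model name, (azimuth, elevation))
HoughDescription = Dict[str, FrozenSet[int]]  # colinearity name -> covered cluster ids

model_library: Dict[View, HoughDescription] = {
    # one entry per model and view angle, hidden edges already removed
    ("cube", (30, 45)): {"S1": frozenset({1, 4, 5}),
                         "S2": frozenset({5, 8, 9})},
}

def candidate_views(obj_desc: HoughDescription, library):
    """Keep only stored views whose pattern of colinearity sizes can match
    the object's, before any detailed cluster-by-cluster comparison."""
    sizes = sorted(len(c) for c in obj_desc.values())
    return [view for view, desc in library.items()
            if sorted(len(c) for c in desc.values()) == sizes]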
5. Method according to claim 3, further comprising the steps of:
providing a graph representing the interrelation between edges and vertices of each wire frame model and information about the common slopes of edges;
providing a graph representing the interrelation of viewed edges and vertices, including common slopes, from the information about colinearities in the Hough space representation of an unknown object; and
in a graph matching procedure, identifying those portions of the graph of each wire frame model which fit the graph of the unknown object.
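A brute-force sketch of the graph matching of claim 5: a depth-first search for a mapping of viewed vertices onto model vertices that preserves edge adjacency and slope classes. The dict-of-sets graph encoding, and the premise that slope classes were already put into correspondence, are assumptions for illustration:

def match_graphs(obj, model):
    """Find one assignment of object vertices to model vertices preserving
    edges and slope classes, or None.

    obj, model: dict vertex -> {(neighbour_vertex, slope_class), ...}"""
    def extend(mapping, remaining):
        if not remaining:
            return mapping
        v = remaining[0]
        for mv in model:
            if mv in mapping.values():
                continue
            # every already-mapped neighbour relation must exist in the model
            if all((mapping[n], s) in model[mv]
                   for (n, s) in obj[v] if n in mapping):
                result = extend({**mapping, v: mv}, remaining[1:])
                if result is not None:
                    return result
        return None
    return extend({}, list(obj))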
6. Method according to claim 1, wherein said specific properties of cluster configurations include colinear arrangements of clusters, cluster colinearities intersecting each other exactly at a cluster, and parallel arrangements of cluster colinearities.
7. Method according to claim 1, wherein if more than one possible solution is found in step (c), one actual solution is selected by the vertex fitting procedure of step (d).
8. A machine method for recognizing a three-dimensional object from a two-dimensional image of the object, using a Hough space representation of said image comprising clusters each represented by its center point, comprising the steps of:
(a) determining, in the Hough space representation of said object image, linear arrangements of cluster points, called cluster colinearities, each of said cluster colinearities representing a vertex in said object image;
(b) preparing tables of such cluster colinearities indicating their interrelations, including common cluster points;
(c) preparing tables representing specific cluster colinearity configurations identifying specific vertex point arrangements in said object image;
(d) preparing a Hough space representation for each of a plurality of preselected object models;
(e) determining cluster colinearities and preparing tables of cluster colinearities indicating their interrelations, for each of the preselected model Hough space representations, each of their cluster colinearities being related to a 3-D defined vertex point of the respective model; and
(f) comparing the object colinearity tables prepared in step (c) to model colinearity tables prepared in step (e) for determining similar cluster colinearity configurations, thus relating vertex points of the object image to vertex points of at least one model.
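Steps (b) and (e) amount to tabulating, for every pair of colinearities, the cluster points they share: two colinearities with a common cluster correspond to two image vertices joined by a common edge. A minimal sketch of such interrelation tables and of a coarse comparison for step (f); naming and return types are illustrative assumptions:

def colinearity_interrelations(colinearities):
    """Table of common cluster points between every pair of colinearities.

    colinearities: dict name -> set of cluster ids, e.g. {'S2': {1, 4, 5}}.
    Returns dict (name_a, name_b) -> frozenset of shared cluster ids."""
    names = sorted(colinearities)
    table = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = colinearities[a] & colinearities[b]
            if shared:
                table[(a, b)] = frozenset(shared)
    return table

def same_configuration(obj_table, model_table):
    """Coarse step (f) test: do object and model show the same pattern of
    colinearity intersections (same multiset of shared-point counts)?"""
    signature = lambda t: sorted(len(v) for v in t.values())
    return signature(obj_table) == signature(model_table)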
9. Method according to claim 8, comprising the further steps of preparing additional edge slope tables for said object image as well as for the models, indicating common slopes of edges which are represented as cluster points in the Hough space representations of said object image and of each model, and additionally using in step (f) the indications of the edge slope tables for relating image vertex points to model vertex points.
10. A system for recognizing or identifying a three-dimensional object from a two-dimensional image of the object, using a Hough space representation of said image comprising clusters each represented by its center point, comprising:
(a) means for determining in the Hough space representation of said object image, cluster configurations having specific properties;
(b) means for storing predetermined cluster configurations of Hough space representations which correspond to known object models identifying vertex points in three-dimensional space;
(c) means for comparing said cluster configurations determined by said determining means to said predetermined cluster configurations of Hough space representations stored in said storage means, in order to identify said object by comparison of the vertex points of said object image represented by said determined cluster configurations to corresponding vertex points of said object model representations; and
(d) means, responsive to said comparing means, for identifying said object and, if further information is required for identification, for fitting vertex points of the object image, the locations of which are given by the slope and intercept of respective cluster point colinearities in Hough space representation, to vertex points of at least one object model, on the basis of the relations determined by said comparing means.
11. System according to claim 10, wherein said known object models comprise object primitive models representing basic geometrical objects, and for each such model a Hough space representation of its image as seen from a given view angle, with data characterizing existing specific cluster configurations of the respective model, is stored in said storage means for comparison.
12. System according to claim 10, wherein said known object models comprise wire frame models each representing the respective object modeled completely by providing geometric information about all its edges and vertices, and for each such wire frame model a Hough space representation with data characterizing existing specific cluster configurations is stored in said storage means for comparison.
13. System according to claim 12, wherein a Hough space representation with data characterizing specific cluster configurations is stored in said storage means for each of a plurality of images of each such wire frame model as seen from different view angles, said images not comprising the edges and vertices which are hidden for the respective view angle.
14. System according to claim 12, further comprising:
means for providing a graph of the interrelation between edges and vertices of each wire frame model and information about the common slopes of edges;
means for providing a graph representing the interrelation of viewed edges and vertices, including common slopes, from the information about colinearities in the Hough space representation of an unknown object; and
graph matching means for identifying those portions of the graph of each wire frame model which fit the graph of the unknown object.
15. System according to claim 10, wherein said specific properties of cluster configurations include colinear arrangements of clusters, cluster colinearities intersecting each other exactly at a cluster, and parallel arrangements of cluster colinearities.
16. System according to claim 10, wherein said identifying means comprises means, if more than one possible model is found by said comparing means, for selecting one actual model according to said vertex fitting procedure.
17. A system for recognizing a three-dimensional object from a two-dimensional image of the object, using a Hough space representation of said image comprising clusters each represented by its center point, comprising:
(a) means for determining, in the Hough space representation of said object image, linear arrangements of cluster points, called cluster colinearities, each of said cluster colinearities representing a vertex in said object image;
(b) first means for preparing tables of such cluster colinearities indicating their interrelations, including common cluster points;
(c) second means for preparing tables representing specific cluster colinearity configurations identifying specific vertex point arrangements in said object image;
(d) means for preparing a Hough space representation for each of a plurality of preselected object models;
(e) third means for determining cluster colinearities and preparing tables of cluster colinearities indicating their interrelations, for each of the preselected model Hough space representations, each of their cluster colinearities being related to a 3-D defined vertex point of the respective model; and
(f) means for comparing the object colinearity tables prepared by said second means to model colinearity tables prepared by said third means for determining similar cluster colinearity configurations, thus relating vertex points of the object image to vertex points of at least one model.
18. System according to claim 17, further comprising means for preparing additional edge slope tables for said object image as well as for the models, indicating common slopes of edges which are represented as cluster points in the Hough space representations of said object image and of each model, and means for providing the indications of said additional edge slope tables to said comparing means for additional use in relating image vertex points to model vertex points.
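The vertex-fitting step of claims 1 and 7 is not spelled out above; one conventional realization, offered here only as a stand-in, is a least-squares similarity fit (scale, rotation, translation) of model vertex points to image vertex points, with the residual deciding between competing models:

import numpy as np

def fit_vertices(image_pts, model_pts):
    """Least-squares 2-D similarity fit of model vertices to image vertices.

    image_pts, model_pts: corresponding (x, y) pairs in the same order.
    Returns (a, b, rms) for the complex-plane model z_img = a*z_model + b,
    where a encodes rotation and scale and b encodes translation."""
    z = np.array([complex(x, y) for x, y in image_pts])
    w = np.array([complex(x, y) for x, y in model_pts])
    zc, wc = z - z.mean(), w - w.mean()
    a = (np.conj(wc) @ zc) / (np.conj(wc) @ wc)   # rotation + scale
    b = z.mean() - a * w.mean()                   # translation
    rms = float(np.sqrt(np.mean(np.abs(a * w + b - z) ** 2)))
    return a, b, rms

Among several candidate models surviving the configuration comparison, the one with the smallest rms residual would be selected, which is the role the vertex fitting plays in claim 7.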
US06/874,313 1985-06-19 1986-06-13 Method for identifying three-dimensional objects using two-dimensional images Expired - Fee Related US4731860A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP85107445A EP0205628B1 (en) 1985-06-19 1985-06-19 Method for identifying three-dimensional objects using two-dimensional images
EP85107445 1985-06-19

Publications (1)

Publication Number Publication Date
US4731860A true US4731860A (en) 1988-03-15

Family

ID=8193567

Family Applications (1)

Application Number Title Priority Date Filing Date
US06/874,313 Expired - Fee Related US4731860A (en) 1985-06-19 1986-06-13 Method for identifying three-dimensional objects using two-dimensional images

Country Status (4)

Country Link
US (1) US4731860A (en)
EP (1) EP0205628B1 (en)
JP (1) JPH0685183B2 (en)
DE (1) DE3578241D1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2203877A (en) * 1986-09-18 1988-10-26 Violet Frances Leavers Shape parametrisation
DE3735935C2 (en) * 1987-10-23 1996-07-11 Ibm Deutschland Procedure for determining clusters in the Hough area
US4906099A (en) * 1987-10-30 1990-03-06 Philip Morris Incorporated Methods and apparatus for optical product inspection
JPH0239283A (en) * 1988-07-28 1990-02-08 Agency Of Ind Science & Technol Object recognizing device
JPH06500872A (en) * 1990-04-30 1994-01-27 インパック・テクノロジー・インコーポレイティド Electronic system for classifying objects
US5381572A (en) * 1991-01-09 1995-01-17 Park; Young-Go Twist rolling bed
JP3426002B2 (en) * 1993-09-20 2003-07-14 三菱電機株式会社 Object recognition device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3069654A (en) * 1960-03-25 1962-12-18 Paul V C Hough Method and means for recognizing complex patterns
US4618989A (en) * 1983-01-21 1986-10-21 Michio Kawata, Director-General of Agency of Industrial Science and Technology Method and system for detecting elliptical objects

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Perkins et al., "A Corner Finder for Visual Feedback", Computer Graphics and Image Processing, 1973, pp. 355-376. *

Cited By (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4837487A (en) * 1984-02-22 1989-06-06 Fanuc Ltd. System for coupling a visual sensor processor and a robot controller
US4807042A (en) * 1986-01-27 1989-02-21 Fuji Photo Film Co., Ltd. Method of image signal encoding by orthogonal transformation
US4819169A (en) * 1986-09-24 1989-04-04 Nissan Motor Company, Limited System and method for calculating movement direction and position of an unmanned vehicle
US4887155A (en) * 1986-12-06 1989-12-12 Robert Massen Method and arrangement for measuring and/or monitoring properties of yarns or ropes
US4868752A (en) * 1987-07-30 1989-09-19 Kubota Ltd. Boundary detecting method and apparatus for automatic working vehicle
US5379353A (en) * 1988-05-09 1995-01-03 Honda Giken Kogyo Kabushiki Kaisha Apparatus and method for controlling a moving vehicle utilizing a digital differential analysis circuit
US5172315A (en) * 1988-08-10 1992-12-15 Honda Giken Kogyo Kabushiki Kaisha Automatic travelling apparatus and method
US5172317A (en) * 1988-08-10 1992-12-15 Honda Giken Kogyo Kabushiki Kaisha Automatic travelling apparatus
US5063604A (en) * 1989-11-08 1991-11-05 Transitions Research Corporation Method and means for recognizing patterns represented in logarithmic polar coordinates
US5101442A (en) * 1989-11-24 1992-03-31 At&T Bell Laboratories Three-dimensional imaging technique using sharp gradient of illumination
US4980971A (en) * 1989-12-14 1991-01-01 At&T Bell Laboratories Method and apparatus for chip placement
US5048965A (en) * 1990-05-02 1991-09-17 At&T Bell Laboratories Three-dimensional imaging technique with occlusion avoidance
US5430810A (en) * 1990-11-20 1995-07-04 Imra America, Inc. Real time implementation of the hough transform
US5127061A (en) * 1990-12-03 1992-06-30 At&T Bell Laboratories Real-time three-dimensional imaging technique
US5097516A (en) * 1991-02-28 1992-03-17 At&T Bell Laboratories Technique for illuminating a surface with a gradient intensity line of light to achieve enhanced two-dimensional imaging
US5299268A (en) * 1991-11-27 1994-03-29 At&T Bell Laboratories Method for detecting the locations of light-reflective metallization on a substrate
US5629989A (en) * 1993-04-27 1997-05-13 Honda Giken Kogyo Kabushiki Kaisha Image line-segment extracting apparatus
US5631982A (en) * 1993-06-10 1997-05-20 International Business Machines Corporation System using parallel coordinates for automated line detection in noisy images
US6178262B1 (en) * 1994-03-11 2001-01-23 Cognex Corporation Circle location
US5822450A (en) * 1994-08-31 1998-10-13 Kabushiki Kaisha Toshiba Method for monitoring equipment state by distribution measurement data, and equipment monitoring apparatus
US6137899A (en) * 1994-09-20 2000-10-24 Tri Path Imaging, Inc. Apparatus for the identification of free-lying cells
US5978497A (en) * 1994-09-20 1999-11-02 Neopath, Inc. Apparatus for the identification of free-lying cells
US6134354A (en) * 1994-09-20 2000-10-17 Tripath Imaging, Inc. Apparatus for the identification of free-lying cells
US5987158A (en) * 1994-09-20 1999-11-16 Neopath, Inc. Apparatus for automated identification of thick cell groupings on a biological specimen
US5978498A (en) * 1994-09-20 1999-11-02 Neopath, Inc. Apparatus for automated identification of cell groupings on a biological specimen
US5845048A (en) * 1995-02-06 1998-12-01 Fujitsu Limited Applicable recognition system for estimating object conditions
US5760778A (en) * 1995-08-15 1998-06-02 Friedman; Glenn M. Algorithm for representation of objects to enable robotic recongnition
US5764788A (en) * 1995-08-31 1998-06-09 Macmillan Bloedel Limited Strand orientation sensing
US6414711B2 (en) * 1995-09-06 2002-07-02 Fanuc Ltd. Apparatus for correcting movement path of a robot and a method therefor
US5930378A (en) * 1996-01-31 1999-07-27 Kabushiki Kaisha Toshiba Dynamic image processing apparatus and method
US5911003A (en) * 1996-04-26 1999-06-08 Pressco Technology Inc. Color pattern evaluation system for randomly oriented articles
WO1997043733A1 (en) * 1996-04-26 1997-11-20 Sones Richard A Color pattern evaluation system for randomly oriented articles
US5864640A (en) * 1996-10-25 1999-01-26 Wavework, Inc. Method and apparatus for optically scanning three dimensional objects using color information in trackable patches
US6259809B1 (en) * 1997-08-29 2001-07-10 Advantest Corporation System and method for recognition of image information
US6349245B1 (en) * 1998-02-18 2002-02-19 Armstrong Healthcare Limited Method of and apparatus for registration of a robot
US6324299B1 (en) * 1998-04-03 2001-11-27 Cognex Corporation Object image search using sub-models
US6490369B1 (en) 1999-07-06 2002-12-03 Fanuc Robotics North America Method of viewing and identifying a part for a robot manipulator
US6873432B1 (en) * 1999-11-30 2005-03-29 Xerox Corporation Method and apparatus for representing color space transformations with a piecewise homeomorphism
US6714319B1 (en) 1999-12-03 2004-03-30 Xerox Corporation On-line piecewise homeomorphism model prediction, control and calibration system for a dynamically varying color marking device
US6771818B1 (en) * 2000-04-04 2004-08-03 Microsoft Corporation System and process for identifying and locating people or objects in a scene by selectively clustering three-dimensional regions
US7327888B2 (en) * 2001-06-05 2008-02-05 Matrox Electronic Systems, Ltd. Geometric hashing method for model-based recognition of an object
US20060126943A1 (en) * 2001-06-05 2006-06-15 Christian Simon Geometric hashing method for model-based recognition of an object
US20050058367A1 (en) * 2002-02-15 2005-03-17 Fujitsu Limited Image transformation method and apparatus, image recognition apparatus, robot control apparatus and image projection apparatus
US7567728B2 (en) * 2002-02-15 2009-07-28 Fujitsu Limited Method and apparatus using image transformation of picked up image into image enabling position
US7529697B2 (en) * 2002-05-24 2009-05-05 Atc Drivetrain, Inc. Apparatus and method for identification of transmissions and other parts
US20040064384A1 (en) * 2002-05-24 2004-04-01 David Miles Apparatus and method for identification of transmissions and other parts
US20040085323A1 (en) * 2002-11-01 2004-05-06 Ajay Divakaran Video mining using unsupervised clustering of video content
US7375731B2 (en) * 2002-11-01 2008-05-20 Mitsubishi Electric Research Laboratories, Inc. Video mining using unsupervised clustering of video content
US20070159476A1 (en) * 2003-09-15 2007-07-12 Armin Grasnick Method for creating a stereoscopic image master for imaging methods with three-dimensional depth rendition and device for displaying a steroscopic image master
US20050180636A1 (en) * 2004-01-28 2005-08-18 Sony Corporation Image matching system, program, and image matching method
US7747103B2 (en) * 2004-01-28 2010-06-29 Sony Corporation Image matching system, program, and image matching method
US20080031400A1 (en) * 2004-05-06 2008-02-07 Luc Beaulieu 3D Localization Of Objects From Tomography Data
US9087232B2 (en) 2004-08-19 2015-07-21 Apple Inc. 3D object recognition
US20100064009A1 (en) * 2005-04-28 2010-03-11 International Business Machines Corporation Method and Apparatus for a Common Cluster Model for Configuring, Managing, and Operating Different Clustering Technologies in a Data Center
US8843561B2 (en) 2005-04-28 2014-09-23 International Business Machines Corporation Common cluster model for configuring, managing, and operating different clustering technologies in a data center
US20060248371A1 (en) * 2005-04-28 2006-11-02 International Business Machines Corporation Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center
US20070071328A1 (en) * 2005-09-28 2007-03-29 Larsen Paul A Systems and methods for automatically determining object information and systems and methods for control based on automatically determined object information
US7885467B2 (en) * 2005-09-28 2011-02-08 Wisconsin Alumni Research Foundation Systems and methods for automatically determining object information and systems and methods for control based on automatically determined object information
US20090113051A1 (en) * 2007-10-30 2009-04-30 Modern Grids, Inc. Method and system for hosting multiple, customized computing clusters
US8352584B2 (en) 2007-10-30 2013-01-08 Light Refracture Ltd., Llc System for hosting customized computing clusters
US7822841B2 (en) 2007-10-30 2010-10-26 Modern Grids, Inc. Method and system for hosting multiple, customized computing clusters
US20110023104A1 (en) * 2007-10-30 2011-01-27 Modern Grids, Inc. System for hosting customized computing clusters
US8352075B2 (en) * 2008-11-03 2013-01-08 Samsung Electronics Co., Ltd. Apparatus and method for extracting feature information of object and apparatus and method for creating feature map
US20100114374A1 (en) * 2008-11-03 2010-05-06 Samsung Electronics Co., Ltd. Apparatus and method for extracting feature information of object and apparatus and method for creating feature map
US8860663B2 (en) 2009-01-30 2014-10-14 Microsoft Corporation Pose tracking pipeline
US8553939B2 (en) 2009-01-30 2013-10-08 Microsoft Corporation Pose tracking pipeline
US8565485B2 (en) 2009-01-30 2013-10-22 Microsoft Corporation Pose tracking pipeline
US8610665B2 (en) 2009-01-30 2013-12-17 Microsoft Corporation Pose tracking pipeline
US9465980B2 (en) 2009-01-30 2016-10-11 Microsoft Technology Licensing, Llc Pose tracking pipeline
US8731719B2 (en) * 2009-05-08 2014-05-20 Honda Research Institute Europe Gmbh Robot with vision-based 3D shape recognition
US20100286827A1 (en) * 2009-05-08 2010-11-11 Honda Research Institute Europe Gmbh Robot with vision-based 3d shape recognition
US9245193B2 (en) 2011-08-19 2016-01-26 Qualcomm Incorporated Dynamic selection of surfaces in real world for projection of information thereon
WO2013028280A2 (en) 2011-08-19 2013-02-28 Qualcomm Incorporated Dynamic selection of surfaces in real world for projection of information thereon
US8811938B2 (en) 2011-12-16 2014-08-19 Microsoft Corporation Providing a user interface experience based on inferred vehicle state
US9596643B2 (en) 2011-12-16 2017-03-14 Microsoft Technology Licensing, Llc Providing a user interface experience based on inferred vehicle state
US20150302027A1 (en) * 2014-02-14 2015-10-22 Nant Holdings Ip, Llc Object ingestion through canonical shapes, systems and methods
US9501498B2 (en) * 2014-02-14 2016-11-22 Nant Holdings Ip, Llc Object ingestion through canonical shapes, systems and methods
US20170048406A1 (en) * 2014-04-28 2017-02-16 Hewlett-Packard Development Company, L.P. Detecting signature lines within an electronic document
US10887479B2 (en) * 2014-04-28 2021-01-05 Hewlett-Packard Development Company, L.P. Multifunctional peripheral device detecting and displaying signature lines within an electronic document
US10290118B2 (en) 2015-08-06 2019-05-14 Cognex Corporation System and method for tying together machine vision coordinate spaces in a guided assembly environment
US11049280B2 (en) 2015-08-06 2021-06-29 Cognex Corporation System and method for tying together machine vision coordinate spaces in a guided assembly environment

Also Published As

Publication number Publication date
JPH0685183B2 (en) 1994-10-26
EP0205628B1 (en) 1990-06-13
JPS6225385A (en) 1987-02-03
DE3578241D1 (en) 1990-07-19
EP0205628A1 (en) 1986-12-30

Similar Documents

Publication Publication Date Title
US4731860A (en) Method for identifying three-dimensional objects using two-dimensional images
JP4865557B2 (en) Computer vision system for classification and spatial localization of bounded 3D objects
US6081269A (en) Image processing system and method for generating data representing a number of points in a three-dimensional space from a plurality of two-dimensional images of the space
Thompson et al. Three-dimensional model matching from an unconstrained viewpoint
Ayache et al. Trinocular stereovision for robotics
Palazzolo et al. Fast image-based geometric change detection given a 3d model
Faugeras A few steps toward artificial 3D vision
Faugeras et al. Towards a flexible vision system
Palazzolo et al. Change detection in 3d models based on camera images
Avidar et al. Local-to-global point cloud registration using a dictionary of viewpoint descriptors
Maver et al. How to decide from the first view where to look next
Vinther et al. Active 3D object recognition using 3D affine invariants
Alhwarin Fast and robust image feature matching methods for computer vision applications
Huynh Feature-based stereo vision on a mobile platform
Pietikäinen et al. Progress in trinocular stereo
Zaki et al. The use of invariant features for object recognition from a single image
Chen et al. Characteristic-view modeling of curved-surface solids
Gingins et al. Model-based 3D object recognition by a hybrid hypothesis generation and verification approach
Uzunalioğlu Model-based recognition of polyhedral objects.
Hofman et al. Three-dimensional scene analysis using multiple range finders—Data capture, coordinate transformations and initial segmentation
Fernandes et al. Computing box dimensions from single perspective images in real time
CN116758211A (en) Medicine surface real-time three-dimensional reconstruction method and system based on binocular vision
Helena Reconstruction of 3D surface from colonoscopic video
Magee et al. Isolation of three-dimensional features of known height using a light-striped stereoscopic system
Lawton Applications Of Translational Motion Processing To Robotics

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, ARMON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNOR:WAHL, FRIEDRICH M.;REEL/FRAME:004565/0769

Effective date: 19860527

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
FP Lapsed due to failure to pay maintenance fee

Effective date: 20000315

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362